Writing Avaya Breeze Snap-ins Using Engagement Designer — Part Three

Welcome to “An Introduction to Avaya Breeze Prompts and Gateways,” part three in a series of videos that explore writing Avaya Breeze™ applications—or Snap-ins.

In my first two Breeze videos, I built a simple yet functional Make Call Snap-in. I showed you how to use the Breeze tools for Snap-in creation, execution, and testing. This video expands upon that work by enhancing the Make Call Snap-in with a DTMF prompt and decision logic.

The video series:

Part 1: Avaya Breeze Basics

Part 2: Avaya Breeze Workflow Properties

Part 3: Avaya Breeze Prompts and Gateways


Andrew Prokop is the Director of Vertical Industries at Arrow Systems Integration. Andrew is an active blogger and his widely-read blog, SIP Adventures, discusses every imaginable topic in the world of unified communications. Follow Andrew on Twitter at @ajprokop, and read his blog, SIP Adventures.

Related Articles:

Reducing the Risks of Distributed Denial of Service Attacks

Picture what may just be one of the scariest scenarios in your career: The network has slowed to a crawl. You can barely reach a management interface, let alone control the network elements involved. The attack propagates, and as it does you watch your services drop one by one. Panic sets in. You’re experiencing a Denial of Service (DoS) attack. All resources are focused on stamping out this fire—and that may very well be the intention of the attackers.

A DoS attack might be a smokescreen to get you to focus elsewhere while the intruder goes about covert business in a much safer fashion, leaving little forensics afterward.

DoS attacks are easy to comprehend. Even the term Distributed Denial of Service (DDoS) is an easy extension. But the strategy behind why they’re used and their intent can vary dramatically, and the attacks themselves span a wide range of sophistication. Here’s a quick breakout, from the simplest attacks to the most complex:

  • Network-level attacks:

    The simplest ones—TCP, UDP, and ICMP floods

  • Reflective/amplified attacks:

    Service focused—DNS, NTP, SNMP, and SSDP reflection floods

  • Fragmentation attacks:

    Session specific—overlapping, missing, or excessive fragments

  • Application-specific attacks:

    Repetitive GETs, slow reads, or looping calls

  • Crafted attacks:

    Stack- and protocol-level exploits that exhaust buffer resources
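To make the simplest tier concrete, a volumetric flood can often be flagged with nothing more than per-source rate accounting at the network edge. Below is a minimal Python sketch of that idea; the one-second window and the packets-per-second threshold are illustrative assumptions, not tuned values:

```python
import time
from collections import defaultdict, deque

class FloodDetector:
    """Flag any source that exceeds max_pps packets in a one-second window.

    Illustrative only: a real deployment would run in the data plane and
    tune the threshold to the link's normal traffic profile.
    """

    def __init__(self, max_pps=1000):
        self.max_pps = max_pps
        self.windows = defaultdict(deque)  # src_ip -> recent packet timestamps

    def observe(self, src_ip, now=None):
        """Record one packet from src_ip; return True if it exceeds the rate."""
        now = time.monotonic() if now is None else now
        window = self.windows[src_ip]
        window.append(now)
        # Evict timestamps older than the one-second sliding window.
        while window and now - window[0] > 1.0:
            window.popleft()
        return len(window) > self.max_pps
```

Counting packets per source is exactly why distribution matters to the attacker: a thousand sources each sending a modest rate slide under any single-source threshold, which is the gap that flow-level and behavioral analysis try to close.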

These methods are often overlapped in a targeted fashion. In essence, the attack is a series of waves that each hit with varying degrees of sophistication and focus. Other times the attack is relatively primitive and easy to isolate, because at the simplest levels these attacks are easy to mount. As an example, a disgruntled student, upset over a new vending machine policy, could mount a DoS attack against his or her school administration. On the other end of the spectrum is a much darker orchestration: the sleight of hand to get you to look elsewhere. This is typically the signature of an Advanced Persistent Threat (APT).
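Of the methods listed above, the reflective/amplified class deserves a moment of arithmetic, because the math is what makes it attractive: a small query with a spoofed source address elicits a much larger response that lands on the victim. The byte counts in this Python sketch are illustrative round numbers, not measurements from any particular incident:

```python
# Reflection/amplification arithmetic (illustrative numbers only).
# The attacker spoofs the victim's address on a small request, and the
# reflector sends its much larger response to the victim.

def amplification_factor(query_bytes, response_bytes):
    """Bytes delivered to the victim per byte the attacker sends."""
    return response_bytes / query_bytes

def victim_load_gbps(attacker_gbps, query_bytes, response_bytes):
    """Traffic arriving at the victim for a given attacker send rate."""
    return attacker_gbps * amplification_factor(query_bytes, response_bytes)

# Example: a 64-byte query drawing a 3,200-byte response turns
# 1 Gbps of attacker traffic into 50 Gbps at the victim.
factor = amplification_factor(64, 3200)        # 50.0
victim_load = victim_load_gbps(1, 64, 3200)    # 50.0
```

This is also why the attacker never needs a direct intrusion into the victim: the reflectors are ordinary, legitimately reachable services doing exactly what their protocols allow.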

Unless an attack is very simple and short-lived, it needs to be distributed in the way it operates. It needs to be generated from various points of origin. This is referred to as a DDoS attack. The attacker needs to coordinate a series of endpoints to execute some particular event at the same point in time or perhaps, in more sophisticated examples, phased against a time series. To do this, the attacker requires a command and control (C2) capability: access to, and control of, the compromised systems. Such a network of compromised endpoints is referred to as a botnet.

Botnets do not have to be sophisticated to be successful. They only have to implement a simple set of instructions at the right point in time. Take the recent DDoS attack on the DNS provider Dyn, which disrupted access to several large firms on the East Coast of the U.S., such as Amazon and Yahoo. The attack was mounted from residential video surveillance cameras. Even though there was no direct intrusion into those firms, they were impacted. This leads us to two lessons.

Lesson number one: Security in IoT needs to be taken more seriously in the product design stages. Perhaps the concept and treatment of residential security systems needs to be rethought.

Lesson number two: As we move to outsourcing and cloud services, we need to realize that we are spreading out our exposure to risk. Due diligence is required to assure that service providers and partners are doing their part in end-to-end security. But recall that the source of the orchestrated attack was the residential network. This brings a new degree of challenge as we look at the new world of consumer IoT.

How do we maintain security in that sector? Clearly the residence itself should uphold best practices with a well-maintained and monitored gateway. But let’s face it, this is generally not going to happen. Monitoring behaviors and abnormalities at the provider interface level is the next best safeguard, and many providers are moving toward this goal.

The other key point to remember about botnets is that in order to command, one has to control. This can happen in various ways. The simplest is automatic: the code infects a system, sits until a predefined time, and then activates. Another method requires true C2. Either way, malicious code takes up residence or existing code gets leveraged in negative ways. You should be able to pick out the anomalies.
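One of those anomalies is timing: simple C2 beacons tend to phone home on a fixed period, so the regularity of a host's outbound connection intervals is a cheap signal. A rough Python sketch using the coefficient of variation of the gaps between connections; the near-zero-is-suspicious interpretation is an illustrative assumption, not a production detection rule:

```python
import statistics

def beacon_score(timestamps):
    """Lower score = more regular (beacon-like) connection timing.

    Returns the coefficient of variation (stdev / mean) of the intervals
    between successive outbound connections from one host. Near-zero
    values suggest the fixed-period check-ins typical of simple C2
    beacons; human-driven traffic tends to have irregular gaps.
    """
    if len(timestamps) < 3:
        return None  # not enough data to judge
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    mean = statistics.mean(intervals)
    if mean == 0:
        return 0.0
    return statistics.stdev(intervals) / mean

# A host checking in every 60 seconds scores 0.0 (suspicious);
# ordinary browsing traffic has irregular gaps and scores much higher.
beacon = beacon_score([0, 60, 120, 180, 240])   # 0.0
normal = beacon_score([0, 5, 90, 95, 400])      # well above 0
```

Real beacons often add jitter to defeat exactly this check, which is why timing analysis is one signal among many rather than a detector on its own.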

Proper design with hyper-segmentation can greatly reduce the risk of propagation from the initial infection. The botnet is contained and should be readily identified, if you’re watching. Are you?

Continuous Learning: Propelling Forward in a Rapidly and Inevitably Changing World

Whether we realize it or not, advanced technologies like artificial intelligence (AI), augmented reality, and the Internet of Things (IoT) have transformed the way we think about the world around us. From how we protect our schools to the way we navigate our streets to how we shop for groceries, such technology now lies at the heart of practically everything we do today.

Just as these technologies have changed the way we live, they have changed the way we work. Today’s rapid pace of innovation has transformed nearly every business task, process, and workflow imaginable—so much so that industry analysts estimate that up to 45% of activities that employees are paid to perform can now be automated.

This digital disruption—or what many are calling the Fourth Industrial Revolution—without question redefines traditional roles and responsibilities. In fact, research shows that in five years, more than one third of the skills considered important in today’s workforce will have changed. What’s more, analysts estimate that 65% of children today will grow up to work in roles that don’t yet exist.

While we do still see employees that specialize in one skill or expertise, we’ve mostly moved away from the days of hiring an employee for just one job. As technology evolves, so too do the skills required to innovate and propel forward. Looking ahead, employees must have a propensity for continuous learning and adopting new skills to be able to recognize and respond to today’s speed of digital change.

Consider how technology has changed the marketing paradigm. As recently as 10 years ago, marketing platforms like Marketo and HubSpot had only just been founded, Facebook was still in its infancy, and the first iPhone had newly hit the market. As technologies like cloud, social, mobile and big data evolved, however, we suddenly began seeing new tools specifically designed to enhance digital media, social media marketing, and mobile marketing. As a result, companies began searching to fill roles for social media coordinators, digital campaign managers and integrated marketing planners—jobs that were unfathomable 15 to 20 years prior.

Fast forward to today and we’re seeing the emergence of new technology for marketing, such as augmented reality, geofencing, and emotion detection. The continual emergence of new technology perpetually creates skills gaps that must be filled by employees who are passionate, motivated, and invested in their own learning. These kinds of team members are committed to developing new skills and leveraging their strengths to outperform.

But not all employees can easily identify their strengths or develop new skills. This is likely why nearly half of employees today feel unengaged at work, with nearly 20% feeling “actively disengaged.” At the same time, companies are struggling to align employee strengths with organizational priorities. Employees may have certain strengths, but employers may find those skills don’t directly increase operational efficiency or performance. This is why nearly 80% of businesses are more worried about a talent shortage today than they were two years ago.

So, what’s the answer? Employees and employers must work together to identify what roles are currently filled, what skills are still needed, and who best exemplifies those skills. For employees, this means taking control of how they grow their careers and improving for the better. For employers, this means displaying an unwavering commitment to employee reinvestment by understanding key areas of interest to effectively fill skills gaps.

At Avaya, for example, we’re leading an employee enablement program under our Marketing 3.0 strategy. The initiative is designed to help strengthen our marketing organization by equipping employees with the right competencies that reflect our culture, strategy, expectations and market dynamics. By doing so, we can ensure we’re recruiting and managing talent in the most strategic way, putting the right people in the right jobs with the abilities to perform at maximum potential every day. By having each marketing function participate in a simple knowledge profile exercise, we can begin objectively determining development opportunities that best meet their needs and the needs of our business.

As technology continuously evolves, it’s crucial that employees have a propensity for continuous learning and that organizations foster an environment for this learning. In the words of former GE CEO Jack Welch, “An organization’s ability to learn, and translate that learning into action rapidly, is the ultimate competitive advantage.”

We live in a world that is rapidly and inevitably changing. Employees should embrace this change to thrive, and must if they wish to propel business forward. As employers, we are responsible for strategically leveraging our resources to align employee strengths with organizational needs to succeed in this environment of constant change.

Next-Generation IT: What Does It Really Look Like?

From mainframes to virtualization to the IoT, we’ve come a long way in a very short amount of time in terms of networking, OS and applications. All this progress has led us to an inflection point of digital business innovation; a critical time in history where, as Gartner puts it best, enterprises must “recognize, prioritize and respond at the speed of digital change.” Despite this, however, many businesses still rely on legacy systems that prevent them from growing and thriving. So, what’s the deal?

I attempted to answer this in a previous blog, where I laid out as completely as I could the evolution of interconnectivity leading up to today. What I ultimately concluded in that blog is that we have reached a point where we can finally eliminate dependency on legacy hardware and hierarchical architecture with the use of one single, next-generation software platform. The call for organizations across all industries to migrate from legacy hardware has never been stronger, and the good news is that technology has evolved to a point where they can now effectively do so.

This concept of a “next-generation platform,” however, isn’t as simple as it sounds. Just consider its many variations among industry analysts. McKinsey & Company, for example, refers to this kind of platform as “next-generation infrastructure” (NGI). Gartner, meanwhile, describes it as the “New Digital Platform.” We’re seeing market leaders emphasizing the importance of investing in a next-generation platform, yet many businesses still wonder what the technology actually looks like.

To help make it clearer, Avaya took a comparative look at top analyst definitions and broke them down into five key areas of focus for businesses industry-wide: 

  1. Next-generation IT
  2. The Internet of Things (IoT)
  3. Artificial intelligence (AI)/automation
  4. Open ecosystem
  5. The customer/citizen experience

In a series of upcoming blogs, I’ll be walking through these five pillars of a next-generation platform, outlining what they mean and how they affect businesses across every sector. So, let’s get started with the first of these: next-generation IT.

Simplifying Next-Gen IT

As IT leaders face unrelenting pressure to elevate their infrastructure, next-generation IT has emerged as a way to enable advanced new capabilities and support ever-growing business needs. But what does it consist of? Well, many things. The way we see it, however, next-generation IT is defined by four core elements: secure mobility, any-cloud deployment (more software), omnichannel and big data analytics—all of which are supported by a next-generation platform built on open communications architecture.

Secure mobility: Most digital growth today stems from mobile usage. Just consider that mobile now represents 65% of all digital media time, with the majority of traffic for over 75% of digital content—health information, news, retail, sports—coming from mobile devices. Without question, the ability to deliver a secure mobile customer/citizen experience must be part of every organizational DNA. This means enabling customers to securely consume mobile services anytime, anywhere and however desired with no physical connectivity limitations. Whether they’re on a corporate campus connected to a dedicated WLAN, at Starbucks connected to a Wi-Fi hotspot, or on the road paired to a Bluetooth device though cellular connectivity, the connection must always be seamless and secure. Businesses must start intelligently combining carrier wireless technology with next-generation Wi-Fi infrastructure to make service consumption more secure and mobile-minded with seamless hand-off between the two technologies.

Any-cloud deployment: Consumers should be able to seamlessly deploy any application or service as part of any cloud deployment model (hybrid, public or private). To enable this, businesses must sufficiently meet today’s requirements for any-to-any communication. As I discussed in my previous blog, nodal configuration and virtualization are things of the past; any-to-any communications have won the battle. A next-generation platform built on open communications architecture is integrated, agile, and future-proof enough to effectively and securely support a services-based ecosystem. Of course, the transition toward software services is highly desirable, but remember that not all hardware will disappear—though it should be reduced wherever possible. This services-based design is the underlying force of many of today’s greatest digital developments (smart cars, smart cities). It’s what allows organizations across every sector to deliver the most value possible to end-users.

Omnichannel: All communication and/or collaboration platforms must be omnichannel enabled. This is not to be confused with multi-channel. Whereas the latter represents a siloed, metric-driven approach to service, the former is inherently designed to provide a 360-degree customer view, supporting the foundation of true engagement. An omnichannel approach also supports businesses with the contextual and situational awareness needed to drive anticipatory engagement at the individual account level. This means knowing that a customer has been on your website for the last 15 minutes looking at a specific product of yours, which they inquired about during a live chat session with an agent two weeks ago. This kind of contextual data needs to be brought into the picture to add value and enhance the experience of those you serve, regardless of where the interaction first started.

Big data analytics: It’s imperative that you strategically use the contextual data within your organization to compete based on the CX. A huge part of next-generation IT involves seamlessly leveraging multiple databases and analytics capabilities to transform business outcomes (and ultimately, customers’ lives). This means finally breaking siloes to tap into the explosive amount of data—structured and unstructured, historical and real-time—at your disposal. Just as importantly, this means employees being able to openly share, track, and collect data across various teams, processes, and customer touch points. This level of data visibility means a hotel being able to see that a guest’s flight got delayed, enabling the on-duty manager to let that customer know that his or her reservation will be held. It means a bank being able to push out money management tips to a customer after seeing that the individual’s last five interactions were related to account spending.

These four components are critical to next-generation IT as part of a next-generation digital platform. Organizations must start looking at each of these components if they wish to compete based on the CX and respond at the speed of digital change. Stay tuned: next, we’ll be talking about the ever-growing Internet of Things!