The Sun Was Shining on Avaya in Orlando – With a Nice Cool “Breeze”!

Most people traveling to Orlando go to Disney World for cool rides or a peek at the future in the latest Star Wars adventure. This week, executives from Disney and other globally recognized brands came to visit Avaya instead, to learn more about the company’s future at Enterprise Connect 2016. Put on some sunglasses and see why our future is so bright!

Each year, thousands of communications industry professionals come to the conference for a preview of the latest innovations in cloud applications, developer tools, and other technologies they can use to better serve their customers.

Avaya was center stage at the event with a lot of news for 2016 and beyond–a remarkable step in our digital transformation away from the desk phones of the past.

Avaya has a very clever event strategy of inviting customers with their account executives to the exclusive, invitation-only Innovation Lounge: A private room where customers get a peek into the future of communications under NDA.

No, they didn’t see Star Wars VIII, but they did see more than a dozen previews of upcoming Avaya solutions, including customer engagement, team engagement, and networking solutions. No cameras allowed and no tweeting, but lots of great sharing of ideas over a drink or two, with customers including Disney, Fidelity Investments, Allstate, CVS, and hundreds more, joined by leading industry analysts from Gartner and Forrester Research.

The lounge is intentionally larger than our show floor booth. It’s money well spent to have your most important prospective customers responsible for purchase decisions hear about the future in a private setting, while still having the show floor presence for media and general attendees.

I had the distinct pleasure of showing the brand-new Avaya Snapp Store, launched Monday morning in conjunction with Avaya Breeze, the next generation of the Engagement Development Platform.

The Snapp Store is an online marketplace for evaluating, purchasing, and downloading software from Avaya and several partners, which can easily “snap in” to Avaya Breeze applications. It’s like having an app store for business software, enabling application developers to obtain software from Avaya and our partners and even from other customers.

Developers can have fully functional, enterprise-grade applications ready in hours and days, not weeks and months. Check it out for yourself. I also shared a preview of an upcoming customer engagement client application for the web, along with a full software development kit. We’ll have more about that soon as we get closer to launch.

But the real star of the show was a new product we kept under wraps until Gary E. Barnett’s keynote address Wednesday morning. Gary began by talking about the incredible potential of Avaya Breeze and the Avaya Snapp Store, with the help of special guests—notably one customer who has already used Breeze to build 40 applications. Yes, 40! The big surprise was the announcement of Zang, a new software and services company from Avaya that will be led by Esna founder Mohammad Nezarati.

This cloud-based communications- and applications-as-a-service platform offers customers ready-to-use apps, and a powerful set of tools to either build their own Zang apps, or embed Zang’s communication capabilities into existing apps, devices and business processes. The interest from attendees was over-the-top positive.

After the keynote, I gave a presentation on the show floor called “20 Cool Applications in 20 Minutes” to a packed audience. They loved how Breeze and Zang allow customers to design, build, and run communication-enabled business applications with ease.

The last 18 months for me have been extraordinary. Customer engagement. Team engagement. Snapp Store. Avaya Breeze. Zang. Definitely cool.

It’s all part of the transformation into an innovation-driven software and services company that is more like Google than the IRS. It’s a good time to carry the torch forward with our reputation for earning the business of more than 95 percent of Fortune 100 and 500 businesses, further complemented with an exciting portfolio of new software and services set to meet the needs of people living the digital transformation every day.

Zang translates to the word “excellent,” which is exactly the word used by customers, partners, analysts and attendees to describe Avaya this week.

That feels about as good as the sunshine outside of the Gaylord Palms Resort shining on Avaya!

Related Articles:

8 Questions to Help Decide if You are Getting the Most From Your Cloud

Concerns around security and accountability remain top of mind (and top-of-budget) for today’s IT managers. Such concerns are the natural progression of a trend that we saw coming in 2009, discussed in our 2010 trends forecast, and have seen evolve into three different cloud worlds with different outcomes and possibilities. Fueled by department-by-department adoption of applications (many cloud-based) to fill vertical business needs, the cloud now touches every part of an organization.

In a recent survey, 46 percent of IT executives said they were dealing with an average of four or more applications. The survey, sponsored by thinkJar and conducted by Beagle Research, polled 148 IT executives and found IT managers are “being overtaxed with work and unable to keep up with new requests and new demands,” concluding that the average IT manager is taking on more cloud services than they can handle.

Software applications in the enterprise are prolific today, and will only grow. Researchers at Strategy Analytics forecast there will be 33 billion devices in use by 2020. These new devices and applications are accelerating demand for the three clouds identified in our 2016 trend report: “80%+ of enterprises will use Public cloud, but Hybrid/Private cloud will remain the critical application workhorse for next 5 years.” We wrote:

“Going to Cloud has many benefits, but it can also lead to some new challenges that businesses need to consider. As solutions move from homogenous, monolithic technology to heterogeneous technology running on layers upon layers of cloud infrastructure, customers get increasingly concerned about Cloud security and accountability for service delivery/support of the full solution. Customers will demand accountability and value from their “point” vendors, requiring strong relationships and mastery of the infrastructure implications which includes the Cloud applications, as well as the network and desktop/mobile devices which served them.”

The Cloud was supposed to make things simpler. Instead, the added complexity of multiple applications, off-premise network quality, and interoperability ended up taxing already overwhelmed IT departments, which face new requests and new demands every day. As the Beagle Research/thinkJar study concluded, “it is a point of failure for all cloud adoption projects that IT cannot keep up with demand… New resources must be cloud-aware and cloud-educated to reduce the potential number of problems that adopting cloud applications engender for organizations.”

More concerning, perhaps, are security breaches caused by well-meaning employees. Gartner recently predicted that by the year 2020, some 95 percent of cloud security failures will be the customer’s fault.

“Many organizations still harbor security concerns about use of public cloud services. However, only a small percentage of security incidents impacting enterprises using the cloud have been due to vulnerabilities that were the provider’s fault. Customers increasingly will use cloud access security brokers products to manage and monitor their use of SaaS and other forms of public cloud services.”

In a separate report, Gartner analysts predict security will be a big cloud differentiator. “Security will displace cost and agility as the primary reason for government agencies to switch to the public cloud,” writes Computer Business Review, commenting on the Gartner report. “Increased security will be the primary driver for the extensive adoption of public cloud options for digital government platforms.” Gartner research director Neville Cannon added:

“Many cloud service providers, such as Amazon Web Services, Microsoft and Google, invest heavily in incorporating higher levels of security into their products to continue building confidence that their data is more secure. Many of these providers can invest more than what most nations could afford, let alone the average government agency.”

Demand for security, accountability, and results will only rise as IT managers come under pressure to make the most of cloud investments. Maximizing the investment begins with the development of a comprehensive strategy and a partner with industry expertise, a stronger and more adaptable infrastructure, a commitment to best practices, and, of course, a commitment to security.

So, are you getting the most from your cloud investment? Consider these 8 questions to evaluate whether you are getting the most out of your cloud strategy and deployments, and where you need to change course going forward:

  • Different cloud models can lead to different outcomes. Are your business plan and current cloud strategy still aligned in terms of best outcomes? The attributes of the different cloud models lend themselves to different results (private provides customization, hybrid provides more flexibility, and public provides rapid scale for cookie-cutter applications).
  • Is your current solution delivering on the three biggest engagement and collaboration benefits — productivity, customer experience, and profitability?
  • Are all of the company’s key constituents, including employees and business units, supply chain providers, and customers, realizing all of the benefits you expected from the current cloud solution? Some may want to rapidly try new cloud application startups; some may need the cloud access for critical core business applications.
  • Recent studies have shown that 8 out of 10 new applications are introduced via the cloud. How easy is it to roll out new cloud services that demand performance, reliability, and security in your current IT environment?
  • Does your current delivery model enable scalability while managing demand spikes of critical core applications?
  • Can your current network and access to cloud provider data centers provide the necessary flexibility and quality of service to maximize cloud application experience?
  • Does the cloud solution have easy-to-use, consumer-like tools that can be quickly adopted by users without the need for extensive training?
  • While often overlooked, as the number of cloud applications leveraged in an organization grows, so does the human effort required to manage cloud vendors and integrations. Do you have enough resources to effectively manage your cloud vendors?

Will you be ready to layer in new cloud applications coming in 2016? What security issues do you foresee in 2016? What questions are you asking when choosing a vendor?

Follow me on Twitter: @Pat_Patterson_V

How Enterprise Virtualization Will Save Your Business in the Era of IoT

Having a backyard full of trees is quite therapeutic during a marathon day of conference calls, but it also comes with a fair share of maintenance: picking up the fallen limbs from the elms, keeping the invasive cedars from choking out other species, and trimming up the oaks to keep them healthy and the fireplace burning through the winter. On those maintenance days, it’s easy to get obsessed with a tree or set of trees that are causing a problem … say, dropping large limbs dangerously close to your daughters’ trampoline. When you’re fixing up your backyard, one problem – one tree – at a time, the solution to the problem at hand often fails to take into account the needs of the larger ecosystem. Unfortunately, for many networking professionals, every day feels like a maintenance day.

We see problems with mobility and service chaining in and across data centers. We see problems with cost and reliability in the WAN. We see problems with scalability and security in the campus. In a nutshell, we see problems. Fortunately, for every problem, there’s a good ol’ fashioned snake oil salesman. We’re inundated with the latest and greatest technologies to solve our woes … even some we didn’t know we had.

The problem is that we’re putting Band-Aids on bullet holes. The bleeding stops, but the real problem is still lurking beneath the surface. It’s not that these fixes are bad. The problem is that they’re being positioned as a cure-all instead of simply tools to address localized side effects of the problem.

The problem is broader. The data center exists to host applications. Those applications exist to enable users. The WAN exists to connect the data center to the campus, which exists for the users. And, of course, the users exist to run the business.

Since the business is the thing we’re looking to keep alive and thriving, those users need to be productive. That means that they need fast, efficient access to the applications that enable their jobs. So, those problems we rattled off earlier are really just symptoms that have emerged as we tried to create enterprise services across silos of control.

If we want to remove the bullet and save the patient, we must recognize the need for end-to-end services and look holistically at Enterprise Virtualization methods that will securely extend services from user to application at scale with on-demand mobility and business continuity. Otherwise, the problem is only going to get worse.

With the Internet of Things (IoT) becoming an ever-increasing reality in the enterprise, the need for services from device to application is going to multiply exponentially. Without Enterprise Virtualization, the burden on IT to deal with every little problem across the islands of campus, WAN and data center will be overwhelming. They simply won’t be able to keep pace, and, as a result, neither will the business. The users will be limited and become frustrated, and productivity will suffer in turn. It’s a bleak picture, but it doesn’t have to be.

Enterprise Virtualization provides a number of advantages that have long been unattainable for the general enterprise. While we’ve managed to achieve “micro-segmentation” down to the virtual machine layer for applications, the data those applications handle is set free at the data center doors and left vulnerable in the less secure world beyond.

Enterprise Virtualization enables you to extend the segmentation in the data center to the very edges of the network, where the data is consumed by users. Not only can you extend isolation, you can also view it as one contiguous service from server node to user node.

All of the tools available for measuring quality and performance have a clear view from end to end, rather than requiring additional tools to aggregate and correlate metrics across the three different islands of technology. Not to mention, Enterprise Virtualization allows you to significantly reduce the number of touch points in provisioning and troubleshooting, thus minimizing the likelihood of downtime due to human error.

Just like that limb-dropping elm can avoid the chainsaw, your enterprise can avoid being cut down in its prime. You see, it was a problem in the ecosystem that would have eventually killed all the trees through their intertwined root systems. It was lurking beneath the surface, but the arborist took a step back to see the whole forest, and then recognized and treated the real issue. Likewise, you need to make sure that someone is looking at your forest of IT challenges … not just banging their head on a single tree.

Writing Your First WebRTC Application: Part 2

Last week I posted the first article in my series on writing your first WebRTC application. In it, I explained the high-level aspects of a WebRTC signaling server along with the client-side components. At the risk of repeating myself, here are the most important concepts.

  • A WebRTC solution consists of two parts – the code that runs in the web browser and the signaling server.
  • The web browser application will consist of HTML and JavaScript.
  • HTML will be used for user input and page display.
  • JavaScript will be used for communication to the signaling server and WebRTC function calls.
  • WebRTC requires a signaling server, but gives you a lot of leeway as to what it is.
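Because WebRTC leaves the signaling protocol entirely up to you, even the message format is your choice. As a hedged illustration (the JSON envelope and function names below are my own invention, not anything this series prescribes), a client might wrap its offers, answers, and ICE candidates like this before handing them to whatever transport the signaling server uses:

```javascript
// A hypothetical JSON envelope for signaling messages. WebRTC does not
// mandate any format, so this is just one possible convention.
function makeSignal(type, payload) {
    // type is a string such as 'offer', 'answer', or 'candidate'
    return JSON.stringify({ type: type, payload: payload });
}

function parseSignal(raw) {
    // Returns an object with .type and .payload fields
    return JSON.parse(raw);
}

// In the browser, these strings would travel over your chosen transport,
// for example a WebSocket: ws.send(makeSignal('offer', description));
```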

Today, I want to spend a little more time on the client-side code. Specifically, I want to write about how different web browsers have chosen to implement WebRTC.

Because programming is half science and half art, there are a multitude of ways to solve the same problem. I won’t claim that my way is the best way, but there is a rhyme to my reason and my approach gets the job done.

Companion articles that you may find useful:

WebRTC for Beginners

Writing Your First WebRTC Application: Part One

The Unfortunate State of WebRTC

There are countless articles on the web about how WebRTC allows you to create and manage multimedia calls to and from web browsers. While that is certainly true, you might not have read that the WebRTC specification is still very much a work in progress and different web browsers put their own twists on it. What might work in one web browser won’t necessarily work in another.

My experience has been limited to Chrome and Firefox, and both have a number of differences that need to be accounted for. WebRTC also runs in Opera, but I have yet to tackle that browser.

Chrome Browser Settings

It is important to make sure that WebRTC is enabled within Chrome. You do this by opening Chrome and navigating to:

chrome://flags

Search for the string “WebRTC” and ensure “WebRTC device enumeration” is enabled.

There are a number of additional WebRTC settings that apply only to the Android operating system. If you use Android for your application, ensure that those settings have been enabled, too. So far, I have done all my work on a Windows 8 PC.

Note: as of the writing of this article, the current version of Chrome is 36.0.1985.143 m. Google may choose to change any of what I am writing in the future. Pay attention, as this is a very fluid subject.

Firefox Browser Settings

Unlike Chrome, there are no settings to enable or disable WebRTC functions for Firefox. Of course, this may change over time. As I explained with Chrome, you need to pay attention.

API Differences

For reasons unclear to me, Chrome and Firefox have chosen to use different names for the same WebRTC APIs and objects. This is especially true with Firefox, which appears to have renamed most of them. You aren’t off the hook with Chrome, though. There are simply fewer changes to make.

I would highly recommend that you create a wrapper that looks to see which browser your application is running on and override the WebRTC function names as necessary.

You can check to see which browser you are on and set the function names with the following JavaScript code.

if (navigator.mozGetUserMedia) {
    // Firefox-specific code
    RTCPeerConnection = mozRTCPeerConnection;
    RTCSessionDescription = mozRTCSessionDescription;
    RTCIceCandidate = mozRTCIceCandidate;
    getUserMedia = navigator.mozGetUserMedia.bind(navigator);
} else if (navigator.webkitGetUserMedia) {
    // Chrome-specific code
    RTCPeerConnection = webkitRTCPeerConnection;
    getUserMedia = navigator.webkitGetUserMedia.bind(navigator);
}

Now, whenever you call one of these overridden WebRTC functions, it won’t matter which browser you are on.
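The same detection test can also be factored into a small helper, which makes the wrapper's branching logic easy to exercise with fake navigator objects. This is a sketch of mine (the function name is hypothetical, not part of any browser API):

```javascript
// Hypothetical helper that classifies the browser the same way the
// wrapper does: by which vendor-prefixed getUserMedia it exposes.
function detectWebRTCBrowser(nav) {
    if (nav.mozGetUserMedia) {
        return 'firefox';     // Firefox uses moz-prefixed WebRTC APIs
    } else if (nav.webkitGetUserMedia) {
        return 'chrome';      // Chrome uses webkit-prefixed WebRTC APIs
    }
    return 'unsupported';     // no known WebRTC implementation found
}

// In the browser you would pass the real navigator object:
//   var browser = detectWebRTCBrowser(navigator);
```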

I highly recommend that you invoke this wrapper code immediately after your HTML code lays out the page. For example, let’s name the wrapper function initAdapter() and call it from the function onPageLoad().

<body onload="onPageLoad();">

function onPageLoad() {
    initAdapter();
}

Stream Management

In addition to overriding the function names, Chrome and Firefox are different in how they handle media streams. To solve that, create generic functions that are specific to each browser type.

In a future article, I will put these functions to use. For now, though, I will simply define them.

For Firefox:

attachMediaStream = function(element, stream) {
    element.mozSrcObject = stream;
};

reattachMediaStream = function(to, from) {
    to.mozSrcObject = from.mozSrcObject;
};

if (!MediaStream.prototype.getVideoTracks) {
    MediaStream.prototype.getVideoTracks = function() {
        return [];
    };
}

if (!MediaStream.prototype.getAudioTracks) {
    MediaStream.prototype.getAudioTracks = function() {
        return [];
    };
}

For Chrome:

attachMediaStream = function(element, stream) {
    element.src = webkitURL.createObjectURL(stream);
};

reattachMediaStream = function(to, from) {
    to.src = from.src;
};

if (!webkitMediaStream.prototype.getVideoTracks) {
    webkitMediaStream.prototype.getVideoTracks = function() {
        return this.videoTracks;
    };
    webkitMediaStream.prototype.getAudioTracks = function() {
        return this.audioTracks;
    };
}

if (!webkitRTCPeerConnection.prototype.getLocalStreams) {
    webkitRTCPeerConnection.prototype.getLocalStreams = function() {
        return this.localStreams;
    };
    webkitRTCPeerConnection.prototype.getRemoteStreams = function() {
        return this.remoteStreams;
    };
}

Let’s Call it a Day

I was told that you should always leave ‘em wanting more, so I am going to stop here. In summary:

  • You need to ensure that WebRTC has been enabled for the web browser.
  • Firefox and Chrome use different names for the same WebRTC functions. Create a wrapper to override the browser-specific names.

In my next installment, I will start to use the function calls to create the WebRTC application. I hope you come back for more because this is where it really starts to get fun.