Why a Phased Approach to Communications-in-the-Cloud Makes Sense

As IT departments refresh their assets, an increasing number of decision-makers are taking a close look at cloud-based software to replace aging, on-premise hardware.

The benefits of the cloud should be well-worn territory by now: lower initial costs and steady monthly billing make long-term budget planning a breeze; upgrades and security patches are virtually pain-free; remote workers can access their complete suite of business tools and applications; critical data is housed in multiple locations, easing disaster recovery. The list goes on.

Business communications is undergoing a similar shift to the cloud. Desk phones can be untethered from the traditional on-premise PBX and placed in the cloud, accessible through IP-based hard phones, desktop-based soft phones and mobile apps. Similarly, contact centers can place their software, equipment and customer databases in the cloud.

Avaya has made significant investments in the cloud and offers a range of cloud-based solutions for its customers. At VMworld 2015, a team of engineers is demonstrating Avaya UC and CC products hosted on VMware vCloud Air, the company’s cloud-based service platform.

A key feature of Avaya’s vCloud Air-based offering is its phased rollout, says Mohan Gopalakrishna, one of the lead engineers on the project, who will speak at VMworld 2015 on Wednesday.

“A phased approach gives customers a chance to try a smaller piece of the infrastructure in the cloud and builds their confidence in cloud-based solutions,” said Gopalakrishna. “We can gradually retire servers from the on-premise campus and move those workloads into the cloud. Avaya protects the investment and gives you an easy migration path to the cloud at your own pace.”

Avaya’s Enterprise Solution Practice team helps architect cloud migration plans for existing customers, adopting a phased approach that keeps on-premise equipment in service on the path to a fully cloud-based solution. Cloud doesn’t have to mean rip-and-replace.

Currently, Gopalakrishna estimates it would take about a month to fully migrate a midsized company’s unified communications to the cloud—a target he thinks could eventually be shortened to under a week.

Contact centers are more complex, and migrating one could take 3 to 6 months depending on the project. Still, that’s much faster than the 8 to 12 months it currently takes to migrate an on-premise contact center. Eventually, Avaya expects an average 50 percent reduction in lead time to market.

Avaya works with major cloud service providers like VMware on two levels: The first is placing Avaya solutions inside the provider’s cloud service, essentially treating it as infrastructure. The second is using the cloud provider’s unique tools to enhance Avaya software running on that infrastructure. At VMworld, Mohan will talk about the ways Avaya used VMware’s real-time hypervisor to improve its hybrid cloud solutions.

If you’re attending VMworld 2015, join Mohan Gopalakrishna on Wednesday, Sept. 2 (details here), and visit booth #541 to learn more about Avaya products and solutions.

Related Articles:

Next-Generation IT: What Does It Really Look Like?

From mainframes to virtualization to the IoT, we’ve come a long way in a very short amount of time in terms of networking, OS and applications. All this progress has led us to an inflection point of digital business innovation: a critical time in history when, as Gartner puts it, enterprises must “recognize, prioritize and respond at the speed of digital change.” Despite this, however, many businesses still rely on legacy systems that prevent them from growing and thriving. So, what’s the deal?

I attempted to answer this in a previous blog, where I laid out, as completely as I could, the evolution of interconnectivity leading up to today. That blog ultimately concluded that we have reached a point where we can finally eliminate dependency on legacy hardware and hierarchical architecture with a single, next-generation software platform. The call for organizations across all industries to migrate from legacy hardware has never been stronger, and the good news is that technology has evolved to a point where they can now effectively do so.

This concept of a “next-generation platform,” however, isn’t as simple as it sounds. Just consider its many variations among industry analysts. McKinsey & Company, for example, refers to this kind of platform as “next-generation infrastructure” (NGI). Gartner, meanwhile, describes it as the “New Digital Platform.” We’re seeing market leaders emphasizing the importance of investing in a next-generation platform, yet many businesses still wonder what the technology actually looks like.

To help make it clearer, Avaya took a comparative look at top analyst definitions and broke them down into five key areas of focus for businesses industry-wide: 

  1. Next-generation IT
  2. The Internet of Things (IoT)
  3. Artificial intelligence (AI)/automation
  4. Open ecosystem
  5. The customer/citizen experience

In a series of upcoming blogs, I’ll be walking through these five pillars of a next-generation platform, outlining what they mean and how they affect businesses across every sector. So, let’s get started with the first of these: next-generation IT.

Simplifying Next-Gen IT

As IT leaders face unrelenting pressure to elevate their infrastructure, next-generation IT has emerged as a way to enable advanced new capabilities and support ever-growing business needs. But what does it consist of? Well, many things. The way we see it, however, next-generation IT is defined by four core elements: secure mobility, any-cloud deployment (more software), omnichannel and big data analytics—all of which are supported by a next-generation platform built on open communications architecture.

Secure mobility: Most digital growth today stems from mobile usage. Just consider that mobile now represents 65% of all digital media time, with the majority of traffic for over 75% of digital content—health information, news, retail, sports—coming from mobile devices. Without question, the ability to deliver a secure mobile customer/citizen experience must be part of every organization’s DNA. This means enabling customers to securely consume mobile services anytime, anywhere and however desired, with no physical connectivity limitations. Whether they’re on a corporate campus connected to a dedicated WLAN, at Starbucks connected to a Wi-Fi hotspot, or on the road paired to a Bluetooth device over a cellular connection, the connection must always be seamless and secure. Businesses must start intelligently combining carrier wireless technology with next-generation Wi-Fi infrastructure, with seamless hand-off between the two, so that service consumption is more secure and mobile-minded.

Any-cloud deployment: Consumers should be able to seamlessly deploy any application or service as part of any cloud deployment model (hybrid, public or private). To enable this, businesses must sufficiently meet today’s requirements for any-to-any communication. As I discussed in my previous blog, the days of nodal configuration and virtualization are a thing of the past; any-to-any communications have won the battle. A next-generation platform built on open communications architecture is integrated, agile, and future-proof enough to effectively and securely support a services-based ecosystem. Of course, the transition toward software services is highly desirable, but remember that not all hardware will disappear—though it should be replaced with software wherever possible. This services-based design is the underlying force behind many of today’s greatest digital developments (smart cars, smart cities). It’s what allows organizations across every sector to deliver the most value possible to end-users.

Omnichannel: All communication and/or collaboration platforms must be omnichannel-enabled. This is not to be confused with multi-channel. Whereas the latter represents a siloed, metric-driven approach to service, the former is inherently designed to provide a 360-degree customer view, supporting the foundation of true engagement. An omnichannel approach also gives businesses the contextual and situational awareness needed to drive anticipatory engagement at the individual account level. This means knowing that a customer has been on your website for the last 15 minutes looking at a specific product of yours, which they inquired about during a live chat session with an agent two weeks ago. This kind of contextual data needs to be brought into the picture to add value and enhance the experience of those you serve, regardless of where the interaction first started.
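
To make that idea concrete, here is a minimal sketch in Python of how interaction history from separate channels might be merged into a single, chronological 360-degree view. The data structures and function names are hypothetical illustrations, not part of any Avaya product API.

```python
# Hypothetical sketch: merging per-channel interaction history into one
# chronological customer view. Names and data are illustrative only.
from dataclasses import dataclass
from datetime import datetime
from typing import List


@dataclass
class Interaction:
    channel: str        # e.g. "web", "chat", "voice"
    timestamp: datetime
    detail: str


def unified_timeline(*channel_histories: List[Interaction]) -> List[Interaction]:
    """Merge separate channel histories into a single, time-ordered journey."""
    merged = [event for history in channel_histories for event in history]
    return sorted(merged, key=lambda event: event.timestamp)


# Example: a chat inquiry two weeks ago plus today's 15-minute browsing session.
chat = [Interaction("chat", datetime(2017, 9, 1, 14, 5), "Asked an agent about Product X")]
web = [Interaction("web", datetime(2017, 9, 15, 10, 30), "Viewed the Product X page for 15 minutes")]

for event in unified_timeline(chat, web):
    print(event.timestamp, event.channel, event.detail)
```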

Big data analytics: It’s imperative that you strategically use the contextual data within your organization to compete based on the CX. A huge part of next-generation IT involves seamlessly leveraging multiple databases and analytics capabilities to transform business outcomes (and ultimately, customers’ lives). This means finally breaking silos to tap into the explosive amount of data—structured and unstructured, historical and real-time—at your disposal. Just as importantly, it means employees being able to openly share, track, and collect data across various teams, processes, and customer touch points. This level of data visibility means a hotel being able to see that a guest’s flight got delayed, enabling the on-duty manager to let that customer know that his or her reservation will be held. It means a bank being able to push out money management tips to a customer after seeing that the individual’s last five interactions were related to account spending.
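
As a rough illustration of the hotel example above, here is a small Python sketch of an event handler that joins a flight-delay event to reservation data and alerts the on-duty manager. The data structures and the notification function are assumptions made for illustration, not a real Avaya or airline API.

```python
# Hypothetical sketch: reacting to a real-time flight-delay event by holding
# the affected guest's reservation. All data structures are illustrative.
reservations = {
    "guest-123": {"flight": "AA100", "room": "412", "status": "confirmed"},
    "guest-456": {"flight": "UA220", "room": "508", "status": "confirmed"},
}


def notify_manager(message: str) -> None:
    # Stand-in for whatever alerting channel the property actually uses.
    print(f"[front desk] {message}")


def on_flight_delayed(flight_number: str) -> None:
    """Join a streaming flight-status event against reservation data and act on matches."""
    for guest_id, booking in reservations.items():
        if booking["flight"] == flight_number:
            booking["status"] = "held"
            notify_manager(
                f"Flight {flight_number} is delayed; holding room "
                f"{booking['room']} for {guest_id}."
            )


on_flight_delayed("AA100")
```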

These four components are critical to next-generation IT as part of a next-generation digital platform. Organizations must start looking at each of these components if they wish to compete based on the CX and respond at the speed of digital change. Stay tuned: next, we’ll be talking about the ever-growing Internet of Things!

How to (Finally) Break the Longstanding Hold of Legacy Technology

Without question, we’ve seen more technological innovation in the last 30 years than in the century before. We now live in a reality of seemingly limitless possibilities and outcomes. Today, virtually any object can be considered part of an advanced, interconnected ecosystem. Companies across every sector are competing to reimagine customer engagement. The user experience is fundamentally changing as people, processes and services become more dynamically connected. Today’s smart, digital era represents unmatched opportunity for forward-thinking business leaders everywhere.

At the same time, however, it poses some challenges. Specifically, this rapid pace of innovation means businesses must find a way to quickly and efficiently modernize in order to differentiate competitively. At a time when digital disruptors are building custom IT environments on the fly, companies can no longer let legacy architecture dampen innovation and agility.

Businesses know this all too well, with 90% of IT decision makers believing that legacy systems prevent them from harnessing the digital technologies they need to grow and thrive. This is especially true in industries like government and finance, where there’s still a heavy dependency on legacy technology. For example, 71% of federal IT decision makers still use old operating systems to run important applications. Meanwhile, 30% of senior investment managers say they’re concerned about the ability of their current legacy systems to meet future regulatory requirements. The list goes on.

It’s clear that something needs to be done here, and fast. So, how exactly did we get to this point of digital disruption, and what can be done about legacy systems today? Let’s take a walk through recent history, and then discuss how companies can begin moving towards digital, next-generation IT.

Data Centralization to Decentralization

Let’s start with where applications were first consumed. About 30 to 40 years ago, all application intelligence was centralized (I’m sure some of you remember the good old mainframe days of using dumb terminals or emulators to access applications and store data centrally). There were some notable benefits to centralizing data in this fashion. There weren’t many issues with storage distribution, for instance, and disaster recovery procedures were clearly documented. Security challenges were also practically nonexistent because there wasn’t any local storage on the terminal (hence, dumb).

Soon, however, we saw the rise of the personal computer, which completely changed this model. Computing and storage could now be distributed, allowing local applications to run without any centralized dependency. This was a game-changer that sparked a desktop war between key market players like Microsoft (Windows), IBM (OS/2), and Apple (MacOS).

This transition to decentralization, however, wasn’t without its challenges. Employees may have gained mobility, but IT began facing new challenges in security and distributed storage. Companies were left wondering how best to control their data storage, especially since confidential information could easily be copied to a floppy disk, a local hard drive and, later, a USB drive. This remains a challenge to this day—no one wants to give up their mobility, so companies must find a way to regain control instead.

One thing to note: at this point, COTS (commercial off-the-shelf) servers could now be used. These systems were far less proprietary than previous host systems like mainframes, VAX and the like. However, they were still hardware-dependent, as each platform was usually tailored to the applications it had to run. As a result, a good amount of compute, memory and storage resources went underutilized. In fact, some servers were running at only 10-20% capacity. While there were benefits to COTS servers, the situation called for a better way to maximize the use of all resources.

The Rise of Virtualization

The only viable solution to these problems was to eliminate dedicated hardware in favor of a single software platform. But how? The market experienced profound change as companies strove to answer this question, eventually leading to the emergence of virtualization.

During this time, market leaders like VMware began transforming the industry by allowing multiple virtualized operating systems (virtual machines) to run simultaneously on the same hardware. In this way, applications ran as if they had their own dedicated compute, memory and storage. In reality, it was all being shared. Simply put, the hardware server had become virtualized. Brilliant!

This allowed companies to create virtual representations of resources such as compute, memory and storage devices. Companies could now run multiple applications over the same physical hardware, in a way that appeared to the applications as though they were running over their own dedicated hardware. More importantly, companies could now fully leverage every single resource at their disposal. Nothing would be left dormant or unused in this virtualized model, unlike what we saw in the past with a dedicated appliance/server per application.
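
A quick back-of-the-envelope calculation shows why this mattered. Using the 10-20% utilization figures mentioned earlier (the specific numbers below are assumed for illustration), consolidating one-app-per-server workloads onto a shared virtualized host raises utilization dramatically:

```python
import math

# Illustrative numbers only: one application per physical server,
# each box running at the 10-20% utilization described above.
dedicated_utilization = [0.15, 0.10, 0.20, 0.12, 0.18]

total_demand = sum(dedicated_utilization)        # ~0.75 servers' worth of real work
target = 0.80                                    # keep the virtualized host around 80% busy
hosts_needed = math.ceil(total_demand / target)  # -> 1 host instead of 5

print(f"Before: {len(dedicated_utilization)} dedicated servers, "
      f"average utilization {total_demand / len(dedicated_utilization):.0%}")
print(f"After:  {hosts_needed} virtualized host(s) at "
      f"{total_demand / hosts_needed:.0%} utilization")
```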

At this point, it was a no-brainer to move into the virtualized application world. However, the ugly truth remained: we were still using a legacy networking framework. Many continue to refer to this as client-server, but the bottom line is that it was a hierarchical model that required each node and link to be configured to carry or simulate end-to-end virtualization. Even though the application environment was virtualized, the infrastructure on which it ran was not built with that in mind. It didn’t matter if you were using VLANs, VRFs or even MPLS—it was a complex way of providing end-to-end virtualized services.

Who would finally be able to solve this issue? It seemed the Institute of Electrical and Electronics Engineers (IEEE) and the Internet Engineering Task Force (IETF) were on the right track with the standardization of an Ethernet protocol that allows end-to-end services virtualization, which finally took place in May 2012. This is known as SPB, or Shortest Path Bridging (IEEE 802.1aq and IETF RFC 6329, for those interested). And there you have it: servers, applications and networks are now finally virtualized! Are we done? Well, not quite … even desktops are being virtualized, a model known as VDI (Virtual Desktop Infrastructure) that re-centralizes control.

Overall, virtualization became the de facto model that allowed businesses to run applications on what we now know as the cloud. With private and public models, customers could choose which assets they wanted to own (that is, manage on premises) and which to have hosted in the public cloud. Soon, however, the challenge became how to run apps across these clouds. Companies quickly discovered the need to keep some applications and data (such as regulatory and compliance information) in an onsite private cloud, while other data was best suited for the public cloud. This is how the hybrid cloud deployment model was born.

Cloud Elasticity

Hybrid cloud allowed companies to operate in an environment that strategically utilized the best of both worlds—on-premises private cloud and third-party public cloud services—to meet their core objectives. In this new world of cloud orchestration, we saw the rise of digital giants like Amazon, Google and Facebook. With a high level of cloud elasticity, providers could now spin up a series of virtual applications or services in less than an hour and run them in the public cloud. This threw the doors of opportunity wide open for companies everywhere. These providers allowed organizations to create new instances on the fly and shut them down just as quickly. This elasticity is used, for example, to soft-launch new products or test-drive business in new marketplaces.
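
To make the “create on the fly, shut down just as quickly” point concrete, here is a minimal sketch using the AWS SDK for Python (boto3). The article doesn’t prescribe a particular provider, and the AMI ID, instance type and region below are placeholders rather than recommendations.

```python
# Minimal sketch of cloud elasticity with boto3 (the AWS SDK for Python).
# The AMI ID, instance type and region are placeholders, not recommendations.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Spin up a short-lived instance, e.g. for a product soft launch or a market test.
response = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # placeholder image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = response["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}")

# ... run the experiment ...

# Tear the instance down just as quickly when the test is over.
ec2.terminate_instances(InstanceIds=[instance_id])
```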

But let’s not forget the issue that remains to this day: we have yet to completely move away from all aging hardware. In today’s world of any-to-any communication, driven by technologies like the IoT, artificial intelligence, and machine learning, legacy hardware and hierarchical networking architecture are not just an inconvenience. They can break your business if you don’t have a strategy to reduce that dependency.

Finally Breaking Free of Hardware

The bottom line is that any-to-any communications have won the battle (unlike 15 years ago, when IT largely resisted and essentially shut down the peer-to-peer model). As a result, what many refer to as “meshed communication architecture” has emerged as the newest and strongest approach yet to network design.

This kind of architecture is integrated, agile and future-proof enough to effectively and securely support a services-based ecosystem. The days of nodal configuration and virtualization are a thing of the past. It’s vital that companies move to this services-based architecture to be able to support the future of the customer experience. Consider how it’s essential for supporting smart cars that can autonomously park and change lanes, while being redirected to alternate routes because of traffic congestion. It’s critical for supporting smart home solutions that enable homeowners to remotely manage utility usage. It’s crucial for delivering the most value possible to those who matter most: end-users.

For decades, we’ve been trying to eliminate a primal dependency on hardware. To finally break the silos associated with hardware, companies must begin setting themselves up to support any-to-any communication. In this environment, all services can virtually run anywhere across multiple sources of hardware that can be geographically dispersed.

Now that we know what can be done about legacy systems (transition to an open, software-enabled, meshed architecture), let’s discuss how companies can successfully integrate digital into their existing environment to transform business. Stay tuned for more.

How Enterprise Virtualization Will Save Your Business in the Era of IoT

Having a backyard full of trees is quite therapeutic during a marathon day of conference calls, but it also comes with a fair share of maintenance: picking up the fallen limbs from the elms, keeping the invasive cedars from choking out other species, and trimming up the oaks to keep them healthy and the fireplace burning through the winter. On those maintenance days, it’s easy to get obsessed with a tree or set of trees that are causing a problem … say, dropping large limbs dangerously close to your daughters’ trampoline. When you’re fixing up your backyard, one problem – one tree – at a time, the solution to the problem at hand often fails to take into account the needs of the larger ecosystem. Unfortunately, for many networking professionals, every day feels like a maintenance day.

We see problems with mobility and service chaining in and across data centers. We see problems with cost and reliability in the WAN. We see problems with scalability and security in the campus. In a nutshell, we see problems. Fortunately, for every problem, there’s a good ol’ fashioned snake oil salesman. We’re inundated with the latest and greatest technologies to solve our woes … even some we didn’t know we had.

The problem is that we’re putting Band-Aids on bullet holes. The bleeding stops, but the real problem is still lurking beneath the surface. It’s not that these fixes are bad. The problem is that they’re being positioned as a cure-all instead of simply tools to address localized side effects of the problem.

The problem is broader. The data center exists to host applications. Those applications exist to enable users. The WAN exists to connect the data center to the campus, which exists for the users. And, of course, the users exist to run the business.

Since the business is the thing we’re looking to keep alive and thriving, those users need to be productive. That means that they need fast, efficient access to the applications that enable their jobs. So, those problems we rattled off earlier are really just symptoms that have emerged as we tried to create enterprise services across silos of control.

If we want to remove the bullet and save the patient, we must recognize the need for end-to-end services and look holistically at Enterprise Virtualization methods that will securely extend services from user to application at scale with on-demand mobility and business continuity. Otherwise, the problem is only going to get worse.

With the Internet of Things (IoT) becoming an ever-increasing reality in the enterprise, the need for services from device to application is going to multiply exponentially. Without Enterprise Virtualization, the burden on IT to deal with every little problem across the islands of campus, WAN and data center will be overwhelming. They simply won’t be able to keep pace, and, as a result, neither will the business. The users will be limited and become frustrated, and productivity will suffer in turn. It’s a bleak picture, but it doesn’t have to be.

Enterprise Virtualization provides a number of advantages that have long been unattainable for the general enterprise. While we’ve managed to achieve “micro-segmentation” down to the virtual machine layer for applications, the very same data is set free at the data center doors and left vulnerable in the less secure world beyond.

Enterprise Virtualization enables you to extend the segmentation in the data center to the very edges of the network, where the data is consumed by users. Not only can you extend isolation, you can also view it as one contiguous service from server node to user node.

All of the tools available for measuring quality and performance have a clear view from end to end, rather than requiring additional tools to aggregate and correlate metrics across the three different islands of technology. Not to mention, Enterprise Virtualization allows you to significantly reduce the number of touch points during provisioning and troubleshooting, minimizing the likelihood of downtime due to human error.

Just like that limb-dropping elm can avoid the chainsaw, your enterprise can avoid being cut down in its prime. You see, it was a problem in the ecosystem that would have eventually killed all the trees through their intertwined root systems. It was lurking beneath the surface, but the arborist took a step back to see the whole forest, and then recognized and treated the real issue. Likewise, you need to make sure that someone is looking at your forest of IT challenges … not just banging their head on a single tree.