The Cloud: Your Disaster Backup for Business Continuity


I hit a deer on my way to work one day.

Actually, the deer hit me, since I was minding my own business when it leapt into the path of my car.

Thankfully, I wasn’t hurt and the damage to my car was manageable.

Sadly, the same cannot be said for the deer.

In case you’re wondering, I don’t live or work in the woods.  In fact, this happened in the middle of suburban Bloomington, Minnesota.  Bloomington is the home of the Mall of America and isn’t some backwater country village – at least not by Minnesota standards.

Still, there are enough places for deer to scratch out a living and come bounding out of their hiding places when least expected.

Being the nerd that I am, this made me think of SIP and how even the best-laid plans can go awry: An enterprise can spend tons of cash creating a rock-solid, resilient system, and things can still go wrong. Systems go down due to stupid human tricks. Mother Nature can rise up and bring a data center to its knees. Software crashes and hardware fails.

Additionally, an enterprise might not be in a position to harden every facet of its communications system. Money is short, and investing in redundant servers and gateways might not be in the budget. You might also be sitting on aging equipment that isn’t worth upgrading. The money for your next system isn’t available for a year or two, so you nurse the current one along until it’s hauled away.

No matter what the state of your system, it makes sense to consider adding one more layer of resilience.

Enter the Cloud

One option that is becoming increasingly popular is to add elements of cloud communications as part of a comprehensive business continuity strategy.

Instead of going headfirst into the cloud, you dip your toes into the water for a few key people and groups.

Perhaps you provide cloud communications to your first responders and key management personnel. Perhaps you give critical departments standby cloud resources. The point is that this isn’t a full-blown rollout to the entire company, but only to a select few and only for times of great need.

This is how it works: Day-to-day communication stays on your existing platform. Perhaps you have an Avaya system that has already implemented “flatten, consolidate, and extend” for resilience at the core and survivability at the branch offices. Any failover scenarios that exist with FC&E work as they always have.

However, imagine a time when something really catastrophic occurs.

While I hate to dwell on the negative, imagine another Hurricane Sandy or, even worse, a Fukushima-scale meltdown. You’ve done everything right, but your rock-solid Avaya system has been washed out to sea.

This is where cloud communications for business continuity comes in.

Prior to the disaster, you identify the key people who require enterprise communications during a total failure. These people are configured as users on your cloud system and install SIP communications software on their PCs and smart devices. They don’t actually use the software at this point. It’s simply there, waiting to be told to do something.

Next, you configure SIP trunks with the numbers you intend to use during a disaster.  These trunks are “wired” and provisioned at the carrier and cloud levels, but are not activated.
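
To make the “provisioned but dormant” idea more concrete, here is a minimal sketch of what such a standby plan might look like if it were captured as data. The field names, numbers, and the notion of keeping the plan in a small script are illustrative assumptions, not any particular carrier’s or cloud provider’s provisioning format.

```python
# dr_plan.py - illustrative sketch of a dormant disaster-recovery plan.
# Field names and values are hypothetical; real cloud/UC providers each
# have their own provisioning formats and portals.

DR_PLAN = {
    "cloud_users": [
        # Key people pre-configured on the cloud system; soft clients are
        # installed on their PCs and smart devices but sit idle until needed.
        {"name": "Jane Ops", "extension": "2001", "devices": ["pc", "mobile"]},
        {"name": "Raj Facilities", "extension": "2002", "devices": ["mobile"]},
    ],
    "sip_trunks": [
        # Trunks are "wired" and provisioned at the carrier and cloud levels,
        # but left inactive during normal operation.
        {
            "trunk_id": "dr-trunk-1",
            "did_numbers": ["+16125550100", "+16125550101"],
            "carrier": "example-carrier",
            "active": False,
        },
    ],
}

if __name__ == "__main__":
    users = len(DR_PLAN["cloud_users"])
    trunks = len(DR_PLAN["sip_trunks"])
    print(f"Standby plan covers {users} users and {trunks} SIP trunks (all dormant).")
```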

When a disaster occurs, the accounts are activated and the users start up their soft clients.  Within a matter of seconds, they can communicate with their coworkers.

Next, your carrier is informed of the disaster and the SIP trunks are activated.  Depending upon the carrier, this might require a few minutes or a few hours.  Once the trunks have been activated, your cloud users are able to call out and the outside world is able to call in.
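
Activation, then, is largely a matter of flipping those dormant resources to active and asking the carrier to bring up the trunks. The sketch below shows how that step might be scripted against a hypothetical cloud provisioning REST API; the base URL, endpoints, and token handling are assumptions for illustration, not a real provider’s interface.

```python
# activate_dr.py - illustrative activation of the standby plan above.
# The API base URL, endpoints, and token handling are hypothetical;
# substitute your provider's actual provisioning interface.
import os

import requests

API_BASE = "https://cloud-uc.example.com/api/v1"  # hypothetical provisioning API
TOKEN = os.environ.get("CLOUD_UC_TOKEN", "")      # assumed bearer token
HEADERS = {"Authorization": f"Bearer {TOKEN}"}


def activate_users(user_ids):
    """Enable the pre-provisioned cloud accounts so soft clients can register."""
    for uid in user_ids:
        resp = requests.post(f"{API_BASE}/users/{uid}/activate",
                             headers=HEADERS, timeout=10)
        resp.raise_for_status()
        print(f"user {uid}: active")


def activate_trunks(trunk_ids):
    """Ask the carrier/cloud to bring up the dormant SIP trunks.

    Depending on the carrier, the trunks may take minutes or hours to come
    up, so this only submits the request and reports the returned status.
    """
    for tid in trunk_ids:
        resp = requests.post(f"{API_BASE}/trunks/{tid}/activate",
                             headers=HEADERS, timeout=10)
        resp.raise_for_status()
        print(f"trunk {tid}: {resp.json().get('status', 'activation requested')}")


if __name__ == "__main__":
    activate_users(["2001", "2002"])
    activate_trunks(["dr-trunk-1"])
```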

Not only can these cloud users make telephone calls, but they also have access to all sorts of unified communications functionality.  They can send instant messages, make video calls, and create and join conference calls.  They even have access to voice mail.

The users and trunks remain active for as long as they are needed and once the disaster is over, the cloud resources return to their dormant state.  It’s that simple.

Be Prepared

Like that deer that came out of nowhere, a disaster can strike when you least expect it, and there is no way to anticipate it.

I am a careful, attentive driver, and I own a car with a high safety rating, yet I was still hit with the unexpected.

I would like to think that my car and my driving ability kept the accident from being worse than it was. The same kind of thinking can be applied to communications.

Plan ahead for the unexpected and the impossible. Being prepared for the worst might keep your business functional during very hard times.

This article originally appeared on SIP Adventures and is reprinted with permission.

Related Articles:

Continuous Learning: Propelling Forward in a Rapidly and Inevitably Changing World

Whether we realize it or not, advanced technologies like artificial intelligence (AI), augmented reality, and the Internet of Things (IoT) have transformed the way we think about the world around us. From how we protect our schools to the way we navigate our streets to how we shop for groceries, such technology now lies at the heart of practically everything we do today.

Just as these technologies have changed the way we live, they have changed the way we work. Today’s rapid pace of innovation has transformed nearly every business task, process, and workflow imaginable—so much so that industry analysts estimate that up to 45% of activities that employees are paid to perform can now be automated.

This digital disruption—or what many are calling the Fourth Industrial Revolution—without question redefines traditional roles and responsibilities. In fact, research shows that in five years, more than one third of skills that are considered important in today’s workforce will have changed. Even more, analysts estimate that 65% of children today will grow up to work in roles that don’t yet exist.

While we do still see employees that specialize in one skill or expertise, we’ve mostly moved away from the days of hiring an employee for just one job. As technology evolves, so too do the skills required to innovate and propel forward. Looking ahead, employees must have a propensity for continuous learning and adopting new skills to be able to recognize and respond to today’s speed of digital change.

Consider how technology has changed the marketing paradigm. As recently as 10 years ago, marketing platforms like Marketo and HubSpot had only just been founded, Facebook was still in its infancy, and the first iPhone had newly hit the market. As technologies like cloud, social, mobile and big data evolved, however, we suddenly began seeing new tools specifically designed to enhance digital media, social media marketing, and mobile marketing. As a result, companies began searching to fill roles for social media coordinators, digital campaign managers and integrated marketing planners—jobs that were unfathomable 15 to 20 years prior.

Fast forward to today and we’re seeing the emergence of new technology for marketing, such as augmented reality, geofencing, and emotion detection. The continual emergence of new technology perpetually creates skills gaps that must be filled by employees who are passionate, motivated, and invested in their own learning. These kinds of team members are committed to developing new skills and leveraging their strengths to outperform.

But not all employees can easily identify their strengths or develop new skills. This is likely why nearly half of employees today feel unengaged at work, with nearly 20% feeling “actively disengaged.” At the same time, companies are struggling to align employee strengths with organizational priorities. Employees may have certain strengths, but employers may find those skills don’t directly increase operational efficiency or performance. This is why nearly 80% of businesses are more worried about a talent shortage today than they were two years ago.

So, what’s the answer? Employees and employers must work together to identify what roles are currently filled, what skills are still needed, and who best exemplifies those skills. For employees, this means taking control of how they grow their careers and committing to continual improvement. For employers, this means displaying an unwavering commitment to employee reinvestment by understanding key areas of interest to effectively fill skills gaps.

At Avaya, for example, we’re leading an employee enablement program under our Marketing 3.0 strategy. The initiative is designed to help strengthen our marketing organization by equipping employees with the right competencies that reflect our culture, strategy, expectations and market dynamics. By doing so, we can ensure we’re recruiting and managing talent in the most strategic way, putting the right people in the right jobs with the abilities to perform at maximum potential every day. By having each marketing function participate in a simple knowledge profile exercise, we can begin objectively determining development opportunities that best meet their needs and the needs of our business.

As technology continuously evolves, it’s crucial that employees have a propensity for continuous learning and that organizations foster an environment for this learning. In the words of former GE CEO Jack Welch, “An organization’s ability to learn, and translate that learning into action rapidly, is the ultimate competitive advantage.”

We live in a world that is rapidly and inevitably changing. Employees should embrace this change to thrive, and must if they wish to propel business forward. As employers, we are responsible for strategically leveraging our resources to align employee strengths with organizational needs to succeed in this environment of constant change.

Next-Generation IT: What Does It Really Look Like?

From mainframes to virtualization to the IoT, we’ve come a long way in a very short amount of time in terms of networking, OS and applications. All this progress has led us to an inflection point of digital business innovation: a critical time in history when, as Gartner puts it best, enterprises must “recognize, prioritize and respond at the speed of digital change.” Despite this, however, many businesses still rely on legacy systems that prevent them from growing and thriving. So, what’s the deal?

I attempted to answer this in a previous blog, where I laid out, as completely as I could, the evolution of interconnectivity leading up to today. What was ultimately concluded in that blog is that we have reached a point where we can finally eliminate dependency on legacy hardware and hierarchical architecture with the use of one single, next-generation software platform. The call for organizations across all industries to migrate from legacy hardware has never been stronger, and the good news is that technology has evolved to a point where they can now effectively do so.

This concept of a “next-generation platform,” however, isn’t as simple as it sounds. Just consider its many variations among industry analysts. McKinsey & Company, for example, refers to this kind of platform as “next-generation infrastructure” (NGI). Gartner, meanwhile, describes it as the “New Digital Platform.” We’re seeing market leaders emphasizing the importance of investing in a next-generation platform, yet many businesses still wonder what the technology actually looks like.

To help make it clearer, Avaya took a comparative look at top analyst definitions and broke them down into five key areas of focus for businesses industry-wide: 

  1. Next-generation IT
  2. The Internet of Things (IoT)
  3. Artificial intelligence (AI)/automation
  4. Open ecosystem
  5. The customer/citizen experience

In a series of upcoming blogs, I’ll be walking through these five pillars of a next-generation platform, outlining what they mean and how they affect businesses across every sector. So, let’s get started with the first of these: next-generation IT.

Simplifying Next-Gen IT

As IT leaders face unrelenting pressure to elevate their infrastructure, next-generation IT has emerged as a way to enable advanced new capabilities and support ever-growing business needs. But what does it consist of? Well, many things. The way we see it, however, next-generation IT is defined by four core elements: secure mobility, any-cloud deployment (more software), omnichannel and big data analytics—all of which are supported by a next-generation platform built on open communications architecture.

Secure mobility: Most digital growth today stems from mobile usage. Just consider that mobile now represents 65% of all digital media time, with the majority of traffic for over 75% of digital content—health information, news, retail, sports—coming from mobile devices. Without question, the ability to deliver a secure mobile customer/citizen experience must be part of every organization’s DNA. This means enabling customers to securely consume mobile services anytime, anywhere and however desired with no physical connectivity limitations. Whether they’re on a corporate campus connected to a dedicated WLAN, at Starbucks connected to a Wi-Fi hotspot, or on the road paired to a Bluetooth device through cellular connectivity, the connection must always be seamless and secure. Businesses must start intelligently combining carrier wireless technology with next-generation Wi-Fi infrastructure to make service consumption more secure and mobile-minded with seamless hand-off between the two technologies.

Any-cloud deployment: Consumers should be able to seamlessly deploy any application or service as part of any cloud deployment model (hybrid, public or private). To enable this, businesses must sufficiently meet today’s requirements for any-to-any communication. As I discussed in my previous blog, the days of nodal configuration and virtualization are a thing of the past; any-to-any communications have won the battle. A next-generation platform built on open communications architecture is integrated, agile, and future-proof enough to effectively and securely support a services-based ecosystem. Of course, the transition toward software services is highly desirable, but remember that not all hardware will disappear—although eliminating it should be considered wherever possible. This services-based design is the underlying force of many of today’s greatest digital developments (smart cars, smart cities). It’s what allows organizations across every sector to deliver the most value possible to end-users.

Omnichannel: All communication and/or collaboration platforms must be omnichannel enabled. This is not to be confused with multi-channel. Whereas the latter represents a siloed, metric-driven approach to service, the former is inherently designed to provide a 360-degree customer view, supporting the foundation of true engagement. An omnichannel approach also supports businesses with the contextual and situational awareness needed to drive anticipatory engagement at the individual account level. This means knowing that a customer has been on your website for the last 15 minutes looking at a specific product of yours, which they inquired about during a live chat session with an agent two weeks ago. This kind of contextual data needs to be brought into the picture to add value and enhance the experience of those you serve, regardless of where the interaction first started.

Big data analytics: It’s imperative that you strategically use the contextual data within your organization to compete based on the CX. A huge part of next-generation IT involves seamlessly leveraging multiple databases and analytics capabilities to transform business outcomes (and ultimately, customers’ lives). This means finally breaking silos to tap into the explosive amount of data—structured and unstructured, historical and real-time—at your disposal. Just as importantly, this means employees being able to openly share, track, and collect data across various teams, processes, and customer touch points. This level of data visibility means a hotel being able to see that a guest’s flight got delayed, enabling the on-duty manager to let that customer know that his or her reservation will be held. It means a bank being able to push out money management tips to a customer after seeing that the individual’s last five interactions were related to account spending.
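
As a rough illustration of what acting on that kind of cross-silo data could look like, the sketch below wires a flight-delay event to a reservation hold. The event shape, handler, and notification stub are hypothetical simplifications, not a description of any particular product’s data model.

```python
# context_events.py - illustrative sketch of acting on cross-silo context.
# The event shape, handler, and notify() stub are hypothetical simplifications.
from dataclasses import dataclass


@dataclass
class FlightDelayEvent:
    guest_id: str
    flight: str
    delay_minutes: int


def notify(guest_id: str, message: str) -> None:
    # Stand-in for an SMS, email, or push-notification integration.
    print(f"[to {guest_id}] {message}")


def on_flight_delay(event: FlightDelayEvent, reservations: dict) -> None:
    """Hold the guest's reservation using airline data the hotel's own
    systems would otherwise never see, then tell the guest about it."""
    reservation = reservations.get(event.guest_id)
    if reservation and event.delay_minutes >= 60:
        reservation["hold"] = "late arrival"
        notify(event.guest_id,
               f"We see flight {event.flight} is delayed; your room is being held.")


if __name__ == "__main__":
    reservations = {"guest-42": {"room": "1204", "hold": None}}
    on_flight_delay(FlightDelayEvent("guest-42", "DL1580", 95), reservations)
    print(reservations)
```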

These four components are critical to next-generation IT as part of a next-generation digital platform. Organizations must start looking at each of these components if they wish to compete based on the CX and respond at the speed of digital change. Stay tuned: next, we’ll be talking about the ever-growing Internet of Things!

How to (Finally) Break the Longstanding Hold of Legacy Technology

Without question, we’ve seen more technological innovation in the last 30 years than in the century before. We now live in a reality of seemingly limitless possibilities and outcomes. Today, virtually any object can be considered part of an advanced, interconnected ecosystem. Companies across every sector are competing to reimagine customer engagement. The user experience is fundamentally changing as people, processes and services become more dynamically connected. Today’s smart, digital era represents unmatched opportunity for forward-thinking business leaders everywhere.

At the same time, however, it poses some challenges. Specifically, this rapid pace of innovation means businesses must find a way to quickly and efficiently modernize to competitively differentiate. In a time when digital disruptors are building custom IT environments on the fly, companies can no longer let legacy architecture dampen innovation and agility.

Businesses know this all too well, with 90% of IT decision makers believing that legacy systems prevent them from harnessing the digital technologies they need to grow and thrive. This is especially true in industries like government and finance, where there’s still a heavy dependency on legacy technology. For example, 71% of federal IT decision makers still use old operating systems to run important applications. Meanwhile, 30% of senior investment managers say they’re concerned about the ability of their current legacy systems to meet future regulatory requirements. The list goes on.

It’s clear that something needs to be done here, and fast. So, how exactly did we get to this point of digital disruption, and what can be done about legacy systems today? Let’s take a walk through recent history, and then discuss how companies can begin moving towards digital, next-generation IT.

Data Centralization to Decentralization

Let’s start where applications first began being consumed. About 30 to 40 years ago, all application intelligence was centralized (I’m sure some of you remember the good old mainframe days of using dumb terminals or emulators to access applications and store data centrally). There were some notable benefits to centralizing data in this fashion. There weren’t many issues with storage distribution, for instance, and disaster recovery procedures were clearly documented. Security challenges were also practically nonexistent because there wasn’t any local storage on the terminal (hence, dumb).

Soon, however, we saw the rise of the personal computer, which completely changed this model. Computing and storage could now be distributed, allowing local applications to run without any centralized dependency. This was a game-changer that sparked a desktop war between key market players like Microsoft (Windows), IBM (OS/2), and Apple (MacOS).

This transition to decentralization, however, wasn’t without its challenges. Employees may have gained mobility, but IT began facing new challenges in security and distributed storage. Companies were left wondering how to best control their data storage, especially when confidential information could easily be stored on a floppy disk, a local hard drive and, later, a USB drive. This remains a challenge to this day—no one wants to give up their mobility, so companies must find a way to instead regain control.

One thing to note: at this point, COTS (commercial off-the-shelf) servers could now be used. These systems were far less proprietary than previous host systems like mainframes, VAX, etc. However, they were still hardware-dependent, as each platform was usually tailored to the applications it had to run. As a result, a good amount of compute, memory and storage resources were not being fully utilized. In fact, some were running at as little as 10-20% of capacity. While COTS servers had their benefits, their underutilization called for a better way to maximize the use of all resources.

The Rise of Virtualization

The only viable solution to these problems was to eliminate hardware in favor of ONE single software application. But how? The market experienced profound change as companies strove to answer this question, eventually leading to the emergence of virtualization.

During this time, market leaders like VMware began transforming the industry by allowing multiple virtualized OSes (virtual machines) to run simultaneously on the same hardware. In this way, applications ran as if they had their own dedicated compute, memory and storage. However, it was all being shared. Simply put, the hardware server had become virtualized. Brilliant!

This allowed companies to create virtual representations of resources such as compute, memory and storage devices. Companies could now run multiple applications over the same physical hardware, in a way that appeared to the applications as though they were running over their own dedicated hardware. More importantly, companies could now fully leverage every single resource at their disposal. Nothing would be left dormant or unused in this virtualized model, unlike what we saw in the past with a dedicated appliance/server per application.

At this point, it was a no-brainer to move into the virtualized application world. However, the ugly truth remained: we were still using a legacy networking framework. Many continue to refer to this as client-server, but the bottom line is that it was a hierarchical model that required each node and link to be configured to carry or simulate end-to-end virtualization. Even though the application environment was virtualized, the infrastructure on which it ran was not built with that in mind. It didn’t matter if you were using VLANs, VRFs or even MPLS—it was a complex way of providing end-to-end virtualized services.

Who would finally be able to solve this issue? It seemed the Institute of Electrical and Electronics Engineers (IEEE) and Internet Engineering Task Force (IETF) were on the right track with the standardization of an Ethernet protocol that allows end-to-end services virtualization, which finally took place in May 2012. This is known as SPB, or Shortest Path Bridging (IEEE 802.1aq and IETF RFC 6329 for those interested). And there you have it: servers, applications and networks are now finally virtualized! Are we done? Well, not quite … even desktops are being virtualized, through VDI (Virtual Desktop Infrastructure), to re-centralize control.

Overall, virtualization became the de facto model that allowed businesses to run applications on what we know as the Cloud. With private and public models, customers could now choose what assets they wanted to own (that is, manage on premises) or have hosted through the public cloud. Soon, however, the challenge became how to run apps in these clouds. Companies quickly discovered the need to store some applications (like regulatory and compliance data) in an onsite private cloud. Meanwhile, other data was best suited for the public cloud. This is how the hybrid cloud deployment model was born.

Cloud Elasticity

Hybrid cloud allowed companies to operate in an environment that strategically utilized the best of both worlds—both on-premises private cloud and third-party public cloud services—to meet their core objectives. In this new world of cloud orchestration, we saw the rise of digital giants like Amazon, Google and Facebook. With a high level of cloud elasticity, providers could now spin up a series of virtual applications or services in less than an hour and run them in the public cloud. This threw open the doors of opportunity for companies everywhere. These providers allowed organizations to create new instances on the fly and shut them down just as quickly. This elasticity is used, for example, to soft-launch new products or test-drive business in new marketplaces.
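
To show how little ceremony that “create it, use it, tear it down” pattern involves today, here is a minimal sketch using AWS’s boto3 SDK purely as an example (other public clouds offer equivalent SDKs). It assumes AWS credentials are already configured and that the AMI ID placeholder is replaced with a real image; it is an illustration of the elasticity described above, not a production provisioning script.

```python
# elastic_instance.py - minimal sketch of the "spin it up, shut it down" pattern.
# Uses AWS's boto3 SDK purely as an example; other public clouds have equivalents.
# Assumes AWS credentials are configured and AMI_ID is replaced with a real image.
import boto3

AMI_ID = "ami-xxxxxxxx"  # placeholder image ID
ec2 = boto3.client("ec2", region_name="us-east-1")


def launch_test_instance() -> str:
    """Create a throwaway instance, e.g. to soft-launch or test a service."""
    result = ec2.run_instances(
        ImageId=AMI_ID,
        InstanceType="t3.micro",
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "purpose", "Value": "soft-launch-test"}],
        }],
    )
    instance_id = result["Instances"][0]["InstanceId"]
    print(f"launched {instance_id}")
    return instance_id


def tear_down(instance_id: str) -> None:
    """Terminate the instance as soon as the experiment is over."""
    ec2.terminate_instances(InstanceIds=[instance_id])
    print(f"terminating {instance_id}")


if __name__ == "__main__":
    iid = launch_test_instance()
    tear_down(iid)
```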

But let’s not forget the issue that remains to this day: we have yet to completely move away from all aging hardware. In today’s world of any-to-any communication, driven by technologies like the IoT, artificial intelligence, and machine learning, legacy hardware and hierarchical networking architecture are not just an inconvenience. They can break your business if you don’t have a strategy to reduce that dependency.

Finally Breaking Free of Hardware

The bottom line is that any-to-any communications have won the battle (unlike 15 years ago, when IT largely resisted and essentially shut down the peer-to-peer model). As a result, what many refer to as “meshed communication architecture” emerged as the newest and strongest-yet approach to network design.

This kind of architecture is integrated, agile and future-proof enough to effectively and securely support a services-based ecosystem. The days of nodal configuration and virtualization are a thing of the past. It’s vital that companies move to this services-based architecture to be able to support the future of the customer experience. Consider how it’s essential for supporting smart cars that can autonomously park and change lanes, while being redirected to alternate routes because of traffic congestion. It’s critical for supporting smart home solutions that enable homeowners to remotely manage utility usage. It’s crucial for delivering the most value possible to those who matter most: end-users.

For decades, we’ve been trying to eliminate a primal dependency on hardware. To finally break the silos associated with hardware, companies must begin setting themselves up to support any-to-any communication. In this environment, all services can virtually run anywhere across multiple sources of hardware that can be geographically dispersed.

Now that we know what can be done about legacy systems (transition to an open, software-enabled, meshed architecture), let’s discuss how companies can successfully integrate digital into their existing environment to transform business. Stay tuned for more.