How to (Finally) Break the Longstanding Hold of Legacy Technology

Without question, we’ve seen more technological innovation in the last 30 years than in the entire century before. We now live in a reality of seemingly limitless possibilities and outcomes. Today, virtually any object can be considered part of an advanced, interconnected ecosystem. Companies across every sector are competing to reimagine customer engagement. The user experience is fundamentally changing as people, processes and services become more dynamically connected. Today’s smart, digital era represents unmatched opportunity for forward-thinking business leaders everywhere.

At the same time, however, it poses some challenges. Specifically, this rapid pace of innovation means businesses must find a way to quickly and efficiently modernize in order to differentiate competitively. At a time when digital disruptors are building custom IT environments on the fly, companies can no longer let legacy architecture dampen innovation and agility.

Businesses know this all too well: 90% of IT decision makers believe that legacy systems prevent them from harnessing the digital technologies they need to grow and thrive. This is especially true in industries like government and finance, where there’s still a heavy dependency on legacy technology. For example, 71% of federal IT decision makers still use old operating systems to run important applications. Meanwhile, 30% of senior investment managers say they’re concerned about the ability of their current legacy systems to meet future regulatory requirements. The list goes on.

It’s clear that something needs to be done here, and fast. So, how exactly did we get to this point of digital disruption, and what can be done about legacy systems today? Let’s take a walk through recent history, and then discuss how companies can begin moving towards digital, next-generation IT.

From Data Centralization to Decentralization

Let’s start where application consumption began. About 30 to 40 years ago, all application intelligence was centralized (I’m sure some of you remember the good old mainframe days of using dumb terminals or emulators to access applications and store data centrally). There were some notable benefits to centralizing data in this fashion. There weren’t many issues with storage distribution, for instance, and disaster recovery procedures were clearly documented. Security challenges were also practically nonexistent because there wasn’t any local storage on the terminal (hence, dumb).

Soon, however, we saw the rise of the personal computer, which completely changed this model. Computing and storage could now be distributed, allowing local applications to run without any centralized dependency. This was a game-changer that sparked a desktop war between key market players like Microsoft (Windows), IBM (OS/2), and Apple (MacOS).

This transition to decentralization, however, wasn’t without its challenges. Employees may have gained mobility, but IT began facing new challenges in security and distributed storage. Companies were left wondering how best to control their data, especially when confidential information could easily be copied to a floppy disk, a local hard drive and, later, a USB drive. This remains a challenge to this day—no one wants to give up their mobility, so companies must instead find a way to regain control.

One thing to note: at this point, commercial off-the-shelf (COTS) servers could now be used. These systems were far less proprietary than previous host systems like mainframes, VAX, etc. However, they were still hardware-dependent, as each platform was usually tailored to the applications it had to run. As a result, a good amount of compute, memory and storage resources went underutilized. In fact, some servers were running at as little as 10-20% of capacity. While there were benefits to COTS servers, the waste called for a better way to maximize the use of every resource.
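To put that waste in perspective, here’s a quick back-of-the-envelope calculation in Python. The utilization figures are illustrative, not drawn from any particular deployment:

```python
import math

# Illustrative consolidation math: why 10-20% utilization was untenable.
servers = 10               # dedicated COTS boxes, one per application
avg_utilization = 0.15     # each box roughly 15% busy
target_utilization = 0.75  # load a consolidated host can comfortably sustain

total_load = servers * avg_utilization  # 1.5 "servers" worth of real work
hosts_needed = math.ceil(total_load / target_utilization)

print(f"{servers} dedicated servers carry only {total_load:.1f} servers' worth of load;")
print(f"consolidated, that fits on just {hosts_needed} shared hosts.")
```

That gap between ten half-idle boxes and two busy ones is exactly the opening virtualization stepped into.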

The Rise of Virtualization

The only viable solution to these problems was to abstract the hardware away behind a single layer of software. But how? The market experienced profound change as companies strove to answer this question, eventually leading to the emergence of virtualization.

During this time, market leaders like VMware began transforming the industry by allowing multiple virtualized operating systems (virtual machines) to run simultaneously on the same hardware. In this way, applications ran as if they had their own dedicated compute, memory and storage, when in fact it was all being shared. Simply put, the hardware server had become virtualized. Brilliant!

This allowed companies to create virtual representations of resources such as compute, memory and storage devices. Companies could now run multiple applications over the same physical hardware, in a way that appeared to the applications as though they were running over their own dedicated hardware. More importantly, companies could now fully leverage every single resource at their disposal. Nothing would be left dormant or unused in this virtualized model, unlike what we saw in the past with a dedicated appliance/server per application.
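As a concrete sketch of carving a virtual machine out of shared hardware, here’s what it looks like with the open-source libvirt toolkit and its Python bindings. The domain definition is deliberately pared down and the VM name is hypothetical; a real guest would also declare disk and network devices:

```python
import libvirt  # Python bindings for the libvirt virtualization API

# Minimal, illustrative domain definition: 2 vCPUs and 2 GiB of RAM
# allocated from the shared physical host. A production guest would
# also need disks, network interfaces, and so on.
DOMAIN_XML = """
<domain type='kvm'>
  <name>app-server-01</name>
  <memory unit='GiB'>2</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
</domain>
"""

conn = libvirt.open("qemu:///system")  # connect to the local hypervisor
dom = conn.defineXML(DOMAIN_XML)       # register the VM with the hypervisor
dom.create()                           # boot the guest
print(f"VM '{dom.name()}' is now sharing the host with its neighbors")
conn.close()
```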

At this point, it was a no brainer to move into the virtualized application world. However, the ugly truth remained: we were still using a legacy networking framework. Many continue to refer to this as client-server, but the bottom line is that it was a hierarchical model that required each node and link to be configured to carry or simulate end-to-end virtualization. Even though the application environment was virtualized, the infrastructure on which it ran was not built with that in mind. It didn’t matter if you were using VLANs, VRFs or even MPLS—it was a complex way of providing end-to-end virtualized services.

Who would finally solve this issue? The Institute of Electrical and Electronics Engineers (IEEE) and Internet Engineering Task Force (IETF) were on the right track with the standardization of an Ethernet protocol that allows end-to-end service virtualization, which finally took place in May 2012. It’s known as SPB, or Shortest Path Bridging (IEEE 802.1aq and IETF RFC 6329, for those interested). And there you have it: servers, applications and networks are now finally virtualized! Are we done? Well, not quite … even desktops are being virtualized through VDI (Virtual Desktop Infrastructure) to re-centralize control.

Overall, virtualization became the de facto model that allowed businesses to run applications on what we know as the Cloud. With private and public models, customers could now choose what assets they wanted to own (that is, manage on premises) or have hosted through the public cloud. Soon, however, the challenge became how to run apps in these clouds. Companies quickly discovered the need to store some applications (like regulatory and compliance data) in an onsite private cloud. Meanwhile, other data was best suited for the public cloud. This is how the hybrid cloud deployment model was born.

Cloud Elasticity

Hybrid cloud allowed companies to operate in an environment that strategically utilized the best of both worlds—on-premises private cloud and third-party public cloud services—to meet their core objectives. In this new world of cloud orchestration, we saw the rise of digital giants like Amazon, Google and Facebook. With a high level of cloud elasticity, providers could now spin up a series of virtual applications or services in the public cloud in less than an hour. This flung open the doors of opportunity for companies everywhere. Organizations could create new instances on the fly and shut them down just as quickly, whether to soft-launch a new product or test-drive business in a new marketplace.
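To see how little ceremony that elasticity involves, here’s a minimal sketch using AWS’s boto3 SDK; the AMI ID is a placeholder, and any public cloud’s SDK follows the same launch-and-terminate pattern:

```python
import boto3  # AWS SDK for Python; other public-cloud SDKs work similarly

ec2 = boto3.client("ec2", region_name="us-east-1")

# Spin up a short-lived instance, e.g. to soft-launch a product or
# test-drive a new market. The ImageId below is a placeholder.
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = resp["Instances"][0]["InstanceId"]
print(f"Launched {instance_id}")

# ...run the experiment...

# Shut it down just as quickly when the test is over.
ec2.terminate_instances(InstanceIds=[instance_id])
```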

But let’s not forget the issue that remains to this day: we have yet to completely move away from all aging hardware. In today’s world of any-to-any communication, driven by technologies like the IoT, artificial intelligence, and machine learning, legacy hardware and hierarchical networking architecture are not just an inconvenience. They can break your business if you don’t have a strategy to reduce that dependency.

Finally Breaking Free of Hardware

The bottom line is that any-to-any communications have won the battle (unlike 15 years ago, when IT largely resisted and essentially shut down the peer-to-peer model). As a result, what many refer to as “meshed communication architecture” emerged as the newest and strongest-yet approach to network design.

This kind of architecture is integrated, agile and future-proof enough to effectively and securely support a services-based ecosystem. Node-by-node configuration and piecemeal virtualization are things of the past. It’s vital that companies move to this services-based architecture to support the future of the customer experience. Consider how essential it is for supporting smart cars that can autonomously park and change lanes while being redirected to alternate routes because of traffic congestion. It’s critical for supporting smart home solutions that enable homeowners to remotely manage utility usage. It’s crucial for delivering the most value possible to those who matter most: end users.

For decades, we’ve been trying to eliminate a fundamental dependency on hardware. To finally break the silos associated with hardware, companies must begin setting themselves up to support any-to-any communication. In this environment, all services can run virtually anywhere, across multiple sources of hardware that can be geographically dispersed.

Now that we know what can be done about legacy systems (transition to an open, software-enabled, meshed architecture), let’s discuss how companies can successfully integrate digital into their existing environment to transform business. Stay tuned for more.

Related Articles:

Connected Health: The Digital Transformation of Care Innovation

All around the world, across the spectrum of disease, IT is changing how we approach chronic conditions and connected health. Text messages remind people living with HIV to take their medication and keep their medical appointments. Smartphone apps diagnose post-traumatic stress disorder by analyzing a user’s voice. Online forums enable breast cancer patients and survivors to trade information related to every stage of their care.

Collectively known as “connected health,” these recent, IT-driven innovations represent the intersection of digital technology and care. They’re transforming not only the way people manage their own health, but also the way they interact with their healthcare providers.

Unintended, but welcome, consequences

By and large, connected health is an adaptation of technologies that were originally developed for other purposes. Mobile technology started out as a voice communication tool. Instant messaging was an outgrowth of online chat rooms. Social media became a means for making new friends.

Now these technologies have evolved and converged in a way that is overcoming formerly intractable barriers to care. By minding the agenda of day-to-day care, for instance, they give people the opportunity to stay adherent to their treatments even where clinical visits are impractical due to cost, distance or availability. And by helping patients preserve their privacy, make sense of their conditions, and learn from others with similar experiences, health IT can lift the stifling veil of stigma from disease.

The implications don’t stop with the individual. Connected health also helps people manage their own disease state so they don’t spread it to others. Across whole populations, it can allow interventions aimed at preventing chronic diseases, such as behavioral modifications that reduce the incidence of obesity.

Changing care innovation paradigms

In all these respects, connectivity is bringing to medicine a level of accountability and democratization that seemed unimaginable not so long ago. But it’s also dialing up the urgency of some unanswered questions. Among them:

  • What information is appropriate to gather? Not all information has value in a healthcare setting.
  • Will information remain proprietary? It’s unclear to what extent stakeholders are willing to advance the interests of the community ahead of the interests of a company.
  • What would a sharing paradigm look like? If companies were to share information, they would need a seamless, cohesive way to do it.
  • How will privacy and security be preserved? Artificial intelligence and machine learning are critical pieces of this equation.
  • How will healthcare use technologies to create new models of care? Today’s applications are largely geared toward improving quality and outcomes of existing care models.

There’s no one-size-fits-all solution to these questions. Neither is care innovation strictly a technology issue. Technologists must collaborate with clinicians, patients, and patient advocates to take care coordination and operational efficiency to the next level in helping people cope with long-term diseases. A new, technology-powered paradigm—one that transcends existing constraints of time and resources—can bring a welcome transformation in the ongoing management of care coordination and the patient experience.

Avaya Equinox, Now with Team Collaboration, Just Got More “Go-To”


I recently read that the Apple App Store now contains about 2.2 million apps. It’s an amazing number and a testament to the creativity of developers and the variety of our human interests and needs. But it made me wonder: how many apps can we really use on a regular basis, and for what? Are they for fun? Are they informative? Do they increase team collaboration? If your smartphone is like mine, you’ve got a number of go-to apps that you use regularly, let’s say weekly, and probably a few you use daily or almost constantly. Then there are the Tier 2 apps hiding in your folders that seldom see the light of day. It’s fun to delve into those folders every few months and rediscover the apps that looked so interesting at the time but now languish for months on end.

What’s fun for personal apps, however, can often become a nightmare in the work world. We all have someone in the office who needs to be first with the latest hot app, to provide their take on what’s cool and what’s not, and to make everyone else feel a little short of the mark for not using it first. Of course, most of these apps get frenzied activity for about 3 ½ days and then slip into oblivion. The issue for most of us is that we simply have too much on the go to be constantly changing the way we work and coercing others to adopt our favorite app of the week.

What my work day really needs is a true go-to app. One that makes me more productive, more reachable, more on track and that lets me get to my tasks and meetings with a single touch. If you’ve read my previous blogs, you know where I’m going with this: my go-to app is Avaya Equinox®. With its “mobile-first” Top of Mind screen, it provides me with at-a-glance visibility into meetings, instant messages and my call history, giving me a single place to keep up to date and productive regardless of where my day may take me.

I’m happy to say that my go-to app just got more, well, “go-to”. The Avaya UC experience that I rely on every day is now being extended with the integration of a cloud-based team collaboration capability.  It gives me the full benefits of a team work environment that integrates voice, video, persistent team chat and messaging, along with file and screen sharing, all from within the Avaya Equinox experience.

Let me give you an example of these new Equinox team collaboration capabilities in action. I’m currently working with an external vendor on a major project. Our work will carry on for several quarters with new materials being created that need review, discussion, and likely several rounds of back and forth. To get the project kicked off and a vendor selected, we needed the full gamut of collaboration capabilities from simple voice calls to several all-day video conferences with participants joining from around the world – something easily managed with Avaya Equinox. 

The next step was to establish a core team and shift into a regular cadence of interaction. Adding participants to the team collaboration space from both inside and outside Avaya was a snap, and we were instantly able to communicate with one another. I use one-to-one instant messaging for small items or questions, and team chat when I want to involve the entire team in broader issues. Tasks get assigned within Avaya Equinox to keep our review cycles on track, and we use the file sharing capability to avoid clogging up our email. If I’m offline at some point, due to travel or other activity, a quick glance at Avaya Equinox gets me back up to speed with the team’s progress.

On a weekly basis, we usually need some face time, and Avaya Equinox provides complete meeting capabilities including audio / video conferencing with screen sharing so we all gain the advantages of personal interaction. No matter where we are or what we are doing, we can all collaborate on content in real-time – it’s more productive and prevents misunderstandings across a widely distributed team. 

In many ways our team collaboration space has become a virtual “war room.” Information is clearly visible and easily shared, I can see who’s available at any time, and formal and informal discussions can be initiated with ease.

There’s no shortage of apps available to anyone with a mobile device and the time to spend browsing around an app store. The real challenge is finding those few go-to apps that you’ll use every day. If you aren’t using Avaya Equinox yet, I’d encourage you to give it a try. I think it will make your short list of “go-to” apps and in a month or two, you might wonder how you got through your day without it!

Building SMS Text Bots is a Breeze

As a nerdy guy, I love movies about other nerdy guys. Give me movies like “A Beautiful Mind,” “The Theory of Everything,” or “Einstein and Eddington” (two nerdy scientists for the price of one), and I am in geek heaven. Recently, I was thrilled by “The Imitation Game”—the story of Alan Turing and his quest to break Germany’s WWII secret code. While I would never dare to compare myself to Mr. Turing, I like to think that we would have a few things in common. One area would be our shared interest in natural language processing and intelligent behavior.

Way back in 1950, Turing crystallized his research into these studies in what has become known as The Turing Test. Simply put, The Turing Test is a test of a machine’s ability to impersonate a human being. For a machine to pass The Turing Test, it must be able to participate in a conversation with a human being to the point where the human doesn’t realize that he or she is interacting with a machine. I can only imagine what Turing would think of today’s technology such as Siri, Alexa, and Google Home. Better yet, imagine Alan conversing with the robot, Sophia. Would he be excited or frightened? Personally, I am a little of both.

Real or Not

If you have been reading my articles on No Jitter and here on the Avaya blog, you know how enamored I am of the Breeze and Zang workflow designers. Although I have spent the bulk of my professional life writing software in programming languages such as C++ and Java, I have fallen in love with how quickly I can use the Breeze/Zang tools to go from idea, to prototype, to a production-quality application. I like to say that if you can draw it on a whiteboard, you can “code” it with Breeze.

So, the day I decided to build a text bot, I knew exactly how I was going to do it. Starting with a list of things I wanted my text bot to do, I was soon drawing out message flows and decision points (if this, do that). Once I was happy I had captured all the salient points, I turned to my computer and began typing. Early on, I realized that there was no way on earth I could capture all the different text messages my application would need to process. For instance, how many different ways can you ask for the location of a store? “Where are you located?” “What is your address?” “What city are you in?” “How can I find you?” The variations are nearly endless.

To solve this problem, I turned to natural language processing (NLP) and artificial intelligence (AI). That, of course, led me to the 500-pound gorilla in the room—IBM Watson. With Watson, I can build “Conversations” that allow me to create intents, entities, and dialogs. Intents are used to classify a request. You can think of entities as modifiers to those intents. Dialogs are the words you want to “speak” after determining the intent.

For example, consider the phrase “Are you open on Sunday?” Here, the intent could be classified as “hours.” The entity is “Sunday.” A proper dialog could be, “We are open on Sunday from 12:00 to 5:00.” To keep things simple, I created three intents for my bot: Directions, Holidays, Hours. Those intents resulted in three dialogs. I left off entities for now.
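To make the intent/entity/dialog split concrete, here’s a deliberately simple, hand-rolled Python stand-in for what Watson does with trained NLP models. The keyword lists and dialog strings are mine, purely for illustration:

```python
# A naive keyword-based classifier standing in for Watson's Conversation
# service: map a message to an intent, pull out an entity, and pick the
# matching dialog. All keywords and responses here are illustrative.

INTENT_KEYWORDS = {
    "hours":      ["open", "close", "hours"],
    "directions": ["where", "address", "located", "find you", "city"],
    "holidays":   ["holiday", "christmas", "thanksgiving"],
}

DAYS = ["monday", "tuesday", "wednesday", "thursday",
        "friday", "saturday", "sunday"]

DIALOGS = {
    "hours":      "We are open {entity} from 12:00 to 5:00.",
    "directions": "You can find us at 123 Main Street.",
    "holidays":   "We are closed on all major holidays.",
}

def classify(message):
    """Return (intent, entity) for an incoming text message."""
    text = message.lower()
    intent = next((name for name, words in INTENT_KEYWORDS.items()
                   if any(word in text for word in words)), None)
    entity = next(("on " + day.title() for day in DAYS if day in text), None)
    return intent, entity

def reply(message):
    intent, entity = classify(message)
    if intent is None:
        return "Sorry, I didn't understand that."
    return DIALOGS[intent].format(entity=entity or "every day")

print(reply("Are you open on Sunday?"))
# -> We are open on Sunday from 12:00 to 5:00.
```

Watson replaces the naive keyword matching with trained classifiers, but the three moving parts, intent, entity, and dialog, play the same roles.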

[Image: Building SMS Text Bots is a Breeze, Img1]


My next decision point had to do with maintaining a conversation over many text messages. For that I chose Avaya’s Context Store, which allows me to temporarily store information about a text conversation. This information can then be accessed over the life of the chat.
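Here’s a toy, in-memory stand-in for that idea (not the Context Store API itself): each sender’s conversation state lives under their phone number, with a time-to-live so stale chats eventually expire:

```python
import time

# Minimal in-memory sketch of a context store. The TTL value and the
# phone numbers below are illustrative.

TTL_SECONDS = 15 * 60  # drop idle conversations after 15 minutes
_store = {}

def get_context(phone_number):
    """Fetch (or create) the conversation context for one sender."""
    entry = _store.get(phone_number)
    if entry and time.time() - entry["touched"] < TTL_SECONDS:
        entry["touched"] = time.time()  # refresh the TTL on every access
        return entry["context"]
    _store[phone_number] = {"context": {}, "touched": time.time()}
    return _store[phone_number]["context"]

# Two people texting at once never see each other's state:
ctx_a = get_context("+15550001111")
ctx_a["last_intent"] = "hours"
ctx_b = get_context("+15550002222")
assert "last_intent" not in ctx_b
```

Keying the context by sender is also what makes it safe for many people to text the bot at the same time.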

[Image: Building SMS Text Bots is a Breeze, Img2]

Now that I had an engine to process incoming text messages (Watson) and a method of maintaining a chat’s context (Context Store), it was time to launch the Avaya Breeze Engagement Designer. I will admit that I still had a few logic problems to work through, but I would not be stretching the truth if I said that I had a rough draft of my text bot up and running in less than an hour. Working through those remaining issues consumed another couple of hours, but in a fraction of the time it would take me to write my application in Java, my bot was accepting text messages, building contexts, and texting back replies.

[Image: Building SMS Text Bots is a Breeze, Img3]

I should also say that my bot is fully multi-user. It didn’t matter if one or one hundred people were all texting in at the same time. My bot kept track of each individual conversation and no one received a text meant for someone else.

 
[Image: Building SMS Text Bots is a Breeze, Img4]

While my example bot is fairly simple in terms of what it can handle, the framework is extendable to just about any SMS conversation you might want to support. Future plans have me using Context Store to save the entire conversation between human and machine. Not only could this be useful for determining how accurately my bot responds to incoming requests, but it could also help better serve customers. A recorded chat session could be presented to a human agent in cases where the user moved from text to a phone call.

Next, I would love to incorporate some of the other features that Watson provides. For example, by detecting the tone/sentiment of the conversation, my bot could sense if the human was becoming frustrated with the answers he or she was receiving. This would allow the bot to either escalate the chat to a live agent, or have an agent follow up afterwards to help smooth over what might have been an unpleasant experience – or both.
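As a sketch of that escalation logic: the scores below would come from a tone/sentiment service such as Watson’s, and the threshold and helper function are hypothetical:

```python
FRUSTRATION_THRESHOLD = 0.7  # illustrative cutoff, tuned per deployment

def should_escalate(frustration_scores):
    """Escalate to a live agent when recent messages trend frustrated.

    Each score is assumed to be a 0.0-1.0 frustration rating returned
    by a tone/sentiment service for one user message.
    """
    recent = frustration_scores[-3:]  # look at the last few messages only
    return bool(recent) and sum(recent) / len(recent) >= FRUSTRATION_THRESHOLD

print(should_escalate([0.2, 0.8, 0.9, 0.95]))  # -> True
```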

Mischief Managed

Human to human conversations aren’t going away anytime soon, but more and more machines are going to step in to handle the easy to moderately hard stuff. The point is not to trick people into thinking they are talking to a human being. The point is that machines can handle tedious jobs without coming across as machines.

While I highly doubt that anyone will ever make a movie about Andrew and his fabulous text bots, it isn’t all about fame and glory, right? This is exciting technology, and the fact that I can use Breeze to create sophisticated bots by easily combining powerful but disparate technologies is red-carpet stuff.