Dispelling "F.U.D." in the Brave New World of Software-Defined Networking

The world is changing… at least my world of networking. It’s evolving into a long overdue state. As a result, my skills have to evolve. So, I might as well extend my comfort zone with this whole blogging thing while I’m at it. Who said an old dog can’t learn new tricks?

To kick it off, I want to address this change I mentioned: SDN. No, I’m not going to subject you to yet another definition of software-defined networking, or an overview of the many different vendor options. I get to pontificate about that enough doing my day job. Instead, I want to talk about the real issue here: FUD.

Fear, Uncertainty and Doubt: the weapons of choice for any good smear campaign. FUD is being wielded in the SDN discussion with brutal cunning. I’m just waiting for an election year-type ad spot with homeless engineers and an ominous voiceover telling us how the proponents of SDN are destroying our great nation.

Clearly, the whole concept is an evil plot perpetrated by Hydra to undermine the networking staff in IT and allow world domination. Captain America 3 is going to be a box office smash with that storyline.

In all seriousness, most people are uncomfortable with change… even more so when the outcome of that change is uncertain. Unfortunately for SDN, we’re still trying to define that outcome. We know that we want it to be a state where businesses are more agile and productive, but we don’t know quite how we’re going to accomplish that on a global scale across a multitude of technologies and vendors. That’s where the FUD comes into play.

Many in our industry are slinging mud and/or stirring up uncertainty in attempts at self-preservation. Some are doing it to postpone the inevitable, while others are doing it out of their own lack of understanding. Regardless, the results are the same: those with the most to gain in this transition are at odds.


Businesses are excited by the promises of SDN, and they’re actively looking to craft strategies that will help them gain an edge with their customers and employees through technology. Network engineers, on the other hand, are the targets of the FUD. They’re constantly being told that SDN is going to drive them to extinction in favor of programmers. Of course, that couldn’t be further from the truth.

Even if network configuration and provisioning were to suddenly shift tomorrow from CLI to Python or YANG or whatever, would it really make a difference? Good network engineers have learned multiple CLIs over the years… even if they’ve only dealt with one vendor.

Some have even succumbed to that whole GUI thing. So, learning a new set of commands and UI is trivial for most engineers. Not to mention, no change is going to occur overnight. It’s going to be a gradual evolution that may have more than one appearance in the end.

There’s plenty of time for good network engineers looking to evolve themselves to pick up the necessary skills along the way.
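For the curious, here's a rough idea of what that shift looks like in practice: instead of typing commands box by box, you describe the network as data and let a script render the configuration. This is a minimal, hypothetical Python sketch; the VLAN model and the command syntax are illustrative assumptions, not any vendor's actual API.

```python
# Minimal sketch of "configuration as data": a declarative intent model
# rendered into CLI-style commands. The VLAN fields and the command syntax
# below are illustrative assumptions, not any specific vendor's language.

def render_vlan_config(vlans):
    """Turn a list of {'id', 'name'} dicts into CLI-style config lines."""
    lines = []
    for vlan in vlans:
        lines.append(f"vlan {vlan['id']}")
        lines.append(f" name {vlan['name']}")
    return lines

# The "intent": what the network should look like, expressed as data.
intent = [
    {"id": 10, "name": "users"},
    {"id": 20, "name": "voice"},
]

for line in render_vlan_config(intent):
    print(line)
```

The point isn't the syntax; it's that the same engineering judgment (which VLANs, why, where) still drives the process. Only the typing gets automated.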

Don’t be fooled by the new skills piece, though. The value of network engineers isn’t their ability to configure boxes; it’s their ability to design elegant systems, understand complex protocol relationships, and rapidly distill technology issues to their root cause for resolution.

So, what if we even go as far as automating configuration and provisioning? That just means that network engineers can spend more time on strategic initiatives that will actually challenge them and leverage their true value. You know, those things that have occupied slots 3 through 5 on the to-do list for the last 18 months, but have been repeatedly usurped by the latest fire drill.

Of course, there’s also that crazy concept of integrating the applications and network to provide greater awareness. How terrible would that be? I mean, imagine what might come of actually knowing the requirements and real-time status of applications on the network. Dare I say network engineers may never have to deal with another “Hey, the Internet is slow today” call again… ever?

Yes, change is scary, but it’s also very exciting. It’s a chance to learn new things, to make new contributions and to lead. Network engineers aren’t going to suddenly be hanging out on street corners with “will route for food” signs.

The good ones will simply find themselves in new spheres of influence collaborating with different teams on how their combined skill sets will improve IT. Maybe they’ll even start bartering with the programmers: You teach me YANG modeling, and I’ll teach you link state protocols. The sooner we can all get past the FUD and embrace this reality, the sooner we can move this industry forward.

This is tech. If we’re not here to innovate, then why are we here?

Related Articles:

Continuous Learning: Propelling Forward in a Rapidly and Inevitably Changing World

Whether we realize it or not, advanced technologies like artificial intelligence (AI), augmented reality, and the Internet of Things (IoT) have transformed the way we think about the world around us. From how we protect our schools to the way we navigate our streets to how we shop for groceries, such technology now lies at the heart of practically everything we do today.

Just as these technologies have changed the way we live, they have changed the way we work. Today’s rapid pace of innovation has transformed nearly every business task, process, and workflow imaginable—so much so that industry analysts estimate that up to 45% of activities that employees are paid to perform can now be automated.

This digital disruption—or what many are calling the Fourth Industrial Revolution—without question redefines traditional roles and responsibilities. In fact, research shows that within five years, more than one third of the skills considered important in today’s workforce will have changed. What’s more, analysts estimate that 65% of today’s children will grow up to work in roles that don’t yet exist.

While we do still see employees that specialize in one skill or expertise, we’ve mostly moved away from the days of hiring an employee for just one job. As technology evolves, so too do the skills required to innovate and propel forward. Looking ahead, employees must have a propensity for continuous learning and adopting new skills to be able to recognize and respond to today’s speed of digital change.

Consider how technology has changed the marketing paradigm. As recently as 10 years ago, marketing platforms like Marketo and HubSpot had only just been founded, Facebook was still in its infancy, and the first iPhone had newly hit the market. As technologies like cloud, social, mobile and big data evolved, however, we suddenly began seeing new tools specifically designed to enhance digital media, social media marketing, and mobile marketing. As a result, companies began searching to fill roles for social media coordinators, digital campaign managers and integrated marketing planners—jobs that were unfathomable 15 to 20 years prior.

Fast forward to today and we’re seeing the emergence of new technology for marketing, such as augmented reality, geofencing, and emotion detection. The continual emergence of new technology perpetually creates skills gaps that must be filled by employees who are passionate, motivated, and invested in their own learning. These kinds of team members are committed to developing new skills and leveraging their strengths to outperform.

But not all employees can easily identify their strengths or develop new skills. This is likely why nearly half of employees today feel unengaged at work, with nearly 20% feeling “actively disengaged.” At the same time, companies are struggling to align employee strengths with organizational priorities. Employees may have certain strengths, but employers may find those skills don’t directly increase operational efficiency or performance. This is why nearly 80% of businesses are more worried about a talent shortage today than they were two years ago.

So, what’s the answer? Employees and employers must work together to identify what roles are currently filled, what skills are still needed, and who best exemplifies those skills. For employees, this means taking control of how they grow their careers. For employers, this means displaying an unwavering commitment to employee reinvestment by understanding key areas of interest to effectively fill skills gaps.

At Avaya, for example, we’re leading an employee enablement program under our Marketing 3.0 strategy. The initiative is designed to help strengthen our marketing organization by equipping employees with the right competencies that reflect our culture, strategy, expectations and market dynamics. By doing so, we can ensure we’re recruiting and managing talent in the most strategic way, putting the right people in the right jobs with the abilities to perform at maximum potential every day. By having each marketing function participate in a simple knowledge profile exercise, we can begin objectively determining development opportunities that best meet their needs and the needs of our business.

As technology continuously evolves, it’s crucial that employees have a propensity for continuous learning and that organizations foster an environment for this learning. In the words of former GE CEO Jack Welch, “An organization’s ability to learn, and translate that learning into action rapidly, is the ultimate competitive advantage.”

We live in a world that is rapidly and inevitably changing. Employees must embrace this change if they wish to thrive and propel business forward. As employers, we are responsible for strategically leveraging our resources to align employee strengths with organizational needs to succeed in this environment of constant change.

Next-Generation IT: What Does It Really Look Like?

From mainframes to virtualization to the IoT, we’ve come a long way in a very short amount of time in terms of networking, operating systems and applications. All this progress has led us to an inflection point of digital business innovation; a critical time in history where, as Gartner puts it best, enterprises must “recognize, prioritize and respond at the speed of digital change.” Despite this, however, many businesses still rely on legacy systems that prevent them from growing and thriving. So, what’s the deal?

I attempted to answer this in a previous blog, where I laid out as completely as I could the evolution of interconnectivity leading up to today. What was ultimately concluded in that blog is that we have reached a point where we can finally eliminate dependency on legacy hardware and hierarchical architecture with the use of one single, next-generation software platform. The call for organizations across all industries to migrate from legacy hardware has never been stronger, and the good news is that technology has evolved to a point where they can now effectively do so.

This concept of a “next-generation platform,” however, isn’t as simple as it sounds. Just consider its many variations among industry analysts. McKinsey & Company, for example, refers to this kind of platform as “next-generation infrastructure” (NGI). Gartner, meanwhile, describes it as the “New Digital Platform.” We’re seeing market leaders emphasizing the importance of investing in a next-generation platform, yet many businesses still wonder what the technology actually looks like.

To help make it clearer, Avaya took a comparative look at top analyst definitions and broke them down into five key areas of focus for businesses industry-wide: 

  1. Next-generation IT
  2. The Internet of Things (IoT)
  3. Artificial intelligence (AI)/automation
  4. Open ecosystem
  5. The customer/citizen experience

In a series of upcoming blogs, I’ll be walking through these five pillars of a next-generation platform, outlining what they mean and how they affect businesses across every sector. So, let’s get started with the first of these: next-generation IT.

Simplifying Next-Gen IT

As IT leaders face unrelenting pressure to elevate their infrastructure, next-generation IT has emerged as a way to enable advanced new capabilities and support ever-growing business needs. But what does it consist of? Well, many things. The way we see it, however, next-generation IT is defined by four core elements: secure mobility, any-cloud deployment (more software), omnichannel and big data analytics—all of which are supported by a next-generation platform built on open communications architecture.

Secure mobility: Most digital growth today stems from mobile usage. Just consider that mobile now represents 65% of all digital media time, with the majority of traffic for over 75% of digital content—health information, news, retail, sports—coming from mobile devices. Without question, the ability to deliver a secure mobile customer/citizen experience must be part of every organization’s DNA. This means enabling customers to securely consume mobile services anytime, anywhere and however desired with no physical connectivity limitations. Whether they’re on a corporate campus connected to a dedicated WLAN, at Starbucks connected to a Wi-Fi hotspot, or on the road paired to a Bluetooth device through cellular connectivity, the connection must always be seamless and secure. Businesses must start intelligently combining carrier wireless technology with next-generation Wi-Fi infrastructure to make service consumption more secure and mobile-minded with seamless hand-off between the two technologies.

Any-cloud deployment: Users should be able to seamlessly consume any application or service under any cloud deployment model (hybrid, public or private). To enable this, businesses must sufficiently meet today’s requirements for any-to-any communication. As I discussed in my previous blog, the days of nodal configuration and virtualization are a thing of the past; any-to-any communications have won the battle. A next-generation platform built on open communications architecture is integrated, agile, and future-proof enough to effectively and securely support a services-based ecosystem. Of course, the transition toward software services is highly desirable, but remember that not all hardware will disappear; it should, however, be reduced wherever possible. This services-based design is the underlying force behind many of today’s greatest digital developments (smart cars, smart cities). It’s what allows organizations across every sector to deliver the most value possible to end-users.

Omnichannel: All communication and/or collaboration platforms must be omnichannel enabled. This is not to be confused with multi-channel. Whereas the latter represents a siloed, metric-driven approach to service, the former is inherently designed to provide a 360-degree customer view, supporting the foundation of true engagement. An omnichannel approach also supports businesses with the contextual and situational awareness needed to drive anticipatory engagement at the individual account level. This means knowing that a customer has been on your website for the last 15 minutes looking at a specific product of yours, which they inquired about during a live chat session with an agent two weeks ago. This kind of contextual data needs to be brought into the picture to add value and enhance the experience of those you serve, regardless of where the interaction first started.

Big data analytics: It’s imperative that you strategically use the contextual data within your organization to compete based on the CX. A huge part of next-generation IT involves seamlessly leveraging multiple databases and analytics capabilities to transform business outcomes (and ultimately, customers’ lives). This means finally breaking silos to tap into the explosive amount of data—structured and unstructured, historical and real-time—at your disposal. Just as importantly, this means employees being able to openly share, track, and collect data across various teams, processes, and customer touch points. This level of data visibility means a hotel being able to see that a guest’s flight got delayed, enabling the on-duty manager to let that customer know that his or her reservation will be held. It means a bank being able to push out money management tips to a customer after seeing that the individual’s last five interactions were related to account spending.
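What breaking those silos boils down to, mechanically, is stitching records from separate systems into one customer timeline. Here's a toy Python sketch of that idea; the field names, channels and events are entirely hypothetical.

```python
# Toy sketch of a "360-degree view": merge interaction records from siloed
# systems into one chronological customer timeline. The field names and
# channels below are hypothetical, not any real product's data model.

def build_timeline(*sources):
    """Merge event lists from separate systems, sorted by timestamp."""
    events = [event for source in sources for event in source]
    return sorted(events, key=lambda event: event["ts"])

# Two silos: a chat system and a web analytics system.
chat_log = [{"ts": 100, "channel": "chat", "note": "asked about product X"}]
web_log = [{"ts": 250, "channel": "web", "note": "viewed product X page"}]

timeline = build_timeline(chat_log, web_log)
# An agent can now see that the chat inquiry preceded the web visit,
# which is exactly the context the omnichannel discussion above calls for.
```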

These four components are critical to next-generation IT as part of a next-generation digital platform. Organizations must start looking at each of these components if they wish to compete based on the CX and respond at the speed of digital change. Stay tuned: next, we’ll be talking about the ever-growing Internet of Things!

How to (Finally) Break the Longstanding Hold of Legacy Technology

Without question, we’ve seen more technological innovation in the last 30 years than we have in the last century. We now live in a reality of seemingly limitless possibilities and outcomes. Today, virtually any object can be considered part of an advanced, interconnected ecosystem. Companies across every sector are competing to reimagine customer engagement. The user experience is fundamentally changing as people, processes and services become more dynamically connected. Today’s smart, digital era represents unmatched opportunity for forward-thinking business leaders everywhere.

At the same time, however, it poses some challenges. Specifically, this rapid pace of innovation means businesses must find a way to quickly and efficiently modernize to competitively differentiate. In a time where digital disruptors are building custom IT environments on the fly, companies can no longer let legacy architecture dampen innovation and agility.

Businesses know this all too well, with 90% of IT decision makers believing that legacy systems prevent them from harnessing the digital technologies they need to grow and thrive. This is especially true in industries like government and finance, where there’s still a heavy dependency on legacy technology. For example, 71% of federal IT decision makers still use old operating systems to run important applications. Meanwhile, 30% of senior investment managers say they’re concerned about the ability of their current legacy systems to meet future regulatory requirements. The list goes on.

It’s clear that something needs to be done here, and fast. So, how exactly did we get to this point of digital disruption, and what can be done about legacy systems today? Let’s take a walk through recent history, and then discuss how companies can begin moving towards digital, next-generation IT.

Data Centralization to Decentralization

Let’s start with where applications were first consumed. About 30 to 40 years ago, all application intelligence was centralized (I’m sure some of you remember the good old mainframe days of using dumb terminals or emulators to access applications and store data centrally). There were some notable benefits to centralizing data in this fashion. There weren’t many issues with storage distribution, for instance, and disaster recovery procedures were clearly documented. Security challenges were also practically nonexistent because there wasn’t any local storage on the terminal (hence, dumb).

Soon, however, we saw the rise of the personal computer, which completely changed this model. Computing and storage could now be distributed, allowing local applications to run without any centralized dependency. This was a game-changer that sparked a desktop war between key market players like Microsoft (Windows), IBM (OS2), and Apple (MacOS).

This transition to decentralization, however, wasn’t without its challenges. Employees may have gained mobility, but IT began facing new challenges in security and distributed storage. Companies were left wondering how to best control their data storage, particularly as confidential information could easily be stored on floppy disks, local hard drives and, later, USB drives. This remains a challenge to this day—no one wants to give up their mobility, so companies must find a way to regain control instead.

One thing to note: at this point, COTS (commercial off-the-shelf) servers could now be used. These systems were far less proprietary than previous host systems like mainframes, VAX, etc. However, they were still hardware-dependent, as each platform was usually tailored to the applications it had to run. As a result, a good amount of compute, memory and storage resources went unused. In fact, some servers ran at as little as 10 to 20% of capacity. While there were benefits to COTS servers, their inefficiency called for a better way to maximize the use of all resources.

The Rise of Virtualization

The only viable solution to these problems was to decouple applications from dedicated hardware and let software share the underlying resources. But how? The market experienced profound change as companies strove to answer this question, eventually leading to the emergence of virtualization.

During this time, market leaders like VMware began transforming the industry by allowing multiple virtualized operating systems (virtual machines) to run simultaneously on the same hardware. In this way, applications ran as if they each had their own dedicated compute, memory and storage, when in fact it was all being shared. Simply put, the hardware server had become virtualized. Brilliant!

This allowed companies to create virtual representations of resources such as compute, memory and storage devices. Companies could now run multiple applications over the same physical hardware, in a way that appeared to the applications as though they were running over their own dedicated hardware. More importantly, companies could now fully leverage every single resource at their disposal. Nothing would be left dormant or unused in this virtualized model, unlike what we saw in the past with a dedicated appliance/server per application.
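To see why that mattered for utilization, consider a toy packing exercise: instead of one server per application, virtual machines are placed onto shared hosts until each host fills up. This Python sketch uses a simple first-fit rule with made-up numbers; it's an illustration of the principle, not real capacity planning.

```python
# Toy sketch of why virtualization raised utilization: pack VM resource
# demands onto shared hosts instead of dedicating one server per app.
# Capacities and demands are illustrative numbers, not sizing guidance.

def place_vms(demands, host_capacity):
    """First-fit placement: return a list of hosts, each a list of VM demands."""
    hosts = []
    for demand in demands:
        for host in hosts:
            if sum(host) + demand <= host_capacity:
                host.append(demand)  # reuse an existing host with room
                break
        else:
            hosts.append([demand])  # no room anywhere: add a new host
    return hosts

# Ten applications, each using ~15% of a server. Dedicated hardware needs
# ten machines; shared, virtualized hardware needs far fewer.
demands = [15] * 10
hosts = place_vms(demands, host_capacity=100)
print(len(hosts))  # number of hosts needed when capacity is shared
```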

At this point, moving into the virtualized application world was a no-brainer. However, the ugly truth remained: we were still using a legacy networking framework. Many continue to refer to this as client-server, but the bottom line is that it was a hierarchical model that required each node and link to be configured to carry or simulate end-to-end virtualization. Even though the application environment was virtualized, the infrastructure on which it ran was not built with that in mind. It didn’t matter if you were using VLANs, VRFs or even MPLS—it was a complex way of providing end-to-end virtualized services.

Who would finally be able to solve this issue? The Institute of Electrical and Electronics Engineers (IEEE) and Internet Engineering Task Force (IETF) were on the right track with the standardization of an Ethernet protocol that allows end-to-end services virtualization, which finally took place in May 2012. This is known as SPB, or Shortest Path Bridging (IEEE 802.1aq and IETF RFC 6329, for those interested). And there you have it: servers, applications and networks are now finally virtualized! Are we done? Well, not quite: even desktops are now being virtualized, via VDI (Virtual Desktop Infrastructure), to re-centralize control.
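For a rough sense of what the name refers to: SPB builds its services on shortest paths computed by a link-state protocol (IS-IS). The Python sketch below is a generic Dijkstra computation over a made-up topology, purely for illustration; the actual 802.1aq computation adds specific tie-breaking and equal-cost multipath rules that aren't shown here.

```python
import heapq

# Generic shortest-path illustration of the idea behind Shortest Path
# Bridging: every node computes least-cost paths through the fabric.
# The topology below is made up; this is NOT the full 802.1aq algorithm.

def shortest_path_costs(graph, source):
    """Dijkstra: return the minimal cost from source to each reachable node."""
    costs = {source: 0}
    heap = [(0, source)]
    while heap:
        cost, node = heapq.heappop(heap)
        if cost > costs.get(node, float("inf")):
            continue  # stale heap entry
        for neighbor, weight in graph.get(node, {}).items():
            new_cost = cost + weight
            if new_cost < costs.get(neighbor, float("inf")):
                costs[neighbor] = new_cost
                heapq.heappush(heap, (new_cost, neighbor))
    return costs

topology = {
    "A": {"B": 1, "C": 4},
    "B": {"C": 1, "D": 5},
    "C": {"D": 1},
}
print(shortest_path_costs(topology, "A"))
```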

Overall, virtualization became the de facto model that allowed businesses to run applications on what we know as the Cloud. With private and public models, customers could now choose what assets they wanted to own (that is, manage on premises) or have hosted through the public cloud. Soon, however, the challenge became how to run apps in these clouds. Companies quickly discovered the need to store some applications (like regulatory and compliance data) in an onsite private cloud. Meanwhile, other data was best suited for the public cloud. This is how the hybrid cloud deployment model was born.

Cloud Elasticity

Hybrid cloud allowed companies to operate in an environment that strategically utilized the best of both worlds—both on-premises private cloud and third-party public cloud services—to meet their core objectives. In this new world of cloud orchestration, we saw the rise of digital giants like Amazon, Google and Facebook. With a high level of cloud elasticity, providers could now spin up a series of virtual applications or services in less than an hour and run them in the public cloud. This opened the doors of opportunity for companies everywhere. These providers allowed organizations to create new instances on the fly and shut them down just as quickly. This elasticity is used, for example, to soft-launch new products or test-drive business in new marketplaces.
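At its core, that elasticity is just a control loop: measure load, compute how many instances you need, and scale out or in. This toy Python sketch shows the decision step with hypothetical thresholds; it's not any provider's actual autoscaling API.

```python
# Toy sketch of the decision at the heart of cloud elasticity: derive a
# desired instance count from current load. The capacity-per-instance and
# limits below are hypothetical, not any cloud provider's real API.

def desired_instances(requests_per_sec, capacity_per_instance=100,
                      min_instances=1, max_instances=20):
    """Scale out/in so each instance handles at most capacity_per_instance."""
    needed = -(-requests_per_sec // capacity_per_instance)  # ceiling division
    return max(min_instances, min(max_instances, needed))

print(desired_instances(0))    # quiet period: stay at the floor
print(desired_instances(950))  # product launch: spin up more instances
```

A real orchestrator wraps this decision in a loop with metrics collection and cooldown timers, but the spin-up/shut-down economics described above all hang off this simple calculation.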

But let’s not forget the issue that remains to this day: we have yet to completely move away from all aging hardware. In today’s world of any-to-any communication, driven by technologies like the IoT, artificial intelligence, and machine learning, legacy hardware and hierarchical networking architecture are not just an inconvenience. They can break your business if you don’t have a strategy to reduce that dependency.

Finally Breaking Free of Hardware

The bottom line is that any-to-any communications have won the battle (unlike 15 years ago, when IT largely resisted and essentially shut down the peer-to-peer model). As a result, what many refer to as “meshed communication architecture” has emerged as the newest and strongest-yet approach to network design.

This kind of architecture is integrated, agile and future-proof enough to effectively and securely support a services-based ecosystem. The days of nodal configuration and virtualization are a thing of the past. It’s vital that companies move to this services-based architecture to be able to support the future of the customer experience. Consider how it’s essential for supporting smart cars that can autonomously park and change lanes, while being redirected to alternate routes because of traffic congestion. It’s critical for supporting smart home solutions that enable homeowners to remotely manage utility usage. It’s crucial for delivering the most value possible to those who matter most: end-users.

For decades, we’ve been trying to eliminate a fundamental dependency on hardware. To finally break the silos associated with hardware, companies must begin setting themselves up to support any-to-any communication. In this environment, all services can run virtually anywhere, across multiple sources of hardware that can be geographically dispersed.

Now that we know what can be done about legacy systems (transition to an open, software-enabled, meshed architecture), let’s discuss how companies can successfully integrate digital into their existing environment to transform business. Stay tuned for more.