Avaya Software-Defined Networking 1.0 — Doing it Differently

Let’s start with a few figures that illustrate the unprecedented scale and diversity of devices connecting to the Internet of Things:

  • Number of connected devices worldwide in 2016: 5 billion
  • Projected number of connected devices in 2020: 50-75 billion
  • Expected economic impact of IoT by 2025: $10-12 trillion

What’s not so apparent within these statistics is the monumental challenge to security. This is the challenge for every industry vertical, from healthcare to building construction, from FDA-approved medical devices connecting to the network to smarter-connected kitchens: How can they reliably and securely send control and network traffic into the cloud or within an enterprise without being compromised?

Securing the growing number of connected devices used in business—particularly in healthcare—is the focus of the Avaya SDN Fx Healthcare solution. First presented this year at HIMSS, a global healthcare technology conference, the solution is built on open source technologies to protect sensitive medical devices from would-be hackers, who could otherwise use the devices as entry points into the rest of the hospital network or introduce malicious traffic originating on physically compromised devices.

At Avaya, we get it! Every good solution begins with key questions that capture the challenge. Simply put, we started with these three:

  • How can we help businesses adopt the IoT without worrying about network deployment complexities, downtime, upgrades and patching?
  • How can we provide the same or better level of security, availability and reliability for our customers’ systems as they grow with the IoT?
  • How can we enable customers to accomplish all of this simply, without massive forklift upgrades or disruption to their day-to-day operations?

Our strategy for overcoming these challenges took a multi-pronged approach: an open, extensible architecture and an eye toward the end user.

How Avaya Does SDN Differently

First, we implemented a controller-based architecture that creates secure service paths based on the profiles of the IoT devices connecting to it. A service profile is a container that encapsulates a network profile (determined by who is accessing the network) and a security profile (determined by what is being accessed); together, these form an access-context. This access-context-based service enablement ensures on-demand provisioning, putting critical security and compute resources where they are needed most. The dial-home feature of our smart IoT gateway device, the Avaya Open Network Adapter, helps ensure that it always connects to the controller to declare itself. Moreover, certificate-based TLS authentication secures communication between the Open Network Adapter and the controller for safe, reliable control-traffic exchange.
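The access-context idea can be sketched as a simple data structure pairing the two profiles. This is an illustrative sketch only; the class names, field names and values here are invented, not Avaya’s actual schema:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class NetworkProfile:
    """Who is accessing the network (illustrative fields)."""
    device_id: str
    vlan: int

@dataclass(frozen=True)
class SecurityProfile:
    """What is being accessed (illustrative fields)."""
    allowed_services: tuple

@dataclass(frozen=True)
class AccessContext:
    """Service profile: a network profile plus a security profile."""
    network: NetworkProfile
    security: SecurityProfile

    def permits(self, service: str) -> bool:
        # The controller would consult this when provisioning a path.
        return service in self.security.allowed_services

ctx = AccessContext(
    NetworkProfile(device_id="infusion-pump-17", vlan=210),
    SecurityProfile(allowed_services=("ehr-gateway", "ntp")),
)
print(ctx.permits("ehr-gateway"))  # True
print(ctx.permits("ssh"))          # False
```

In this shape, on-demand provisioning becomes a lookup: a path is created only for services the access-context permits.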

The next task was to drive a solution that addresses the potential for human error and system failure. Our underlying strategy therefore focused on insulating critical customer IoT devices and applications from these failure scenarios and delivering millisecond recovery with minimal downtime. The Avaya Open Network Adapter can operate uninterrupted, providing secure connectivity for medical devices even when the connection to the SDN controller is lost or has failed over. This guarantees continuous, secure availability of network resources for mission-critical business workflows.

To further support availability and reliability, the controller-plane architecture embraces a hybrid model of Active-Active and Active-Standby systems. The Active-Active model allows the data and resource replication that is critical for southbound control operations; here, data means user, application and context information along with network details. Replication is done at a higher layer, the Clustering Engine layer, using a distributed main-memory database and an Advanced Message Queuing Protocol (AMQP) bus. AMQP, implemented with RabbitMQ, allows the controller to perform a large number of transactions, close to one million per second. The main-memory database, built on Mnesia, provides a distributed, fault-tolerant DBMS: tables can be replicated on different compute nodes with guaranteed location transparency, so an application accessing the data needs no knowledge of where the tables reside. This yields sub-second response times and extremely fast data replication with fewer CPU cycles, which is critical for control operations and high-volume data analytics. A linearly scaling cluster implementation allows critical data to be replicated without being shared, improving the overall availability of the data. The Active-Standby implementation preserves singleton data (data that should not be replicated across the system), such as licensing details.
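The location-transparency point can be illustrated with a toy in-memory table: every write lands on every replica, so any node can serve the read. This is a sketch only; Mnesia handles this with transactions and distributed commit over the AMQP bus, and the class and node names here are invented:

```python
class ClusterTable:
    """Toy sketch of an Active-Active replicated in-memory table."""

    def __init__(self, node_names):
        # One dict per compute node stands in for a replicated table copy.
        self.replicas = {name: {} for name in node_names}

    def write(self, key, value):
        # Apply the write to every replica (Active-Active replication).
        for store in self.replicas.values():
            store[key] = value

    def read(self, key, node=None):
        # Location transparency: the caller may name any node, or none.
        store = self.replicas[node] if node else next(iter(self.replicas.values()))
        return store.get(key)

cluster = ClusterTable(["node-a", "node-b", "node-c"])
cluster.write("device:42", {"context": "clinical", "vlan": 210})
print(cluster.read("device:42", node="node-c"))  # same answer from any node
```

Because each node holds a full copy, losing one node never makes the data unreachable, which is the availability property the text describes.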

The architecture involves master nodes (one or more) that actively replicate data to ensure guaranteed availability; a leader node, elected from among the masters, that provides a unique northbound IP interface and hosts singleton data; and slave nodes (zero or more) that perform lower-level control operations.

The embedded load balancer allows control traffic to be evenly distributed across the cluster of N nodes (where N ≤ 255), supporting more than 30,000 transactions per second. This allows the controller to absorb any surge in control traffic and handle network storms. Moreover, it presents a single virtual IP address at the southbound data interface for the IoT devices connecting to the controller, providing a single-system view to the outside world.
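One simple way to spread southbound connections evenly while the cluster presents a single virtual IP is to hash the device identity onto a node. This is a toy illustration, not Avaya’s balancer; the function and node names are invented, and a real embedded balancer would also weigh node liveness and current load:

```python
import hashlib

def pick_node(device_id, nodes):
    """Deterministically map a device to one cluster node by hashing its id."""
    digest = hashlib.sha256(device_id.encode()).digest()
    # Use the first 4 bytes of the digest as an index into the node list.
    return nodes[int.from_bytes(digest[:4], "big") % len(nodes)]

nodes = [f"ctrl-{i}" for i in range(4)]   # the text allows up to 255 nodes
print(pick_node("infusion-pump-17", nodes))
```

The same device always lands on the same node, so control sessions stay sticky, while different devices scatter roughly evenly across the cluster.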

To ensure that systems can recover from a failed state and that data integrity is maintained after recovery, we implemented a fault-tolerant model based on supervisor trees. A supervisor tree is a hierarchical model that defines what needs to be monitored and remedied within an active system. Its leaf nodes act as independent supervisors, each watching its own assigned set of resources and reporting to its parent node. The loosely coupled architecture lets each leaf supervisor make independent decisions based on operating conditions, reports from its child nodes and configuration parameters. Remediation typically involves retries and migration of resources.
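A minimal sketch of one leaf of such a tree, assuming invented worker names and a retry-then-escalate policy (real supervisor trees, as in Erlang/OTP, offer richer restart strategies):

```python
class Worker:
    """A monitored resource; restarts counts the retries used so far."""
    def __init__(self, name):
        self.name = name
        self.alive = True
        self.restarts = 0

class Supervisor:
    """Leaf supervisor: restarts failed workers up to max_retries,
    then reports the failure upward to its parent (illustrative only)."""
    def __init__(self, workers, max_retries=3):
        self.workers = workers
        self.max_retries = max_retries

    def check(self):
        healthy = True
        for w in self.workers:
            if not w.alive:
                if w.restarts < self.max_retries:
                    w.restarts += 1
                    w.alive = True    # remediation: retry in place
                else:
                    healthy = False   # escalate to the parent supervisor
        return healthy

sup = Supervisor([Worker("flow-engine"), Worker("policy-cache")], max_retries=2)
sup.workers[0].alive = False          # simulate a failure
print(sup.check())                    # True: the worker was restarted locally
```

The key property is locality: each leaf remediates what it can on its own, and only unrecoverable failures propagate up the tree.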

Last but not least, to ensure complexities don’t percolate up to the application layer, we implemented a three-tier SDN architecture that is simple to deploy and use and, most importantly, allows customers to focus on their day-to-day operations without having to worry about infrastructure. Installation is greatly simplified by our zero-touch deployment, which is unique in the industry: with just three simple commands per node, a user can deploy a full two-node cluster.

Northbound APIs allow customers, partners and developers to create secure network and connectivity services for their applications without requiring advanced knowledge of the underlying infrastructure or SDN controller internals. An application registration and authorization process using the OAuth framework extends security into the user and application space.
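As a sketch of the authorization step, a resource server receiving a northbound call might check a bearer token’s expiry and scope as below. The token fields and scope names are invented for illustration; real OAuth validation also verifies the token’s signature or introspects it with the authorization server:

```python
import time

def authorize(token, required_scope, now=None):
    """Allow a call only if the token is unexpired and carries the scope."""
    now = time.time() if now is None else now
    if token["expires_at"] <= now:
        return False                      # token has expired
    return required_scope in token["scopes"]

token = {"sub": "partner-app", "scopes": ["flows:read"], "expires_at": 1_700_000_000}
print(authorize(token, "flows:read", now=1_600_000_000))   # True: valid, in scope
print(authorize(token, "flows:write", now=1_600_000_000))  # False: scope missing
print(authorize(token, "flows:read", now=1_800_000_000))   # False: expired
```

Scoping tokens per application keeps a compromised partner credential from reaching beyond the services it was registered for.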

With an average of 12 connected devices per hospital bed and upwards of 100,000 connected devices in a given hospital system, it’s easy to see the need for an architectural model like this to push the accepted limits of the tools openly available. It’s also easy to see the importance of vendors like Avaya being champions for the standardization of these solutions rather than just opportunistically offering up professional services to overwhelmed IT organizations or leaving them to deal with the aftermath of breaches. Securing the diverse array of Things connected to the network and doing so at unprecedented scale is about more than just innovation. It’s a matter of solving the real-world problems faced by our customers in these rapidly changing times.

Related Articles:

The IoT Chronicles Part 2: Three Big Security Threats—and How to Solve Them

With projected market revenue of $1.7 trillion by 2020, the Internet of Things (IoT) stands to forever change the world as we know it. In part 1 of this series, I demystified the IoT and explored how leaders can create a vertical-driven strategy that produces positive and proactive business outcomes. Your strategy won’t get you far, however, if it doesn’t explicitly address the unique security threats that are inherent to this level of connectivity.

These kinds of threats aren’t easy to identify or mitigate, which is exactly why nearly 60% of companies say they plan to implement the IoT only eventually (i.e., once security no longer concerns them), and why nearly 30% have no plans to implement the IoT at all. This is likely to change quickly, however.

With the number of connected “things” growing, it’s expected that more hackers will feed off the ever-growing possibilities to attack, threaten and compromise business. Consider the recent IoT-driven DDoS attack on Internet performance company Dyn, which disrupted websites like PayPal, Spotify and Twitter. Dyn’s Chief Strategy Officer admitted last month that some of the traffic that attacked the company came from compromised IoT devices.

As I continue this four-part IoT crash course, I’d be remiss in not discussing security. Having said that, here are three massive IoT security threats we’re seeing today (and how to expertly address them):

  1. Personally owned devices:

    Research shows that about 40% of U.S. employees at large enterprises bring their own device(s) to work, and 75% of companies currently permit or plan to permit BYOD in the workplace. Today, there’s a clear need among businesses to securely connect these personally owned devices that simultaneously perform multiple functions and connect to public, private and hybrid clouds. It may be easy to secure enterprise IoT, but this gets a lot trickier when you factor in the devices employees are using on your network. Just consider the 10 million Android devices that were infected this summer with Chinese malware.

    My suggestion: implement some sort of malware detection mechanism and deliver some level of automation that can quickly detect abnormalities on employee devices and prevent them from spreading.

  2. Open APIs:

    An open API model is advantageous in that it allows developers outside of companies to easily access and use APIs to create breakthrough innovations. At the same time, however, publicly available APIs are also exposed ones. Promoting openness means anyone can write new APIs (which is a good thing), but that can cause some challenges in the market. If an organization has undocumented features of its API, for instance, or if someone is rolling out an API and doesn’t have it properly documented or controlled, hackers can potentially take advantage. At the end of the day, businesses must be cautious as to what is being exposed and documented when writing APIs.

  3. Influx of data:

    The amount of data being gathered from today’s ever-growing number of connected “things” is simply astounding. In fact, research shows that about 90% of all data in the world today was created in just the past few years (2.5 billion GB of data were produced every day in 2012 alone!). While big data has the potential to transform internal processes and the customer experience, leaders must ensure they have the right infrastructure in place to securely distribute and store the massive amount of data that flows through their organizations daily.

    My suggestion: have a solid understanding of how much data your network can handle (never overestimate your network’s capabilities) and plan to scale accordingly. Also, know where your data originates and which privacy precautions you might need to take depending on the industry in which you operate. Healthcare, for example, must abide by very strict regulations. Be sure to also keep in mind the legality of where you store your data, depending on where that data comes from. Countries like Germany, for instance, have strict privacy laws that others don’t.

The One Thing to Remember

Here’s the thing business leaders must keep top of mind: although the possibilities for data compromise are growing, they’ll never become realities with network security solutions offered from the right provider. This doesn’t mean your security concerns aren’t valid. It simply means that, with the right technology, there’s no longer a reason to let those concerns prevent you from tapping into the immeasurable growth brought about by the IoT.

So, what’s my final suggestion? Organizations should consider a layered approach:

  • Phase I: Analyze, monitor and inspect.
  • Phase II: When a device is classified as suspect, isolate it in a different segment and perform forensic analysis.
  • Phase III:
    • Quarantine the device if known malware is detected and identified.
    • If the cause is unknown/unidentified, maintain isolation in a honeypot—a quarantine zone to understand malware—and deploy counter measures as soon as possible once a fix becomes available.
  • Phase IV: Once malware is clearly identified, quarantine all devices potentially infected while informing the end users and LOBs impacted.

For Phases II and III, invoke a sophisticated automated workflow to notify the right team for just-in-time analysis.
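The phased workflow can be expressed as a small state machine that an automated system could drive. The state and event names here are illustrative, not a prescribed schema:

```python
# Each transition maps (current state, event) -> next state.
TRANSITIONS = {
    ("monitored", "anomaly_detected"): "isolated",       # Phase II
    ("isolated", "malware_identified"): "quarantined",   # Phases III/IV
    ("isolated", "cause_unknown"): "honeypot",           # Phase III
    ("honeypot", "fix_available"): "monitored",
    ("quarantined", "device_cleaned"): "monitored",
}

def next_state(state, event):
    """Advance a device through the phases; unknown events leave it in place."""
    return TRANSITIONS.get((state, event), state)

state = "monitored"                         # Phase I: analyze and inspect
state = next_state(state, "anomaly_detected")
print(state)  # isolated
state = next_state(state, "malware_identified")
print(state)  # quarantined
```

Encoding the phases this way also makes the notification step mechanical: any transition into "isolated" or "quarantined" can trigger the alert to the right team.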

To properly execute on these phases, you need an automated and more secure networking foundation. The legacy client-server model is simply not suitable for this new IoT world. Whatever services your connected devices or systems provide, do whatever you can to ensure they are logically segmented on your infrastructure. This is something that can be achieved through end-to-end network segmentation.

An end-to-end network segmentation solution eliminates nodal configuration by leveraging end-to-end Virtual Services Networks (VSNs). This allows businesses to provision their networks only at specific points of service, where those services are being consumed by end users or devices. Ultimately, end-to-end segmentation transforms your network core into an automated and intelligent virtualized transport. Your network segments will be stealth to hackers, flexible for secure and authorized use, and truly isolated from one another. These core capabilities nearly guarantee network security no matter what devices your employees are using, how much data they are generating and sharing, or what APIs are being written.

Your network security strategy will never be effective if your underlying architecture isn’t what it needs to be. In my opinion, end-to-end network segmentation is the most effective way to minimize and control the inherent security risks of the IoT. And the best news is that there are end-to-end segmentation solutions proven to deliver next-generation IoT security—even for companies still leveraging aging infrastructure. The technology is possible, real and waiting to be utilized.

As we move forward with the IoT, we must ensure security is always top of mind. There is a set of best practices that organizations must implement to substantially reduce the risks associated with IoT deployment. Keep in mind that no system is immune, but understanding the risks and minimizing the potential business impact is key. In the end, the status quo will likely be a disaster for organizations endorsing the IoT at a rapid pace: changes to legacy practices and infrastructure are a must! Thankfully, technology advancements can provide the connectivity, stability and security required to enable companies to take advantage of the opportunities provided by the IoT.

 

The IoT Chronicles Part 1: Demystification and Strategy

If you follow my blog, then you know I talk a lot about the Internet of Things (IoT). As Avaya’s Chief Technologist for Software Defined Architecture, I love to discuss the IoT, a vast topic that I believe all business leaders should continuously educate themselves on. (See my recent blog and my colleague Mark Fletcher’s blog about the possibilities and outcomes of smarter public safety and emergency response—something that affects every one of us.)

With research indicating that the next decade will be marked by record IoT growth—some firms expect revenue to break into the trillions—it’s never a bad idea to revisit the fundamentals. There’s no question the IoT is hot right now, but businesses need to buckle down and take the right steps to make a lasting impact. Leaders must have a solid understanding of the IoT and, most importantly, what it means for their organizations outside all the hype. While the IoT offers ample opportunity for innovation and growth, there are undoubtedly key considerations that must be made to see success.

So, hype aside, what do you need to know about the IoT? This is the first in a four-part blog series, a crash course on the ever-growing world of connected “things”—from top challenges to solutions to predictions and trends. Got your pencil and paper? Good, let’s get started…

What is the IoT?

Let’s start with the basics: what exactly is the IoT? The term has been tossed around so much that you’d think by now you’d have an inherent understanding of it. Like many other concepts though, definitions vary and can be subject to opinion.

Wikipedia, for instance, defines the IoT as “the internetworking of physical devices, vehicles, buildings and other items that are embedded with software, sensors and network connectivity capabilities that enable these objects to collect and exchange data.” This definition isn’t inaccurate; however, the way I see it, anything that can connect to either a network or provide any sort of service (not just data collection and exchange) should be considered part of the IoT.

We may be living in an age full of wearable technologies and 4/5G-enabled devices, but who’s to say that older technologies like printers, digital and analog phones or first-generation video conferencing systems aren’t part of the IoT? If anything, these were significant predecessors that paved the way to IoT greatness.

My point is that the IoT is very loosely defined in today’s market, but the end goal is the same: create automated (and in many cases data-driven) processes that generate the exact business outcome you’re looking for.

What kinds of outcomes? Imagine a sensor that can detect a forest fire and send out real-time notifications to emergency response teams to prevent it from spreading. Consider a 4G-connected car that can detect a flat tire and immediately notify the nearest repairman. Picture bank tellers that can identify customers (or criminals) as soon as they walk through the door via facial recognition. The use cases for IoT are truly endless, which is why we at Avaya define the IoT as simply having an open scope. Virtually anything can be considered part of the IoT, and so anything is possible.

How to Create Your Best IoT Approach

Do a quick Google search on the IoT and you’ll see all sorts of results like, “The IoT Is Far Bigger Than Anyone Realizes” or “How People Are Using the IoT.” These are good pieces of information, but how many resources are out there for helping you create the best IoT approach for your specific organization? What steps should you take? What steps can you take given your circumstances? You must consider such things as budgets and internal bandwidth to ensure you properly invest in and get the most value out of IoT.

I can’t give you a custom-tailored IoT strategy on the fly (although that’s something we at Avaya can help you map out and execute in time). What I can do right now is shed light on the reality of the IoT, and how businesses can leverage it in a positive way.

The first step to figuring out how the IoT can deliver proactive, positive outcomes for your business is to look at your specific vertical needs. It’s critical that businesses understand the vertical-specific nature of the IoT. Every industry has its own set of opportunities, as well as its own set of challenges to overcome.

For instance, within a hospital, there’s the critical need for fully secure connectivity between life-saving medical devices, as well as the need to seamlessly and immediately deliver patient data to medical staff. Meanwhile, a financial institution is concerned with how to guarantee account protection and secure financial transactions while providing a personalized experience for customers. A retailer may be focused on detecting the proximity of a customer in order to push relevant promotions based on big data analytics.

Every industry is centered on different yet equally important business outcomes that lead to better customer experiences. Needless to say, you’ll fall 10 steps behind your competitors if you partner with a provider that touts a “one-size-fits-all” IoT platform.

Steps You Can Take Now for IoT Success

So, after you finish reading this, what can you begin doing to set yourself up for IoT success? One question you can ask yourself in terms of security (a massive IoT concern I’ll be tackling in Part 2 of this series) is: “Am I segmenting my network to ensure no one can see my connected devices, or access those devices without proper authorization?” I’ll be digging deeper into this in Part 2. In the meantime, read up on end-to-end network segmentation.

To make a lasting impact, you should also avoid a siloed IoT approach at all costs (or break your existing siloed approach). All lines of business (LOBs) must move at one unified pace of innovation to produce better business outcomes and customer experiences. I can’t stress the importance of this cross-LOB initiative enough. If one department is adding connected devices, you must ensure those devices can intelligently connect to all other LOBs. In today’s smart, digital world, the IoT is rooted in being able to seamlessly and intelligently gather and share data organization-wide. Today, tangible ROI and benefits are found in enterprise-wide connectivity and data exchange.

Coming up: In part 2 of this series, I’ll address the elephant in the room when it comes to the IoT: security.

Time for a New Network Engine: Start Running on a Software-Defined Network

I grew up on a wheat farm in the 70s. I spent much of my teens and early 20s working on farm machinery, before starting my career in software and computer technology. I learned distributor caps, points, carburetors, plugs, etc., to be able to tune up an engine and get it to run well. I still have a timing light and dwell meter so I can work on my old Studebaker. However, I don’t work on my modern vehicles—I have a trustworthy mechanic with the tools to interact with the onboard computer systems.

Engines have progressed a long way since the 70s. I had a 1979 Hurst/Olds Cutlass, one of the top factory muscle cars of the late 70s. The engine was rated at 170 HP and got 12 MPG on a good day. A 2014 Mustang GT500 has 662 HP and gets 24 MPG, or almost four times the horsepower and twice the mileage. Aerodynamics has some effect, but the big difference is engine technology (plus modern transmissions, but bear with my analogy for a few more paragraphs).

OEM (original equipment manufacturer) and aftermarket parts companies offered many components to try to improve the good old 70s V8 engines. Distributors and points were replaced by electronic ignition systems, providing more accurate spark and reduced component deterioration. Carburetors were replaced by throttle-body fuel injectors that eliminated the bowls and floats and provided better fuel delivery. These components helped, but they weren’t capable of delivering the orders-of-magnitude improvement required to deliver horsepower to a mileage-conscious consumer (or to government agencies).

Modern engines are a marvel of computer technology. The fundamentals of the internal combustion engine haven’t changed: compress a mixture of air and fuel, introduce a spark, convert the explosion to mechanical energy, exhaust the spent fuel, and repeat. Now, computers do a better job of tuning the engine than I could ever dream of and tuning is performed constantly, adjusting the engine for atmospherics, load, fuel quality, terrain, driver style, etc. to maximize efficiency.

The networking industry is at a similar place today as engine designers were in the 80s. We’re trying to modernize the 90s network technology by adding Software-Defined Network (SDN) controllers. As requirements for network services evolved, network manufacturers created protocols (some open, some proprietary) to deliver the services. The result is a stack of network protocols that present a very complex management challenge.

I read a book in my teens (Danny Dunn and the Homework Machine, Abrashkin and Williams, 1958) about a student who programmed a NASA computer to do his math homework. The student’s math teacher found out about the program. The student assumed he was going to fail the class because he didn’t do his own homework. However, the teacher said the student had to understand more about how to solve the math problems to program the computer than was required to do the problems. This story has stuck with me for 40+ years because of the underlying truth: You have to understand a problem very well to be able to automate a solution.

I don’t claim to be a network admin, but I know several. They tell me managing the full network stack is as much art as it is science. Put a half-dozen network experts around a table with an endless supply of beer, and the beer will run out before they come to a consensus on how to best architect and operate a complex network. If they can’t agree how to manage a network, how can there be an agreement on the best way to automate it?

If auto manufacturers had tried to computerize a carburetor and dynamically adjust timing by putting a stepper motor on the distributor, we’d still be driving sub-200 HP performance cars with poor reliability and complex service requirements. To significantly improve the network, we need to start by simplifying the network. This doesn’t mean we need an entirely new network paradigm. Engine designers maintained the core hardware design with pistons, valves, camshafts and crankshafts (though some people did play with the rotary engine concept). The basic network is fine—cabling, switches, Ethernet, TCP/IP, etc. However, the delivery of upper-level services needs to be greatly simplified to achieve the promise of a significantly improved network.

But what’s meant by an “improved network”? Engine designers were driven to improve engine efficiency to get more power from a unit of fuel, but I’m sure there were other secondary goals, such as the improved reliability that allowed vehicle manufacturers to offer much longer product warranties. So what are the goals of an improved network?

  • Security:

    Data security is top of mind (and front of newspapers) today. Complexity is an antagonist of safety. Complex environments provide too many attack surfaces and make it very easy for well-intentioned maintenance to accidentally open a back door to your data.

  • Flexibility:

    Complex environments are hard to change. It used to be that provisioning a server took weeks and configuring the network took minutes. With virtualization, a server can be provisioned in minutes, but a VLAN takes weeks to create (safely).

  • Resiliency:

    In the 7×24 connected world, taking minutes to hours to recover from a network component failure isn’t acceptable.

  • Manageability:

    This is somewhat a self-fulfilling statement. Less complex environments are simpler to understand and simpler to manage effectively.

Avaya’s SDN Fx™ Architecture, based on Shortest Path Bridging (SPB, IEEE 802.1aq), provides an alternative to the traditional network protocol stack for L2/L3 unicast and multicast network services. SPB has several attributes that make it a much better engine for driving the requirements of modern networks.

  • Provisioned at the edge:

    Network services are defined on the access switches, turning the core of the network into a vehicle for data transfer that is never touched. (See point No. 3 in Top 10 things you need to know about Avaya Fabric Connect.)

  • Hyper-segmentation:

    SPB supports 16 million virtual networks, so every service can have its own virtual network segment, a key to providing network level data security. (For more information, see Avaya Chief Technologist of SDA Jean Turgeon’s three-part blog on network segmentation. Read about hyper-segmentation, native stealth and elasticity.)

  • Very fast re-convergence:

    SPB identifies all possible paths through the network and selects the best path. If a path disappears, the next best path is already determined and chosen in a couple of hundred milliseconds or less. (See point No. 7 in Top 10 things you need to know about Avaya Fabric Connect.)

  • Internet of Things (IoT) support:

    SPB works equally well connecting racks of virtualized compute infrastructure as connecting Wireless Access Points (WAPs), CCTV cameras, sensors, controls, phones, etc. See the blog Security and the IoT: Where to Start, How to Solve for more information.
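The fast re-convergence idea, computing the best path and the next-best path ahead of time so that failover is just a switchover, can be sketched with a toy shortest-path computation. SPB itself derives paths from IS-IS link-state routing; the graph, link and function names here are invented for illustration:

```python
import heapq

def shortest_path(graph, src, dst):
    """Dijkstra over {node: {neighbor: cost}}; returns a node list or None."""
    dist, prev, pq, seen = {src: 0}, {}, [(0, src)], set()
    while pq:
        d, u = heapq.heappop(pq)
        if u in seen:
            continue
        seen.add(u)
        if u == dst:
            break
        for v, c in graph.get(u, {}).items():
            if d + c < dist.get(v, float("inf")):
                dist[v], prev[v] = d + c, u
                heapq.heappush(pq, (d + c, v))
    if dst not in dist:
        return None
    path, node = [], dst
    while node != src:
        path.append(node)
        node = prev[node]
    return (path + [src])[::-1]

def precompute(graph, src, dst, link):
    """Primary path plus a backup that avoids one link, computed before
    any failure so failover is just a table lookup."""
    degraded = {u: {v: c for v, c in nbrs.items()
                    if (u, v) != link and (v, u) != link}
                for u, nbrs in graph.items()}
    return shortest_path(graph, src, dst), shortest_path(degraded, src, dst)

net = {"A": {"B": 1, "C": 2}, "B": {"A": 1, "D": 1},
       "C": {"A": 2, "D": 1}, "D": {"B": 1, "C": 1}}
primary, backup = precompute(net, "A", "D", ("B", "D"))
print(primary)  # ['A', 'B', 'D']
print(backup)   # ['A', 'C', 'D']
```

Because the alternative is computed in advance, switching to it on link failure costs only a lookup, which is how sub-second (here, sub-millisecond in the toy) re-convergence becomes possible.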

One benefit that engine designers had that network engineers don’t is the new model year. Consumers don’t expect to take their old car to the dealer and get an engine upgrade; they take their car in to get an entirely new car. Network engineers are expected to upgrade the network by replacing parts, usually while the network is still running. Avaya’s Fabric Extend allows SPB to be deployed by simply replacing the edge switches and utilizing your existing core network. Spanning the core this way doesn’t provide all of the benefits of a full fabric deployment, but it does provide a means to execute a rolling fabric conversion, a bit like upgrading the carburetor while the car is running.