Time for a New Network Engine: Start Running on a Software-Defined Network

I grew up on a wheat farm in the 70s. I spent much of my teens and early 20s working on farm machinery before starting my career in software and computer technology. I learned distributor caps, points, carburetors, plugs, etc. so I could tune up an engine and get it to run well. I still have a timing light and dwell meter to be able to work on my old Studebaker. However, I don’t work on my modern vehicles—I have a trustworthy mechanic with the tools to interact with the onboard computer systems.

Engines have progressed a long way since the 70s. I had a 1979 Hurst/Olds Cutlass, one of the top factory muscle cars of the late 70s. The engine was rated at 170 HP and got 12 MPG on a good day. A 2014 Mustang GT500 has 662 HP and gets 24 MPG, or almost four times the HP and twice the mileage. Aerodynamics has some effect, but the big difference is engine technology (plus modern transmissions, but bear with my analogy for a few more paragraphs).

OEM (original equipment manufacturer) and aftermarket parts companies introduced many components to try to improve the good old 70s V8 engines. Distributors and points were replaced by electronic ignition systems, providing more accurate spark and reduced component deterioration. Carburetors were replaced by throttle body fuel injectors that eliminated the bowls and floats and provided better fuel delivery. These components helped, but they weren’t capable of delivering the order-of-magnitude improvement required to deliver horsepower to a mileage-conscious consumer (or to government agencies).

Modern engines are a marvel of computer technology. The fundamentals of the internal combustion engine haven’t changed: compress a mixture of air and fuel, introduce a spark, convert the explosion to mechanical energy, exhaust the spent fuel, and repeat. Now, computers do a better job of tuning the engine than I could ever dream of and tuning is performed constantly, adjusting the engine for atmospherics, load, fuel quality, terrain, driver style, etc. to maximize efficiency.

The networking industry is at a similar place today as engine designers were in the 80s. We’re trying to modernize the 90s network technology by adding Software-Defined Network (SDN) controllers. As requirements for network services evolved, network manufacturers created protocols (some open, some proprietary) to deliver the services. The result is a stack of network protocols that present a very complex management challenge.

I read a book in my teens (Danny Dunn and the Homework Machine, Abrashkin and Williams, 1958) about a student who programmed a NASA computer to do his math homework. The student’s math teacher found out about the program. The student assumed he was going to fail the class because he didn’t do his own homework. However, the teacher said the student had to understand more about how to solve the math problems to program the computer than was required to do the problems. This story has stuck with me for 40+ years because of the underlying truth: You have to understand a problem very well to be able to automate a solution.

I don’t claim to be a network admin, but I know several. They tell me managing the full network stack is as much art as it is science. Put a half-dozen network experts around a table with an endless supply of beer, and the beer will run out before they come to a consensus on how to best architect and operate a complex network. If they can’t agree how to manage a network, how can there be an agreement on the best way to automate it?

If auto manufacturers had tried to computerize a carburetor and dynamically adjust timing by putting a step motor on the distributor, we’d still be driving sub-200 HP performance cars with poor reliability and complex service requirements. To significantly improve the network, we need to start by simplifying the network. This doesn’t mean that we need an entirely new network paradigm. Engine designers maintained the core hardware design with pistons, valves, camshafts, and crankshafts (though some people did play with a rotary engine concept). The basic network is fine—cabling, switches, Ethernet, TCP/IP, etc. However, the delivery of upper-level services needs to be greatly simplified to achieve the promise of a significantly improved network.

But what’s meant by “improved network”? Engine designers were driven to improve engine efficiency to get more power from a unit of fuel. But I’m sure there were other secondary goals, such as improved reliability that allowed vehicle manufacturers to offer much longer product warranties. So what are the goals of an improved network?

  • Security:

    Data security is top of mind (and front of newspapers) today. Complexity is an antagonist of security. Complex environments provide too many attack surfaces and make it very easy for well-intentioned maintenance to accidentally open a back door to your data.

  • Flexibility:

    Complex environments are hard to change. It used to be that provisioning a server took weeks and configuring the network took minutes. With virtualization, a server can be provisioned in minutes, but a VLAN takes weeks to create (safely).

  • Resiliency:

    In the 7×24 connected world, taking minutes to hours to recover from a network component failure isn’t acceptable.

  • Manageability:

    This is somewhat self-evident: less complex environments are simpler to understand and simpler to manage effectively.

Avaya’s SDN Fx™ Architecture, based on Shortest Path Bridging (SPB, IEEE 802.1aq), provides an alternative to the traditional network protocol stack for L2/L3 unicast and multicast network services. SPB has several attributes that make it a much better engine for driving the requirements of modern networks.

  • Provisioned at the edge:

    Network services are defined on the access switches, turning the core of the network into a vehicle for data transfer that is never touched. (See point No. 3 in Top 10 things you need to know about Avaya Fabric Connect.)

  • Hyper-segmentation:

    SPB supports 16 million virtual networks, so every service can have its own virtual network segment, a key to providing network-level data security. (For more information, see Avaya Chief Technologist for SDN Jean Turgeon’s three-part blog on network segmentation. Read about hyper-segmentation, native stealth and elasticity.)

  • Very fast re-convergence:

    SPB identifies all possible paths through the network and selects the best path. If a path disappears, the next best path is already determined and chosen in a couple of hundred milliseconds or less. (See point No. 7 in Top 10 things you need to know about Avaya Fabric Connect; a simplified sketch of this precomputed-path idea follows this list.)

  • Internet of Things (IoT) support:

    SPB works just as well connecting racks of virtualized compute infrastructure as it does connecting Wireless Access Points (WAPs), CCTV cameras, sensors, controls, phones, etc. See the blog Security and the IoT: Where to Start, How to Solve for more information.
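To make the re-convergence point a bit more concrete, here is a toy Python sketch of the general idea: compute the best path, pre-compute the next-best path, and make failover a simple lookup. The topology, link costs, and code are my own illustration of the concept, not the actual link-state implementation that SPB uses.

```python
# Toy illustration of pre-computed primary/backup paths (not Avaya's code).
import heapq

TOPOLOGY = {                      # adjacency list: node -> {neighbor: cost}
    "edge-A": {"core-1": 10, "core-2": 10},
    "core-1": {"edge-A": 10, "core-2": 5, "edge-B": 10},
    "core-2": {"edge-A": 10, "core-1": 5, "edge-B": 20},
    "edge-B": {"core-1": 10, "core-2": 20},
}

def shortest_path(graph, src, dst):
    """Plain Dijkstra; returns (cost, [nodes]) or (inf, [])."""
    queue = [(0, src, [src])]
    seen = set()
    while queue:
        cost, node, path = heapq.heappop(queue)
        if node == dst:
            return cost, path
        if node in seen:
            continue
        seen.add(node)
        for nbr, weight in graph.get(node, {}).items():
            if nbr not in seen:
                heapq.heappush(queue, (cost + weight, nbr, path + [nbr]))
    return float("inf"), []

def primary_and_backup(graph, src, dst):
    """Pre-compute the best path plus a backup that avoids its first link."""
    cost, primary = shortest_path(graph, src, dst)
    pruned = {node: dict(nbrs) for node, nbrs in graph.items()}
    pruned[primary[0]].pop(primary[1], None)   # remove the first hop, both ways
    pruned[primary[1]].pop(primary[0], None)
    return (cost, primary), shortest_path(pruned, src, dst)

if __name__ == "__main__":
    best, fallback = primary_and_backup(TOPOLOGY, "edge-A", "edge-B")
    print("primary:", best)      # (20, ['edge-A', 'core-1', 'edge-B'])
    print("backup :", fallback)  # (25, ['edge-A', 'core-2', 'core-1', 'edge-B'])
    # For scale: SPB service identifiers (I-SIDs) are 24-bit values, which is
    # where the "16 million virtual networks" figure above comes from.
    print("I-SID space:", 2 ** 24)   # 16,777,216
```

Because the fallback is already sitting in the forwarding table, the switch-over on a link failure is a lookup rather than a fresh calculation, which is what keeps re-convergence in the sub-second range.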

One benefit that engine designers had that network engineers don’t have is the new model year. Consumers don’t expect to take their old car into the dealer and get an engine upgrade. They trade the old car in for an entirely new one. Network engineers are expected to upgrade the network by replacing parts, usually while the network is still running. Avaya’s Fabric Extend allows SPB to be deployed by simply replacing the edge switches and utilizing your existing core network. Extending the fabric across the existing core doesn’t provide all of the benefits of a full fabric deployment, but it does provide a means to execute a rolling fabric conversion, kind of like upgrading the carburetor while the car is running.

Related Articles:

APTs Part 1: Protection Against Advanced Persistent Threats to Your Data

Hardly a day goes by without hearing about a data breach somewhere in the world. So it’s timely that we launch this new blog series about Security. To kick the series off, we’ll take a look at some of the alarming trends in the development of Advanced Persistent Threats (APTs). We’ll explore what they are and how they operate. Along the way, we’ll provide simple advice to help you limit their impact on your enterprise.

In the old days, we mainly dealt with fly-by automated attacks. We all recall worms and Trojans and the other little beasts in the menagerie of malware. They were fairly simple at first, but as time moved forward, the degree of sophistication and stealthy behavior of this code has drastically increased. There are a couple of reasons for this. First, code naturally evolves as multiple individuals contribute to it, growing in feature set and reliability. Even malicious code benefits from collaborative development. Second, the design goal has changed from doing immediate damage to remaining hidden. This is the goal of the APT.

  • APTs are advanced.

    Typically, they come from a sizable group of individuals who are well funded and well equipped. Many people will automatically think APTs come from China and Russia, but the reality is they can be and are anywhere. The U.K. is one of the leading nations, and there are plenty in the U.S. as well. These groups are also given a set of targets, or perhaps even a single target.

  • APTs are persistent.

    This is a group that owes its whole existence to penetrating the assigned target. Many times, there are handsome bonuses for success. They will persist for months and even years, if necessary, waiting for the right moment.

  • And while they do not seek to do immediate damage, they most definitely are a threat.

    Their goal is to penetrate the network, access sensitive information, and establish command-and-control points within it, with devastating results. The recent data breach at Yahoo is the latest example, with roughly 400 million records stolen. Let’s also not forget that the NSA itself was breached, with sensitive cyberattack tools exfiltrated as a result.

While many will still say “not in my network,” research indicates the attacker in most breaches is resident in the network for an average of 256 days without being discovered. Further, about 81% of those breached did not identify it themselves. They were notified by third parties such as banks, credit card vendors, or law enforcement—and though we can’t tell exactly, it’s suspected that up to 94% don’t know they’ve been hacked until long afterward.

Now don’t get me wrong, we still have plenty of malware out there and it’s growing in volume every day. As an example, there are 25 million new instances of malware that cannot be blocked by traditional antivirus solutions. The added venom to the mix, however, is that now there are well-equipped teams using malware in a tightly orchestrated fashion. It’s reported that 70% of known breaches involved the use of malware, deployed in a well-thought-out, orchestrated manner. The rules have changed, so we had better up our game. In my next blog, we’ll take a closer look at a typical method of APT operations and the concepts of kill chains and attack trees, as well as how attackers go about getting into your enterprise.

You’re likely wondering what you can do to protect yourself. Well, the NSA recommends implementing highly granular microsegments. This prevents lateral movement, which is critical to an attacker’s ability to escalate privileges within the environment. They also recommend creating stealth or black networks that yield little or no information to scans and probes. Finally, these secure microsegments should ideally pass like ships in the night, with no (or at least very constricted) communications capability to other segments.

Avaya has embraced this philosophy in our recent security launch. Hyper-segmentation provides for highly granular segmentation, stealth provides for the black network environment, and elasticity provides for strong perimeter protection, allowing access to users and devices only once they have been vetted, established as trusted, and authenticated. We’ll go much deeper into this in the third installment of this series on APTs. Until then, don’t be afraid. Be prepared.

Data Protection Part 3: Where Does Encryption Fit into Data Protection?

I’ve mentioned that the SNIA SDC Conference provided the catalyst for writing this blog series on data protection. (See part 1 and part 2.) While there was a lot of discussion at the Conference on protecting data from loss, there were also discussions about encryption of data at rest. I noticed that the conversations about data protection and encryption were separate and distinct. Networks also employ data encryption methods to protect data in transit. I was sitting in one of the sessions when I began to wonder how many times a data block is encrypted and decrypted in its life. I also wondered how many times encrypted data is encrypted again as “data at rest” becomes “data in transit.”

Data at rest encryption can be divided into two categories: media encryption and object encryption. Media encryption focuses on protecting data at the physical device level, such as the disk drive or tape. Encryption for data at rest can be performed at multiple places in the data storage stack—host, SAN appliance or target device (tape drive, tape library, disk drive). There are use cases for each deployment option, but generally it’s best to encrypt as close to the physical media as the use case allows. There are often trade-offs to be examined. For instance, encryption defeats the value of deduplication. In most data sets, there is a lot of repeated data that can be managed more efficiently if deduplicated. If host-based encryption is employed, the value of deduplicating data downstream, such as in WAN acceleration, is eliminated.
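To illustrate the deduplication trade-off, here is a minimal Python sketch (assuming the third-party cryptography package): two identical blocks fingerprint identically before encryption, but once each block is encrypted under its own random nonce the ciphertexts no longer match, so a downstream dedup engine finds nothing to deduplicate.

```python
# Sketch: host-side encryption defeats downstream deduplication.
import hashlib
import os

from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

def fingerprint(block: bytes) -> str:
    """Dedup engines typically index blocks by a hash like this."""
    return hashlib.sha256(block).hexdigest()

def encrypt_block(key: bytes, block: bytes) -> bytes:
    nonce = os.urandom(16)                   # unique per block
    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce)).encryptor()
    return nonce + encryptor.update(block) + encryptor.finalize()

key = os.urandom(32)
block_a = b"A" * 4096                        # two identical 4 KiB blocks
block_b = b"A" * 4096

# Before encryption the blocks dedupe; after encryption they don't.
print(fingerprint(block_a) == fingerprint(block_b))             # True
print(fingerprint(encrypt_block(key, block_a)) ==
      fingerprint(encrypt_block(key, block_b)))                 # False
```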

The benefit of encryption at the media level has generated interest in encrypting tape drives and Self-Encrypting Drives (SEDs). Tape use cases are pretty straightforward: create a data copy on media that can be shipped off-site for protection of the data at the primary site. Tape used to be the primary backup media, but with long recovery times and the data explosion, tape has been relegated to tertiary copies and long-term archive of data that doesn’t need to be online. The key point is that tape is designed to be shipped off-site, meaning the shipment could be hijacked. Encrypting the data on the tapes makes a lot of sense. A box of encrypted tapes has the same value as a box of used tapes, i.e., not worth hijacking.

I have “sold” a lot of SEDs in my career. I’ve always tried to be honest with customers. SEDs have limited value in data center operations. Drives deployed in the data center aren’t intended to transport data out of the data center. (There are a couple of valid use cases where moving data on drives makes sense, such as data center moves or seeding a disaster recovery site, but of all the drives deployed, very few are used in this manner.) I’d often test a customer’s view of SEDs with a simple question: “Do you have locks on your data center doors?” Some customers would get the joke, and I knew I could have a frank conversation. If the customer didn’t get the joke (i.e., understand that SEDs only provide protection if the physical drive falls into the wrong hands), I proceeded cautiously. Two other factors came into play at that moment: there was a commission-driven account manager in the room (SEDs are slightly more expensive than non-SEDs), and paranoid customers with deep pockets are an IT supplier’s best friend. Of course, just because you’re paranoid doesn’t mean that there aren’t a few thousand hackers out there looking to gain value from your data.

Bottom line is, if someone gains access to your network or manages to compromise someone’s username/password, SEDs don’t help. The encryption key is automatically applied to the drive when the drive is started. Any process that has access to the system after that isn’t going to be denied access by the drive encryption.

The primary value of SEDs is when a drive is deployed outside the data center, where the primary or secondary data protection is at the drive itself. The best example is a laptop. You can assume that the data on a stolen laptop without encryption is in the hands of the thief. Other portable devices, such as tablets, smartphones, etc., also have encrypted storage devices, though not a hard drive in the conventional sense. Note: Many Solid State Drives (SSDs) are also SEDs, which makes the case for the SSD option in your next laptop stronger.

Before my friends in the storage industry start tweeting about me, I do see a few cases where SEDs add value in the data center.

  1. Compliance. Many security offices require SEDs—it never hurts to have SEDs, just understand where they fit in the security stack.
  2. Storage is going to be repurposed. Lab environments, cloud providers, etc., where storage may be used for one project or customer today and another tomorrow, may desire or require complete data erasure. The easiest way to erase data is to encrypt it and then delete the key. The data will technically still be on the drive, but not accessible. (A minimal sketch of this crypto-erase idea follows this list.)
  3. End-of-life destruction. Drives (spinning and SSD) do wear out and need to be disposed of. Some people require physical destruction of the drive (I’ve heard of people shooting them with a high-powered rifle, but never witnessed it). There are services that will crush or shred the drive. However, it’s easier to shred the key. (Paranoid people do both.)
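Here is a minimal sketch of the crypto-erase idea from item 2, again assuming the Python cryptography package: destroy the key and the ciphertext left on the media becomes useless, which is far faster than overwriting or shredding the drive itself.

```python
# Sketch: crypto-erase by destroying the key rather than the media.
from cryptography.fernet import Fernet, InvalidToken

key = Fernet.generate_key()            # in practice this lives in a key manager
ciphertext = Fernet(key).encrypt(b"customer records for a retired project")

key = None                             # "erase" the data by destroying the key

try:
    Fernet(Fernet.generate_key()).decrypt(ciphertext)   # any other key fails
except InvalidToken:
    print("the ciphertext still occupies space, but the data is gone")
```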

Object-level encryption is another way to address protecting data at rest. I’m using a very vague definition of object for this discussion. Objects are often associated with a particular cataloging mechanism that supports very large numbers of items to be stored. In this case, I’m not being that specific. Think of an object as something that could be addressed at the application level. I spent a great portion of my storage career working on Network Attached Storage (NAS) systems, another poorly named technology. NAS is essentially a purpose-built file server. For this conversation, a file and the file system could be considered objects.

I’ve had many conversations with customers about protecting files with encryption. Customers often wanted to encrypt the entire file system. This is pretty straightforward: one key to manage for all of the content in the file system. The problem is that the one key tends to be widely distributed—any person or process that needs access to a file gets the key to the entire file system. A side effect of this kind of solution is that all metadata of the file system is also encrypted. So operations like backups, which work from creation and modification timestamps, need to have the key to the file system. Therefore, systems like backup servers, anti-virus servers, etc., have to be secured, as they literally hold the keys to the kingdom.

Another approach is to encrypt the files within the file system. Think of systematic zipping of files with encryption and a password. This has the benefit of not affecting the metadata. A file can be moved, emailed, deleted, etc., without decrypting it. The backup software doesn’t need the key to execute, and the files in the backup are encrypted. Operations that need to access the internals of the file, such as anti-virus or full-text search, still require the keys. The challenge is managing the keys and access control lists. Some files are written and read by only one person/application. However, most files are intended to be shared. For instance, emailing an encrypted file doesn’t do the recipient any good unless you also provide the key. I know a lot of people who encrypt any file that they put in their “free cloud storage.” It isn’t that they don’t trust the cloud provider—it’s just that sometimes a little paranoia is a good thing.
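A small sketch may help show the difference from whole-file-system encryption: below, the file body is encrypted, but the name and timestamps stay in the clear, so a key-less backup job can still select files by modification time. This assumes the Python cryptography package, and the file name is hypothetical.

```python
# Sketch: per-file encryption leaves metadata usable without the key.
import os
import time

from cryptography.fernet import Fernet

key = Fernet.generate_key()

with open("report.txt", "wb") as f:              # plaintext original
    f.write(b"quarterly numbers")
with open("report.txt", "rb") as f:
    ciphertext = Fernet(key).encrypt(f.read())
with open("report.txt.enc", "wb") as f:          # encrypted replacement
    f.write(ciphertext)
os.remove("report.txt")

# A key-less backup job can still use metadata to decide what to copy...
print("modified:", time.ctime(os.stat("report.txt.enc").st_mtime))
# ...but only a key holder can read the contents.
print(Fernet(key).decrypt(ciphertext))
```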

So why not encrypt everything everywhere? As I pointed out above, encrypted file systems are hard to manage. Encryption also makes it harder to detect intrusions in the network when the data-in-transit is encrypted. I can remember pointed discussions between the storage admins and the network admins about encrypting replicated data. The storage admin wanted the data encrypted at the source array and decrypted at the target array. The network admin wanted to encrypt at the WAN edge device, so they had visibility into the data leaving the building.

An interesting shift is the use of encryption by hackers. Rather than copy your data, they encrypt it and then offer to sell you the key. This phenomenon is called ransomware. While detection of the malicious code is the preferred defense, a good data backup enables a recovery plan. Suppose you have hourly copies of your data. Rather than pay the ransom, you could restore your data to the point in time before it became encrypted.
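Here is a toy sketch of that point-in-time recovery decision, with made-up timestamps: pick the newest backup copy taken before the attack is believed to have started, allowing for some undetected dwell time.

```python
# Sketch: choose a restore point older than the suspected start of the attack.
from datetime import datetime, timedelta

hourly_backups = [datetime(2016, 11, 7, h) for h in range(12)]   # 00:00..11:00
attack_detected = datetime(2016, 11, 7, 9, 35)
suspected_dwell = timedelta(hours=2)       # how long it may have gone unnoticed

cutoff = attack_detected - suspected_dwell
restore_point = max(b for b in hourly_backups if b <= cutoff)
print("restore from:", restore_point)      # 2016-11-07 07:00:00
```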

At this point, if you’re expecting me to tie a nice little bow around data protection, you’re going to be disappointed. Protecting data in a world where the threats were application errors, failed components, undetected bit swaps and natural disasters was a challenge. Today, the threats come from teams of well-funded experts focused on finding the weak links in your data security structure. The threat landscape is constantly changing. It is very difficult, if not impossible, to protect against all threats. The IT industry is working to provide solution components. However, the volatility of the threats forces overall protection to remain reactive to the attackers’ technology.

Organizations need to look at the problem the way the Navy does when protecting data:

  • Implement layers of security
  • Assume that any layer is penetrable
  • Minimize the damage of a penetration

The first step is to limit access to your data infrastructure through identity checks, and to restrict that access to a need-to-know basis. Avaya Identity Engines provide a powerful portfolio of tools for managing user and device access to your network. However, assume that someone will figure out how to forge credentials and gain access to your infrastructure.

Avaya SDN Fx provides a key foundational component of a data security solution, minimizing the exposure of your network to unauthorized access. So when the spy gains access to your network, you can limit the exposure and keep the perpetrator from wandering around your network looking for the good stuff.

Encryption of data in transit and data at rest, along with data backups, provides another level of defense and recoverability when other layers are breached.

Finally, everybody needs to be involved in keeping data secure. I was interrupted while writing this conclusion to help a sales engineer with an opportunity. I emailed him several docs and links to others as background information. Even though the docs are marked to show which ones are for internal use only, I noted in the email which docs were sensitive and couldn’t be shared with the customer. Proper strategies include systems, processes, and people all working together across organizations and technology stacks to prevent data from being lost or ripped off.

I’ve always been a believer that the effort to make things idiot-proof was often wasted because they just keep making better idiots. In this case, they’re making better experts to come after your data. Fortunately, we have very intelligent experts working for the good guys too. We’ll always be one step behind, but we can continue to strive to minimize the threat surface and minimize the impact of the surface being violated.

Data Protection Part 2: What About Unauthorized Data Access?

In part 1, we explored the physical loss of data, meaning the data is no longer available for access (intelligently) by anyone. There’s another, perhaps more significant threat to your data protection efforts: someone gaining access to sensitive information, often referred to as a data breach. An argument could be made that in some cases it is better to have lost data than to have data become available to unauthorized individuals. For example, when it comes to personal photos, would you prefer nobody sees them again or everyone sees them?

Data breaches didn’t start with the digital age. In my blog on Wireless Location Based Services (WLBS), I explained that I started my career working for the U.S. Navy. We had very specific rules for handling classified documents (and penalties for mishandling them). Not all documents were handled the same. We had file cabinets with different levels of security (weight, strength, lock) for different document classifications. The idea being that the more valuable the information, the harder it should be to gain access to the documents. At one facility, the windows were replaced with glass block to prevent someone outside the facility from reading documents on our desks. Watergate is an example of a high-profile data breach long before the digital age.

Not all unauthorized data access comes from outside an organization. At one point in my career, someone left a document on the printer that contained specific employee compensation information. The information didn’t leave the company, but it still caused significant issues for HR.

Frederick Wilfrid Lancaster proposed that computers would create a paperless society. I’d argue that computers made it more efficient to generate paper, but with the advent of better user interfaces and a generation that didn’t grow up dependent on paper, maybe he’ll be correct. The information age, however, has changed the threat profile for data breaches. The challenge is the same: keep people from gaining access to the information. Security guards, identity badges, and locks provided the primary security mechanisms for physical document protection. There are many movies about spies duplicating badges, picking locks, and using cameras that looked like ball-point pens or lighters to photograph documents. But you never see a spy photographing the entire contents of a file cabinet (stealing the cabinet would require a forklift and draw too much attention).

Now the thieves don’t have to go to the facility. They gain access to your data via your network, and you don’t know they got access until after they’re gone. The big difference is that the thief can make a copy of the entire file cabinet or database. Stealing credit card information is not new. Some unscrupulous sales people have always copied down credit card numbers and gone on shopping sprees. The difference is scale. A salesperson or bartender might get a handful of numbers at a time, but hackers get millions.

In the paper world, incursions were addressed by defense in depth tactics. When I entered the Navy base, my ID was checked at the outer gate, granting me access to certain areas on the base. As I got closer to the pier, my ID would be checked again to ensure I was authorized to access the ship area. My ID would be checked when I got to my office building, again to verify that I should be there. At this point I still didn’t have access to any sensitive information. To gain access to documents, I had to know the combination to the safe. So up to this point, security was focused on identifying who had the privilege to present the combination to the safe. Similar tactics are deployed in the digital world today. User names, passwords, and access control lists (ACLs) are common methods for identification, authentication, and authorization. Avaya’s Identity Engines provide a powerful portfolio of tools for managing user and device access to your network.
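The digital version of those layered checks can be sketched in a few lines of Python. The users, resources, and toy password scheme below are hypothetical; a real deployment would use a proper password-hashing scheme and a directory or network access control service such as Identity Engines rather than an in-memory dictionary.

```python
# Sketch: identification, authentication, then authorization against an ACL.
import hashlib

def digest(password: str) -> str:
    # Placeholder only; real systems use salted, slow password hashes.
    return hashlib.sha256(password.encode()).hexdigest()

USERS = {"dan": digest("correct horse battery staple")}   # authentication data
ACLS = {                                                  # authorization data
    "base-gate": {"dan", "food-service"},
    "pier": {"dan"},
    "safe-combination": {"dan"},
}

def check_access(user: str, password: str, resource: str) -> bool:
    authenticated = USERS.get(user) == digest(password)   # who are you, really?
    authorized = user in ACLS.get(resource, set())        # may you go here?
    return authenticated and authorized

print(check_access("dan", "correct horse battery staple", "pier"))    # True
print(check_access("dan", "wrong password", "pier"))                  # False
print(check_access("food-service", "anything", "safe-combination"))   # False
```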

Layers of access control are great as long as data is placed behind the proper level of security and there isn’t a way to sneak between layers. If someone had placed sensitive information just inside the outer gate of the base, then anyone who gained access to the base would have had access to the information. Europol terror data was compromised because someone made a copy on an unprotected device exposed to the Internet; this defeated all other security measures. As another example, think about the difference between web content posted for customers on the public website vs. content posted on the internal sales portal. Some information is available to everyone and is posted on the sales portal for convenience (e.g., spec sheets). However, there is information made available to partners that shouldn’t be available to customers (or the competition), such as pricing, sales presentations, or competitive positioning.

Poke around many company public websites and you’ll find information you aren’t supposed to see. Often, the documents will even be labeled with “internal use only.” IT can deploy Identity Engines or similar solutions to the best of their ability, but if the rest of the organization fails to pay attention to information security, data will be leaked.

Nobody is perfect, and neither is any system. Occasionally, cracks are exposed in data security. Hackers are very persistent and will keep poking around until they find a weak spot. Because of the architecture and complexity of modern networks, once a hacker gains a modicum of access, they can often get full access to the network. The intruder can use tools to easily discover the network topology and then determine where to gain access to valuable information.

The Navy doesn’t allow everyone to roam around the base just because they gain access through the perimeter gate. You don’t want people to roam around your network just because they found a weak entry point. Suppose a spy with forged credentials shows up at the gate in a food service truck. Food security isn’t a high concern, so security checks on the credentials are minimal. If the base is wide open, the spy could drive the truck anywhere. However, interior checks prevent the food service truck from access to sensitive areas such as the munitions warehouse.

Network segmentation with the Avaya SDN Fx Architecture provides similar protections. Shortest Path Bridging (SPB) delivers services as Virtual Service Networks (VSNs). A VSN is similar to a VLAN, except that a VSN is totally isolated from all other segments unless it is specifically authorized to have access to another segment (an L3 route). If the Navy could implement the VSN concept for vehicular traffic, the food truck spy would be assigned to a virtual road that only went to buildings that served food. The spy wouldn’t be aware of any other road, wouldn’t see any other buildings, and wouldn’t have any idea how to get to the munitions building. In fact, there wouldn’t even be any indication that a munitions building existed.
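Here is a conceptual Python model of that isolation property (my own illustration, not Avaya’s implementation): endpoints belong to a VSN, and traffic flows only within a VSN unless an inter-VSN route has been explicitly provisioned. The endpoint and VSN names are hypothetical.

```python
# Sketch: VSN membership plus an explicit (and normally empty) inter-VSN table.
ENDPOINT_VSN = {
    "food-truck": "vsn-food-service",
    "cafeteria": "vsn-food-service",
    "hvac-controller": "vsn-building-controls",
    "pos-terminal": "vsn-payments",
}

INTER_VSN_ROUTES = set()   # e.g. {("vsn-food-service", "vsn-payments")}

def can_talk(src: str, dst: str) -> bool:
    a, b = ENDPOINT_VSN[src], ENDPOINT_VSN[dst]
    return a == b or (a, b) in INTER_VSN_ROUTES or (b, a) in INTER_VSN_ROUTES

print(can_talk("food-truck", "cafeteria"))          # True: same VSN
print(can_talk("food-truck", "pos-terminal"))       # False: no route provisioned
print(can_talk("hvac-controller", "pos-terminal"))  # False: building controls
                                                    # can't reach payments
```

Retracting a temporary virtual road, as in the ball field example that follows, is just removing an entry from that route table; the default state is no reachability at all.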

Further, suppose there’s a celebration being held at the ball field on base. The celebration has a temporary kitchen set up that requires a food delivery. A virtual road could be set up to allow food service trucks to get to the ball field. As soon as the event concludes, the virtual road is retracted, eliminating food service truck access to the ball field.

To explain more about this approach, Jean Turgeon, Vice President and Chief Technologist for SDN at Avaya, has a three-part blog series on the security benefits of end-to-end network segmentation.

Jean Turgeon mentions the 2013 data breach at Target. The hackers gained access to an HVAC network and wandered around until they gained access to PCI information. If Target had implemented hyper-segmentation, the worst the hackers could have done was change the ambient environment, hardly an event that would have made headlines around the world or provided blog topics three years later.

In part 3 of this series, we’ll explore the role of data encryption in preventing data loss.