An Exploration of End-to-End Network Segmentation—Part II: Native Stealth

As I’ve said before, no single provider can completely eliminate network security risks. There is, however, a proven way to reduce risk and network exposure: end-to-end segmentation, which comprises hyper-segmentation, native stealth, and automated elasticity. In part I of this series, I explored the concept of hyper-segmentation. In a nutshell, hyper-segmentation uses Shortest Path Bridging (SPB, IEEE 802.1aq) to quickly and easily create virtual network segments that are completely isolated from one another. This lets network security tools perform more efficiently and gives businesses full transparency into network activity.

Now imagine if you could create these virtual segments on the fabric infrastructure itself, meaning the topology used to carry the traffic would be completely invisible to any IP discovery or hacking. That’s exactly what we’re going to discuss here in part II: delivering a stealth network that keeps hackers in the dark. Let’s jump right in.

The Risk of IP Hopping

If you still rely on IP hopping, it’s likely only a matter of time before someone enters your network and quickly discovers your full network topology, potentially without you knowing (if someone hasn’t already). I understand it can be difficult to grasp how a method that’s been in practice for nearly 30 years can be so insecure, but remember: just because a methodology has been around for a long time doesn’t mean it meets today’s business requirements.

The problem with IP hopping is simple: once someone successfully enters a network using any kind of automated or reasonably sophisticated tool, they can begin discovering IP hop routes. These tools, when in the wrong hands, can allow attackers to gain full visibility into an organization’s IP architecture.

This means that if a hacker successfully penetrates your firewall, within minutes they will be able to see your full network topology and all of your devices (and you thought Halloween was scary!). With this level of transparency, attackers can effortlessly detect, for example, where video surveillance runs or where patient records are stored, and then begin attacking those devices, databases, nodes, or systems.

This is one of the reasons so many companies hesitate to offer guest Wi-Fi services. It’s one of the easiest and lowest-risk ways for hackers to penetrate a company’s firewall and begin gaining network visibility. Remember, RF leaks out of buildings and through walls; sit in a parking lot near a building and voilà!

Stealth Networks: Invisible to Hackers, Invincible for Companies

If you recall, in part I we discussed the importance of provisioning the network only at the points where a service is offered and where that service is consumed by the end user or device (an IoT device, for example). By provisioning only at those service points (using an IP shortcut), the rest of the network essentially becomes pure transport, because traffic follows Ethernet Switch Paths (ESPs) instead of the typical IP hopping from node to node. This eliminates hackers’ dependence on IP routes and allows them to see only entry and exit points. Everything else becomes stealth, or invisible.
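The difference in what an attacker can map is easy to sketch with a toy model (hypothetical node names, not Avaya code): in a hop-by-hop routed network, every router handles the IP header and shows up in a traceroute-style probe, while fabric transport carries traffic encapsulated edge to edge, so interior nodes never appear.

```python
# Toy comparison of path visibility: hop-by-hop IP routing vs. fabric
# transport. Node names are hypothetical; this models the concept only.

path = ["edge-in", "core-1", "core-2", "core-3", "edge-out"]

def visible_hops(path, fabric=False):
    """Return the nodes a traceroute-style probe would reveal."""
    if fabric:
        # Interior nodes forward the encapsulated frame untouched,
        # so only the entry and exit points respond to probes.
        return [path[0], path[-1]]
    return list(path)  # every router processes the IP header

print(visible_hops(path))               # all five nodes exposed
print(visible_hops(path, fabric=True))  # only entry and exit points
```

With hop-by-hop routing the whole core is enumerable; over the fabric, an attacker mapping from the edge sees just the two service points.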

Remember the above example about penetrating the firewall through a Wi-Fi network? Let’s say this happens to a company that’s implemented an end-to-end segmentation solution. The hacker may successfully connect to the company’s physical infrastructure but, because of native stealth, they will only be able to see as far as that one segment. The attackers can’t hack what they can’t see. Meanwhile, organizations gain more controlled insight into where attackers are trying to do damage.

At the end of the day, you can’t stop every hacker from penetrating your network or firewall, or from gaining access to your building. If they do get in, however, end-to-end hyper-segmentation lets you control what they see, with the peace of mind that your customer databases, credit card numbers, and other sensitive assets remain securely isolated and undiscoverable. Don’t expose your customers’ credit card information (PCI data), patient records, or similar assets. Isolate that critical data in a secure virtual segment and run it over that ONE converged infrastructure. With the right solution, there’s no longer any need for a separate physical network to meet your business security needs.

We’re almost done exploring the core of end-to-end segmentation. Elasticity is the final capability that completes this network security trifecta, and I dig into it in part III next week.

Related Articles:

APTs Part 1: Protection Against Advanced Persistent Threats to Your Data

Hardly a day goes by without hearing about a data breach somewhere in the world. So it’s timely that we launch this new blog series about Security. To kick the series off, we’ll take a look at some of the alarming trends in the development of Advanced Persistent Threats (APTs). We’ll explore what they are and how they operate. Along the way, we’ll provide simple advice to help you limit their impact on your enterprise.

In the old days, we mainly dealt with fly-by automated attacks. We all recall worms, Trojans, and the other little beasts in the menagerie of malware. They were fairly simple at first, but as time moved on, the sophistication and stealth of this code drastically increased. There are a couple of reasons for this. First, code naturally evolves as multiple individuals contribute to it, growing in feature set and reliability; even malicious code benefits from collaborative development. Second, the design goal has changed from doing immediate damage to remaining hidden. This is the goal of the APT.

  • APTs are advanced.

    Typically, they come from a sizable group of individuals who are well funded and well equipped. Many people automatically assume APTs come from China or Russia, but the reality is they can be, and are, anywhere: the U.K. is one of the leading nations, and there are plenty in the U.S. as well. These groups are typically given a set of targets, or perhaps even a single one.

  • APTs are persistent.

    This is a group that owes its whole existence to penetrating the assigned target. Many times, there are handsome bonuses for success. They will persist for months and even years, if necessary, waiting for the right moment.

  • And while they do not seek to do immediate damage, they most definitely are a threat.

    Their goal is to penetrate the network, access sensitive information, and establish command and control points within it, with devastating results. The recent data breach at Yahoo is the latest example, with hundreds of millions of records stolen. Let’s also not forget that the NSA itself was breached, resulting in the exfiltration of sensitive cyberattack tools.

While many will still say “not in my network,” research indicates that in most breaches the attacker is resident in the network for an average of 256 days before being discovered. Further, about 81% of breached organizations did not identify the breach themselves; they were notified by third parties such as banks, credit card vendors, or law enforcement. And though exact figures are elusive, it’s suspected that up to 94% don’t know they’ve been hacked until long afterward.

Now don’t get me wrong, we still have plenty of malware out there, and it’s growing in volume every day. As an example, there are some 25 million new instances of malware that cannot be blocked by traditional antivirus solutions. The added venom in the mix, however, is that well-equipped teams are now using malware in a tightly orchestrated fashion: it’s reported that 70% of known breaches involved malware, deployed in a well-thought-out, coordinated manner. The rules have changed, so we had better up our game. In my next blog, we’ll take a closer look at a typical method of APT operations and the concepts of kill chains and attack trees, as well as how attackers go about getting into your enterprise.

You’re likely wondering what you can do to protect yourself. Well, the NSA recommends implementing highly granular microsegments. This prevents lateral movement, which is critical to an attacker’s ability to escalate privileges within the environment. They also recommend creating stealth or “black” networks that yield little or no information to scans and probes. Finally, these secure microsegments should ideally pass like ships in the night, with no (or at least very constricted) communication with other segments.

Avaya has embraced this philosophy in our recent security launch. Hyper-segmentation provides highly granular segmentation, stealth provides the black network environment, and elasticity provides strong perimeter protection, allowing access to users and devices only once they have been vetted, established as trusted, and authenticated. We’ll go much deeper into this in the third installment of this series on APTs. Until then, don’t be afraid. Be prepared.

Data Protection Part 3: Where Does Encryption Fit into Data Protection?

I’ve mentioned that the SNIA SDC Conference provided the catalyst for writing this blog series on data protection. (See part 1 and part 2.) While there was a lot of discussion at the Conference on protecting data from loss, there were also discussions about encryption of data at rest. I noticed that the conversations about data protection and encryption were separate and distinct. Networks also employ data encryption methods to protect data in transit. I was sitting in one of the sessions when I began to wonder how many times a data block is encrypted and decrypted in its life. I also wondered how many times encrypted data is encrypted again as “data at rest” becomes “data in transit.”

Data-at-rest encryption can be divided into two categories: media encryption and object encryption. Media encryption focuses on protecting data at the physical device level, such as the disk drive or tape. Encryption for data at rest can be performed at multiple places in the data storage stack: host, SAN appliance, or target device (tape drive, tape library, disk drive). There are use cases for each deployment option, but generally it’s best to encrypt as close to the physical media as the use case allows. There are often trade-offs to be examined. For instance, encryption defeats the value of deduplication. Most data sets contain a lot of repeated data that can be managed more efficiently when deduplicated, but if host-based encryption is employed, the value of deduplicating data downstream, such as in WAN acceleration, is eliminated.
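The dedup trade-off is easy to demonstrate. The sketch below uses a toy cipher built on SHA-256 (purely illustrative, not a real encryption scheme): it counts unique blocks in a repetitive data set before and after per-block encryption with random nonces, showing that the repeats deduplication would exploit disappear.

```python
import hashlib
import os

def toy_encrypt(block: bytes, key: bytes) -> bytes:
    """Toy per-block encryption: random nonce + SHA-256-derived
    keystream. Illustrative only; not a real cipher."""
    nonce = os.urandom(16)
    stream = hashlib.sha256(key + nonce).digest()
    ciphertext = bytes(b ^ s for b, s in zip(block, stream))
    return nonce + ciphertext

def unique_blocks(blocks) -> int:
    """How many distinct blocks a deduplicator would have to store."""
    return len(set(blocks))

key = b"secret"
# Highly repetitive data set: 10 blocks, only 2 distinct values.
data = [b"same 16-byte blk"] * 8 + [b"another 16B blk."] * 2

plain_unique = unique_blocks(data)
cipher_unique = unique_blocks(toy_encrypt(b, key) for b in data)

print(plain_unique, cipher_unique)  # dedup sees 2 blocks; post-encryption, 10
```

Because each block gets a fresh nonce, identical plaintext blocks produce different ciphertexts, so a downstream deduplicator (or WAN accelerator) finds nothing to collapse.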

The benefit of encryption at the media level has generated interest in encrypting tape drives and in Self-Encrypting Drives (SEDs). The tape use case is straightforward: create a data copy on media that can be shipped off-site to protect against loss of the primary site. Tape used to be the primary backup medium, but with its long recovery times and the data explosion, it has been relegated to tertiary copies and long-term archival of data that doesn’t need to be online. The key point is that tape is designed to be shipped off-site, meaning the shipment could be hijacked. Encrypting the data on the tapes therefore makes a lot of sense: a box of encrypted tapes has the same value as a box of used tapes, i.e., not worth hijacking.

I have “sold” a lot of SEDs in my career, and I’ve always tried to be honest with customers: SEDs have limited value in data center operations. Drives deployed in the data center aren’t intended to transport data out of it. (There are a couple of valid use cases where moving data on drives makes sense, such as data center moves or seeding a disaster recovery site, but of all the drives deployed, very few are used this way.) I’d often test a customer’s view of SEDs with a simple question: “Do you have locks on your data center doors?” Some customers would get the joke, and I knew I could have a frank conversation. If the customer didn’t get the joke (i.e., didn’t understand that SEDs only provide protection if the physical drive falls into the wrong hands), I proceeded cautiously. Two other factors came into play at that moment: there was usually a commission-driven account manager in the room (SEDs are slightly more expensive than non-SEDs), and paranoid customers with deep pockets are an IT supplier’s best friend. Of course, just because you’re paranoid doesn’t mean there aren’t a few thousand hackers out there looking to gain value from your data.

The bottom line: if someone gains access to your network or manages to compromise someone’s username and password, SEDs don’t help. The encryption key is applied automatically when the drive starts up, so any process that has access to the system after that point isn’t going to be denied access by the drive encryption.

The primary value of SEDs comes when a drive is deployed outside the data center, where the primary or secondary data protection is at the drive itself. The best example is a laptop: you can assume that the data on a stolen, unencrypted laptop is in the hands of the thief. Other portable devices, such as tablets and smartphones, also have encrypted storage, though not a hard drive in the conventional sense. Note: many Solid State Drives (SSDs) are also SEDs, which makes the case for the SSD option in your next laptop stronger.

Before my friends in the storage industry start tweeting about me, I do see a few valid uses for SEDs in the data center.

  1. Compliance. Many security offices require SEDs. It never hurts to have them; just understand where they fit in the security stack.
  2. Storage that will be repurposed. In lab environments, cloud providers, etc., storage may be used for one project or customer today and another tomorrow, which may require complete data erasure. The easiest way to erase data is to encrypt it and then delete the key. The data will technically still be on the drive, but it won’t be accessible.
  3. End-of-life destruction. Drives (spinning and SSD) do wear out and need to be disposed of. Some people require physical destruction of the drive (I’ve heard of people shooting them with a high-powered rifle, but never witnessed it), and there are services that will crush or shred the drive. However, it’s easier to shred the key. (Paranoid people do both.)

Object-level encryption is another way to protect data at rest. I’m using a very loose definition of object for this discussion. Objects are often associated with a particular cataloging mechanism that supports storing very large numbers of items; here, I’m not being that specific. Think of an object as something that can be addressed at the application level. I spent a great portion of my storage career working on Network Attached Storage (NAS) systems, another poorly named technology; NAS is essentially a purpose-built file server. For this conversation, a file and the file system can both be considered objects.

I’ve had many conversations with customers about protecting files with encryption. Customers often wanted to encrypt the entire file system. This is straightforward: one key to manage for all of the content in the file system. The problem is that the one key tends to be widely distributed; any person or process that needs access to a file gets the key to the entire file system. A side effect of this kind of solution is that all of the file system’s metadata is also encrypted, so operations like backups, which work from creation and modification timestamps, need the file system key. Therefore, systems like backup servers and anti-virus servers have to be secured, as they literally hold the keys to the kingdom.

Another approach is to encrypt the individual files within the file system; think of systematically zipping files with encryption and a password. This has the benefit of not affecting the metadata. A file can be moved, emailed, or deleted without being decrypted, the backup software doesn’t need the key to run, and the files in the backup remain encrypted. Operations that need to access the internals of a file, such as anti-virus scanning or full-text search, still require the keys. The challenge is managing the keys and access control lists. Some files are written and read by only one person or application, but most files are meant to be shared; emailing an encrypted file does the recipient no good unless you also provide the key. I know a lot of people who encrypt any file they put in their “free cloud storage.” It isn’t that they don’t trust the cloud provider; it’s just that sometimes a little paranoia is a good thing.

So why not encrypt everything everywhere? As I pointed out above, encrypted file systems are hard to manage. Encryption also makes it harder to detect intrusions when data in transit is encrypted. I can remember pointed discussions between storage admins and network admins about encrypting replicated data: the storage admin wanted the data encrypted at the source array and decrypted at the target array, while the network admin wanted to encrypt at the WAN edge device to retain visibility into the data leaving the building.

An interesting shift is the use of encryption by hackers themselves. Rather than copy your data, they encrypt it and then offer to sell you the key, a phenomenon called ransomware. While detecting the malicious code is the preferred defense, a good backup regimen gives you a recovery option. Suppose you have hourly copies of your data: rather than pay the ransom, you can restore your data to a point in time before it was encrypted.
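Picking the restore point is simple bookkeeping. A minimal sketch, assuming a hypothetical catalog of hourly backup timestamps and an estimated infection time:

```python
from datetime import datetime, timedelta

def restore_point(backups, infected_at):
    """Return the newest backup taken strictly before the infection."""
    candidates = [b for b in backups if b < infected_at]
    if not candidates:
        raise ValueError("no clean backup available")
    return max(candidates)

# Hypothetical catalog: hourly copies over one day.
start = datetime(2016, 10, 31, 0, 0)
backups = [start + timedelta(hours=h) for h in range(24)]
infected_at = datetime(2016, 10, 31, 14, 30)  # estimated encryption time

print(restore_point(backups, infected_at))  # the 14:00 copy
```

The practical catch is estimating `infected_at` accurately; restore from too late a copy and you restore the ransomware’s ciphertext along with it.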

At this point, if you’re expecting me to tie a nice little bow around data protection, you’re going to be disappointed. Protecting data in a world where the threats were application errors, failed components, undetected bit flips, and natural disasters was already a challenge. Today, the threats come from teams of well-funded experts focused on finding the weak links in your data security structure. The threat landscape is constantly changing, and it is very difficult, if not impossible, to protect against every threat. The IT industry is working to provide solution components, but threat volatility forces overall protection to be reactive to attacker technology.

Organizations need to look at the problem the way the Navy does when protecting data.

  • Implement layers of security
  • Assume that any layer is penetrable
  • Minimize the damage of a penetration

The first step is to limit access to your data infrastructure through identity checks, and to restrict access on a need-to-know basis. Avaya Identity Engines provide a powerful portfolio of tools for managing user and device access to your network. However, assume that someone will eventually figure out how to forge credentials and gain access to your infrastructure.

Avaya SDN Fx provides a key foundational component of a data security solution by minimizing the exposure of your network to unauthorized access. When a spy does gain access to your network, you can limit the exposure and keep the perpetrator from wandering around looking for the good stuff.

Encryption of data in transit and data at rest, along with data backups, provides another layer of defense and recoverability when other layers are breached.

Finally, everybody needs to be involved in keeping data secure. I was interrupted while writing this conclusion to help a sales engineer with an opportunity. I emailed him several documents, plus links to others, as background information. Even though the documents are marked when they’re for internal use only, I noted in the email which ones were sensitive and couldn’t be shared with the customer. A proper strategy includes systems, processes, and people all working together across organizations and technology stacks to keep data from being lost or stolen.

I’ve always believed that the effort to make things idiot-proof is often wasted, because they just keep making better idiots. In this case, they’re making better experts to come after your data. Fortunately, we have very intelligent experts working for the good guys too. We’ll often be a step behind, but we can continue to strive to minimize the threat surface and minimize the impact when that surface is violated.

Data Protection Part 2: What About Unauthorized Data Access?

In part 1, we explored the physical loss of data, meaning the data is no longer available for access (intelligently) by anyone. There’s another, perhaps more significant threat to your data protection efforts: someone gaining access to sensitive information, often referred to as a data breach. An argument could be made that in some cases it is better to have lost data than to have data become available to unauthorized individuals. For example, when it comes to personal photos, would you prefer nobody sees them again or everyone sees them?

Data breaches didn’t start with the digital age. In my blog on Wireless Location Based Services (WLBS), I explained that I started my career working for the U.S. Navy. We had very specific rules for handling classified documents (and penalties for mishandling them). Not all documents were handled the same: we had file cabinets with different levels of security (weight, strength, lock) for different document classifications, the idea being that the more valuable the information, the harder it should be to gain access to it. At one facility, the windows were replaced with glass block to prevent someone outside the facility from reading documents on our desks. Watergate is an example of a high-profile data breach long before the digital age.

Not all unauthorized data access comes from outside an organization. At one point in my career, someone left a document on the printer containing specific employee compensation information. The information didn’t leave the company, but it still caused significant issues for HR.

Frederick Wilfrid Lancaster proposed that computers would create a paperless society. I’d argue that computers made it more efficient to generate paper, but with the advent of better user interfaces and a generation that didn’t grow up dependent on paper, maybe he’ll be correct. The information age, however, has changed the threat profile for data breaches. The challenge is the same: keep people from gaining access to the information. Security guards, identity badges, and locks provided the primary security mechanisms for physical document protection. There are many movies about spies duplicating badges, picking locks, and using cameras that looked like ball-point pens or lighters to photograph documents. But you never see a spy photographing the entire contents of a file cabinet (stealing the cabinet would require a forklift and draw too much attention).

Now the thieves don’t have to go to the facility: they gain access to your data via your network, and you don’t know they got in until after they’re gone. The big difference is that the thief can copy the entire file cabinet or database. Stealing credit card information is not new; some unscrupulous salespeople have always copied down card numbers and gone on shopping sprees. The difference is scale. A salesperson or bartender might get a handful of numbers at a time, but hackers get millions.

In the paper world, incursions were addressed by defense in depth tactics. When I entered the Navy base, my ID was checked at the outer gate, granting me access to certain areas on the base. As I got closer to the pier, my ID would be checked again to ensure I was authorized to access the ship area. My ID would be checked when I got to my office building, again to verify that I should be there. At this point I still didn’t have access to any sensitive information. To gain access to documents, I had to know the combination to the safe. So up to this point, security was focused on identifying who had the privilege to present the combination to the safe. Similar tactics are deployed in the digital world today. User names, passwords, and access control lists (ACLs) are common methods for identification, authentication, and authorization. Avaya’s Identity Engines provide a powerful portfolio of tools for managing user and device access to your network.
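The layered checks described above translate naturally into code. A minimal sketch (hypothetical checkpoint names): access is granted only if every layer independently approves, mirroring the gate, pier, building, and safe sequence.

```python
# Defense-in-depth authorization sketch. Checkpoint names are
# hypothetical; each layer must approve independently.

def outer_gate(user):     return "base-badge" in user["credentials"]
def pier_check(user):     return "pier-access" in user["credentials"]
def building_check(user): return "building-7" in user["credentials"]
def safe_combo(user):     return user.get("knows_combination", False)

LAYERS = [outer_gate, pier_check, building_check, safe_combo]

def authorized(user):
    """Grant access only if every layer approves."""
    return all(layer(user) for layer in LAYERS)

clerk = {"credentials": {"base-badge", "pier-access", "building-7"}}
officer = {"credentials": {"base-badge", "pier-access", "building-7"},
           "knows_combination": True}

print(authorized(clerk))    # False: reaches the office, not the documents
print(authorized(officer))  # True: passes every checkpoint
```

The point of the layering is that a forged badge defeats one check, not all of them; the innermost secret (the safe combination, or in digital terms the ACL on the sensitive data) still holds.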

Layers of access control are great as long as data is placed behind the proper level of security and there isn’t a way to sneak between layers. If someone had placed sensitive information just inside the outer gate of the base, then anyone who gained access to the base would have had access to the information. Europol terror data was compromised because someone made a copy on an unprotected device exposed to the Internet; this defeated all other security measures. As another example, think about the difference between web content posted for customers on the public website vs content posted on the internal sales portal. Some information is available to everyone and is posted on the sales portal for convenience (e.g., spec sheets). However, there is information that is made available to partners that shouldn’t be available to customers (or the competition) such as pricing, sales presentations, or competitive positioning.

Poke around many company public websites and you’ll find information you aren’t supposed to see. Often, the documents will even be labeled with “internal use only.” IT can deploy Identity Engines or similar solutions to the best of their ability, but if the rest of the organization fails to pay attention to information security, data will be leaked.

Nobody is perfect, and neither is any system; occasionally cracks appear in data security. Hackers are very persistent and will keep poking around until they find a weak spot. Because of the architecture and complexity of modern networks, once a hacker gains a modicum of access, they can often parlay it into full access to the network. The intruder can use tools to easily discover the network topology and then determine where to find valuable information.

The Navy doesn’t allow everyone to roam around the base just because they gain access through the perimeter gate. You don’t want people to roam around your network just because they found a weak entry point. Suppose a spy with forged credentials shows up at the gate in a food service truck. Food security isn’t a high concern, so security checks on the credentials are minimal. If the base is wide open, the spy could drive the truck anywhere. However, interior checks prevent the food service truck from access to sensitive areas such as the munitions warehouse.

Network segmentation with the Avaya SDN Fx Architecture provides similar protections. Shortest Path Bridging (SPB) builds on the concept of a Virtual Service Network (VSN). A VSN is similar to a VLAN, except that a VSN is totally isolated from all other segments unless it is specifically authorized to reach another segment (via an L3 route). If the Navy could implement the VSN concept for vehicular traffic, the food-truck spy would be assigned to a virtual road that only reached buildings that served food. The spy wouldn’t be aware of any other road, wouldn’t see any other buildings, and wouldn’t have any idea how to get to the munitions building. In fact, there wouldn’t even be any indication that a munitions building existed.

Further, suppose there’s a celebration being held at the ball field on base. The celebration has a temporary kitchen set up that requires a food delivery. A virtual road could be set up to allow food service trucks to get to the ball field. As soon as the event concludes, the virtual road is retracted, eliminating food service truck access to the ball field.
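The virtual-road analogy maps onto a simple reachability model (hypothetical segment names; a sketch of the concept, not the SPB control plane): segments can’t see one another unless a route is explicitly authorized, and retracting the route restores isolation.

```python
# Toy model of VSN isolation: segments are mutually unreachable unless
# an explicit route is authorized, and routes can be retracted.

class Fabric:
    def __init__(self):
        self.routes = set()  # authorized (src, dst) pairs

    def authorize(self, src, dst):
        self.routes.add((src, dst))

    def retract(self, src, dst):
        self.routes.discard((src, dst))

    def reachable(self, src, dst):
        """A segment can always reach itself; anything else needs a route."""
        return src == dst or (src, dst) in self.routes

fabric = Fabric()
print(fabric.reachable("food-service", "munitions"))   # False: invisible

fabric.authorize("food-service", "ball-field")         # event starts
print(fabric.reachable("food-service", "ball-field"))  # True

fabric.retract("food-service", "ball-field")           # event ends
print(fabric.reachable("food-service", "ball-field"))  # False again
```

Default-deny is the important property: the munitions segment never appears in the food-service segment’s world unless someone deliberately connects them.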

To explain more about this approach, Jean Turgeon, Vice President and Chief Technologist for SDN at Avaya, has a three-part blog series on the security benefits of end-to-end network segmentation.

Jean Turgeon mentions the 2013 data breach at Target, in which hackers gained access to an HVAC network and wandered around until they reached PCI information. If Target had implemented hyper-segmentation, the worst the hackers could have done was change the ambient temperature, hardly an event that would have made headlines around the world or still be fodder for blog posts three years later.

In part 3 of this series, we’ll explore the role of data encryption in preventing data loss.