An Exploration of End-to-End Network Segmentation—Part I: Hyper-Segmentation

More than 90% of businesses say they have some sort of cybersecurity framework in place, but here’s the truth: a network security strategy will never be effective if a company’s underlying architecture isn’t what it needs to be. Traditional, hierarchical, client-server architecture is simply not built to support today’s next-generation network, or protect against the increased risk of exposure inherent in it (this is something I recently blogged about for the Huffington Post). This is like riding a horse and buggy down the freeway and expecting life-saving crash protection.

Cue the thousands of solution providers vying for market share, all selling the concept of failsafe network security. But let’s be honest: any provider that claims to offer foolproof security is only fooling you. Given today’s rapid pace of innovation, we may get there one day. Until then, not even the best provider can absolutely guarantee network security 24×7.

There are, however, a few ways to safeguard your organization with a nearly impenetrable network that significantly minimizes security risks and reduces exposure. It all comes down to the technology you use and from whom you get that technology. At Avaya, we believe companies need to take a foundational approach to network security by implementing an end-to-end segmentation solution that inherently protects from the inside out. This approach consists of three core capabilities:

  • Hyper-Segmentation:

    The ability to create stealth segments that span the entire network.

  • Native Stealth:

    The characteristic that makes hyper-segments invisible to hackers.

  • Automated Elasticity:

    Extending and retracting hyper-segment access automatically.

The way we see it (a view endorsed by many cybersecurity experts), end-to-end segmentation is the holy grail of network security today. This critical level of protection should be as simple as safety is for a driver getting behind the wheel. All companies need to do is buckle up and enjoy the ride.

At Avaya, our goal is always the same: equip business leaders with the necessary skills, knowledge and know-how to do what’s ultimately best for their organizations. For IT leaders contemplating a better way to protect their networks, I’ve put together a three-part series that pulls back the veil on our all-new end-to-end segmentation solution and its core capabilities of hyper-segmentation, stealth and elasticity.

Ready to join me? If so, let’s kick things off by exploring the incredible concept of hyper-segmentation.

Out with the Old

The classic segmentation method, creating virtual local area networks (VLANs), is one companies have been using for 20+ years. It involves isolating segments to maximize quality of service, ensuring one type of traffic doesn’t impact another. Each segment carries a different traffic type with the different characteristics required to deliver the desired quality of experience.

For example, one segment may carry real-time voice traffic while another carries best-effort data traffic such as web browsing. This approach sounds simple, but there’s one big problem: as organizations grow, so too must their segments. This creates high levels of complexity and increases the risk of failure, because VLANs are subject to loops created by human error and must learn about every node that physically joins the virtual network.

So, these segments must inevitably grow to meet evolving network and application needs. As they do, they become increasingly difficult to troubleshoot and manage, leading to greater network strain and performance issues. At this point, a company’s only recourse is to create more, smaller segments, which simply introduces more complexity into an already intricate network environment.

All the while, these network segments aren’t truly isolated from one another; rather, they communicate extensively once IP services are enabled. These are known as Layer 2 virtualized networks. To make matters worse, Layer 3 virtualization is also typically required when IP services need to be isolated from one another. Think of two departments or two tenants wanting to share a common networking infrastructure. At that point, the concept of VRF (Virtual Routing and Forwarding) needs to be introduced. Once again, each node participating in this Layer 3 virtualized network must be configured.

Hence, end-to-end segmentation is achieved by performing complex nodal configuration. Not very scalable when you think about it, yet it does work! Add other services such as multicast and you now have a fragile house of cards to deal with, as all of these layers are interdependent. Because of this interdependency, the stack can (and will) collapse if just one layer is affected (think of how easily a house of cards falls with just a flick of the finger). Each layer depends on the others to keep the stack running and secure. For businesses relying on legacy architecture, this setup of multiple interdependent protocol layers can lead to tragic outcomes if even one segment is compromised.

This is exactly what happened with the infamous 2013 Target breach. An HVAC vendor, external to Target, had authorized access to service the HVAC system. Because the network was statically configured using VLANs, hackers were able to get into that HVAC virtual segment. But rather than being contained there (we’ll get to that shortly), they broke out of the HVAC segment and into the segment that hosted credit card data. So you see, in this environment, the inherent lack of security at Layer 2 (e.g., the HVAC segment) negatively impacted other layers, including the mission-critical apps that resided in them. Safe to say this is not the business outcome you want.

The goal, then, is to greatly simplify the way segmentation is achieved. I guess you could say: manage less and, in doing so, better converge, sustain and control the network. Right? Well, sort of. This “less is more” approach can also lead to network complexities. Hear me out: fewer segments to manage means greater risk in terms of network performance and outages. Without a certain level of segment isolation, one misbehaving device, human error or system glitch can destabilize the entire network. In other words, be cautious about putting all of your eggs in one basket (one huge virtual segment).

Are there any other options? Well, MPLS was designed to deliver what many consider “true” end-to-end virtualization. But does it really deliver what companies need? It’s true that MPLS offers end-to-end virtualization, but it’s still based on a restrictive nodal labeling methodology with even more layers of protocols. End users don’t notice this, of course, because the complexity is expertly masked by providers or IT using highly sophisticated provisioning tools. These tools allow them to quickly deploy an end-to-end virtualized network while hiding all the backend complexity. It’s a powerful and scalable solution, yet in the end it’s built on a similar, and unfortunately more complex, foundation.

In with the New

Now I want to clarify that there’s nothing necessarily wrong with MPLS. Many large organizations still run on an MPLS model they deployed long ago. This is fine if you have the skill set and have made the investment in provisioning tools. What businesses are beginning to realize, however, is that they need to better support the dynamic changes happening not just within the data center, but everywhere data is consumed by mobile users and other devices. Nothing is static anymore. They must be able to add new services on the fly, make changes to existing services within minutes and build new network segments on demand across the entire enterprise. Remember, end users and IoT devices don’t sit in the data center!

You simply can’t deny that today’s business environment looks drastically different than it did 20+ years ago. So why would we still rely on legacy segmentation methods from that period? The only way to flexibly and securely meet today’s network needs is to deploy a solution that eliminates nodal configuration yet achieves true segmentation. Hyper-segmentation does just this by using the concept of end-to-end Virtual Services Networks (VSNs). This enables businesses to provision their networks only at specific points of service: where the service is offered and where it’s consumed by end users or devices. That’s it! The core becomes an automated and intelligent virtualized transport.
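
To put rough numbers on the difference, here’s a minimal back-of-the-envelope sketch in Python. The node, segment and port counts are hypothetical, purely to illustrate how per-node configuration touches multiply compared with provisioning only at the points of service:

    # Hypothetical numbers, for illustration only.
    core_nodes = 50                   # switches/routers that carry every segment
    segments = 500                    # virtual segments (VLANs/VRFs vs. hyper-segments)
    service_points_per_segment = 4    # ports where each service is offered or consumed

    # Legacy model: every node carrying a segment must be configured for it,
    # and every touch is another chance for the human error that creates loops.
    nodal_touches = core_nodes * segments

    # Point-of-service model: only the service points are provisioned; the core
    # acts as an automated, intelligent virtualized transport and is never touched.
    edge_touches = service_points_per_segment * segments

    print(f"Per-node configuration touches: {nodal_touches:,}")   # 25,000
    print(f"Point-of-service touches only:  {edge_touches:,}")    # 2,000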

By eliminating nodal configuration, companies can drastically reduce complexity and create hundreds—even thousands—of agile and secure virtual segments that are completely isolated from one another (meaning no communication by default). This lets companies decide whether they want to establish communication between segments, versus having to prevent it. With hyper-segmentation, segments can be created quickly (and easily provisioned) without time-consuming, error-prone nodal configuration.

This result is achieved because of the technology’s ability to isolate segments by default on one secure, converged network. That transforms network protection by allowing security tools to focus on the specific functions they were implemented for, versus having to serve as a barrier between segments to prevent chatter. In this way, hyper-segmentation gives companies maximum transparency into how their networks are behaving so they can quickly prevent, identify and mitigate security incidents.
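
Here’s a minimal sketch of what “isolated by default” means in practice: communication between two segments exists only if someone has explicitly established it. The segment names and the single allow rule below are made up for illustration:

    # Hypothetical segment names; the only allowed path is the one explicitly added.
    allowed_pairs = {
        ("point-of-sale", "payment-processing"),   # communication deliberately established
    }

    def can_communicate(seg_a, seg_b):
        """Default is no communication; only an explicit rule opens a path."""
        return (seg_a, seg_b) in allowed_pairs or (seg_b, seg_a) in allowed_pairs

    print(can_communicate("point-of-sale", "payment-processing"))  # True
    print(can_communicate("hvac", "payment-processing"))           # False: isolated by default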

Remember Target? With hyper-segmentation, the hackers would have been contained within the HVAC segment (in other words, isolated there). All other segments would have been invisible (natively stealth) to them. It doesn’t matter how skilled the hackers are: you can’t hack what you can’t see.

In this new world, multilayer protocols exist only to maintain backward interoperability with the legacy model; new virtualized services can be delivered with just one protocol. No more house of cards, unless you absolutely need it!

Now if your company depends on MPLS, you may be thinking, “Where does this leave me?” Here’s my advice: leave your MPLS network environment as static as possible so you can embrace the dynamic configuration of hyper-segmentation and leverage its strength of provisioning at the point of service only. In doing so, you’ll benefit from next-generation segmentation technology without having to forklift your current investment, as hyper-segments can now traverse any IP WAN solution, including IP MPLS or SD-WAN solutions from vendors such as FatPipe. In the end, it’s up to you to decide how you want to implement end-to-end hyper-segmentation. No more depending on the service provider to configure and extend a service (VLAN, VRF, DC interconnect, etc.) across the WAN … you now control your own destiny!

Now on to the next question: what role does native stealth play in end-to-end segmentation? Learn more in Part II next week.

 

Related Articles:

Secure IoT Deployments with Avaya SDN Fx™ Architecture Solutions

Let’s look at how to deploy the IoT in a safe and sane manner—a top-of-mind business challenge. Before diving into the technology, let’s remember why secure IoT deployments are so important. The Yahoo breach is a lesson learned: Yahoo CEO Marissa Mayer lost $12M in bonuses over the data breach, and as of March 2, 2017, Yahoo had paid $16M to investigate the breach and cover legal expenses. It’s clear that the cost of not building a safe infrastructure is much higher than the cost of building one.

Software Defined Networking (SDN) is sometimes over-hyped. At a base level, separating the control plane from the data plane makes sense (if one understands the definitions of a data plane and a control plane). In a practical sense, it means the network infrastructure doesn’t need to be managed on a node-by-node basis (i.e., logging into the network devices on each end of a cable to make complementary changes to configure a network link). This is where SDN can be over-hyped: the SDN solution automates the process of making the changes to each end of the cable, making the network easier to manage. But it doesn’t reduce the complexity, increase the resiliency (other than reducing outages due to typing errors), or make the network easier to troubleshoot or expand.

Avaya SDN Fx™ Architecture is based on fabric technology, not traditional network technology. The architecture was designed to be managed as an entity of subcomponents, not as a bunch of interconnected nodes that create a larger entity. In other words, it’s like designing something to manage a forest, as opposed to managing the trees. Would you really want to manage a forest one tree at a time?

How SDN Fx Architecture Benefits the IoT

Although the SDN Fx network architecture wasn’t specifically designed for the IoT, it provides a solid foundation for deploying IoT solutions. These are the key components of the SDN Fx Architecture that benefit the IoT:

Avaya Fabric Connect is Avaya’s implementation of Shortest Path Bridging (SPB/IEEE 802.1aq). SPB replaces the traditional network stack, greatly simplifying network configuration, management and security. Three key benefits of Fabric Connect apply directly to the IoT deployment use case:

  • Hyper-Segmentation: SPB supports 16 million+ network segments (a quick arithmetic check of that figure follows this list). In theory, every IoT device on a network could have its own segment. More realistically, every device type can have its own segment: HVAC could be on one segment, security cameras on another, employees on a third, guests on a fourth, and so on. It’s worth noting that the NSA sees segmenting IoT networks as key to limiting the exposure of IoT deployments. (In my next blog, I’ll examine how Avaya solutions provide security between devices on the same segment.)
  • Automatic Elasticity: Services in SPB are provisioned at the edge without touching the core of the network. This makes it very straightforward to provision network services for the hundreds or thousands of IoT devices that the business wants up and running yesterday. Plus, edge provisioning makes moving devices simple. When a device is disconnected from the network, the network service to that port is disabled, eliminating open holes in network security. When the device is connected to the same or a different port, the device is authenticated and services are automatically configured for that port.
  • Native Stealth: SPB operates at the Ethernet layer, not the IP layer. If a would-be hacker gains access to one segment of a traditional network, they can go IP-snooping to discover the network architecture; a traditional network is only as secure as its least secure segment/component. With Fabric Connect, if a security loophole is overlooked in a less important network project, there isn’t a back door into the rest of the network and the corporate data.
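
Here’s that arithmetic check: the 16 million+ figure corresponds to SPB’s 24-bit service identifier (I-SID) space, with the traditional 12-bit VLAN ID space shown for comparison:

    # Service identifiers (I-SIDs) in SPB are 24-bit values.
    print(2 ** 24)   # 16,777,216 possible segments
    # Traditional segmentation uses 12-bit VLAN IDs.
    print(2 ** 12)   # 4,096 VLANs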

Avaya Fabric Extend provides the ability to extend an SPB fabric across a non-fabric network, such as an IP core, between campuses over Multiprotocol Label Switching (MPLS), or out to the cloud over a WAN. IoT deployments enable the phased adoption of SDN Fx, so IoT projects can gain the values above without ripping and replacing significant network infrastructure or affecting non-IoT workloads.

Avaya Fabric Attach automates the elasticity of the SPB fabric for IoT devices and other devices supporting Automatic Attachment (IEEE 802.1Qcj). Fabric Attach allows a device to signal to the network the service it needs to connect to. If the device is authorized, the service is automatically provisioned. When the device is disconnected, the service is terminated. If the device is moved to a different network port, the service is provisioned automatically on the new port. This makes deploying and moving Fabric Attach-enabled devices very simple. For a real-world example, see how Axis Communications is starting to deploy Fabric Attach in their IoT devices.
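
Here’s a simplified sketch of that attach/detach lifecycle, written in Python rather than the 802.1Qcj protocol itself. The MAC address, port names and authorization table are invented; the point is that the service follows the device and the hole closes behind it:

    # Invented authorization table: MAC address -> network service it may request.
    authorized_devices = {"00:1b:44:11:3a:b7": "camera-segment"}
    provisioned_ports = {}   # port -> currently provisioned service

    def device_attached(port, mac):
        service = authorized_devices.get(mac)
        if service is None:
            print(f"{mac} on {port}: not authorized, no service provisioned")
            return
        provisioned_ports[port] = service        # service follows the device
        print(f"{mac} on {port}: provisioned {service}")

    def device_detached(port):
        service = provisioned_ports.pop(port, None)   # close the hole behind it
        if service:
            print(f"{port}: {service} withdrawn")

    device_attached("port-1/12", "00:1b:44:11:3a:b7")   # camera plugs in
    device_detached("port-1/12")                        # camera unplugged
    device_attached("port-2/07", "00:1b:44:11:3a:b7")   # same camera, new port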

Avaya Open Networking Adapters—an Open Networking Adapter is a small device that sits in-line with an IoT device to provide programmable security for IoT devices that lack adequate network security. One component of the solution is Fabric Attach, which provides automated service provisioning and mobility to devices that don’t have the auto-attach capability. (I’ll explore the power of Open Networking Adapters further in an upcoming blog.)

The Avaya Identity Engines Portfolio provides powerful tools for managing user and device access to a network, commonly referred to as Authentication, Authorization, and Accounting. In the IoT use case, Identity Engines authenticate a device by MAC address or MAC address group and use predefined policies for the device type to dynamically configure services. For instance, a camera could be assigned to video VLAN 30 and provisioned for multicast, while a phone would be authenticated, assigned to VLAN 20, and configured for SIP communications. This protects against unauthorized devices joining the network and provides automatic segmentation based on device type and service requirements.
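
As a rough illustration of that policy-driven behavior (this is not Identity Engines configuration syntax; the MAC prefixes and policy values are made up to mirror the camera/phone example above):

    # Made-up policy table: device type -> VLAN and services pushed to the port.
    policies = {
        "camera": {"vlan": 30, "services": ["multicast"]},
        "phone":  {"vlan": 20, "services": ["sip"]},
    }
    mac_groups = {"00:40:8c": "camera", "2c:f4:c5": "phone"}   # MAC prefix -> device type

    def policy_for(mac):
        device_type = mac_groups.get(mac.lower()[:8])
        return policies.get(device_type)    # None means unauthorized: no access

    print(policy_for("00:40:8C:12:34:56"))  # {'vlan': 30, 'services': ['multicast']}
    print(policy_for("de:ad:be:ef:00:01"))  # None -> device stays off the network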

I’m not sure if there ever was a time when network design and implementation was static, but there was a time when the devices connected to the network could be predicted: servers, printers, storage, PCs, etc. With IoT, IT is being asked to design networks for devices that haven’t been thought of yet. The old network technologies were designed for mobility by work order, and IT was able to list the number of device types that wouldn’t work on the network. SDN Fx provides a true software-defined network and not software-defined automation on old network constructs. A fabric network has the intrinsic flexibility and security required for tomorrow’s IoT projects, today.

In my recent blogs about the IoT, I’ve looked at how the IoT enables Digital Transformation and examined a business-first approach to IoT technology adoption. Next in this blog series, I’ll explore the newest component of the SDN Fx solution for the IoT, the Avaya Surge™ Solution.

APTs Part 1: Protection Against Advanced Persistent Threats to Your Data

Hardly a day goes by without hearing about a data breach somewhere in the world. So it’s timely that we launch this new blog series about Security. To kick the series off, we’ll take a look at some of the alarming trends in the development of Advanced Persistent Threats (APTs). We’ll explore what they are and how they operate. Along the way, we’ll provide simple advice to help you limit their impact on your enterprise.

In the old days, we mainly dealt with fly-by automated attacks. We all recall worms and Trojans and the other little beasts in the menagerie of malware. They were fairly simple at first, but as time moved on, the degree of sophistication and stealth of this code has drastically increased. There are a couple of reasons for this. First, code naturally evolves as multiple individuals contribute to it, growing in feature set and reliability; even malicious code benefits from collaborative development. Second, the design goal has changed from doing immediate damage to remaining hidden. This is the goal of the APT.

  • APTs are advanced.

    Typically, they come from a sizable group of individuals who are well-funded and equipped. Many people will automatically think APTs come from China and Russia, but the reality is they can be and are anywhere. The U.K. is one of the leading nations and there are plenty in the U.S. as well. They are also given a set of targets or perhaps even a single target.

  • APTs are persistent.

    This is a group that owes its whole existence to penetrating the assigned target. Many times, there are handsome bonuses for success. They will persist for months and even years, if necessary, waiting for the right moment.

  • And while they do not seek to do immediate damage, they most definitely are a threat.

    Their goal is to penetrate the network, access sensitive information, and establish command-and-control points within it, with devastating results. The recent data breach at Yahoo is the latest example, with roughly 400 million records stolen. Let’s also not forget that the NSA itself was breached, resulting in the exfiltration of sensitive cyberattack tools.

While many will still say “not in my network,” research indicates the attacker in most breaches is resident in the network for an average of 256 days without being discovered. Further, about 81% of those breached did not identify it themselves. They were notified by third parties such as banks, credit card vendors, or law enforcement—and though we can’t tell exactly, it’s suspected that up to 94% don’t know they’ve been hacked until long afterward.

Now don’t get me wrong, we still have plenty of malware out there and it’s growing in volume every day. As an example, there are 25 million new instances of malware that cannot be blocked by traditional antivirus solutions. The added venom in the mix, however, is that there are now well-equipped teams using malware in a tightly orchestrated fashion. It’s reported that 70% of known breaches involved the use of malware, and those breaches are carried out in a well-thought-out, orchestrated manner. The rules have changed, so we had better up our game. In my next blog, we’ll take a closer look at a typical method of APT operations and the concepts of kill chains and attack trees, as well as how attackers go about getting into your enterprise.

You’re likely wondering what you can do to protect yourself. Well, the NSA recommends implementing highly granular microsegments. This prevents lateral movement, which is critical to the attackers’ ability to escalate privilege into the environment. They also recommend creating stealth or black networks that yield little or no information to scans and probes. Finally, these secure microsegments should ideally be ships in the night with no or at least very constricted communications capability to other segments.

Avaya has embraced this philosophy in our recent security launch. Hyper-segmentation provides highly granular segmentation, stealth provides the black network environment, and elasticity provides strong perimeter protection, allowing access to users and devices only once they have been vetted, established as trusted, and authenticated. We’ll go much deeper into this in the third installment of this series on APTs. Until then, don’t be afraid. Be prepared.

Data Protection Part 3: Where Does Encryption Fit into Data Protection?

I’ve mentioned that the SNIA SDC Conference provided the catalyst for writing this blog series on data protection. (See part 1 and part 2.) While there was a lot of discussion at the Conference on protecting data from loss, there were also discussions about encryption of data at rest. I noticed that the conversations about data protection and encryption were separate and distinct. Networks also employ data encryption methods to protect data in transit. I was sitting in one of the sessions when I began to wonder how many times a data block is encrypted and decrypted in its life. I also wondered how many times encrypted data is encrypted again as “data at rest” becomes “data in transit.”

Data at rest encryption can be divided into two categories: media encryption and object encryption. Media encryption focuses on protecting data at the physical device level such as the disk drive or tape. Encryption for data at rest can be performed at multiple places in the data storage stack—host, SAN appliance or target device (tape drive, tape library, disk drive). There are use cases for each deployment option, but generally it’s best to encrypt as close to the physical media as the use case allows. There are often trade-offs to be examined. For instance, encryption defeats the value of deduplication. In most data sets, there is a lot of repeated data that can be managed more efficiently if deduplicated. If host-based encryption is employed, the value of deduplicating data downstream, such as WAN acceleration, is eliminated.
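
Here’s a toy demonstration of that deduplication trade-off, in standard-library Python only (the XOR “cipher” below is a stand-in for real encryption, not something to reuse): once identical blocks are encrypted upstream with per-write randomness, a downstream deduplicator can no longer recognize them as duplicates.

    import hashlib, os

    def dedup_count(blocks):
        """A deduplicating store keeps one copy per unique block fingerprint."""
        return len({hashlib.sha256(b).hexdigest() for b in blocks})

    def toy_encrypt(block, key):
        # Toy cipher for illustration only: fresh randomness on every write means
        # identical plaintext blocks produce different ciphertext blocks.
        nonce = os.urandom(16)
        stream = hashlib.sha256(key + nonce).digest()
        body = bytes(p ^ s for p, s in zip(block, stream * (len(block) // 32 + 1)))
        return nonce + body

    blocks = [b"A" * 32] * 1000          # 1,000 identical 32-byte blocks
    key = os.urandom(32)

    print(dedup_count(blocks))                                  # 1    -> one stored copy
    print(dedup_count([toy_encrypt(b, key) for b in blocks]))   # 1000 -> no dedup savings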

The benefit of encryption at the media level has generated interest in encrypting tape drives and Self-Encrypting Drives (SEDs). Tape use cases are pretty straightforward: create a data copy on media that can be shipped off-site to protect against loss at the primary site. Tape used to be the primary backup medium, but with long recovery times and the data explosion, tape has been relegated to tertiary copies and long-term archiving of data that doesn’t need to be online. The key point is that tape is designed to be shipped off-site, meaning the shipment could be hijacked. Encrypting the data on the tapes makes a lot of sense: a box of encrypted tapes has the same value as a box of used tapes, i.e., not worth hijacking.

I have “sold” a lot of SEDs in my career. I’ve always tried to be honest with customers: SEDs have limited value in data center operations. Drives deployed in the data center aren’t intended to transport data out of the data center. (There are a couple of valid use cases where moving data on drives makes sense, such as data center moves or seeding a disaster recovery site, but of all the drives deployed, very few are used this way.) I’d often test a customer’s view of SEDs with a simple question: “Do you have locks on your data center doors?” Some customers would get the joke, and I knew I could have a frank conversation. If the customer didn’t get the joke (i.e., understand that SEDs only provide protection if the physical drive falls into the wrong hands), I proceeded cautiously. Two other factors come into play at that moment: there’s usually an account manager in the room who is commission-driven (SEDs are slightly more expensive than non-SEDs), and paranoid customers with deep pockets are an IT supplier’s best friend. Of course, just because you’re paranoid doesn’t mean there aren’t a few thousand hackers out there looking to gain value from your data.

Bottom line: if someone gains access to your network or manages to compromise someone’s username/password, SEDs don’t help. The encryption key is automatically applied when the drive is started, so any process that has access to the system after that isn’t going to be denied access by the drive encryption.

The primary value of SEDs comes when a drive is deployed outside the data center, where the primary or secondary layer of data protection is at the drive itself. The best example is a laptop: you can assume that the data on a stolen, unencrypted laptop is in the hands of the thief. Other portable devices, such as tablets and smartphones, also have encrypted storage devices, though not a hard drive in the conventional sense. Note: many Solid State Drives (SSDs) are also SEDs, which makes the case for the SSD option in your next laptop stronger.

Before my friends in the storage industry start tweeting about me, I do see a few values for SEDs in the data center.

  1. Compliance. Many security offices require SEDs—it never hurts to have SEDs, just understand where they fit in the security stack.
  2. Storage that is going to be repurposed. Lab environments, cloud providers, etc., where storage may be used for one project or customer today and another tomorrow, may desire or require complete data erasure. The easiest way to erase data is to encrypt it and then delete the key. The data will technically still be on the drive, but it won’t be accessible.
  3. End-of-life destruction. Drives (spinning and SSD) do wear out and need to be disposed of. Some people require physical destruction of the drive (I’ve heard of people shooting them with a high-powered rifle, but I’ve never witnessed it). There are services that will crush or shred the drive. However, it’s easier to shred the key. (Paranoid people do both.)

Object-level encryption is another way to protect data at rest. I’m using a very loose definition of object for this discussion. Objects are often associated with a particular cataloging mechanism that supports storing very large numbers of items; in this case, I’m not being that specific. Think of an object as something that can be addressed at the application level. I spent a great portion of my storage career working on Network Attached Storage (NAS) systems, another poorly named technology. NAS is essentially a purpose-built file server. For this conversation, a file and the file system can both be considered objects.

I’ve had many conversations with customers about protecting files with encryption. Customers often wanted to encrypt the entire file system. This is pretty straightforward: one key to manage for all of the content in the file system. The problem is that the one key tends to be widely distributed; any person or process that needs access to a file gets the key to the entire file system. A side effect of this kind of solution is that all of the file system’s metadata is also encrypted. So operations like backups, which work off creation and modification timestamps, need to have the key to the file system. Therefore, systems like backup servers, anti-virus servers, etc., have to be secured, as they literally hold the keys to the kingdom.

Another approach is to encrypt the files within the file system. Think of systematically zipping files with encryption and a password. This has the benefit of not affecting the metadata: a file can be moved, emailed, deleted, etc., without being decrypted. The backup software doesn’t need the key to run, and the files in the backup remain encrypted. Operations that need to access the internals of a file, such as anti-virus or full-text search, still require the keys. The challenge is managing the keys and access control lists. Some files are written and read by only one person or application, but most files are meant to be shared. For instance, emailing an encrypted file does the recipient no good unless you also provide the key. I know a lot of people who encrypt any file they put in their “free cloud storage.” It isn’t that they don’t trust the cloud provider; it’s just that sometimes a little paranoia is a good thing.
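
A small sketch of why this per-file approach is friendlier to operations like backup (the file names, timestamps and ciphertext below are invented): only the contents are encrypted, so an incremental backup can select changed files by modification time without ever holding a key.

    from dataclasses import dataclass

    @dataclass
    class StoredFile:
        name: str          # metadata stays in the clear
        mtime: float       # metadata stays in the clear
        ciphertext: bytes  # contents encrypted; the key never leaves the owner

    def incremental_backup(files, since):
        """Copy files changed since the last run; no decryption required."""
        return [f.name for f in files if f.mtime > since]

    files = [
        StoredFile("q3-forecast.xlsx", mtime=1000.0, ciphertext=b"\x8a\x11..."),
        StoredFile("payroll.csv",      mtime=2000.0, ciphertext=b"\x02\x9f..."),
    ]
    print(incremental_backup(files, since=1500.0))   # ['payroll.csv']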

So why not encrypt everything everywhere? As I pointed out above, encrypted file systems are hard to manage. Encryption also makes it harder to detect intrusions in the network when data in transit is encrypted. I can remember pointed discussions between storage admins and network admins about encrypting replicated data: the storage admin wanted the data encrypted at the source array and decrypted at the target array, while the network admin wanted to encrypt at the WAN edge device to retain visibility into the data leaving the building.

An interesting shift is the use of encryption by hackers. Rather than copy your data, they encrypt it and then offer to sell you the key. This phenomenon is called ransomware. While detecting the malicious code is the preferred defense, a good data backup enables a good recovery plan. Suppose you have hourly copies of your data: rather than pay the ransom, you can restore your data to the point in time just before it was encrypted.
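
In sketch form (the snapshot schedule and infection time below are invented), the recovery decision is simply “restore the newest copy taken before the data was encrypted”:

    from datetime import datetime

    snapshots = [datetime(2017, 3, 1, h) for h in range(13)]   # hourly copies, 00:00-12:00
    ransomware_hit = datetime(2017, 3, 1, 9, 42)               # when files were encrypted

    restore_point = max(s for s in snapshots if s < ransomware_hit)
    print(restore_point)   # 2017-03-01 09:00:00 -> lose under an hour of data, pay no ransom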

At this point, if you’re expecting me to tie a nice little bow around data protection, you’re going to be disappointed. Protecting data in a world where the threats were application errors, failed components, undetected bit flips and natural disasters was already a challenge. Today, the threats come from teams of well-funded experts focused on finding the weak links in your data security structure, and the threat landscape is constantly changing. It is very difficult, if not impossible, to protect against all threats. The IT industry is working to provide solution components, but threat volatility forces overall protection to remain reactive to the threat technology.

Organizations need to look at the problem the way the Navy does when protecting data.

  • Implement layers of security
  • Assume that any layer is penetrable
  • Minimize the damage of a penetration

First step: limit access to your data infrastructure through identity checks, and restrict that access on a need-to-know basis. Avaya Identity Engines provide a powerful portfolio of tools for managing user and device access to your network. However, assume that someone will figure out how to forge credentials and gain access to your infrastructure.

Avaya SDN Fx provides a key foundational component of a data security solution, minimizing the exposure of your network to unauthorized access. So when an intruder does gain access to your network, you can limit the exposure and keep the perpetrator from wandering around looking for the good stuff.

Encryption of data in transit and data at rest, along with data backups, provides another layer of defense and recoverability when other layers are breached.

Finally, everybody needs to be involved in keeping data secure. I was interrupted while writing this conclusion to help a sales engineer with an opportunity. I emailed him several docs and links to others as background information. Even though the docs are marked as to which ones were for internal use only, I noted in the email which docs were sensitive and couldn’t be shared with the customer. Proper strategies include systems, processes, and people all working together across organizations and technology stacks to prevent data from being lost or ripped off.

I’ve always been a believer that the effort to make things idiot-proof is often wasted because they just keep making better idiots. In this case, they’re making better experts to come after your data. Fortunately, we have very intelligent experts working for the good guys too. We’ll always be one step behind, but we can continue to strive to minimize the threat surface and minimize the impact when that surface is violated.