Dipping Your Toes into the SIP Stream

There are surprises and there are surprises.

For instance, I like it when I come home after a long day at work to find that my wife made dinner reservations at my favorite Saint Paul restaurant – W. A. Frost. I also like it when I finish my tax returns and discover that I don’t owe thousands of dollars in unpaid taxes.

I’ve also had the less-fortunate variety several times and can do without that kind of excitement.

One thing is certain: I don’t like getting surprised at work. Those surprises generally involve more toil, looking foolish, ending up with less money, putting in longer hours, or all of the above. The looking foolish part happened to me recently, but instead of moping about it, I decided to use it as a teaching tool.

This article originally appeared on SIP Adventures and is reprinted with permission.

I recently began working with a company and their carrier on adding SIP trunks to an Avaya PBX. Unfortunately, I was brought into the project late in the game and quite a bit of discussion had already occurred. Perhaps I was told all the details or perhaps I was not, but the result was that I was under a few false assumptions about what the customer wanted and what the carrier was set to deliver.

Specifically, it turned out that yes, SIP trunks were being deployed, but the customer wasn’t actually set up to work with direct SIP. The carrier, which happens to be Verizon, was providing the SIP trunks, but they terminated on a Cisco 2911 router configured as a SIP-to-PRI gateway. So, SIP to the demarcation point and ISDN to the customer’s communications system.
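
Since the customer-facing side of this design is ISDN, none of it requires SIP expertise on the PBX side, but it can still be handy to verify that the carrier’s SIP leg is alive. A common trick is a SIP OPTIONS “ping.” The Python sketch below is purely illustrative and not part of the actual deployment; the gateway address, port, and domain are placeholder assumptions.

```python
# Minimal SIP OPTIONS "ping" over UDP to check that a SIP endpoint (e.g., a
# SIP-to-PRI gateway or carrier SBC) is answering. Illustration only: the
# addresses below are placeholders, not the customer's real ones.
import socket
import uuid

GATEWAY_IP = "192.0.2.10"   # hypothetical gateway / trunk endpoint
GATEWAY_PORT = 5060

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 0))                      # let the OS pick a local port
sock.settimeout(3.0)
local_ip = socket.gethostbyname(socket.gethostname())
local_port = sock.getsockname()[1]

branch = "z9hG4bK" + uuid.uuid4().hex[:16]
request = (
    f"OPTIONS sip:{GATEWAY_IP}:{GATEWAY_PORT} SIP/2.0\r\n"
    f"Via: SIP/2.0/UDP {local_ip}:{local_port};branch={branch};rport\r\n"
    "Max-Forwards: 70\r\n"
    f"From: <sip:monitor@{local_ip}>;tag={uuid.uuid4().hex[:8]}\r\n"
    f"To: <sip:{GATEWAY_IP}>\r\n"
    f"Call-ID: {uuid.uuid4().hex}@{local_ip}\r\n"
    "CSeq: 1 OPTIONS\r\n"
    "Content-Length: 0\r\n"
    "\r\n"
)

sock.sendto(request.encode("ascii"), (GATEWAY_IP, GATEWAY_PORT))
try:
    reply, _ = sock.recvfrom(4096)
    # Any SIP status line (200 OK, 405, 486, ...) proves the endpoint is alive.
    print(reply.decode("ascii", errors="replace").splitlines()[0])
except socket.timeout:
    print("No SIP response: endpoint unreachable or OPTIONS is filtered")
finally:
    sock.close()
```

If the gateway answers, the trunk’s SIP leg is reachable even though everything behind the gateway is still ISDN.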

It’s no wonder that I couldn’t get straight answers about session border controllers and session managers. There weren’t any and the customer wasn’t about to deploy them.

Now, if I were an unscrupulous sales guy, I might try and tell the customer that he was making a bad decision and up-sell him on equipment that he wasn’t ready to deploy, but thankfully I am neither a sales guy nor unscrupulous. Instead, I embraced this as a viable solution that will serve the customer well until he is ready to move a little deeper into SIP.

There are situations where SIP — at least total SIP immersion — is not the best answer. A business may have a number of good reasons why it wants to dip its toes into the SIP stream, but it wants to do so in a measured and controlled manner. It wants to reap some of the benefits of SIP, but is fully conscious of what it can and cannot afford.

This particular business has an Avaya system that has been kept up to date on software and hardware, but is still predominantly TDM. They understand the benefits of VoIP, but haven’t invested in a VoIP-ready network. Additionally, the nature of the business means Ethernet cabling isn’t in place where it would be needed for telephony. Although antiquated by today’s standards, analog and digital telephones are still in wide use at many businesses. This business needed a compelling reason to change, and until it got one, things were staying put.

Still, they wanted to take advantage of some of the benefits of SIP and SIP trunks are a great place to start. They can eliminate many of their costly ISDN trunks, create a better business continuity strategy, consolidate networks, and take the first step towards what may ultimately be a much larger leap into SIP.

Baby Steps

So, the idea of bringing in those SIP trunks, running them through a SIP-to-PRI gateway, and terminating T1s on their existing line cards is a perfectly good choice. One day they may decide to take things a little further, but other than having to re-purpose a fairly inexpensive Cisco router, they haven’t thrown good money after bad.

Later on, they can move those SIP trunks away from the 2911 to an SBC without having to completely redesign their SIP solution. SIP is flexible enough to support quite a few of these transition solutions. That Cisco router could just as easily have been one of the many SIP-to-TDM gateways offered by AudioCodes.

In the end, my surprise turned out just fine. Granted, I was a little confused for a while, but that’s nothing new. Once I understood what was what, I was able to assist Verizon with their implementation questions and get the customer rolling down the road to SIP.

That’s the kind of work surprise I can deal with.

Related Articles:

Hoteliers: Your Best Laid Plans Will Go Awry Without Strong Wi-Fi

In today’s smart, digital world, the average guest experience isn’t so average anymore. Traditional room service, for example, is being replaced by in-room kiosks that offer guests interactive experiences at the touch of a finger. Virtual concierges can now arrange for everything from guests’ daily papers to checkout applications. The days of hotel card keys (let alone actual keys) are dwindling as more hotels enable keyless room entry via smartphones.

Technology has become an integral part of the way we work and live, and that certainly includes the way we travel. It’s not surprising, then, that forward-thinking hoteliers are working to quickly adopt emerging technologies to enable next-generation guest experiences. At the heart of this massive technological shift, however, is one critically important thing whose value cannot be overstated: Wi-Fi.

Wi-Fi: Once a Luxury, Now a Necessity

Wi-Fi has never been as significant for the hospitality industry as it is now, especially considering the rise of new mobile network standards like 5G (pilot networks are expected to be available by the end of next year).

Whereas Wi-Fi once meant offering guests uninterrupted music and video streaming, today it means an integrated mobile app experience that enables anywhere/anytime service, supported by faster response times and more anticipatory engagement. In fact, this is something 81% of hospitality leaders we recently surveyed said they’d like to incorporate to improve on-location services. For hotel management, this means streamlining staff communications with dynamic Unified Communications (when asked what most negatively affects the quality of their guest experiences, nearly 40% said fragmented team communications).

These key objectives are dependent on a strong Wi-Fi network. In fact, “network Wi-Fi” was cited by those we surveyed as having the biggest impact on improving the guest experience. It makes sense, then, that over 40% of hoteliers plan to upgrade their Wi-Fi service by the end of 2017.

This is certainly a good thing, especially as hospitality leaders look to make their properties more multifaceted and profitable. Imagine larger hotels, for example, being able to send automatic text notifications to convention attendees informing them of changes to a schedule of events. Or, consider smaller entities—like boutique hotels and B&Bs—being able to maintain their classic charm while amplifying backend operations with a robust data network and IP phone solution.

Regardless of size, hospitality leaders and/or IT decision makers can agree on one thing: the right Wi-Fi network is crucial for supporting today’s next-generation guest experience, as well as mission-critical operations.

Simplifying the Solution: Five Things You Need from Your Wi-Fi Network

The right network solution can transform large and small properties alike (we should know, we currently provide solutions to over 2,500 hotels worldwide). If you’re part of the almost half of hoteliers looking to improve Wi-Fi networking in the coming year, we encourage you to look for a solution that:

  1. Meets the ever-evolving needs of today’s next-generation guest:

    Your network should offer reliable and scalable applications that enable you to expertly handle guest communications via any point of interaction: voice, email, fax, video, Web, IM, social or mobile. As I mentioned in a previous blog, open and extensible network architecture supports this dynamic communication environment with the ability to create apps that customize and extend your hotel’s call center.

  2. Allows you to easily expand:

    For many hotels, continuous expansion is a challenging yet necessary goal. The key is to implement a network solution that can seamlessly scale alongside your property. Consider Best Western Premier Boulder Falls Inn. The Oregon-based hotel—newly opened in 2015—implemented an Avaya data network, Wi-Fi, and an Avaya IP Office phone solution to not only meet its current needs, but those that would inevitably arise as the property further developed. Future expansion plans, according to General Manager Nia Ridley, include the addition of a luxury spa, as well as a focus on hosting regional and state conventions.

  3. Supports legacy architecture:

    Restructuring an older network can be exceedingly difficult, especially when legacy solutions have been heavily invested in and, for the most part, still run efficiently. In this case, it’s important that a hotelier’s new network investment supports an intelligent migration from existing technologies to next-generation networking with little to no interoperability issues. This is especially important for international hotel groups, where all entities must move at one unified pace of innovation.

  4. Streamlines back-office operations:

    Your network solution should fully and seamlessly integrate with key hotel administrative platforms—like your property management and call accounting systems—to simplify and maximize back-office support. In addition to seamless integration with existing vertical applications, you should be able to easily extend or customize your telephony and UC features as you see fit to improve your various back-office systems.

  5. Embodies ease and reliability:

    In the end, you need a flawless system implementation that quickly creates value for your organization. This means a network solution that is truly easy to implement and manage. At the same time, it means a solution that is reliable, secure, and agile enough to support whatever is on the horizon for your property—from withstanding unexpected conditions to enabling strategically planned growth. Just consider Best Western Premier Boulder Falls Inn. The hotel was preparing for its grand opening when leaders decided on the last-minute addition of a second-floor bar, which required serious reconfiguration of the building during construction. Nevertheless, the property “came online faster than any hotel in the history of Best Western,” according to Ridley.

There is no shortage of opportunities for hospitality leaders to transform today’s guest experience. Just as significant as these advancements are, however, is what makes them all possible: next-generation wireless networking.

Data Protection Part 3: Where Does Encryption Fit into Data Protection?

I’ve mentioned that the SNIA SDC Conference provided the catalyst for writing this blog series on data protection. (See part 1 and part 2.) While there was a lot of discussion at the Conference on protecting data from loss, there were also discussions about encryption of data at rest. I noticed that the conversations about data protection and encryption were separate and distinct. Networks also employ data encryption methods to protect data in transit. I was sitting in one of the sessions when I began to wonder how many times a data block is encrypted and decrypted in its life. I also wondered how many times encrypted data is encrypted again as “data at rest” becomes “data in transit.”

Data at rest encryption can be divided into two categories: media encryption and object encryption. Media encryption focuses on protecting data at the physical device level, such as the disk drive or tape. Encryption for data at rest can be performed at multiple places in the data storage stack: host, SAN appliance, or target device (tape drive, tape library, disk drive). There are use cases for each deployment option, but generally it’s best to encrypt as close to the physical media as the use case allows. There are often trade-offs to be examined. For instance, encryption defeats the value of deduplication. In most data sets, there is a lot of repeated data that can be managed more efficiently if deduplicated. If host-based encryption is employed, the value of deduplicating data downstream, such as in WAN acceleration, is eliminated.
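
The deduplication point is easy to demonstrate. Dedup engines identify repeated blocks by a content fingerprint (a hash) and store identical blocks once; once blocks are encrypted upstream with per-block random nonces, identical plaintexts no longer produce identical ciphertexts, so the duplicates become invisible downstream. The sketch below uses a deliberately toy XOR “cipher” only to show the effect; it is not a real encryption scheme.

```python
# Toy illustration: why encrypting upstream defeats downstream deduplication.
# The "encryption" here is a throwaway XOR keystream -- NOT a real cipher.
import hashlib
import os

def fingerprint(block: bytes) -> str:
    """Dedup engines identify repeated blocks by a content hash."""
    return hashlib.sha256(block).hexdigest()

def toy_encrypt(block: bytes, key: bytes) -> bytes:
    """Toy cipher: random nonce + XOR keystream derived from SHA-256."""
    nonce = os.urandom(16)
    stream = b""
    counter = 0
    while len(stream) < len(block):
        stream += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return nonce + bytes(b ^ s for b, s in zip(block, stream))

key = os.urandom(32)
block = b"the same 4 KB of data repeated across many backups" * 10

# Plaintext: two copies of the same block share one fingerprint -> dedup works.
print(fingerprint(block) == fingerprint(block))   # True

# Encrypted at the host: each copy gets a fresh nonce -> fingerprints differ,
# so a downstream dedup or WAN-acceleration appliance sees no duplicates.
print(fingerprint(toy_encrypt(block, key)) == fingerprint(toy_encrypt(block, key)))   # False
```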

The benefit of encryption at the media level has generated interest in encrypting tape drives and in self-encrypting drives (SEDs). Tape use cases are pretty straightforward: create a data copy on media that can be shipped off-site for protection of the data at the primary site. Tape used to be the primary backup medium, but with long recovery times and the data explosion, tape has been relegated to tertiary copies and long-term archives of data that doesn’t need to be online. The key point is that tape is designed to be shipped off-site, which means the shipment could be hijacked. Encrypting the data on the tapes makes a lot of sense. A box of encrypted tapes has the same value as a box of used tapes, i.e., not worth hijacking.

I have “sold” a lot of SEDs in my career. I’ve always tried to be honest with customers. SEDs have limited value in data center operations. Drives deployed in the data center aren’t intended to transport data out of the data center. (There are a couple of valid use cases where moving data on drives makes sense, such as data center moves or seeding a disaster recovery site, but of all the drives deployed, very few are used in this manner.) I’d often test a customer’s view of SEDs with a simple question: “Do you have locks on your data center doors?” Some customers would get the joke and I knew I could have a frank conversation. If the customer didn’t get the joke (i.e., didn’t understand that SEDs only provide protection if the physical drive falls into the wrong hands), I proceeded cautiously. Two other factors come into play at that moment: there’s usually a commission-driven account manager in the room (SEDs are slightly more expensive than non-SEDs), and paranoid customers with deep pockets are an IT supplier’s best friend. Of course, just because you’re paranoid doesn’t mean there aren’t a few thousand hackers out there looking to gain value from your data.

Bottom line is, if someone gains access to your network or manages to compromise someone’s username/password, SEDs don’t help. The encryption key is automatically applied to the drive when the drive is started. Any process that has access to the system after that isn’t going to be denied access by the drive encryption.

The primary value of SEDs comes when a drive is deployed outside the data center, where the primary or secondary data protection is at the drive. The best example is a laptop. You can assume that the data on a stolen laptop without encryption is in the hands of the thief. Other portable devices, such as tablets and smartphones, also have encrypted storage devices, though not a hard drive in the conventional sense. Note: many solid state drives (SSDs) are also SEDs, which makes the case for the SSD option in your next laptop stronger.

Before my friends in the storage industry start tweeting about me, I do see a few cases where SEDs add value in the data center.

  1. Compliance. Many security offices require SEDs. It never hurts to have SEDs; just understand where they fit in the security stack.
  2. Storage that is going to be repurposed. In lab environments, cloud provider environments, and the like, storage may be used for one project or customer today and another tomorrow, which may make complete data erasure desirable or required. The easiest way to erase data is to encrypt it and then delete the key: the data will technically still be on the drive, but it won’t be accessible (see the sketch after this list).
  3. End-of-life destruction. Drives (spinning and SSD) do wear out and need to be disposed of. Some people require physical destruction of the drive (I’ve heard of people shooting them with a high-powered rifle, but I’ve never witnessed it). There are services that will crush or shred the drive. However, it’s easier to shred the key. (Paranoid people do both.)
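
A minimal sketch of the crypto-erase idea from item 2 above: if data only ever hits the media encrypted under one key, destroying that key is equivalent to erasing the media. The XOR keystream here is a toy stand-in for a drive’s real AES engine, used only to keep the example self-contained.

```python
# Sketch of "crypto-erase": data is only ever stored encrypted, so destroying
# the key is equivalent to erasing the media. Toy cipher, not real AES.
import hashlib
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(4, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, data: bytes) -> bytes:
    nonce = os.urandom(16)
    return nonce + bytes(d ^ k for d, k in zip(data, keystream(key, nonce, len(data))))

def decrypt(key: bytes, blob: bytes) -> bytes:
    nonce, ct = blob[:16], blob[16:]
    return bytes(c ^ k for c, k in zip(ct, keystream(key, nonce, len(ct))))

media_key = os.urandom(32)                  # held by the drive / key manager
on_disk = encrypt(media_key, b"previous tenant's data")

print(decrypt(media_key, on_disk))          # readable while the key exists

media_key = None                            # "shred the key"
# The ciphertext is still physically on the drive, but without the key it is
# indistinguishable from random bytes -- the data is effectively erased.
```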

Object level encryption is another way to address protecting data at rest. I’m using a very loose definition of object for this discussion. Objects are often associated with a particular cataloging mechanism that supports very large numbers of items to be stored. In this case, I’m not being that specific. Think of an object as something that could be addressed at the application level. I spent a great portion of my storage career working on Network Attached Storage (NAS) systems, another poorly named technology. NAS is essentially a purpose-built file server. For this conversation, a file and the file system could both be considered objects.

I’ve had many conversations with customers about protecting files with encryption. Customers often wanted to encrypt the entire file system. This is pretty straightforward: one key to manage for all of the content in the file system. The problem is that the one key tends to be widely distributed; any person or process that needs access to a file gets the key to the entire file system. A side effect of this kind of solution is that all metadata of the file system is also encrypted, so operations like backups, which work from creation and modification timestamps, need the key to the file system. Therefore, systems like backup servers, anti-virus servers, etc., have to be secured, as they literally hold the keys to the kingdom.

Another approach is to encrypt the files within the file system. Think of systematic zipping of files with encryption and a password. This has the benefit of not affecting the metadata. A file can be moved, emailed, deleted, etc., without decrypting the file. The backup software doesn’t need the key to execute, and the files in the backup are encrypted. Operations that need to access the internals of the file, such as anti-virus scanning or full-text search, still require the keys. The challenge is managing the keys and access control lists. Some files are written and read by only one person or application. However, most files are intended to be shared. For instance, emailing an encrypted file doesn’t do the recipient any good unless you also provide the key. I know a lot of people who encrypt any file that they put in their “free cloud storage.” It isn’t that they don’t trust the cloud provider; it’s just that sometimes a little paranoia is a good thing.
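
One practical consequence of the per-file approach is that tools which only look at metadata keep working without ever seeing a key. The sketch below shows an incremental backup pass that selects files by modification time and copies the encrypted blobs verbatim; the directory paths are invented, and the files are assumed to have been encrypted already by some other process.

```python
# Sketch: an incremental backup pass over per-file-encrypted content.
# Because only the file *contents* are encrypted, names and timestamps are
# readable, so the backup job never needs a decryption key.
import os
import shutil
import time

SOURCE_DIR = "/data/encrypted_files"   # hypothetical paths for illustration
BACKUP_DIR = "/backup/encrypted_files"

def incremental_backup(last_run_epoch: float) -> list[str]:
    copied = []
    os.makedirs(BACKUP_DIR, exist_ok=True)
    for entry in os.scandir(SOURCE_DIR):
        if entry.is_file() and entry.stat().st_mtime > last_run_epoch:
            # The content is ciphertext; copy it verbatim, still encrypted.
            shutil.copy2(entry.path, os.path.join(BACKUP_DIR, entry.name))
            copied.append(entry.name)
    return copied

if __name__ == "__main__":
    one_day_ago = time.time() - 24 * 3600
    print(incremental_backup(one_day_ago))
```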

So why not encrypt everything everywhere? As I pointed out above, encrypted file systems are hard to manage. Encrypting data in transit also makes it harder to detect intrusions on the network. I can remember pointed discussions between the storage admins and the network admins about encrypting replicated data. The storage admins wanted the data encrypted at the source array and decrypted at the target array. The network admins wanted to encrypt at the WAN edge device so they would retain visibility into the data leaving the building.

An interesting shift is the use of encryption by hackers. Rather than copy your data, they encrypt it and then offer to sell you the key. This phenomenon is called ransomware. While detection of the malicious code is the preferred defense, a good backup strategy gives you a way to recover. Suppose you have hourly copies of your data. Rather than pay the ransom, you can restore your data to a point in time before it became encrypted.
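
The restore decision itself is simple enough to sketch: given the snapshot schedule and an estimate of when the encryption began (from logs or detection alerts), roll back to the newest copy that predates it. The timestamps below are made up for illustration.

```python
# Sketch: pick the newest snapshot taken before ransomware started encrypting.
from datetime import datetime

# Hypothetical hourly snapshots and a detection-derived estimate of when
# files started being encrypted.
snapshots = [datetime(2016, 11, 7, hour) for hour in range(24)]
encryption_started = datetime(2016, 11, 7, 14, 25)

restore_point = max(s for s in snapshots if s < encryption_started)
print("Restore from the snapshot taken at", restore_point)   # 2016-11-07 14:00:00
```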

At this point, if you’re expecting me to tie a nice little bow around data protection, you’re going to be disappointed. Protecting data in a world where the threats were application errors, failed components, undetected bit swaps, and natural disasters was already a challenge. Today, the threats come from teams of well-funded experts focused on finding the weak links in your data security structure. The threat landscape is constantly changing, and it is very difficult, if not impossible, to protect against all threats. The IT industry is working to provide solution components, but the volatility of the threats forces overall protection to be reactive to the threat technology.

Organizations need to look at the problem the way the Navy does when protecting data.

  • Implement layers of security
  • Assume that any layer is penetrable
  • Minimize the damage of a penetration

The first step is to limit access to your data infrastructure through identity checks, and to restrict that access to need-to-know. Avaya Identity Engines provide a powerful portfolio of tools for managing user and device access to your network. However, assume that someone will figure out how to forge credentials and gain access to your infrastructure.

Avaya SDN Fx provides a key foundational component of a data security solution, minimizing the exposure of your network to unauthorized access. So when an intruder does gain access to your network, you can limit the exposure and keep the perpetrator from wandering around your network looking for the good stuff.

Encryption of data in transit and data at rest, along with data backups, provides another level of defense and recoverability when other layers are breached.

Finally, everybody needs to be involved in keeping data secure. I was interrupted while writing this conclusion to help a sales engineer with an opportunity. I emailed him several docs and links to others as background information. Even though the docs are already marked as to which ones are for internal use only, I noted in the email which docs were sensitive and couldn’t be shared with the customer. Proper strategies include systems, processes, and people all working together across organizations and technology stacks to prevent data from being lost or ripped off.

I’ve always been a believer that the effort to make things idiot-proof is often wasted because they just keep making better idiots. In this case, they’re making better experts to come after your data. Fortunately, we have very intelligent experts working for the good guys too. We’ll always be one step behind, but we can continue to strive to minimize the threat surface and minimize the impact when that surface is violated.

Data Protection Part 2: What About Unauthorized Data Access?

In part 1, we explored the physical loss of data, meaning the data is no longer available for access (intelligently) by anyone. There’s another, perhaps more significant threat to your data protection efforts: someone gaining access to sensitive information, often referred to as a data breach. An argument could be made that in some cases it is better to have lost data than to have data become available to unauthorized individuals. For example, when it comes to personal photos, would you prefer nobody sees them again or everyone sees them?

Data breaches didn’t start with the digital age. In my blog on Wireless Location Based Services (WLBS), I explained that I started my career working for the U.S. Navy. We had very specific rules for handling classified documents (and penalties for mishandling them). Not all documents were handled the same. We had file cabinets with different levels of security (weight, strength, lock) for different document classifications, the idea being that the more valuable the information, the harder it should be to gain access to the documents. At one facility, the windows were replaced with glass block to prevent anyone outside the facility from reading documents on our desks. Watergate is an example of a high-profile data breach long before the digital age.

Not all unauthorized data access comes from outside an organization. At one point in my career, someone left a document on the printer that contained specific employee compensation information. The information didn’t leave the company, but it still caused significant issues for HR.

Frederick Wilfrid Lancaster proposed that computers would create a paperless society. I’d argue that computers made it more efficient to generate paper, but with the advent of better user interfaces and a generation that didn’t grow up dependent on paper, maybe he’ll be correct. The information age, however, has changed the threat profile for data breaches. The challenge is the same: keep people from gaining access to the information. Security guards, identity badges, and locks provided the primary security mechanisms for physical document protection. There are many movies about spies duplicating badges, picking locks, and using cameras that looked like ball-point pens or lighters to photograph documents. But you never see a spy photographing the entire contents of a file cabinet (stealing the cabinet would require a forklift and draw too much attention).

Now the thieves don’t have to go to the facility. They gain access to your data via your network. You don’t know they got access to your data until after they’re gone. Now, the big difference is the thief can make a copy of the entire file cabinet or database. Stealing credit card information is not new. Some unscrupulous sales people have always copied down credit card numbers and gone on shopping sprees. The difference is scale. A salesperson or bartender might get a handful of numbers at a time, but hackers get millions.

In the paper world, incursions were addressed by defense in depth tactics. When I entered the Navy base, my ID was checked at the outer gate, granting me access to certain areas on the base. As I got closer to the pier, my ID would be checked again to ensure I was authorized to access the ship area. My ID would be checked when I got to my office building, again to verify that I should be there. At this point I still didn’t have access to any sensitive information. To gain access to documents, I had to know the combination to the safe. So up to this point, security was focused on identifying who had the privilege to present the combination to the safe. Similar tactics are deployed in the digital world today. User names, passwords, and access control lists (ACLs) are common methods for identification, authentication, and authorization. Avaya’s Identity Engines provide a powerful portfolio of tools for managing user and device access to your network.
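
In code, that layering looks roughly like the sketch below: authentication proves who is presenting the credentials, and a separate authorization step (an ACL) decides what that identity may touch. The usernames, password, and resources are invented for illustration.

```python
# Sketch: identification/authentication (who are you?) followed by
# authorization against an ACL (what may you touch?). Names are illustrative.
import hashlib
import hmac
import os

def hash_password(password: str, salt: bytes) -> bytes:
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 100_000)

SALT = os.urandom(16)
USERS = {"jsmith": hash_password("correct horse battery staple", SALT)}

ACL = {
    "payroll_db":   {"jsmith"},           # need-to-know: specific identities only
    "public_specs": {"jsmith", "guest"},  # broadly readable
}

def authenticate(user: str, password: str) -> bool:
    stored = USERS.get(user)
    return stored is not None and hmac.compare_digest(stored, hash_password(password, SALT))

def authorize(user: str, resource: str) -> bool:
    return user in ACL.get(resource, set())

def access(user: str, password: str, resource: str) -> str:
    if not authenticate(user, password):
        return "denied: bad credentials"
    if not authorize(user, resource):
        return "denied: authenticated, but not on the ACL"
    return f"{user} granted access to {resource}"

print(access("jsmith", "correct horse battery staple", "payroll_db"))
print(access("guest", "anything", "payroll_db"))
```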

Layers of access control are great as long as data is placed behind the proper level of security and there isn’t a way to sneak between layers. If someone had placed sensitive information just inside the outer gate of the base, then anyone who gained access to the base would have had access to the information. Europol terror data was compromised because someone made a copy on an unprotected device exposed to the Internet; this defeated all other security measures. As another example, think about the difference between web content posted for customers on the public website vs content posted on the internal sales portal. Some information is available to everyone and is posted on the sales portal for convenience (e.g., spec sheets). However, there is information that is made available to partners that shouldn’t be available to customers (or the competition) such as pricing, sales presentations, or competitive positioning.

Poke around many company public websites and you’ll find information you aren’t supposed to see. Often, the documents will even be labeled with “internal use only.” IT can deploy Identity Engines or similar solutions to the best of their ability, but if the rest of the organization fails to pay attention to information security, data will be leaked.

Nobody is perfect, and neither is any system. Occasionally, cracks are exposed in data security. Hackers are very persistent and will keep poking around until they find a weak spot. Because of the architecture and complexity of modern networks, once a hacker gains a modicum of access, they can often get full access to the network. The intruder can use tools to easily discover the network topology and then determine where to gain access to valuable information.

The Navy doesn’t allow everyone to roam around the base just because they gain access through the perimeter gate. You don’t want people to roam around your network just because they found a weak entry point. Suppose a spy with forged credentials shows up at the gate in a food service truck. Food security isn’t a high concern, so security checks on the credentials are minimal. If the base is wide open, the spy could drive the truck anywhere. However, interior checks prevent the food service truck from access to sensitive areas such as the munitions warehouse.

Network segmentation with the Avaya SDN Fx Architecture provides similar protections. Shortest Path Bridging (SPB) builds the network out of Virtual Service Networks (VSNs). A VSN is similar to a VLAN, except that a VSN is totally isolated from all other segments unless it is specifically authorized to reach another segment (an L3 route). If the Navy could implement the VSN concept for vehicular traffic, the food truck spy would be assigned to a virtual road that only went to buildings that served food. The spy wouldn’t be aware of any other road, wouldn’t see any other buildings, and wouldn’t have any idea how to get to the munitions building. In fact, there wouldn’t even be any indication that a munitions building existed.

Further, suppose there’s a celebration being held at the ball field on base. The celebration has a temporary kitchen set up that requires a food delivery. A virtual road could be set up to allow food service trucks to get to the ball field. As soon as the event concludes, the virtual road is retracted, eliminating food service truck access to the ball field.
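
The behavior in the last few paragraphs can be modeled in a few lines: every endpoint belongs to exactly one segment, and traffic between segments is only possible where an explicit authorization (the “virtual road”) exists; adding and later revoking that authorization is the entire lifecycle. This is a conceptual sketch of hyper-segmentation, not Avaya’s implementation, and the segment and endpoint names come from the food-truck analogy.

```python
# Conceptual sketch of hyper-segmentation: endpoints can only reach each other
# if their segments are identical or an explicit authorization exists.
# Segment and endpoint names are invented to match the food-truck analogy.

class SegmentedNetwork:
    def __init__(self):
        self.membership = {}     # endpoint -> segment ("VSN")
        self.authorized = set()  # explicitly permitted (src_segment, dst_segment) pairs

    def attach(self, endpoint: str, segment: str) -> None:
        self.membership[endpoint] = segment

    def authorize(self, src_segment: str, dst_segment: str) -> None:
        self.authorized.add((src_segment, dst_segment))    # the "virtual road"

    def revoke(self, src_segment: str, dst_segment: str) -> None:
        self.authorized.discard((src_segment, dst_segment))

    def can_reach(self, src: str, dst: str) -> bool:
        s, d = self.membership[src], self.membership[dst]
        return s == d or (s, d) in self.authorized

net = SegmentedNetwork()
net.attach("food-truck", "vsn-food-service")
net.attach("cafeteria", "vsn-food-service")
net.attach("munitions-db", "vsn-restricted")
net.attach("ballfield-kitchen", "vsn-event")

print(net.can_reach("food-truck", "cafeteria"))          # True: same segment
print(net.can_reach("food-truck", "munitions-db"))       # False: no road exists

net.authorize("vsn-food-service", "vsn-event")           # temporary road for the event
print(net.can_reach("food-truck", "ballfield-kitchen"))  # True while the event runs

net.revoke("vsn-food-service", "vsn-event")              # event over, road retracted
print(net.can_reach("food-truck", "ballfield-kitchen"))  # False again
```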

To explain more about this approach, Jean Turgeon, Vice President and Chief Technologist for SDN at Avaya, has a three-part blog series on the security benefits of end-to-end network segmentation.

Jean Turgeon mentions the 2013 data breach at Target. The hackers gained access to an HVAC network and wandered around until they gained access to PCI information. If Target had implemented hyper-segmentation, the worst the hackers could have done was change the ambient environment, hardly an event that would have made headlines around the world or still be a blog topic three years later.

In part 3 of this series, we’ll explore the role of data encryption in preventing data loss.