PSTN Abandonment: Is it happening?

What was once a 20-year life cycle for core switched voice network equipment (the central office) has been reduced to 10 years, or even less. When you reduce a piece of equipment's natural lifespan, you increase the monthly cost of its amortization accordingly.

Decreasing Equipment Lifecycles
Based on simple math alone, if you bought a “box” to provide a service to your customers, and the life expectancy of that box was 20 years, you could easily calculate your cost per month per customer. If, because of new technology, that life expectancy were reduced to five years, your monthly amortized cost would increase fourfold to compensate.

Diminishing Customer Base
By the same simple math, if you bought a “box” to provide a service to your customers and your customer base then diminished by 50%, your amortized expense per customer under the “20 year model” above would double. The short sketch below works through both cases.
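To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The box cost and subscriber counts are invented, illustrative figures, not carrier data; the point is only how a shorter lifecycle and a smaller customer base each raise the amortized cost per customer.

```python
# Illustrative amortization math only; the box cost and customer counts are made up.
def monthly_cost_per_customer(box_cost, life_years, customers):
    """Straight-line amortization of one 'box', spread across its subscribers."""
    return box_cost / (life_years * 12) / customers

BOX = 1_000_000.0   # hypothetical cost of a central-office "box"

base     = monthly_cost_per_customer(BOX, life_years=20, customers=10_000)
short    = monthly_cost_per_customer(BOX, life_years=5,  customers=10_000)
shrunken = monthly_cost_per_customer(BOX, life_years=20, customers=5_000)

print(f"20-year life, full customer base: ${base:.2f} per customer per month")
print(f" 5-year life, full customer base: ${short:.2f} per customer per month (4x)")
print(f"20-year life, half customer base: ${shrunken:.2f} per customer per month (2x)")
```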

Decreased Support Cost For New Equipment
The legacy network required a trained workforce, roving around in vehicles, full of expensive test equipment. New modern networks can reside in “dark centers” where access to programming and diagnostics is all accomplished remotely through a data connection. This does nothing to reduce the expense of a trained workforce, but it does remove the requirement to have that workforce out in the streets in vehicles. Not only does this eliminate transportation expenses, it reduces the average time to repair since travel is not required.

The Perfect Storm: SANDY
Late in the fall of 2012, Hurricane Sandy barreled its way up the East Coast, causing significant damage to our telecommunications infrastructure from Washington, DC to New England. It was a classic example of the 100-year storm: in addition to causing several outages, it destroyed much of the infrastructure outright. This poses a unique problem for telecommunications carriers like AT&T, Verizon and CenturyLink. Do they rebuild their aging infrastructure that was just recently amortized off the books, or was just about to be? If they do, they have to start the “20 year clock” all over again, yet they are faced with statements that the PSTN will start to go away in five short years. [See my blog at www.Avaya.com/Fletcher – PSTN to end in 2018]

It’s not surprising, then, that stories are cropping up like the one about Fire Island, New York, where Verizon is attempting NOT to restore the legacy telephone network on copper lines. See the story “Verizon Seeks to Abandon Landlines on Fire Island” [http://stopthecap.com/2013/03/20/verizon-seeks-to-abandon-landlines-on-fire-island-wireless-or-you-are-on-your-own/].

The story reports that concerned residents may register a complaint either by filling out a complaint form on the New York State Public Service Commission website or by calling the NYSPSC directly at (800) 342-3377.

The story also reports that “Verizon officials have defended their decision, claiming a wireless system is more robust and can withstand severe weather better than a wired network.” However, Verizon has reportedly lost 25 percent of its landline business in the last two years, and the company claims 80 percent of the calls it handles to and from the island already go through Verizon Wireless.

The question that remains is, “Does the Verizon Wireless Voice Link service offer the same functionality as traditional land lines?” Apparently, the answer to that question is “NO.” Verizon’s response is that Voice Link is a voice-only product. It does not support advanced services such as:

  • Broadband
  • Telephone modem connections
  • Faxing
  • Alarm monitoring
  • Home medical monitoring
  • TDD/TTY for the hearing impaired or deaf
  • Credit card processing

Once Again E911 is Questionable
Reportedly, it does support E911; however, I would like to know through what mechanism E911 is being supported. If it is being treated as a wired, fixed location, then I have a concern about someone moving the service and not updating that location. If it provides E911 through the cellular network, just pick up the newspaper and you will see a plethora of stories in which public safety officials recommend that you NOT use a cellular phone, but instead use a landline phone, which by default provides address information to a 911 operator.

Cellular phones do not always use GPS positioning, especially indoors where a GPS signal is not available. In those cases, TDOA (time difference of arrival) algorithms are used to estimate the distance of the device from multiple towers, thereby providing only a general area. If interpreted incorrectly, public safety may show up at your neighbor’s house while you lie on the floor unable to move or speak.
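For illustration only, here is a minimal sketch of the idea behind TDOA positioning: given the differences in arrival time at a few towers, a rough position can be recovered, but only to a general area. The tower coordinates, the caller location and the brute-force grid search are all hypothetical simplifications, not how any carrier's location platform actually works.

```python
# Toy TDOA (time difference of arrival) example; all positions are invented.
import itertools

C = 299_792_458.0  # speed of light, meters per second

towers = [(0.0, 0.0), (3000.0, 0.0), (0.0, 4000.0)]  # hypothetical tower sites (m)
caller = (1200.0, 900.0)                             # unknown in a real system

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

# The measured quantity: arrival-time difference relative to the first tower.
tdoa = [(dist(caller, t) - dist(caller, towers[0])) / C for t in towers]

def mismatch(p):
    """How badly a candidate point p disagrees with the measured differences."""
    pred = [(dist(p, t) - dist(p, towers[0])) / C for t in towers]
    return sum((m - e) ** 2 for m, e in zip(tdoa, pred))

# Brute-force search over a coarse grid; real systems solve this analytically.
grid = itertools.product(range(0, 5001, 50), range(0, 5001, 50))
best = min(grid, key=lambda p: mismatch((float(p[0]), float(p[1]))))
print("estimated position:", best)  # lands near (1200, 900): a general area only
```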

Since text messaging to 911 is NOT yet available, the requirement for an analog line still exists for people who are deaf or hard of hearing and require the use of a TTY or TDD device. Once again, this is a classic example of how this community is completely ignored from a technology perspective and treated like second-class citizens.

Big Brother is NOT Watching
Most conspiracy theorists think that there is too much oversight and watching by the government. No matter what side of that argument you sit on, there is cause for concern where no government oversight exists at all. While customers are afforded some level of protection with legacy telephony services through oversight by the Public Service Commission and the FCC, they will lose that protection if things go wrong with Voice Link. As it stands today, Voice Link is an unregulated service not subject to government oversight.

Got a complaint? Call the PSC.
Oh wait, that’s right, you can’t call the PSC, as your phone line is dead.

New England is Not Alone
Hear that rumbling? That’s not the daily thunderstorm rolling through the Sunshine State. It’s Verizon’s “Project Thunder.” It seems that the extensive buried underground facilities are deteriorating beyond repair, and if you are outside of a FiOS service area, when you report trouble on your copper circuit you will be persuaded to move to Voice Link wireless service.

The Crystal Ball Predicts . . .
This evolution of the Public Switched Telephone Network should come as no surprise to anyone. Several articles have been written, including Teresa Mastrangelo’s last July, “Verizon Getting Aggressive with Copper Plant Shutdown” [http://broadbandtrends.com/blog1/2012/07/23/verizon-getting-aggressive-with-copper-plant-shutdown/].

As published in “BroadbandTrends”:

“Historically, the gating factor to shutting down the PSTN was regulatory. However, Verizon has successfully lobbied in Florida and Virginia and Texas to pass some forms of deregulation, which allows Verizon “to invest where customers want us to invest and start to sunset some of the older technology.” As such, it appears that once FiOS reaches a certain penetration level in a market, the decision is made to migrate all customers towards FiOS as is already happening in markets such as Dallas.”


Want more Technology, News and Information from Avaya? Be sure to check out the Avaya Podcast Network landing page at http://avaya.com/APN. There you will find additional Podcasts from Industry Events such as Avaya Evolutions and INTEROP, as well as other informative series by the APN Staff.


Thanks for stopping by and reading the Avaya CONNECTED Blog on E9-1-1. I value your opinions, so please feel free to comment below or, if you prefer, email me privately.

Public comments, suggestions, corrections and loose change are all graciously accepted 😉
Until next week... dial carefully.

Be sure to follow me on Twitter @Fletch911



APN is Powered by CacheFly
CacheFly is the world’s fastest CDN, delivering rich-media content up to 10x faster than traditional delivery methods. With a proven track record and over a decade’s worth of CDN experience, companies around the world choose the CacheFly CDN for reliable and unbeatable performance. For more information, visit www.cachefly.com

Related Articles:

Call it what you will: Multi-channel, Omnichannel—It isn’t about the Contact Center!

At this point, we know that most companies are competing exclusively on the customer experience (83%, according to Dimension Data). McKinsey Insights shows that effective customer journeys are impactful: they can increase revenue by up to 15%, boost customer satisfaction by up to 20%, and improve predictive insight into customers’ needs by up to 30%. The issue isn’t that companies fail to understand the importance of the customer experience (CX). The problem is that over half of companies today fail to grasp what is arguably the single most important driver of a successful CX strategy: organizational alignment.

This isn’t to say that companies aren’t taking the necessary steps to strengthen their CX strategies. Five years ago, 92% of organizations were already working to integrate multiple interaction channels—call it multi-channel, omnichannel, or digital transformation—to deliver more consistent, contextualized experiences. The needle is moving in the right direction. However, companies will find themselves in a stalemate if they limit the customer experience to the contact center.

Customer Experience is the Entire Brand Journey

That’s right, the customer experience is NOT about the contact center. In fact, it never was. The customer experience is instead about seamlessly supporting consumers across their entire brand journey regardless of where, when, how and with whom it happens. This means supporting not just one business area (i.e., the contact center), but the entire organization as one living, breathing entity. This means supporting not just one single interaction, but the entire experience a customer has with a company from start to—well, forever. After all, the customer journey never truly ends.

Are companies ready for this future of the customer experience? Perhaps not: 52% of companies currently don’t share customer intelligence outside of the contact center, according to Deloitte.

Executives are planning not only for contact channels to expand; most also expect these interaction journeys to grow in complexity. It’s clear that a contact-center-only structure doesn’t cut it anymore. At today’s rate of growth and change, it’s easy to see how a CX strategy can miss the mark when the entire customer journey is limited to the contact center. Imagine how much stronger a company would perform if it supported the customer experience as the natural enterprise-wide journey it is: a journey where interactions take place across multiple channels and devices, unfolding across multiple key areas of business (i.e., sales, HR, billing, marketing).

Imagine, for instance, a hospital immediately routing an outpatient to the travel nurse who cared for him last week, although she is now on the road to her next location. Imagine a bank being able to automatically route a customer to a money management expert after seeing that the last five questions asked via live chat were about account spending. Imagine a salesperson knowing that a customer attended a webinar last week on a new product launch and had submitted three questions—all before picking up the phone. Imagine a retail store associate knowing you walked in and that you were searching online for formal attire.

Contextual Awareness is Critical

Today’s CX strategy is no longer about asking the right questions: it’s about having the right information at the right time to drive anticipatory engagement. It’s no longer about being able to resolve a customer issue quickly. It’s about building an authentic, organization-wide relationship based on contextual awareness. In short, this means companies being able to openly track, measure, and share customer data across all teams, processes, and customer touch points. This ability either makes or breaks the CX today.

So, are you near the breaking point? Consider that nearly 40% of executives say their agents’ top frustration is that they can’t access all of the information they need. Fewer than 25% of contact centers today enjoy full collaboration on process design with their entire enterprise. Connected customer journeys and the overall CX are now top areas of focus as most organizations support up to nine channel options. The CX is headed for a dramatic shift toward reimagined customer engagements that incorporate technologies such as artificial intelligence, IoT, analytics, and augmented and virtual reality.

The bottom line is this: organizations must support an enterprise-wide customer journey to support the future of the CX now! They must share contextual data inside and outside of the contact center, and they need seamless and immediate access to that data anytime, anywhere, under any given circumstance. Above all, organizations need the right architectural foundation to support this anytime, anywhere ecosystem—otherwise, even their best moves will always result in a draw.

Get out of the Queue: Drive Your CX with Attribute Matching

At this point, nearly every company is working overtime to realign around two simple words: customer experience (CX). So much so that nearly 90% of companies now compete solely on CX—a drastic increase from 36% in 2010—and 50% of consumer product investments are expected to be redirected to CX innovations—like attribute matching—by the end of this year.

But what exactly does the CX consist of, especially in today’s new world of digital business innovation? This next-generation CX is supported by several advanced technologies—big data analytics, omnichannel, automation—however, these investments are all aimed at driving one thing: contextualization.

The rise of contextualized service—the ability for companies to not only gain insightful information about their customers but also deliver information in a way that is relevant and meaningful to customers based on individual circumstances to improve their experience—has evolved the CX to a point where it looks virtually nothing like it did as recently as 10 years ago. Whereas consumers once primarily focused on the act of purchasing, driven by such things as product quality and price, they now focus on the richness of brand relationships, driven by the personal value that companies deliver throughout the customer journey. Just consider that 70% of buying experiences are now based on how customers feel they are being treated. This is the key factor that sets apart market leaders like Amazon, Trader Joe’s, and Apple from the competition.

According to Accenture, there is an estimated $6 trillion in global revenue up for grabs due to dissatisfied customers constantly switching providers. The ability for companies to offer contextualized service is vital for operating at the speed of the consumer and capturing more of this market share. There’s just one thing preventing companies from seizing this limitless potential: the traditional call queue.

Every customer is familiar with the call queue. This is the place where statements like, “Your call is important to us. Please continue to hold,” and “Let me transfer you to a specialized team who can help you with that” perpetually live. It’s where exhaustive efforts to route customers to the correct service rep become lost, or where consumers must repeat the same information to multiple agents across different teams. It’s the greatest barrier preventing companies from being more dynamically connected to their consumers, and one of the greatest reasons why customers reduce their commitment to a brand.

Driving Contextualization with Attribute Matching

In a world where customers demand a profound level of connection and transparency, organizations can no longer support a contact center environment in which calls are distributed among agents who are organized by function (i.e., sales, service, support). In today’s smart, digital world, companies must transform the traditional call center into an integrated, digital communications hub. This means moving away from a siloed, metric-driven queue and instead working to put customers in touch with the best organizational resource depending on their exact need or circumstance as immediately as possible. The most effective way to achieve this is to migrate from archaic infrastructure towards an integrated, agile, next-generation platform built on open communications architecture.

Open communications architecture allows organizations to seamlessly collect, track and share contextual data across various teams, processes, and customer touch points. This integrated environment supports a real-time data repository from which businesses can pull to route customers based on needs beyond traditional characteristics (like language preference). Rather, the technology allows companies to build customized learning algorithms that drive anticipatory engagement, enabling them to match customers based on next-level variables like personality, emotion and relatability.
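As a rough illustration of the matching idea, and not a description of Avaya's or anyone else's actual routing engine, the sketch below pairs a customer with the available agent who shares the most attributes with them, rather than simply handing the contact to the next agent in the queue. The agent names and attribute tags are invented for the example.

```python
# Toy attribute-matching router; agents, tags and the scoring rule are illustrative.
from dataclasses import dataclass, field

@dataclass
class Agent:
    name: str
    attributes: set = field(default_factory=set)  # skills, interests, traits
    available: bool = True

def best_match(customer_attrs, agents):
    """Return the available agent sharing the most attributes with the customer."""
    candidates = [a for a in agents if a.available]
    if not candidates:
        return None
    return max(candidates, key=lambda a: len(a.attributes & customer_attrs))

agents = [
    Agent("Dana", {"spanish", "billing", "patient"}),
    Agent("Raj", {"running", "wearables", "returns"}),
    Agent("Mia", {"money-management", "live-chat", "empathetic"}),
]

# Customer context pulled from the shared real-time repository (made-up values).
customer = {"running", "returns", "first-contact"}
print(best_match(customer, agents).name)  # -> Raj, the avid runner
```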

Imagine, for example, a hotel routing a customer directly to an IT staffer after seeing that the person tweeted about a poor in-room Wi-Fi connection. Imagine a bank being able to route a customer to a money management expert after seeing that the last five questions asked via live chat were about account spending. Imagine an athletic apparel company matching a customer with an agent who is an avid runner after noticing that the individual recently signed up for a 5K.

The future of the CX means creating and continually building a contextualized view of customers throughout their entire brand journey. It means going beyond customer service to establish unparalleled, organization-wide relationships. It means transforming people’s lives, versus simply answering questions. This is what companies must work to align themselves with. The good news is that technology has evolved to a point where they can now easily, effectively and cost-efficiently do so.

Interested in learning more or getting beyond the queue to Redefine Your Customer and Employee Experiences? Contact us. We’d love to hear from you.

Reducing the Risks of Distributed Denial of Service Attacks

Picture what may just be one of the scariest scenarios in your career: The network has slowed to a crawl. You can barely hold a management interface, let alone control the network elements involved. The attack propagates, and as it does you watch your services drop one by one. Panic sets in. You’re experiencing a Denial of Service (DoS) attack. All resources are focused on stamping this fire out—and that may very well be the intention of the attackers.

A DoS attack might be a smokescreen to get you to focus elsewhere while the intruder goes about covert business in a much safer fashion, leaving little forensics afterward.

DoS attacks are an easy thing to comprehend, and even the term Distributed Denial of Service (DDoS) is an easy extension. But the strategy behind why they’re used, and their intent, can vary dramatically. A DoS attack can occur at varying levels of sophistication. Here’s a quick breakout from the simplest to the most complex attacks:

  • Network-level attacks: the simplest ones—TCP, UDP and ICMP floods
  • Reflective/amplified attacks: service-focused—DNS, NTP, SNMP and SSDP floods
  • Fragmentation: session-specific—overlapping, missing or excessive fragments
  • Application-specific: repetitive GET, slow READ or loop calls
  • Crafted: stack- and protocol-level attacks on buffer resources

These methods are often overlapped in a targeted fashion. In essence, the attack is a series of waves that each hit with varying degrees of sophistication and focus. Other times the attack is relatively primitive and easy to isolate; the reason is that, at the simplest levels, it’s an easy thing to do. As an example, a disgruntled student, upset over a new vending machine policy, could mount a DoS attack against his or her school administration. On the other end of the spectrum is a much darker orchestration, the sleight of hand to get you to look elsewhere. This is typically the signature of an Advanced Persistent Threat (APT).

Unless an attack is very simple and short-lived, it needs to be distributed in the way it operates: it needs to be generated from various points of origin. This is referred to as a DDoS attack. The attacker needs to coordinate a series of endpoints to execute some particular event at the same point in time or, in more sophisticated examples, phased against a time series. For a DDoS attack, the attacker requires a command and control (C2) capability, meaning they need access to, and responses from, the compromised systems. That collection of compromised systems is referred to as a botnet.

Botnets do not have to be sophisticated to be successful. They only have to implement a simple set of instructions at the right point in time. Take the recent reflective/amplified DDoS attack on the Dyn managed DNS service on the East Coast of the U.S., which affected several large firms such as Amazon and Yahoo. The attack was mounted from residential video surveillance cameras. Even though there was no direct intrusion into those firms, they were still impacted. That leads us to two lessons.

Lesson number one: Security in IoT needs to be taken more seriously in the product design stages. Perhaps the concept and treatment of residential security systems needs to be rethought.

Lesson number two: as we move to outsourcing and cloud services, we need to realize that we also spread our exposure to risk. Due diligence is required to ensure that service providers and partners are doing their part in end-to-end security. But recall that I mentioned the orchestrated attack originated from the residential network? This brings a new degree of challenge as we look at the new world of consumer IoT.

How do we maintain security in that sector? Clearly the residence itself should uphold best practices with a well-maintained and monitored gateway. But let’s face it: this is generally not going to happen. Monitoring for behaviors and abnormalities at the provider interface level is the next best safety net, and many providers are moving toward this goal.

The other key point to remember about botnets is that in order to command, one has to control. This can happen in various ways. One is automatic: the bot infects a device and sits until a predefined time, then activates. This is the simplest. Another method requires true C2. Either way, bad code takes up residence or existing code gets leveraged in negative ways. You should be able to pick out the anomalies, as in the sketch below.
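As a simple illustration of what picking out the anomalies can mean in practice, here is a minimal sketch with invented traffic numbers and an arbitrary threshold; it is not a production intrusion-detection system. It flags a host whose outbound request rate suddenly departs from its own baseline.

```python
# Toy behavioral anomaly check; the sample data and threshold are illustrative.
from statistics import mean, stdev

# Requests-per-minute samples per host, oldest first; made-up numbers.
observed = {
    "cam-01": [3, 4, 2, 3, 4, 250],         # sudden flood: possible bot activity
    "desktop-7": [12, 15, 11, 14, 13, 16],  # normal variation
}

def anomalies(samples, z_threshold=3.0):
    """Flag hosts whose latest sample sits far outside their own history."""
    flagged = []
    for host, rates in samples.items():
        history, latest = rates[:-1], rates[-1]
        if len(history) < 2:
            continue                        # not enough baseline to judge
        mu, sigma = mean(history), stdev(history)
        sigma = sigma or 1e-9               # avoid division by zero
        if (latest - mu) / sigma > z_threshold:
            flagged.append(host)
    return flagged

print(anomalies(observed))                  # -> ['cam-01']
```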

Proper design with hyper-segmentation can greatly reduce the risk of propagation from the initial infection. The botnet is contained and should be readily identified, if you’re watching. Are you?