911 Answer Delays – Ready, Set, NO!

This Avaya CONNECTED Blog is also available as an MP3 Audio File


Nobody likes to wait for anything, especially for a 911 call taker during an emergency, and average call answer times at 911 Public Safety Answering Points (PSAPs) have been under scrutiny lately in several states. But before you throw the baby out with the bath water and criticize 911 call taker efficiency, you need to validate the data.

Before you measure something, you need to define where you start measuring. For example, when you run a foot race, whether it’s 50 yards or 5 km, there is a distinct starting point and finishing point to the race. The clock starts when you cross the start line and stops when you cross the finish. Sounds logical, right? Looking at statistics for emergency calls is no different, as long as you understand where the clock started, and why.

While discussing average answer times with a colleague this past week, a point came up in the conversation that has created a significant amount of confusion around this very topic. In an emergency, seconds matter, and as it turns out, some emergency dispatchers were being penalized for not meeting a state-mandated answer time. Other agencies were accused of “fudging the numbers” in an effort to make their statistics fall within acceptable guidelines. Looking at the data, however, it became very apparent that the REAL problem was that, potentially, no one was paying attention to when the clock started, only to when it stopped.

MSN: A 911 response in Detroit takes how long?
San Diego County’s 9-1-1 Communications Home Page


So imagine yourself running a five-minute mile, only to find out later that your time was actually 10 minutes because the clock started while you were tying your shoes. Not really too fair, right? Well, the same goes for 911. For those that have asked for more ‘tech content’, here you go. For the rest of you, you’re welcome to read on, especially if it’s bedtime and you’re looking for a natural sleep aid!


The Anatomy Of a 911 Call
Unless you’ve listened to the trunk side of a 911 call, you might be slightly astonished at the archaic analog nature of getting a call from Point A to Point B. About two years ago, I was fortunate enough to receive an audio clip of a 911 call that I quite often use for training purposes, as it highlights several points that otherwise aren’t very obvious.

911 Call Preamble: Getting Ready To Get Ready
911 CAMA trunks, which connect the PSAP to the 911 tandem central office, are specialized analog circuits similar to Centrex lines. When a call is presented to them from the 911 network, the signaling mechanism is not ringing voltage, as found on a normal telephone line. Instead, the central office will “wink” towards the PSAP by applying reverse battery on the circuit. The PBX will then “wink” back towards the central office, confirming its readiness to accept a call. The central office will then “wink” back at the PBX, confirming that the response was received and that digits will be coming down the line.
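To make that three-way handshake concrete, here is a minimal sketch of the sequence as an event timeline. The timestamps are purely illustrative assumptions on my part, not carrier specifications:

```python
# Minimal sketch of the CAMA trunk "wink" handshake described above.
# The timestamps are illustrative assumptions, not carrier specifications.

from dataclasses import dataclass

@dataclass
class Event:
    t: float      # seconds from trunk seizure
    source: str   # "CO" (central office) or "PBX"
    action: str

handshake = [
    Event(0.00, "CO",  "wink: reverse battery applied (call offered)"),
    Event(0.25, "PBX", "wink: ready to accept the call"),
    Event(0.50, "CO",  "wink: acknowledged, digits coming down the line"),
]

for e in handshake:
    print(f"t={e.t:4.2f}s  {e.source:<3}  {e.action}")
```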

When you look at the audio as a sound wave, these “winks” are clearly noticeable as sharp spikes in the waveform and can even be heard as loud clicks on the line.

In this example you can clearly see the three winks at the very beginning of the call, and if you are measuring answer time from the central office side, this would be a likely spot to start counting from zero.

[Figure: trunk-side audio waveform showing the three winks at the start of the call]
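If you have a capture of the trunk audio, you can even locate those clicks programmatically. Here is a rough Python sketch, assuming a 16-bit mono WAV recording; the file name, threshold, and minimum spacing are illustrative guesses you would tune against a real capture:

```python
# Rough sketch: find wink "clicks" as amplitude spikes in a trunk recording.
# Assumes a 16-bit mono WAV file; the file name, threshold, and 50 ms
# spacing are illustrative assumptions.

import wave
import numpy as np

with wave.open("911_trunk_capture.wav", "rb") as w:   # hypothetical capture
    rate = w.getframerate()
    samples = np.frombuffer(w.readframes(w.getnframes()), dtype=np.int16)

amplitude = np.abs(samples.astype(np.float64))
threshold = 0.8 * amplitude.max()     # winks stand well above speech/tones

events = []
for i in np.where(amplitude > threshold)[0]:
    if not events or i - events[-1] > 0.05 * rate:   # collapse each spike
        events.append(i)

for n, i in enumerate(events, start=1):
    print(f"spike {n} at t = {i / rate:.3f} s")
```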

At this point in time, the audio path is now open between the central office and the 911 PSAP call-taking equipment. The central office then signals information to the CPE in-band, using Multi-Frequency (MF) tones for the digits and specialized signaling characters, to indicate the ANI (or pANI) of the inbound 911 call. Depending on the area and the carrier, the ANI that is received could be 7 (NNX-XXXX), 8 (I-NNX-XXXX), or 10 digits (NPA-NNX-XXXX) in length. Once again, looking at our example audio, the MF tones are clearly discernible in the waveform.

[Figure: audio waveform of the MF digit spill]
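For the technically curious, here is a sketch of how a decoder might classify a single MF tone frame using the Goertzel algorithm. The six frequencies and their digit pairings are the standard R1 MF assignments; the sample rate, frame length, and synthesized test digit are illustrative assumptions, and a real decoder would also need framing, level, and timing validation:

```python
# Sketch: classify one MF (not DTMF) tone frame via the Goertzel algorithm.
# Frequency pairs are standard R1 MF; everything else is an illustrative
# assumption. A real decoder also needs framing/level/timing validation.

import numpy as np

MF_FREQS = [700, 900, 1100, 1300, 1500, 1700]   # Hz, R1 MF signaling
MF_DIGITS = {
    (700, 900): "1",   (700, 1100): "2",  (900, 1100): "3",
    (700, 1300): "4",  (900, 1300): "5",  (1100, 1300): "6",
    (700, 1500): "7",  (900, 1500): "8",  (1100, 1500): "9",
    (1300, 1500): "0", (1100, 1700): "KP", (1500, 1700): "ST",
}

def goertzel_power(frame, freq, rate):
    """Relative power of one frequency within a block of samples."""
    n = len(frame)
    k = round(n * freq / rate)
    coeff = 2 * np.cos(2 * np.pi * k / n)
    s_prev = s_prev2 = 0.0
    for x in frame:
        s = x + coeff * s_prev - s_prev2
        s_prev2, s_prev = s_prev, s
    return s_prev**2 + s_prev2**2 - coeff * s_prev * s_prev2

def decode_mf_frame(frame, rate=8000):
    powers = {f: goertzel_power(frame, f, rate) for f in MF_FREQS}
    loudest = sorted(MF_FREQS, key=powers.get, reverse=True)[:2]
    return MF_DIGITS.get(tuple(sorted(loudest)), "?")

# Example: synthesize the digit "5" (900 Hz + 1300 Hz) and decode it
rate = 8000
t = np.arange(int(0.068 * rate)) / rate          # ~68 ms tone
frame = np.sin(2 * np.pi * 900 * t) + np.sin(2 * np.pi * 1300 * t)
print(decode_mf_frame(frame, rate))              # -> 5
```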

You will also notice another audio spike: the PBX signaling a “wink” back to the central office, acknowledging receipt and acceptance of the ANI information. It also serves as a go-ahead signal for the central office to open up the audio channel between the original caller and the PSAP.

At this point, based on the audio in this example, the PBX applies ringing to the line, and you can see the abrupt change in audio as the caller’s audio is now also patched through.

As an interesting side note, what has happened up to this point is fairly critical in processing and delivering the 911 call to the PSAP. I have seen cases in the past where adjunct equipment was installed on the CAMA trunks to capture the ANI information and send it over to the CPE 911 equipment for processing. But because its signaling back to the central office was not in the proper sequence, it returned answer supervision too early, and the caller’s audio actually corrupted the receipt of the MF tones. In fact, as it turns out, a woman screaming can often mimic an MF tone, causing the system to process garbage data and potentially making the call fail.

Another interesting thing happens at this point: the CPE is now aware of the call, has the information required to process it, and typically generates a Call Detail Record (CDR) start entry. That is, once again, another potential starting point for the call. The only problem is that this starting point is three seconds out of sync with the central office starting point.

The next step in the sequence is for the CPE or PBX to process the call and deliver it to a 911 call taker. After analyzing and listening closely to this sample recording, it appears that the 911 call taker answered the line immediately after the first ring (and remember, a ring cycle is two seconds on followed by four seconds of silence). Since we cannot see or hear a second ring, we can assume that the call was picked up almost immediately after the first one; in fact, you can see a small blip of audio when the line is connected, immediately followed by the dispatcher saying “911, what is the location of your emergency?”

[Figure: audio waveform at the answer point, showing the first ring followed by the dispatcher’s greeting]

At this point on the timeline, nine seconds have now passed from when the central office initiated the call. Depending on where the starting point is, the data can be significantly skewed, and the dispatcher could actually be penalized for a nine-second delay when, in fact, they answered the call within two seconds of it being presented to them.
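The arithmetic is trivial, but it is worth laying the three clocks side by side. A back-of-the-envelope sketch, using the illustrative timestamps from the call above (the ring-applied time is my assumption, inferred from the two-second pickup):

```python
# Back-of-the-envelope sketch of how the clock-start point skews the metric.
# Timestamps (in seconds) are the illustrative values from the call above;
# the ring-applied time is an assumption.

t_co_wink      = 0.0   # central office offers the call (first wink)
t_cdr_start    = 3.0   # CPE writes its CDR start record (~3 s later)
t_ring_applied = 7.0   # PBX rings the call taker's position (assumed)
t_answered     = 9.0   # call taker picks up

print(f"From the CO wink:          {t_answered - t_co_wink:.0f} s")
print(f"From the CDR start record: {t_answered - t_cdr_start:.0f} s")
print(f"From ring at the position: {t_answered - t_ring_applied:.0f} s")
```

Same call, three different “answer times,” which is exactly why a mandated answer-time threshold means little unless it also mandates the starting point.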

Keeping it fair for everyone
Let’s face it: we certainly want to make sure that our nation’s public safety operators are doing their job and are performing within the accepted national specifications. What we have to be careful of, though, is to make sure that we are not penalizing them by looking at bad data.


Want more Technology, News and Information from Avaya? Be sure to check out the Avaya Podcast Network landing page at http://avaya.com/APN. There you will find additional Podcasts from Industry Events such as Avaya Evolutions and INTEROP, as well as other informative series by the APN Staff.


Thanks for stopping by and reading the Avaya CONNECTED Blog on E9-1-1, I value your opinions, so please feel free to comment below or if you prefer, you can email me privately.

Public comments, suggestions, corrections and loose change are all graciously accepted 😉
Until next week. . . dial carefully.

Be sure to follow me on Twitter @Fletch911



APN is Powered by CacheFly
CacheFly is the world’s fastest CDN, delivering rich-media content up to 10x faster than traditional delivery methods. With a proven track record and over a decade’s worth of CDN experience, companies around the world choose the CacheFly CDN for reliable and unbeatable performance. For more information, visit www.cachefly.com

Related Articles:

Connected Health: The Digital Transformation of Care Innovation

All around the world, across the spectrum of disease, IT is changing our approach to chronic conditions and how we approach connected health. Text messages remind people living with HIV to take their medication and keep their medical appointments. Smartphone apps diagnose post-traumatic stress disorder by analyzing a user’s voice. Online forums enable breast cancer patients and survivors to trade information related to every stage of their care.

Collectively known as “connected health,” these recent, IT-driven innovations represent the intersection of digital technology and care. They’re transforming not only the way people manage their own health, but also the way they interact with their healthcare providers.

Unintended, but welcomed, consequences

By and large, connected health is an adaptation of technologies that were originally developed for other purposes. Mobile technology started out as a voice communication tool. Instant messaging was an outgrowth of online chat rooms. Social media became a means for making new friends.

Now these technologies have evolved and converged in a way that is overcoming formerly intractable barriers to care. By minding the agenda of day-to-day care, for instance, they give people the opportunity to adhere to their treatments even where clinical visits are impractical due to cost, distance or availability. And by helping patients preserve their privacy, make sense of their conditions, and learn from others with similar experiences, health IT can lift the stifling veil of stigma from disease.

The implications don’t stop with the individual. Connected health also helps people manage their own disease state so they don’t spread it to others. Across whole populations, it can allow interventions aimed at preventing chronic diseases, such as behavioral modifications that reduce the incidence of obesity.

Changing care innovation paradigms

In all these respects, connectivity is bringing to medicine a level of accountability and democratization that seemed unimaginable not so long ago. But it’s also dialing up the urgency of some unanswered questions. Among them:

  • What information is appropriate to gather? Not all information has value in a healthcare setting.
  • Will information remain proprietary? It’s unclear to what extent stakeholders are willing to advance the interests of the community ahead of the interests of a company.
  • What would a sharing paradigm look like? If companies were to share information, they would need a seamless, cohesive way to do it.
  • How will privacy and security be preserved? Artificial intelligence and machine learning are critical pieces of this equation.
  • How will healthcare use technologies to create new models of care? Today’s applications are largely geared toward improving quality and outcomes of existing care models.

There’s no one-size-fits-all solution to these questions. Neither is care innovation strictly a technology issue. Technologists must collaborate with clinicians, patients, and patient advocates to take care coordination and operational efficiency to the next level in helping people cope with long-term diseases. A new, technology-powered paradigm—one that transcends existing constraints of time and resources—can bring a welcome transformation to the ongoing management of care coordination and the patient experience.

Avaya Equinox, Now with Team Collaboration, Just Got More “Go-To”

 

I recently read that the Apple App Store now contains about 2.2 million apps. It’s an amazing number and a testament to the creativity of developers and the variety of our human interests and needs. But it made me wonder: how many apps can we really use on a regular basis, and for what? Are they for fun? Are they informative? Do they increase team collaboration? If your smartphone is like mine, you’ve got a number of go-to apps that you use regularly, let’s say weekly, and probably a few you use daily or almost constantly. Then there are the Tier 2 apps hiding in your folders that seldom see the light of day. It’s fun to delve into those folders every few months and rediscover the apps that looked so interesting at the time but now languish for months on end.

What’s fun for personal apps, however, can often become a nightmare in the work world. We all have someone in the office who needs to be first with the latest hot app, to provide their take on what’s cool and what’s not, and to make everyone else feel a little short of the mark for not using it first. Of course, most of these apps get frenzied activity for about 3½ days and then slip into oblivion. The issue for most of us is that we simply have too much on the go to be constantly changing the way we work and coercing others to adopt our favorite app of the week.

What my work day really needs is a true go-to app: one that makes me more productive, more reachable, more on track, and that lets me get to my tasks and meetings with a single touch. If you’ve read my previous blogs, you know where I’m going with this: my go-to app is Avaya Equinox®. With its “mobile-first” Top of Mind screen, it provides me with at-a-glance visibility into meetings, instant messages and my call history, giving me a single place to keep up to date and productive regardless of where my day may take me.

I’m happy to say that my go-to app just got more, well, “go-to”. The Avaya UC experience that I rely on every day is now being extended with the integration of a cloud-based team collaboration capability. It gives me the full benefits of a teamwork environment that integrates voice, video, persistent team chat and messaging, along with file and screen sharing, all from within the Avaya Equinox experience.

Let me give you an example of these new Equinox team collaboration capabilities in action. I’m currently working with an external vendor on a major project. Our work will carry on for several quarters with new materials being created that need review, discussion, and likely several rounds of back and forth. To get the project kicked off and a vendor selected, we needed the full gamut of collaboration capabilities from simple voice calls to several all-day video conferences with participants joining from around the world – something easily managed with Avaya Equinox. 

The next step was to establish a core team and shift into a regular cadence of interaction. Adding participants to the team collaboration space from both inside and outside Avaya was a snap, and we were instantly able to communicate with one another. I use one-to-one instant messaging for small items or questions, and team chat when I want to involve the entire team in broader issues. Tasks get assigned within Avaya Equinox to keep our review cycles on track, and we use the file sharing capability to avoid clogging up our email. If I’m offline at some point, due to travel or other activity, a quick glance at Avaya Equinox gets me back up to speed with the team’s progress.

On a weekly basis, we usually need some face time, and Avaya Equinox provides complete meeting capabilities, including audio/video conferencing with screen sharing, so we all gain the advantages of personal interaction. No matter where we are or what we are doing, we can all collaborate on content in real time – it’s more productive and prevents misunderstandings across a widely distributed team.

In many ways our team collaboration space has become a virtual “war room.” Information is clearly visible and easily shared, I can see who’s available at any time, and formal and informal discussions can be initiated with ease.

There’s no shortage of apps available to anyone with a mobile device and the time to spend browsing around an app store. The real challenge is finding those few go-to apps that you’ll use every day. If you aren’t using Avaya Equinox yet, I’d encourage you to give it a try. I think it will make your short list of “go-to” apps and in a month or two, you might wonder how you got through your day without it!

Building SMS Text Bots is a Breeze

As a nerdy guy, I love movies about other nerdy guys. Give me movies like “A Beautiful Mind,” “The Theory of Everything,” or “Einstein and Eddington” (two nerdy scientists for the price of one), and I am in geek heaven. Recently, I was thrilled by “The Imitation Game”—the story of Alan Turing and his quest to break Germany’s WWII secret code. While I would never dare to compare myself to Mr. Turing, I like to think that we would have a few things in common. One area would be our shared interest in natural language processing and intelligent behavior.

Way back in 1950, Turing crystallized his research into these studies in what has become known as The Turing Test. Simply put, The Turing Test is a test of a machine’s ability to impersonate a human being. For a machine to pass The Turing Test, it must be able to participate in a conversation with a human being to the point where the human doesn’t realize that he or she is interacting with a machine. I can only imagine what Turing would think of today’s technology such as Siri, Alexa, and Google Home. Better yet, imagine Alan conversing with the robot, Sophia. Would he be excited or frightened? Personally, I am a little of both.

Real or Not

If you have been reading my articles on No Jitter and here on the Avaya blog, you know how enamored I am of the Breeze and Zang workflow designers. Although I have spent the bulk of my professional life writing software in programming languages such as C++ and Java, I have fallen in love with how quickly I can use the Breeze/Zang tools to go from idea, to prototype, to a production-quality application. I like to say that if you can draw it on a whiteboard, you can “code” it with Breeze.

So, the day I decided to build a text bot, I knew exactly how I was going to do it. Starting with a list of things I wanted my text bot to do, I was soon drawing out message flows and decision points (if this, do that). Once I was happy I had captured all the salient points, I turned to my computer and began typing. Early on, I realized that there was no way on earth I could capture all the different text messages my application would need to process. For instance, how many different ways can you ask for the location of a store? “Where are you located?” “What is your address?” “What city are you in?” “How can I find you?” The variations are nearly endless.

To solve this problem, I turned to natural language processing (NLP) and artificial intelligence (AI). That, of course, led me to the 500-pound gorilla in the room—IBM Watson. With Watson, I can build “Conversations” that allow me to create intents, entities, and dialogs. Intents are used to classify a request. You can think of entities as modifiers to those intents. Dialogs are the words you want to “speak” after determining the intent.

For example, consider the phrase “Are you open on Sunday?” Here, the intent could be classified as “hours.” The entity is “Sunday.” A proper dialog could be, “We are open on Sunday from 12:00 to 5:00.” To keep things simple, I created three intents for my bot: Directions, Holidays, Hours. Those intents resulted in three dialogs. I left off entities for now.
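To illustrate the intent/entity/dialog model without reproducing Watson’s actual API, here is a toy keyword-based stand-in; the intents, keywords, and canned dialogs below are all hypothetical:

```python
# Toy stand-in for the intent/entity/dialog model described above.
# This is NOT the Watson API -- just a keyword matcher that shows the shape
# of the three concepts. All intents, keywords, and dialogs are hypothetical.

INTENTS = {
    "hours":      ["open", "close", "hours"],
    "directions": ["where", "located", "address", "find you"],
    "holidays":   ["holiday", "christmas", "thanksgiving"],
}
DAYS = ["monday", "tuesday", "wednesday", "thursday",
        "friday", "saturday", "sunday"]
DIALOGS = {
    "hours":      "We are open every day from 12:00 to 5:00.",
    "directions": "You can find us at 123 Main Street.",   # hypothetical
    "holidays":   "We are closed on major holidays.",
}

def reply(text: str) -> str:
    lowered = text.lower()
    # Classify the request: the "intent"
    intent = next((name for name, words in INTENTS.items()
                   if any(w in lowered for w in words)), None)
    if intent is None:
        return "Sorry, I didn't understand that."
    # Look for a modifier: the "entity"
    day = next((d for d in DAYS if d in lowered), None)
    if intent == "hours" and day:
        return f"We are open on {day.capitalize()} from 12:00 to 5:00."
    # Otherwise speak the canned "dialog"
    return DIALOGS[intent]

print(reply("Are you open on Sunday?"))
# -> We are open on Sunday from 12:00 to 5:00.
```

The whole point of Watson, of course, is that it classifies the nearly endless phrasings statistically instead of relying on brittle keyword lists like this one.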


My next decision point had to do with maintaining a conversation over many text messages. For that, I chose Avaya’s Context Store, which allows me to temporarily store information about a text conversation. This information can then be accessed over the life of the chat.
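Context Store’s actual API isn’t shown here, but a minimal stand-in, keyed by the texter’s phone number and expiring after a period of silence, captures the idea:

```python
# Minimal stand-in for the context-store idea: per-caller conversation state
# that expires after a quiet period. Avaya's Context Store API differs; this
# only illustrates the shape of the solution.

import time

class ChatContexts:
    def __init__(self, ttl_seconds: int = 600):
        self.ttl = ttl_seconds
        self.store = {}   # phone number -> (last_seen, context dict)

    def get(self, caller: str) -> dict:
        last_seen, ctx = self.store.get(caller, (0.0, {}))
        if time.time() - last_seen > self.ttl:
            ctx = {}                      # conversation expired; start fresh
        self.store[caller] = (time.time(), ctx)
        return ctx

contexts = ChatContexts()
ctx = contexts.get("+15555550100")        # hypothetical texter
ctx["last_intent"] = "hours"              # remember where this chat left off
# Each phone number gets an independent context, which is also what keeps a
# multi-user bot from mixing up conversations:
assert "last_intent" not in contexts.get("+15555550199")
```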


Now that I had an engine to process incoming text messages (Watson) and a method of maintaining a chat’s context (Context Store), it was time to launch the Avaya Breeze Engagement Designer. I will admit that I still had a few logic problems to work through, but I would not be stretching the truth if I said that I had a rough draft of my text bot up and running in less than an hour. Working through those remaining issues consumed another couple of hours, but in a fraction of the time it would have taken me to write the application in Java, my bot was accepting text messages, building contexts, and texting back replies.


I should also say that my bot is fully multi-user. It didn’t matter if one or one hundred people were all texting in at the same time: my bot kept track of each individual conversation, and no one received a text meant for someone else.

 

While my example bot is fairly simple in terms of what it can handle, the framework is extendable to just about any SMS conversation you might want to support. Future plans have me using Context Store to save the entire conversation between human and machine. Not only could this be useful for determining how accurately my bot responds to incoming requests, but it could also be used to help better serve customers. A recorded chat session could be presented to a human agent in cases where the user moves from text to a phone call.

Next, I would love to incorporate some of the other features that Watson provides. For example, by detecting the tone and sentiment of the conversation, my bot could sense if the human was becoming frustrated with the answers he or she was receiving. This would allow the bot to either escalate the chat to a live agent, or have an agent follow up afterwards to help smooth over what might have been an unpleasant experience – or both.

Mischief Managed

Human to human conversations aren’t going away anytime soon, but more and more machines are going to step in to handle the easy to moderately hard stuff. The point is not to trick people into thinking they are talking to a human being. The point is that machines can handle tedious jobs without coming across as machines.

While I highly doubt that anyone will ever make a movie about Andrew and his fabulous text bots, it isn’t all about fame and glory, right? This is exciting technology, and the fact that I can use Breeze to create sophisticated bots by easily combining powerful but disparate technologies is red-carpet stuff.