Understanding Avaya Internet Protocol Server Interface Resets

I cut my teeth on Port Network (PN) outages when I joined Avaya’s Tier-3 backbone support back in 2006. I was assigned to support the S8700 series of duplexed Communication Manager (CM) servers just as CM 3 was being released. Back then, the timers were very tight, and a large percentage of my trouble tickets involved explaining to customers why an IPSI (Internet Protocol Server Interface) had reset and, in turn, caused a port network outage.

Avaya uses several different heartbeat mechanisms so that devices know if they have lost connectivity. In the case of port networks, meaning any IPSI-controlled cabinet (such as a G650), the heartbeat is variously known as a Sanity Checkslot, Socket Sanity, or IPSI Sanity. This TCP heartbeat is sent to every IPSI every second by the active CM-Main (and, in CM-duplex, also by the standby CM). So, if you had CM-duplex and duplicated IPSIs in each of the maximum of 64 port networks (2 CM × 2 IPSI × 64 PN), 256 heartbeats would fly through the network each second.

Originally, the IPSI would react if just three consecutive heartbeats went missing. Starting in CM 3.13, the timer became administrable by an Avaya engineer, and in CM 5.0 it became administrable by customers on the CM change system-parameters ipserver-interface form. Now the IPSI Socket Sanity Timeout defaults to 15 seconds (values: 3 to 15 seconds). Note that data arriving from CM also substitutes for missing heartbeats.

Image1

Frequently, the cause of missing heartbeats is a mismatch in which the IPSI is locked to 100 Mbps/full duplex while the Ethernet switch is set to auto-negotiate (resulting in a half-duplex connection), or vice versa. Not enabling quality of service (QoS) to give priority to IPSI traffic, or not segregating the IPSI traffic into a separate physical or virtual LAN, also frequently causes problems.

Upon detecting the outage, the IPSI assumes it is sick and reacts by performing a warm reset. During the warm reset, stable calls using resources within the PN stay up, but new calls cannot be initiated, nor can established calls transition to another state (e.g., hold), for the obvious reason that there is no connection to CM to manage such transactions. The IPSI’s warm reset generally takes only a few seconds.

If the IPSI still doesn’t get heartbeats or data from CM, then after a default of 60 seconds (values: 60 to 120 seconds) it escalates to a cold reset, and all calls using resources within the PN are dropped. The PN cold reset delay timer can be modified on the change system-parameters port-networks form.

Next, based on the No Service Time Out Interval, the IPSI waits for a default of 5 minutes (values: 2 to 15 minutes). During that time, while the IPSI is waiting for communication from CM-Main, the resources within that PN are unavailable. Note that if even one heartbeat gets through, perhaps on a flapping WAN circuit, the timer resets and the countdown starts from the beginning. If the No Service Time Out timer expires, the IPSI then attempts to register to a CM-Survivable Core (SC), formerly known as an Enterprise Survivable Server.
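
To make the escalation sequence concrete, here is a minimal sketch in Python of the timers described above, using the default values. This is my own modeling, not Avaya code; it assumes each stage’s timer starts when the previous stage begins, and it does not model the fact that a single surviving heartbeat restarts the No Service countdown.

```python
# Minimal sketch (not Avaya code) of the IPSI escalation sequence, using the
# default timer values described in the article. Names are my own.
from dataclasses import dataclass

@dataclass
class IpsiEscalation:
    sanity_timeout_s: int = 15        # IPSI Socket Sanity Timeout (3-15 s)
    cold_reset_delay_s: int = 60      # PN cold reset delay (60-120 s)
    no_service_timeout_s: int = 300   # No Service Time Out (2-15 min)

    def state_at(self, seconds_without_cm: float) -> str:
        """PN state after an uninterrupted period with no CM heartbeats or data."""
        warm_at = self.sanity_timeout_s
        cold_at = warm_at + self.cold_reset_delay_s
        failover_at = cold_at + self.no_service_timeout_s
        if seconds_without_cm < warm_at:
            return "normal operation"
        if seconds_without_cm < cold_at:
            return "warm reset: stable calls stay up, no new calls or transitions"
        if seconds_without_cm < failover_at:
            return "cold reset: all PN calls dropped, waiting for CM-Main"
        return "No Service Time Out expired: attempt registration to a CM-SC"

if __name__ == "__main__":
    ipsi = IpsiEscalation()
    for t in (10, 30, 120, 400):
        print(f"{t:>3} s without CM -> {ipsi.state_at(t)}")
```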

Image2

Each IPSI manages its own prioritized list of addresses for up to seven CM-SCs, plus the CM-Main, which is always first on the list. It is how the CM-SCs are configured that determines the server list for each IPSI, and it is the job of each CM-SC to advertise its own values to the IPSIs so that each IPSI can generate the appropriate list of eight server addresses. A customer can have up to 63 CM-SCs. Note that IPSIs cannot register to CM-Survivable Remote (formerly known as Local Survivable Processor) servers.

The preference setting (System Preferred, Local Preferred, or Local Only), along with a Community Size field and a Priority Score field, determines a server’s priority on the IPSIs’ lists. How to weight these values is beyond the scope of this article.
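
Purely to illustrate the structure of that list, the following sketch builds an eight-entry controller list with CM-Main always first and up to seven CM-SCs behind it. The single "priority score" used for ordering is a stand-in of my own; it is not Avaya’s actual weighting of the preference, Community Size, and Priority Score fields.

```python
# Illustrative sketch only: assembling an IPSI's list of up to eight controller
# addresses. CM-Main is always first; the ordering score is a placeholder, not
# Avaya's real weighting algorithm.
def build_ipsi_server_list(cm_main: str, cm_scs: list[tuple[str, int]]) -> list[str]:
    """cm_scs holds (address, stand_in_priority) pairs; higher priority sorts earlier."""
    ranked = [addr for addr, _ in sorted(cm_scs, key=lambda s: s[1], reverse=True)]
    return [cm_main] + ranked[:7]     # at most eight entries in total

print(build_ipsi_server_list(
    "10.0.0.1",
    [("10.1.0.1", 40), ("10.2.0.1", 80), ("10.3.0.1", 10)]))
# ['10.0.0.1', '10.2.0.1', '10.1.0.1', '10.3.0.1']
```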

Image3

Each server in a CM-Duplex configuration constantly compares its health to the other’s. One statistic among many they compare is how many IPSIs each one can communicate with right now. If the standby server can communicate with more IPSIs than the active server, the standby takes over and makes itself active. This can cause frequent server interchanges if an unreliable WAN connection to a PN causes some of the heartbeats to get lost. So, Avaya introduced the option to Ignore Connectivity in Server Arbitration on the change ipserver-interface n form, thereby potentially reducing interchanges.
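
That arbitration input can be sketched as a single comparison. The helper below is my own simplification, not Avaya’s arbitration algorithm; it only shows that the standby preempts the active server when it can currently reach strictly more IPSIs, and that the Ignore Connectivity option removes IPSI counts from that decision.

```python
# Simplified sketch of one server-arbitration input (not Avaya's full algorithm):
# the standby takes over only if it can reach more IPSIs than the active server,
# unless IPSI connectivity is ignored in arbitration.
def standby_should_take_over(active_reachable_ipsis: int,
                             standby_reachable_ipsis: int,
                             ignore_connectivity: bool = False) -> bool:
    if ignore_connectivity:
        return False  # IPSI counts no longer drive an interchange
    return standby_reachable_ipsis > active_reachable_ipsis

print(standby_should_take_over(60, 62))                             # True
print(standby_should_take_over(60, 62, ignore_connectivity=True))   # False
```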

Image4

I have ignored duplicated IPSIs because I am not a big fan of them. Most of the IPSI-related tickets I’ve received were caused by network issues that duplicated IPSIs would not have protected against.

Based on my experiences, I recommend helping calls in progress stay up as long as they can by delaying the Port Network Cold Reset to 120 seconds. Then I suggest hurrying the registration to a CM-SC by setting the No Service Time Out to 2 minutes.

Although PNs are fading from Avaya’s product mix, they are a solid technology representing 30 years of development. Many customers will rely on them for years to come.

Related Articles:

Authentication & Authorization: Avaya’s Easy Security for Snap-in Developers

Security. Many application developers consider security to be the bane of their existence. We just want to get our features to work, and often don’t want to think about securing those features until the tail end of our development. Unfortunately, cybersecurity is really important in today’s world. Well, I’ve got good news for developers of Snap-ins on Avaya Breeze: Avaya has made it easy to securely expose and invoke web services in your Snap-ins.

Security Starts with Authentication—Answering “Who are you?”

Before you allow an application to invoke your features, you need to know the answer to the question “Who are you?” This has often been a difficult question to get answered. There are two major classes of applications, and they each should be authenticated differently.

  • Single User Apps: Some applications are directly focused on a single user. These sorts of applications often run on an end user’s device. They might be native applications or web applications running in a browser. For these sorts of applications, you must establish the identity of the end user. Ideally the users can use their enterprise credentials rather than a username and password specific to your application.
  • Server-Based Apps: These sorts of applications often operate on behalf of many users, or don’t have functionality associated with any end user at all. Unfortunately, in the past we’ve often treated server-based applications like users: we give them a bogus username and password. Server-based applications really should be authenticated in a stronger way.

Next, Authorization Answers “Why are you here?”

The user or application has successfully proved its identity; what more do we need? Well, we need to know the answer to the question “Why are you here?” and, more importantly, “Are you allowed to do what you’re asking to do?”

We’ve usually done a pretty good job of authorization for user-focused applications. If I log into a softphone, I can’t pretend to be a colleague as I make crank calls, or check my boss’s voicemail. Server-based applications are a different story. With those, it’s too often been all or nothing: if the application is a trusted super user, it can often do anything it wants. This just doesn’t cut it with the Avaya Breeze™ platform. An application that has been given full access to your snap-in might have no business accessing the services provided by another snap-in, and vice versa. We need to do better than we have in the past.

OAuth to the Rescue!

Fortunately, Avaya Breeze has drawn upon the industry-standard OAuth 2.0 security framework to help you solve the problems of authentication and authorization. OAuth 2.0 provides a neat separation of concerns so that developers can focus their efforts only where they are required. The Authorization Service is the centerpiece of the OAuth 2.0 architecture. It is responsible for:

  • Authentication of users. Multiple forms of authentication can be used, including those that support Multi-Factor Authentication (MFA).
  • Authentication of applications. If an application invokes other services, OAuth 2.0 refers to it as a “Client”. This is true regardless of whether the application runs on an end-user device or is a server-based application.
  • Provisioning and tracking of granted scopes. Specific applications can be granted access to specific features / resources.
  • Generation of tokens that assert identities and granted scopes.

Some of you will be writing snap-ins with web services that can be invoked by other snap-ins or non-Breeze applications. According to OAuth 2.0 lingo, you’ll be operating as a Resource. Guess what you won’t have to worry about as a Breeze Resource? Authentication! You don’t have to know or care how a user or Client was authenticated. In many cases, you don’t even have to know the identity of the invoker of your service. All you have to care about is whether the user/Client was granted access to your features (scopes). It is completely up to you to say what those scopes are called. If you have a snap-in that interfaces to a coffee dispenser, for example, you might declare a feature called “Make Coffee,” with values of “House” and “Premium.”
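
As a generic illustration of how little a Resource has to do, the sketch below checks whether a caller’s granted scopes include one that the snap-in declared. This is not the Breeze API; the scope names reuse the hypothetical coffee-dispenser example above.

```python
# Generic illustration (not the Breeze API) of the Resource side: the only
# question is whether the already-validated token carries the required scope.
# Scope names follow the hypothetical coffee-dispenser example.
def is_authorized(granted_scopes: set[str], required_scope: str) -> bool:
    return required_scope in granted_scopes

token_scopes = {"make-coffee:house"}                       # from a validated token
print(is_authorized(token_scopes, "make-coffee:house"))    # True
print(is_authorized(token_scopes, "make-coffee:premium"))  # False
```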

Others of you will be writing snap-ins or non-Breeze applications that will invoke other snap-in services. In that case, you’ll be acting as a Client in the OAuth 2.0 terminology. You will need to work with the Avaya Breeze Authorization Service to authenticate your application or snap-in, and optionally to authenticate users of your application. Once you’ve authenticated, you will get a token that you can present when invoking web services. Note that some snap-ins will act both as a Client and as a Resource.
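
For readers unfamiliar with the flow, here is a plain OAuth 2.0 client-credentials sketch in Python: obtain a token from an authorization server, then present it as a bearer credential when invoking a web service. The URLs, credentials, and scope are placeholders, and this uses the generic requests library rather than the Breeze helper APIs discussed below.

```python
# Generic OAuth 2.0 client-credentials sketch (standard OAuth, not the Breeze
# helper APIs). All URLs, credentials, and scope names are placeholders.
import requests

TOKEN_URL = "https://authorization.example.com/oauth2/token"            # placeholder
RESOURCE_URL = "https://breeze.example.com/services/my-snapin/feature"  # placeholder

def get_token(client_id: str, client_secret: str) -> str:
    """Authenticate the Client and obtain an access token asserting its scopes."""
    resp = requests.post(
        TOKEN_URL,
        data={"grant_type": "client_credentials", "scope": "make-coffee:house"},
        auth=(client_id, client_secret),
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["access_token"]

def invoke_resource(token: str) -> requests.Response:
    """Present the token as a standard OAuth 2.0 bearer credential."""
    return requests.get(
        RESOURCE_URL,
        headers={"Authorization": f"Bearer {token}"},
        timeout=10,
    )

if __name__ == "__main__":
    token = get_token("my-client-id", "my-client-secret")
    print(invoke_resource(token).status_code)
```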

If you read the OAuth 2.0 specification, you might think it seems complex. Fear not, fellow snap-in developers: you’ll see in subsequent blogs how our Breeze APIs make it dead simple to securely expose and invoke snap-in web services. For now, if you’d like to learn more, check out the “Authorization” chapter of our Snap-in Development Guide.

How to Prevent Media Gateway Split Registrations

Back when Avaya Aura Communication Manager 5.2 was released, I recall reading about this new capability called Split Registration Prevention Feature (SRPF). Although I studied the documentation, it wasn’t until I read Timothy Kaye’s presentation (Session 717: SIP and Business Continuity Considerations: Optimizing Avaya Aura SIP Trunk Configurations Using PE) from the 2014 IAUG convention in Dallas that I fully understood its implications.

What is a Split Registration?

First I need to explain what SRPF is all about. Imagine a fairly large branch office that has two or more H.248 Media Gateways (MG), all within the same Network Region (NR). SRPF only works for MGs within an NR; it provides no benefit to MGs assigned to different NRs.

Further, imagine that the MGs provide slightly different services. For example, one MG might provide local trunks to the PSTN, and another might provide Media Module connections to analog phones. For this discussion, it does not matter what type of phones (i.e. SIP, H.323, BRI, DCP, or analog) exist within this Network Region. During a “sunny day,” all the MGs are registered to Processor Ethernet on the CM-Main, which is in a different NR somewhere else in the network. It aids understanding to assume that all the resources needed for calls within an NR are provided by equipment within that NR.

A “rainy day” is when CM-Main becomes unavailable, perhaps due to a power outage. When an MG’s Primary Search Timer expires, it starts working down the list, trying to register with any CM configured on its Media Gateway Controller (MGC) list. All MGs should have been configured to register to the same CM-Survivable server; their registration is what causes that CM-Survivable to become active.
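
The failover decision can be sketched as a walk down the MGC list. The helper below is my own illustration, not H.248 gateway firmware: until the Primary Search Timer expires the MG keeps trying its primary controller, and afterwards it registers with the first reachable entry on the list.

```python
# Illustrative sketch (not gateway firmware) of the MGC-list failover described
# above: before the Primary Search Timer expires the MG only tries its primary;
# afterwards it walks the list and takes the first reachable controller.
def choose_controller(mgc_list: list[str],
                      reachable: set[str],
                      primary_search_expired: bool) -> str | None:
    if not primary_search_expired:
        return mgc_list[0] if mgc_list[0] in reachable else None
    for controller in mgc_list:
        if controller in reachable:
            return controller
    return None

mgc_list = ["cm-main.example.com", "cm-survivable.example.com"]  # placeholder names
print(choose_controller(mgc_list, {"cm-survivable.example.com"},
                        primary_search_expired=True))
# cm-survivable.example.com
```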

Image 1

In this context a CM server is “active” if it controls one or more MGs. A more technical definition is that a CM becomes “active” when it controls DSP resources, which only happens when an MG, Port Network (PN), or Avaya Aura Media Server (AAMS) registers to it.

Since all the MGs are registered to the same CM, all resources (e.g., trunks, announcements, etc.) are available to all calls. In effect, the “rainy day” system behaves the same as the “sunny day” system, except for which CM is performing the call processing. Even after power is restored, only the CM-Survivable is active; because no MGs are registered to CM-Main, it remains inactive.

In CM 5.2, SRPF was originally designed to work with splits between CM-Main and Survivable Remote (fka Local Survivable Processor) servers. In CM 6, the feature was extended to work with Survivable Core (fka Enterprise Survivable Server) servers. To treat the two server types interchangeably, I use the generalized term “CM-Survivable.”

A “Split Registration” is when, within a Network Region, some of the MGs are registered to CM-Main and some are registered to a CM-Survivable. In this case only some of the resources are available to some of the phones. Specifically, the resources provided by the MGs registered to CM-Main are not available to phones controlled by CM-Survivable, and vice versa. In my example above, it is likely that some of the phones within the branch office would not have access to the local trunks.

Further, the Avaya Session Managers (ASM) would discover that CM-Survivable is active. They would learn of the CM-Survivable server’s new status when either ASM or CM sent a SIP OPTIONS request to the other. The ASMs then might begin inappropriately routing calls to both CM-Main and CM-Survivable. Consequently, a split registration is even more disruptive than a simple failover to a survivable CM.

What can cause split registrations? One scenario is when the “rainy day” is caused by a partial network failure. In this case some MGs, but not all, maintain their connectivity with CM-Main while the others register to CM-Survivable. Another scenario could be that all MGs failover to CM-Survivable, but then after connectivity to CM-Main has been restored some of the MGs are reset. Those MGs would then register to CM-Main.

How SRPF Functions

If the Split Registration Prevention Feature is enabled, what CM-Main effectively does is un-register, and/or reject registrations from, all MGs in any NR whose MGs have registered to CM-Survivable. In other words, it pushes those MGs to register to CM-Survivable, so there is no longer a split registration.

When I learned that, my first question was: how does CM-Main know that MGs have registered to CM-Survivable? The answer is that all CM-Survivable servers are constantly trying to register with CM-Main. If a CM-Survivable server is processing calls, then when it registers to CM-Main it announces that it is active. Thus, once connectivity to CM-Main is restored, CM-Main learns which CM-Survivable servers are active. This is an important requirement: if CM-Main and CM-Survivable cannot communicate with each other, a split registration could still occur.

My second question was how CM-Main forces the MGs over to the CM-Survivable. What I learned was that CM-Main looks up all the NRs for which that survivable server is administered; the list is administered under the IP network region’s “BACKUP SERVERS” heading. CM-Main then disables the NRs registered to CM-Survivable, which both blocks new registrations and terminates existing registrations of MGs and H.323 endpoints.
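
A rough sketch of that reaction, using my own data structures rather than CM internals: once CM-Main learns which survivable servers are active, it selects every NR whose administered backup server is one of them, and those are the NRs it disables.

```python
# Rough sketch (my own modeling, not CM source code) of the SRPF reaction:
# disable every NR whose administered BACKUP SERVERS entry is an active
# CM-Survivable, blocking new registrations and dropping existing ones there.
def nrs_to_disable(active_survivables: set[str],
                   nr_backup_server: dict[int, str]) -> set[int]:
    """nr_backup_server maps NR number -> its single administered CM-Survivable."""
    return {nr for nr, backup in nr_backup_server.items()
            if backup in active_survivables}

backup_map = {3: "cm-sc-dallas", 7: "cm-sc-dallas", 9: "cm-sc-denver"}  # placeholders
print(nrs_to_disable({"cm-sc-dallas"}, backup_map))   # {3, 7}
```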

Image 2

Once the network issues have been fixed, with SRPF there are only manual ways to force MGs and H.323 endpoints to fail back to CM-Main. One fix would be to log into CM-Survivable and disable the NRs. Another would be to disable PROCR on CM-Survivable. An even better solution is to reboot the CM-Survivable server, because then you don’t have to remember to come back to it to re-enable the NRs and/or PROCR.

Implications of SRPF

Enabling SRPF has some big implications for an enterprise’s survivability design. The first limitation is that within an NR the MGC list of every MG must be limited to two entries: the first entry is the Processor Ethernet of CM-Main, and the second is the PE of a particular CM-Survivable. In other words, for any NR there can be only one survivable server.

Likewise, all H.323 phones within the NR must be configured with an Alternate Gatekeeper List (AGL) of just one CM-Survivable. The endpoints get that list from the NR’s “Backup Servers” list (pictured above). This also means the administrator must ensure that, for each NR, all the MGs’ controller lists match the endpoints’ AGL.
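
A simple way to think about that administrative rule is as a consistency check over the NR. The helper below is my own, not an Avaya tool: every MG’s controller list should be exactly [CM-Main PE, the NR’s one CM-Survivable PE], and every endpoint’s AGL should name that same single CM-Survivable.

```python
# My own consistency check (not an Avaya tool) for the rule above: within an NR,
# each MG's MGC list must be [CM-Main PE, CM-Survivable PE] and each H.323
# endpoint's AGL must contain only that same CM-Survivable.
def nr_is_consistent(cm_main_pe: str, cm_survivable_pe: str,
                     mgc_lists: list[list[str]], agls: list[list[str]]) -> bool:
    mgs_ok = all(lst == [cm_main_pe, cm_survivable_pe] for lst in mgc_lists)
    phones_ok = all(lst == [cm_survivable_pe] for lst in agls)
    return mgs_ok and phones_ok

print(nr_is_consistent("procr-main", "procr-sc1",
                       [["procr-main", "procr-sc1"]],
                       [["procr-sc1"], ["procr-sc1"]]))   # True
```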

Almost always, if SRPF is enabled, Media Gateway Recovery Rules should not be used. However, in some configurations enabling both might be desirable. In that case, all MGs must use an mg-recovery rule with the “Migrate H.248 MG to primary:” field set to “immediately” once the “Minimum time of network stability” is met (default is 3 minutes). Be very careful when enabling both features, because in certain circumstances the SRPF and the Recovery Rule can effectively negate each other.

Finally, SRPF only works with H.248 MGs. Port Networks (PN) do not have a recovery mechanism like SRPF to guard against rogue PN behavior.

Enabling SRPF

The Split Registration Prevention Feature (Force Phones and Gateways to Active Survivable Servers?) is enabled globally on the CM form: change system-parameters ip-options.

Image 3

If I had not found Tim Kaye’s presentation, I would not have completely understood SRPF. So, now whenever I come across a presentation or document authored by him, I pay very close attention. He always provides insightful information.

Say Hello to Zang

For months, the Avaya and Esna teams have been hard at work on a revolutionary solution we believe will shape the future of communications in the new digital business era. Last week, the solution, and the new company behind it, were officially unveiled onstage at Enterprise Connect. Say hello to Zang.

We’re incredibly proud of Zang, and are big believers in its potential. Here’s why.

Apps, APIs and SDKs have fundamentally changed the way we connect with one another. Smaller startups have launched freemium, single-feature applications that are gaining traction in the market. And increasingly, we’re meeting small- and midsize customers who are mobile-first and cloud-only.

Zang is our answer to the needs and communication trends we’re seeing in the market.

Zang is the first all-in-one, 100 percent cloud-based communications platform and communication applications-as-a-service. The robust platform gives developers APIs and tools to easily build standalone communication applications, or embed parts of Zang into other apps, online services and workflows.

Zang is virtual, so it’s accessible anywhere, on any device. We also offer a range of ready-to-use applications on a pay-as-you-go subscription basis.

Giving companies the flexibility to build exactly what they need is incredibly powerful.

Imagine a midsize startup and the sheer number of distinct communication and collaboration tools it uses on a daily basis—Gmail for email, Google Docs for document collaboration, Slack for group chat, Salesforce for CRM, Skype for video calls, Zendesk for customer service and smartphones for business communications. Individual teams inside the company may adopt their own subset of tools—Zoho, Hipchat, Google Hangouts, etc.

A lot of important context gets locked up inside each platform, and isn’t shared as employees switch back and forth, all day long, communicating with customers and one another. If you want to embed Slack inside Salesforce, or embed Skype inside Salesforce, it’s hard. These applications are largely single-featured, and aren’t built with easy interoperability at their core.

Zang is different—our platform is more like a set of building blocks. We give companies exactly the configuration they need: either an already-built model, or the pieces needed to add voice, IM, conferencing, video, IVR, or a range of other capabilities to the software they’re already working with. And you don’t have to be a developer or have deep technical expertise to use Zang.

Embedding Zang communication capabilities inside an app takes hours or days, rather than the weeks or months required with our competitors. Stay tuned, as this is just the beginning of what Zang can do. To learn more and sign up for our First Look Program, please visit Zang.io.