Writing Your First WebRTC Application: Part 3

Welcome to part three in my series of “Writing Your First WebRTC Application” articles. Part one described the basic components involved in a WebRTC solution and part two presented the differences between Chrome and Firefox. Today, I want to spend some time on the client code from the standpoint of the WebRTC flow and API calls.

As I wrote in part one, Chrome and Firefox took slightly different approaches to the WebRTC API object names. I expect that those changes will go away once the specification is ratified, but until that happens, it’s best to create wrapper code that hides those differences from the main call flow logic.

Before you go any further, it would be best if you read the following articles:

Writing Your First WebRTC Application: Part One

Writing Your First WebRTC Application: Part Two

Understanding WebRTC Media Connections: ICE, STUN, and TURN

The goal of WebRTC is to enable multimedia calls to and from Web browsers without the need for plug-ins such as Adobe’s Flash. This is accomplished by adding support for those connections directly into HTML 5. In this way, the HTML code that runs in the browser can directly invoke the underlying WebRTC infrastructure.

In all WebRTC solutions, there will be a calling party and a called party. While much of what each half does is identical, there are some very important differences.

A common flow that both caller and called will follow goes like this:

Connect Users by way of a Signaling Server

This can be accomplished in any number of ways, but the easiest method is for both users to visit the same website and connect via a shared signaling server. Users exchange some sort of name or token that allows for the unique identification of the session. This shared token might be a room number or a conversation ID.

The most common way for WebRTC clients to connect to a Signaling Server is the WebSocket API.
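As a sketch, connecting to such a server and announcing a shared room token might look like the following. Everything here is an assumption for illustration: the server URL, the JSON message shape, and the buildJoinMessage helper are all hypothetical, since WebRTC leaves the signaling design entirely to you.

```javascript
// Hypothetical signaling bootstrap: both parties send the same room token
// so the signaling server can pair them up.
function buildJoinMessage(roomId) {
  return JSON.stringify({ type: "join", room: roomId });
}

// In a browser, this helper might be used like so:
// const socket = new WebSocket("wss://signaling.example.com");
// socket.onopen = function () {
//   socket.send(buildJoinMessage("room-42"));
// };
```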

Start the signaling between the two sides

Once the clients have shared a token to identify their conversation, they can start exchanging signaling messages through the WebSocket connection established above. Since WebRTC does not specify a signaling protocol, it is up to the solution developer to devise one.

Each side will exchange information about its networks and how it can be contacted

This step is often referred to as “finding candidates” and its purpose is to allow Web browsers to exchange the network information required to send direct media. Since most clients will use private IP addresses, some form of Network Address Translation (NAT) is required.

To find a publicly-addressable IP address, WebRTC will make use of STUN and/or TURN servers. These servers provide a client with an IP address that can be shared with its peer for media connections.

WebRTC calls this process of using STUN and TURN servers the Interactive Connectivity Establishment (ICE) framework. ICE first attempts to connect using STUN-derived addresses, and only if no direct path can be established does it fall back to relaying media through a TURN server.

Negotiate media sessions

Once the clients know how to reach one another, they need to agree upon the type and format of the media. This is accomplished with the JavaScript Session Establishment Protocol (JSEP). JSEP uses the Session Description Protocol (SDP) to describe the codecs, resolution, bitrate, frame size, etc. of the media supported by a client.
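To make that concrete, an SDP body describing a single Opus audio stream might contain lines like the fragment below. This is illustrative only; real browser-generated SDP is much longer and the exact values vary.

```
v=0
o=- 4611731400430051336 2 IN IP4 127.0.0.1
s=-
t=0 0
m=audio 9 UDP/TLS/RTP/SAVPF 111
c=IN IP4 0.0.0.0
a=rtpmap:111 opus/48000/2
```

The `m=` line advertises the media type and payload formats, and the `a=rtpmap` line maps payload type 111 to the Opus codec at a 48 kHz clock rate with two channels.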

Start RTCPeerConnection streams

After the signaling connection has been established and the clients have completed negotiating their media capabilities, they can start streaming media. This is accomplished with the WebRTC construct, RTCPeerConnection.

The RTCPeerConnection API

The RTCPeerConnection API is where the real work of establishing a peer-to-peer connection between the two Web browsers occurs. It deals with the ICE handler, media streams, access to the local microphone and camera, and the JSEP offer and answer processes.

A Web browser will create an RTCPeerConnection similar to the following:

var myPeerConnection = new RTCPeerConnection(configuration);

The configuration variable contains the key iceServers, which holds an array of STUN and TURN server entries.
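A minimal sketch of that configuration object follows; the server URLs and credentials are placeholders, not real deployments.

```javascript
// Placeholder ICE server configuration; a real deployment would point at
// your own STUN/TURN infrastructure.
const configuration = {
  iceServers: [
    { urls: "stun:stun.example.com:3478" },
    {
      urls: "turn:turn.example.com:3478",
      username: "webrtc",
      credential: "secret"
    }
  ]
};

// In a browser:
// const myPeerConnection = new RTCPeerConnection(configuration);
```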

The myPeerConnection object is used by both the caller and called parties, but its usage is slightly different.

For the calling party:

  1. Register an onicecandidate handler.

The onicecandidate handler sends ICE candidates to the caller’s peer using the signaling channel.
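A sketch of such a handler, with the signaling transport injected as a function so the logic stands alone (sendSignal and the { candidate } message shape are assumptions of this example):

```javascript
// Returns an onicecandidate handler that forwards each gathered candidate
// over the signaling channel.
function makeIceCandidateHandler(sendSignal) {
  return function (event) {
    // event.candidate is null once candidate gathering is complete.
    if (event.candidate) {
      sendSignal({ candidate: event.candidate });
    }
  };
}

// In a browser:
// myPeerConnection.onicecandidate = makeIceCandidateHandler(sendOverWebSocket);
```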

  2. Register an onaddstream handler.

The onaddstream handler displays the video stream once it is received from the called party.

  3. Register a message handler.

A message handler is used to process messages received from the called party. For example, if the message contained an RTCIceCandidate, it would be added to the myPeerConnection object using the addIceCandidate() method. If the message contained an RTCSessionDescription object, it would be added to myPeerConnection using the setRemoteDescription() method.
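That branching can be sketched as follows, with the peer connection injected as a parameter. The { candidate } / { sdp } message shape is an assumption of this example; WebRTC leaves the wire format up to you.

```javascript
// Dispatches an incoming signaling message to the appropriate
// RTCPeerConnection method, and reports which branch was taken.
function handleSignalingMessage(peerConnection, message) {
  if (message.candidate) {
    // A remote ICE candidate: hand it to the connection's ICE agent.
    peerConnection.addIceCandidate(message.candidate);
    return "candidate";
  }
  if (message.sdp) {
    // A remote session description (an offer or an answer).
    peerConnection.setRemoteDescription(message.sdp);
    return "description";
  }
  return "unknown";
}
```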

  4. Gain access to the local camera and microphone.

The function getUserMedia() captures the local media stream which can then be displayed on the local page. That stream must then be added to myPeerConnection using the addStream() method.
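As a sketch, the constraints object passed to getUserMedia() might look like this; the specific resolution values and element IDs are illustrative only.

```javascript
// Illustrative media constraints: request audio plus 640x480 video.
const constraints = { audio: true, video: { width: 640, height: 480 } };

// Browser usage (modern promise-based form shown for clarity):
// navigator.mediaDevices.getUserMedia(constraints)
//   .then(function (stream) {
//     document.querySelector("#localVideo").srcObject = stream;
//     myPeerConnection.addStream(stream); // legacy API from this era
//   });
```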

  5. Negotiate media.

In this step, a Web browser performs the JSEP offer/answer process by creating an offer with the myPeerConnection method createOffer(). Additionally, a callback handler is registered for the resulting RTCSessionDescription object. This handler will eventually add the RTCSessionDescription to myPeerConnection using the setLocalDescription() method. Finally, the RTCSessionDescription is sent to the remote peer via the signaling channel. The end result is that the caller's SDP will be set on both the caller and called peers.
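The caller's offer step can be sketched as below, using the callback style of this era's API (newer browsers also support a promise-based form). The peerConnection and sendSignal parameters are injected so the flow can be shown without a live browser; the { sdp } message shape is an assumption.

```javascript
// Caller side: create an offer, apply it locally, then ship the SDP
// to the remote peer over the signaling channel.
function startOffer(peerConnection, sendSignal) {
  peerConnection.createOffer(function (offer) {
    // offer is an RTCSessionDescription holding this side's SDP.
    peerConnection.setLocalDescription(offer);
    sendSignal({ sdp: offer });
  }, function (err) {
    console.error("createOffer failed:", err);
  });
}
```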

For the called party:

The steps for the called party are very similar to the caller’s flow with the exception that the called party responds with an answer to the caller’s offer.

  1. Register an onicecandidate handler.

The onicecandidate handler sends ICE candidates to the called party’s peer using the signaling channel.

  2. Register an onaddstream handler.

The onaddstream handler displays the video stream once it is received from the calling party.

  3. Register a message handler.

A message handler is used to process messages received from the caller. For example, if the message contained an RTCIceCandidate, it would be added to the myPeerConnection object using the addIceCandidate() method. If the message contained an RTCSessionDescription object, it would be added to myPeerConnection using the setRemoteDescription() method.

  4. Gain access to the local camera and microphone.

The function getUserMedia() captures the local media stream which can then be displayed on the local page. That stream can then be added to myPeerConnection using the addStream() method.

  5. Negotiate media.

This is where the big differences between the caller and the called occur. When a “new session description” is received on the signaling channel, the called party will set the remote description with myPeerConnection.setRemoteDescription().

Next, myPeerConnection.createAnswer() is invoked to return the called party’s session description.
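The called party's answer step can be sketched as follows, again in the callback style of this era and with the dependencies injected (handleOffer, sendSignal, and the { sdp } message shape are assumptions of this example):

```javascript
// Called side: apply the remote offer, create an answer, apply it
// locally, then return the answer SDP over the signaling channel.
function handleOffer(peerConnection, offer, sendSignal) {
  peerConnection.setRemoteDescription(offer);
  peerConnection.createAnswer(function (answer) {
    peerConnection.setLocalDescription(answer);
    sendSignal({ sdp: answer });
  }, function (err) {
    console.error("createAnswer failed:", err);
  });
}
```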

After the two sides have established a signaling connection and exchanged media descriptions, media can flow end-to-end. The session can then be managed (held, released, etc.) through the signaling channel.

Whew! I am going to stop here to allow everything to soak in before I start presenting the JavaScript to accomplish all of the above. Stay tuned for Part Four and oodles of more fun.

Related Articles:

Authentication & Authorization: Avaya’s Easy Security for Snap-in Developers

Security. Many application developers consider security to be the bane of their existence. We just want to get our features to work, and often don’t want to think about securing those features until the tail end of our development. Unfortunately, cybersecurity is really important in today’s world. Well, I’ve got good news for developers of Snap-ins on Avaya Breeze: Avaya has made it easy to securely expose and invoke web services in your Snap-ins.

Security Starts with Authentication—Answering “Who are you?”

Before you allow an application to invoke your features, you need to know the answer to the question “Who are you?” This has often been a difficult question to get answered. There are two major classes of applications, and they each should be authenticated differently.

  • Single User Apps: Some applications are directly focused on a single user. These sorts of applications often run on an end user’s device. They might be native applications or web applications running in a browser. For these sorts of applications, you must establish the identity of the end user. Ideally the users can use their enterprise credentials rather than a username and password specific to your application.
  • Server-Based Apps: These sorts of applications often operate on behalf of many users, or don’t have functionality associated with any end users at all. Unfortunately, in the past we’ve often treated server-based applications like users: we give them a bogus username and password. Server-based applications really should be authenticated in a stronger way.

Next, Authorization Answers “Why are you here?”

Once the user or application has successfully proved its identity, what more do we need? Well, we need to know the answer to the question “Why are you here?” and, more importantly, “Are you allowed to do what you’re asking to do?”

We’ve usually done a pretty good job of authorization for user-focused applications. If I log into a softphone, I can’t pretend to be a colleague as I make crank calls, or check my boss’s voicemail. Server-based applications are a different story. With those, it’s too often been all or nothing. If the application is a trusted super user, it can often do anything it wants. This just doesn’t cut it with the Avaya Breeze™ platform. An application that has been given full access to your snap-in might have no business accessing the services provided by another snap-in, and vice-versa. We need to do better than we have in the past.

OAuth to the Rescue!

Fortunately, Avaya Breeze has drawn upon the industry-standard OAuth 2.0 security framework to help you solve the problems of authentication and authorization. OAuth 2.0 provides a neat separation of concerns so that developers can focus their efforts only where they are required. The Authorization Service is the centerpiece of the OAuth 2.0 architecture. It is responsible for:

  • Authentication of users. Multiple forms of authentication can be used, including those that support Multi-Factor Authentication (MFA).
  • Authentication of applications. If an application invokes other services, OAuth 2.0 refers to it as a “Client”. This is true regardless of whether the application is running on an end user device or if it is a server-based application.
  • Provisioning and tracking of granted scopes. Specific applications can be granted access to specific features / resources.
  • Generation of tokens that assert identities and granted scopes.

Some of you will be writing snap-ins with web services that can be invoked by other snap-ins or non-Breeze applications. According to OAuth 2.0 lingo, you’ll be operating as a Resource. Guess what you won’t have to worry about as a Breeze Resource? Authentication! You don’t have to know or care how a user or Client was authenticated. In many cases, you don’t even have to know the identity of the invoker of your service. All you have to care about is whether the user/Client was granted access to your features (scopes). It is completely up to you to say what those scopes are called. If you have a snap-in that interfaces to a coffee dispenser, for example, you might declare a feature called “Make Coffee,” with values of “House” and “Premium.”

Others of you will be writing snap-ins or non-Breeze applications that will invoke other snap-in services. In that case, you’ll be acting as a Client in the OAuth 2.0 terminology. You will need to work with the Avaya Breeze Authorization Service to authenticate your application or snap-in, and optionally to authenticate users of your application. Once you’ve authenticated, you will get a token that you can present when invoking web services. Note that some snap-ins will act both as a Client and as a Resource.
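As a sketch of what a Client does with that token, the request it builds might look like the following. The header shape follows the standard OAuth 2.0 bearer-token convention; the URL, token value, and buildAuthorizedRequest helper are all hypothetical, not part of any Breeze API.

```javascript
// Hypothetical helper: attach an OAuth 2.0 bearer token to an outgoing
// web service request.
function buildAuthorizedRequest(url, token) {
  return {
    url: url,
    headers: { Authorization: "Bearer " + token }
  };
}

// Example: invoking a (fictional) coffee-dispenser snap-in service.
// const req = buildAuthorizedRequest(
//   "https://breeze.example.com/services/coffee", accessToken);
```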

If you read the OAuth 2.0 specification, you might think it seems complex. Fear not, fellow snap-in developers: you’ll see in subsequent blogs how our Breeze APIs make it dead simple to securely expose and invoke snap-in web services. For now, if you’d like to learn more, check out the “Authorization” chapter of our Snap-in Development Guide.

How to Prevent Media Gateway Split Registrations

Back when Avaya Aura Communication Manager 5.2 was released, I recall reading about this new capability called Split Registration Prevention Feature (SRPF). Although I studied the documentation, it wasn’t until I read Timothy Kaye’s presentation (Session 717: SIP and Business Continuity Considerations: Optimizing Avaya Aura SIP Trunk Configurations Using PE) from the 2014 IAUG convention in Dallas that I fully understood its implications.

What is a Split Registration?

First I need to explain what SRPF is all about. Imagine a fairly large branch office that has two or more H.248 Media Gateways (MG), all within the same Network Region (NR). SRPF only works for MGs within a NR and provides no benefit to MGs assigned to different NRs.

Further, imagine that the MGs provide slightly different services. For example, one MG might provide local trunks to the PSTN, and another might provide Media Module connections to analog phones. For this discussion, it does not matter what type of phones (i.e. SIP, H.323, BRI, DCP, or Analog) exist within this Network Region. During a “sunny day,” all the MGs are registered to Processor Ethernet in the CM-Main, which is in a different NR somewhere else in the network. For this discussion, assume that all the resources needed for calls within a NR are provided by equipment within that NR.

A “rainy day” is when CM-Main becomes unavailable, perhaps due to a power outage. When a MG’s Primary Search Timer expires, it will start working down the list trying to register with any CM configured on the Media Gateway Controller (MGC) list. All MGs should have been configured to register to the same CM-Survivable server, which by virtue of their registration to it causes CM-Survivable to become active.

Image 1

In this context a CM server is “active” if it controls one or more MGs. A more technical definition is that a CM becomes “active” when it controls DSP resources, which only happens when a MG, Port Network (PN), or Avaya Aura Media Server (AAMS) registers to the CM server.

Since all the MGs are registered to the same CM, all resources (e.g. trunks, announcements, etc.) are available to all calls. In effect, the “rainy day” system behaves the same as the “sunny day” with the exception of which CM is performing the call processing. Even if power is restored, only the CM-Survivable is active, and because no MGs are registered to CM-Main it is inactive.

In CM 5.2, SRPF was originally designed to handle splits between CM-Main and Survivable Remote (formerly Local Survivable Processor) servers. In CM 6, the feature was extended to work with Survivable Core (formerly Enterprise Survivable Server) servers. Because the feature treats the two types interchangeably, I use the generalized term “CM-Survivable.”

A “Split Registration” is where within a Network Region some of the MGs are registered to CM-Main and some are registered to a CM-Survivable. In this case only some of the resources are available to some of the phones. Specifically, the resources provided by the MGs registered to CM-Main are not available to phones controlled by CM-Survivable, and vice versa. In my example above, it is likely some of the phones within the branch office would not have access to the local trunks.

Further, the Avaya Session Managers (ASM) would discover CM-Survivable is active. They would learn of CM-Survivable server’s new status when either ASM or CM sent a SIP OPTIONS request to the other. The ASMs then might begin inappropriately routing calls to both CM-Main and CM-Survivable. Consequently, a split registration is even more disruptive than the simple failover to a survivable CM.

What can cause split registrations? One scenario is when the “rainy day” is caused by a partial network failure. In this case some MGs, but not all, maintain their connectivity with CM-Main while the others register to CM-Survivable. Another scenario could be that all MGs failover to CM-Survivable, but then after connectivity to CM-Main has been restored some of the MGs are reset. Those MGs would then register to CM-Main.

How SRPF Functions

If the Split Registration Prevention Feature is enabled, CM-Main un-registers and/or rejects registrations from all MGs in the NRs that have registered to CM-Survivable. In other words, it pushes those MGs to register to CM-Survivable. Thus, there is no longer a split registration.

When I learned that, my first question was how does CM-Main know that MGs have registered to CM-Survivable? The answer is that all CM-Survivable servers are constantly trying to register with CM-Main. If a CM-Survivable server is processing calls, then when it registers to CM-Main it announces that it is active. Thus, once connectivity to CM-Main is restored, CM-Main learns which CM-survivable servers are active. This is an important requirement. If CM-Main and CM-Survivable cannot communicate with each other a split registration could still occur.

My second question was how CM forces the MGs back to the CM-Survivable. What I learned was that CM-Main looks up all the NRs for which that Survivable server is administered. The list is administered under the IP network region’s “BACKUP SERVERS” heading. CM-Main then disables the NRs registered to CM-Survivable. That both blocks new registrations and terminates existing registrations of MGs and H.323 endpoints.

Image 2

Once the network issues have been fixed, with SRPF there are only manual ways to force MGs and H.323 endpoints to failback to CM-Main. One fix would be to log into CM-Survivable and disable the NRs. Another would be to disable PROCR on CM-Survivable. An even better solution is to reboot the CM-Survivable server because then you don’t have to remember to come back to it in order to enable NRs and/or PROCR.

Implications of SRPF

Enabling SRPF has some big implications for an enterprise’s survivability design. The first limitation is that within an NR the MGC list of every MG must be limited to two entries. The first entry is the Processor Ethernet of CM-Main, and the second the PE of a particular CM-Survivable. In other words, for any NR there can be only one survivable server.

Similarly, all H.323 phones within the NR must be configured with an Alternate Gatekeeper List (AGL) containing just that one CM-Survivable. The endpoints get that list from the NR’s “Backup Servers” list (pictured above). This also means the administrator must ensure that for each NR all the MGs’ controller lists match the endpoints’ AGL.

Almost always, if SRPF is enabled, Media Gateway Recovery Rules should not be used. However, in some configurations enabling both might be desirable. In this case, all MGs must use an mg-recovery rule with the “Migrate H.248 MG to primary:” field set to “immediately” once the “Minimum time of network stability” is met (the default is 3 minutes). Be very careful when enabling both features because in certain circumstances the SRPF and the Recovery Rule will effectively negate each other.

Finally, SRPF only works with H.248 MGs. Port Networks (PNs) have no comparable recovery mechanism to guard against rogue PN behavior.

Enabling SRPF

The Split Registration Prevention Feature (Force Phones and Gateways to Active Survivable Servers?) is enabled globally on the CM form: change system-parameters ip-options.

Image 3

If I had not found Tim Kaye’s presentation, I would not have completely understood SRPF. So, now whenever I come across a presentation or document authored by him, I pay very close attention. He always provides insightful information.

Say Hello to Zang

For months, the Avaya and Esna teams have been hard at work on a revolutionary solution we believe will shape the future of communications in the new digital business era. Last week, the solution, and the new company behind it, were officially unveiled onstage at Enterprise Connect. Say hello to Zang.

We’re incredibly proud of Zang, and are big believers in its potential. Here’s why.

Apps, APIs and SDKs have fundamentally changed the way we connect with one another. Smaller startups have launched freemium, single-feature applications that are gaining traction in the market. And increasingly, we’re meeting small- and midsize customers who are mobile-first and cloud-only.

Zang is our answer to the needs and communication trends we’re seeing in the market.

Zang is the first all-in-one, 100 percent cloud-based communications platform and communication applications-as-a-service. The robust platform gives developers APIs and tools to easily build standalone communication applications, or embed parts of Zang into other apps, online services and workflows.

Zang is virtual, so it’s accessible anywhere, on any device. We also offer a range of ready-to-use applications on a pay-as-you-go subscription basis.

Giving companies the flexibility to build exactly what they need is incredibly powerful.

Imagine a midsize startup and the sheer number of distinct communication and collaboration tools it uses on a daily basis—Gmail for email, Google Docs for document collaboration, Slack for group chat, Salesforce for CRM, Skype for video calls, Zendesk for customer service and smartphones for business communications. Individual teams inside the company may adopt their own subset of tools—Zoho, Hipchat, Google Hangouts, etc.

A lot of important context gets locked up inside each platform, and isn’t shared as employees switch back and forth, all day long, communicating with customers and one another. If you want to embed Slack inside Salesforce, or embed Skype inside Salesforce, it’s hard. These applications are largely single-featured, and aren’t built with easy interoperability at their core.

Zang is different—our platform is more like a set of building blocks. We give companies exactly the configuration they need: either an already-built model, or the pieces needed to add voice, IM, conferencing, video, IVR, or a range of other capabilities to the software they’re already working with. And you don’t have to be a developer or have deep technical expertise to use Zang.

Embedding Zang communication capabilities inside an app takes hours or days, rather than the weeks or months typical of competing platforms. Stay tuned, as this is just the beginning of what Zang can do. To learn more and sign up for our First Look Program, please visit Zang.io.