An Introduction to the Avaya WebRTC Snap-In

Over the last several months, I’ve written a number of articles about WebRTC. I discussed encryption, network address translation, and signaling, and over the course of four articles I even walked through creating your very own WebRTC application.

In case you missed any of them, here are all my WebRTC articles to date:

WebRTC for Beginners

A WebRTC Security Primer

An Introduction to WebRTC Signaling

Understanding WebRTC Media Connections – ICE, STUN, and TURN

Writing Your First WebRTC Application Part One

Writing Your First WebRTC Application Part Two

Writing Your First WebRTC Application Part Three

Writing Your First WebRTC Application Part Four

Now that I’ve shared just about everything that I know about WebRTC, I am going to let you in on a little secret. You don’t need to know any of it. All that stuff about ICE candidates? You can forget it. Did you finally figure out how to use the RTCPeerConnection object? You don’t need to know that, either.

Okay, that’s not really true. Understanding ICE candidates and the native HTML 5 WebRTC objects and their methods isn’t a bad thing. It’s just that bare-metal WebRTC programming is sophisticated stuff, and a number of people have recognized that the interfaces, function calls, and call flows are far more complicated than most people want to deal with, including me.

That’s why companies like Avaya are creating wrapper APIs (application programming interfaces) that hide the complexities of WebRTC. This allows applications to be written faster and with little understanding of the nitty-gritty that happens underneath the covers.

In addition to simplifying the call flows and objects, these wrappers can also add functionality not available in out-of-the-box WebRTC. That’s not to say that a good developer couldn’t write these extensions on his or her own. It’s just that they come “for free” and the programmer can concentrate on the application’s business logic and leave the nuts and bolts to someone else.

The Avaya WebRTC Solution

Today, I would like to introduce you to the WebRTC support offered by Avaya’s Collaboration Environment 3.0. If you are not familiar with Collaboration Environment, I highly recommend that you take a look at this article before proceeding.

An Introduction to Avaya Collaboration Environment 3.0

As you know, a WebRTC application can be divided into three parts. There is the application that runs in a web browser. This part will be written in HTML and JavaScript. There is the web server that delivers the application to the browser. Finally, there is the signaling server that relays information between WebRTC clients.

None of that changes with the Avaya approach. You still need a web server, WebRTC application, and a signaling server. However, unlike a traditional WebRTC application, an Avaya-based application will not call directly into the HTML 5 extensions. Instead, it will use a collection of Avaya objects that invoke those extensions on your behalf. This abstraction greatly simplifies what a programmer needs to know and do.

For instance, the Avaya API handles the work involved in finding and attaching to a STUN or TURN server. It also eliminates the need to create and manage WebSocket connections. This allows the programmer to focus on the business aspects of the application and leave the plumbing to Avaya. It also leads to fewer bugs, since the hard stuff has already been done by really smart people who live and breathe WebRTC.
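To see what the wrapper hides, here is the kind of ICE server configuration a native WebRTC application has to assemble by hand before it can even create a peer connection. The server addresses and credentials below are placeholders, not real infrastructure:

```javascript
// The raw plumbing the Avaya API takes care of for you: a native
// application must discover and configure its own STUN/TURN servers.
// All URLs and credentials here are illustrative placeholders.
var iceConfig = {
  iceServers: [
    { urls: 'stun:stun.example.com:3478' },   // public address discovery
    { urls: 'turn:turn.example.com:3478',     // media relay fallback
      username: 'webrtcUser',
      credential: 'webrtcSecret' }
  ]
};
// A native app would then hand this to the browser:
//   var pc = new RTCPeerConnection(iceConfig);
// With the Avaya library, none of this appears in application code.
```
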

In addition to making the programmer’s job a lot easier, the Avaya approach adds a layer of security that does not exist in native WebRTC. Specifically, Avaya supports the concept of a security token that ensures only authenticated and authorized users can create and manage WebRTC calls into your communications system. This prevents hackers from perpetrating toll fraud or launching denial-of-service attacks.
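As a sketch of how such a token might flow through an application: the web server mints the token and the page refuses to start a session without one. The function and the `domain` field below are my own illustrations, not the Avaya API:

```javascript
// Hypothetical sketch of token handling.  buildClientConfig is an
// illustrative helper, not part of the Avaya library; the property
// names mirror those used when configuring the client object.
function buildClientConfig(serverURL, token, phoneNumber, domain) {
  if (!token) {
    // No token means no authenticated session -- bail out rather than
    // expose the communications system to anonymous callers.
    throw new Error('Missing security token; refusing to connect');
  }
  return {
    webRTCHTTPAddress: serverURL,  // Collaboration Environment server
    securityToken: token,
    username: phoneNumber,
    domain: domain                 // assumed field name for the caller's domain
  };
}
```
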

So, how does it work? Let’s start by looking at a diagram of the architecture:

[Figure: Avaya WebRTC Snap-in architecture]

There are a number of things that should look familiar. First, there is the client. This is a WebRTC-compliant web browser such as Chrome or Firefox. There is also the WebRTC application, which is retrieved from a web server. This application consists of HTML and JavaScript and is sent to the web browser via HTTP. The last familiar piece might be the reverse proxy. A reverse proxy sits between a web browser and a web server and assists in tasks such as firewall traversal, content acceleration, and data aggregation.

If you are an Avaya guy like me, you will also recognize standard Aura components such as Session Manager, Avaya Media Server, Communication Manager, Session Border Controller, and station endpoints.

Less familiar will be the Collaboration Environment server(s) and specifically the WebRTC Snap-in. Think of the Snap-in as the WebRTC signaling server. It accepts HTTP-formatted commands from the application and converts them to SIP. That SIP signaling then flows to Communication Manager to establish and manage voice calls.

The Avaya Session Border Controller provides a secure conduit for the WebRTC application into the enterprise. It relays HTTP from the application to the WebRTC Snap-in and performs STUN and TURN functionality. If desired, an Avaya SBC can also act as a reverse proxy.

The Avaya Media Server terminates ICE, STUN, TURN, and DTLS. It also translates WebRTC media into a SIP media stream.

The Avaya WebRTC Library

Not shown in the above diagram is the Avaya WebRTC JavaScript library. This library contains the API objects that an application will invoke, as well as the code that interfaces with the underlying HTML 5 WebRTC layer. This library can be a part of the application itself, or better yet, it can be downloaded dynamically from the web server when the page is loaded. Downloading assures that the latest and greatest version of the API is always used.

The Avaya WebRTC library is what the developer will see when he or she builds an application. It consists of four sections:

  • Data declarations. An Avaya WebRTC application uses two variables: client and theCall. The client variable describes the client and its connection data, and theCall represents the WebRTC call.
  • Callbacks. These are used to communicate between the Avaya API and the WebRTC application. Most of the real work will occur in these callbacks.
  • Configuration code. The client object is configured to describe the client endpoint.
  • Connection code. This connects the application to the WebRTC Snap-in.

For example, the following is an application initiation sequence:

var client;
var theCall;

client = new avayaWebRTC.Client();

client.onConnectedCB = connectedCB;
client.onDisconnectedCB = disconnectedCB;
client.onNotificationCB = notificationCB;
client.onCallConnectedCB = callConnectedCB;
client.onCallInitiatedCB = callInitiatedCB;
client.onCallRingingCB = callRingingCB;
client.onCallRemoteDisconnectedCB = callRemoteDisconnectedCB;
client.onCallErrorCB = callErrorCB;
client.onRemoteMediaConnectedCB = remoteMediaConnectedCB;

client.webRTCHTTPAddress = serverURL; /* Collaboration Environment server */
client.securityToken = token;
client.username = <caller’s phone number>;
client.domain = <caller’s domain>;
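The callbacks assigned above are ordinary JavaScript functions that the application itself defines. A minimal sketch of a few of them, with entirely hypothetical bodies, might look like this:

```javascript
// Application-defined callback sketches.  The names match the
// assignments in the initiation sequence; the bodies are hypothetical.
var clientState = 'disconnected';

function connectedCB() {
  // The client has registered with the WebRTC Snap-in; it is now
  // safe to create an avayaWebRTC.Call and place a call.
  clientState = 'connected';
}

function disconnectedCB() {
  clientState = 'disconnected';
}

function notificationCB(notification) {
  // Informational events from the Snap-in; log them for debugging.
  console.log('Notification:', notification);
}

function callErrorCB(error) {
  // Surface failures (bad token, unreachable Snap-in, etc.) to the user.
  console.log('Call error:', error);
}
```
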


Once onConnectedCB has been invoked by the API, the client can make a call. The code to do that looks like this:

theCall = new avayaWebRTC.Call(client);
theCall.ringingFileUrl = <optional wav file played after call is launched>;
theCall.destinationAddress = <called number>; /* called number can be restricted */
theCall.ContextID = <Context ID from Context Store>; /* think of this as caller-attached data */


At this point, a call has been launched and the application will receive a series of callbacks as the call progresses. For example, onCallRingingCB will be invoked when the far end is ringing, and onCallConnectedCB will be invoked when the call has been answered. The callback onRemoteMediaConnectedCB is invoked when media is received from the far end.
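A common pattern is to drive the user interface from those call-progress callbacks. Here is a hedged sketch; the status strings are my own, not part of the API:

```javascript
// Tracking call progress as the Avaya callbacks fire.  A real
// application might enable or disable buttons, or update a status
// line, inside each callback instead of just recording a string.
var callStatus = 'idle';

function callInitiatedCB()          { callStatus = 'dialing'; }
function callRingingCB()            { callStatus = 'ringing'; }   // far end is ringing
function callConnectedCB()          { callStatus = 'connected'; } // call answered
function remoteMediaConnectedCB()   { callStatus = 'media-up'; }  // audio flowing
function callRemoteDisconnectedCB() { callStatus = 'ended'; }     // far end hung up
```
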

There are additional methods on theCall to manage and eventually release the call.

The WebRTC application can call any station type on the Avaya system, including H.323, SIP, analog, and digital telephones. These telephones can be standard endpoints or contact center agents. Additionally, the WebRTC application can use the Avaya system to make outbound calls on any trunk type.

The security token supplied by the web server can be used to restrict the types of endpoints that the application can call. For example, it may allow calls to contact center agents, but not outbound trunk calls.

Pay attention to the ContextID variable. When the WebRTC Snap-in is used with the Avaya Context Store Snap-in, caller and web page information can be passed with the call. This allows a contact center agent to know who is calling and from what page, i.e., the context of the conversation. This extension to WebRTC is invaluable to contact centers that web-enable their inbound communications.

In terms of capacity, Avaya states that the Snap-in supports 1800 simultaneous calls at a rate of 28,000 BHCC (Busy Hour Call Completions). The maximum requires a Collaboration Environment server, one Avaya SBC, and eight Avaya Media Servers.
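Those two published figures imply an average call hold time, which is worth a quick sanity check:

```javascript
// Back-of-the-envelope check on the published capacity figures.
// 28,000 completions per busy hour alongside 1,800 simultaneous calls
// implies an average call hold time of roughly four minutes.
var bhcc = 28000;                       // busy-hour call completions
var simultaneous = 1800;                // concurrent calls supported
var setupsPerSecond = bhcc / 3600;      // ~7.8 call setups per second
var avgHoldSeconds = simultaneous / setupsPerSecond; // ~231 seconds, about 3.9 minutes
```
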

In future articles, I will expand on the Context Store Snap-in along with Avaya’s Real-Time Speech application. Prepare to be amazed.

At this point in time, the API and WebRTC Snap-in support only G.711 audio. Additional codecs (e.g., Opus) and video will be added at a later date.

That’s all for now

This was meant to be an introduction, so I will stop here. I hope this helps you understand the Avaya WebRTC solution at a high level, as well as get a feel for how the API and Snap-in simplify the job of writing a WebRTC application. It’s important to realize that Avaya hasn’t changed the underlying HTML 5 code; they’ve simply made it easier to use. And for folks like me who can use all the help they can get, easy is a good thing.