Writing Your First WebRTC Application: Part 1

This is not the article that I intended to post today. The real article, which is still a work in progress, is a step-by-step approach to writing your first WebRTC application. It’s chock-full of JavaScript, WebRTC function calls, and HTML code.

However, it’s proving to be much bigger and more complicated than I expected. It’s not a simple task to take the reader from square one all the way to a working application without needing 30 or more pages to get there.

So, like all good elephant feasts, I’ve decided to break this into smaller pieces and spend some quality time on each one of them. My goal is that when all is said and done, you will be able to string the pieces together to get a much better understanding than if I just threw them at you all at once.

Which brings me to Part One. I’ll try not to get too technical (which is a difficult task given the nature of this subject) and present the high-level concepts of writing a WebRTC application. For some of you, this will be enough. For others, it will leave you wanting more.

The Beginning of the Beginning
A WebRTC application can be divided into two halves. I will further divide up those halves, but for now, let’s stick with two.

The first half is the code that runs in a Web browser. This consists of HTML and some form of scripting language. For me, the scripting language is JavaScript, but there are other, less common choices.

HTML

The HTML code will handle input from the user and perform all the steps necessary to format the visual aspects of the webpage. This is where you ask the user what he or she wants to do, and where you define the text and graphics to be displayed on the page. Most important to this discussion, you will use HTML to declare where video will be shown on the page.

For instance, to display CIF (Common Intermediate Format) video you will create a container of 352 pixels by 288 pixels. QCIF (Quarter CIF) would only need 176 pixels by 144 pixels.
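As a sketch (the element id here is illustrative, not required by WebRTC), a CIF-sized container for the remote video stream might be declared like this:

```html
<!-- Hypothetical container for the remote CIF (352x288) video stream -->
<video id="remoteVideo" width="352" height="288" autoplay playsinline></video>
```

Your JavaScript would later attach the incoming media stream to this element.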

JavaScript

First, I need to say that despite its name, JavaScript is not the same as the Java programming language. JavaScript has its origins way back in the days of Netscape (my very first Web browser) where it was known as LiveScript. It’s an object-oriented scripting language that supports dynamic typing.

Dynamic typing means that you can declare a variable (with the var statement) and assign that same variable an integer, a string, or any other JavaScript data type. This is totally contrary to Java’s strict typing, where strings, integers, characters, and all other data types are completely separate entities, and assigning an integer to a string variable yields a compile-time error. This is not the case with JavaScript, where it’s a data type free-for-all.
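A quick sketch of that free-for-all in action:

```javascript
// In JavaScript, the same variable can hold values of different types over time.
let answer = 42;          // starts life as a number
answer = "forty-two";     // now a string; no error
answer = { value: 42 };   // now an object

// typeof reports the type of the current value, not of the variable itself.
console.log(typeof answer); // "object"
```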

The JavaScript portion of your application contains the page’s variables and run-time logic. Within that logic will be code to create the connection to the signaling server and calls to WebRTC functions.

For example, your application will need to call the WebRTC function RTCPeerConnection.setRemoteDescription(). This will be done within your JavaScript.
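As a hedged sketch (the function and variable names are my own, not part of the API), handling an SDP answer delivered by the signaling server might look like this; the `{type, sdp}` object shape is what `setRemoteDescription()` accepts:

```javascript
// Build the description object that setRemoteDescription() expects.
function buildAnswerDescription(sdpText) {
  return { type: "answer", sdp: sdpText };
}

// Called when the signaling server delivers the far end's SDP answer.
// pc is an RTCPeerConnection created elsewhere in the application.
async function onAnswerReceived(pc, sdpText) {
  await pc.setRemoteDescription(buildAnswerDescription(sdpText));
}
```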

Calls to the signaling server will also be housed within your JavaScript. Although not a requirement of WebRTC, most developers will choose WebSocket as the path from application to server and server back to application.

A WebSocket object is fairly easy to use and will consist of calls to create a connection, send data on the connection, receive data on the connection, and recognize when the connection closes.
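Those four operations can be sketched as follows (the URL and JSON message envelope are illustrative choices of mine, not mandated by WebRTC):

```javascript
// Wrap signaling messages in a simple JSON envelope.
function makeSignal(kind, payload) {
  return JSON.stringify({ kind, payload });
}

// Open the signaling channel and wire up the four basic events.
function openSignalingChannel(url, onMessage) {
  const ws = new WebSocket(url);                       // create the connection
  ws.onopen = () => ws.send(makeSignal("hello", {}));  // send on the connection
  ws.onmessage = (e) => onMessage(JSON.parse(e.data)); // receive on the connection
  ws.onclose = () => console.log("signaling closed");  // recognize the close
  return ws;
}
```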

The Signaling Server
If you read my article about WebRTC fundamentals (WebRTC for Beginners), you will know that the WebRTC specification does not specify a particular signaling server. It clearly states that one is necessary, but it puts few restrictions on what it is or how it is accessed.

This can be seen as a mixed blessing. By not prescribing what a signaling server is, developers can use the technology that best suits their needs. Do you want to use SIP? Go ahead and use SIP. Do you want to define your own custom signaling that’s perhaps easier to use than SIP? Go for it.

The con is that there is no standard way to perform your signaling. So, unless you can utilize someone else’s work, you are on the hook to write your own signaling server.

No matter whether you procure or write it, a signaling server needs to do two basic things.

  1. Exchange the metadata necessary to establish the session. This includes some form of addressing and the Session Description Protocol (SDP) data for each browser.
  2. Deal with Network Address Translation and firewalls.
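The first duty can be sketched server-side as a tiny relay that maps a peer’s address to its connection and forwards SDP between peers (all names here are illustrative, and a real server would add error handling and cleanup):

```javascript
// Minimal in-memory relay: peerId -> connection object with a send() method.
const peers = new Map();

function register(peerId, conn) {
  peers.set(peerId, conn);
}

// Forward a message (e.g. an SDP offer) from one peer to another.
// Returns false if the destination peer is unknown.
function relay(fromId, toId, message) {
  const dest = peers.get(toId);
  if (!dest) return false;
  dest.send(JSON.stringify({ from: fromId, message }));
  return true;
}
```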

For a closer look at the high-level aspects of a signaling server, please refer to An Introduction to WebRTC Signaling.


Wrapping up Part One
Allow me to summarize what I just wrote.

  • A WebRTC solution consists of two parts – a Web browser application and a signaling server.
  • The Web browser application will consist of HTML and JavaScript.
  • HTML will be used for user input and page display.
  • JavaScript will be used for communication to the signaling server and WebRTC function calls.
  • A signaling server must exist, but WebRTC gives you a lot of leeway as to what it is.

In future installments, I will dig deeper into all these aspects. While you may never write your own WebRTC application, it is important to understand what is happening under the covers. By the time I am finished, I hope that you can do both with relative ease.

Stay tuned for more fun and games!

Related Articles:

Authentication & Authorization: Avaya’s Easy Security for Snap-in Developers

Security. Many application developers consider security to be the bane of their existence. We just want to get our features to work, and often don’t want to think about securing those features until the tail end of our development. Unfortunately, cybersecurity is really important in today’s world. Well, I’ve got good news for developers of Snap-ins on Avaya Breeze: Avaya has made it easy to securely expose and invoke web services in your Snap-ins.

Security Starts with Authentication—Answering “Who are you?”

Before you allow an application to invoke your features, you need to know the answer to the question “Who are you?” This has often been a difficult question to get answered. There are two major classes of applications, and they each should be authenticated differently.

  • Single User Apps: Some applications are directly focused on a single user. These sorts of applications often run on an end user’s device. They might be native applications or web applications running in a browser. For these sorts of applications, you must establish the identity of the end user. Ideally the users can use their enterprise credentials rather than a username and password specific to your application.
  • Server-Based Apps: These sorts of applications often operate on behalf of many users, or don’t have functionality associated with any end users at all. Unfortunately, in the past we’ve often treated server-based applications like users: we give them a bogus username and password. Server-based applications really should be authenticated in a stronger way.

Next, Authorization Answers “Why are you here?”

The user or application has successfully proved its identity, so what more do we need? Well, we need to know the answer to the question “Why are you here?” and, more importantly, “Are you allowed to do what you’re asking to do?”

We’ve usually done a pretty good job of authorization for user-focused applications. If I log into a softphone, I can’t pretend to be a colleague as I make crank calls, or check my boss’s voicemail. Server-based applications are a different story. With those, it’s too often been all or nothing. If the application is a trusted super user, it can often do anything it wants. This just doesn’t cut it with the Avaya Breeze™ platform. An application that has been given full access to your snap-in might have no business accessing the services provided by another snap-in, and vice versa. We need to do better than we have in the past.

OAuth to the Rescue!

Fortunately, Avaya Breeze has drawn upon the industry-standard OAuth 2.0 security framework to help you solve the problems of authentication and authorization. OAuth 2.0 provides a neat separation of concerns so that developers can focus their efforts only where they are required. The Authorization Service is the centerpiece of the OAuth 2.0 architecture. It is responsible for:

  • Authentication of users. Multiple forms of authentication can be used, including those that support Multi-Factor Authentication (MFA).
  • Authentication of applications. If an application invokes other services, OAuth 2.0 refers to it as a “Client”. This is true regardless of whether the application is running on an end user device or if it is a server-based application.
  • Provisioning and tracking of granted scopes. Specific applications can be granted access to specific features / resources.
  • Generation of tokens that assert identities and granted scopes.

Some of you will be writing snap-ins with web services that can be invoked by other snap-ins or non-Breeze applications. According to OAuth 2.0 lingo, you’ll be operating as a Resource. Guess what you won’t have to worry about as a Breeze Resource? Authentication! You don’t have to know or care how a user or Client was authenticated. In many cases, you don’t even have to know the identity of the invoker of your service. All you have to care about is whether the user/Client was granted access to your features (scopes). It is completely up to you to say what those scopes are called. If you have a snap-in that interfaces to a coffee dispenser, for example, you might declare a feature called “Make Coffee,” with values of “House” and “Premium.”
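As an illustration only (this is not a Breeze API, and the scope-string format is my own invention), the check a Resource cares about can amount to something this simple:

```javascript
// Hypothetical check: do the token's granted scopes include the feature/value
// this Resource requires? Scope strings here use an invented "feature:value" form.
function hasScope(grantedScopes, feature, value) {
  return grantedScopes.some((s) => s === `${feature}:${value}`);
}

// e.g. a coffee-dispenser snap-in guarding its premium brew
hasScope(["Make Coffee:House", "Make Coffee:Premium"], "Make Coffee", "Premium");
```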

Others of you will be writing snap-ins or non-Breeze applications that will invoke other snap-in services. In that case, you’ll be acting as a Client in the OAuth 2.0 terminology. You will need to work with the Avaya Breeze Authorization Service to authenticate your application or snap-in, and optionally to authenticate users of your application. Once you’ve authenticated, you will get a token that you can present when invoking web services. Note that some snap-ins will act both as a Client and as a Resource.
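Presenting a token generally means attaching it as an OAuth 2.0 bearer credential (RFC 6750). This generic sketch, not an Avaya-specific API, shows the HTTP headers an invoking Client would send:

```javascript
// Build the HTTP headers for an OAuth 2.0 bearer-token request (RFC 6750).
function authorizedHeaders(accessToken) {
  return {
    "Authorization": `Bearer ${accessToken}`,
    "Content-Type": "application/json",
  };
}
```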

If you read the OAuth 2.0 specification, you might think it seems complex. Fear not, fellow snap-in developers: you’ll see in subsequent blogs how our Breeze APIs make it dead simple to securely expose and invoke snap-in web services. For now, if you’d like to learn more, check out the “Authorization” chapter of our Snap-in Development Guide.

How to Prevent Media Gateway Split Registrations

Back when Avaya Aura Communication Manager 5.2 was released, I recall reading about this new capability called Split Registration Prevention Feature (SRPF). Although I studied the documentation, it wasn’t until I read Timothy Kaye’s presentation (Session 717: SIP and Business Continuity Considerations: Optimizing Avaya Aura SIP Trunk Configurations Using PE) from the 2014 IAUG convention in Dallas that I fully understood its implications.

What is a Split Registration?

First, I need to explain what SRPF is all about. Imagine a fairly large branch office that has two or more H.248 Media Gateways (MGs), all within the same Network Region (NR). SRPF only works for MGs within an NR and provides no benefit to MGs assigned to different NRs.

Further, imagine that the MGs provide slightly different services. For example, one MG might provide local trunks to the PSTN, and another might provide Media Module connections to analog phones. For this discussion, it does not matter what type of phones (i.e. SIP, H.323, BRI, DCP, or Analog) exist within this Network Region. During a “sunny day,” all the MGs are registered to Processor Ethernet in the CM-Main, which is in a different NR somewhere else in the network. It aids understanding to assume that all the resources needed for calls within an NR are provided by equipment within that NR.

A “rainy day” is when CM-Main becomes unavailable, perhaps due to a power outage. When an MG’s Primary Search Timer expires, it will start working down the list, trying to register with any CM configured on its Media Gateway Controller (MGC) list. All MGs should have been configured to register to the same CM-Survivable server; their registrations cause CM-Survivable to become active.
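The search behavior can be modeled roughly like this (a deliberate simplification; real gateways use timers, retries, and transition points, and the names here are illustrative):

```javascript
// Walk the Media Gateway Controller list in order and return the first
// controller that is reachable; null if none are.
function selectController(mgcList, isReachable) {
  for (const controller of mgcList) {
    if (isReachable(controller)) return controller;
  }
  return null;
}

// During a "rainy day" CM-Main is unreachable, so every MG that shares the
// same MGC list lands on the same survivable server.
selectController(["CM-Main", "CM-Survivable"], (c) => c !== "CM-Main");
```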

Image 1

In this context a CM server is “active” if it controls one or more MGs. A more technical definition is that a CM becomes “active” when it controls DSP resources, which only happens if an MG, Port Network (PN), or Avaya Aura Media Server (AAMS) registers to the CM server.

Since all the MGs are registered to the same CM, all resources (e.g. trunks, announcements, etc.) are available to all calls. In effect, the “rainy day” system behaves the same as the “sunny day” with the exception of which CM is performing the call processing. Even if power is restored, only the CM-Survivable is active, and because no MGs are registered to CM-Main it is inactive.

In CM 5.2, SRPF was originally designed to work with splits between CM-Main and Survivable Remote (fka Local Survivable Processor) servers. In CM 6, the feature was extended to work with Survivable Core (fka Enterprise Survivable Server) servers. Because the feature treats the two types of server interchangeably, I use the generalized term “CM-Survivable.”

A “Split Registration” is where within a Network Region some of the MGs are registered to CM-Main and some are registered to a CM-Survivable. In this case only some of the resources are available to some of the phones. Specifically, the resources provided by the MGs registered to CM-Main are not available to phones controlled by CM-Survivable, and vice versa. In my example above, it is likely some of the phones within the branch office would not have access to the local trunks.

Further, the Avaya Session Managers (ASM) would discover CM-Survivable is active. They would learn of CM-Survivable server’s new status when either ASM or CM sent a SIP OPTIONS request to the other. The ASMs then might begin inappropriately routing calls to both CM-Main and CM-Survivable. Consequently, a split registration is even more disruptive than the simple failover to a survivable CM.

What can cause split registrations? One scenario is when the “rainy day” is caused by a partial network failure. In this case some MGs, but not all, maintain their connectivity with CM-Main while the others register to CM-Survivable. Another scenario could be that all MGs failover to CM-Survivable, but then after connectivity to CM-Main has been restored some of the MGs are reset. Those MGs would then register to CM-Main.

How SRPF Functions

If the Split Registration Prevention Feature is enabled, effectively what CM-Main does is to un-register and/or reject registrations by all MGs in the NRs that have registered to CM-Survivable. In other words, it pushes the MGs to register to CM-Survivable. Thus, there is no longer a split registration.

When I learned that, my first question was how does CM-Main know that MGs have registered to CM-Survivable? The answer is that all CM-Survivable servers are constantly trying to register with CM-Main. If a CM-Survivable server is processing calls, then when it registers to CM-Main it announces that it is active. Thus, once connectivity to CM-Main is restored, CM-Main learns which CM-Survivable servers are active. This is an important requirement: if CM-Main and CM-Survivable cannot communicate with each other, a split registration could still occur.

My second question was how CM forces the MGs back to the CM-Survivable. What I learned was that CM-Main looks up all the NRs for which that Survivable server is administered. The list is administered under the IP network region’s “BACKUP SERVERS” heading. CM-Main then disables the NRs registered to CM-Survivable. That both blocks new registrations and terminates existing registrations of MGs and H.323 endpoints.

Image 2

Once the network issues have been fixed, with SRPF there are only manual ways to force MGs and H.323 endpoints to fail back to CM-Main. One fix would be to log into CM-Survivable and disable the NRs. Another would be to disable PROCR on CM-Survivable. An even better solution is to reboot the CM-Survivable server, because then you don’t have to remember to come back to it in order to re-enable the NRs and/or PROCR.

Implications of SRPF

Enabling SRPF has some big implications for an enterprise’s survivability design. The first limitation is that within an NR the MGC list of every MG must be limited to two entries. The first entry is the Processor Ethernet of CM-Main, and the second is the PE of a particular CM-Survivable. In other words, for any NR there can be only one survivable server.

Similarly, all H.323 phones within the NR must be similarly configured with an Alternate Gatekeeper List (AGL) of just one CM-Survivable. The endpoints get that list from the NR’s “Backup Servers” list (pictured above). This also means the administrator must ensure that for each NR all the MGs’ controller lists match the endpoints’ AGL.

Almost always, if SRPF is enabled, Media Gateway Recovery Rules should not be used. However, in some configurations enabling both might be desirable. In that case, all MGs must use an mg-recovery rule with the “Migrate H.248 MG to primary:” field set to “immediately” when the “Minimum time of network stability” is met (default is 3 minutes). Be very careful when enabling both features, because in certain circumstances the SRPF and the Recovery Rule will effectively negate each other.

Finally, SRPF only works with H.248 MGs. Port Networks (PNs) have no comparable mechanism to prevent rogue PN behavior.

Enabling SRPF

The Split Registration Prevention Feature (Force Phones and Gateways to Active Survivable Servers?) is enabled globally on the CM form: change system-parameters ip-options.

Image 3

If I had not found Tim Kaye’s presentation, I would not have completely understood SRPF. So, now whenever I come across a presentation or document authored by him, I pay very close attention. He always provides insightful information.

Say Hello to Zang

For months, the Avaya and Esna teams have been hard at work on a revolutionary solution we believe will shape the future of communications in the new digital business era. Last week, the solution, and the new company behind it, were officially unveiled onstage at Enterprise Connect. Say hello to Zang.

We’re incredibly proud of Zang, and are big believers in its potential. Here’s why.

Apps, APIs and SDKs have fundamentally changed the way we connect with one another. Smaller startups have launched freemium, single-feature applications that are gaining traction in the market. And increasingly, we’re meeting small- and midsize customers who are mobile-first and cloud-only.

Zang is our answer to the needs and communication trends we’re seeing in the market.

Zang is the first all-in-one, 100 percent cloud-based communications platform and communication applications-as-a-service. The robust platform gives developers APIs and tools to easily build standalone communication applications, or embed parts of Zang into other apps, online services and workflows.

Zang is virtual, so it’s accessible anywhere, on any device. We also offer a range of ready-to-use applications on a pay-as-you-go subscription basis.

Giving companies the flexibility to build exactly what they need is incredibly powerful.

Imagine a midsize startup and the sheer number of distinct communication and collaboration tools it uses on a daily basis—Gmail for email, Google Docs for document collaboration, Slack for group chat, Salesforce for CRM, Skype for video calls, Zendesk for customer service and smartphones for business communications. Individual teams inside the company may adopt their own subset of tools—Zoho, Hipchat, Google Hangouts, etc.

A lot of important context gets locked up inside each platform, and isn’t shared as employees switch back and forth, all day long, communicating with customers and one another. If you want to embed Slack inside Salesforce, or embed Skype inside Salesforce, it’s hard. These applications are largely single-featured, and aren’t built with easy interoperability at their core.

Zang is different—our platform is more like a set of building blocks. We give companies exactly the configuration they need: either an already-built model, or the pieces needed to add voice, IM, conferencing, video, IVR, or a range of other capabilities to the software they’re already working with. And you don’t have to be a developer or have deep technical expertise to use Zang.

Embedding Zang communication capabilities inside an app takes hours or days, rather than the weeks or months typical of our competitors’ offerings. Stay tuned, as this is just the beginning of what Zang can do. To learn more and sign up for our First Look Program, please visit Zang.io.