Clumsy Strangers and Attacking Insects: My Two Hazardous Months Wearing Google Glass

Cradling my bike, right foot on the curb, left hand on the handlebar, I stood and waited for Glass to give me directions to my destination. As I stared into the rectangular prism above my eyebrow, I felt a tap on my right shoulder. Before I could turn, a voice asked, “Are those the new Google Goggles?” “Google Glass, actually!” I responded. 

(Note: this guest blog is written for Avaya Connected by Carlos Monterrey, a San Jose, Calif.-based writer, who also supplied the photos.)

I’ve been using Google Glass – no plural, thank you very much – for over two months now. I was one of the lucky few who secured a pair through Google’s Glass Explorer Program for developers and early adopters. 


Carlos, about to be accosted by a friendly stranger over his choice in eyewear.

Like most Explorers, I had visions of grandeur, hopes of technological luxury and exciting innovation. The reality is unfortunately more mundane. Wearing my charcoal-colored Glass in public has made me more self-conscious than I expected to be, especially in public restrooms, gyms, and other peeper-sensitive settings.

Being an early adopter of wearable technology also comes with a peculiar civic duty. When you own globally publicized, bleeding-edge gadgetry, the burden to share, teach and entertain is all too real, with friends and family, as well as the ever-looming friendly stranger.

This scenario, which has already played itself out more than a dozen times, goes something like this:

1) They ask about Glass.

2) They ask if they can wear it.

3) They talk loudly in an attempt to make it do something cool: "OK, Glass… umm, ohmigod, is this like Siri?"

4) They clumsily swipe Glass's touchpad, dialing my cousin, my girlfriend and other random people in my address book.

5) They accidentally post unflattering pictures of themselves.

I get why people are so curious about its design. The nearly-indestructible frame weighs next to nothing. It produces HD quality video, and its bone conduction audio system is extremely effective. Most people stare out of curiosity, and all of the comments have been positive (possibly-relevant note: I live in the heart of Silicon Valley). 

Some spiteful non-users and cultural gatekeepers are bashing Glass as just the latest symbol of nerd-dom – like a Segway, only wearable. Wired recently ran an article titled "Guys Like This Could Kill Google Glass Before It Ever Gets Off The Ground," showing young, savvy tech investors wearing Glass and looking "goofy" — meaning that if they can't look cool wearing them, neither can we. I won't weigh in on the fashion aspect, but I will say one thing: most people are curious to try on a pair when they see one. And food for thought: I was told that taking pictures with Glass made me look exponentially less bourgeois than taking pictures with an iPad.

Looking Glassy-Eyed, and Other Hazards

My initial plan was to build an app for Glass, a venture more educational than entrepreneurial. That hasn't come to fruition. The Glass Developer Kit (GDK), which would give developers access to the Application Programming Interfaces (APIs) needed to build apps for Glass, was announced at the Google I/O conference in May. Other Glass updates have shipped since, but the GDK still hasn't.


The Latest Symbol of Conspicuous Nerd-dom, à la a Wearable Segway?

That's got me and other Glass owners impatient. Whenever I wear Glass, I can't help but imagine a plethora of possible applications: x-ray vision, telekinesis and a mobile home theater. Jokes aside, the included applications – hands-free picture and video, GPS navigation, voice command – though limited, are practical. They work quickly and elegantly, with the only drawback being, for lack of a better term, looking "glassy-eyed": the motionless stare most users adopt when actively using Glass. My story: I was taking pictures atop one of my favorite hiking trails. While composing my photo, I was so motionless (necessary to reduce image blur) that a fly nearly flew into my mouth.

Though the future of Glass as a social innovator is still questionable, some are turning their attention to workforce applications. Think of Square and how it eases credit card payments for very small businesses; Glass could do the same for inter-business communication. 

I think back to when I was 19, at my first part-time job at The Home Depot. Within days of getting hired, the manager called me to his office and told me I'd be working the electrical department because the usual guy had called in sick. Naturally, I knew nothing about electrical appliances, or electricity for that matter. I spent the rest of the day avoiding questions from confused customers. Sorry, Home Depot.

What a difference something like Glass would have made! Imagine scanning a QR code and instantly getting everything I needed to know about fluorescent light bulbs projected directly in front of my eyes. If I'd really been in trouble, I could have paged for assistance. Or I could have started a quick screen-share with a co-worker in the lumber department who knew a little more than this teenager did about connecting ground wires to service boxes in pre-WWII houses.

RoboCop, and Other Industrial Uses

Law enforcement agencies have been tinkering with ideas similar to Glass for a while. Imagine image recognition apps that instantly tell you everything you need to know about a person: height, weight, criminal background and so on, all in a hands-free headset.

A person operating machinery in a factory can update supervisors in real time about potential dangers by literally relaying what he or she sees. A handyman may not need to pull out his bubble level anymore, because a projected geometric line will show whether the shelf on the wall is level. We're truly limited only by our imagination.
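The level idea is mostly trigonometry. A toy sketch (invented code, not a real Glass app) of how a tilt angle could be derived from accelerometer readings:

```javascript
// Illustrative sketch only: derive a tilt angle from accelerometer
// readings the way a hypothetical Glass "level" app might.
// ax and ay are the gravity vector components (in g) along the
// device's x and y axes.
function tiltDegrees(ax, ay) {
  return Math.atan2(ax, ay) * 180 / Math.PI;
}

// A surface counts as "level" if its tilt is within a small tolerance.
function isLevel(ax, ay, toleranceDeg = 1.0) {
  return Math.abs(tiltDegrees(ax, ay)) <= toleranceDeg;
}
```

A shelf lying flat reads close to 0 degrees; a 30-degree tilt (ax = 0.5, ay ≈ 0.866) would clearly fail the check.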

Unwilling to wait, programmers have tinkered with Glass and started compiling a list of hardware components and specifications. They include a Texas Instruments OMAP4430 processor, an accelerometer, a gyroscope and a proximity sensor. These could enable the holographic hand-swiping made famous by Tom Cruise in the movie "Minority Report." All told, the components are worth between $150 and $200 – low, when you consider that today's list price is $1,500. Insiders predict that Glass will retail for between $300 and $500 when it becomes widely available.

In my opinion, $500 should be the ceiling — especially if Glass doesn't expand on its current list of features. It's to Google's advantage to release the Developer Kit as quickly as possible, as the creativity of developers and the apps they build will justify the concept of wearable computers – or not. Only then will we be able to judge whether wearing Glass on your head is truly better than pulling out your smartphone.

In a world where technological innovation is intertwined with seamless integration, Glass represents the future of wearable technology, or at least its early stages. It's like Google's self-driving cars, which are just now starting to become visible on highways across America. The day will come when people get used to the idea of mobile glassware technology, further advancing the fusion of technology, culture and person. Until then, I'll continue to be interrogated about my choice in eyewear by friendly strangers.

Related Articles:

We’re Opening Up (Nearly) Everything We Do. Here’s Why.

Technology wants to be open. This concept—giving people the tools and freedom to build new applications, based on open APIs—is disrupting every corner of technology.

It’s time to open up enterprise communications.

To illustrate the disruptive nature of openness, consider the iPhone. The original iPhone launched on June 29, 2007, preloaded with apps that Apple programmers built themselves—email, a notepad, a weather app, a text messaging app, a calculator, etc. Pretty dry stuff.

251 days later, Apple released a software development platform, giving people the tools to build their own apps for the iPhone.

Overnight, quirky little apps started popping up in the App Store. Today, there are more than 1.3 million iPhone apps in the market—from mega-games like Candy Crush and Angry Birds, to cultural phenomena like Instagram, Tinder and Snapchat.

It would have been impossible for Apple to develop 1.3 million apps in 6.5 years (more than 500 new apps per day) and even if they could, those apps wouldn’t be nearly as creative as the software that people dreamt up and built on their own.
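That parenthetical figure checks out:

```javascript
// Sanity-check the "more than 500 new apps per day" claim:
// 1.3 million apps over roughly 6.5 years.
const totalApps = 1300000;
const days = 6.5 * 365;               // ≈ 2,372 days
const appsPerDay = totalApps / days;
console.log(Math.round(appsPerDay)); // 548
```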

The concept of openness isn’t isolated to mobile phones—it’s disrupted networking, personal computing, the Web, wearable devices, and many other sectors.

Enterprise communications is next, and Avaya is leading the way.

For most of our industry’s history, enterprise-grade communication products have largely been closed, proprietary systems, due to the hardware-centric nature of what we sell. In the past, if you needed a custom application built—for example, software that allowed your contact center to send customer data to the home office—the development process could be slow and expensive.

As we virtualize significant pieces of our product portfolio, opening up the technical backend of those products has become increasingly feasible. Our customers want faster software development times, they want the ability to do it themselves, and they want greater control over their enterprise communications experience.

Today, we’re excited to highlight the new Avaya Engagement Development Platform, a software development platform that gives people the tools to build their own communication apps on our infrastructure.

To make software development even easier, the Avaya Engagement Development Platform features something we’re calling Snap-ins—modular, reusable pieces of code that connect, enable or facilitate desired application outcomes.

With Snap-ins, programmers can quickly and cost-effectively select popular communication features and integrate them into business processes and functions.

Earlier this year, we saw an innovative application of this new, open development platform out of Michigan State University. The school had a problem: People sometimes got stuck in the elevators, and when they picked up the phone inside the elevator, they got connected to someone who had no idea where they were.

Using the Avaya Engagement Development Platform, a small group of programmers at MSU created an app that automatically identifies the stuck elevator’s location and floor, and sends that data simultaneously to the school’s contact center and on-call maintenance staff.

MSU’s elevator app didn’t take a year to write, either—the team built it at a weekend hackathon. Fast, iterative development cycles will become the norm for people building software on the Avaya Engagement Development Platform.
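The shape of that app is simple to sketch. The following toy code (invented names and data; the real app runs on Avaya's platform, not this code) shows the fan-out logic:

```javascript
// Toy sketch of the elevator-alert flow described above.
// The directory and all identifiers are invented for illustration.
const elevatorDirectory = {
  "ELEV-17": { building: "Wells Hall", floor: 3 },
  "ELEV-02": { building: "Main Library", floor: 1 },
};

// When the in-elevator phone goes off-hook, send the location
// simultaneously to the contact center and on-call maintenance.
function handleElevatorCall(elevatorId) {
  const loc = elevatorDirectory[elevatorId];
  if (!loc) return null;
  const alert = `Stuck elevator ${elevatorId} at ${loc.building}, floor ${loc.floor}`;
  return [
    { to: "contact-center", message: alert },
    { to: "maintenance-on-call", message: alert },
  ];
}
```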

Another example comes from a manufacturing company with 160,000 employees worldwide. Collectively, those employees dialed into tens of thousands of conference calls each day—oftentimes from the road. Each conference call required employees to get a calendar notification, write down the conference call number and PIN and dial in manually.

Using the Avaya Engagement Development Platform and Snap-ins, the company was able to develop a completely hands-free conferencing solution in less than 2 weeks. No more conference call numbers to remember—a single touch from the calendar notification, and you’re connected.
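Under the hood, a one-touch join can be as simple as encoding the bridge number and PIN into a single dialable string, with commas as the conventional post-dial pauses. A sketch with invented names (not the company's actual implementation):

```javascript
// Build a "one touch" dial string from a calendar notification.
// Commas are the conventional DTMF pause characters, so the PIN
// is sent automatically after the conference bridge answers.
function oneTouchDialString(meeting) {
  return `tel:${meeting.bridgeNumber},,${meeting.pin}#`;
}

const meeting = { bridgeNumber: "+15551230000", pin: "482916" };
console.log(oneTouchDialString(meeting));
// tel:+15551230000,,482916#
```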

These are two successful use cases driven by real-world customer needs. We designed the Avaya Engagement Development Platform to give programmers the tools to build everything from simple elevator apps to sophisticated, scalable, global software solutions.

Openness is fundamentally good for the industry, our customers, and ultimately—the hundreds of millions of knowledge workers worldwide who rely on high-quality, enterprise-grade communications every day.

We’re looking forward to seeing what gets built.

An Introduction to the Avaya WebRTC Snap-In

Over the last several months, I've written a number of articles about WebRTC. I discussed encryption, network address translation, signaling, and – over the course of four articles – I even wrote about how to go about creating your very own WebRTC application.

In case you missed any of them, here are all my WebRTC articles to date:

WebRTC for Beginners

A WebRTC Security Primer

An Introduction to WebRTC Signaling

Understanding WebRTC Media Connections – ICE, STUN, and TURN

Writing Your First WebRTC Application Part One

Writing Your First WebRTC Application Part Two

Writing Your First WebRTC Application Part Three

Writing Your First WebRTC Application Part Four

Now that I’ve shared just about everything that I know about WebRTC, I am going to let you in on a little secret. You don’t need to know any of it. All that stuff about ICE candidates? You can forget it. Did you finally figure out how to use the RTCPeerConnection object? You don’t need to know that, either.

Okay, that's not really true. Understanding ICE candidates and the native HTML 5 WebRTC objects and their methods isn't a bad thing. It's just that bare-metal WebRTC programming is pretty sophisticated stuff, and a number of people recognized that the interfaces, function calls, and call flows are far more complicated than most people want to deal with – including me.

That’s why companies like Avaya are creating wrapper APIs (application programming interfaces) that hide the complexities of WebRTC. This allows applications to be written faster and with little understanding of the nitty-gritty that happens underneath the covers.

In addition to simplifying the call flows and objects, these wrappers can also add functionality not available in out-of-the-box WebRTC. That’s not to say that a good developer couldn’t write these extensions on his or her own. It’s just that they come “for free” and the programmer can concentrate on the application’s business logic and leave the nuts and bolts to someone else.
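To make the wrapper idea concrete, here is a deliberately tiny toy (invented names, unrelated to Avaya's actual API) that hides a multi-step handshake behind one method and a couple of callbacks:

```javascript
// Toy wrapper illustrating the pattern: a complicated call flow is
// collapsed into one method plus simple callbacks. All names are
// invented; this is not the real Avaya or WebRTC API.
class SimpleCallClient {
  constructor() {
    this.onConnectedCB = null;
    this.onCallConnectedCB = null;
  }
  connect() {
    // A real wrapper would negotiate ICE/STUN/TURN, open a
    // WebSocket, exchange SDP, etc. The application never sees it.
    if (this.onConnectedCB) this.onConnectedCB();
  }
  call(number) {
    if (this.onCallConnectedCB) this.onCallConnectedCB(number);
  }
}

const client = new SimpleCallClient();
const events = [];
client.onConnectedCB = () => events.push("connected");
client.onCallConnectedCB = (n) => events.push(`in call with ${n}`);
client.connect();
client.call("5551234");
```

From the application's point of view, the entire plumbing layer is two assignments and two method calls.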

The Avaya WebRTC Solution

Today, I would like to introduce you to the WebRTC support offered by Avaya’s Collaboration Environment 3.0. If you are not familiar with Collaboration Environment, I highly recommend that you take a look at this article before proceeding.

An Introduction to Avaya Collaboration Environment 3.0

As you know, a WebRTC application can be divided into three parts. There is the application that runs in a web browser. This part will be written in HTML and JavaScript. There is the web server that delivers the application to the browser. Finally, there is the signaling server that relays information between WebRTC clients.

None of that changes with the Avaya approach. You still need a web server, WebRTC application, and a signaling server. However, unlike a traditional WebRTC application, an Avaya-based application will not call directly into the HTML 5 extensions. Instead, it will use a collection of Avaya objects that invoke those extensions on your behalf. This abstraction greatly simplifies what a programmer needs to know and do.

For instance, the Avaya API handles the work involved in finding and attaching to a STUN or TURN server. It also eliminates the need to create and manage WebSocket connections. This allows the programmer to focus on the business aspects of the application and leave the plumbing to Avaya. It also leads to fewer bugs, since the hard stuff has already been done by really smart people who live and breathe WebRTC.

In addition to making the programmer’s job a lot easier, the Avaya approach adds a layer of security that does not exist in native WebRTC. Specifically, Avaya supports the concept of a security token that assures that only authenticated/authorized users can create and manage WebRTC calls into your communications system. This prevents hackers from perpetrating toll fraud or launching denial of service attacks.

So, how does it work? Let’s start by looking at a diagram of the architecture:

Avaya WebRTC Snap-in

There are a number of things that should look familiar. First, there is the client. This is a WebRTC-compliant web browser such as Chrome or Firefox. There is also the WebRTC application, which is retrieved from a web server. This application consists of HTML and JavaScript and is sent to the web browser via HTTP. The last familiar piece might be the reverse proxy. A reverse proxy sits between a web browser and a web server and assists in tasks such as firewall traversal, content acceleration, and data aggregation.

If you are an Avaya guy like me, you will also recognize standard Aura components such as Session Manager, Avaya Media Server, Communication Manager, Session Border Controller, and station endpoints.

Less familiar will be the Collaboration Environment server(s) and specifically the WebRTC Snap-in. Think of the Snap-in as the WebRTC signaling server. It accepts HTTP formatted commands from the application and converts them to SIP. These SIP messages will then use Communication Manager to establish and manage voice calls.

The Avaya Session Border Controller provides a secure conduit for the WebRTC application into the enterprise. It relays HTTP from the application to the WebRTC Snap-in and performs STUN and TURN functionality. If desired, an Avaya SBC can also act as a reverse proxy.

The Avaya Media Server terminates ICE, STUN, TURN, and DTLS. It also translates WebRTC media into a SIP media stream.

The Avaya WebRTC Library

Not shown in the above diagram is the Avaya WebRTC JavaScript library. This library contains the API objects that an application will invoke, as well as the code that interfaces with the underlying HTML 5 WebRTC layer. This library can be a part of the application itself, or better yet, it can be downloaded dynamically from the web server when the page is loaded. Downloading assures that the latest and greatest version of the API is always used.

The Avaya WebRTC library is what the developer will see when he or she builds an application. It consists of four sections:

  • Data declarations. An Avaya WebRTC application uses two variables: client and theCall. The client variable describes the client and its connection data; theCall represents the WebRTC call.
  • Callbacks. These communicate between the Avaya API and the WebRTC application. Most of the real work happens in these callbacks.
  • Configuration code. The client object is configured to describe the client endpoint.
  • Connection code. This connects the application to the WebRTC Snap-in.

For example, the following is an application initiation sequence:

var client;
var theCall;

client = new avayaWebRTC.Client();
client.onConnectedCB = connectedCB;
client.onDisconnectedCB = disconnectedCB;
client.onNotificationCB = notificationCB;
client.onCallConnectedCB = callConnectedCB;
client.onCallInitiatedCB = callInitiatedCB;
client.onCallRingingCB = callRingingCB;
client.onCallRemoteDisconnectedCB = callRemoteDisconnectedCB;
client.onCallErrorCB = callErrorCB;
client.onRemoteMediaConnectedCB = remoteMediaConnectedCB;
client.webRTCHTTPAddress = serverURL; /* Collaboration Environment server */
client.securityToken = token;
client.username = <caller's phone number>;
client.domain = <caller's domain>;


Once onConnectedCB has been invoked by the API, the client can now make a call. Code to perform that will look like this:

theCall = new avayaWebRTC.Call(client);
theCall.ringingFileUrl = <optional wav file played after call is launched>;
theCall.destinationAddress = <called number>; /* called number can be restricted */
theCall.ContextID = <Context ID from context store>; /* think of this as caller-attached data */


At this point, a call has been launched and the application will receive a series of callbacks as the call progresses. For example, onCallRingingCB will be invoked when the far end is ringing, and onCallConnectedCB will be invoked when the call has been answered. The callback onRemoteMediaConnectedCB is invoked when media is received from the far end.

There are additional methods on the theCall object to manage and eventually release the call.
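The callback ordering above amounts to a small state machine. A sketch (invented code, not the Avaya library) that tracks the legal progression of a call:

```javascript
// Track call-progress callbacks as a simple state machine.
// The states mirror the callback sequence described in the text;
// the transition table itself is invented for illustration.
const nextStates = {
  idle:           ["initiated"],
  initiated:      ["ringing", "error"],
  ringing:        ["connected", "error"],
  connected:      ["mediaConnected", "released"],
  mediaConnected: ["released"],
};

class CallTracker {
  constructor() { this.state = "idle"; }
  advance(to) {
    if (!(nextStates[this.state] || []).includes(to)) {
      throw new Error(`illegal transition ${this.state} -> ${to}`);
    }
    this.state = to;
  }
}

// Walk one call through its full, legal lifecycle.
const call = new CallTracker();
["initiated", "ringing", "connected", "mediaConnected", "released"]
  .forEach((s) => call.advance(s));
```

Modeling the transitions this way makes it obvious, for instance, that media can only arrive after the call has connected.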

The WebRTC application can call any station type on the Avaya system. This includes H.323, SIP, analog, or digital telephones. These telephones can be standard endpoints or contact center agents. Additionally, the WebRTC application can use the Avaya system to make outbound calls on any trunk type.

The security token supplied by the web server can be used to restrict the types of endpoints that the application can call. For example, it may allow calls to contact center agents, but not outbound trunk calls.

Pay attention to the ContextID variable. When the WebRTC Snap-in is used with the Avaya Context Store Snap-in, caller and web page information can be passed with the call. This allows a contact center agent to know who is calling and from what page — i.e. the context of the conversation. This extension to WebRTC would be invaluable to contact centers that web enable their inbound communications.
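Conceptually, the Context Store behaves like a shared key-value map: the web page writes context under an ID, the call carries only the ID, and the agent desktop reads the context back. A toy sketch with invented names (not the Avaya Context Store API):

```javascript
// Toy context store: the web app saves caller context under an ID,
// the call carries only that ID, and the agent side looks it up.
const contextStore = new Map();

function saveContext(contextId, data) {
  contextStore.set(contextId, data);
}

function lookupContext(contextId) {
  return contextStore.get(contextId) || null;
}

// The page the customer was browsing is saved before the call...
saveContext("ctx-1001", {
  caller: "jane@example.com",
  page: "/billing/dispute",
});
// ...so the agent answering knows who is calling and from where.
console.log(lookupContext("ctx-1001").page); // /billing/dispute
```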

In terms of capacity, Avaya states that the Snap-in supports 1800 simultaneous calls at a rate of 28,000 BHCC (Busy Hour Call Completions). The maximum requires a Collaboration Environment server, one Avaya SBC, and eight Avaya Media Servers.
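Those two figures are mutually consistent. By Little's law (concurrency = arrival rate × average duration), 1,800 concurrent calls at 28,000 completions per hour implies an average call of a little under four minutes:

```javascript
// Little's law: concurrency = rate * average duration.
// Check that Avaya's two capacity figures imply a plausible
// average call length.
const concurrentCalls = 1800;
const callsPerHour = 28000;
const avgCallSeconds = (concurrentCalls / callsPerHour) * 3600;
console.log(Math.round(avgCallSeconds)); // 231 seconds
```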

In future articles, I will expand on the Context Store Snap-in along with Avaya’s Real-Time Speech application. Prepare to be amazed.

At this point in time, the API and WebRTC Snap-in support only G.711 audio. Additional codecs (e.g., Opus) and video will be added at a later date.

That’s all for now

This was meant to be an introduction, so I will stop here. I hope this helps you understand the Avaya WebRTC solution from a high level, as well as get a feel for how the API and Snap-in simplify the job of writing a WebRTC application. It's important to realize that Avaya hasn't changed the underlying HTML 5 code – they've simply made it easier to use. And for folks like me who can use all the help they can get, easy is a good thing.

How to Give Your Business Apps Better People Skills


Silos. Every businessperson claims to hate them, yet most businesses continue to operate in them, even in this communications-rich age.

One of the biggest reasons? The applications that we use for work haven’t caught up to the collaboration technology available all around us.

“You can’t have an agile business without agile IT, and you can’t have agile IT until your communications infrastructure is agile,” said Zeus Kerravala, the well-known ex-Yankee Group analyst, during a webinar last week.

The experts on the Avaya-sponsored panel (besides Kerravala: TMCnet publisher and editor-in-chief Rich Tehrani; Dr. Hari Gunasingham, CEO of Singapore business software developer Eutech; and Gary E. Barnett, Avaya senior vice president for collaboration) agreed that most businesspeople understand how non-communication-enabled legacy apps create and exacerbate silos.

(Listen to a replay of the webinar or download the entire slide deck below.)

Indeed, better collaboration capabilities are their second-highest priority, behind analytics and ahead of crowd-pleasers such as mobile and cloud, according to Ventana Research.

Indeed, they’re already dreaming about the collaboration capabilities they’d embed in their next generation of apps.

The struggle is that for most developers, communications remains a complex specialty field that they don’t know very well. Bringing these features into their apps would require a huge investment in time and/or money.

“You can’t expect most developers to understand all of the nuances around telephony and communications,” said Kerravala.

Barnett compares the situation to the late 1990s, when Web developers accustomed to building lightweight HTML sites initially struggled to build rich retail and B2B sites that tapped databases and other back-end data sources. The arrival of Web application middleware such as WebLogic and WebSphere greatly simplified things for Web developers.

Similarly, what’s needed today is a comprehensive middleware platform – not a set of individual APIs – that makes it easy for non-communications experts to embed communications features into their apps.

Avaya Aura Collaboration Environment, which was launched officially last week, is our attempt to fill this gap during a time of great demand. (Read InformationWeek’s take here).

Avaya already has a bunch of leading ISVs, including Esna Technologies, UserEvents Inc. and now Eutech using Collaboration Environment to accelerate their dev time.

Eutech recently built a mobile app for Middle East luxury retailer Paris Galleries, embedded with voice and video conferencing features. Eutech’s team was able to do this despite, according to Gunasingham, “having zero knowledge of collaboration from a Unified Communications (UC) point of view.”

Eutech was able to build the app in slightly more than a week, compared to the months Gunasingham figures his team would have needed without Collaboration Environment.

That benefits the final end user, Paris Galleries, and its salespeople. Armed with mobile devices, they can now quickly call upon remote cosmetics and other experts when customers ask for them.

“Customers want to be pampered,” said Gunasingham. “If you want your customers to spend a few thousand dollars on impulse, it’s very important that their experience be excellent.”

Collaboration Environment is compatible with the Eclipse programming environment. “We very purposely chose Eclipse because we knew every app developer knows it,” said Barnett. 


CE also comes with Collaboratory – a cloud-based area where developers can quickly build and test apps. “Developers don’t need to build their own on-site lab; with Collaboratory, they can be up and running within a day,” Barnett said.

Collaboratory “de-risks things for developers,” agrees Kerravala, who says some Avaya developer partners he has interviewed credit CE with cutting their development time from half a year to a few days.

Eutech’s Gunasingham concurs. CE “had a real benefit for us,” he said. “Without CE, we couldn’t have gotten into this field at all.”