Clumsy Strangers and Attacking Insects: My Two Hazardous Months Wearing Google Glass

Cradling my bike, right foot on the curb, left hand on the handlebar, I stood and waited for Glass to give me directions to my destination. As I stared into the rectangular prism above my eyebrow, I felt a tap on my right shoulder. Before I could turn, a voice asked, “Are those the new Google Goggles?” “Google Glass, actually!” I responded. 

(Note: this guest blog is written for Avaya Connected by Carlos Monterrey, a San Jose, Calif.-based writer, who also supplied the photos.)

I’ve been using Google Glass – no plural, thank you very much – for over two months now. I was one of the lucky few who secured a pair through Google’s Glass Explorer Program for developers and early adopters. 

Carlos, about to be accosted by a friendly stranger over his choice in eyewear.

Like most explorers, I had visions of grandeur: hopes of technological luxury and exciting innovation. The reality is unfortunately more mundane. Wearing my charcoal-colored Glass in public has made me more self-conscious than I expected, especially in public restrooms, gyms, and other peeper-sensitive settings.

Being an early adopter of wearable technology also comes with a peculiar civic duty. When you own globally publicized, bleeding-edge gadgetry, the burden to share, teach and entertain is all too real, with friends and family as well as the ever-looming friendly stranger.

This scenario has already played itself out more than a dozen times, and it goes something like this:

1) They ask about Glass.

2) They ask if they can wear it.

3) They talk loudly in an attempt to make it do something cool. “OK, Glass…umm, ohmigod, is this like Siri?”

4) They clumsily swipe the touchpad, causing it to dial your cousin, your girlfriend and other random people in your address book.

5) They accidentally post unflattering pictures of themselves.

I get why people are so curious about its design. The nearly indestructible frame weighs next to nothing, it shoots HD-quality video, and its bone-conduction audio system is extremely effective. Most people stare out of curiosity, and all of the comments have been positive (possibly relevant note: I live in the heart of Silicon Valley).

Some spiteful non-users and cultural gatekeepers are bashing Glass as just the latest symbol of nerd-dom – like a Segway, only wearable. Wired recently ran an article titled “Guys Like This Could Kill Google Glass Before It Ever Gets Off The Ground,” showing young, savvy tech investors wearing Glass and looking “goofy” — the implication being that if they can’t look cool wearing it, neither can we. I won’t weigh in on the fashion aspect, but I will say one thing: most people are curious to try on a pair when they see one. And food for thought: I was told that taking pictures with Glass made me look exponentially less bourgeois than taking pictures with an iPad.

Looking Glassy-Eyed, and Other Hazards

My initial plan was to build an app for Glass, a venture that was more educational than entrepreneurial. That hasn’t come to fruition. The Glass Development Kit (GDK), which would give developers access to the application programming interfaces (APIs) needed to build apps for Glass, was announced at the Google I/O conference in May. Despite the release of other Glass features, the GDK still hasn’t shipped.

The Latest Symbol of Conspicuous Nerd-dom, à la a Wearable Segway?

That’s got me and other Glass owners impatient. Whenever I wear Glass, I can’t help but imagine a plethora of possible applications: x-ray vision, telekinesis, mobile home theater. Jokes aside, the included applications – hands-free pictures and video, GPS navigation, voice commands – though limited, are practical. They work quickly and elegantly, the only drawback being, for lack of a better term, looking “Glass-eyed.” This refers to the motionless stare most users have when actively using Glass. My story: I was taking pictures atop one of my favorite hiking trails. While composing my photo, I was so motionless – necessary to reduce image blur – that a fly nearly flew into my mouth.

Though the future of Glass as a social innovator is still questionable, some are turning their attention to workforce applications. Think of Square and how it eases credit card payments for very small businesses; Glass could do the same for inter-business communication. 

I think back to when I was 19, at my first part-time job at The Home Depot. Within days of getting hired, the manager called me to his office and told me I’d be working in the electrical department because the usual guy had called in sick. Naturally, I knew nothing about electrical appliances, or electricity for that matter. I spent the rest of the day avoiding questions from confused customers. Sorry, Home Depot.

What a difference it would have made if I’d had something like Glass! Imagine scanning a QR code and instantly getting everything I needed to know about fluorescent light bulbs projected directly in front of my eyes. If I were really in trouble, I could have paged for assistance. Or I could have done a quick screen-share with a co-worker in the lumber department who knew a little more about connecting ground wires to service boxes in pre-WWII houses than this teenager did.

RoboCop, and Other Industrial Uses

Law enforcement agencies have been tinkering with ideas similar to Glass for a while. Imagine image-recognition apps that instantly tell you everything you need to know about a person – height, weight, criminal background and so on – in a hands-free headset.

A person operating machinery in a factory can update supervisors in real time about potential dangers by literally relaying what he or she sees. A handyman may not need to pull out his bubble level anymore, because a projected line will tell him whether the shelf on the wall is level. Truly, we’re limited only by our imagination.

Unwilling to wait, programmers have tinkered with Glass and compiled a list of hardware functions and specifications. They include a Texas Instruments OMAP 4430 processor, an accelerometer, a gyroscope and a proximity sensor. These could enable the holographic hand-swiping made famous by Tom Cruise in the movie “Minority Report.” All told, the components are worth between $150 and $200 – low, when you consider that the list price today is $1,500. Insiders predict that Glass will retail for between $300 and $500 when it becomes widely available.

In my opinion, $500 should be the highest price — especially if Glass doesn’t expand on its current list of features. It’s to Google’s advantage to release the GDK as quickly as possible, as the creativity of developers and the apps they build will justify the concept of wearable computers – or not. Only then will we be able to judge whether wearing Glass on your head is truly better than pulling out your smartphone.

In a world where technological innovation is intertwined with seamless integration, Glass represents the future of wearable technology, or at least its early stages. It’s like Google’s self-driving cars, which are just now starting to become visible on highways across America. The day will come when people get used to the idea of wearable glass technology, further advancing the fusion of technology, culture and person. Until then, I’ll continue to be interrogated about my choice in eyewear by friendly strangers.

Related Articles:

The IoT Chronicles Part 2: Three Big Security Threats—and How to Solve Them

With projected market revenue of $1.7 trillion by 2020, the Internet of Things (IoT) stands to forever change the world as we know it. In part 1 of this series, I demystified the IoT and explored how leaders can create a vertical-driven strategy that produces positive and proactive business outcomes. Your strategy won’t get you far, however, if it doesn’t explicitly address the unique security threats that are inherent to this level of connectivity.

These kinds of threats aren’t easy to identify or mitigate, which is exactly why nearly 60% of companies say they plan to implement the IoT only eventually (i.e., once security no longer concerns them) and why nearly 30% have no plans to implement the IoT at all. That is likely to change quickly, though.

With the number of connected “things” growing, it’s expected that more hackers will feed off the ever-growing possibilities to attack, threaten and compromise business. Consider the recent IoT-driven DDoS attack on Internet performance company Dyn, which disrupted websites like PayPal, Spotify and Twitter. Dyn’s Chief Strategy Officer admitted last month that some of the traffic that attacked the company came from compromised IoT devices.

As I continue this four-part IoT crash course, I’d be remiss not to discuss security. With that in mind, here are three massive IoT security threats we’re seeing today (and how to address them):

  1. Personally owned devices:

    Research shows that about 40% of U.S. employees at large enterprises bring their own device(s) to work, and 75% of companies currently permit or plan to permit BYOD in the workplace. Today, there’s a clear need among businesses to securely connect these personally owned devices that simultaneously perform multiple functions and connect to public, private and hybrid clouds. It may be easy to secure enterprise IoT, but this gets a lot trickier when you factor in the devices employees are using on your network. Just consider the 10 million Android devices that were infected this summer with Chinese malware.

    My suggestion: implement a malware detection mechanism with enough automation to quickly detect abnormalities on employee devices and keep infections from spreading.

  2. Open APIs:

    An open API model is advantageous in that it allows developers outside of companies to easily access and use APIs to create breakthrough innovations. At the same time, however, publicly available APIs are also exposed ones. Promoting openness means anyone can write new APIs (which is a good thing), but that can cause some challenges in the market. If an organization has undocumented features of its API, for instance, or if someone is rolling out an API and doesn’t have it properly documented or controlled, hackers can potentially take advantage. At the end of the day, businesses must be cautious as to what is being exposed and documented when writing APIs.

  3. Influx of data:

    The amount of data being gathered from today’s ever-growing number of connected “things” is simply astounding. In fact, research shows that about 90% of all data in the world today was created in just the past few years (2.5 billion GB of data were produced every day in 2012 alone!). While big data has the potential to transform internal processes and the customer experience, leaders must ensure they have the right infrastructure in place to securely distribute and store the massive amount of data that flows through their organizations daily.

    My suggestion: have a solid understanding of how much data your network can handle (never overestimate your network’s capabilities) and plan to scale accordingly. Also, know where your data originates and what privacy regulations you might need to comply with, depending on the industry in which you operate. Healthcare, for example, must abide by very strict regulations. Be sure to also keep in mind the legality of where you store your data, depending on where that data comes from. Countries like Germany, for instance, have strict privacy laws that others don’t.

The One Thing to Remember

Here’s the thing business leaders must keep top of mind: although the possibilities for data compromise are growing, they need not become realities with network security solutions offered by the right provider. This doesn’t mean your security concerns aren’t valid. It simply means that, with the right technology, there’s no longer a reason to let those concerns prevent you from tapping into the immeasurable growth brought about by the IoT.

So, what’s my final suggestion? Organizations should consider a layered approach:

  • Phase I: Analyze, monitor and inspect.
  • Phase II: When classifying a device as suspect, isolate it to a different segment and perform forensic analysis.
  • Phase III:
    • Quarantine the device if known malware is detected and identified.
    • If the cause is unknown/unidentified, maintain isolation in a honeypot—a quarantine zone used to study malware—and deploy countermeasures as soon as a fix becomes available.
  • Phase IV: Once malware is clearly identified, quarantine all devices potentially infected while informing the end users and LOBs impacted.

For Phases II and III, invoke an automated, sophisticated workflow to notify the right team for just-in-time analysis.
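To make that workflow concrete, here is a minimal sketch, in JavaScript, of how such an automated triage pipeline might be wired together. Every function, segment name and threshold below is an illustrative stand-in for real network-controller and security-tooling integrations, not any particular product’s API.

// Minimal sketch of the four-phase triage workflow described above.
// All names and thresholds are illustrative stand-ins.
function inspectTraffic(device) {
  // Phase I: analyze, monitor and inspect (stubbed anomaly score).
  return device.bytesOut / (device.bytesIn + 1);
}

function handleDevice(device, knownSignatures) {
  if (inspectTraffic(device) < 5) return; // looks normal; keep monitoring

  // Phase II: suspect device; isolate it and run forensic analysis.
  moveToSegment(device, 'forensics-segment');
  notifyTeam('security-ops', device);

  if (knownSignatures.includes(device.fingerprint)) {
    // Phase III: known malware; quarantine immediately.
    moveToSegment(device, 'quarantine');
    // Phase IV: quarantine peers it talked to; inform users and LOBs.
    device.peers.forEach((peer) => moveToSegment(peer, 'quarantine'));
    notifyTeam('lob-owners', device);
  } else {
    // Phase III, unknown cause: hold it in a honeypot until a fix exists.
    moveToSegment(device, 'honeypot');
  }
}

// Stubs standing in for real network-controller and SIEM calls.
function moveToSegment(device, segment) { console.log(device.id + ' -> ' + segment); }
function notifyTeam(team, device) { console.log('notify ' + team + ' about ' + device.id); }

handleDevice({ id: 'cam-17', bytesIn: 10, bytesOut: 900, fingerprint: 'mirai-v1', peers: [] }, ['mirai-v1']);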

To properly execute on these phases, you need an automated and more secure networking foundation. The legacy client-server model is simply not suitable for this new IoT world. Whatever services your connected devices or systems provide, do whatever you can to ensure they are logically segmented on your infrastructure. This is something that can be achieved through end-to-end network segmentation.

An end-to-end network segmentation solution eliminates nodal configuration by leveraging end-to-end Virtual Services Networks (VSNs). This allows businesses to provision their networks only at specific points of service, where those services are being consumed by end users or devices. Ultimately, end-to-end segmentation transforms your network core into an automated and intelligent virtualized transport. Your network segments will be hidden from hackers, flexible for secure and authorized use, and truly isolated from one another. These core capabilities nearly guarantee network security no matter what devices your employees are using, how much data they are generating and sharing, or what APIs are being written.

Your network security strategy will never be effective if your underlying architecture isn’t what it needs to be. In my opinion, end-to-end network segmentation is the most effective way to minimize and control the inherent security risks of the IoT. And the best news is that there are end-to-end segmentation solutions proven to deliver next-generation IoT security—even for companies still leveraging aging infrastructure. The technology is possible, real and waiting to be utilized.

As we move forward with the IoT, we must ensure security is always top of mind. There is a set of best practices that organizations must implement to substantially reduce the risks associated with IoT deployment. Keep in mind that no system is immune, but understanding the risks and minimizing the potential business impact is key. In the end, the status quo will likely be a disaster for organizations endorsing the IoT at a rapid pace—changes to legacy practices and infrastructure are a must! Thankfully, technology advancements can provide the connectivity, stability and security required to enable companies to take advantage of the opportunities provided by the IoT.


We’re Opening Up (Nearly) Everything We Do. Here’s Why.

Technology wants to be open. This concept—giving people the tools and freedom to build new applications, based on open APIs—is disrupting every corner of technology.

It’s time to open up enterprise communications.

To illustrate the disruptive nature of openness, consider the iPhone. The original iPhone launched on June 29, 2007, preloaded with apps that Apple programmers built themselves—email, a notepad, a weather app, a text messaging app, a calculator, etc. Pretty dry stuff.

251 days later, Apple released a software development platform, giving people the tools to build their own apps for the iPhone.

Overnight, quirky little apps started popping up in the App Store. Today, there are more than 1.3 million iPhone apps in the market—from mega-games like Candy Crush and Angry Birds, to cultural phenomena like Instagram, Tinder and Snapchat.

It would have been impossible for Apple to develop 1.3 million apps in 6.5 years (more than 500 new apps per day), and even if it could have, those apps wouldn’t have been nearly as creative as the software that people dreamt up and built on their own.

The concept of openness isn’t isolated to mobile phones—it’s disrupted networking, personal computing, the Web, wearable devices, and many other sectors.

Enterprise communications is next, and Avaya is leading the way.

For most of our industry’s history, enterprise-grade communication products have largely been closed, proprietary systems, due to the hardware-centric nature of what we sell. In the past, if you needed a custom application built—for example, software that allowed your contact center to send customer data to the home office—the development process could be slow and expensive.

As we virtualize significant pieces of our product portfolio, opening up the technical backend of those products has become increasingly feasible. Our customers want faster software development times, they want the ability to do it themselves, and they want greater control over their enterprise communications experience.

Today, we’re excited to highlight the new Avaya Engagement Development Platform, a software development platform that gives people the tools to build their own communication apps on our infrastructure.

To make software development even easier, the Avaya Engagement Development Platform features something we’re calling Snap-ins—modular, reusable pieces of code that connect, enable or facilitate desired application outcomes.

With Snap-ins, programmers can quickly and cost-effectively select popular communication features and integrate them into business processes and functions.

Earlier this year, we saw an innovative application of this new, open development platform out of Michigan State University. The school had a problem: People sometimes got stuck in the elevators, and when they picked up the phone inside the elevator, they got connected to someone who had no idea where they were.

Using the Avaya Engagement Development Platform, a small group of programmers at MSU created an app that automatically identifies the stuck elevator’s location and floor, and sends that data simultaneously to the school’s contact center and on-call maintenance staff.

MSU’s elevator app didn’t take a year to write, either—the team built it at a weekend hackathon. Fast, iterative development cycles will become the norm for people building software on the Avaya Engagement Development Platform.
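The core routing logic of an app like that is surprisingly small. Here is a hedged sketch in JavaScript of what the fan-out might look like; the elevator directory, the notify() helper and every identifier are hypothetical stand-ins, not MSU’s actual code or the real Snap-in API.

// Hypothetical sketch of the stuck-elevator fan-out; none of these
// names come from MSU's app or the Avaya API.
const elevators = {
  '517-555-0142': { building: 'Wells Hall', elevator: 'B', floor: 3 },
  '517-555-0177': { building: 'Main Library', elevator: 'A', floor: 1 },
};

function onElevatorCall(callerNumber) {
  const location = elevators[callerNumber] || { note: 'unknown elevator phone' };
  const payload = Object.assign({ callerNumber, receivedAt: new Date().toISOString() }, location);
  // Send the same data to both destinations at once, as described above.
  notify('contact-center', payload);
  notify('on-call-maintenance', payload);
}

function notify(target, payload) {
  console.log('-> ' + target + ': ' + JSON.stringify(payload)); // stand-in for the real notification
}

onElevatorCall('517-555-0142');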

Another example comes from a manufacturing company with 160,000 employees worldwide. Collectively, those employees dialed into tens of thousands of conference calls each day—oftentimes from the road. Each call required an employee to get a calendar notification, write down the conference number and PIN, and dial in manually.

Using the Avaya Engagement Development Platform and Snap-ins, the company was able to develop a completely hands-free conferencing solution in less than 2 weeks. No more conference call numbers to remember—a single touch from the calendar notification, and you’re connected.

These are two successful use cases driven by real-world customer needs. We designed the Avaya Engagement Development Platform to give programmers the tools to build everything from simple elevator apps to sophisticated, scalable, global software solutions.

Openness is fundamentally good for the industry, our customers, and ultimately—the hundreds of millions of knowledge workers worldwide who rely on high-quality, enterprise-grade communications every day.

We’re looking forward to seeing what gets built.

An Introduction to the Avaya WebRTC Snap-In

Over the last several months, I’ve written a number of articles about WebRTC. I discussed encryption, network address translation, and signaling, and–over the course of four articles–I even walked through creating your very own WebRTC application.


In case you missed any of them, here are all my WebRTC articles to date:

WebRTC for Beginners

A WebRTC Security Primer

An Introduction to WebRTC Signaling

Understanding WebRTC Media Connections – ICE, STUN, and TURN

Writing Your First WebRTC Application Part One

Writing Your First WebRTC Application Part Two

Writing Your First WebRTC Application Part Three

Writing Your First WebRTC Application Part Four


Now that I’ve shared just about everything that I know about WebRTC, I am going to let you in on a little secret. You don’t need to know any of it. All that stuff about ICE candidates? You can forget it. Did you finally figure out how to use the RTCPeerConnection object? You don’t need to know that, either.

Okay, that’s not really true. Understanding ICE candidates and the native HTML 5 WebRTC objects and their methods isn’t a bad thing. It’s just that bare-metal WebRTC programming is pretty sophisticated stuff, and a number of people have recognized that the interfaces, function calls, and call flows are far more complicated than most developers want to deal with – me included.

That’s why companies like Avaya are creating wrapper APIs (application programming interfaces) that hide the complexities of WebRTC. This allows applications to be written faster and with little understanding of the nitty-gritty that happens underneath the covers.

In addition to simplifying the call flows and objects, these wrappers can also add functionality not available in out-of-the-box WebRTC. That’s not to say that a good developer couldn’t write these extensions on his or her own. It’s just that they come “for free” and the programmer can concentrate on the application’s business logic and leave the nuts and bolts to someone else.
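To appreciate what those wrappers are hiding, here is a taste of the bare-metal browser API. This is standard HTML 5 WebRTC (not Avaya code), trimmed to just the caller-side setup, and it still leaves out the signaling channel you would have to build yourself; sendToSignalingServer() below is a placeholder for that missing piece.

// Bare-metal WebRTC, caller side only; standard browser API, no wrapper.
const pc = new RTCPeerConnection({
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }],
});

// Every ICE candidate must be shipped to the far end over a signaling
// channel that WebRTC itself does not provide.
pc.onicecandidate = (event) => {
  if (event.candidate) sendToSignalingServer({ candidate: event.candidate });
};

async function startCall() {
  const stream = await navigator.mediaDevices.getUserMedia({ audio: true });
  stream.getTracks().forEach((track) => pc.addTrack(track, stream));
  const offer = await pc.createOffer();
  await pc.setLocalDescription(offer);
  sendToSignalingServer({ sdp: pc.localDescription });
}

// Placeholder: a real application opens a WebSocket (or similar) here.
function sendToSignalingServer(message) {
  console.log('signal:', message);
}

startCall();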

The Avaya WebRTC Solution

Today, I would like to introduce you to the WebRTC support offered by Avaya’s Collaboration Environment 3.0. If you are not familiar with Collaboration Environment, I highly recommend that you take a look at this article before proceeding.

An Introduction to Avaya Collaboration Environment 3.0

As you know, a WebRTC application can be divided into three parts. There is the application that runs in a web browser. This part will be written in HTML and JavaScript. There is the web server that delivers the application to the browser. Finally, there is the signaling server that relays information between WebRTC clients.

None of that changes with the Avaya approach. You still need a web server, WebRTC application, and a signaling server. However, unlike a traditional WebRTC application, an Avaya-based application will not call directly into the HTML 5 extensions. Instead, it will use a collection of Avaya objects that invoke those extensions on your behalf. This abstraction greatly simplifies what a programmer needs to know and do.

For instance, the Avaya API handles the work involved in finding and attaching to a STUN or TURN server. It also eliminates the need to create and manage WebSocket connections. This allows the programmer to focus on the business aspects of the application and leave the plumbing to Avaya. It also leads to fewer bugs, since the hard stuff has already been done by really smart people who live and breathe WebRTC.

In addition to making the programmer’s job a lot easier, the Avaya approach adds a layer of security that does not exist in native WebRTC. Specifically, Avaya supports the concept of a security token that assures that only authenticated/authorized users can create and manage WebRTC calls into your communications system. This prevents hackers from perpetrating toll fraud or launching denial of service attacks.
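The article doesn’t describe how that token is produced, but the general pattern is familiar: the web server authenticates the user, mints a short-lived signed token, and the page later assigns it to client.securityToken. Here is a generic Node.js sketch of that idea; the HMAC scheme, the field names and the mintToken() helper are my own assumptions for illustration, not Avaya’s actual implementation.

// Generic illustration of minting a short-lived signed token server-side.
// The scheme and every field name here are assumptions, not Avaya's format.
const crypto = require('crypto');
const SECRET = process.env.TOKEN_SECRET || 'replace-me';

function mintToken(username, allowedDestinations, ttlSeconds) {
  const claims = {
    user: username,
    dest: allowedDestinations, // e.g. contact-center queues only, no trunks
    exp: Math.floor(Date.now() / 1000) + ttlSeconds,
  };
  const body = Buffer.from(JSON.stringify(claims)).toString('base64url');
  const sig = crypto.createHmac('sha256', SECRET).update(body).digest('base64url');
  return body + '.' + sig;
}

// The page would embed this value and assign it to client.securityToken.
console.log(mintToken('5551234', ['contact-center'], 300));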

So, how does it work? Let’s start by looking at a diagram of the architecture:

Avaya WebRTC Snap-in architecture

There are a number of things that should look familiar. First, there is the client. This is a WebRTC-compliant web browser such as Chrome or Firefox. There is also the WebRTC application, which is retrieved from a web server. This application consists of HTML and JavaScript and is sent to the web browser via HTTP. The last familiar piece might be the reverse proxy. A reverse proxy sits between a web browser and a web server and assists in tasks such as firewall traversal, content acceleration, and data aggregation.

If you are an Avaya guy like me, you will also recognize standard Aura components such as Session Manager, Avaya Media Server, Communication Manager, Session Border Controller, and station endpoints.

Less familiar will be the Collaboration Environment server(s) and specifically the WebRTC Snap-in. Think of the Snap-in as the WebRTC signaling server. It accepts HTTP formatted commands from the application and converts them to SIP. These SIP messages will then use Communication Manager to establish and manage voice calls.

The Avaya Session Border Controller provides a secure conduit for the WebRTC application into the enterprise. It relays HTTP from the application to the WebRTC Snap-in and performs STUN and TURN functionality. If desired, an Avaya SBC can also act as a reverse proxy.

The Avaya Media Server terminates ICE, STUN, TURN, and DTLS. It also translates WebRTC media into a SIP media stream.

The Avaya WebRTC Library

Not shown in the above diagram is the Avaya WebRTC JavaScript library. This library contains the API objects that an application will invoke, as well as the code that interfaces with the underlying HTML 5 WebRTC layer. This library can be part of the application itself or, better yet, downloaded dynamically from the web server when the page is loaded. Downloading ensures that the latest and greatest version of the API is always used.

The Avaya WebRTC library is what the developer will see when he or she builds an application. It consists of four sections:

  • Data declarations. An Avaya WebRTC application uses two variables: client and theCall. The client variable describes the client and its connection data, and theCall represents the WebRTC call.
  • Callbacks. These communicate between the Avaya API and the WebRTC application. Most of the real work will occur in these callbacks.
  • Configuration code. The client object is configured to describe the client endpoint.
  • Connection code. The code that connects the application to the WebRTC Snap-in.

For example, the following is an application initiation sequence:

var client;
var theCall;

client = new avayaWebRTC.Client();
client.onConnectedCB = connectedCB;
client.onDisconnectedCB = disconnectedCB;
client.onNotificationCB = notificationCB;
client.onCallConnectedCB = callConnectedCB;
client.onCallInitiatedCB = callInitiatedCB;
client.onCallRingingCB = callRingingCB;
client.onCallRemoteDisconnectedCB = callRemoteDisconnectedCB;
client.onCallErrorCB = callErrorCB;
client.onRemoteMediaConnectedCB = remoteMediaConnectedCB;

client.webRTCHTTPAddress = serverURL; /* Collaboration Environment server */
client.securityToken = token;
client.username = <caller’s phone number>;
client.domain = <caller’s domain>; /* property name assumed; the original listed username twice */
client.connect();

Once onConnectedCB has been invoked by the API, the client can now make a call. Code to perform that will look like this:

theCall = new avayaWebRTC.Call(client);
theCall.ringingFileUrl = <optional wav file played after the call is launched>;
theCall.destinationAddress = <called number>; /* the called number can be restricted */
theCall.ContextID = <Context ID from the Context Store>; /* think of this as caller-attached data */
theCall.initiate();

At this point, a call has been launched and the application will receive a series of callbacks as the call progresses. For example, onCallRingingCB will be invoked when the far end is ringing, and onCallConnectedCB will be invoked when the call has been answered. The onRemoteMediaConnectedCB callback is invoked when media is received from the far end.

There are additional methods on theCall to manage and eventually release the call.
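To round out the picture, here is a hedged sketch of what a few of those callbacks might look like in a simple click-to-call page. The DOM wiring and status text are my own, and the release call is left as a comment because the article doesn’t name theCall’s remaining methods.

// Illustrative callback bodies for a click-to-call page; the element IDs
// and status text are assumptions, and the release method is not named here.
function connectedCB() {
  document.getElementById('callButton').disabled = false; // safe to dial now
}

function callRingingCB() {
  setStatus('Ringing...');
}

function callConnectedCB() {
  setStatus('Connected');
}

function remoteMediaConnectedCB() {
  setStatus('Media flowing'); // far-end audio has arrived
}

function callRemoteDisconnectedCB() {
  setStatus('Far end hung up');
  // release the call here via theCall's release method (name not given above)
}

function callErrorCB(error) {
  setStatus('Call failed: ' + error);
}

function setStatus(text) {
  document.getElementById('status').textContent = text;
}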

The WebRTC application can call any station type on the Avaya system. This includes H.323, SIP, analog, or digital telephones. These telephones can be standard endpoints or contact center agents. Additionally, the WebRTC application can use the Avaya system to make outbound calls on any trunk type.

The security token supplied by the web server can be used to restrict the types of endpoints that the application can call. For example, it may allow calls to contact center agents, but not outbound trunk calls.

Pay attention to the ContextID variable. When the WebRTC Snap-in is used with the Avaya Context Store Snap-in, caller and web page information can be passed with the call. This allows a contact center agent to know who is calling and from what page — i.e., the context of the conversation. This extension to WebRTC would be invaluable to contact centers that web-enable their inbound communications.

In terms of capacity, Avaya states that the Snap-in supports 1,800 simultaneous calls at a rate of 28,000 BHCC (Busy Hour Call Completions). Reaching that maximum requires a Collaboration Environment server, one Avaya SBC, and eight Avaya Media Servers.
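For a sense of scale, 28,000 completions per hour works out to just under eight call setups per second; holding 1,800 calls in flight at that arrival rate implies an average call length of roughly 1,800 ÷ 7.8 ≈ 230 seconds, a little under four minutes.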

In future articles, I will expand on the Context Store Snap-in along with Avaya’s Real-Time Speech application. Prepare to be amazed.


At this point in time, the API and WebRTC Snap-in support only G.711 audio. Additional codecs (e.g., Opus) and video will be added at a later date.


That’s all for now

This was meant to be an introduction, so I will stop here. I hope it helps you understand the Avaya WebRTC solution at a high level, as well as get a feel for how the API and Snap-in simplify the job of writing a WebRTC application. It’s important to realize that Avaya hasn’t changed the underlying HTML 5 code – they’ve simply made it easier to use. And for folks like me who can use all the help they can get, easy is a good thing.