Avoiding Software Outages: Three Myths That Could Save Your Business

You’re working on your computer when a pop-up suggests that you reboot to install the latest software update. Naturally, you can’t be bothered. You delay, sometimes for hours, usually for days, and most of the time you see no negative consequences.

As consumers, we often view software updates as pesky interruptions, even nuisances. Unfortunately, businesses often apply that same mentality to their software upgrades.

At the enterprise level, outdated, unpatched software can have disastrous effects, up to and including communications outages.

According to an Avaya internal analysis, software bugs are the fourth-leading cause of communications outages, and approximately 69 percent of those outages could have been prevented had leading practices been followed.

Myth: “If It Ain’t Broke, Don’t Fix It”

Though software vendors regularly release fixes and upgrades, many companies aren’t eager to apply them. This antiquated “if it ain’t broke, don’t fix it” strategy breaks down the moment a company suffers an outage that an already-available fix, one it chose to postpone, would have prevented.

Delaying or ignoring software upgrades at the enterprise level can have major financial repercussions. According to the Aberdeen Group, the average outage costs $2,728 per minute; at that rate, even a 30-minute outage approaches $82,000.

Most software-savvy customers adopt a regular, proactive patch management strategy that eliminates known issues. These businesses typically perform less invasive upgrades quarterly and larger upgrades annually.

Think of it like this: if it’s a “pizza box” upgrade (delivery in 30 minutes or less!), do it as soon as possible; if you’re looking at a large upgrade, plan well and far in advance to minimize disruption. Once you’ve finished your latest large upgrade, it’s not too soon to start planning for your next.
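To make that cadence concrete, here’s a minimal sketch in Python of how a team might track whether its quarterly patch and annual upgrade windows are overdue. The system names, dates and thresholds are illustrative assumptions, not Avaya-specific values:

    from datetime import date

    # Illustrative cadence targets: smaller "pizza box" patches quarterly,
    # larger platform upgrades annually (adjust to your own policy).
    MINOR_INTERVAL_DAYS = 90
    MAJOR_INTERVAL_DAYS = 365

    # Hypothetical inventory: system name -> (last minor patch, last major upgrade)
    systems = {
        "call-manager-prod": (date(2024, 11, 4), date(2024, 3, 18)),
        "voicemail-cluster": (date(2024, 5, 2), date(2023, 9, 30)),
    }

    def days_overdue(last_done, interval_days, today):
        """How many days past the target window a system is (0 if on track)."""
        return max(0, (today - last_done).days - interval_days)

    today = date.today()
    for name, (last_minor, last_major) in systems.items():
        minor_late = days_overdue(last_minor, MINOR_INTERVAL_DAYS, today)
        major_late = days_overdue(last_major, MAJOR_INTERVAL_DAYS, today)
        if minor_late or major_late:
            print(f"{name}: minor patch {minor_late} days overdue, "
                  f"major upgrade {major_late} days overdue")

Swap in your real inventory and policy; the point is simply that “staying current” becomes something you can measure rather than a feeling.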

Myth: “Hardware and Software Are Two Different Worlds”

Staying current to avoid software bugs is about more than just updating your software itself. Sometimes, it requires taking a look at your hardware too.

Make sure to review vendor lifecycle information to determine End of Manufacturer Support (EoMS) dates, and proactively upgrade your hardware so that your software can stay up to date. In certain cases, your hardware may still be supported while its corresponding software is not. If there is no further software development or patching for a product, plan your transition accordingly.

Most IT departments should aim to refresh their infrastructure at least every four to five years.
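As a rough illustration, that lifecycle review can be as simple as comparing install dates and vendor EoMS dates against a refresh policy. The inventory, dates and five-year threshold below are assumptions for the sake of the example:

    from datetime import date

    REFRESH_AFTER_YEARS = 5  # assumed policy: refresh every four to five years

    # Hypothetical hardware inventory: name -> (install date, vendor EoMS date)
    inventory = {
        "branch-gateway-01": (date(2018, 6, 1), date(2025, 12, 31)),
        "core-switch-02": (date(2021, 2, 15), date(2030, 6, 30)),
    }

    today = date.today()
    for name, (installed, eoms) in inventory.items():
        age_years = (today - installed).days / 365.25
        reasons = []
        if age_years >= REFRESH_AFTER_YEARS:
            reasons.append(f"{age_years:.1f} years old")
        if eoms <= today:
            reasons.append("past End of Manufacturer Support")
        if reasons:
            print(f"{name}: plan a hardware refresh ({', '.join(reasons)})")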

Myth: “Blame Avaya.”

Who says our software quality has improved? Our customers.

In anonymous Walker Feedback surveys, our clients were asked: “How would you rate Avaya’s product quality?”

In 2012, we were in the 60th percentile for product quality. Since then, we’ve invested heavily in automated regression testing, static analysis, code reviews and test code coverage, and we’ve improved greatly.

Now, we’re in the 90th percentile for product quality!

And, thanks in large part to those quality improvements, our Net Promoter Score (NPS), a customer loyalty measurement established by Bain & Company, is in the best-in-class range. In Q4FY15, our NPS was 56, an all-time high for Avaya.

Your direct feedback indicates that we’ve made the right investments to deliver innovation coupled with quality, and we’re not done yet. We’re consistently raising the bar! Now, the ball’s in your court to take proactive support measures to help avoid costly communications outages.

Related Articles:

18 Ways to Help Your Business Limit Disaster Damage

Wind speeds approaching 200 mph. Storm labeled as “one of the worst on record.”

Have you read headlines like this lately? Realizing your business is in the path of imminent danger is a very stressful situation. Rest assured knowing that Avaya will be there for you well before the impending disaster and by your side until all danger has passed and your business is sailing smoothly again.

Helping IT leaders avoid and recover from outages is my passion. Given recent global weather events, I thought this might be a particularly good time to focus on ways to avoid the unnecessary costs of the next storm.

Be Proactive

The best way to handle an extreme weather event is to be prepared: build a plan and leverage a corresponding preparedness checklist. As we all know, it isn’t a matter of “if,” but unfortunately “when,” a disaster will strike. As an industry professional, you are of course aware of best practices, such as testing failovers, geographic redundancy and backup validity, that should be part of your standard operating model.

As a storm approaches, continually communicate with stakeholders, including customers, suppliers, employees and government officials. Begin by working with your sales and service staff to identify clients that are in harm’s way, and proactively reach out by sending preparedness information in advance.

When possible, ensure that replacement parts are readily available and can be rapidly redeployed to affected areas. Be mindful not to depend on local team members, as they too could be facing challenges brought on by the weather event and might need to be available for their families.

When assembling a team, pull from across the organization and leverage readouts at defined intervals to see how the team is impacting the business. The key to being prepared is focusing on re-establishing the business’s ability to generate outcomes as rapidly as possible.

Start with the Checklist

To begin planning in advance, prepare a checklist based on the expected impact of the anticipated event on the facility, people, customers, suppliers and IT infrastructure. Make sure that the checklist includes the alignment of mitigation actions, as well as backups, closures, customer communications and team member communications.

Do you have the right resources to continue your business? Where will they be located? When it comes to the checklist, who are your primary and secondary point people to help recover and restore the business? Consider who will perform damage assessments, manage re-entry of facilities and ensure security is handled. If necessary, consider partnering with local agencies: police, fire, government agencies or local relief groups like the Red Cross.

Brace Your Network for the Storm

When the storm is imminent, take these six steps, called out on the Avaya Support website, to prepare your communications network:

No. 6 Save translations before the emergency event impacts the site. This will ensure that recent changes are not lost and will speed restoration in the event of damage to the system.

No. 5 Secure backup media so that translations aren’t lost or damaged, which would delay restoration of your service. Take a copy of backups and any other information off site (a minimal automation sketch follows this list).

No. 4 Print and store a current configuration list for your solutions. If a new system is necessary, this simple precaution will minimize delays.

No. 3 Consider powering your system down before the emergency event impacts the site. Electrical power surges both before and after an emergency event can pose the greatest threat to your system.

No. 2 Contemplate moving switches and applications if the site is located in an area that may be exposed to damage from the emergency.

No. 1 Review safety procedures with all employees prior to the emergency event, if possible, and make certain to have an updated contact list to keep in touch.
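Steps No. 6 through No. 4 lend themselves to a little automation. Here’s a minimal sketch that assumes your platform already writes its backups (translations, configuration exports) to a local directory and simply copies them to a timestamped off-site location; the paths are placeholders, not Avaya defaults:

    import shutil
    from datetime import datetime
    from pathlib import Path

    # Placeholder paths: point these at wherever your platform writes its
    # backups (translations, configuration exports) and at an off-site target.
    LOCAL_BACKUP_DIR = Path("/var/backups/comms-platform")
    OFFSITE_DIR = Path("/mnt/offsite/storm-prep")

    def copy_backups_offsite():
        """Copy the latest local backups into a timestamped off-site folder."""
        stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
        target = OFFSITE_DIR / f"pre-storm-{stamp}"
        shutil.copytree(LOCAL_BACKUP_DIR, target)
        return target

    if __name__ == "__main__":
        print(f"Backups copied to {copy_backups_offsite()}; remember to print "
              "and store a current configuration list as well.")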

Survey the Impact

When the storm passes, it is time to assess which systems are still working, as well as possible damage to utilities, hardware, building structure, personnel, records and data, and network capabilities.

Having a post-event checklist is helpful in assessing the response to an emergency and continuing to protect your IT investment. Leverage this list to evaluate how you performed and to identify opportunities for the organization to grow:

  • How effective was preplanning, particularly roles, responsibilities, training and backups?
  • How responsive were the teams? Did they all do what they were supposed to do?
  • Did the team send out the necessary notifications?
  • How effective was communication?
  • How long did recovery take?
  • How were employees assisted? Were their needs met?
  • How was the business impacted?
  • How long did facilities restoration take, and at what expense?
  • What was the customer perception?
  • How did the overall company do in meeting the needs of the emergency location team?
  • How well did the partnership with local government and agencies function during and after the event?
  • Are there other outstanding issues or action items to close that would enhance the response to future emergencies?