April 13, 2026

Why Avaya Is Investing in Open AI Foundations

Roger Wallman

Director, Product Marketing, Avaya

If you had told enterprise architects in late 2024 that an open protocol from a single AI company would become one of the fastest-moving standards in enterprise AI within 16 months, most would have been skeptical. Technology standards typically require years of committee negotiations and political compromises. The Model Context Protocol (MCP) skipped all of that.

In July 2025, Avaya announced that the Infinity platform would support MCP natively, becoming the first major enterprise contact center vendor to formally commit to the standard. CEO Patrick Dennis described it as an “MCP moonshot” that would become core to the product roadmap. But Avaya did not stop at adoption; it went on to join the foundations shaping the standard’s future.

Later that year, the Linux Foundation launched the Agentic AI Foundation. By March 2026, the organization had grown to nearly 150 members and was putting more focus on enterprise readiness. Avaya’s memberships in the Linux Foundation and the Agentic AI Foundation build on that earlier decision and make the company’s position clear: open standards are becoming a bigger part of how enterprise AI gets built and used.

The Linux Foundation and the Broader Open-Source Ecosystem

Open software depends on more than code. It depends on governance, continuity, and a structure that lets important standards grow beyond a single company’s roadmap.

That is what makes the Linux Foundation relevant here. It sits within a larger open-source ecosystem where technical standards can develop with broader participation and longer staying power. For enterprises, that matters because flexibility is tied to the health of the ecosystem around a technology, not just the feature set of a product.

MCP shows how quickly that can matter. Anthropic introduced it in November 2024 as an open-source specification built on JSON-RPC 2.0. By March 2025, OpenAI had formally adopted MCP across its products, with Google DeepMind following shortly after. Microsoft turned Dataverse into a native MCP server. Salesforce integrated MCP into Agentforce 3. Google added native support across Cloud Run, Cloud SQL, Spanner, and Google Workspace.

The defining moment came in December 2025, when Anthropic donated MCP to the newly established Agentic AI Foundation under the Linux Foundation. That shifted MCP into a more formal, vendor-neutral governance environment. For Avaya, membership in the Linux Foundation reflects a broader commitment to open software as an architectural principle.

The Agentic AI Foundation and the Next Phase of Open Standards

That broader commitment becomes more specific in the Agentic AI Foundation (AAIF), the Linux Foundation body providing vendor-neutral governance for MCP’s evolution, where the focus shifts to agentic AI and the standards shaping how it will operate in enterprise environments. As a member, Avaya’s engineers can participate in working groups, community events, and committee nominations tied to the protocol’s development.

Avaya’s membership puts the company inside the MCP evolution process. It gives Avaya a way to bring customer experience requirements — including governance, security, and compliance — into the discussions shaping how this open standard evolves.

How Open Standards Show Up in Avaya Infinity

  • Complete model agnosticism. Avaya’s approach does not lock customers into a single AI vendor or model ecosystem. Enterprises can use models from providers such as Google, Anthropic, OpenAI, or specialized open-source options, and adapt over time without breaking the MCP-based integrations underneath.
  • Tandem care. Avaya’s tandem care operating model treats AI as a co-pilot for the human agent, not a replacement. MCP helps the platform pull contextual history, execute lookups, and surface actionable information during live interactions so the human agent can work with better timing and better context.
  • Dual-role architecture. Avaya Infinity is designed to operate as both an MCP server and an MCP client. It can expose services such as agent status, call summaries, and routing logic to external AI systems, while also consuming services from external MCP-compliant systems. That supports bidirectional orchestration across a broader AI ecosystem.
  • Enterprise-grade security through Databricks. Avaya’s strategic partnership with Databricks addresses the governance gap that makes MCP risky for many enterprise deployments. By using Databricks to manage the underlying data architecture and serve as the secure data lake, Avaya delivers fine-grained access control through Unity Catalog, strict tenant-aware data segregation for multi-customer environments, immutable audit logging for all AI interactions, and seamless integration across both structured and unstructured data sources.
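The dual-role idea above can be sketched with the raw JSON-RPC 2.0 messages that MCP uses on the wire. This is an illustrative sketch, not Avaya code: the tool names (`get_agent_status`, `book_followup`) and their payloads are hypothetical, and a real deployment would use an MCP SDK rather than hand-built dicts.

```python
import json

# MCP messages are JSON-RPC 2.0. A platform acting as an MCP *server*
# answers tools/call requests like this one from an external AI client.
# The tool name "get_agent_status" is hypothetical, for illustration only.
incoming_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {"name": "get_agent_status", "arguments": {"agent_id": "a-42"}},
}

def handle_tools_call(req: dict) -> dict:
    """Server role: dispatch a tools/call request to a local handler."""
    name = req["params"]["name"]
    args = req["params"]["arguments"]
    handlers = {
        "get_agent_status": lambda a: {"agent_id": a["agent_id"], "state": "available"},
    }
    result = handlers[name](args)
    # MCP tool results carry a list of typed content blocks.
    return {
        "jsonrpc": "2.0",
        "id": req["id"],
        "result": {"content": [{"type": "text", "text": json.dumps(result)}]},
    }

# Client role: the same platform can *issue* tools/call requests to an
# external MCP server (e.g. a scheduling system) using the same envelope.
outgoing_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {"name": "book_followup", "arguments": {"patient_id": "p-9", "within_hours": 48}},
}

response = handle_tools_call(incoming_request)
```

Because both roles share one message shape, "bidirectional orchestration" amounts to speaking the same envelope in both directions rather than maintaining two integration styles.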

What Open Architecture Looks Like in Practice

The value of open, connected AI becomes easier to see in a real interaction.

A support call in a fragmented environment

It is 2:14 PM on a Tuesday. A patient calls a regional health system’s contact center. She is a 67-year-old woman who had knee replacement surgery 11 days ago, was discharged from the hospital five days ago, and is now calling because she is worried about persistent swelling and cannot reach her surgeon’s office.

In the traditional model, this call starts badly and gets worse. The agent pulls up the patient’s account but sees only the billing record. The clinical history is in a different system. The post-surgical care plan is in a third. The agent asks the patient to describe her situation from scratch.

The patient, who is anxious and in pain, has to explain her surgery, her discharge date, and her medications for the third time this week. The agent, doing her best with the tools available, reads scripted triage questions from a protocol binder while toggling between four screens.

The same call on an MCP-enabled platform

Now consider the same call on an MCP-enabled platform running the tandem care model.

Before the agent even says hello, the platform’s AI has already orchestrated real-time data retrieval through MCP. It has connected to the EHR system through a FHIR-compliant MCP server and pulled the patient’s surgical history, discharge summary, and prescribed medication list. It has queried the scheduling system and identified that the patient’s follow-up appointment was canceled because of a scheduling conflict. It has checked the post-surgical care protocol and flagged that day-11 swelling may warrant clinical review depending on severity.
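The pre-call orchestration described above can be sketched as a set of concurrent MCP tool calls. This is a simplified illustration with stubbed connectors; the tool names, returned fields, and `assemble_context` helper are all hypothetical, standing in for real tools/call requests to separate EHR, scheduling, and protocol MCP servers.

```python
from concurrent.futures import ThreadPoolExecutor

# Stubbed MCP tool calls. In a real deployment each entry would be a
# tools/call request to a distinct MCP server (EHR, scheduling, protocols).
def call_tool(tool: str, args: dict) -> dict:
    stubs = {
        "ehr.get_patient_summary": {"surgery": "knee replacement", "post_op_day": 11},
        "scheduling.get_followup": {"status": "canceled", "reason": "conflict"},
        "protocols.check": {"flag": "day-11 swelling may warrant clinical review"},
    }
    return stubs[tool]

def assemble_context(patient_id: str) -> dict:
    calls = [
        ("ehr.get_patient_summary", {"patient_id": patient_id}),
        ("scheduling.get_followup", {"patient_id": patient_id}),
        ("protocols.check", {"patient_id": patient_id, "post_op_day": 11}),
    ]
    # Fan the lookups out in parallel so the context is ready before "hello".
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda c: call_tool(*c), calls))
    return dict(zip(("clinical", "scheduling", "protocol"), results))

context = assemble_context("p-9")
```

The point of the sketch is the shape of the work: three independent systems queried through one protocol, merged into a single context object for the agent's screen.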

The agent sees all of this in a single pane. She greets the patient by name and says, “I can see you had your knee replacement on the 8th and were discharged on the 14th. I also see your follow-up was rescheduled. Let me help you with the swelling concern and get that appointment back on the books.”

The patient exhales. She does not have to explain anything. The AI, acting as an intelligent co-pilot, has already drafted a severity-assessment checklist based on the patient’s procedure and recovery timeline. The agent walks through the questions naturally, using a protocol tailored to this case rather than a generic script. Based on the responses, the AI surfaces a recommendation: schedule an urgent follow-up within 48 hours and flag the case for the orthopedic team’s review.

The agent confirms the recommendation, books the appointment through the scheduling MCP server, and sends a summary to the surgeon’s office in the same interaction. Total call time: four minutes and twelve seconds.

This is what MCP and tandem care can make possible together: AI handles the data orchestration, and the human agent brings judgment, empathy, and accountability.

Interoperability, Governance, and Enterprise Flexibility

Traditional contact centers still struggle to surface a customer’s full history, preferences, and current situation in real time. Agents move across disconnected applications to piece together information spread across CRMs, ticketing systems, EHRs, and knowledge bases. Before MCP, bridging those systems usually meant brittle, point-to-point integrations that many organizations could not fully build or maintain.

MCP changes that by giving AI a more standardized way to discover tools and retrieve relevant data during the interaction itself. That makes a different service model possible. Instead of focusing mainly on deflection, organizations can use AI to support the live interaction — assembling context, surfacing useful information, and helping the conversation move forward with less repetition and less friction.
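The discovery step mentioned above has a concrete shape in the protocol: an MCP client sends a tools/list request and the server replies with its catalog. The exchange below follows the MCP specification's message format, but the specific tool entry shown (`lookup_customer`) is hypothetical.

```python
# An MCP client discovers what a server offers with a tools/list request;
# no custom integration code is needed to learn the available operations.
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# A typical response: each tool advertises a name, a description, and a
# JSON Schema for its inputs. This example entry is hypothetical.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "lookup_customer",
                "description": "Fetch a customer's profile and interaction history",
                "inputSchema": {
                    "type": "object",
                    "properties": {"customer_id": {"type": "string"}},
                    "required": ["customer_id"],
                },
            }
        ]
    },
}

tool_names = [t["name"] for t in list_response["result"]["tools"]]
```

Because the schema travels with the tool, an AI model can decide when and how to call it without anyone hand-writing glue code for that pairing.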

Standardized tool interaction also lowers the barrier to building more dynamic workflows. Customer journeys no longer have to depend on the same level of custom engineering every time a business rule changes or a new source of context becomes relevant. Teams can configure and refine interactions more quickly, which makes the system easier to adapt as needs change.

There is a difference between saying open standards matter and building around them early. Avaya did that with MCP. These memberships extend that position by linking the company more directly to the organizations helping shape the standards environment around enterprise AI.

See how Avaya Infinity puts open orchestration and MCP to work in the enterprise.

View this status update on MCP: The Model Context Protocol: A Status Report for Enterprise Customer Experience Leaders

See how Avaya’s approach differs from other CCaaS offerings in this white paper: The Importance of Being Open for AI

Frequently Asked Questions

What is the Model Context Protocol, and why does it matter in enterprise AI?

MCP is an open-source protocol that provides a standardized way for AI models to connect with enterprise tools, data sources, and applications. In practical terms, it helps reduce the integration bottlenecks that often keep AI systems disconnected from the business context they need to be useful. It was created by Anthropic in November 2024 and is now governed by the Agentic AI Foundation (a Linux Foundation project). MCP is built on JSON-RPC 2.0 and reduces the need for custom, one-off API integrations by establishing a standardized communication layer. It is often described as the “USB-C of enterprise AI.”

Why does MCP matter for customer experience?

MCP can help transform the contact center from a static routing engine into a more dynamic, AI-powered orchestration platform. By helping AI pull real-time data from connected systems during live interactions, it reduces the context blind spots that force customers to repeat themselves and agents to toggle between disconnected applications. It fundamentally shifts the model from customer deflection to customer augmentation.

How does Avaya Infinity use MCP?

Avaya Infinity integrates MCP natively into its core orchestration engine, operating as both an MCP server and an MCP client simultaneously. Combined with a strategic Databricks partnership for enterprise-grade data governance and membership in the AAIF, Avaya Infinity provides model-agnostic AI orchestration, fine-grained access control through Unity Catalog, immutable audit logging, and the tandem care model of human-AI collaboration. This makes it purpose-built for regulated industries.

How does MCP compare to traditional API integrations?

MCP reduces integration complexity from a multiplicative model (N × M) to an additive model (N + M). Traditional integrations require a unique, custom-coded connection for every AI model-tool pairing. With MCP, each tool needs one MCP server, and each AI model needs one MCP client. Some organizations report 40–60% faster agent deployment times with MCP compared to traditional integration approaches.
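The arithmetic behind that comparison is easy to check. A minimal sketch, using illustrative counts of five models and twenty tools:

```python
def point_to_point(models: int, tools: int) -> int:
    """Custom integrations: one bespoke connector per model-tool pairing (N x M)."""
    return models * tools

def mcp_connectors(models: int, tools: int) -> int:
    """MCP: one client per model plus one server per tool (N + M)."""
    return models + tools

# Example: 5 AI models and 20 enterprise tools.
pairs = point_to_point(5, 20)   # bespoke integrations to build and maintain
parts = mcp_connectors(5, 20)   # standardized connectors under MCP
```

With these counts the point-to-point model requires 100 integrations against 25 connectors for MCP, and the gap widens as either side of the ecosystem grows.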

Is MCP ready for enterprise production use?

MCP has moved past the experimental phase and is maturing quickly for enterprise production use. Major platforms are running production deployments, and the Agentic AI Foundation’s roadmap explicitly prioritizes enterprise readiness. However, enterprises must pair MCP with robust security gateways, governed data platforms, and strict access controls before deploying in regulated or high-stakes environments.