Enterprise AI Takes a Village
In the early 1900s, a man sold bottles of snake oil at a county fair. Down the road, a pharmacy stocked tested medicines from dozens of manufacturers, each labeled, each subject to oversight that no single seller could replicate. Enterprise AI faces the same choice in 2026.
Key Takeaways
Closed, single-vendor AI platforms promise convenience but deliver containment: proprietary orchestration, vendor-controlled memory, and curated model marketplaces that quietly discourage outside alternatives. Open, standards-based orchestration built around the Model Context Protocol (MCP) and governed by the Agentic AI Foundation gives enterprises the freedom to choose their own AI models, swap components as the market evolves, and maintain control over their data. Avaya Infinity is purpose-built for this open future, combining complete model agnosticism, Tandem Care human-AI collaboration, and enterprise-grade security through Databricks.
- MCP has reached 97 million monthly SDK downloads and over 10,000 active production servers in 16 months.
- Gartner projects 33% of enterprise software will feature agentic AI capabilities by 2028, up from less than 1% in 2025.
- Security researchers identified 30 critical MCP vulnerabilities within 60 days in early 2026.
- Organizations implementing MCP report 40-60% faster agent deployment than with traditional integration approaches.
There is a town in upstate New York where, in the early 1900s, a man named Clark Stanley set up a folding table at a county fair and sold bottles of something he called Snake Oil Liniment. The bottle had a hand-drawn label. Stanley wore a cowboy hat and spoke with the confidence of a man who had rehearsed his pitch in fourteen previous towns. The liniment, he said, could cure everything from joint pain to hearing loss to nervous exhaustion.
One bottle. One solution. One vendor.
Down the road, a small pharmacy had just opened. It was not glamorous. It did not make promises about curing everything. What it did was stock medicines from dozens of manufacturers, each tested, each labeled with its actual ingredients, each subject to the scrutiny of a system larger than any single seller. You could walk in, describe your problem, and walk out with something that had been through a gauntlet of development, testing, and oversight that no individual salesperson could replicate.
The choice between the two was, of course, no choice at all. But the remarkable thing about the snake oil booth was how many people stood in line.
This is the story of enterprise AI in 2026.
The Seduction of the Single Bottle
There is an understandable appeal to the all-in-one pitch. A vendor walks into a boardroom and says, "We have the AI model. We have the orchestration layer. We have the data platform. We have the marketplace. Sign here, and it all just works." One interface. One invoice. One throat to choke.
This is what the closed AI stack looks like. It is elegant in the way a terrarium is: everything fits together beautifully, as long as you never need anything that is not already inside the glass.
The problem is that enterprise AI is not a terrarium. It is an ecosystem. And ecosystems, by definition, cannot be contained by a single vendor any more than a pharmacy can be run by a single chemist who insists on compounding every medication from scratch in the back room.
Consider what happens when an enterprise locks itself into a vertically integrated AI stack. The orchestration layer is proprietary. The memory architecture is vendor-controlled. The models are either built in-house by that vendor or accessed through a curated marketplace that quietly discourages outside alternatives. And every integration point, every workflow, every customer journey becomes another thread in a web of dependency that grows stickier with each passing quarter.
This is not a theoretical risk. The switching costs are real. The innovation constraints are measurable. And the strategic implications are profound: you cannot evolve faster than your vendor does. If a breakthrough model emerges next month from a lab in Paris, a startup in San Francisco, or an open-source community on GitHub, you cannot use it. Not without rearchitecting. Not without renegotiating. Not without pain.
The snake oil salesman did not want you visiting the pharmacy. He wanted you to believe that his bottle was the only bottle you would ever need.
What the Pharmacy Gets Right
A pharmacy is not exciting. Nobody writes breathless press releases about pharmacies. But pharmacies work, and they do so for a reason worth understanding: they are built on open standards.
Every drug on the shelf got there through a process that no single company controls. The active ingredients are tested against public protocols. The manufacturing processes follow shared standards. The labeling requirements are universal. A pharmacist can stock products from 100 different manufacturers because the system governing those products is designed for interoperability, not exclusivity.
This is exactly the principle behind the Model Context Protocol.
Anthropic introduced MCP in November 2024 as an open-source specification. Within 16 months, it achieved something that most technology standards take a decade to accomplish: genuine, cross-industry adoption. OpenAI adopted it. Google adopted it. Microsoft transformed Dataverse into a native MCP server. Salesforce integrated it into Agentforce 3. By March 2026, MCP had generated over 97 million monthly SDK downloads and was running on more than 10,000 active production servers.
The metaphor that caught on was "the USB-C of enterprise AI." But the pharmacy analogy may be more precise. USB-C is about physical compatibility. MCP is about something deeper: it is about creating a system where the customer, not the vendor, decides what goes on the shelf.
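To make the "deeper" part concrete: MCP frames every tool interaction as a JSON-RPC 2.0 exchange, so any compliant client can call any compliant server. Below is a simplified sketch of that message shape; the method and field names follow the protocol's conventions, but the `crm.lookup_customer` tool and its arguments are hypothetical, chosen only for illustration.

```python
import json

# A tools/call request as MCP frames it over JSON-RPC 2.0.
# The "crm.lookup_customer" tool and its arguments are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "crm.lookup_customer",
        "arguments": {"phone": "555-0100"},
    },
}

# The matching response carries the same id back to the caller,
# so any compliant client can correlate it, regardless of vendor.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "tier: gold"}]},
}

wire = json.dumps(request)  # what actually travels between client and server
```

Because the envelope is standardized, the customer decides which servers sit behind it, which is the whole point of the shelf.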
When Anthropic donated MCP to the newly established Agentic AI Foundation under the Linux Foundation in December 2025, the protocol moved from being a single company's contribution to a community-governed standard. Co-founded by Anthropic, OpenAI, and Block, with Platinum membership from AWS, Google, Microsoft, Bloomberg, and Cloudflare, the AAIF now has nearly 150 members. The governance is vendor-neutral. The roadmap is public. The development is collaborative.
This is what a pharmacy looks like at the infrastructure level. Not one company deciding what medicines exist, but a system of checks, standards, and shared accountability that makes the whole shelf trustworthy.
Why It Takes a Village
The reason enterprise AI takes a village is the same reason medicine takes a village. No single entity can do it all. The researcher who discovers a compound is not the manufacturer who produces it at scale, who is not the regulator who ensures its safety, who is not the pharmacist who matches it to a patient's specific needs. The system works because each participant does what it does best, connected by shared standards that enable collaboration.
Avaya understood this early. In July 2025, the company announced that the Infinity platform would support MCP natively, becoming the first major enterprise contact center vendor to commit to the standard formally. CEO Patrick Dennis described it as an "MCP moonshot" that would become core to the product roadmap. But Avaya did not stop at adoption. It joined the Agentic AI Foundation to help shape the standard's future.
The architectural philosophy behind Infinity reflects this village mentality in several concrete ways.
First, complete model agnosticism. Avaya explicitly rejects AI vendor lock-in. Enterprises using Infinity can run Google Gemini, Anthropic Claude, OpenAI models, or specialized open-source models. They can swap models as the AI market evolves without breaking their underlying MCP-standardized integrations. This is not a theoretical option buried in documentation. It is the platform's fundamental design principle.
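A rough sketch of what model agnosticism looks like architecturally: integrations talk to an orchestration layer, never to a specific model, so the model behind it can change without touching anything downstream. All names here (`ModelBackend`, `GeminiBackend`, `Orchestrator`) are hypothetical illustrations, not Avaya Infinity APIs.

```python
from typing import Protocol


class ModelBackend(Protocol):
    """Any model provider that can complete a prompt."""
    def complete(self, prompt: str) -> str: ...


class GeminiBackend:
    def complete(self, prompt: str) -> str:
        return f"[gemini] {prompt}"


class ClaudeBackend:
    def complete(self, prompt: str) -> str:
        return f"[claude] {prompt}"


class Orchestrator:
    """Integrations depend on the orchestrator's interface,
    never on a specific vendor's model."""
    def __init__(self, backend: ModelBackend) -> None:
        self.backend = backend

    def swap_backend(self, backend: ModelBackend) -> None:
        self.backend = backend  # downstream integrations are untouched

    def handle(self, prompt: str) -> str:
        return self.backend.complete(prompt)


orch = Orchestrator(GeminiBackend())
first = orch.handle("summarize this call")
orch.swap_backend(ClaudeBackend())  # the market moved; swap the model
second = orch.handle("summarize this call")
```

The swap is one line because the coupling lives in a shared interface rather than in every integration.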
Second, the Tandem Care model. Rather than pursuing aggressive cost reduction through customer deflection, Avaya advocates a human-AI collaboration model. The AI acts as an intelligent co-pilot, handling data retrieval, context assembly, and real-time recommendations. The human agent brings empathy, judgment, and accountability. MCP enables context and memory to persist across both roles, ensuring continuity. The goal is not fewer humans. It is more empowered humans.
Third, dual-role architecture. Avaya Infinity operates as both an MCP server and an MCP client. As a server, it exposes services like agent status, call summaries, and routing logic to external AI systems. As a client, it consumes services from external MCP-compliant systems. This bidirectional capability enables true orchestration across the entire AI ecosystem, not just the parts that one vendor controls.
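The dual-role idea can be sketched as a single component that both exposes tools (server role) and consumes tools from peers (client role). This is an illustrative toy, not Avaya's implementation; the class and tool names are hypothetical.

```python
from typing import Any, Callable


class DualRoleNode:
    """A node that is both a tool server and a tool client."""
    def __init__(self) -> None:
        self._tools: dict[str, Callable[..., Any]] = {}
        self._peers: dict[str, "DualRoleNode"] = {}

    # --- server role: expose services to external AI systems ---
    def expose(self, name: str, fn: Callable[..., Any]) -> None:
        self._tools[name] = fn

    def handle_call(self, name: str, **kwargs: Any) -> Any:
        return self._tools[name](**kwargs)

    # --- client role: consume services from external systems ---
    def connect(self, peer_name: str, peer: "DualRoleNode") -> None:
        self._peers[peer_name] = peer

    def call_peer(self, peer_name: str, tool: str, **kwargs: Any) -> Any:
        return self._peers[peer_name].handle_call(tool, **kwargs)


# The platform exposes contact-center services (server role)...
platform = DualRoleNode()
platform.expose("agent_status",
                lambda agent_id: {"agent_id": agent_id, "state": "available"})

# ...and consumes an external CRM's services (client role).
crm = DualRoleNode()
crm.expose("lookup_customer", lambda phone: {"phone": phone, "tier": "gold"})
platform.connect("crm", crm)

status = platform.handle_call("agent_status", agent_id="a-17")
customer = platform.call_peer("crm", "lookup_customer", phone="555-0100")
```

The same object answers inbound calls and makes outbound ones, which is what lets orchestration span systems no single vendor controls.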
Fourth, enterprise-grade security through Databricks. Avaya's strategic partnership with Databricks directly addresses the governance gap that makes raw MCP risky for enterprise deployments. Unity Catalog provides fine-grained access control. Tenant-aware data segregation protects multi-customer environments. Immutable audit logging covers all AI interactions. This is the regulatory equivalent of the FDA stamp on a medicine bottle: not a guarantee that nothing will ever go wrong, but a structured, auditable system designed to minimize the chances.
The Governance Gap and Why It Matters
Here is where the pharmacy analogy becomes urgent rather than merely illustrative.
MCP's extraordinary momentum comes with real risks. Security researchers identified 30 critical vulnerabilities within 60 days in early 2026, primarily in widely copied reference server implementations. AI agents using MCP can bypass traditional identity and access management protocols. Third-party MCP servers can introduce supply chain vulnerabilities. Sensitive data passed into an LLM's context window without strict controls can be inadvertently exposed.
Forrester Research put it bluntly in a headline that CISOs took to heart: MCP is transformative, but only when deployed with enterprise-grade security and governance.
This is exactly why the village matters. The raw protocol, by itself, does not mandate role-based access control, attribute-based access control, or audit logging. It provides an elegant mechanism for connection. But connection without governance is a liability, not an asset.
A pharmacy without the FDA, without manufacturing standards, without ingredient labeling, is just a store full of bottles. And a store full of unlabeled bottles is just a snake oil booth with better lighting.
Avaya's approach, layering MCP with Databricks governance, AAIF membership, and a security-first architecture, is designed to close this gap. Every AI action is routed through a policy-governed orchestration engine. Context boundaries are explicitly defined. Every action is timestamped, logged, and traceable. New regulatory requirements can be adopted without vendor dependence.
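The policy-governed, audited dispatch described above can be sketched in a few lines: every action passes a policy check, and every attempt, allowed or denied, lands in an immutable-style log. The policy table, roles, and action names below are hypothetical, intended only to show the pattern.

```python
import time

# Hypothetical policy table: which roles may perform which actions.
POLICY = {
    "read_call_summary": {"agent", "supervisor"},
    "export_customer_data": {"compliance"},
}

AUDIT_LOG: list[dict] = []  # every attempt is recorded, allowed or not


def governed_call(role, action, handler, **kwargs):
    """Route an AI action through a policy check with audit logging."""
    allowed = role in POLICY.get(action, set())
    AUDIT_LOG.append({"ts": time.time(), "role": role,
                      "action": action, "allowed": allowed})
    if not allowed:
        raise PermissionError(f"{role} may not perform {action}")
    return handler(**kwargs)


# An allowed action executes and is logged.
result = governed_call("agent", "read_call_summary",
                       lambda call_id: f"summary of {call_id}",
                       call_id="c-42")

# A denied action never reaches the handler, but it is still logged.
try:
    governed_call("agent", "export_customer_data", lambda: None)
except PermissionError:
    pass
```

The key design choice is that logging happens before the permission check can raise, so denials are as traceable as successes.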
The Question That Matters
The enterprise AI landscape in 2026 is moving with a speed that rewards clarity and punishes indecision. Gartner projects that 33% of all enterprise software will feature agentic AI capabilities by 2028, up from less than 1% in 2025. Organizations implementing MCP report 40-60% faster agent deployment times. The Agentic AI Foundation has grown to nearly 150 members in just a few months.
Every major CX platform is rushing to announce MCP support. But there is a difference between announcing support and building around it. There is a difference between marketing "open" and architecting open. There is a difference between a vendor who adopted MCP when it became fashionable and one who committed to it when it was still a bet.
The question facing every enterprise leader is deceptively simple: If you were sick and needed safe, effective medicine, would you walk up to the snake oil booth or into the pharmacy?
One asks you to trust a single salesman's promise. The other gives you a system of standards, oversight, and choice designed to earn that trust through transparency.
Avaya Infinity is the pharmacy. It does not ask enterprises to bet on a single AI vendor, sacrifice security for innovation, or replace their existing infrastructure to access the future. It meets organizations where they are and provides a clear, governed, open path forward.
The future of customer experience is not proprietary. It is open, intelligent, and deeply connected. And it takes a village to build.
See how Avaya Infinity puts open orchestration and MCP to work in the enterprise.
Frequently Asked Questions
What is the Model Context Protocol (MCP)?
MCP is an open-source protocol created by Anthropic in November 2024 that standardizes how AI models connect to enterprise tools, data sources, and applications. It eliminates the need for custom API integrations by providing a universal JSON-RPC 2.0 communication layer. Often described as the "USB-C of enterprise AI," MCP is now governed by the Agentic AI Foundation, a Linux Foundation project.
Why does MCP matter for customer experience?
MCP transforms the contact center from a static routing engine into a dynamic, AI-powered orchestration platform. It enables AI to pull real-time data from CRMs, ticketing systems, knowledge bases, and other enterprise systems during live interactions, eliminating the context blind spots that force customers to repeat themselves and agents to toggle between disconnected applications.
How does Avaya Infinity use MCP?
Avaya Infinity integrates MCP natively into its core orchestration engine, functioning as both an MCP server and a client. Combined with a strategic Databricks partnership for enterprise-grade data governance and membership in the AAIF, Infinity provides model-agnostic AI orchestration, fine-grained access control, immutable audit logging, and the Tandem Care model of human-AI collaboration, purpose-built for regulated industries.
What is Tandem Care?
Tandem Care is Avaya's model of human-AI collaboration. Rather than using AI solely for customer deflection, Tandem Care treats AI as an intelligent co-pilot that handles data retrieval, context assembly, and real-time recommendations, while human agents provide empathy, judgment, and relationship-building. MCP enables context and memory to persist seamlessly across both roles.
What are the security risks of MCP, and how does Avaya address them?
Key risks include vulnerable reference implementations, uncontrolled access to AI agents, supply chain threats from unvetted third-party servers, and data privacy exposure. Avaya addresses these through its Databricks partnership, which delivers Unity Catalog access control, tenant-aware data segregation, immutable audit logging, and integration across structured and unstructured data sources.
How does an open AI platform differ from a closed one?
Closed platforms consolidate control by coupling proprietary orchestration, models, and data layers, limiting customer choice and creating vendor lock-in. Open platforms like Avaya Infinity use standardized protocols, such as MCP, to separate these layers, enabling enterprises to choose their own AI models, swap components as the market evolves, and maintain control over their data and orchestration logic.