March 20, 2026

Let 1,000 MCP Servers Bloom: How MCP is Reshaping Customer Experience Through Open Orchestration

Steve Brock

Marketing Director, Avaya

How the Model Context Protocol went from developer experiment to enterprise standard in 16 months, and why Avaya Infinity was the first major contact center platform to go all-in on open orchestration.

The protocol that connects AI to enterprise data has gone from open-source experiment to industry standard in 16 months. For contact center leaders, it changes everything about what orchestration can be.

Key Takeaways

The Model Context Protocol (MCP) is an open-source standard, created by Anthropic and now governed by the Linux Foundation, that gives AI models a universal way to connect with enterprise tools and data. For customer experience leaders, MCP eliminates the integration bottleneck that has kept contact centers disconnected from real-time enterprise context. Avaya Infinity was the first major contact center platform to commit to native MCP support, pairing open orchestration with enterprise-grade governance through Databricks.

  • MCP has reached 97 million monthly SDK downloads and over 10,000 active production servers.
  • Nearly 150 organizations have joined the Agentic AI Foundation governing the protocol.
  • Organizations report 40 to 60% faster agent deployment versus traditional integrations.
  • Gartner projects 33% of enterprise software will feature agentic AI capabilities by 2028.

MCP By the Numbers

| Metric | Value (March 2026) |
| --- | --- |
| Monthly SDK downloads | 97 million+ |
| Active production servers | 10,000+ |
| AAIF member organizations | ~150 |
| Critical CVEs identified (early 2026) | 30 in 60 days |
| Faster agent deployment vs. traditional integration | 40–60% |
| Projected agentic software penetration by 2028 (Gartner) | 33% (up from <1% in 2025) |


The Thousand Flowers Are Already Blooming

Andreessen Horowitz's Top 100 Gen AI Consumer Apps report, published on March 9, 2026, captures a shift that enterprise technology leaders should internalize: AI is no longer a feature. It is the architecture. Products like CapCut, Canva, and Notion have woven generative AI so deeply into their core that removing it would be like removing the engine from a car. What makes this possible is not better models. It is better connection, specifically the connector ecosystems that both ChatGPT and Claude have launched, with MCP integrations forming the backbone of how AI assistants interact with the enterprise software stack.

The same architectural logic is now reshaping enterprise customer experience. When you look at the MCP ecosystem in early 2026, you see exactly what this post’s title evokes: thousands of MCP servers blooming across every vertical and use case, each one adding a new capability to the AI-powered enterprise. The enterprises that will lead in customer experience are the ones planting those seeds now.

MCP in Early 2026: From Experiment to Enterprise Infrastructure

If you had told enterprise architects in late 2024 that an open protocol from a single AI company would become the universal standard for connecting AI to business systems within 16 months, most would have been skeptical. Technology standards typically require years of committee negotiations and political compromises. MCP skipped all of that.

Anthropic launched MCP in November 2024 as an open-source specification built on JSON-RPC 2.0. Developer tools like Cursor, Visual Studio Code, and GitHub Copilot were the first adopters. Then, in March 2025, OpenAI formally adopted MCP across its products, with Google DeepMind following shortly after. Microsoft transformed Dataverse into a native MCP server. Salesforce deeply integrated MCP into Agentforce 3. Google rolled out native support for Cloud Run, Cloud SQL, Spanner, and Google Workspace.

The defining institutional moment came in December 2025, when Anthropic donated MCP to the newly established Agentic AI Foundation (AAIF), a directed fund under the Linux Foundation co-founded by Anthropic, OpenAI, and Block, with Platinum membership from AWS, Google, Microsoft, Bloomberg, and Cloudflare. This was the moment MCP stopped being “Anthropic’s protocol” and became the industry’s protocol.

“Nearly 150 organizations joining the AAIF in its early days is a strong signal that agentic AI is shifting from experimentation to real-world deployment. The infrastructure for autonomous systems must be open, interoperable, and community-governed.”
— Jim Zemlin, Executive Director, Linux Foundation

By March 2026, the AAIF had expanded to nearly 150 members, including JPMorgan Chase, American Express, ServiceNow, Autodesk, and Red Hat. The protocol’s 2026 roadmap, released on March 9, is organized around four priority areas: transport scalability, agent communication, governance maturation, and enterprise readiness.

Gartner’s 2025 Innovation Insight report identified MCP as fundamental to the future of AI connectivity, projecting that 75% of API gateway vendors and 50% of iPaaS vendors will natively support MCP features in 2026. Even more significantly, Gartner anticipates that 33% of all enterprise software will feature agentic retrieval-augmented generation (RAG) capabilities by 2028, up from less than 1% in 2025.

Why MCP Changes Everything for Customer Experience

For customer experience leaders, MCP is not just another protocol. It is the architectural enabler that transforms what a contact center can be.

The End of the Context Blind Spot

The context blind spot is the persistent inability of traditional contact centers to surface a customer’s full history, preferences, and current situation in real time. Agents toggle between disconnected applications to piece together information spread across CRMs, ticketing systems, EHRs, and knowledge bases. Before MCP, bridging these systems required expensive, brittle, point-to-point integrations that most organizations could never fully build or maintain.

MCP eliminates this blind spot. When a customer interaction begins, the platform’s embedded AI dynamically discovers the necessary MCP tools and orchestrates data retrieval in real time. The AI connects through the standardized protocol, pulling exactly the context it needs to personalize the interaction. The result is a fundamentally different customer experience: no one has to repeat their story, and every agent has the full picture.
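The dynamic discovery step described above can be sketched in a few lines. This is an illustrative, stdlib-only Python sketch, not real MCP SDK code: the tool names, descriptions, and the keyword-matching heuristic are all hypothetical stand-ins for a real server's `tools/list` response and a model's tool-selection step.

```python
# Hypothetical catalog a platform might receive from MCP servers at
# session start (via the protocol's tools/list request). Names and
# descriptions are invented for illustration.
DISCOVERED_TOOLS = [
    {"name": "crm.lookup_customer", "description": "Fetch customer profile by phone number"},
    {"name": "tickets.recent", "description": "List recent support tickets for a customer"},
    {"name": "kb.search", "description": "Search the knowledge base"},
]

def select_tools(intent_keywords, tools=DISCOVERED_TOOLS):
    """Pick the discovered tools whose descriptions match the caller's intent."""
    return [t["name"] for t in tools
            if any(kw in t["description"].lower() for kw in intent_keywords)]

# A billing-history question should pull the customer profile and
# recent tickets, not the knowledge base.
selected = select_tools(["customer", "ticket"])
```

In a real deployment the selection step is performed by the AI model itself, reasoning over the discovered tool schemas rather than keyword matching; the point is that discovery happens at runtime, so adding a new MCP server adds a new capability without code changes.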

From Deflection to Augmentation

The first generation of AI in the contact center focused almost entirely on deflection: routing customers away from human agents to reduce costs. While this delivered short-term savings, it often eroded brand loyalty and customer satisfaction, particularly for complex or emotionally sensitive interactions.

MCP enables a fundamentally different model. Instead of replacing human agents, AI becomes a continuous orchestration engine that provides real-time, context-rich support during live interactions. The AI quietly executes database lookups, pulls contextual history, surfaces actionable intelligence, and drafts responses for human review. The human agent remains in control but is empowered with situational awareness that was previously impossible.

Democratized Journey Orchestration

Because MCP standardizes tool interaction and data flow, it dramatically lowers the barrier to building sophisticated customer workflows. Contact center administrators and business users can configure dynamic, context-aware journeys without deep software engineering expertise. This democratization of orchestration accelerates time-to-value and enables organizations to iterate on CX strategies at the speed of business rather than the speed of IT.

What This Looks Like in Practice: A Day in the Life

It is 2:14 PM on a Tuesday. A patient calls a regional health system’s contact center. She is a 67-year-old woman who had knee replacement surgery 11 days ago, was discharged from the hospital five days ago, and is now calling because she is worried about persistent swelling and cannot reach her surgeon’s office.

In the traditional model, this call starts badly and gets worse. The agent pulls up the patient’s account but sees only the billing record. The clinical history is in a different system. The post-surgical care plan is in a third. The agent asks the patient to describe her situation from scratch. The patient, who is anxious and in pain, has to explain her surgery, her discharge date, and her medications for the third time this week. The agent, doing her best with the tools she has, reads scripted triage questions from a protocol binder while toggling between four screens.

Now consider the same call on an MCP-enabled platform running the Tandem Care model.

Before the agent even says hello, the platform’s AI has already orchestrated a cascade of real-time data retrieval through MCP. It has connected to the EHR system through a FHIR-compliant MCP server and pulled the patient’s surgical history, discharge summary, and prescribed medication list. It has queried the scheduling system and identified that the patient’s follow-up appointment was canceled by the surgeon’s office due to a scheduling conflict. It has checked the post-surgical care protocol and flagged that day-11 swelling may warrant clinical review depending on severity.

The agent sees all of this in a single pane. She greets the patient by name and says, “I can see you had your knee replacement on the 8th and were discharged on the 14th. I also see your follow-up was rescheduled. Let me help you with the swelling concern and get that appointment back on the books.”

The patient exhales. She does not have to explain anything. The AI, operating as an intelligent co-pilot, has already drafted a severity-assessment checklist based on the patient’s specific procedure and recovery timeline. The agent walks through the questions naturally, not from a generic script but from a dynamically generated protocol tailored to this patient’s case. Based on the responses, the AI surfaces a recommendation: schedule an urgent follow-up within 48 hours and flag the case for the orthopedic team’s review.

The agent confirms the recommendation, books the appointment through the scheduling MCP server, and sends a summary to the surgeon’s office, all within the same interaction. Total call time: four minutes and twelve seconds. Patient satisfaction: high. Clinical risk: addressed. Agent experience: empowered, not overwhelmed.

This is what MCP and Tandem Care deliver together. Not a chatbot deflecting a worried patient to a FAQ page. Not an IVR maze that ends in a voicemail box. A connected, intelligent, human-centered interaction where AI handles the data orchestration and the human provides the empathy and judgment.

The Competitive Landscape: Everyone Is Moving, but Timing Matters

MCP adoption across the customer experience sector is accelerating, but it is not uniformly distributed. For CX decision-makers, understanding the current state of play is essential.

Salesforce (Agentforce 3) has deep MCP integration in production, enabling autonomous agents to interface with external data stores while eliminating technical debt from custom connectors. Microsoft (Copilot Studio) now operates Dataverse as a full MCP server, giving Copilot Studio agents native access to ERP and CRM data. Google (Gemini Enterprise) offers native MCP support for Cloud Run, Cloud SQL, Spanner, and Google Workspace. Amazon Connect has placed MCP support on its public roadmap. OpenAI (ChatGPT Enterprise) is running full MCP read/write support in beta via Developer Mode.

CX Vendor MCP Comparison: March 2026

| Vendor | MCP Commitment | First Announced | Governance Layer | Model Agnostic? |
| --- | --- | --- | --- | --- |
| Avaya Infinity | Native (server + client) | July 2025 | Databricks Unity Catalog | Yes (full) |
| Genesys Cloud CX | Announced | Late 2025 | Not disclosed | Partial |
| Amazon Connect | Public roadmap | 2025 | AWS native | AWS-centric |
| NICE CXone | Evaluating | TBD | Not disclosed | Limited |
| Five9 | Evaluating | TBD | Not disclosed | Partial |

Note: Status reflects publicly available information as of March 2026. “Evaluating” indicates no formal public commitment at time of publication.

The competitive takeaway is clear: the race in CX is no longer about who has the best proprietary chatbot. It is about which vendor provides the most flexible, secure, and open orchestration engine for autonomous agentic interactions. Every major player is moving toward MCP. The differentiator is how they implement it, and the governance and security they bring to the table.

Why Avaya Went All-In on MCP First

In July 2025, Avaya announced that the Infinity platform would support MCP natively, becoming the first major enterprise contact center vendor to formally commit to the standard. CEO Patrick Dennis described it as an “MCP moonshot” that would become core to the product roadmap.

“This is not a ‘wait-and-see’ moment. Avaya believes the time to be intentional about building the definitive open orchestration engine for the modern enterprise is now.”
— David Funck, Chief Technology Officer, Avaya

Complete Model Agnosticism

Model agnosticism is the principle that an enterprise platform should never lock customers into a single AI vendor’s ecosystem. Enterprises using Infinity are entirely free to use Google Gemini, Anthropic Claude, OpenAI models, or specialized open-source models as the cognitive engine for their customer experience. They can swap these models as the AI market evolves without ever breaking their underlying MCP-standardized data integrations. This is not a theoretical capability. It is the platform’s fundamental design principle.

The Tandem Care Model of Human-AI Collaboration

Tandem Care is Avaya’s model of human-AI collaboration in which agentic AI and human agents work together in a harmonious cycle rather than AI replacing human agents. MCP enables the AI to act as an intelligent co-pilot, executing database lookups, pulling contextual history, and surfacing actionable intelligence to the human agent during live interactions. The result is faster resolutions, higher customer satisfaction, and a more fulfilling experience for the humans in the loop.

Dual-Role Architecture

Dual-role architecture means that Avaya Infinity operates as both an MCP server and an MCP client simultaneously. As a server, it exposes services like agent status, call summaries, and routing logic to external AI systems. As a client, it consumes services from external MCP-compliant systems. This enables true bidirectional orchestration across the entire AI ecosystem.
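The dual-role idea can be made concrete with a small sketch. This is a plain-Python illustration of the concept, not Avaya's implementation and not the MCP SDK: the node class, tool names, and in-process "peers" are all hypothetical, standing in for real MCP transports.

```python
# Illustrative sketch: one node both exposes tools (server role) and
# calls tools on other systems (client role). Everything here is an
# invented stand-in for real MCP servers, clients, and transports.
class DualRoleNode:
    def __init__(self, name):
        self.name = name
        self._tools = {}   # tools this node serves to others
        self.peers = {}    # remote nodes this node consumes from

    # --- server role: register tools and answer incoming calls ---
    def serve_tool(self, tool_name, fn):
        self._tools[tool_name] = fn

    def handle_call(self, tool_name, **kwargs):
        return self._tools[tool_name](**kwargs)

    # --- client role: invoke a tool exposed by a peer node ---
    def call_peer(self, peer_name, tool_name, **kwargs):
        return self.peers[peer_name].handle_call(tool_name, **kwargs)

# The contact center serves routing/status data while consuming CRM data.
contact_center = DualRoleNode("contact-center")
crm = DualRoleNode("crm")
contact_center.serve_tool("agent_status", lambda agent_id: {"agent_id": agent_id, "state": "available"})
crm.serve_tool("lookup_customer", lambda phone: {"phone": phone, "tier": "gold"})
contact_center.peers["crm"] = crm

status = contact_center.handle_call("agent_status", agent_id="a-42")            # server role
profile = contact_center.call_peer("crm", "lookup_customer", phone="555-0100")  # client role
```

The design point is symmetry: the same platform that answers "what is this agent's status?" for an external AI system can itself ask "who is this customer?" of an external CRM, over one protocol in both directions.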

Enterprise-Grade Security Through Databricks

Avaya’s strategic partnership with Databricks directly addresses the governance gap that makes MCP risky for many enterprise deployments. By utilizing Databricks to manage the underlying data architecture and serve as the secure data lake, Avaya delivers fine-grained access control through Unity Catalog, strict tenant-aware data segregation for multi-customer environments, immutable audit logging for all AI interactions, and seamless integration across both structured and unstructured data sources.

“Together, Databricks and Avaya empower enterprises to harness domain-specific AI without compromising agility or compliance.”
— Heather Akuiyibo, Vice President of GTM Integration, Databricks

The Security Imperative: Innovation Without Governance Is Just Risk

MCP’s extraordinary momentum comes with real risks that enterprise leaders cannot afford to underestimate. The protocol provides an elegant mechanism for connection, but it lacks built-in frameworks for security policy, authorization, or auditability.

In early 2026, security researchers identified 30 critical CVEs over 60 days, primarily related to path-traversal and argument-injection flaws in widely copied MCP reference servers. The concept of “identity dark matter,” where AI agents bypass traditional identity and access management protocols by exploiting stale service identities or long-lived API keys, has emerged as a top-tier concern for CISOs. Supply chain threats from unvetted third-party MCP servers and data privacy risks when sensitive information enters LLM context windows round out a risk profile that demands serious governance.
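To make the path-traversal class of flaw concrete, here is a minimal sketch of the guard the vulnerable reference servers lacked: resolve the requested path against a configured root and refuse anything that escapes it. The root directory is an assumption for illustration; this is not code from any real MCP server.

```python
from pathlib import Path

# Hypothetical root a filesystem MCP server is configured to serve.
ALLOWED_ROOT = Path("/srv/mcp-files").resolve()

def safe_resolve(requested: str) -> Path:
    """Resolve a client-supplied path and reject escapes from the root."""
    candidate = (ALLOWED_ROOT / requested).resolve()
    # resolve() collapses "../" segments, so a traversal attempt lands
    # outside ALLOWED_ROOT and is rejected before any file I/O happens.
    if not candidate.is_relative_to(ALLOWED_ROOT):
        raise PermissionError(f"path escapes served root: {requested}")
    return candidate

inside = safe_resolve("reports/q1.txt")
try:
    safe_resolve("../../etc/passwd")
    escaped = False
except PermissionError:
    escaped = True
```

The check is two lines, which is precisely why its absence across dozens of widely copied servers is a supply-chain story rather than a cryptography story: unvetted reference code propagates its omissions.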

This is precisely why Avaya’s approach matters. The combination of native MCP in the Infinity platform with Databricks governance is not just a feature differentiator. It is a direct answer to the most urgent security questions in the enterprise AI market. Organizations evaluating MCP adoption should not ask “Does this vendor support MCP?” They should ask, “How does this vendor govern it?”

Real-World Use Cases Delivering ROI Today

Healthcare: Leading digital health vendors are using specialized extensions of MCP, such as the Healthcare Model Context Protocol (HMCP), to enable AI agents to securely interact with clinical data systems while maintaining compliance with HIPAA and GDPR and interoperability with the FHIR standard.

Financial Services: Multinational banks are deploying MCP to connect AI agents with real-time market data and transactional ledgers. By eliminating batch-processing latency, agentic AI can instantly assess transaction risk, detect fraud patterns, and trigger step-up authentication during live customer interactions.

IT Service Management: Large organizations are using MCP to unify fragmented operational IT stacks. AI agents autonomously pull incident data from logging systems, cybersecurity platforms, and issue trackers to triage incidents, assign severity levels, and draft root-cause analyses without human intervention.

Organizations implementing MCP report agent deployment times 40–60% faster than with traditional integration approaches. These efficiencies are being measured in production environments across regulated industries right now.

What CX Leaders Should Do Now

1.  Prioritize Open Orchestration Over Proprietary Lock-In
Enterprise procurement should demand platforms that support open standards like MCP, enabling the freedom to swap AI models and integrate new tools without rearchitecting backend systems. A practical test: ask your vendor whether you can replace the AI model powering your contact center in 30 days without breaking a single data integration. If the answer is no, you are locked in.

2.  Invest in Governance Before Scaling
The security risks associated with ungoverned MCP deployments are substantial and well-documented. Before scaling any MCP initiative, ensure that robust access controls, audit logging, and data governance frameworks are in place. This means partnering with platforms that integrate enterprise-grade governance natively, not as an afterthought. Specifically: require fine-grained RBAC, immutable audit trails, and tenant-aware data segregation from day one.

3.  Adopt a Tandem Care Mindset
The most effective contact center AI strategies are not about eliminating human agents. They are about amplifying human capabilities with real-time AI support. Organizations that focus solely on deflection metrics will see short-term savings but long-term erosion of customer loyalty. Set a new KPI: measure the percentage of interactions where AI augments the agent versus the percentage where AI replaces the agent. The ratio should favor augmentation by at least 3:1.

4.  Launch a 90-Day MCP Pilot in a Governed Vertical
Do not try to boil the ocean. Select one regulated, high-value interaction scenario, such as post-surgical care coordination, complex claims adjudication, or Tier 2 technical support, and run a 90-day pilot. Scope it to three MCP server integrations (e.g., EHR + scheduling + knowledge base, or CRM + transactional ledger + fraud detection). Measure three things: average handle time, first-contact resolution, and agent satisfaction. These environments demand the governance rigor that separates production-grade deployments from experimental pilots, and they deliver the clearest ROI.

5.  Evaluate Platforms on Security Posture, Not Just Feature Lists
Every CCaaS vendor is announcing MCP support. The critical evaluation criteria should include how data governance is handled, which partners secure the data layer, whether the platform supports fine-grained RBAC and audit logging, and how the vendor addresses the protocol’s known security gaps.
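The governance requirements in recommendation 2 (fine-grained RBAC plus immutable audit trails) amount to a thin gateway in front of every tool call. Here is a hedged, stdlib-only sketch of that pattern; the roles, tool names, and permission map are invented for illustration, not a real product's policy model.

```python
import datetime

# Hypothetical role-to-tool permission map and an append-only audit log.
ROLE_PERMISSIONS = {
    "agent":      {"crm.read", "kb.search"},
    "supervisor": {"crm.read", "crm.write", "kb.search"},
}
AUDIT_LOG = []

def governed_call(role, tool, fn, **kwargs):
    """Check RBAC, record the attempt either way, then invoke the tool."""
    allowed = tool in ROLE_PERMISSIONS.get(role, set())
    AUDIT_LOG.append({  # log every attempt, allowed or denied
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "role": role, "tool": tool, "allowed": allowed,
    })
    if not allowed:
        raise PermissionError(f"{role} may not call {tool}")
    return fn(**kwargs)

result = governed_call("agent", "crm.read",
                       lambda customer_id: {"id": customer_id}, customer_id="c-7")
try:
    governed_call("agent", "crm.write", lambda **kw: None)
    denied = False
except PermissionError:
    denied = True
```

Note that denied attempts are logged before the exception is raised: an audit trail that records only successes is exactly the "identity dark matter" gap described earlier.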

The Window to Lead Is Now

The Model Context Protocol has reached the point of no return. With nearly 150 organizations in the Agentic AI Foundation, 97 million monthly SDK downloads, production deployments across every major cloud and enterprise platform, and aggressive analyst forecasts projecting 33% agentic software penetration by 2028, MCP is no longer optional for enterprise CX architectures. It is foundational.

The enterprises that move now will be the ones that define the next generation of customer experience. They will build orchestration capabilities that are flexible, secure, and model-agnostic. They will embrace human-AI collaboration rather than blunt deflection. And they will choose platforms that give them control over their AI destiny rather than locking them into proprietary ecosystems.

The future of customer experience is open, intelligent, and deeply connected.

So, let a thousand MCP servers bloom. The ones that bloom inside a governed, open orchestration platform will change everything.


Learn more about how Avaya Infinity is building the open, AI-powered Connection Center with MCP at the core.



Frequently Asked Questions

What is the Model Context Protocol (MCP)?

MCP is an open-source protocol that provides a universal, standardized way for AI models to connect with enterprise tools, data sources, and applications. Created by Anthropic in November 2024 and now governed by the Agentic AI Foundation (a Linux Foundation project), MCP is built on JSON-RPC 2.0 and eliminates the need for custom API integrations by establishing a standardized communication layer. It is often described as the “USB-C of enterprise AI.”
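Because MCP rides on JSON-RPC 2.0, the wire format is easy to picture. The sketch below builds the kind of request envelope a client sends to invoke a tool; the `tools/call` method name follows the MCP specification, while the tool name and arguments are hypothetical.

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build a JSON-RPC 2.0 envelope for an MCP tool invocation."""
    return json.dumps({
        "jsonrpc": "2.0",          # fixed version string required by JSON-RPC 2.0
        "id": request_id,          # correlates the eventual response
        "method": "tools/call",    # MCP's tool-invocation method
        "params": {"name": tool_name, "arguments": arguments},
    })

envelope = make_tool_call(1, "crm.lookup_customer", {"phone": "555-0100"})
decoded = json.loads(envelope)
```

Every MCP server, whatever it fronts, speaks this same envelope; that uniformity is what the "USB-C" analogy is pointing at.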

Why does MCP matter for customer experience and contact centers?

MCP transforms the contact center from a static routing engine into a dynamic, AI-powered orchestration platform capable of pulling real-time data from any connected enterprise system during live interactions. This eliminates the “context blind spot” that forces customers to repeat their story and agents to toggle between disconnected applications. It fundamentally shifts the model from customer deflection to customer augmentation.

How does Avaya Infinity use MCP?

Avaya Infinity integrates MCP natively into its core orchestration engine, operating as both an MCP server and an MCP client simultaneously. Combined with a strategic Databricks partnership for enterprise-grade data governance, Infinity provides model-agnostic AI orchestration, fine-grained access control through Unity Catalog, immutable audit logging, and the Tandem Care model of human-AI collaboration. This makes it purpose-built for regulated industries.

Was Avaya really the first major enterprise contact center vendor to adopt MCP?

Yes. In July 2025, Avaya publicly announced that the Infinity platform would support MCP natively, making it the first major enterprise CCaaS vendor to formally commit to the standard. CEO Patrick Dennis described the initiative as an “MCP moonshot” integrated directly into the core product roadmap. Competitors like Genesys and Amazon Connect have since announced their own MCP plans, but Avaya’s early commitment gave it a significant head start in building production-ready capabilities.

What are the security risks of MCP?

Key risks include vulnerable reference server implementations, identity dark matter from ungoverned AI agent access, supply chain threats from unvetted third-party servers, and data privacy exposure when sensitive information enters LLM context windows. Security researchers identified 30 critical CVEs in the first 60 days of 2026 alone. Enterprise MCP gateways and governed data platforms are essential for safe deployment. Avaya addresses these risks through its Databricks partnership, which provides fine-grained access control, tenant-aware data segregation, and immutable audit logging.

What is Tandem Care?

Tandem Care is Avaya’s model of human-AI collaboration in the contact center, in which AI and human agents work together in a harmonious cycle rather than AI replacing human agents. AI acts as an intelligent co-pilot handling data retrieval, context assembly, and real-time recommendations, while human agents provide empathy, judgment, and relationship-building. This approach delivers both operational efficiency and deeper customer satisfaction.

How does MCP compare to traditional API integrations?

MCP reduces integration complexity from an exponential model (N × M) to an additive model (N + M). Traditional integrations require a unique, custom-coded connection for every AI model-tool pairing. With MCP, each tool needs one MCP server and each AI model needs one MCP client. Organizations report 40–60% faster agent deployment times with MCP compared to traditional integration approaches.
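The integration arithmetic above is worth seeing with numbers plugged in. A minimal sketch (the model and tool counts are arbitrary examples):

```python
def custom_integrations(models, tools):
    return models * tools   # one bespoke connector per model-tool pairing

def mcp_connectors(models, tools):
    return models + tools   # one MCP client per model, one MCP server per tool

# e.g. 4 AI models against 25 enterprise tools
pairwise = custom_integrations(4, 25)   # 100 bespoke integrations to build and maintain
additive = mcp_connectors(4, 25)        # 29 MCP endpoints
```

The gap widens as either count grows, which is why the savings show up most in organizations that swap models frequently or keep adding tools.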

Which companies have adopted MCP?

As of March 2026, MCP has been adopted by Anthropic, Microsoft, Google, OpenAI, Salesforce, Amazon, Avaya, Genesys, Databricks, Kong, and many others. The Agentic AI Foundation, which governs the protocol, includes nearly 150 member organizations, including JPMorgan Chase, American Express, ServiceNow, Cisco, IBM, Oracle, Red Hat, and Autodesk.

Is MCP ready for enterprise production use?

MCP has decisively exited the experimental phase and is rapidly maturing for enterprise production use. Major platforms are running production deployments, and the Linux Foundation provides neutral governance. However, enterprises must pair MCP with robust security gateways, governed data platforms, and strict access controls before deploying in regulated or high-stakes environments. The AAIF’s 2026 roadmap explicitly prioritizes enterprise readiness as one of its four core focus areas.

What is the Agentic AI Foundation (AAIF)?

The AAIF is a directed fund under the Linux Foundation that provides vendor-neutral governance for MCP and related agentic AI projects. Co-founded by Anthropic, OpenAI, and Block in December 2025, it includes Platinum members such as AWS, Google, Microsoft, Bloomberg, and Cloudflare. The foundation ensures that MCP evolves through community consensus rather than being controlled by any single vendor.