To MCP or Not to MCP Part 1: A Critical Analysis of Anthropic’s Model Context Protocol

14 min read · Apr 4, 2025

As the AI landscape evolves at breakneck speed, a new term, model, or approach emerges weekly. This time, it's MCP, Anthropic's Model Context Protocol, designed to standardize how models and agents connect with external data systems and other tools.

This tweet from Sundar Pichai, CEO of Alphabet, is thought-provoking and inspired the title of our latest blog.

To provide valuable insights for businesses, Rajesh Parikh and Sanjeev Mohan explore MCP in this blog series, answering common questions about its economic value from a technology player’s viewpoint:

  1. Could MCP become a ubiquitous open standard for interfacing tools with AI agents and applications?
  2. What's the future of MCP? Should we care?
  3. What are the other tool integration patterns, and what are their trade-offs?
  4. Does MCP replace the integration patterns that exist today, or does it co-exist with them, each adding value in different deployment scenarios?
  5. How could MCP evolve? Could something better emerge and replace it?
  6. What are the economic consequences?
  7. Who benefits and who doesn't?

To answer the above questions, we have structured the blog into two parts.

Part 1 (this blog) covers:

  • The current state of tool integration with AI applications, and MCP's background.
  • The patterns and options available for integrating tools with AI applications, including what could trigger the need for a completely different, more widely acceptable standard in the future.

Part 2 looks at:

  • Who benefits from MCP?
  • Economic incentives across the integration approaches defined in Part 1.

Current State

The future of AI-driven transformation hinges on AI agents’ ability to effectively integrate with diverse software, APIs, and data sources. Standardized interfaces, providing robust communication, adaptability, and security, are essential for building the intelligent systems that will deliver real-world results.

Current AI agent integrations with external and internal tools typically rely on shim layers built upon vendor-specific APIs (REST, GraphQL, gRPC, or WebSockets), or they require tool-specific software plug-ins.

The core challenge lies in establishing seamless data exchange and action infrastructure across a diverse ecosystem of model/agent and tool providers. Currently, this gap is addressed by agent frameworks, tools, or tool proxy framework providers. They handle the necessary translation between agent developers and tool providers.

The Model Context Protocol is an open-source standard introduced by Anthropic in November 2024. This emerging messaging standard aims to establish structured interactions between AI systems and external tools.

What is MCP?

MCP connects AI applications to external data sources, such as content repositories, business tools, and development environments, to help AI produce more relevant, context-aware responses. This breaks down data silos and replaces fragmented custom integrations with a universal protocol, making AI systems more effective and scalable.

As shown in Figure 1 below, MCP follows a client-server architecture in which a host application can connect to multiple servers. MCP hosts are AI applications such as Claude Desktop, Cursor, or any other AI agent.

MCP host applications embed an MCP client library that establishes a connection with an MCP server. MCP servers are programs that expose a specific resource, tool, or capability to the host via the MCP protocol. An MCP server can connect directly to local sources on your computer, such as the filesystem, a database, or a specific file, or to a remote source, such as a SaaS application accessible via a web or custom API.

Figure 1: MCP Architecture
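Under the hood, MCP messages are JSON-RPC 2.0 envelopes exchanged between the client and server. A minimal sketch of what the discovery and invocation messages might look like on the wire follows; the method names (`tools/list`, `tools/call`) come from the MCP specification, while the `read_file` tool and its arguments are purely hypothetical examples.

```python
import json

# Sketch of the JSON-RPC 2.0 messages MCP uses on the wire.
# Method names follow the MCP spec; the "read_file" tool and its
# arguments are hypothetical examples.

def make_request(request_id, method, params):
    """Build a JSON-RPC 2.0 request envelope."""
    return {"jsonrpc": "2.0", "id": request_id, "method": method, "params": params}

# The client asks the server which tools it exposes.
list_tools = make_request(1, "tools/list", {})

# The client invokes one of the advertised tools.
call_tool = make_request(2, "tools/call", {
    "name": "read_file",                      # hypothetical tool
    "arguments": {"path": "notes/todo.md"},   # hypothetical arguments
})

# Messages are serialized to JSON for transport.
wire = json.dumps(call_tool)
decoded = json.loads(wire)
```

The point is that every capability, from a local filesystem to a remote SaaS application, is reached through this one uniform message shape.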

AI developers and hyper-growth AI companies are actively supporting MCP, strategically positioning themselves to dominate the burgeoning agentic infrastructure (call it the ‘land grab’).

Some recent developments include:

  • AI application companies like Codeium, Zed, Sourcegraph, and Cursor, and data providers such as Apollo.io, were quick to support MCP, with independent developers creating thousands of MCP servers.
  • In February 2025, the AI Engineer Summit featured a viral workshop by Mahesh Murag from Anthropic's MCP team, announcing the official registry and deepening community engagement.
  • In March 2025, Anthropic’s leading competitor, OpenAI, adopted MCP Client in its Agents SDK, planning broader integration, signaling its potential as an industry standard.
  • Also in March 2025, Anthropic released the latest version of the specification with multiple enhancements, including OAuth 2.1 to secure agent-server communication and a Streamable HTTP transport. The latter keeps the web connection between client and server open for real-time, bi-directional data flow.
  • In just the past few months, thousands of MCP server repositories have emerged on GitHub, supporting a wide range of tools.
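To make the March 2025 transport and security enhancements concrete, here is a hedged sketch of an MCP message carried over the Streamable HTTP transport: a JSON-RPC payload POSTed with an OAuth 2.1 bearer token. The endpoint URL and token are placeholders, and the request is only constructed, never sent.

```python
import json
import urllib.request

# Sketch of an MCP message over the Streamable HTTP transport:
# a JSON-RPC payload POSTed with an OAuth 2.1 bearer token.
# The endpoint URL and access token are placeholders; the request
# is built but never actually sent.

payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",
    "params": {},
}

req = urllib.request.Request(
    "https://mcp.example.com/mcp",                 # placeholder endpoint
    data=json.dumps(payload).encode("utf-8"),
    headers={
        "Content-Type": "application/json",
        "Authorization": "Bearer <access-token>",  # placeholder token
    },
    method="POST",
)
```

The same connection can then be held open by the server for streaming responses back, which is what makes the transport bi-directional.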

The preceding timeline demonstrates MCP’s rapid progression from a proposal to a widely popular open standard.

MCP’s rapid build cycle, with thousands of servers created by independent developers and support from major players like OpenAI, seems to suggest it’s becoming an open standard for agentic AI infrastructure.

The 'land grab' mentioned above reflects competitive dynamics, with hyper-growth AI companies and independent developers vying for control. This could lead to innovation, but also to fragmentation if not managed well.

Why has MCP created so much interest?

Before MCP, AI developers struggled with the complex task of building custom code for model integration with diverse data sources, including document repositories, business applications, and development environments. This resulted in significant integration complexity and scalability challenges.

MCP simplifies this process by offering a standardized protocol, enabling developers to integrate once and seamlessly connect to multiple tool providers through a client-server architecture.

In this architecture, AI applications function as MCP clients, initiating connections to an MCP server, which then acts as a proxy for the target service or application. The MCP server translates between the MCP client and the underlying service.
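The translation step described above can be sketched in a few lines: a toy MCP server handler receives a `tools/call` request and maps it onto the underlying vendor API. The CRM service, its endpoint, and all field names here are hypothetical, and the HTTP call is stubbed out so the example stays self-contained.

```python
# Toy sketch of the translation an MCP server performs: an incoming
# MCP "tools/call" request is mapped onto the underlying vendor API.
# The CRM service, endpoint, and field names are all hypothetical;
# the HTTP call is stubbed so the example is self-contained.

def crm_api_stub(path, payload):
    """Stand-in for the vendor's real REST API."""
    if path == "/v1/contacts/search":
        return {"results": [{"name": payload["query"], "id": 42}]}
    raise ValueError(f"unknown endpoint: {path}")

def handle_tools_call(request):
    """Translate an MCP tools/call request into a vendor API call."""
    params = request["params"]
    if params["name"] == "search_contacts":        # hypothetical tool name
        vendor_response = crm_api_stub(
            "/v1/contacts/search",
            {"query": params["arguments"]["query"]},
        )
        # Wrap the vendor payload in an MCP-style result envelope.
        return {"jsonrpc": "2.0", "id": request["id"],
                "result": {"content": vendor_response["results"]}}
    return {"jsonrpc": "2.0", "id": request["id"],
            "error": {"code": -32601, "message": "unknown tool"}}

reply = handle_tools_call({
    "jsonrpc": "2.0", "id": 7, "method": "tools/call",
    "params": {"name": "search_contacts", "arguments": {"query": "Ada"}},
})
```

The client never sees the vendor's REST surface; it only ever speaks MCP, which is exactly the decoupling that makes the "integrate once" claim possible.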

This approach mirrors the standardization of protocols like HTTP for web communication, promoting a more sustainable architecture for AI development. Furthermore, it fosters collaboration by allowing third-party developers to provide services.

Anthropic open-sourced the initial implementation and actively engages with the developer community through ongoing improvements and open dialogue. This transparency has cultivated a collaborative environment, driving MCP’s rapid adoption and evolution.

To understand MCP in detail, head to the original Anthropic post on MCP here and a video tutorial here.

Arguments in favor of MCP

The growing support for MCP stems from several key arguments:

  1. MCP standardizes and simplifies integration as seen from the AI application end (MCP client) compared to the current integration strategies.
  2. As an open standard championed by Anthropic, a leading AI developer, MCP is envisioned as the ‘USB-C’ of AI integration.
  3. MCP’s launch, with comprehensive documentation and starter code, empowered and engaged developers, resulting in a rapid expansion of its early adopter community. This network effect and widespread support are crucial for the emergence of any standard.
  4. Large enterprises, characterized by multiple systems and fragmented ownership, often have disparate AI development teams creating custom tool interfaces. MCP offers standardization by cleanly separating tool integration from agentic application development.

Arguments against MCP in its current form

While MCP gains traction, several arguments challenge its widespread adoption:

MCP not exactly client-server?

While MCP works as a true client-server arrangement for directly connected resources, most MCP server implementations will end up proxying the original third-party service (or server).

The host with the MCP client initiates the connection and the MCP server brokers the connection and implements the glue logic with the real server/service.

A truly open client-server standard would require the original server endpoints themselves to expose and enable the MCP protocol. It is not likely to be "MCP all the way" for the foreseeable future.

Is MCP a middleware with the “lowest common denominator” issue?

Benedict Evans observes that MCP, as middleware aiming to abstract various software APIs, faces the ‘lowest common denominator’ problem; it cannot fully support all features of underlying tools.

Most SaaS applications, such as Instacart or Salesforce, and even Anthropic itself, would not want to be a dumb tool for someone else to build an intelligent AI application with. Would Anthropic expose its Deepsearch and agents as MCP servers for other AI applications in the future?

An important critique of the MCP hype asks: what incentive do the original SaaS applications and tool providers have to support MCP, which they must for it to truly become an open standard? Would they rather become a dumb tool for another agent, or become intelligent applications/agents themselves, catering to their customers' future needs?

Vibrant community support, yet possibly lacking product guarantees

While thousands of open-source MCP servers claiming to comply with the MCP standard are available across various GitHub repositories, one still needs to choose them carefully, as there is no official assessment of their completeness, reliability, or security. For AI agents to run reliably, the MCP server orchestration layer needs to provide run-time guarantees.

Does MCP merely simplify AI agents by shifting the custom integration left, creating a new problem?

While MCP standardizes and simplifies how AI applications connect to third-party external systems, it doesn't remove the need for custom integration; all of that work moves into independent MCP servers.

This now necessitates integrating tens to hundreds of these MCP servers, then deploying and managing them and ensuring they run reliably and continuously. While this works for scaled AI applications, the benefit of this separation may be minor for AI applications that need only a few server integrations.

Integration Approaches

Having examined MCP’s features, its proponents, and its critics, let’s now review the various approaches to tool integration with AI applications, along with their respective advantages and disadvantages.

Custom Tools / Agent Framework and no MCP

AI applications interface with tools via middleware such as agent frameworks or tools-middleware services. Agent developers often lean on these SDKs/frameworks to abstract the tools interface behind custom interfaces. Agent/tools frameworks wrap the vendor SDK in a shim layer and expose a standard interface to the model; agents just invoke the function call, as shown in Figure 2.

This is the predominant integration pattern that does not involve MCP.

Figure 2: Custom Agent SDK/Tools Adapter
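The shim-layer pattern in Figure 2 can be sketched as follows: each (hypothetical) vendor SDK is wrapped in a function with a uniform signature, and the agent invokes tools by name through a single dispatch table. The tool names, the chat and database services, and their arguments are all illustrative assumptions, not any particular framework's API.

```python
# Sketch of the shim-layer pattern: each (hypothetical) vendor SDK is
# wrapped behind a uniform function signature, and the agent invokes
# tools by name through one dispatch table.

def chat_shim(args):
    """Wrap a hypothetical chat SDK behind the common interface."""
    return f"posted to {args['channel']}: {args['text']}"

def db_shim(args):
    """Wrap a hypothetical database client behind the common interface."""
    return [{"row": 1, "query": args["sql"]}]

TOOL_REGISTRY = {
    "send_message": chat_shim,
    "run_query": db_shim,
}

def invoke_tool(name, args):
    """The single entry point the agent's function-calling loop uses."""
    if name not in TOOL_REGISTRY:
        raise KeyError(f"tool not registered: {name}")
    return TOOL_REGISTRY[name](args)

result = invoke_tool("send_message", {"channel": "#ops", "text": "deploy done"})
```

This works well at small scale, but every new tool means writing, testing, and maintaining another shim inside the agent's own codebase, which is precisely the growth in complexity listed below.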

Advantages

  • Simpler integration pattern when only a few tools are required.
  • No need for an additional hop in a standalone application.

Disadvantages

  • Agent architecture complexity grows when agents need to integrate with tens to hundreds of tools, which is likely the case for general-purpose agents.
  • Not all resources or SaaS tools have integrations with agent or tools frameworks, and adding a new tool may not be easy due to inadequate framework documentation or the lack of a cleaner interface like MCP.
  • Scalability of tools to handle millions of simultaneous requests may be a concern.
  • In large enterprises, application teams tend to be spread across departments, so ownership is fragmented by design. Separation of concerns through standardization is critical to scale, rather than AI application developers trying to manage all integrations themselves.

Representative example

LangChain: For more on how this compares with MCP from the point of view of agent framework developers, read the LangChain founder's post here.

MCP Compliant Server / Proxy

An MCP-compliant server or tools-proxy service is a middle ground: it standardizes the interface toward the model end but builds custom tool adapters toward the tool/service end, as shown in Figure 3.

Figure 3: MCP Compliant Server / Proxy
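A minimal sketch of this middle ground: the proxy presents one MCP-style surface toward the model (discovery via `tools/list`, invocation via `tools/call`), while each entry is backed by a custom adapter toward a vendor. The adapter names, their schemas, and their behavior are hypothetical stand-ins.

```python
# Sketch of the proxy middle ground: one MCP-style surface toward the
# model, custom adapters toward each vendor. Adapter names, schemas,
# and behaviors are hypothetical stand-ins.

ADAPTERS = {
    "create_ticket": {
        "description": "Open a ticket in a hypothetical issue tracker",
        "handler": lambda args: {"ticket_id": f"TCK-{len(args['title'])}"},
    },
    "get_weather": {
        "description": "Fetch weather from a hypothetical provider",
        "handler": lambda args: {"city": args["city"], "temp_c": 21},
    },
}

def tools_list():
    """MCP-style discovery: advertise every adapter the proxy wraps."""
    return [{"name": name, "description": adapter["description"]}
            for name, adapter in ADAPTERS.items()]

def tools_call(name, arguments):
    """MCP-style invocation, routed to the matching custom adapter."""
    return ADAPTERS[name]["handler"](arguments)

catalog = tools_list()
ticket = tools_call("create_ticket", {"title": "login broken"})
```

The AI application only ever sees the standardized catalog; all the vendor-specific glue lives behind the proxy, which is where the separation-of-ownership advantage below comes from.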

Advantages

  • Enables scalability of tools interface.
  • Simplifies design of general purpose AI agents/applications that need to access a large number of tools.
  • Provides a bridge in cases where tools vendors don’t support MCP servers.
  • Separation of concerns between agent system and tool proxy can help distribute ownership between different teams.

Disadvantages

  • The proxy could introduce latency as it involves a hop that may not work for time-sensitive real-time AI applications.
  • Added complexity may lead to race conditions and failures.
  • Limited gains for AI applications that need only a few tools.

Representative example of an MCP Server:

Microsoft Playwright: This MCP server enables LLMs to interact with web pages and provides browser automation capabilities.

Representative examples of a tools proxy with MCP:

  1. Zapier is an example of a connector service which now also supports the MCP protocol to proxy the same set of underlying tools.
  2. Composio is a new player that started its journey as a tools middleware with a custom API acting as glue to various agent frameworks such as LangChain and CrewAI, but has now also added support for the MCP protocol.

MCP all the way

The “MCP all the way” scenario could potentially evolve into a universal interface standard, functioning similarly to how the OpenAPI specification standardizes API interactions. This approach might offer a consistent framework for defining, implementing, and interacting with various systems across different platforms and technologies.

This scenario envisions MCP server endpoints being universally supported by SaaS applications and tools vendors. That removes the need for an MCP proxy/server in the arrangement, making the protocol truly analogous to a USB-C connector in the PC/device world.

However, it would require additional effort on protocol standardization. MCP in its current form is still more a messaging format than a well-defined protocol, with authentication/trust and transport-layer interoperability yet to be fully addressed.

Figure 4 shows MCP all the way without the need for an MCP proxy.

Figure 4: MCP all the way

Advantages

  • Scalable access to a large number of external tools via a ubiquitous interface with a standard discovery and messaging format.
  • No proxy or translation overhead, which helps improve latency.

Disadvantages

  • Value of MCP is limited when the requirement is for integrating only a few tools.

Hybrid Agent / Tools Framework with MCP

Use an agent/tools framework where built-in SaaS application and tool support is readily available, and extend with an MCP client in other cases.

Figure 5 depicts this hybrid approach.

Figure 5: Hybrid Agent / Tools Framework with MCP
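The routing logic of the hybrid approach can be sketched simply: try the framework's built-in tool first, and fall back to an MCP client for everything else. Both backends here are hypothetical stand-ins rather than any real framework or SDK.

```python
# Sketch of the hybrid pattern: use the agent framework's built-in
# tool when one exists, and fall back to an MCP client otherwise.
# Both backends are hypothetical stand-ins.

BUILTIN_TOOLS = {
    "web_search": lambda args: {"source": "builtin", "hits": [args["q"]]},
}

def mcp_client_call(name, args):
    """Stand-in for invoking a remote tool through an MCP client."""
    return {"source": "mcp", "tool": name, "args": args}

def invoke(name, args):
    """Route to the built-in implementation first, then to MCP."""
    if name in BUILTIN_TOOLS:
        return BUILTIN_TOOLS[name](args)
    return mcp_client_call(name, args)

native = invoke("web_search", {"q": "mcp spec"})
remote = invoke("crm_lookup", {"email": "ada@example.com"})
```

New tools become available by simply pointing the MCP client at another server, without touching the agent's code, which is the extensibility advantage listed above.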

Advantages

  • Best of both worlds: leverages pre-built tools wherever frameworks support them.
  • A good trade-off between complexity and scalability.
  • Makes AI agents/agent frameworks open and extensible without code changes.

Disadvantages

  • A proxy may still be required in cases where MCP servers are not natively supported.

Representative examples

OpenAI appears to be following a hybrid approach, as inferred from two recent announcements:

OpenAI announced support for MCP in its Agents SDK.

OpenAI may be retaining the custom connectors for its "Deep research" and recently announced proprietary enterprise search.

According to OpenAI: “And this is just the beginning. The team is already working on the next wave of connectors, aiming to support all the key internal knowledge sources your team relies on today — from collaboration and project management tools to data analytics platforms, CRMs, and more. These new connectors will be available soon for ChatGPT Team and Enterprise customers.“

Another example of the hybrid approach comes from a startup called Cynepia, which has introduced an agent builder. Figure 6 illustrates the Xceed Agents tools catalog with an MCP client.

Figure 6: Xceed Tools Catalog with MCP Server Connectivity

Emergence of a Post-MCP Standard?

In the rapidly evolving landscape of AI, we often believe a certain narrative until the next one arrives. Could MCP turn out to be just that, with another standard emerging next year? Could a better standard emerge?

We've become accustomed to a hierarchical approach to AI system interactions. MCP mimics what we have traditionally known and operated (a classical client-server model), where AI applications act as the client and controller, and external services take the role of functionality provider (MCP server). This inherently works where AI applications are treated as controlling entities, rigidly connecting to specific external systems with predefined, limited interactions.

But if we usher in an era of intelligent systems, the boundary between what is an agent and what is a tool becomes hard to draw and often doesn't hold. In reality, even models or agents can be tools for other agents/models. For example, OpenAI's, Perplexity's, or Grok's DeepSearch could be a tool for another AI application.

MCP as a messaging format has interesting pieces that could enable this with a few additions. However, that would mean we stop envisioning MCP as merely a model-to-external-tools interface. What if AI systems could truly collaborate, dynamically discover and leverage each other's capabilities, and create more intelligent, adaptive networks of systems/applications?

If every system becomes an intelligent system, the difference between a tool and an application will likely shrink. Could the needed standard of the future then be a superset that provides both collaborative and control functions?

Figure 7 shows a conceptual diagram depicting a Distributed Agent Collaboration Protocol that is inherently peer-to-peer (P2P) rather than client-server.

Figure 7: Distributed Agent Collaboration Protocol
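The P2P idea in Figure 7 can be illustrated with a conceptual sketch: every peer both advertises capabilities and can call any other peer's capabilities, so there is no fixed client/server split. This is entirely hypothetical, not a description of any existing protocol.

```python
# Conceptual sketch of the P2P idea: every peer both advertises
# capabilities and can call any other peer's capabilities, so there
# is no fixed client/server split. Entirely hypothetical.

class Peer:
    def __init__(self, name, capabilities):
        self.name = name
        self.capabilities = capabilities  # capability name -> callable

    def discover(self, network):
        """Map each capability on the network to the peer offering it."""
        return {cap: peer
                for peer in network if peer is not self
                for cap in peer.capabilities}

    def call(self, network, capability, args):
        """Invoke another peer's capability, acting as client this time."""
        directory = self.discover(network)
        return directory[capability].capabilities[capability](args)

research = Peer("research_agent",
                {"deep_search": lambda a: f"report on {a['topic']}"})
planner = Peer("planner_agent",
               {"make_plan": lambda a: ["step 1", "step 2"]})
network = [research, planner]

# The planner consumes the researcher as a tool, and vice versa:
# the agent/tool roles swap per interaction rather than being fixed.
report = planner.call(network, "deep_search", {"topic": "MCP"})
plan = research.call(network, "make_plan", {"goal": "ship"})
```

The essential difference from MCP's client-server framing is that the roles of caller and provider are decided per interaction, not baked into the architecture.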

In all probability, the future belongs to a more interconnected, intelligent ecosystem, where all systems become intelligent and collaborate seamlessly, and the difference between what's a tool and what's an agent shrinks rapidly.

Conclusion

With OpenAI's recent adoption and growing momentum, MCP seems to be shaping the future of AI integration. Enthusiasm and adoption by AI application startups, middleware developers, and application developers have clearly given it the much-needed momentum that could help MCP standardize the interface toward the application end, enabling simpler integration and thereby rapid scaling and innovation for AI application startups.

That said, while the agent-to-middleware interface standardizes and simplifies AI application design and makes it scalable, custom integration still exists at the level of the middleware or MCP server. This raises an important question: is MCP merely about cleaning up the agent interface and shifting the custom integration left? How do you address the reliability, availability, and maintenance of all these middleware services, which may be very thin wrappers in the end? Does creating separate microservices add a new problem (and cost) to solve the old ailment that kept these applications from scaling?

The future of AI integration likely involves a hybrid approach, where MCP, while leading in specific applications, coexists with other integration strategies. Ultimately, the optimal choice depends on the context.

If we indeed enter an era of intelligent systems, the difference between a tool and an AI application will be hard to distinguish: an AI application can provide a service to, or act as a tool for, another AI application. Much of how the standard pans out also depends on the AI players backing it; it remains to be seen whether they will take the lead and expose MCP services for functionality that other AI applications gain from, rather than being mere MCP hosts/clients.

Alternatively, could we move from a world of controlled, limited AI interactions, as envisioned by MCP, to a standard that is collaborative, P2P, and brings everything together in a seamless network of interconnected intelligent systems?

We circle back to Sundar's question: 'To MCP or Not to MCP?' Despite our current inclination towards MCP's long-term viability, the debate remains open.

In Part 2, we focus specifically on the economic incentives of MCP and other integration strategies for different categories of players.

References

  1. Model Context Protocol announcement from Anthropic, November 2024
  2. Building Agents with MCP, workshop with Mahesh Murag from Anthropic
  3. MCP: Flash in the Pan or Future Standard?, LangChain
  4. https://docs.mcp.run/blog/2025/03/27/mcp-differential-for-modern-apis/
  5. https://www.linkedin.com/posts/markwoneill_api-apimanagement-apimarketplace-activity-7311342989941854209-bUim

Written by Sanjeev Mohan

Sanjeev researches the space of data and analytics. Most recently he was a research vice president at Gartner. He is now a principal with SanjMo.