To MCP or Not to MCP Part 2: Economic Impact of the MCP Standard

4 min read · Apr 4, 2025

This is the second blog of our two-part series examining Anthropic’s Model Context Protocol (MCP). In the first part, Rajesh Parikh and Sanjeev Mohan explored the technical foundations: the what, why, and how of this new protocol connecting agents and models to external systems.

In this blog, we shift our focus to the broader implications, addressing the critical question: how should end-users strategically invest in this technology? We analyze who the potential beneficiaries are to provide a clear investment perspective, examine the main categories of players and their integration strategies, and look at how the gains vary for each player under each strategy.

MCP Adoption Incentives Across Various Players

Economics/Who Gains?

In the first part of the series, we looked at various integration patterns. Economic incentives are clearly not the same for different players across these forms of integration. There are far fewer official MCP servers from application players, and next to no official MCP endpoints from the original application vendors themselves. For example, OpenAI supports only an MCP client; it offers no MCP server endpoint for features such as Deep Research, even though other applications might want to access those features as tools.

AI application companies such as Cursor and Anthropic are announcing MCP client support, anticipating that independent developers and proxy startups will bridge the gap by building the server-side infrastructure. This raises a critical question: will these companies also implement MCP servers for their own tools and agents that other AI application providers can leverage? For instance, will Anthropic provide an official MCP server endpoint for its prompt and platform resources? Similarly, will Cursor expose an MCP endpoint for its code indexer/embeddings for use by other AI applications?

The client-server nature of MCP, in contrast to a peer-to-peer design, dictates that its widespread adoption hinges on a mutually beneficial exchange among all parties. While initial announcements indicate its value for AI hyper-growth startups and independent developers, true success requires broader commitment, including from the original SaaS and application players.
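To make the server side of that exchange concrete, here is a minimal sketch of what it takes for a vendor to expose one capability as an MCP server, using the FastMCP helper from the official Python SDK. The vendor name and the search_tickets tool are purely hypothetical, and the SDK surface may evolve, so read this as an illustration rather than a reference implementation.

```python
# Minimal, illustrative MCP server: a hypothetical SaaS vendor exposing one
# capability as a tool that any MCP-capable client (Claude Desktop, Cursor,
# etc.) could call. Assumes the official Python SDK: pip install mcp
from mcp.server.fastmcp import FastMCP

# The server name and the tool below are invented for illustration only.
mcp = FastMCP("acme-support")


@mcp.tool()
def search_tickets(query: str, limit: int = 5) -> list[dict]:
    """Search support tickets matching a query (stubbed backend call)."""
    # A real vendor would call its internal API here; we return canned data.
    return [{"id": i, "title": f"Ticket matching {query!r}"} for i in range(limit)]


if __name__ == "__main__":
    # Serve over stdio so a local MCP host/client can launch and talk to it.
    mcp.run()
```

The ten-odd lines are not the point; the ownership question is. Whether the official version of such a server comes from the vendor itself or from a third-party proxy is exactly what determines who captures the economic gain.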

Table 1 provides a comparative analysis of vendor integration approaches, categorized by suitable integration types as defined in Part 1 of this series. These integration patterns are analyzed across seven vendor playbooks:

  1. Cloud Hyperscalers: They have the opportunity to monetize the AI application layer and also to offer infrastructure-as-a-service to independent developers and middleware players to host and run MCP servers. In addition, they can provide popular MCP servers as a shared hosted service at scale. Examples include hyperscalers such as AWS, Azure, Google Cloud, and NVIDIA, as well as IaaS providers such as Cloudflare.
  2. Pure-Play API or Tools: These are players who monetize services via API endpoints. Some of these tool providers offer services exclusively for AI agents. Examples include Twilio.
  3. Enterprise SaaS: These are aspirational players who also participate in the agentic future by targeting their audience with copilots and AI agents. The existing product or application often embeds a complex business workflow, supported by a combination of human workflow via the UI and backend features. Leading examples include Salesforce, ServiceNow, and Slack.
  4. Specialist SaaS: These offer specialized workflows for specific customer needs as SaaS offerings. Examples include Acryl Data, Datastax, Alation, and MinIO.
  5. Agent/Tools SDK Framework & Middleware: These are players who currently provide middleware components for agents, including agent frameworks, tool proxies, and memory. Examples include LlamaIndex, LangChain, Zapier, Composio, and others.
  6. Native AI Applications: These include rising AI model providers such as OpenAI, Anthropic, and Grok, as well as AI application vendors and startups such as Cursor, Harvey, Cynepia, and others.
  7. AI application developers: These are AI developers who build integration software today, either independently for their customers or as enterprise AI developers, and who are likely to build MCP servers in the new world.
Table 1: Economics of Integration by Vendor Category

Conclusion

In summary, MCP’s economic implications reveal a landscape where Cloud Hyperscalers, native AI startups, and middleware and framework developers are poised to gain, while SaaS players face potential losses unless the standard also addresses their needs. The current client-focused adoption, with few official MCP servers from application vendors such as Anthropic and Cursor, highlights a gap that community efforts are filling; a broader commitment is clearly needed for success.

For AI application startups and application developers, the gain is clear. MCP standardizes interfaces toward the application end, enabling simpler integration and thereby faster scaling and innovation.

Undoubtedly, MCP has momentum behind it at the moment and appears to be shaping the future of AI integration. The future of MCP could well be determined by mutual benefit, as its client-server model requires both clients and servers to thrive. If MCP were to displace the OpenAPI spec by July 2025 (as predicted by a16z), it would be well on its way to becoming the de facto open standard for the agent-to-external-systems interface.

Our belief in MCP may be transient, even if it feels compelling at the moment. Today’s approach envisions a market in which agents act as controllers and external systems/tools are the things being controlled.

This may not be what the future holds. If we enter an era of intelligent systems, it will become hard to distinguish what is a tool from what is an AI application. An AI application can itself be a tool that another AI application calls for deep research or knowledge retrieval. That would require all vendors to support native MCP server endpoints, making the ecosystem effectively peer-to-peer rather than a host-client-server paradigm.
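As a thought experiment, the following sketch shows what such a dual role could look like even with today’s client-server SDK: an AI application that is an MCP server toward other agents while acting as an MCP client toward a downstream retrieval server. The application name, the knowledge_server.py dependency, and both tool names are hypothetical, and the snippet assumes the mcp Python SDK’s FastMCP and stdio client interfaces.

```python
# A hypothetical "research assistant" application that is simultaneously an
# MCP server (other agents call it as a tool) and an MCP client (it calls a
# downstream MCP server for retrieval). All names here are illustrative, and
# the snippet assumes the official mcp Python SDK's FastMCP/stdio interfaces.
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from mcp.server.fastmcp import FastMCP

app = FastMCP("research-app")

# Downstream MCP server this application depends on (assumed to exist locally).
KNOWLEDGE_SERVER = StdioServerParameters(command="python", args=["knowledge_server.py"])


@app.tool()
async def deep_research(question: str) -> str:
    """Answer a question by first retrieving context from another MCP server."""
    # Act as an MCP *client* toward the knowledge server...
    async with stdio_client(KNOWLEDGE_SERVER) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            retrieval = await session.call_tool("search_corpus", {"query": question})
    # ...then synthesize an answer. A real application would call an LLM here.
    return f"Synthesized answer for {question!r} using: {retrieval.content}"


if __name__ == "__main__":
    # Expose this application itself as an MCP *server*, so that other AI
    # applications can treat it as just another tool.
    app.run()
```

Every vendor shipping both halves of this pattern is what would make the ecosystem feel peer-to-peer, even if the wire protocol remains client-server underneath.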

Written by Sanjeev Mohan

Sanjeev researches the space of data and analytics. Most recently he was a research vice president at Gartner. He is now a principal with SanjMo.
