Data and AI Pulse Check: Gartner 2025 D&A Summit Reflections

Sanjeev Mohan
7 min read · Mar 10, 2025


“Data is exhausting. Data is scary.” With those words, Carlie Idoine and Gareth Herschel kicked off the keynote of Gartner’s flagship 2025 Data and Analytics Summit. Five thousand data professionals nodded along, probably while clutching their stress balls and wondering if they had accidentally wandered into a therapy session.

I went to Gartner D&A expecting robots serving me coffee and AI solving all my problems. Instead? I got to hang out with a posse of five thousand of the most glamorous, best-looking data and AI professionals around.

My friend Carlie Idoine, an exemplary Gartner analyst with the patience of a saint, wants to do more gardening. And honestly, after seeing the state of modern data, I get it. While we are on the subject of leisure, check out the fascinating TIME magazine cover that turns 60 years old next month: it promised that computers would automate most tasks and that only 2% of humans would remain in the workforce, leaving us with plenty of time for leisurely activities.

Apparently, trust in data is a key theme. Who knew? “Trust your data” is right up there with “breathe air” on my list of helpful advice. All humor aside, the reality is that building and maintaining data trust is a complex and multifaceted challenge.

The Gartner Data and Analytics Summit 2025 offered a profound look at the current state and future direction of the industry. My attendance allowed me to distill the event into several key themes, which I will explore in this blog:

  1. Data Governance: The enduring importance of data governance despite the hype surrounding AI, with emphasis on metadata management.
  2. Data Practices: The real differentiator remains solid data practices — transformation, quality, DataOps, and security — to build data products that drive value in today’s competitive landscape.
  3. AI Agents and AI Governance: The disconnect between vendor enthusiasm and end-user understanding, which highlights the challenges of integrating AI into existing business processes and practical applications.

Data Governance takes the front seat…again

I was expecting AI to be the star of the show, but it turned out to be good old-fashioned data governance. Some things never go out of style!

Metadata remains prominent in data governance, fueling yet another iteration of the “renaissance” narrative. The term “active metadata,” a long-standing favorite, strikes me as superfluous. Fundamentally, metadata is metadata, irrespective of any perceived “activity.” If passive metadata were a sentient being, it would be filing a discrimination lawsuit while active metadata sweated profusely on a treadmill. In my humble opinion, active should be a verb, not an adjective: successful organizations know how to ‘activate’ their metadata better than others.

Data catalogs remain stuck in that awkward “is it a category or just a feature?” phase, and yes, they continue to be essential for metadata management. They’re not having an existential crisis, just an identity crisis. With a finite total addressable market (TAM), vendors are diversifying, expanding into areas like data quality and protection. They are also increasingly incorporating agents. One data catalog vendor’s use of Anthropic’s Model Context Protocol (MCP) particularly piqued my interest, and I’ll be exploring that further.
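To make the idea concrete, here is a minimal sketch of what exposing catalog metadata to an LLM agent over MCP could look like. It assumes the official MCP Python SDK (the mcp package) and uses a hypothetical lookup_metadata() helper in place of a real catalog API; treat it as an illustration of the protocol’s shape, not any vendor’s implementation.

```python
# Minimal sketch: a catalog-backed MCP tool (assumes the official "mcp" Python SDK).
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("catalog-metadata")

def lookup_metadata(table_name: str) -> dict:
    # Hypothetical stand-in for a real data catalog query.
    return {"table": table_name, "owner": "data-platform", "contains_pii": False}

@mcp.tool()
def describe_table(table_name: str) -> dict:
    """Return catalog metadata for a table so an MCP client (e.g., Claude) can reason over it."""
    return lookup_metadata(table_name)

if __name__ == "__main__":
    mcp.run()  # serves the tool over stdio to any MCP-capable client
```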

“Securely access your data” is right up there with “breathe air” as far as advice goes. Yet, despite this fundamental truth, data access governance vendors struggle to achieve widespread traction. Organizations, grappling with an ever-expanding array of systems, face the daunting challenge of consistently applying and enforcing data access governance policies. This implementation gap has propelled data security to the forefront, particularly in the financial services sector.

Fewer than 10% of banks meet the BCBS 239 regulatory standards despite increasing scrutiny, a point that further underscores this urgency. Capital One Software’s strategic move into sensitive data tokenization, following the launch of Slingshot for cloud cost and governance automation, further highlights the growing demand for robust security solutions. Concurrently, data lineage, often rebranded as business transparency, has re-emerged as a critical concern, reflecting the need for comprehensive data visibility and control.

Data Quality and Observability are back

Data observability vendors, after years of relative quiet, made a powerful showing at the Summit. My measure of a category’s vitality is user feedback, and data observability resonated strongly. Contrary to my expectations of consolidation, the event revealed a surprising proliferation of new entrants, echoing the ongoing expansion of the data catalog market.

A key insight emerged: data observability vendors emphasizing robust data quality capabilities are gaining significant traction. While pipeline observability was initially paramount, driven by the debugging challenges of early technologies, AI is now the catalyst for prioritizing data quality.

I had once anticipated that data observability would surpass infrastructure observability in market size. This is because data is the lifeblood of organizations while infrastructure is often managed by cloud providers, which has reduced an organization’s need to directly monitor it. However, this hasn’t yet happened for reasons unknown to me. While infrastructure observability giants like Datadog boast billions in revenue, the data observability market remains comparatively small.

FinOps and cost containment also emerged as prominent use cases, further highlighting the evolving role of data observability.

Data Transformation, DataOps and Data Products remain critical

Data products have transitioned from a nascent concept to a widely accepted business imperative, serving as the crucial vehicle for making data more easily consumable and available for data-driven decisions. The successful delivery of these data products relies heavily on robust data transformation and integration techniques which are, in turn, facilitated and automated by DataOps.

The conspicuous absence of dbt, a company now touting over $100 million in ARR and a 50% growth rate, sparked considerable discussion. This absence, coupled with their exclusion from the Data Integration Magic Quadrant, raises intriguing questions about their market positioning. Notably, I encountered several data transformation vendors poised to challenge dbt’s dominance, signaling a potential disruption in this rapidly evolving space.

Prophecy, a leader in data transformation copilots, facilitated a compelling panel discussion on “The Impact of GenAI on Data Teams,” hosted by CDO Magazine. I was privileged to share the stage with distinguished panelists from Toyota Motor Corporation and Royal Bank of Canada. Prophecy’s platform, with its distinctive low/no-code interface that dynamically interacts with full code, enables accelerated data product development and deployment, catering to a diverse range of users.

AI takes a back seat

If you came to the conference to learn about cutting-edge LLMs and agents, then you would have been disappointed. It is like going to a Sabrina Carpenter concert and finding Tracy Chapman on stage. Don’t get me wrong: Tracy is fantastic, but maybe not who you wore your glittery outfit for.

While the fervor surrounding AI application development is undeniable, a critical gap persists: AI governance has not evolved in tandem. In conversations with end-users, the concept often elicits a blank stare, signaling a profound disconnect between innovation and oversight. Surprisingly, many end-users turned the tables, asking me about AI governance instead of answering my questions.

Vendors are attempting to bridge this divide by extending existing data governance frameworks, yet the precise scope and requirements of AI governance remain nebulous. Perhaps the absence of widespread, mission-critical AI deployments has shielded us from the consequences of this governance vacuum. However, as AI permeates core business operations, this oversight will become indispensable, potentially transforming from a latent concern to a critical bottleneck.

While AI agents were a prominent feature in vendor demonstrations, their practical application and understanding among end-users remain elusive. This disconnect underscores a critical truth: in a rapidly commoditizing AI landscape, data remains the primary differentiator. The Summit’s focus on data, rather than solely on AI, was therefore prescient.

The pace of AI evolution, with cutting-edge models and techniques like RLHF becoming commonplace, necessitates a shift towards leveraging unique data assets. The pervasive concern about AI’s impact on roles is understandable. While job transformations are inevitable, I believe AI will initially serve as a powerful automation tool, freeing us to focus on higher-level strategic and creative endeavors.

Modern Data Architecture Evolution

Despite the strong showing from hyperscalers and major database players, the NoSQL presence was minimal, and standalone vector databases were nowhere to be found. This absence underscores the lingering doubt about vector databases as a standalone category, particularly as traditional vendors are quick to add vector functionality to their existing platforms.

Lakehouses also emerged as the dominant analytical storage solution. A key trend I observed was the accelerated adoption of open table formats, such as Apache Iceberg and Delta Lake, across various vendor categories, including data observability, data transformation, analytical engines, and catalogs. This move allows vendors, some previously tied to specific ecosystems, to achieve greater versatility and independence.
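As a concrete illustration of that independence, here is a minimal sketch of writing and reading an Apache Iceberg table from Spark. It assumes the Iceberg Spark runtime is on the classpath and uses an illustrative local Hadoop-catalog warehouse path; because the table’s data and metadata live in open formats, any Iceberg-aware engine or catalog can read the same table.

```python
# Minimal sketch: an Apache Iceberg table created from Spark (assumes the
# iceberg-spark-runtime jar is available; paths and names are illustrative).
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("iceberg-open-table-demo")
    # Register an Iceberg catalog backed by a local warehouse directory.
    .config("spark.sql.catalog.demo", "org.apache.iceberg.spark.SparkCatalog")
    .config("spark.sql.catalog.demo.type", "hadoop")
    .config("spark.sql.catalog.demo.warehouse", "/tmp/iceberg_warehouse")
    .getOrCreate()
)

# Data files and metadata land in open formats on disk, so other engines,
# catalogs, and observability tools can read the same table directly.
spark.sql("CREATE TABLE IF NOT EXISTS demo.db.orders (id BIGINT, amount DOUBLE) USING iceberg")
spark.sql("INSERT INTO demo.db.orders VALUES (1, 19.99), (2, 5.50)")
spark.sql("SELECT * FROM demo.db.orders").show()
```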

Graph database vendors maintained a presence in the Exhibit Hall, highlighting the ongoing appeal of the intuitive graph data model. However, I continue to observe a gap between the inherent elegance of the model and its widespread adoption among end-users. While compelling case studies exist, they often represent niche applications rather than enterprise-wide deployments. I anticipate that the emerging use of graphRAG to enhance GenAI accuracy may serve as a catalyst, bridging this gap and driving broader adoption.
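For readers new to the term, here is a toy sketch of the graphRAG pattern: retrieve an entity’s neighborhood from a knowledge graph and hand it to a model as grounded context. The graph content and the commented-out LLM call are hypothetical, and real graphRAG implementations layer much more on top of this basic idea.

```python
# Toy sketch of graph-grounded retrieval (graphRAG); data and the LLM call are hypothetical.
import networkx as nx

G = nx.Graph()
G.add_edge("Customer A", "Order 42", relation="placed")
G.add_edge("Order 42", "Product X", relation="contains")
G.add_edge("Product X", "Supplier Y", relation="supplied_by")

def graph_context(entity: str, hops: int = 2) -> str:
    """Collect relationships within `hops` of the entity and render them as prompt context."""
    nearby = nx.single_source_shortest_path_length(G, entity, cutoff=hops)
    facts = {f"{u} --{d['relation']}-- {v}" for u, v, d in G.edges(nearby, data=True)}
    return "\n".join(sorted(facts))

prompt = (
    "Answer using only these facts:\n"
    + graph_context("Customer A")
    + "\n\nQuestion: Which supplier is connected to Customer A's order?"
)
print(prompt)
# response = call_llm(prompt)  # hypothetical LLM call; the retrieved subgraph grounds the answer
```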

Conclusion

Data is not scary. It is exhilarating. We, the data people, would serve our cause better by becoming more attuned to the businesses we serve. The combination of business and IT prowess with deep knowledge of data is the only “moat” we need to survive in a rapidly transforming world.

In summary, the 2025 Gartner Data and Analytics Summit revealed a landscape where the enduring foundations of data governance and quality are paramount, even amidst the rapid evolution of AI. While AI’s potential remains significant, its successful integration hinges on robust data practices and a clear understanding of its practical applications. The Summit ultimately underscored that the future of data lies not in fleeting hype, but in the strategic fusion of established principles with emerging technologies.


Written by Sanjeev Mohan

Sanjeev researches the space of data and analytics. Most recently he was a research vice president at Gartner. He is now a principal with SanjMo.
