Snowflake Summit 2025: Unifying the Data Universe
What do 16,000 people, a data arms race, and a packed Moscone Center have in common? They were all part of Snowflake Summit 2025 — the company’s biggest and boldest event yet.
Ever wonder who Snowflake’s biggest customer is? Or maybe you’re losing sleep trying to figure out how many companies are playing both sides, using both Snowflake and Databricks? If these are the burning questions keeping you up at night, toss those worries aside! You’re about to get the answers to these mysteries, plus a peek at how the Snowflake Summit 2025 announcements address customers’ desire for simplicity and unification of their ever-expanding, increasingly complex stacks.
For years, Snowflake Summit has been a bellwether for the data industry, and this year was no exception. It’s more than just a conference; it’s a living benchmark of the data industry’s relentless evolution. As a long-time attendee, I’ve watched Snowflake grow from a disruptive force to a foundational pillar, and what unfolded during the Summit confirms its trajectory. It continues the broader data cloud narrative that shifts, expands, and redefines possibilities.
This blog isn’t about listing every announcement; it’s about synthesizing the core messages, the unwritten strategies, and the impactful product evolutions. If you want to understand the why behind the what from the Summit, you’ve come to the right place.
Before the event started, Snowflake had released its Q1 Fiscal Year 2026 earnings (for the period ending April 30, 2025). Some highlights:
- Financials: Snowflake added 451 new customers in Q1, and 754 of the Forbes Global 2000 now use Snowflake. Q1 revenue was $1.04B, a 26% year-over-year increase, and full-year guidance calls for $4.3B in revenue.
- Product: Innovation is accelerating, with 125 new product announcements, a 100% increase over the previous year.
- AI: Cortex is now established. Over 5,200 accounts use Snowflake’s AI and machine learning features weekly.
Snowflake seems to be getting its mojo back.
To distill the announcements and understand the pivotal shifts, I have categorized my findings in Figure 1.
Note: Want a video walkthrough of these insights? Don’t miss my discussion on the ‘It Depends’ podcast, covering Snowflake Summit 2025. Watch it here: https://youtu.be/kqj3SvKgnOY
Platform
Adaptive Computing
Keeping with Snowflake’s ethos of simplicity and cost efficiency, Adaptive Computing is set to revolutionize how users manage compute resources. Traditionally, users manually select a “t-shirt size” (from XS to 6XL) for their virtual warehouses, often leading to resource underutilization or over-provisioning.
With this significant private preview enhancement, users will simply issue a CREATE ADAPTIVE WAREHOUSE <<name>> command. The system then intelligently and automatically selects the optimal warehouse size and elastically scales it based on actual consumption. This is achieved by dynamically routing queries to the most appropriate compute clusters, which are drawn from a pool of shared resources across an account. To prevent runaway costs, users can set guardrails like a maximum t-shirt size and a maximum spend limit (credits/hour).
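Because Adaptive Computing is still in private preview, the exact syntax may well change; the sketch below is illustrative only, and the guardrail parameter names are my own placeholders for the controls described above, not confirmed keywords.

```sql
-- Hedged sketch: Adaptive Computing is in private preview, so the guardrail
-- parameter names here are illustrative assumptions, not confirmed syntax.
CREATE ADAPTIVE WAREHOUSE my_adaptive_wh
  MAX_SIZE = 'XLARGE'            -- hypothetical cap on the largest t-shirt size chosen
  MAX_CREDITS_PER_HOUR = 16;     -- hypothetical spend guardrail (credits/hour)

-- From the user's perspective it behaves like any other warehouse.
USE WAREHOUSE my_adaptive_wh;
SELECT COUNT(*) FROM my_db.my_schema.orders;
```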
Gen 2 Warehouse
Snowflake’s Generation 2 (Gen 2) Standard Warehouses leverage next-generation hardware (such as Graviton3 on AWS) and intelligent software optimizations. They are generally available and specifically improve performance for:
- Write-heavy and update-heavy (DML) workloads: Expect 2x to 4x faster execution, making operations like MERGE, UPDATE, and DELETE much more efficient.
- Core analytical workloads: These see over a 2x performance improvement overall.
Users now have distinct choices for their compute needs:
- Standard Warehouse (Gen 1): The original, proven virtual warehouses.
- Standard Warehouse (Gen 2): The enhanced version, optimized for higher performance across a broad range of analytics and data engineering workloads, particularly those involving DML and large table scans.
- Snowpark-Optimized Warehouse: These are purpose-built with significantly larger memory and cache per node, making them ideal for memory-intensive operations common in Snowpark workloads, such as machine learning training and complex data processing.
Existing Gen 1 Standard Warehouses can be easily upgraded to Gen 2 via a simple ALTER WAREHOUSE command, though the warehouse must be suspended during the process. While Gen 2 offers superior performance, it’s important to note it comes with a cost multiplier (e.g., 1.35x on AWS) compared to Gen 1. However, for many workloads, the dramatic reduction in job completion times can lead to overall cost optimization and significant productivity gains.
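A minimal sketch of that upgrade path follows; the RESOURCE_CONSTRAINT value is my reading of the Gen 2 syntax and worth verifying against current documentation.

```sql
-- Hedged sketch: upgrading an existing warehouse to Gen 2.
-- The RESOURCE_CONSTRAINT value is an assumption; confirm against current docs.
ALTER WAREHOUSE my_wh SUSPEND;                                    -- must be suspended first
ALTER WAREHOUSE my_wh SET RESOURCE_CONSTRAINT = 'STANDARD_GEN_2';
ALTER WAREHOUSE my_wh RESUME;
```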
Other Performance Improvements
Beyond the advancements of Adaptive Computing and Gen 2 Warehouses, Snowflake continues to deliver broad-ranging performance improvements across its platform:
- Snowpark (Spark execution): Over the past year, Snowflake has shown Snowpark outperforming Managed Spark for core analytical workloads, with some benchmarks indicating up to 1.9x faster execution for comparable workloads run on Gen 2 warehouses versus Managed Spark.
- Snowflake Native Tables: Snowflake has delivered 2.1x faster performance for core analytics workloads on its native tables over the past 12 months.
- Snowflake-Managed Iceberg Tables: These now offer performance on par with Snowflake’s native tables.
- Externally-Managed Iceberg Tables: These are demonstrating more than 2x better performance compared to traditional external tables, thanks to Snowflake’s optimized Parquet scanner and local caching on the warehouse.
These are continuous and often automatic performance enhancements for all types of workloads.
Databases
Snowflake recently announced its intent to acquire Crunchy Data, a prominent open-source PostgreSQL provider, for approximately $250 million. This strategic move, which will introduce Snowflake Postgres, aims to deeply integrate enterprise-grade transactional capabilities directly into the Snowflake AI Data Cloud.
This acquisition unfolded amidst a fascinating backdrop. Databricks has made a tradition of grabbing headlines around Snowflake Summit, with significant, billion-dollar acquisitions in recent years — MosaicML (2023) and Tabular (2024) being prime examples. This year, Databricks continued the trend by announcing its acquisition of Neon, a serverless PostgreSQL database, for a reported $1 billion.
The near-simultaneous acquisition of two PostgreSQL companies by these rival giants undeniably highlights PostgreSQL’s surging importance in the OLTP (Online Transaction Processing) and AI agent categories. On the surface, this timing might lead some to believe Snowflake was merely reacting to Databricks’ move. However, insider information suggests that Snowflake’s acquisition of Crunchy Data was already well in motion before Databricks’ Neon announcement. Furthermore, Snowflake was reportedly also exploring an OEM partnership with Neon, underscoring its independent strategic focus on enhancing transactional capabilities with PostgreSQL.
This acquisition strategy serves two purposes for both Snowflake and Databricks:
- Firstly, it ensures end-to-end visibility across the entire data lifecycle. Data’s journey originates in operational and transactional systems, where it’s first created. By bringing OLTP capabilities in-house, both vendors can now offer a more comprehensive data platform, connecting the real-time world of applications directly with their robust analytical and AI services.
- Secondly, and more forward-thinking, is the strategic positioning within the rapidly maturing AI agentic arena. As organizations increasingly adopt multi-agent architectures, these agents will depend on underlying databases to store their context, state, and relevant data. These “agent databases” are expected to be exceptionally fast to spin up and down, catering to transient, bursty workloads.
While traditional relational databases were primarily designed for long-term data persistence, the emerging agentic use case, demanding rapid transience, is yet to be fully proven at scale. PostgreSQL’s versatility and strong developer ecosystem, however, make it a strong contender for this role, especially with extensions like pgvector for vector embeddings.
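To make the “agent database” idea concrete, here is a minimal sketch using plain PostgreSQL with the pgvector extension; the table, columns, and dimensionality are illustrative only.

```sql
-- Illustrative agent-memory pattern on PostgreSQL with pgvector.
CREATE EXTENSION IF NOT EXISTS vector;

CREATE TABLE agent_memory (
    id         bigserial PRIMARY KEY,
    agent_id   text NOT NULL,
    content    text,
    embedding  vector(3),                 -- toy dimensionality; real models use e.g. 1536
    created_at timestamptz DEFAULT now()
);

-- Retrieve the most relevant context for an agent via nearest-neighbor search.
SELECT content
FROM agent_memory
WHERE agent_id = 'agent-42'
ORDER BY embedding <-> '[0.1, 0.2, 0.3]'  -- placeholder query embedding
LIMIT 5;
```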
The two acquired companies, Crunchy Data and Neon, embody distinct approaches. Crunchy Data is renowned for its DoD-hardened, enterprise-grade PostgreSQL solutions, deeply rooted in on-premise and Kubernetes deployments, with a strong focus on security and compliance since 2012.
In stark contrast, Neon is a true cloud-native, serverless PostgreSQL platform, having launched its core offering around 2021, distinguished by its unique separation of storage and compute, enabling features like database branching and scale-to-zero capabilities. While both are PostgreSQL-compatible, their architectural philosophies and immediate use cases diverge significantly beyond that core compatibility.
So what happened to Snowflake’s Unistore (hybrid tables)?
Unistore’s approach to transactional workloads involved extending Snowflake’s native columnar capabilities to support low-latency, high-concurrency operational patterns directly within the platform. This aimed to unify analytical and transactional data in a single system, simplifying data architectures.
However, the Crunchy Data acquisition highlights the challenge of stretching a columnar store to cover operational workloads that a row store is far better optimized for; it brings a battle-tested, developer-favorite PostgreSQL row store directly into the AI Data Cloud.
Data Engineering
Continuing its mission to bring the entire data lifecycle under one roof, Snowflake’s latest announcements are focused on empowering data engineers. The goal is to provide seamless capabilities for ingesting raw data, and to build, manage, and execute the most complex data transformations within Snowflake’s single, secure, and unified environment.
Data engineering, often seen as the “workhorse” of the data world, is a critical battleground for Snowflake. While perhaps not as “sexy” as cutting-edge AI, Snowflake is clearly pulling out all the stops because, over the last few years, some of this essential work migrated off its platform. Organizations began performing certain data engineering tasks directly on raw files within object stores, primarily driven by the perceived cost savings.
Snowflake’s latest announcements are designed to reverse this trend and make the platform the native, unified, and governed home for all data engineering activities. This includes making real-time pipelines first-class citizens alongside traditional batch ELT, by enhancing capabilities like Snowpipe Streaming, OpenFlow, and dbt Projects on Snowflake, and by improving performance for Iceberg Tables (both managed and external) and Snowpark (Spark execution), as mentioned earlier.
OpenFlow
Snowflake’s OpenFlow is a powerful new capability for unified data ingestion and pipeline orchestration, built on the open-source Apache NiFi technology. Its foundation is Snowflake’s November 2024 acquisition of Datavolo, a company co-founded by Joe Witt, the original creator of Apache NiFi during his tenure at the NSA. NiFi was designed first for ingesting and processing unstructured data and later expanded to structured formats, which makes OpenFlow versatile across diverse data sources.
OpenFlow offers flexible deployment options. It can be accessed directly via Snowsight to visually build data pipelines. These pipelines can then be deployed either onto Snowpark Container Services (SPCS) or run within a customer’s own Virtual Private Cloud (BYOC — Bring Your Own Cloud). At the Summit, the AWS BYOC version of OpenFlow reached General Availability (GA), while the version running on SPCS in both AWS and Azure remains in public preview.
OpenFlow’s extensive connectivity boasts over 200 built-in connectors. These span a vast range of sources, from traditional databases and SaaS applications to real-time streaming services. Crucially, OpenFlow is not limited to Snowflake as a destination; it can also write to non-Snowflake targets, enabling broader data orchestration.
This move mirrors a parallel strategy by Databricks, who announced Lakeflow in 2024, built upon their Arcion acquisition, serving a similar purpose. Both Snowflake’s OpenFlow and Databricks’ Lakeflow now possess capabilities that directly overlap with established data integration vendors like Fivetran and Matillion. While these new offerings will require time to mature to full enterprise scale, their emergence reinforces the accelerating consolidation trend within the data and analytics landscape. This intensifies competition within the estimated $15 billion data integration market, which is growing at a CAGR of 12%.
Furthermore, OpenFlow is seamlessly integrated into Snowflake’s governance and observability ecosystem. Built-in observability via Snowflake Trail now extends to OpenFlow, providing comprehensive monitoring and end-to-end data lineage. This ensures that as OpenFlow becomes the single pane of glass for building and deploying streaming and batch pipelines, the visibility and governance become robust. Additionally, Snowflake continues to champion open standards, now supporting OpenLineage for enhanced interoperability.
When the destination is a Snowflake database, OpenFlow uses Snowpipe to write to it. Traditionally, Snowpipe handled file-based loading in micro-batches; Snowpipe Streaming, by contrast, is designed for continuous, low-latency, row-based ingestion directly into Snowflake tables. It bypasses intermediate cloud storage, reducing latency and simplifying pipelines for use cases like real-time streaming and Change Data Capture (CDC).
At the Summit, Snowpipe Streaming’s billing changed from being based on the serverless compute resources utilized and the number of active client connections to being based on the volume of data ingested. In the process, its price effectively dropped by 50%.
SnowConvert AI
Snowflake’s goal is to simplify and accelerate legacy system migrations, and it is doing so with SnowConvert AI, an AI agent that leverages the capabilities of Snowflake Cortex AI and is free to use.
SnowConvert AI provides an automated, end-to-end migration experience that analyzes and transforms code, schemas, and data from various legacy platforms, generating highly accurate Snowflake-compatible outputs. Its key capabilities include:
- Automated SQL Conversion: Snowflake estimates SnowConvert AI can achieve over 90% automatic SQL code conversion and up to 2–3x faster conversion/testing.
- Comprehensive Scope: Beyond just SQL, it generates test cases, translates entire schemas, and can even build synthetic data for validation purposes.
- Broad Source Support: SnowConvert AI supports a wide array of legacy databases including Oracle, Teradata, SQL Server, Amazon Redshift, PostgreSQL, Google BigQuery, Greenplum, Netezza, Sybase IQ, and Databricks SQL. It extends to assist in migrating ETL tools (like Informatica and SSIS) and BI tools (such as Power BI).
By automating these complex and often time-consuming tasks, SnowConvert AI empowers organizations to modernize their data infrastructure faster, reduce migration risks, and unlock the full value of the Snowflake AI Data Cloud.
dbt Projects, Workspaces & Data Projects
Snowflake is enhancing the developer experience and streamlining data engineering workflows by introducing dbt Projects natively within its platform. This new service allows developers to build, test, deploy, and monitor dbt Core projects directly in Snowflake. Existing dbt projects can be seamlessly imported into this integrated environment, which runs inside the intuitive, file-based development interface known as Snowsight Workspaces.
Snowsight Workspaces serve as a unified “single pane of glass” for data professionals. They consolidate essential tools like a robust SQL editor, Python notebooks, Streamlit applications, and now, integrated dbt capabilities for managing data pipelines. This collaborative environment enables users to easily share their workspaces and work together efficiently. Crucially, it offers bidirectional integration with external Git repositories (including GitHub, GitLab, Azure DevOps, Bitbucket, and AWS CodeCommit), facilitating seamless code management, version control, and a streamlined development lifecycle.
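For the Git integration specifically, here is a hedged sketch of how a repository can be wired up from SQL; the integration name, organization, and URLs are placeholders, and current syntax should be checked before relying on it.

```sql
-- Hedged sketch: connecting Snowflake to an external Git repository.
-- Names and URLs are placeholders.
CREATE OR REPLACE API INTEGRATION my_github_integration
  API_PROVIDER = GIT_HTTPS_API
  API_ALLOWED_PREFIXES = ('https://github.com/my-org/')
  ENABLED = TRUE;

CREATE OR REPLACE GIT REPOSITORY analytics.dev.pipeline_repo
  API_INTEGRATION = my_github_integration
  ORIGIN = 'https://github.com/my-org/pipeline-repo.git';
```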
Further bolstering Snowflake’s declarative approach is the introduction of Data Projects. This framework allows users to manage collections of Snowflake objects (like tables, views, and stored procedures) as a single, version-controlled unit, primarily using SQL or YAML. Data Projects automate the complex process of managing changes and code dependencies, ensuring consistency and reliability across environments.
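Data Projects themselves are defined in SQL or YAML manifests; as a flavor of the same desired-state idea, here is a minimal sketch using Snowflake’s CREATE OR ALTER, which converges an object to its stated definition rather than scripting incremental changes. The table is illustrative, and this is not the Data Projects manifest format itself.

```sql
-- Declarative, idempotent definition: re-running converges the table to this
-- shape instead of failing or duplicating it. Illustrative of the desired-state
-- style; not the Data Projects manifest format itself.
CREATE OR ALTER TABLE analytics.sales.daily_revenue (
    sale_date    DATE          NOT NULL,
    region       VARCHAR       NOT NULL,
    revenue_usd  NUMBER(18,2)
);
```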
Finally, reinforcing its commitment to robust DevOps practices, Snowflake now fully enables Terraform integration. As a leading infrastructure-as-code (IaC) automation tool, Terraform allows teams to provision and manage Snowflake resources programmatically, addressing CI/CD (Continuous Integration/Continuous Deployment) and comprehensive DevOps use cases. Snowflake’s solution team also uses external DataOps services like DataOps.live, which is available on Snowflake Marketplace.
Collectively, these advancements underscore Snowflake’s strategy to bring advanced, integrated DevOps functionality directly onto its platform, fostering simplicity, governance, and efficiency for the entire data development lifecycle.
Artificial Intelligence
While some perceived Snowflake as initially slow to jump on the rapidly accelerating AI bandwagon, the company is now making aggressive strides. Building on its robust data foundation, Snowflake is actively integrating cutting-edge AI capabilities directly into its platform, aiming to empower every user, from business analysts to data scientists, to leverage AI and machine learning on their data with ease, security, and scalability.
Semantic Views
Snowflake is unifying business logic directly within its core engine by introducing Semantic Views. This move addresses a long-standing challenge where semantic definitions (such as metrics, facts, and dimensions) were often siloed within individual BI tools, catalogs, or declarative layers like dbt.
Semantic Views are a powerful new schema-level object designed to store all critical semantic model information natively in the database. This ensures a single source of truth for key business metrics and definitions. Consequently, earlier approaches like the Cortex Analyst YAML file are no longer needed, as the semantic layer is now baked directly into Snowflake.
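A hedged sketch of what a semantic view definition looks like follows; the clause structure reflects the announced syntax as I understand it, while the database, tables, and metrics are illustrative and the exact grammar should be checked against current documentation.

```sql
-- Illustrative only: verify clause names and grammar against current docs.
CREATE OR REPLACE SEMANTIC VIEW sales_semantics
  TABLES (
    orders    AS analytics.sales.orders    PRIMARY KEY (order_id),
    customers AS analytics.sales.customers PRIMARY KEY (customer_id)
  )
  RELATIONSHIPS (
    orders_to_customers AS orders (customer_id) REFERENCES customers
  )
  FACTS (
    orders.order_amount AS amount
  )
  DIMENSIONS (
    customers.region AS region,
    orders.order_date AS order_date
  )
  METRICS (
    orders.total_revenue AS SUM(orders.order_amount)
  );
```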
This foundational shift offers immense benefits:
- Consistency Across Tools: By centralizing semantics in the database, any accessing tool — from traditional BI platforms like Sigma and Hex, to more advanced data applications — can apply business metrics and definitions consistently, eliminating discrepancies and ensuring trust in data insights.
- AI Accuracy: Large Language Models (LLMs) that convert natural language to SQL (text-to-SQL) can now leverage these native semantic views to improve query accuracy and relevance, understanding the true business context behind the data.
- Performance Gains: Storing semantic definitions right next to the data they describe can also contribute to improved query performance, as the database engine can optimize execution paths with full semantic awareness.
However, storing semantics in the database, in various layers of the data stack, or in specialized semantic layer platforms like AtScale or Cube.dev each presents distinct trade-offs. Buyers should therefore carefully evaluate these approaches to determine the optimal fit for their specific organizational needs and use cases.
Cortex AISQL
Snowflake is democratizing access to generative AI by introducing a suite of Cortex AI Functions directly embedded into its SQL engine. These powerful new AI operators allow users to analyze both structured and unstructured data using familiar SQL syntax, bringing advanced AI capabilities within reach of every data professional.
Common use cases include:
- Entity Extraction: Identify key entities (e.g., names, organizations, dates) from text.
- Sentiment Analysis: Determine the emotional tone of text.
- Summarization: AI_SUMMARIZE condenses long documents into concise summaries.
- Translation: AI_TRANSLATE converts text between languages.
- Document Parsing and Question Answering: For instance, the AI_EXTRACT_ANSWER function enables users to ask specific questions directly about PDF documents.
Currently, Snowflake supports processing text and image data with these functions, with audio support actively on the roadmap. Behind the scenes, Snowflake leverages and manages access to leading large language models (LLMs) from top-tier providers such as OpenAI, Anthropic, Meta, Mistral, and Google Gemini, abstracting away the complexity for the user.
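To make this concrete, here is a hedged sketch using the established SNOWFLAKE.CORTEX function form, which the newer AI_* operators build on; the table and columns are illustrative, and exact names and signatures for the new operators should be confirmed in current documentation.

```sql
-- Illustrative queries over a hypothetical support_tickets(ticket_text) table.
SELECT
    ticket_text,
    SNOWFLAKE.CORTEX.SENTIMENT(ticket_text)               AS sentiment_score,
    SNOWFLAKE.CORTEX.SUMMARIZE(ticket_text)               AS summary,
    SNOWFLAKE.CORTEX.TRANSLATE(ticket_text, 'de', 'en')   AS ticket_in_english,
    SNOWFLAKE.CORTEX.EXTRACT_ANSWER(ticket_text,
        'What product is the customer asking about?')     AS product_mentioned
FROM support_tickets;
```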
Snowflake Intelligence
Snowflake Intelligence is a sophisticated new natural language interface that enables both business users and data professionals to interact with and query structured and unstructured data in plain English, democratizing access to information across the entire AI Data Cloud.
A key differentiator is its seamless integration with Snowflake’s robust native governance framework and access controls, ensuring that insights are always delivered securely and only to authorized users.
A standout feature within Snowflake Intelligence is “Agentic Deep Research.” This advanced capability leverages multi-step reasoning LLMs that run securely within the Snowflake platform’s trust boundary. This allows the system to go beyond simple queries, enabling it to break down complex questions, perform iterative analysis, and synthesize comprehensive answers, acting much like an intelligent data assistant to uncover deeper insights.
Agents and Assistants
Throughout this discussion, we’ve seen various manifestations of Snowflake’s AI capabilities, many of which leverage the power of Cortex AI — Snowflake’s fully managed service for AI. Cortex AI serves as the fundamental underlying substrate that enables these intelligent features. Specifically, we’ve encountered examples that function as specialized Cortex agents or embed generative AI directly, like SnowConvert AI, AISQL, and Snowflake Intelligence.
Beyond these, Snowflake is actively launching and expanding AI capabilities designed to empower various roles, including Data Scientists. Cortex AI provides the tools and infrastructure for data scientists to build, train, and deploy machine learning models, perform advanced analytics, and leverage features like vector search for sophisticated AI applications directly within the secure and governed Snowflake environment.
Perhaps the most compelling testament to Cortex AI is that Snowflake’s largest customer is Snowflake itself. Through various product demonstrations, we observed a burgeoning ecosystem of internal AI assistants deployed for its nearly 8,000 employees, designed to boost productivity across sales, quality assurance, and broader employee operations.
Arctic Model
SQL’s remarkable 50-year longevity is a testament to its enduring power as the de facto standard for querying structured data. Yet, despite its persistence, the promise of natural language (NL) to SQL conversion via large language models (LLMs) has, thus far, struggled with consistent reliability and accuracy. To address this critical hurdle, Snowflake has unveiled Arctic-Text2SQL-R1, a specialized 32-billion parameter model engineered to significantly enhance the accuracy of text-to-SQL translations.
A notoriously tricky problem in foundation model (FM) development is robustly evaluating output, especially in dynamic environments with complex business logic where traditional supervised fine-tuning (SFT) on labeled data falls short. Snowflake’s answer is “execution-aligned reinforcement learning”: it pairs a reinforcement learning algorithm, Group Relative Policy Optimization (GRPO), with an execution-based reward signal, directly optimizing the model on the correctness and efficiency of the generated SQL when it is executed against the database.
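For readers who want the intuition, here is a simplified sketch of the group-relative advantage at the heart of GRPO (my paraphrase, not Snowflake’s published notation): for each question the model samples a group of G candidate SQL statements, each candidate is executed and scored with a reward r_i (for example, whether it runs and returns the correct result), and each candidate is judged relative to its own group rather than by a separately trained value model:

```latex
A_i = \frac{r_i - \operatorname{mean}(r_1, \ldots, r_G)}{\operatorname{std}(r_1, \ldots, r_G)}
```

Candidates whose SQL executes correctly and efficiently are pushed up relative to their peers in the same group, which is what makes the training “execution-aligned.”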
Furthermore, recognizing that LLM inference runtimes vary widely in optimization (some prioritizing speed, others cost), Snowflake has introduced Arctic Inference. This service employs dynamic parallelization strategies that adapt in real time to traffic patterns. For instance, during low traffic it might use tensor parallelism, while under higher loads it switches to Arctic Sequence Parallelism. By intelligently varying the number of GPUs and the underlying parallelization techniques, Arctic Inference is designed to deliver twice the inference speed of many open-source alternatives while maintaining superior cost efficiency. It can also be deployed as a plugin for vLLM, a popular open-source inference engine, giving users maximum flexibility.
Storage and Catalog
Snowflake’s latest announcements for open formats, governance, and the marketplace bolster its vision for a unified data ecosystem.
Iceberg
Snowflake is actively implementing v3 of the Iceberg specification, bringing advanced capabilities like Merge-on-Read (MOR) for DML operations that enhance the performance and flexibility of data manipulation within Iceberg tables.
It now supports seamless read and write operations for both Snowflake-managed Iceberg Tables (where data and metadata are managed by Snowflake’s engine for optimal performance) and Externally-Managed Iceberg Tables (where data resides in external object storage and metadata is managed by an external Iceberg REST Catalog). For the latter, Snowflake can connect to any Iceberg REST Catalog, including its own Polaris, AWS Glue, or others, allowing it to read and write tables directly to and from these external catalogs via standard REST OpenAPI calls.
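As a concrete illustration of the two flavors, here is a hedged sketch; the volume, catalog-integration, and table names are placeholders, and the externally-managed variant in particular varies with the catalog being used.

```sql
-- Snowflake-managed Iceberg table: Snowflake's engine owns data and metadata,
-- but files live in open Iceberg/Parquet format on an external volume.
CREATE ICEBERG TABLE analytics.sales.orders_iceberg (
    order_id   NUMBER,
    order_date DATE,
    amount     NUMBER(18,2)
)
  CATALOG = 'SNOWFLAKE'
  EXTERNAL_VOLUME = 'my_s3_volume'           -- placeholder external volume
  BASE_LOCATION = 'orders_iceberg/';

-- Externally-managed Iceberg table: metadata lives in an external Iceberg REST
-- catalog (Polaris, AWS Glue, ...) referenced through a catalog integration.
CREATE ICEBERG TABLE analytics.sales.events_iceberg
  CATALOG = 'my_rest_catalog_integration'    -- placeholder catalog integration
  EXTERNAL_VOLUME = 'my_s3_volume'
  CATALOG_TABLE_NAME = 'events';
```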
Organizations increasingly seek to build future-proof, open, and interoperable data platforms, making support for various open table formats (OTFs) a critical capability. Each OTF comes with its own strengths and weaknesses, and many analytical engines support multiple formats.
In a significant move towards broader interoperability, Snowflake’s Polaris Catalog introduced a new REST endpoint specifically designed to support non-Iceberg tables like Delta Lake and Apache Hudi. This functionality is currently available in a preview release. Users can create, delete, and read these Delta Lake tables directly from Spark using a new client library provided for Polaris, with support for other engines expected soon. This capability helps organizations centralize metadata and unify governance across their fragmented data lakes, reducing the need for separate catalogs and simplifying data discovery and access control for diverse open table formats.
While the broader industry trend clearly leans towards open table formats, Snowflake maintains that its proprietary micro-partitioned storage format currently offers distinct advantages, including superior compression, native encryption, optimized replication, and other significant performance benefits. However, Snowflake’s strategic direction is undeniably evolving. Its significant contributions to the Iceberg spec, such as the VARIANT data type being contributed to Iceberg spec v3, signal a clear intent towards convergence. This indicates a future where the lines between Snowflake’s proprietary format and open formats could further blur, offering users the best of both worlds.
Horizon
Horizon is Snowflake’s comprehensive and built-in data governance and discovery suite, designed to provide end-to-end control and visibility across the AI Data Cloud. Key capabilities include:
- Policy-Driven Data Protection: Offers fine-grained, read-only access control to data, including tables and views, through robust data protection policies (a policy sketch follows this list).
- Full AI Model Lifecycle Management: Provides complete governance over the lifecycle of AI models, from development and deployment to monitoring and retirement.
- Application & Data Product Governance: Governs the metadata, usage, and access to Native Apps and Data Products within the Marketplace, ensuring responsible data sharing and consumption. (It governs the apps and products themselves, not necessarily the data inside them if that data lives outside Snowflake.)
- Integrated Discovery: Extends its discovery capabilities to metadata from external relational databases (such as PostgreSQL, MySQL, SQL Server, Oracle, and Databricks) and even popular BI tools (like Power BI and Tableau), creating a holistic view of an organization’s data landscape.
- Comprehensive Lineage: Horizon provides deep lineage insights, supporting OpenLineage and enabling users to visualize data flow from upstream ingestion systems, popular data transformation tools like dbt, and orchestration platforms like Airflow, all without leaving the Snowflake environment.
- Catalog Interoperability: It connects seamlessly with any Iceberg REST Catalog (IRC), including Snowflake’s own Polaris, or external catalogs like AWS Glue, Dremio (Nessie), or Apache Gravitino, providing a unified metadata layer.
- Copilot Integration: While not a direct governance feature of Horizon, Snowflake Copilot (the AI assistant) is designed to leverage Horizon’s governed data and metadata, ensuring that AI-driven interactions adhere to established policies.
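As one concrete example of the policy-driven protection Horizon governs, here is a minimal masking-policy sketch; the policy, table, and role names are illustrative.

```sql
-- Mask email addresses from everyone except an authorized role.
CREATE OR REPLACE MASKING POLICY governance.policies.email_mask
  AS (val STRING) RETURNS STRING ->
  CASE
    WHEN CURRENT_ROLE() IN ('PII_ANALYST') THEN val
    ELSE '*** MASKED ***'
  END;

ALTER TABLE crm.public.customers
  MODIFY COLUMN email SET MASKING POLICY governance.policies.email_mask;
```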
Marketplace
The Snowflake Marketplace continues to be a cornerstone of the AI Data Cloud, serving as an ecosystem where partners offer both data products and, increasingly, Native Apps. These applications run directly within the customer’s Snowflake account, leveraging its security and governance, and provide out-of-the-box solutions built on shared data. This fosters a rich environment for innovation and collaboration within the Snowflake ecosystem.
Before we conclude this blog, let’s address one of the burning questions posed at the beginning regarding the overlap between Snowflake and Databricks customers. According to July 2024 ETR (Enterprise Technology Research) data, approximately 40% of Snowflake customers also utilize Databricks, while conversely, about 60% of Databricks customers are also Snowflake users. In June 2025, those numbers increased to 52% and 61% respectively according to discussions with ETR. This significant overlap underscores that, rather than a zero-sum game, many enterprises are leveraging the complementary strengths of both platforms within their complex data ecosystems.
Closing Thoughts
The sheer volume and strategic depth of announcements demonstrate Snowflake’s bold moves into all aspects of the data lifecycle. From transactional databases with Snowflake Postgres to the relentless focus on bringing data engineering and AI capabilities natively onto the platform, the breadth of this Summit speaks volumes. Snowflake’s commitment to simplicity and unification is no longer just an ethos but a tangible reality, delivered through innovative features like OpenFlow, SnowConvert AI, Semantic Views, and the powerful Arctic models. This Summit reinforced that Snowflake is not merely a data warehouse but a comprehensive, intelligent, and increasingly self-optimizing platform poised to accelerate the data-driven and AI-powered future for every enterprise.
The journey continues, and the momentum is undeniable. Until Snowflake Summit 2026…