Databricks’ serverless database cuts app development from months to days as companies prepare for agentic AI

Five years ago, Databricks coined the term ‘data lakehouse’ to describe a new type of data architecture that combines a data lake and a data warehouse. That term and architecture are now commonplace throughout the data industry for analytical workloads.
Now, Databricks is looking to create a new category with its Lakebase service, which is generally available today. While the data lakehouse architecture addresses OLAP (online analytical processing) workloads, Lakebase is all about OLTP (online transaction processing) and operational databases. The Lakebase service has been in development since June 2025 and builds on technology Databricks gained through its acquisition of PostgreSQL database provider Neon. It was also updated in October 2025 with the Mooncake acquisition, which brought capabilities that help integrate PostgreSQL with lakehouse data formats.
Lakebase is a serverless operational database that represents a fundamental rethinking of how databases work in the age of autonomous AI agents. Early adopters, including easyJet, Hafnia and Warner Music Group, are reducing application delivery times by 75 to 95%, but the deeper innovation is Lakebase’s positioning of databases as ephemeral, self-service infrastructure that AI agents can provision and manage without human intervention.
This is not just another managed Postgres offering. Lakebase treats operational databases as lightweight, ephemeral compute running on top of lakehouse storage rather than monolithic systems that require careful capacity planning and database administrator (DBA) oversight.
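The “ephemeral, self-service” idea can be made concrete with a loose stdlib analogy. In the sketch below, throwaway sqlite3 in-memory databases stand in for the serverless Postgres instances Lakebase actually provides; the table and payload names are hypothetical, and the point is only the provision–use–destroy lifecycle that an agent could run with no DBA involvement.

```python
import sqlite3

def provision() -> sqlite3.Connection:
    """Create a throwaway database instance -- the analogue of spinning up
    a serverless Postgres database programmatically, with no capacity
    planning or DBA oversight."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")
    return conn

# An agent provisions, uses, and destroys the database in one flow.
db = provision()
db.execute("INSERT INTO events (payload) VALUES (?)", ("order_created",))
count = db.execute("SELECT COUNT(*) FROM events").fetchone()[0]
db.close()  # destruction is as cheap as creation
print(count)  # → 1
```

When creation and destruction are this cheap, a database stops being precious infrastructure and becomes something code (or an agent) allocates on demand.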
“In fact, for the vibe coding trend to take off, you need developers to believe that they can actually build new apps very quickly, but you also need a central IT team, or DBAs, to be comfortable with the tsunami of apps and databases,” Databricks co-founder Reynold Xin told VentureBeat. “Classic databases won’t reach that level because they can’t afford a DBA for each database and application.”
92% faster delivery: From two months to five days
The productivity numbers show an immediate impact even before the agentic vision materializes. Hafnia cut the delivery time of production-ready applications from two months to five days – a 92% reduction – using Lakebase as the transaction engine for its internal operations portal. The shipping company moved beyond static BI reports to real-time business applications for fleet, commercial and financial workflows.
EasyJet consolidated more than 100 Git repositories into just two and cut development cycles from nine months to four months – a 56% reduction – while building a web-based revenue management platform on Lakebase to replace a ten-year-old desktop application and large SQL Server environments.
Warner Music Group delivers data directly to production systems using Lakebase, while Quantum Capital Group uses it to store consistent, governed data for identifying and evaluating oil and gas investments – eliminating the data duplication that previously forced teams to keep multiple copies in different formats.
The acceleration comes from removing two major obstacles: provisioning databases in the first place, and maintaining ETL pipelines to synchronize operational and analytical data.
Technical architecture: Why this isn’t just managed Postgres
Traditional databases couple storage and compute – organizations provision a database instance with attached storage and scale by adding instances or storage. AWS Aurora advanced this by separating the layers with proprietary storage, but that storage remains locked within the AWS ecosystem and is not independently accessible for analytics.
Lakebase takes storage-compute separation to its logical conclusion by placing storage directly in the lakehouse. The compute layer runs essentially vanilla PostgreSQL – it maintains full compatibility with the Postgres ecosystem – but all writes land in lakehouse storage in open formats that Spark, Databricks SQL and other analytics engines can query on the fly without ETL.
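The architectural claim above is that one shared storage layer serves both a transactional write path and an analytical read path, with no synchronization pipeline between them. A deliberately toy sketch, with a Python list standing in for open-format lakehouse files (all names here are illustrative, not Lakebase APIs):

```python
# Shared storage layer: stands in for open-format files in the lakehouse.
shared_storage: list[dict] = []

def oltp_write(row: dict) -> None:
    """Transactional write path -- the role of the Postgres-compatible
    compute layer in the architecture described above."""
    shared_storage.append(row)

def analytics_total() -> float:
    """Analytical read path -- the role of an engine like Databricks SQL
    scanning the same storage directly, with no ETL step."""
    return sum(r["amount"] for r in shared_storage)

oltp_write({"order_id": 1, "amount": 10.0})
oltp_write({"order_id": 2, "amount": 5.5})
print(analytics_total())  # → 15.5; the writes are visible immediately
```

The contrast with the traditional design is that nothing copies data from the write path to the read path – both operate on the same storage, which is why the ETL pipelines mentioned earlier disappear.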
“The unique insight of the lakehouse was that data lakes separate compute and storage, which was good, but we needed to bring data management capabilities such as governance and transaction management into the data lake,” explained Xin. “We’re actually not that different from the lakehouse concept, but we’re building a lightweight, ephemeral compute layer for OLTP on top.”
Databricks built Lakebase on technology from the Neon acquisition. But Xin emphasized that Databricks has greatly expanded Neon’s original capabilities into something quite different.
“They didn’t have the enterprise experience, and they didn’t have the cloud scale,” Xin said. “We combined the novel architecture ideas of the Neon team with the robustness of the Databricks infrastructure. So now we have created a very powerful platform.”
From hundreds of databases to millions, built for agentic AI
Xin presented a vision, tied directly to the economics of AI coding tools, that explains why Lakebase is built for more than current use cases. As development costs fall, businesses will move from buying hundreds of SaaS applications to building millions of bespoke internal applications.
“As the cost of software development comes down, which we’re seeing today because of AI coding tools, enterprises will shift from SaaS sprawl to in-house application development over the next 10 to 15 years,” Xin said. “Instead of buying maybe hundreds of applications, they’ll be building millions of bespoke applications over time.”
This creates a fleet management problem that traditional methods cannot handle. You cannot hire enough DBAs to manually provision, monitor and troubleshoot thousands of databases. Xin’s solution: treat database management itself as a data problem rather than an operations problem.
Lakebase stores all telemetry and metadata – query performance, resource usage, connection patterns, error rates – directly in the lakehouse, where it can be analyzed using standard data engineering and data science tools. Instead of setting up dashboards in database-specific monitoring tools, data teams query telemetry data with SQL or analyze it with machine learning models to identify outliers and predict problems.
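The paragraph above describes fleet-wide telemetry analysis: instead of one dashboard per database, you run a query over all of them and look for outliers. A minimal sketch using only Python’s standard library – the telemetry rows here are synthetic and the database names hypothetical; in the architecture described, they would come from a lakehouse query rather than a hard-coded list.

```python
from statistics import mean, stdev

# Hypothetical fleet telemetry: (database_name, p95_query_latency_ms).
telemetry = [
    ("orders_db", 12.0), ("billing_db", 14.5), ("fleet_db", 11.8),
    ("crm_db", 13.2), ("inventory_db", 96.0),  # one misbehaving database
]

def latency_outliers(rows, z_threshold=1.5):
    """Flag databases whose latency sits more than z_threshold standard
    deviations above the fleet mean -- an ordinary analytics query that a
    data team (or an agent) can run across thousands of databases at once."""
    latencies = [lat for _, lat in rows]
    mu, sigma = mean(latencies), stdev(latencies)
    return [name for name, lat in rows if sigma and (lat - mu) / sigma > z_threshold]

print(latency_outliers(telemetry))  # → ['inventory_db']
```

The same pattern scales: swap the hard-coded list for a SQL query over lakehouse telemetry tables, and the simple z-score for a trained anomaly model, and the workflow is unchanged.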
“Instead of creating a dashboard for all 50 or 100 databases, you can look at a chart to understand when something went wrong,” Xin explained. “Database management will look very similar to an analytics problem. You look at outliers, you look at trends, you try to understand why things happen. That’s how you manage at scale when agents are creating and destroying databases programmatically.”
The implications extend to the autonomous agents themselves. An AI agent facing performance issues can query the telemetry data to diagnose the problem – treating database operations as an analytics task rather than one requiring specialized DBA knowledge. Database management becomes something agents can do for themselves using the data analysis skills they already have.
What this means for enterprise data teams
Lakebase represents a fundamental shift in how enterprises should think about operational databases – not as precious, carefully managed infrastructure requiring specialized DBAs, but as ephemeral, self-service resources that scale elastically like cloud compute.
This matters whether or not autonomous agents materialize as quickly as Databricks envisions, because the underlying architectural principle – treating database management as an analytics problem rather than an operational one – changes the skill sets and team structures companies need.
Data leaders should pay attention to the convergence of operational and analytical data occurring across the industry. When writes to an operational database can be immediately queried by analytics engines without ETL, the traditional boundary between transactional systems and data warehouses blurs. This unified architecture reduces the overhead of maintaining separate systems, but it also requires rethinking the data team structures built around those boundaries.
When the lakehouse was introduced, competitors dismissed the idea before eventually adopting it themselves. Xin expects the same trajectory for Lakebase.
“It makes sense to separate storage and compute and put all the storage in the lake – it enables more capabilities and opportunities,” he said.



