Several breakthrough announcements emerged at DAIS 2025, but the Lakeflow updates around building robust pipelines had the most immediate impact on my current code. Specifically, I can now see a clear path to persisting SCD2 (Slowly Changing Dimension Type 2) tables in the silver layer from mutable data sources. If this sentence resonates with you, we share a common challenge. If it doesn’t yet, it likely will soon.
Maintaining history through Change Data Capture is critical for both AI and foundational use cases like Single View of the Customer. However, the current state of Delta Live Tables (DLT) pipelines only allows streaming tables fed by append-only sources to maintain SCD2 logic, while most data sources permit updates. Let’s dive into the technical challenges and how Lakeflow Connect is solving them.
Slowly Changing Dimensions
There are two options for managing changes: SCD1 and SCD2.
- SCD Type 1 is focused on keeping only the latest data. This approach involves overwriting old data with new data whenever a change occurs. No history of changes is kept, and only the latest version of the data is available. This is useful when the history of changes isn’t important, such as correcting errors or updating non-critical fields like customer email addresses or maintaining lookup tables.
- SCD Type 2 keeps the historical versions of data. This approach maintains a historical record of data changes by creating additional records to capture different versions of the data over time. Each version of the data is timestamped or tagged with metadata that allows users to trace when a change occurred. This is useful when it’s important to track the evolution of data, such as tracking customer address changes over time for analysis purposes.
While basic operational reporting can get by with SCD1, almost any analytic approach benefits from history. ML models suffer when historical data is missing, and AI is more likely to hallucinate without that context. Let’s look at a simple example.
Monday Morning Dataset:
| id | name | state |
|----|------|-------|
| 1  | John | NY    |
| 2  | Jane | CA    |
| 3  | Juan | PA    |
Tuesday Update: John moves from New York to New Jersey.
| id | name | state |
|----|------|-------|
| 1  | John | NJ    |
| 2  | Jane | CA    |
| 3  | Juan | PA    |
- SCD1 Result: Overwrites John’s state, leaving only three records.
- SCD2 Result: Retains John’s NY record and adds a new NJ record, resulting in four records.
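When the SCD2 table is persisted in the silver layer, each version carries effective-date metadata. The sketch below shows one plausible shape of the Tuesday result, using the `__START_AT` / `__END_AT` column convention that DLT applies to SCD2 targets; the timestamp values themselves are illustrative assumptions.

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Plausible SCD2 silver table after Tuesday's load. The __START_AT / __END_AT
# columns mark each version's validity window; the dates are illustrative only.
scd2_rows = [
    (1, "John", "NY", "2025-06-09", "2025-06-10"),  # closed-out version
    (1, "John", "NJ", "2025-06-10", None),          # current version
    (2, "Jane", "CA", "2025-06-09", None),
    (3, "Juan", "PA", "2025-06-09", None),
]
scd2 = spark.createDataFrame(
    scd2_rows, ["id", "name", "state", "__START_AT", "__END_AT"]
)
scd2.show()
```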
The important thing to understand here is that having John’s full lifecycle is almost certainly valuable from an analytical perspective. The small additional storage cost is negligible compared to the opportunity lost by simply overwriting the data. As a general rule, I like to have SCD2 tables in the silver layer of the medallion architecture. However, DLT has had some issues with this scenario.
Challenges with the APPLY CHANGES API
In the current state, SCD updates are managed through the APPLY CHANGES API. This API is more effective than Spark’s MERGE INTO statement. MERGE INTO is relatively straightforward until you start to factor in edge cases. For example, what if there are several updates to the same key in the same microbatch? What if the changes arrive out of order? How do you handle DELETEs? Worse, how do you handle out-of-order DELETEs? However, APPLY CHANGES only worked with append-only data.
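For reference, here is a minimal sketch of how APPLY CHANGES addresses those edge cases today from an append-only CDC feed. The table and column names are hypothetical, and the code assumes it runs inside a DLT pipeline where the `dlt` module is available.

```python
import dlt
from pyspark.sql.functions import col, expr

# Hypothetical names: an append-only CDC feed landed in bronze,
# feeding an SCD2 target in silver.
dlt.create_streaming_table("customers_silver")

dlt.apply_changes(
    target="customers_silver",
    source="customers_cdc_bronze",
    keys=["id"],                                    # primary key of the source table
    sequence_by=col("sequence_num"),                # resolves out-of-order changes
    apply_as_deletes=expr("operation = 'DELETE'"),  # handles (out-of-order) deletes
    except_column_list=["operation", "sequence_num"],
    stored_as_scd_type=2,                           # keep full history in the target
)
```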
In its current state, a DLT pipeline creates a Directed Acyclic Graph (DAG) for all the tables and views in the pipeline using the metadata of the resources. Only the metadata. In many pipelines, the data from the source RDBMS has already been ingested into bronze and is refreshed daily. Let’s look at our sample dataset. On Monday, I run the DLT pipeline. While the pipeline is aware of the table’s metadata, it does not have access to its contents. Imagine a MERGE statement where no current records exist: everything is an insert. Now imagine processing the next day’s data. Again, since only the metadata is loaded into the DAG, APPLY CHANGES has no prior record of John. Effectively, only SCD1 tables can be created from mutable data sources, since the data is not loaded at this point.
The new Lakeflow release provides a mechanism where CDC can be used with Lakeflow Connect to drive SCD2 semantics even with mutable data.
What is Change Data Capture (CDC)?
Change Data Capture (CDC) is a data integration pattern that captures changes in a source system, such as inserts, updates, and deletes, through a CDC feed. The CDC feed stores a list of changes rather than the whole dataset, providing a performance opportunity. Most transactional databases, like SQL Server, Oracle, and MySQL, can generate CDC feeds automatically. When a row in the source table is updated, new rows are created in the CDC feed that contain only the changes, plus some metadata, such as the operation type (UPDATE or DELETE), as well as a column that can be used to deterministically identify order, like a sequence number. There is also an update to the APPLY CHANGES API, now called AUTO CDC INTO.
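Before looking at that, here is a minimal, hypothetical sketch of the feed itself for the John example; the column names and operation codes are assumptions, as real feeds vary by source system.

```python
# Illustrative CDC feed for the example above. Only changed rows appear,
# along with an operation code and a sequence number for deterministic ordering.
cdc_feed = [
    # Monday: initial inserts
    {"id": 1, "name": "John", "state": "NY", "operation": "INSERT", "sequence_num": 1},
    {"id": 2, "name": "Jane", "state": "CA", "operation": "INSERT", "sequence_num": 1},
    {"id": 3, "name": "Juan", "state": "PA", "operation": "INSERT", "sequence_num": 1},
    # Tuesday: only John's changed row, not the whole table
    {"id": 1, "name": "John", "state": "NJ", "operation": "UPDATE", "sequence_num": 2},
]
```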
AUTO CDC INTO
There are actually two APIs: AUTO CDC and AUTO CDC FROM SNAPSHOT. They have the same syntax as their APPLY CHANGES counterparts, but they can now correctly handle more use cases. You may have already guessed that AUTO CDC FROM SNAPSHOT has the same method signature as APPLY CHANGES FROM SNAPSHOT. The difference is that AUTO CDC FROM SNAPSHOT supports periodic ingestion of snapshots with each pipeline update. Since data, and not just metadata, is made available to the API, the call has sufficient information to correctly populate the SCD2 dataset.
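As a sketch of what this might look like in a pipeline, here is the CDC-feed variant, assuming the AUTO CDC Python entry point mirrors apply_changes under the name `dlt.create_auto_cdc_flow`; that function name, and the table names, are assumptions to verify against the current Lakeflow documentation.

```python
import dlt
from pyspark.sql.functions import col, expr

dlt.create_streaming_table("customers_silver")

# Assumption: the AUTO CDC Python API mirrors apply_changes' argument list;
# the function name and table names here are illustrative, not verified.
dlt.create_auto_cdc_flow(
    target="customers_silver",
    source="customers_cdc_bronze",
    keys=["id"],
    sequence_by=col("sequence_num"),
    apply_as_deletes=expr("operation = 'DELETE'"),
    stored_as_scd_type=2,  # maintain full SCD2 history in silver
)
```

The FROM SNAPSHOT variant points at the periodically refreshed snapshot table (our mutable bronze) plus the keys, rather than a CDC feed, which is what finally makes SCD2 in silver practical for mutable sources.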
Conclusion
Lakeflow Connect is a game-changer for data engineers, enabling SCD2 tables in the silver layer even with mutable data sources. By leveraging CDC and the new AUTO CDC INTO API, you can maintain historical data accurately, ensuring your AI and ML models have the context they need to perform optimally.
The future of data engineering is here, and it’s built on Lakeflow Connect.
Contact us to learn more about how to empower your teams with the right tools, processes, and training to unlock Databricks’ full potential across your enterprise.