1. Data Ingestion with Lakeflow Connect
This course provides a comprehensive introduction to Lakeflow Connect as a scalable and simplified solution for ingesting data into Databricks from a variety of data sources. You will begin by exploring the two types of connectors within Lakeflow Connect (Standard and Managed), then learn about ingestion techniques including batch, incremental batch, and streaming, and finally review the key benefits of Delta tables and the Medallion architecture.
From there, you will gain practical skills to efficiently ingest data from cloud object storage using Lakeflow Connect Standard Connectors with methods such as CREATE TABLE AS (CTAS), COPY INTO, and Auto Loader, along with the benefits and considerations of each approach. You will then learn how to append metadata columns to your bronze-level tables during ingestion into the Databricks Data Intelligence Platform. This is followed by working with the rescued data column, which captures records that don’t match the schema of your bronze table, including strategies for managing this rescued data.
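As a brief illustration of the ingestion patterns named above, the Databricks SQL sketch below shows CTAS, COPY INTO, and Auto Loader side by side, along with the _metadata and _rescued_data columns. The catalog, schema, and volume path (demo.bronze, /Volumes/demo/landing/orders/) are placeholders, not names from the course.

```sql
-- Hypothetical catalog, schema, and landing paths; adjust to your workspace.

-- CTAS: one-time batch load into a Delta table.
CREATE OR REPLACE TABLE demo.bronze.orders_ctas AS
SELECT * FROM read_files('/Volumes/demo/landing/orders/', format => 'parquet');

-- COPY INTO: idempotent incremental batch loads (previously loaded files are skipped).
CREATE TABLE IF NOT EXISTS demo.bronze.orders_copy;
COPY INTO demo.bronze.orders_copy
FROM '/Volumes/demo/landing/orders/'
FILEFORMAT = CSV
FORMAT_OPTIONS ('header' = 'true', 'inferSchema' = 'true')
COPY_OPTIONS ('mergeSchema' = 'true');

-- Auto Loader as a streaming table, appending metadata columns at ingestion time.
-- Records that do not match the inferred schema land in the _rescued_data column.
CREATE OR REFRESH STREAMING TABLE demo.bronze.orders_autoloader AS
SELECT
  *,
  _metadata.file_name AS source_file,
  current_timestamp() AS ingest_time
FROM STREAM read_files('/Volumes/demo/landing/orders/', format => 'json');
```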
The course also introduces techniques for ingesting and flattening semi-structured JSON data, as well as enterprise-grade data ingestion using Lakeflow Connect Managed Connectors.
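For the JSON flattening topic, a minimal sketch with hypothetical table and column names might look like the following, using the colon path syntax to extract nested fields and explode to turn an array into one row per element.

```sql
-- demo.bronze.raw_events and its raw JSON string column are illustrative placeholders.
SELECT
  raw:order_id::bigint      AS order_id,
  raw:customer.name::string AS customer_name,
  item.sku,
  item.qty
FROM demo.bronze.raw_events
LATERAL VIEW explode(
  from_json(raw:items, 'array<struct<sku:string, qty:int>>')
) items AS item;
```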
Finally, you will explore alternative ingestion strategies, including MERGE INTO operations and leveraging the Databricks Marketplace, equipping you with the foundational knowledge to support modern data engineering ingestion workflows.
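As an example of the MERGE INTO pattern, a minimal upsert sketch (table and column names are placeholders) could look like this:

```sql
-- Upsert change records from a bronze staging table into a silver table.
MERGE INTO demo.silver.customers AS t
USING demo.bronze.customer_updates AS s
  ON t.customer_id = s.customer_id
WHEN MATCHED THEN UPDATE SET *
WHEN NOT MATCHED THEN INSERT *;
```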
2. Deploy Workloads with Lakeflow Jobs
The Deploy Workloads with Lakeflow Jobs course teaches you how to orchestrate and automate data, analytics, and AI workflows using Lakeflow Jobs. You will learn to build robust, production-ready pipelines with flexible scheduling, advanced orchestration, and best practices for reliability and efficiency, all natively integrated within the Databricks Data Intelligence Platform. Prior experience with Databricks, Python, and SQL is recommended.
3. Build Data Pipelines with Lakeflow Spark Declarative Pipelines
This course introduces users to the essential concepts and skills needed to build data pipelines using Lakeflow Spark Declarative Pipelines (SDP) in Databricks for incremental batch or streaming ingestion and processing through multiple streaming tables and materialized views. Designed for data engineers new to Spark Declarative Pipelines, the course provides a comprehensive overview of core components such as incremental data processing, streaming tables, materialized views, and temporary views, highlighting their specific purposes and differences; a short SQL sketch of these objects follows the topic list below.
Topics covered include:
- Developing and debugging ETL pipelines with the multi-file editor in Spark Declarative Pipelines using SQL (with Python code examples provided)
- How Spark Declarative Pipelines track data dependencies in a pipeline through the pipeline graph
- Configuring pipeline compute resources, data assets, trigger modes, and other advanced options
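The sketch below illustrates the core objects mentioned above, a streaming table feeding a materialized view, in declarative SQL. The source path and dataset names are illustrative, not taken from the course labs.

```sql
-- Streaming table: incrementally ingests new files as they arrive.
CREATE OR REFRESH STREAMING TABLE orders_bronze AS
SELECT *, current_timestamp() AS ingest_time
FROM STREAM read_files('/Volumes/demo/landing/orders/', format => 'json');

-- Materialized view: kept up to date by the pipeline from its upstream table,
-- which also creates the dependency edge shown in the pipeline graph.
CREATE OR REFRESH MATERIALIZED VIEW orders_daily AS
SELECT order_date, count(*) AS order_count, sum(amount) AS total_amount
FROM orders_bronze
GROUP BY order_date;
```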
Next, the course introduces data quality expectations in Spark Declarative Pipelines, guiding users through the process of integrating expectations into pipelines to validate and enforce data integrity. Learners will then explore how to put a pipeline into production, including scheduling options and enabling pipeline event logging to monitor pipeline performance and health.
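Expectations are declared as constraints on a dataset definition. The sketch below (hypothetical table and column names) drops rows with a null key and merely logs violations of the amount check.

```sql
CREATE OR REFRESH STREAMING TABLE orders_silver (
  CONSTRAINT valid_order_id  EXPECT (order_id IS NOT NULL) ON VIOLATION DROP ROW,
  CONSTRAINT positive_amount EXPECT (amount > 0)  -- warn only: rows are kept, violations are logged
)
AS SELECT * FROM STREAM(orders_bronze);
```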
Finally, the course covers how to implement Change Data Capture (CDC) using the AUTO CDC INTO syntax within Spark Declarative Pipelines to manage slowly changing dimensions (SCD Type 1 and Type 2), preparing users to integrate CDC into their own pipelines.
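A rough sketch of the AUTO CDC INTO pattern is shown below, assuming a hypothetical CDC feed with operation and sequence_num columns; consult the Lakeflow documentation for the exact options your runtime supports.

```sql
-- Target streaming table kept up to date from a CDC feed, retaining full history (SCD Type 2).
CREATE OR REFRESH STREAMING TABLE customers_silver;

CREATE FLOW customers_cdc_flow AS AUTO CDC INTO customers_silver
FROM STREAM(customers_cdc_feed)
KEYS (customer_id)
APPLY AS DELETE WHEN operation = 'DELETE'
SEQUENCE BY sequence_num
COLUMNS * EXCEPT (operation, sequence_num)
STORED AS SCD TYPE 2;
```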
4. Data Management and Governance with Unity Catalog
In this course, you'll learn about data management and governance using Databricks Unity Catalog. It covers foundational concepts of data governance, complexities in managing data lakes, Unity Catalog's architecture, security, administration, and advanced topics like fine-grained access control, data segregation, and privilege management.
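To give a flavor of the privilege management and fine-grained access control topics, a minimal sketch (catalog, schema, and group names are placeholders) could look like this:

```sql
-- Grant catalog and schema access to a group.
GRANT USE CATALOG ON CATALOG demo TO `data_engineers`;
GRANT USE SCHEMA, SELECT ON SCHEMA demo.silver TO `data_engineers`;

-- Fine-grained control via a dynamic view that masks a column for most users.
CREATE OR REPLACE VIEW demo.silver.customers_masked AS
SELECT
  customer_id,
  CASE WHEN is_account_group_member('pii_readers') THEN email
       ELSE '***REDACTED***' END AS email
FROM demo.silver.customers;
```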
* This course helps prepare students for the Data Engineer Associate certification exam and provides the requisite knowledge to take the course Advanced Data Engineering with Databricks.