This course provides a comprehensive understanding of Spark Structured Streaming and Delta Lake, covering streaming computation models, configuring streaming reads, and maintaining data quality in a streaming environment.
Note: This course is part of the 'Advanced Data Engineering with Databricks' course series.
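As a taste of the streaming-read configuration the course covers, here is a minimal PySpark sketch of reading a Delta table as a stream and appending it to a target table. The table names, checkpoint path, and rate-limit value are hypothetical:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read a Delta table incrementally as a stream.
# maxFilesPerTrigger caps how many new files each micro-batch consumes.
events = (
    spark.readStream
        .option("maxFilesPerTrigger", 1000)
        .table("bronze.events")          # hypothetical source table
)

# Append the stream to a target table; the checkpoint tracks progress
# so the query can restart exactly where it left off.
query = (
    events.writeStream
        .option("checkpointLocation", "/tmp/checkpoints/silver_events")
        .outputMode("append")
        .toTable("silver.events")        # hypothetical target table
)
```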
What You'll Learn
- Introduction to Streaming
- Aggregations, Time Windows, Watermarks (see the sketch after this list)
- Streaming Joins (Optional)
- Streaming ETL Patterns with Lakeflow Declarative Pipelines
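For instance, the aggregation topics above (time windows plus a watermark to bound state) come together in a pattern like the following minimal sketch, where the source table and column names are hypothetical:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical streaming source with an event_time timestamp column.
events = spark.readStream.table("bronze.events")

# The watermark tells Spark to drop events arriving more than 10 minutes
# late, which bounds state and lets completed 5-minute windows be emitted.
counts = (
    events
        .withWatermark("event_time", "10 minutes")
        .groupBy(
            F.window("event_time", "5 minutes"),  # tumbling windows
            "device_id",
        )
        .count()
)
```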
Prerequisites
This content was developed for participants with the following skills and knowledge:
• Ability to perform basic code development tasks using the Databricks Data Engineering and Data Science workspace (create clusters, run code in notebooks, use basic notebook operations, import repos from git, etc.)
• Intermediate programming experience with PySpark
• Ability to extract data from a variety of file formats and data sources
• Ability to apply common transformations to clean data
• Ability to reshape and manipulate complex data using advanced built-in functions
• Intermediate programming experience with Delta Lake (create tables, perform complete and incremental updates, compact files, restore previous versions, etc.)
• Beginner experience configuring and scheduling data pipelines using the Lakeflow Declarative Pipelines UI
• Beginner experience defining Lakeflow Declarative Pipelines using PySpark
• Ability to ingest and process data using Auto Loader and PySpark syntax (see the sketch after this list)
• Ability to process Change Data Capture feeds with APPLY CHANGES INTO syntax
• Ability to review pipeline event logs and results to troubleshoot Declarative Pipelines syntax
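The last three prerequisites map onto the Lakeflow Declarative Pipelines Python API. Here is a minimal sketch assuming a JSON landing path and an orders CDC feed (all names hypothetical); `dlt.apply_changes` is the Python counterpart of the SQL APPLY CHANGES INTO syntax:

```python
import dlt
from pyspark.sql import functions as F

# Auto Loader (cloudFiles) incrementally ingests new files from the
# hypothetical landing path into a raw table. In a pipeline notebook,
# `spark` is predefined.
@dlt.table
def orders_raw():
    return (
        spark.readStream
            .format("cloudFiles")
            .option("cloudFiles.format", "json")
            .load("/Volumes/main/landing/orders/")
    )

# Declare the target streaming table, then apply the CDC feed to it.
dlt.create_streaming_table("orders")

dlt.apply_changes(
    target="orders",
    source="orders_raw",
    keys=["order_id"],                 # primary key in the feed
    sequence_by=F.col("updated_at"),   # ordering column for out-of-order events
)
```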
Learning Journey
Coming Soon...