Advanced Data Engineering with Databricks

This course serves as an appropriate entry point to learn Advanced Data Engineering with Databricks. 

Below, we describe each of the four four-hour modules included in this course.

Databricks Streaming and Lakeflow Declarative Pipelines

This course provides a comprehensive understanding of Spark Structured Streaming and Delta Lake, including computation models, configuring streaming reads, and maintaining data quality in a streaming environment.
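
As a taste of the material, the snippet below is a minimal PySpark sketch of an incremental read from, and write to, Delta tables; the table names (bronze_events, silver_events) and the checkpoint path are hypothetical, and the configuration options are covered in depth during the module.

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.getOrCreate()  # already defined for you in Databricks notebooks

  # Incrementally read rows appended to a (hypothetical) Delta table
  events = (
      spark.readStream
          .format("delta")
          .option("maxFilesPerTrigger", 100)  # cap how many files each micro-batch picks up
          .table("bronze_events")
  )

  # Write the stream to a downstream Delta table; the checkpoint gives exactly-once guarantees
  query = (
      events.writeStream
          .format("delta")
          .option("checkpointLocation", "/Volumes/main/default/checkpoints/silver_events")
          .outputMode("append")
          .trigger(availableNow=True)  # process everything available, then stop
          .toTable("silver_events")
  )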

Databricks Data Privacy

This course is intended for data engineers, including customers, partners, and employees who perform data engineering tasks with Databricks. It aims to provide them with the knowledge and skills needed to execute these activities effectively on the Databricks platform.

Databricks Performance Optimization

In this course, you’ll learn how to optimize workloads and physical layout with Spark and Delta Lake, and how to analyze the Spark UI to assess performance and debug applications. We’ll cover topics like streaming, liquid clustering, data skipping, caching, Photon, and more.
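
For a flavor of the hands-on work, here is a minimal sketch of enabling liquid clustering so selective queries can benefit from data skipping; the table and column names (main.sales.transactions, customer_id) are hypothetical, and the details are developed in the module.

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.getOrCreate()  # already defined in Databricks notebooks

  # Create a (hypothetical) Delta table with liquid clustering on the filter column
  spark.sql("""
      CREATE TABLE IF NOT EXISTS main.sales.transactions (
          customer_id BIGINT,
          amount DOUBLE,
          ts TIMESTAMP
      )
      CLUSTER BY (customer_id)
  """)

  # Incrementally recluster data written since the last OPTIMIZE run
  spark.sql("OPTIMIZE main.sales.transactions")

  # Selective filters on the clustering key let Delta skip unrelated files
  spark.sql("SELECT sum(amount) FROM main.sales.transactions WHERE customer_id = 42").show()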

Automated Deployment with Databricks Asset Bundles

This course provides a comprehensive review of DevOps principles and their application to Databricks projects. It begins with an overview of core DevOps, DataOps, continuous integration (CI), continuous deployment (CD), and testing, and explores how these principles can be applied to data engineering pipelines.

The course then focuses on continuous deployment within the CI/CD process, examining tools like the Databricks REST API, SDK, and CLI for project deployment. You will learn about Databricks Asset Bundles (DABs) and how they fit into the CI/CD process. You’ll dive into their key components, folder structure, and how they streamline deployment across various target environments in Databricks. You will also learn how to add variables, modify, validate, deploy, and execute Databricks Asset Bundles for multiple environments with different configurations using the Databricks CLI.
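
To illustrate the programmatic deployment tools named above, the sketch below uses the Databricks SDK for Python to create and trigger a simple notebook job; the job name and notebook path are hypothetical, and it assumes serverless job compute is available in the workspace. Bundle-based deployment itself is declared in a databricks.yml file and driven by the Databricks CLI (databricks bundle validate / deploy / run), which the course walks through in detail.

  from databricks.sdk import WorkspaceClient
  from databricks.sdk.service import jobs

  # Authenticates via environment variables or a Databricks configuration profile
  w = WorkspaceClient()

  # Create a minimal job with one notebook task (hypothetical path); with no cluster
  # specified, the task runs on serverless job compute where that is enabled
  job = w.jobs.create(
      name="dab-demo-nightly-etl",
      tasks=[
          jobs.Task(
              task_key="run_etl",
              notebook_task=jobs.NotebookTask(notebook_path="/Workspace/Users/someone@example.com/etl"),
          )
      ],
  )

  # Trigger a run and block until it finishes
  run = w.jobs.run_now(job_id=job.job_id).result()
  print(run.state)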

Finally, the course introduces Visual Studio Code as an Integrated Development Environment (IDE) for building, testing, and deploying Databricks Asset Bundles locally, optimizing your development process. The course concludes with an introduction to automating deployment pipelines using GitHub Actions to enhance the CI/CD workflow with Databricks Asset Bundles.

By the end of this course, you will be equipped to automate Databricks project deployments with Databricks Asset Bundles, improving efficiency through DevOps practices.


What You'll Learn

  • Databricks Streaming and Lakeflow Declarative Pipelines
  • Databricks Data Privacy
  • Databricks Performance Optimization
  • Automated Deployment with Databricks Asset Bundles

Who Should Attend

This course is intended for data engineers, including customers, partners, and employees who perform data engineering tasks with Databricks.


Prerequisites

  • Ability to perform basic code development tasks using the Databricks Data Engineering and Data Science workspace (create clusters, run code in notebooks, use basic notebook operations, import repos from Git, etc.)
  • Intermediate programming experience with PySpark
  • Ability to extract data from a variety of file formats and data sources
  • Ability to apply a number of common transformations to clean data
  • Ability to reshape and manipulate complex data using advanced built-in functions
  • Intermediate programming experience with Delta Lake (create tables, perform complete and incremental updates, compact files, restore previous versions, etc.)
  • Beginner experience configuring and scheduling data pipelines using the Lakeflow Declarative Pipelines UI
  • Beginner experience defining Lakeflow Declarative Pipelines using PySpark
  • Ability to ingest and process data using Auto Loader and PySpark syntax
  • Ability to process Change Data Capture feeds with APPLY CHANGES INTO syntax
  • Ability to review pipeline event logs and results to troubleshoot Declarative Pipelines syntax
  • Strong knowledge of the Databricks platform, including experience with Databricks Workspaces, Apache Spark, Delta Lake, the Medallion Architecture, Unity Catalog, Lakeflow Declarative Pipelines, and Workflows, in particular knowledge of leveraging Expectations with Lakeflow Declarative Pipelines
  • Experience in data ingestion and transformation, with proficiency in PySpark for data processing and DataFrame manipulation, and experience writing intermediate-level SQL queries for data analysis and transformation
  • Proficiency in Python programming, including the ability to design and implement functions and classes, and experience creating, importing, and using Python packages
  • Familiarity with DevOps practices, particularly continuous integration and continuous delivery/deployment (CI/CD) principles
  • A basic understanding of Git version control
  • Completion of the prerequisite course DevOps Essentials for Data Engineering

Learning Journey

Coming Soon...

Databricks Streaming and Lakeflow Declarative Pipelines

  • Streaming Data Concepts
  • Introduction to Structured Streaming
  • Demo: Reading from a Streaming Query
  • Streaming from Delta Lake
  • Lab: Streaming Query 
  • Aggregation, Time Windows, Watermarks
  • Event Time + Aggregations over Time Windows
  • Lab: Stream Aggregation
  • Demo: Windowed Aggregation with Watermark (sketched after this outline)
  • Streaming Joins (Optional)
  • Data Ingestion Pattern
  • Demo: Auto Load to Bronze
  • Demo: Stream from Multiplex Bronze
  • Data Quality Enforcement
  • Lab: Streaming ETL
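
As a preview of the windowed aggregation demo listed above, the following is a minimal sketch of an event-time aggregation with a watermark; the table and column names (bronze_events, event_time, device_id) and the checkpoint path are hypothetical.

  from pyspark.sql import SparkSession, functions as F

  spark = SparkSession.builder.getOrCreate()  # already defined in Databricks notebooks

  events = spark.readStream.table("bronze_events")  # hypothetical streaming Delta source

  # Tolerate up to 10 minutes of late-arriving data, then count events per 5-minute window
  windowed_counts = (
      events
          .withWatermark("event_time", "10 minutes")
          .groupBy(F.window("event_time", "5 minutes"), "device_id")
          .count()
  )

  # In append mode a window is emitted only once the watermark has passed its end
  query = (
      windowed_counts.writeStream
          .format("delta")
          .option("checkpointLocation", "/Volumes/main/default/checkpoints/device_counts")
          .outputMode("append")
          .toTable("silver_device_counts")
  )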

Databricks Data Privacy

  • Regulatory Compliance
  • Data Privacy
  • Key Concepts and Components
  • Audit Your Data
  • Data Isolation
  • Demo: Securing Data in Unity Catalog 
  • Pseudonymization & Anonymization
  • Summary & Best Practices
  • Demo: PII Data Security
  • Capturing Changed Data
  • Deleting Data in Databricks
  • Demo: Processing Records from CDF and Propagating Changes (sketched after this outline)
  • Lab: Propagating Changes with CDF
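
As a preview of the Change Data Feed (CDF) demo listed above, here is a minimal sketch of reading row-level changes from a (hypothetical) Delta table; enabling the feed and propagating deletes downstream are covered in the module.

  from pyspark.sql import SparkSession

  spark = SparkSession.builder.getOrCreate()  # already defined in Databricks notebooks

  # The Change Data Feed must be enabled on the source table (hypothetical name)
  spark.sql(
      "ALTER TABLE main.default.customers "
      "SET TBLPROPERTIES (delta.enableChangeDataFeed = true)"
  )

  # Read inserts, updates, and deletes recorded since a given table version
  changes = (
      spark.read
          .format("delta")
          .option("readChangeFeed", "true")
          .option("startingVersion", 1)
          .table("main.default.customers")
  )

  # Each row carries _change_type, _commit_version, and _commit_timestamp metadata columns
  changes.filter("_change_type = 'delete'").show()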

Databricks Performance Optimization

  • Spark UI Introduction
  • Introduction to Designing the Foundation
  • Demo: File Explosion
  • Data Skipping and Liquid Clustering
  • Lab: Data Skipping and Liquid Clustering
  • Skew
  • Shuffles
  • Demo: Shuffle
  • Spill
  • Lab: Exploding Join
  • Serialization
  • Demo: User-Defined Functions
  • Fine-Tuning: Choosing the Right Cluster
  • Pick the Best Instance Types

Automated Deployment with Databricks Asset Bundles

  • DevOps Review
  • Continuous Integration and Continuous Deployment/Delivery (CI/CD) Review
  • Demo: Course Setup and Authentication
  • Deploying Databricks Projects
  • Introduction to Databricks Asset Bundles (DABs)
  • Demo: Deploying a Simple DAB
  • Lab: Deploying a Simple DAB
  • Variable Substitutions in DABs
  • Demo: Deploying a DAB to Multiple Environments
  • Lab: Deploy a DAB to Multiple Environments
  • DAB Project Templates Overview
  • Lab: Use a Databricks Default DAB Template
  • CI/CD Project Overview with DABs
  • Demo: Continuous Integration and Continuous Deployment with DABs
  • Lab: Adding ML to Engineering Workflows with DABs
  • Developing Locally with Visual Studio Code (VSCode)
  • Demo: Using VSCode with Databricks
  • CI/CD Best Practices for Data Engineering
  • Next Steps: Automated Deployment with GitHub Actions




