Module 1: Experiment with Azure Machine Learning
Learn how to find the best machine learning model with automated machine learning (AutoML), MLflow-tracked notebooks, and the Responsible AI dashboard.
Module 2: Perform hyperparameter tuning with Azure Machine Learning
Learn how to perform hyperparameter tuning with a sweep job in Azure Machine Learning.
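In the CLI (v2), a sweep job is defined in YAML: a trial command exposes hyperparameters, a search space and sampling algorithm generate trial values, and an objective picks the winner. A minimal sketch, assuming a placeholder training script (train.py), environment, and compute cluster:

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/sweepJob.schema.json
type: sweep
trial:
  code: src
  # search_space values are substituted into the command per trial
  command: python train.py --learning_rate ${{search_space.learning_rate}} --batch_size ${{search_space.batch_size}}
  environment: azureml:my-environment@latest  # placeholder environment
sampling_algorithm:
  type: random
search_space:
  learning_rate:
    type: uniform
    min_value: 0.001
    max_value: 0.1
  batch_size:
    type: choice
    values: [16, 32, 64]
objective:
  # the training script must log this metric (e.g. with MLflow)
  primary_metric: accuracy
  goal: maximize
limits:
  max_total_trials: 20
  max_concurrent_trials: 4
compute: azureml:cpu-cluster  # placeholder compute target
```

Submitting the file with `az ml job create --file sweep-job.yml` launches the trials and surfaces the best run by the primary metric.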
Module 3: Run pipelines in Azure Machine Learning
Learn how to create and use components to build a pipeline in Azure Machine Learning. Run and schedule Azure Machine Learning pipelines to automate machine learning workflows.
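A pipeline job wires components together by referencing each component's YAML file and binding one step's outputs to the next step's inputs. A minimal sketch, assuming placeholder component files and a registered data asset:

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/pipelineJob.schema.json
type: pipeline
display_name: train-pipeline
settings:
  default_compute: azureml:cpu-cluster  # placeholder compute target
jobs:
  prep_data:
    component: ./components/prep.yml  # placeholder component definition
    inputs:
      raw_data:
        path: azureml:raw-data@latest  # placeholder data asset
    outputs:
      prepped_data:
  train_model:
    component: ./components/train.yml  # placeholder component definition
    inputs:
      # bind the output of the previous step to this step's input
      training_data: ${{parent.jobs.prep_data.outputs.prepped_data}}
```

Because each step is a reusable component, the same prep or train definition can be shared across pipelines and versioned independently.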
Module 4: Trigger Azure Machine Learning jobs with GitHub Actions
Learn how to automate your machine learning workflows by using GitHub Actions.
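A common pattern is a workflow that authenticates to Azure with a service principal stored as a repository secret, then submits a job with the Azure Machine Learning CLI (v2). A minimal sketch; the resource group, workspace, and job file names are placeholders:

```yaml
name: train-model
on: workflow_dispatch  # run manually from the Actions tab
jobs:
  train:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Azure login
        uses: azure/login@v2
        with:
          creds: ${{ secrets.AZURE_CREDENTIALS }}  # service principal JSON secret
      - name: Submit Azure Machine Learning job
        run: |
          az extension add --name ml
          az ml job create --file src/job.yml \
            --resource-group my-resource-group \
            --workspace-name my-workspace
```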
Module 5: Trigger GitHub Actions with feature-based development
Learn how to protect your main branch and how to trigger tasks in the machine learning workflow based on changes to the code.
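With feature-based development, checks run automatically when a pull request targets the protected branch, and a `paths` filter scopes them to relevant code changes. A minimal sketch, assuming source code lives under `src/`:

```yaml
name: code-checks
on:
  pull_request:
    branches: [main]      # runs on PRs targeting main
    paths:
      - src/**            # only when code in src/ changes
jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Lint Python code
        run: |
          pip install flake8
          flake8 src/
```

Marking this check as required in the branch protection rule for `main` blocks merging until it passes.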
Module 6: Work with environments in GitHub Actions
Learn how to train, test, and deploy a machine learning model by using environments as part of your machine learning operations (MLOps) strategy.
Module 7: Deploy a model with GitHub Actions
Learn how to automate and test model deployment with GitHub Actions and the Azure Machine Learning CLI (v2).
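With the CLI (v2), a managed online deployment is itself a YAML asset that a GitHub Actions job can create with `az ml online-deployment create`. A minimal sketch; the endpoint, model, and instance type are placeholders:

```yaml
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
name: blue
endpoint_name: my-endpoint          # placeholder, created beforehand
model: azureml:my-model@latest      # placeholder registered model
instance_type: Standard_DS3_v2
instance_count: 1
```

A workflow step such as `az ml online-deployment create --file deployment.yml --all-traffic` then rolls it out; gating that job behind a GitHub environment (for example `environment: production`) adds the approval step covered in Module 6.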
Module 8: Plan and prepare a GenAIOps solution
Learn how to develop chat applications with language models using a code-first development approach. By developing generative AI apps code-first, you can create robust and reproducible flows that are integral to generative AI operations (GenAIOps).
Module 9: Manage prompts for agents in Microsoft Foundry with GitHub
Learn how to manage AI prompts as versioned assets using GitHub. Apply software engineering best practices to create, test, and promote prompt versions used in Microsoft Foundry as part of a GenAIOps workflow.
Module 10: Evaluate and optimize AI agents through structured experiments
Learn how to optimize AI agents through structured evaluation that transforms guesswork into evidence-based engineering decisions. You'll explore how to design evaluation experiments with clear metrics for quality, cost, and performance; organize experiments using Git-based workflows; create evaluation rubrics for consistent scoring; and compare results to make informed optimization decisions.
Module 11: Automate AI evaluations with Microsoft Foundry and GitHub Actions
Learn how to implement automated evaluations for AI agent responses using Microsoft Foundry evaluators, create evaluation datasets from production data and synthetic generation, run batch evaluations with Python scripts, and integrate evaluation workflows into GitHub Actions for continuous quality assurance.
Module 12: Monitor your generative AI application
Learn how to monitor the performance of your generative AI application using Microsoft Foundry. This module teaches you to track key metrics like latency and token usage to make informed, cost-effective deployment decisions.
Module 13: Analyze and debug your generative AI app with tracing
Learn how to implement tracing in your generative AI applications using Microsoft Foundry and OpenTelemetry. This module teaches you to capture detailed execution flows, debug complex workflows, and understand application behavior for better reliability and optimization.