
This course covers how to deploy, operationalize, and monitor generative AI applications. You will gain skills in deploying generative AI applications using tools like Model Serving, learn to operationalize them following best practices and recommended architectures, and explore how to monitor generative AI applications and their components using Lakehouse Monitoring.


What You'll Learn

  • Model Deployment Fundamentals
  • Batch Deployment
  • Real-Time Deployment
  • AI System Monitoring
  • LLMOps Concepts

Who Should Attend

  • Data scientists, ML engineers, and AI practitioners responsible for deploying, serving, and monitoring generative AI applications in production.
  • Professionals managing real-time inference workloads, model-serving endpoints, versioning, rollout strategies, and operational readiness for LLM-based systems.
  • Individuals implementing monitoring pipelines for LLM performance, latency, drift, hallucination detection, cost optimization, and quality evaluation.
  • Practitioners with experience in Python, MLflow, or model-serving frameworks who want to enhance their skills in operationalizing and governing generative AI systems.
  • Teams moving generative AI projects from experimentation to production, ensuring reliability, observability, compliance, and continuous improvement of AI applications.

Prerequisites

  • Familiarity with natural language processing concepts
  • Familiarity with prompt engineering and its best practices
  • Familiarity with the Databricks Data Intelligence Platform
  • Familiarity with RAG (preparing data, building a RAG architecture, and concepts like embeddings, vectors, and vector databases)
  • Experience with building LLM applications using multi-stage reasoning LLM chains and agents
  • Familiarity with Databricks Data Intelligence Platform tools for evaluation and governance

Learning Journey

Coming Soon...

Module 1. Model Deployment Fundamentals

  • Model Management
  • Deployment Methods

Module 2. Batch Deployment

  • Introduction to Batch Deployment
  • Batch Inference
  • Batch Inference Workflows using SLM
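
The batch pattern named above (score a static dataset on a schedule and persist the predictions) can be sketched framework-agnostically. On Databricks this would typically be an MLflow pyfunc model scored inside a scheduled job; here a stub model function stands in, and all names are illustrative, not the course's code:

```python
from datetime import datetime, timezone

def run_batch_inference(records, model_fn):
    """Score a batch of input records and attach prediction metadata.

    model_fn stands in for a loaded model (e.g. an MLflow pyfunc model's
    predict method); records is any iterable of inputs. Illustrative only.
    """
    scored_at = datetime.now(timezone.utc).isoformat()
    return [
        {"input": rec, "prediction": model_fn(rec), "scored_at": scored_at}
        for rec in records
    ]

# Stub "model": classify by text length (placeholder for a real LLM/SLM).
stub_model = lambda text: "long" if len(text) > 20 else "short"
results = run_batch_inference(["hi", "a much longer input string"], stub_model)
```

In a real workflow the results would be written to a Delta table rather than returned in memory.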

Module 3. Real-Time Deployment

  • Introduction to Real-Time Deployment
  • Databricks Model Serving
  • Serving External Models with Model Serving
  • Deploying an LLM Chain to Databricks Model Serving 
  • Custom Model Deployment and A/B Testing
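
The A/B testing topic can be illustrated with a minimal traffic-splitting sketch. Databricks Model Serving lets you assign traffic percentages to served model versions on an endpoint; the routing idea behind such a split, with hypothetical version names, looks roughly like this:

```python
import random

def route_request(traffic_split, rng=random.random):
    """Pick a model version according to a traffic split.

    traffic_split: dict mapping version name -> fraction (sums to 1.0).
    Hypothetical helper for illustration; Model Serving applies the split
    server-side when you configure traffic percentages on an endpoint.
    """
    r = rng()
    cumulative = 0.0
    for version, fraction in traffic_split.items():
        cumulative += fraction
        if r < cumulative:
            return version
    return version  # fall through to the last version on rounding edges

# e.g. send 90% of traffic to the champion, 10% to a challenger
split = {"champion-v1": 0.9, "challenger-v2": 0.1}
choice = route_request(split)
```

Passing a deterministic rng makes the routing testable; in practice the split lives in the endpoint configuration, not in client code.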

Module 4. AI System Monitoring

  • AI Application Monitoring
  • Online Monitoring of an LLM RAG Chain
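
Monitoring an AI system commonly includes drift detection on inputs or outputs. Lakehouse Monitoring computes drift metrics for you; as a concrete illustration of what such a metric measures, here is a minimal Population Stability Index (PSI) sketch. The choice of PSI is an assumption for illustration, not necessarily the metric the course uses:

```python
import math

def population_stability_index(expected, actual, eps=1e-6):
    """Population Stability Index between two binned distributions.

    expected, actual: lists of bin proportions (each summing to ~1.0).
    Conventionally, PSI < 0.1 reads as negligible drift and > 0.25 as
    major drift. Illustrative only; Lakehouse Monitoring computes drift
    metrics automatically over monitored tables.
    """
    psi = 0.0
    for e, a in zip(expected, actual):
        e = max(e, eps)  # clamp to avoid log(0) on empty bins
        a = max(a, eps)
        psi += (a - e) * math.log(a / e)
    return psi

baseline = [0.25, 0.25, 0.25, 0.25]  # e.g. training-time bin proportions
current  = [0.30, 0.25, 0.25, 0.20]  # e.g. this week's production traffic
drift = population_stability_index(baseline, current)
```

Identical distributions yield a PSI of zero; the score grows as the production distribution shifts away from the baseline.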

Module 5. LLMOps Concepts

  • MLOps Primer
  • LLMOps vs MLOps


Keep Exploring

Course Curriculum

Training Schedule

Exam & Certification

FAQs


Improve yourself and your career by taking this course.


Ready to Take Your Business from Great to Awesome?

Level-up by partnering with Trainocate. Get in touch today.
