Generative AI in Production
Course Outline
In this course, you learn about the challenges that arise when productionizing generative AI-powered applications compared with traditional ML. You learn how to manage experimentation and tuning of your LLMs, and then how to deploy, test, and maintain your LLM-powered applications. Finally, you explore best practices for logging and monitoring your LLM-powered applications in production.
Generative AI in Production Benefits
This course will empower you to:
- Describe the challenges in productionizing applications using generative AI.
- Manage experimentation and evaluation for LLM-powered applications.
- Productionize LLM-powered applications.
- Implement logging and monitoring for LLM-powered applications.
Prerequisites
Completion of "Introduction to Developer Efficiency on Google Cloud" or equivalent knowledge.
Generative AI in Production Course Outline
Learning Objectives
Module 1: Introduction to Generative AI in Production
- Understand generative AI operations
- Compare traditional MLOps and GenAIOps
- Analyze the components of an LLM system
Module 2: Managing Experimentation
- Experiment with datasets and prompt engineering
- Utilize RAG and ReAct architectures
- Evaluate LLMs
- Track experiments
Module 3: Productionizing Generative AI
- Deploy, package, and version models
- Test LLM systems
- Maintain and update LLMs
- Manage prompt security and migration
Module 4: Logging and Monitoring for Production LLM Systems
- Utilize Cloud Logging
- Version, evaluate, and generalize prompts
- Monitor for evaluation-serving skew
- Utilize continuous validation