LLM Application Foundations
5 full days or 10 half days · 35 training hours
Build the engineering foundations for LLM-powered features: model APIs, prompting, context management, structured outputs, and tool/function calling in production-quality TypeScript or Python.
Topics
- Frontier model APIs: Anthropic, OpenAI, and Google Gemini
- Prompting and context engineering for apps
- Streaming responses and error handling
- Structured outputs with Zod / Pydantic
- Tool/function calling in applications
- Conversation state and context window management
- Cost tracking and retry patterns
- Spec-driven development with GitHub Spec Kit for LLM feature work
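To give a feel for the cost tracking and retry patterns topic, here is a minimal sketch in Python. The per-million-token prices and the `flaky_call` helper are hypothetical stand-ins for a real provider call; actual prices vary by model and provider.

```python
import random
import time

# Hypothetical per-million-token prices; real prices vary by model and provider.
PRICE_PER_MTOK = {"input": 3.00, "output": 15.00}

def estimate_cost(input_tokens: int, output_tokens: int) -> float:
    """Estimate request cost in dollars from token counts."""
    return (input_tokens * PRICE_PER_MTOK["input"]
            + output_tokens * PRICE_PER_MTOK["output"]) / 1_000_000

def with_retries(call, max_attempts: int = 4, base_delay: float = 0.5):
    """Retry a transient-failure-prone call with exponential backoff and jitter."""
    for attempt in range(max_attempts):
        try:
            return call()
        except ConnectionError:
            if attempt == max_attempts - 1:
                raise  # out of attempts: surface the error to the caller
            # Sleep base_delay * 2^attempt, plus jitter to avoid thundering herds.
            time.sleep(base_delay * 2 ** attempt + random.uniform(0, 0.1))

# Usage: a fake provider call that fails twice, then returns token usage.
attempts = {"n": 0}
def flaky_call():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise ConnectionError("transient")
    return {"input_tokens": 1200, "output_tokens": 300}

usage = with_retries(flaky_call, base_delay=0.01)
cost = estimate_cost(usage["input_tokens"], usage["output_tokens"])
print(round(cost, 6))  # 0.0081
```

The same shape carries over directly to TypeScript; in production you would also cap total retry time and distinguish retryable errors (rate limits, timeouts) from permanent ones (auth, validation).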
Lab Project
Build a streaming LLM API service with tool calling, structured output validation, and per-request cost tracking.
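One piece of the lab, structured output validation, can be sketched as follows. This uses only the standard library so it stays dependency-free; in the course itself this role is played by Zod or Pydantic, and the `Invoice` schema and field names here are purely illustrative.

```python
import json
from dataclasses import dataclass

# A tiny validator in the spirit of Zod/Pydantic: parse untrusted model
# output into a typed record, rejecting anything that doesn't match.

@dataclass
class Invoice:
    customer: str
    total_cents: int

def parse_invoice(raw: str) -> Invoice:
    """Parse and validate a model's JSON output into a typed record."""
    data = json.loads(raw)  # raises ValueError on malformed JSON
    if not isinstance(data.get("customer"), str):
        raise ValueError("customer must be a string")
    total = data.get("total_cents")
    if not isinstance(total, int) or isinstance(total, bool):
        raise ValueError("total_cents must be an integer")
    return Invoice(customer=data["customer"], total_cents=total)

# Usage: a well-formed model reply validates; a malformed one is rejected.
ok = parse_invoice('{"customer": "Acme", "total_cents": 4999}')
print(ok)  # Invoice(customer='Acme', total_cents=4999)
try:
    parse_invoice('{"customer": "Acme", "total_cents": "4999"}')
except ValueError as e:
    print("rejected:", e)
```

Validating at the boundary like this is what lets the rest of the service treat model output as typed data rather than free text; a real schema library adds coercion, nested objects, and error reporting on top of the same idea.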