AI-Assisted Software Engineering Fundamentals
Class Duration
7 hours of live training delivered over 1-2 days to accommodate your scheduling needs.
Student Prerequisites
- At least one year of professional software development experience
- Familiarity with version control (Git)
Target Audience
Software engineers, engineering managers, and tech leads who want to use AI tools productively without accumulating technical debt, introducing security vulnerabilities, or eroding code ownership. Equally useful for teams rolling out AI assistants organization-wide and for developers who have already started using AI tools but want a principled framework for when and how to rely on them.
Description
AI coding assistants accelerate development—but only when used with discipline. This course builds the professional habits and mental models that separate developers who leverage AI effectively from those who ship AI-generated bugs. We cover the full cycle: understanding what AI assistants are good at and where they confidently produce wrong output, establishing a code review discipline for generated code, maintaining authorial ownership, handling security-sensitive contexts, designing team workflows and standards, and measuring the true productivity impact. Labs use realistic codebases and realistic failure modes.
Learning Outcomes
- Identify the categories of tasks where AI assistance provides reliable value vs. where it requires extra scrutiny.
- Apply a structured review checklist to AI-generated code before committing it.
- Recognize common AI error patterns: hallucinated APIs, incorrect logic, subtle security issues, and license concerns.
- Design team-level workflows: prompt libraries, .instructions.md files, review gates, and onboarding standards.
- Evaluate when to override or disable AI suggestions (security-sensitive code, compliance scope, novel domains).
- Measure productivity impact honestly using appropriate leading and lagging indicators.
Training Materials
Comprehensive courseware is distributed online at the start of class. All students receive a downloadable MP4 recording of the training.
Software Requirements
Students need an active GitHub Copilot, Cursor, or Claude Code subscription, set up in their preferred development environment, for the hands-on labs.
Training Topics
What AI Assistants Are Good At
- Pattern-heavy, well-documented problem domains
- Boilerplate reduction and mechanical transformation
- Test scaffolding and documentation drafting
- When models are confidently wrong
Review Discipline for Generated Code
- Code review checklist for AI-generated output
- Logic correctness, edge cases, and error handling
- API hallucination detection
- Dependency and version verification
- Maintaining authorial understanding
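To make the API-hallucination item concrete, here is a minimal Python sketch of the idea (illustrative only, not part of the course materials; the helper name api_exists is hypothetical): before trusting a generated call, verify that the target module and attribute actually exist.

```python
import importlib


def api_exists(module_name: str, attr: str) -> bool:
    """Quick sanity check against hallucinated APIs: return True only if
    module_name imports cleanly and exposes the named attribute."""
    try:
        mod = importlib.import_module(module_name)
    except ImportError:
        return False
    return hasattr(mod, attr)


# json.loads is real; json.parse is a common hallucination
# (a JavaScript habit models sometimes carry over into Python).
print(api_exists("json", "loads"))
print(api_exists("json", "parse"))
```

In practice this role is usually played by the test suite, the type checker, or simply running the code, but the principle is the same: generated calls must be verified against the real API surface, not assumed correct.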
Security Pitfalls
- Common vulnerabilities in generated code (OWASP relevance)
- Injection risks in generated queries and templates
- Secret handling and credential leakage
- When to turn AI assistance off entirely
Team Workflows and Standards
- Shared prompt libraries and instruction files
- Pull request policies for AI-assisted changes
- Onboarding developers to team AI standards
- Balancing autonomy and consistency
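The "instruction files" topic refers to repository-level guidance files that some assistants read automatically (for example, GitHub Copilot's .github/copilot-instructions.md). A hypothetical starting point of the kind a team might draft in the workshop:

```markdown
# Team AI assistant instructions (illustrative example)

## Style
- Follow the repository's existing formatting and naming conventions.
- Prefer small, focused functions; avoid speculative abstractions.

## Dependencies
- Do not add new dependencies; flag any suggestion that requires one.

## Security
- Never interpolate user input into SQL, shell commands, or templates.
- Never hardcode secrets, tokens, or credentials.

## Review
- Note AI-assisted changes in the pull request description per team policy.
```

Exact file names and capabilities vary by tool; the course treats the file as a shared, version-controlled statement of team standards rather than a tool-specific feature.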
Code Ownership and Quality
- Preserving architectural intent with AI assistance
- Avoiding over-generated complexity
- Refactoring AI output to match team style
- Long-term maintainability considerations
Measuring Productivity Impact
- What to measure: cycle time, review iteration, defect rate
- Leading indicators vs. vanity metrics
- Team surveys and developer experience data
- Communicating results to leadership
Workshop
- Code review lab: evaluate an AI-generated pull request
- Team standards exercise: draft a team instruction file
- Q&A session