Practice model
From prompts to systems
This track is not “write one prompt and hope.” You learn how serious teams use AI today: tight instructions, engineered context, structured reasoning, bounded agent-style steps, and workflows you can review. That applies across ChatGPT, Claude, Cursor-class tools, and whatever ships next month.
Prompting & instruction design
Clear asks, real constraints, and iteration, not magic phrases.
- Designing roles, success criteria, and output shapes that survive real review, not vague “be helpful” requests.
- Spotting ambiguity, instruction leakage, and plausible-but-wrong answers before they spread downstream.
- Tight feedback loops: revise prompts against evidence (samples, checks, diffs), not vibes.
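A minimal sketch of what "roles, success criteria, and output shapes" can mean in practice, assuming a plain chat-style prompt; the section names and example criteria here are illustrative, not a standard:

```python
def build_prompt(task: str, role: str, criteria: list[str], output_shape: str) -> str:
    """Assemble an instruction with an explicit role, success criteria,
    and a required output shape, so a reviewer can check each part."""
    criteria_block = "\n".join(f"- {c}" for c in criteria)
    return (
        f"Role: {role}\n\n"
        f"Task: {task}\n\n"
        f"Success criteria:\n{criteria_block}\n\n"
        f"Output shape:\n{output_shape}\n"
    )

prompt = build_prompt(
    task="Summarise the attached incident report for an on-call engineer.",
    role="You are a senior SRE writing a handover note.",
    criteria=[
        "Every claim is traceable to a line in the report",
        "Unknowns are flagged, not guessed",
    ],
    output_shape="Three sections: Impact, Timeline, Open questions.",
)
print(prompt)
```

Because each part is a named field rather than prose soup, a failed review can point at the exact constraint the output violated.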
Context engineering & project memory
What you load into the system usually matters more than clever wording.
- Building maintainable project briefs: rules files, `skills.md`-style playbooks, and docs that teams can actually update.
- Choosing what belongs in the model context vs. what belongs in tickets, repos, or wikis: signal over noise.
- Handling long contexts honestly: freshness, provenance, and avoiding “padding” that hides the real task.
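One way to make "freshness, provenance, signal over noise" concrete is to attach metadata to every candidate document and filter before anything enters the context window. A sketch under assumed names (`Doc`, `select_context`, and the 90-day cutoff are all illustrative):

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Doc:
    name: str
    source: str      # provenance: repo, wiki, ticket, ...
    updated: date    # freshness
    relevant: bool   # marked relevant to this task by a human or retriever

def select_context(docs: list[Doc], today: date, max_age_days: int = 90) -> list[Doc]:
    """Keep only relevant, reasonably fresh documents; everything else
    stays in the ticket, repo, or wiki where it can be linked, not pasted."""
    return [
        d for d in docs
        if d.relevant and (today - d.updated).days <= max_age_days
    ]

docs = [
    Doc("deploy-runbook.md", "repo", date(2024, 5, 1), relevant=True),
    Doc("2019-postmortem.md", "wiki", date(2019, 3, 2), relevant=True),
    Doc("team-lunch-notes.md", "wiki", date(2024, 5, 20), relevant=False),
]
kept = select_context(docs, today=date(2024, 6, 1))
print([d.name for d in kept])  # only the fresh, relevant runbook survives
```

The point is not this particular filter but that inclusion decisions are explicit and reviewable, instead of padding the context with everything at hand.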
Structure, chains, and subagent-style work
Break work so models succeed on bounded slices, and you can audit the path.
- Chain-of-thought style structuring as a discipline: explicit intermediate steps you can inspect, correct, or throw away.
- Decomposing deliverables into substeps suitable for separate passes (subagent-style runs) without pretending autonomy you do not have.
- Clean hand-offs: defined inputs/outputs between steps so quality does not collapse at the seams.
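The hand-off idea above can be sketched as plain functions with typed inputs and outputs, so each intermediate result is a checkpoint you can inspect, correct, or throw away before the next pass runs. The step names and dataclasses are hypothetical:

```python
from dataclasses import dataclass

@dataclass
class Outline:
    sections: list[str]

@dataclass
class Draft:
    text: str

def plan(brief: str) -> Outline:
    # First pass: turn the brief into a reviewable outline (trivially, here).
    return Outline(sections=[s.strip() for s in brief.split(";")])

def write(outline: Outline) -> Draft:
    # Second pass: expand only the approved outline into a draft.
    return Draft(text="\n".join(f"## {s}" for s in outline.sections))

outline = plan("Background; Findings; Next steps")
# Audit point: a human (or a check) can reject the outline before drafting.
assert outline.sections == ["Background", "Findings", "Next steps"]
draft = write(outline)
print(draft.text)
```

In a real workflow each function would wrap a separate model run, but the seam stays the same: a defined output type on one side, a defined input type on the other.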
Workflow design, tools, and quality control
Treat ChatGPT, Claude, Cursor, Antigravity, and peers as stages in a system, not a slot machine.
- Tool-aware prompting: when to browse, when to edit files, when to stay in chat, and how to verify tool output before you trust it.
- Designing workflows with approvals, checklists, and rollback paths before AI touches production-ish work.
- QC habits: regression on prompts and behaviour, red-team style checks, and knowing when to stop using AI for a task.
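A minimal sketch of "regression on prompts and behaviour": named checks run against a model's output, so a prompt edit that breaks behaviour fails loudly. `fake_model` stands in for a real API call and is purely illustrative:

```python
def fake_model(prompt: str) -> str:
    # Stand-in for a real completion call.
    return "REFUND POLICY: items are returnable within 30 days of delivery."

CHECKS = [
    ("mentions the 30-day window", lambda out: "30 days" in out),
    ("does not invent a phone number", lambda out: "call us at" not in out.lower()),
]

def run_regression(prompt: str) -> list[str]:
    """Run the prompt, apply every behavioural check, return the failures."""
    out = fake_model(prompt)
    return [name for name, check in CHECKS if not check(out)]

failures = run_regression("Summarise our refund policy for a customer email.")
print("failures:", failures)  # an empty list means behaviour held
```

Re-running this suite after every prompt change is the "evidence, not vibes" loop from earlier: the checks encode what the team actually requires of the output.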
We stay tool-agnostic on purpose: interfaces change, but the underlying habits (context, decomposition, verification) are what make output reliable.