Key topics
- Reference architecture and component boundaries
- RAG and knowledge grounding patterns
- Agent design and tool governance
- Evaluation, regression testing, and red teaming
- Security posture and prompt-injection defenses
- Performance, reliability, and cost control
- Integration patterns and operational tooling
- Data engineering and LLMOps practices
Common pitfalls
- Treating agent behavior as ‘prompt magic’ without contracts or tests.
- Shipping write-capable automation without idempotency and audit trails (see the first sketch after this list).
- Overloading context with irrelevant chunks, which bloats token spend and lowers answer accuracy (see the packing sketch below).
- Ignoring permission boundaries during retrieval and tool execution (see the ACL filtering sketch below).
- No regression testing: quality silently degrades over time.
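
A minimal sketch of the idempotency-plus-audit pattern, assuming an SQLite table as the audit store and a caller-supplied `execute` callable; a production system would use a durable database and richer audit records:

```python
import hashlib
import json
import sqlite3
import time

# Illustrative store; a real deployment would use a durable database.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE audit (key TEXT PRIMARY KEY, action TEXT, result TEXT, ts REAL)")

def idempotency_key(action: str, params: dict) -> str:
    """Derive a stable key from the action and its canonicalized parameters."""
    payload = json.dumps({"action": action, "params": params}, sort_keys=True)
    return hashlib.sha256(payload.encode()).hexdigest()

def run_write(action: str, params: dict, execute) -> str:
    """Execute a write at most once per key, recording every outcome in the audit trail."""
    key = idempotency_key(action, params)
    row = db.execute("SELECT result FROM audit WHERE key = ?", (key,)).fetchone()
    if row is not None:
        return row[0]  # Duplicate request: return the recorded result, never re-execute.
    result = json.dumps(execute(params))
    db.execute("INSERT INTO audit VALUES (?, ?, ?, ?)", (key, action, result, time.time()))
    db.commit()
    return result

# A retried call with identical parameters is served from the audit log.
send = lambda p: {"status": "sent", "to": p["to"]}
print(run_write("send_email", {"to": "ops@example.com"}, send))
print(run_write("send_email", {"to": "ops@example.com"}, send))  # no second send
```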
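For the context-bloat pitfall, a sketch of budget-aware context packing; the four-characters-per-token heuristic is an assumption, and a real pipeline would count with the target model's tokenizer:

```python
def pack_context(chunks: list[tuple[str, float]], budget_tokens: int = 2000) -> list[str]:
    """Keep the highest-scoring chunks that fit the token budget; drop the rest."""
    approx_tokens = lambda text: len(text) // 4  # crude heuristic: ~4 characters per token
    picked, used = [], 0
    for text, _score in sorted(chunks, key=lambda c: c[1], reverse=True):
        cost = approx_tokens(text)
        if used + cost <= budget_tokens:
            picked.append(text)
            used += cost
    return picked

# Only the best-scoring chunks that fit the budget reach the prompt.
packed = pack_context([("a" * 4000, 0.9), ("b" * 4000, 0.8)], budget_tokens=1500)
print(len(packed))  # 1: the second chunk would blow the budget
```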
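And for permission boundaries, a sketch of ACL filtering at retrieval time; the `Chunk` shape and group-based ACLs are illustrative assumptions, not a platform schema. The filter runs before ranking and packing so restricted text never enters the prompt:

```python
from dataclasses import dataclass, field

@dataclass
class Chunk:
    text: str
    score: float
    allowed_groups: set[str] = field(default_factory=set)  # ACL attached at index time

def retrieve_for_user(candidates: list[Chunk], user_groups: set[str], k: int = 5) -> list[Chunk]:
    """Drop chunks the user cannot read before ranking, so nothing leaks into the context."""
    visible = [c for c in candidates if c.allowed_groups & user_groups]
    return sorted(visible, key=lambda c: c.score, reverse=True)[:k]

# The finance-only chunk never reaches the context window for an eng user.
chunks = [
    Chunk("Q3 revenue forecast", 0.92, {"finance"}),
    Chunk("Deployment runbook", 0.88, {"eng", "sre"}),
]
print([c.text for c in retrieve_for_user(chunks, {"eng"})])  # ['Deployment runbook']
```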
Recommended practices
- Use strict schemas and structured outputs for critical steps (see the schema sketch below).
- Implement stateful orchestration with checkpoints and replay (see the checkpoint sketch below).
- Adopt a golden set and run evaluations continuously (see the eval gate sketch below).
- Instrument traces and cost dashboards from day one (see the tracing sketch below).
- Default tools to read-only and require explicit approvals before enabling write automation (see the approval gate sketch below).
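
A sketch of the schema-first contract using Pydantic v2; the `TicketTriage` model and the stubbed JSON responses are hypothetical, but the pattern of validating and rejecting rather than guessing is the point:

```python
from pydantic import BaseModel, Field, ValidationError

class TicketTriage(BaseModel):
    """Contract for the triage step: the model must produce exactly this shape."""
    severity: str = Field(pattern="^(low|medium|high)$")
    component: str
    summary: str = Field(max_length=200)

def parse_triage(raw: str) -> TicketTriage | None:
    """Validate model output against the schema; reject instead of guessing."""
    try:
        return TicketTriage.model_validate_json(raw)
    except ValidationError:
        return None  # Caller can retry with a repair prompt or escalate.

print(parse_triage('{"severity": "high", "component": "auth", "summary": "Login 500s"}'))
print(parse_triage('{"severity": "urgent", "component": "auth", "summary": "Login 500s"}'))  # None
```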
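A minimal sketch of checkpointed orchestration with replay, assuming a JSON-file checkpoint and a (name, callable) step contract as stand-ins for whatever durable store and step interface your orchestrator provides:

```python
import json
from pathlib import Path

CHECKPOINT = Path("run_checkpoint.json")  # hypothetical location; use durable storage in practice

def run_pipeline(steps, state=None):
    """Run named steps in order, checkpointing after each so a crash resumes, not restarts."""
    state = state or {"done": [], "outputs": {}}
    for name, fn in steps:
        if name in state["done"]:
            continue  # Replay: skip steps that already completed.
        state["outputs"][name] = fn(state["outputs"])
        state["done"].append(name)
        CHECKPOINT.write_text(json.dumps(state))  # Persist progress after every step.
    return state

def resume(steps):
    """Load the last checkpoint, if any, and continue from where it stopped."""
    state = json.loads(CHECKPOINT.read_text()) if CHECKPOINT.exists() else None
    return run_pipeline(steps, state)

# Each step reads earlier outputs from the shared state dict.
steps = [
    ("fetch", lambda out: "raw docs"),
    ("summarize", lambda out: f"summary of {out['fetch']}"),
]
print(resume(steps)["outputs"]["summarize"])
```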
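A sketch of a golden-set gate, assuming a small hand-curated set of inputs paired with required key facts; substring matching is a deliberately crude scorer that a real harness would replace with exact-match or model-graded checks:

```python
# Hypothetical golden set: inputs paired with key facts the answer must contain.
GOLDEN_SET = [
    {"input": "Reset my password", "must_contain": ["reset link", "expires"]},
    {"input": "Cancel my plan", "must_contain": ["confirmation", "billing"]},
]

def evaluate(agent, threshold: float = 0.9) -> bool:
    """Score the agent on the golden set; fail the build if the pass rate drops."""
    passed = 0
    for case in GOLDEN_SET:
        answer = agent(case["input"]).lower()
        passed += all(fact in answer for fact in case["must_contain"])
    rate = passed / len(GOLDEN_SET)
    print(f"golden-set pass rate: {rate:.0%}")
    return rate >= threshold  # Wire this into CI so regressions block the merge.

# Stub agent standing in for the real pipeline.
stub = lambda q: "Here is your reset link; it expires in 1 hour. Confirmation sent to billing."
assert evaluate(stub)
```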
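A sketch of per-call trace and cost instrumentation; the `usage` field shape mirrors common LLM client responses but is an assumption here, and the `print` stands in for a real exporter:

```python
import json
import time
from functools import wraps

def traced(fn):
    """Record latency and token counts for every call; feed these records to a dashboard."""
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        result = fn(*args, **kwargs)
        usage = result.get("usage", {})  # assumed response shape; adapt to your client
        record = {
            "fn": fn.__name__,
            "latency_s": round(time.perf_counter() - start, 3),
            "prompt_tokens": usage.get("prompt_tokens"),
            "completion_tokens": usage.get("completion_tokens"),
        }
        print(json.dumps(record))  # stand-in for a real trace/metrics exporter
        return result
    return wrapper

@traced
def call_model(prompt: str) -> dict:
    return {"text": "...", "usage": {"prompt_tokens": 12, "completion_tokens": 40}}

call_model("hello")
```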
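Finally, a sketch of the read-only default with an approval gate; the tool names and the deny-all `approver` stub are hypothetical, and production would route approval requests to a human review queue:

```python
from typing import Callable

READ_ONLY_TOOLS = {"search_docs", "get_ticket"}  # writes are opt-in, never the default

def gate_tool(name: str, fn: Callable, approver: Callable[[str], bool]) -> Callable:
    """Wrap a tool so write-capable calls require explicit approval before running."""
    def wrapped(**kwargs):
        if name not in READ_ONLY_TOOLS and not approver(f"{name}({kwargs})"):
            return {"status": "denied", "tool": name}  # denied calls are cheap and safe
        return fn(**kwargs)
    return wrapped

# Deny-all stub approver; swap in a real review-queue integration.
deny_all = lambda request: False
close_ticket = gate_tool("close_ticket", lambda **kw: {"status": "closed", **kw}, deny_all)
print(close_ticket(ticket_id=42))  # {'status': 'denied', 'tool': 'close_ticket'}
```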
This page is meant to be directly actionable for engineering teams. For platform-specific details, see /platform/agents, /platform/orchestration, and /platform/knowledge.