Module 09: Agentic Development And ADLC

Use agents to accelerate engineering without losing understanding, safety, or ownership.

ADLC (the Agentic Development Life Cycle) is the SDLC (Software Development Life Cycle) with delegated machine assistance at each stage.

human intent -> context -> agent task -> output -> human review -> verification

The agent can produce work. The human must produce judgment.

An agent is like a very fast junior engineer who has read a lot, forgets details, sometimes invents APIs, and never carries production responsibility. Treat it as useful, not authoritative.
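The pipeline above can be sketched as a chain of typed stage artifacts, where each stage wraps the one before it. This is an illustrative sketch, not a real framework; all names here are hypothetical.

```typescript
// Hypothetical stage artifacts for the ADLC pipeline:
// intent -> context -> task -> output -> review.
interface Intent { goal: string; constraints: string[] }
interface Context { intent: Intent; files: string[] }
interface AgentTask { context: Context; assignment: string }
interface AgentOutput { task: AgentTask; patch: string }
interface Review { output: AgentOutput; approved: boolean; notes: string[] }

// The review step is deliberately a human input, not an agent call:
// the agent produced the work; the human supplies the judgment.
function review(output: AgentOutput, approved: boolean, notes: string[]): Review {
  return { output, approved, notes };
}
```

The point of the nesting is traceability: any accepted patch can be walked back to the intent and constraints that produced it.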

| Role | Good For | Risk |
| --- | --- | --- |
| Explorer | Mapping repos and finding files | May miss runtime behavior |
| Planner | Breaking down work | May ignore constraints |
| Worker | Bounded implementation | May over-edit |
| Reviewer | Finding bugs and gaps | May produce false positives |
| Operator | Summarizing logs and commands | May misread environment |
| Documenter | Drafting runbooks and handoffs | May sound confident but vague |

The Agentic Development Control Loop

1. Goal: human defines outcome and constraints.
2. Context: files, commands, contracts, risks.
3. Task: bounded agent assignment.
4. Output: patch, map, test plan, or summary.
5. Review: human inspects diff and assumptions.
6. Verify: tests and runtime proof.
7. Accept or revise: integrate, reject, or retask.
8. Observe: document behavior after change.
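The accept-or-revise portion of the loop can be sketched as a bounded retry function. This is a minimal sketch under stated assumptions: `runTask` and `reviewAndVerify` stand in for the agent call and the human review plus test run, and every name is hypothetical.

```typescript
// Minimal sketch of the accept-or-revise loop with a retry budget.
type Verdict = "accept" | "revise" | "reject";

interface LoopResult { verdict: Verdict; attempts: number }

function controlLoop(
  runTask: (feedback: string[]) => string,      // agent produces output
  reviewAndVerify: (output: string) => Verdict, // human review + verification
  maxAttempts = 3,
): LoopResult {
  const feedback: string[] = [];
  for (let attempt = 1; attempt <= maxAttempts; attempt++) {
    const output = runTask(feedback);
    const verdict = reviewAndVerify(output);
    if (verdict !== "revise") return { verdict, attempts: attempt };
    // Retasking means feeding review notes back in, not re-rolling blindly.
    feedback.push(`attempt ${attempt} needs revision`);
  }
  // An agent that cannot converge within the budget gets rejected,
  // not merged out of fatigue.
  return { verdict: "reject", attempts: maxAttempts };
}
```

The retry budget is the important design choice: it forces a human decision point instead of an open-ended back-and-forth.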

Every serious agent task should include:

  • Goal.
  • Current behavior.
  • Desired behavior.
  • Relevant files.
  • Constraints.
  • Out of scope.
  • Verification commands.
  • Expected response format.
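The fields above can be captured as a task template. The shape is illustrative (the interface and field names are assumptions, not a standard), and the example values are drawn from the validation task described later in this module.

```typescript
// Hypothetical shape for a bounded agent task / context pack.
interface AgentTaskSpec {
  goal: string;
  currentBehavior: string;
  desiredBehavior: string;
  relevantFiles: string[];
  constraints: string[];
  outOfScope: string[];
  verificationCommands: string[];
  expectedFormat: string;
}

// Example: the ticket-title validation task from this module.
const example: AgentTaskSpec = {
  goal: "Reject ticket creation when the title is missing",
  currentBehavior: "POST /tickets accepts an empty title",
  desiredBehavior: "POST /tickets returns a validation error",
  relevantFiles: ["routes/tickets.ts"],
  constraints: ["Server-side validation only", "Add tests"],
  outOfScope: ["UI files"],
  verificationCommands: ["npm test"],
  expectedFormat: "Unified diff plus a short summary of assumptions",
};
```

Filling in every field before handing work to an agent is what makes the task bounded; an empty `outOfScope` is usually a sign the task is still too vague.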
Do not assign tasks like:

  • “Build the whole app.”
  • “Fix all bugs.”
  • “Make this production ready.”
  • “Deploy this.”
  • “Refactor everything.”

These are vague and dangerous. Bounded, reviewable tasks look like this:

  • “Inspect the ticket creation path and return files, data flow, tests, and risks. Do not edit files.”
  • “Add server-side validation for missing ticket title in routes/tickets.ts. Add tests. Do not touch UI files.”
  • “Review this diff for auth bypass, duplicate writes, missing tests, and behavior regressions.”

Agent output is not accepted until:

  • Diff is understood.
  • Tests run.
  • Runtime behavior is checked if user-facing.
  • Security/data assumptions are reviewed.
  • Docs/contracts are updated if behavior changed.
  • Remaining risks are written down.
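The acceptance checklist above can be sketched as a gate function, with the conditional items ("if user-facing", "if behavior changed") modeled as implications. A minimal sketch; the names are hypothetical.

```typescript
// Sketch of an acceptance gate mirroring the checklist above.
interface AcceptanceChecks {
  diffUnderstood: boolean;
  testsPassed: boolean;
  userFacing: boolean;
  runtimeChecked: boolean;   // required only when userFacing is true
  securityReviewed: boolean;
  behaviorChanged: boolean;
  docsUpdated: boolean;      // required only when behaviorChanged is true
  risksRecorded: boolean;
}

function canAccept(c: AcceptanceChecks): boolean {
  return (
    c.diffUnderstood &&
    c.testsPassed &&
    (!c.userFacing || c.runtimeChecked) &&   // user-facing implies runtime check
    c.securityReviewed &&
    (!c.behaviorChanged || c.docsUpdated) && // behavior change implies doc update
    c.risksRecorded
  );
}
```

Encoding the checklist this way makes the failure mode explicit: a patch with green tests but an un-reviewed diff still does not pass the gate.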
Before assigning and after reviewing, ask:

  • What context does the agent need?
  • What should the agent not touch?
  • What would make the output unacceptable?
  • What tests prove the output?
  • What claims need manual verification?
  • What did the agent miss?
Portfolio exercise:

  1. Create portfolio/09-agentic-development-adlc/agentic-log.md.
  2. Write a context pack for a feature.
  3. Ask an agent to map the relevant code.
  4. Ask an agent for a plan.
  5. Edit the plan.
  6. Implement one bounded slice.
  7. Ask an agent to review the diff.
  8. Run verification.
  9. Document accepted and rejected agent output.
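For step 9, it helps to give each log entry a fixed shape so accepted and rejected output are documented the same way. This entry shape is an assumption, not a prescribed format for `agentic-log.md`; the example values are hypothetical.

```typescript
// Hypothetical entry shape for agentic-log.md (step 9 above).
interface LogEntry {
  task: string;
  agentOutputSummary: string;
  decision: "accepted" | "rejected" | "revised";
  verificationEvidence: string[]; // commands run or checks performed
  reasoning: string;              // why the human decided as they did
}

const entry: LogEntry = {
  task: "Map the ticket creation path",
  agentOutputSummary: "Listed files, data flow, and two untested branches",
  decision: "accepted",
  verificationEvidence: ["Cross-checked the file list against the repo"],
  reasoning: "Map matched the code; untested branches were logged as risks",
};
```

Recording the reasoning, not just the decision, is what lets you explain every accepted change later.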
The module is complete when:

  • Agent tasks are specific.
  • Outputs are reviewed, not trusted.
  • Verification evidence exists.
  • The learner can explain every accepted change.