
The End of “Faster Horse Chariots”

For decades, software development methodologies evolved incrementally. Waterfall gave way to Agile. Sprints replaced long release cycles. Story points replaced time estimates. Each iteration made us marginally faster. Then AI arrived—and broke the entire model.
“Retrofitting AI into existing methods not only limits its potential, but also reinforces outdated inefficiencies. To fully leverage AI’s transformative power, SDLC methods need to be reimagined.” — AWS AI-DLC Whitepaper
We don’t need faster horse chariots. We need automobiles.

Three Eras of AI in Development

| Era | Human Role | AI Role | Paradigm |
| --- | --- | --- | --- |
| AI-Assisted (2020-2023) | Primary creator | Autocomplete, suggestions | Human drives, AI helps |
| AI-Driven (2023-2025) | Validator, decision-maker | Generates code, plans, tests | AI proposes, human approves |
| Agentic (2025+) | Supervisor, architect | Autonomous multi-step execution | AI executes, human oversees |
We’re now in the Agentic Age—where AI agents don’t just assist; they autonomously plan, reason, and execute complex workflows. This isn’t a minor upgrade. It’s a paradigm shift that renders traditional methodologies obsolete.

Why Sprints Don’t Work Anymore

Two Weeks Is No Longer Fast

AI-enabled development produces working prototypes in hours. A two-week sprint is not rapid iteration—it’s an artificial delay that queues work behind an arbitrary time boundary.
When concept-to-working-code happens in an afternoon, waiting twelve more days for a sprint boundary serves no purpose except ceremony compliance.

The Cost of Code Has Collapsed

Agile assumed producing code was expensive because human effort was expensive. The methodology optimized for producing less code more carefully. AI inverted this assumption. Code generation now costs minutes, not days.

Estimation Becomes Meaningless

| Traditional Metric | Problem in AI Era |
| --- | --- |
| Story Points | AI execution time bears no relation to human effort estimates |
| Velocity | Fluctuates wildly based on AI tool usage, not team capability |
| Sprint Planning | Creates artificial delays for completed work waiting for ceremonies |
| Daily Standups | Consume time sharing information automated systems could surface instantly |
“Would effort estimation (story points) be as critical if AI diminishes the boundaries between simple, medium, and hard tasks? Would metrics like velocity be relevant, or should we replace it with Business Value?” — AWS AI-DLC Whitepaper

The V-Bounce Model: Humans as Validators

The V-Bounce paper from Crowdbotics introduced a foundational insight:

Core Insight

The role of humans shifts from primary implementers to validators and verifiers.

Traditional V-Model vs V-Bounce

| Aspect | Traditional V-Model | V-Bounce |
| --- | --- | --- |
| Implementation Phase | Substantial (weeks/months) | Drastically reduced (hours/days) |
| Human Role | Hands-on coding | Validation and verification |
| Emphasis | Code production | Requirements + Architecture + Continuous validation |
| AI Role | None/minimal | End-to-end: planning → code → tests → maintenance |

Three Core Assumptions

1. Near-Instantaneous Code Generation: LLMs enable rapid generation of high-quality code.
2. Natural Language as Primary Interface: Programming is becoming language-driven.
3. Humans as Verifiers: Human roles shift from creators to sophisticated validators.

Empirical Results

  • 55.8% faster task completion with AI tools (GitHub Copilot study)
  • 70%+ efficiency in generating test suites with AI
  • Enhanced early bug detection and overall software quality

AI-DLC: The Methodology for the Agentic Age

AWS’s AI-Driven Development Lifecycle (AI-DLC) takes these insights and builds a complete, production-ready methodology.

Core Principle: Reimagine, Don’t Retrofit

“We need automobiles, not faster horse chariots.”
AI-DLC doesn’t bolt AI onto existing processes. It rebuilds from first principles for an AI-native world.

The Reversed Conversation

In traditional development, humans prompt AI:
Human: "Write a function that calculates tax"
AI: [generates code]
Human: "Now add error handling"
AI: [updates code]
In AI-DLC, AI drives the conversation:
AI: "I've analyzed your intent. Here are 3 Units I propose,
     with 12 user stories. I have 5 clarifying questions
     before we proceed. Question 1: What's your compliance
     framework for tax calculations?"
Human: [validates, approves, or redirects]
This is like Google Maps: humans set the destination, AI provides step-by-step directions, humans maintain oversight.
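The reversed conversation can be sketched as a small validation loop. This is a hypothetical illustration, not a real AI-DLC or specs.md API: the names `propose_plan` and `validate` are invented here purely to show the shape of "AI proposes, human validates."

```python
# Hypothetical sketch of the reversed conversation: the AI drives by
# proposing a plan with open questions; the human's job is to answer
# those questions and approve or redirect. All names are illustrative.

from dataclasses import dataclass, field


@dataclass
class Proposal:
    units: list[str]
    stories: list[str]
    questions: list[str]          # clarifying questions the AI surfaces up front
    approved: bool = False


def propose_plan(intent: str) -> Proposal:
    # Stand-in for an AI agent analyzing the intent and proposing Units/Stories.
    return Proposal(
        units=[f"{intent}: core calculation", f"{intent}: reporting"],
        stories=["Compute tax for a line item", "Aggregate totals per invoice"],
        questions=["Which compliance framework applies to tax calculations?"],
    )


def validate(proposal: Proposal, answers: dict[str, str]) -> Proposal:
    # The human's role: approval is gated on every clarifying question
    # having an answer, mirroring "AI proposes, human validates".
    if all(q in answers for q in proposal.questions):
        proposal.approved = True
    return proposal


plan = propose_plan("Tax engine")
plan = validate(
    plan,
    {"Which compliance framework applies to tax calculations?": "US sales tax"},
)
print(plan.approved)  # True once every question has an answer
```

The key inversion is that the AI's proposal carries the open questions, and approval is blocked until the human has answered all of them.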

Three Phases, Not Endless Sprints

| Phase | Ritual | Duration | Output |
| --- | --- | --- | --- |
| Inception | Mob Elaboration | Hours | Intents → Units → Stories |
| Construction | Mob Construction | Hours/Days (Bolts) | Domain Design → Code → Tests |
| Operations | Continuous | Ongoing | Deployment, monitoring, maintenance |

Bolts Replace Sprints

| Sprints | Bolts |
| --- | --- |
| 2-4 weeks | Hours or days |
| Fixed timeboxes | Flexible, intent-driven |
| Velocity measured | Business value measured |
| Story points estimated | AI executes, humans validate |

Mob Rituals: Collaborative AI Alignment

Mob Elaboration (Inception)

  • Product managers, developers, and QA collaborate with AI from the start
  • AI proposes a breakdown into Units and Stories
  • The team validates together in a single room with a shared screen
  • What took months now takes hours

Mob Construction (Construction)

  • Teams work in parallel after domain modeling
  • AI generates component models, sequence diagrams, and functional flows
  • The team provides real-time clarification on technical decisions
  • This prevents hallucinations and poor design choices

Why You Don’t Need Other Spec-Driven Tools

The Landscape Today

| Tool | Philosophy | Limitation |
| --- | --- | --- |
| Spec Kit | Lightweight toolkit | No methodology, human-driven |
| BMAD | 19-agent simulation | Complex, no formal methodology |
| OpenSpec | Change-centric | No lifecycle, brownfield-only |
| Kiro | IDE-integrated | Vendor lock-in, no team rituals |

What They’re Missing

These tools focus on specifications—they help you write better prompts and structure your requirements. But specifications alone don’t solve the fundamental problem:
Traditional methods were built for long iteration cycles (weeks and months), which gave rise to rituals like daily standups and retrospectives. With AI, iteration cycles are measured in hours or days, demanding continuous, real-time validation and feedback mechanisms and rendering many traditional rituals less relevant.

AI-DLC Is Different

AI-DLC isn’t a tool—it’s a methodology that includes:
  • Formal phases (Inception → Construction → Operations)
  • Defined rituals (Mob Elaboration, Mob Construction)
  • Design integration (DDD as core, not optional)
  • Reversed conversation (AI proposes, human validates)
  • New artifacts (Intents, Units, Bolts)
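One way to picture how the new artifacts relate: an Intent decomposes into Units, Units into Stories, and Bolts are the short execution cycles that deliver them. The sketch below uses assumed names and fields, not the official AI-DLC schema.

```python
# Illustrative model (not AWS's official AI-DLC schema) of how the
# new artifacts nest: Intent -> Units -> Stories, executed in Bolts
# (hours/days) rather than multi-week sprints.

from dataclasses import dataclass, field


@dataclass
class Story:
    title: str


@dataclass
class Unit:
    name: str
    stories: list[Story] = field(default_factory=list)


@dataclass
class Intent:
    description: str
    units: list[Unit] = field(default_factory=list)


@dataclass
class Bolt:
    # A Bolt is a short, intent-driven execution cycle.
    goal: str
    duration_hours: int


intent = Intent("Customer-facing tax calculator")
intent.units.append(Unit("Tax engine", [Story("Compute tax per line item")]))

# A Bolt picks up a Story and aims to deliver it within hours, not weeks.
bolt = Bolt(goal=intent.units[0].stories[0].title, duration_hours=6)
print(len(intent.units), bolt.duration_hours)  # 1 6
```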

specs.md: Three Flows for Every Use Case

specs.md is an AI-native development framework with pluggable flows. Choose the level of methodology that matches your project needs.

Simple Flow

Spec Generation Only. Quick requirements, design, and task documents without execution tracking.
  • 1 agent, 3 phase gates
  • Kiro-style workflow
  • Best for: prototypes, handoff

FIRE Flow

Adaptive Execution. Ship in hours with adaptive checkpoints and first-class brownfield support.
  • 3 agents, adaptive checkpoints
  • Monorepo & brownfield ready
  • Optimized for: teams who hate friction

AI-DLC Flow

Full Methodology. Complete AI-DLC implementation with DDD and comprehensive traceability.
  • 4 agents, 10-26 checkpoints
  • Mob rituals, DDD as core
  • Best for: teams, complex domains
Not sure which flow? Check out our Choose Your Flow guide.
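The flow choice above boils down to two questions: do you need execution tracking at all, and how complex is the domain? A hypothetical helper (not part of specs.md's real API or CLI) sketching that decision logic:

```python
# Hypothetical decision helper, invented for illustration only;
# specs.md does not necessarily expose such a function.

def choose_flow(needs_execution_tracking: bool, complex_domain: bool) -> str:
    if not needs_execution_tracking:
        # Simple Flow: spec generation only (requirements, design, tasks).
        return "simple"
    if complex_domain:
        # AI-DLC Flow: full methodology with mob rituals and DDD.
        return "ai-dlc"
    # FIRE Flow: adaptive execution, monorepo/brownfield ready.
    return "fire"


print(choose_flow(needs_execution_tracking=False, complex_domain=False))  # simple
print(choose_flow(needs_execution_tracking=True, complex_domain=True))    # ai-dlc
print(choose_flow(needs_execution_tracking=True, complex_domain=False))   # fire
```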

The Bottom Line

| Old World | New World |
| --- | --- |
| Sprints (weeks) | Bolts (hours/days) |
| Story points | Business value |
| Human codes, AI assists | AI proposes, human validates |
| Retrofit AI into Agile | Reimagine from first principles |
| Specification tools | Complete methodology (AI-DLC) |
“AI might be the death of Agile, but it’s the beginning of true agility.”

Further Reading

  • AI-DLC Whitepaper (AWS): the original AWS whitepaper defining the AI-DLC methodology
  • V-Bounce Paper (arXiv): the AI-native software development lifecycle research paper
  • AWS DevOps Blog on AI-DLC: blog post on reimagining software engineering
  • Compare Tools: see how specs.md compares to Spec Kit, BMAD, Kiro, and OpenSpec