
Prompt Driven Development: The Philosophy

There’s a quiet shift happening in how software gets built. It’s not just that developers are using AI tools — it’s that the act of prompting is becoming a core part of the development workflow itself. Some are calling this Prompt Driven Development, or PDD. And if you’re using AI tools daily without a clear structure around it, you’re probably leaving a lot on the table.


What is Prompt Driven Development?

PDD is a development approach where prompts — instructions given to AI models — are treated as first-class artifacts, not throwaway inputs. Andrew Miller, one of the earliest writers to formalize the term, describes it as a workflow where the developer is primarily prompting an LLM to generate all necessary code, with the developer reviewing changes rather than writing code themselves. Just as traditional development has source code, tests, and documentation, PDD has a structured set of prompts, context files, and outputs that together define how a project is built.

The key mental shift: your prompts are the source of truth, not just a means to an end.

Microsoft’s developer content team puts it similarly: prompts should be saved, documented, and versioned to capture architectural intent — just like code or design documents.

This matters because without structure, AI-assisted development tends to drift. You end up re-explaining the same context in every session, regenerating code that contradicts earlier decisions, and losing the reasoning behind why things were built a certain way. PDD solves this by making context explicit and persistent.


Why Structure Matters

A common trap with AI tools is treating them like a smarter autocomplete. You type a vague request, get something plausible-looking, copy it in, and move on. This works fine for one-off tasks — but it doesn’t scale.

Here’s what happens without structure:

```mermaid
graph LR
    subgraph "❌ Without Structure"
        A[New Session] --> B[Re-explain context]
        B --> C[Generate code]
        C --> D[Contradicts past decisions]
        D --> A
    end

    subgraph "✅ With PDD"
        E[New Session] --> F[Load context files]
        F --> G[Focused prompt]
        G --> H[Verify & review]
        H --> I[Commit & update context]
        I --> E
    end

    style A fill:#e74c3c,stroke:#c0392b,color:#fff
    style B fill:#e74c3c,stroke:#c0392b,color:#fff
    style C fill:#e74c3c,stroke:#c0392b,color:#fff
    style D fill:#e74c3c,stroke:#c0392b,color:#fff
    style E fill:#27ae60,stroke:#1e8449,color:#fff
    style F fill:#27ae60,stroke:#1e8449,color:#fff
    style G fill:#27ae60,stroke:#1e8449,color:#fff
    style H fill:#27ae60,stroke:#1e8449,color:#fff
    style I fill:#27ae60,stroke:#1e8449,color:#fff
```

Context drift: Every new session starts cold. The AI doesn’t know your stack, your conventions, or what you built last week.

Decision amnesia: You make an architectural call in conversation, it never gets written down, and three weeks later the AI (or a teammate) proposes something that contradicts it.

Opaque outputs: Code generated without documented intent is hard to debug, harder to hand off, and hardest to extend.

Structure doesn’t slow you down. It’s what lets PDD compound over time instead of creating a sprawling mess.


The Four Layers

```mermaid
graph TD
    C["Context Layer\nPermanent project briefing"]
    P["Prompt Layer\nModular, single-purpose instructions"]
    O["Output Layer\nReviewed & accepted artifacts"]
    E["Eval Layer\nValidation & checklists"]

    C -->|informs| P
    P -->|generates| O
    O -->|validated by| E
    E -->|refines| C

    style C fill:#4a90d9,stroke:#2c5f8a,color:#fff
    style P fill:#7b68ee,stroke:#5a4dba,color:#fff
    style O fill:#50c878,stroke:#3a9a5c,color:#fff
    style E fill:#f4a460,stroke:#c4824a,color:#fff
```

Context layer is what the AI always needs to know. Think of it as the permanent briefing you’d give a new contractor on day one. It gets prepended to every significant prompt session.
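As a sketch, here is what seeding a context file might look like. The file path follows the pdd/ layout used in this article; the project details (name, stack, constraints) are placeholder assumptions, not a prescribed template:

```shell
# Illustrative sketch: a minimal context-layer file.
# The project described here is invented for the example.
mkdir -p pdd/context
cat > pdd/context/project.md <<'EOF'
# Project Brief

## What this is
A REST API for managing book loans in a small library. (example)

## Stack
Node 20, TypeScript, Fastify, PostgreSQL. (example)

## Constraints
- All database queries must be parameterized
- No new runtime dependency without a decisions.md entry
EOF
```

The point is not the exact headings but that this file, prepended to each session, replaces the re-explaining you would otherwise do by hand.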

Prompt layer is the actual instructions — kept modular and single-purpose. A prompt that tries to do five things produces worse results than five focused prompts chained together.
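A single-purpose prompt file might look like the sketch below. The feature, paths, and endpoint are assumptions invented for the example; the useful habits are naming the context files it depends on and stating what is out of scope:

```shell
# Illustrative sketch: one focused prompt file, one job.
mkdir -p pdd/prompts/features/loans
cat > pdd/prompts/features/loans/create-loan-endpoint.md <<'EOF'
# Prompt: create-loan endpoint

Context: pdd/context/project.md, pdd/context/conventions.md

Task: Add POST /loans. Validate the body against the Loan schema,
reject borrowers with overdue items, return 201 with the created record.

Out of scope: UI changes, notification emails (separate prompts).
EOF
```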

Output layer is where reviewed, accepted AI-generated code or content lives. Nothing goes here without being read and understood.

Eval layer is how you know your prompts are still working. Even simple checklists beat nothing.
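Even the "simple checklist" version of the eval layer can live as a file. The items below are assumptions to start from, not a canonical list; grow it as your prompts fail in new ways:

```shell
# Illustrative sketch: a minimal eval checklist. Items are starting
# points, not a prescribed standard.
mkdir -p pdd/evals
cat > pdd/evals/checklist.md <<'EOF'
# Before accepting any generated artifact
- [ ] Builds cleanly
- [ ] Tests pass (including tests the AI wrote -- read them)
- [ ] Follows conventions.md (naming, error handling)
- [ ] No hardcoded secrets or credentials
- [ ] Prompt and output committed together
EOF
```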


Search Before Building, Plan Before Prompting

Two of the most common failure modes in AI-assisted development:

  1. Building what already exists. A developer prompts a custom validation layer when zod does the job. A team writes a custom auth flow when their framework has one built in. The AI is happy to build whatever you ask — it won’t tell you not to.

  2. Prompting without a plan. A developer writes a monolithic prompt for a feature that spans schema, API, and UI. The output is too large, too tangled, and too hard to review. Or they write prompts in the wrong order and discover halfway through that a dependency is missing.

The fix for both is the same: slow down before you speed up.

Before writing a prompt, check whether a library, MCP server, framework built-in, or existing codebase pattern already solves the problem. If the answer is “build it,” decompose the feature into phases where each phase produces one testable artifact and maps to one prompt. The plan catches implicit decisions before they become embedded in generated code.
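A plan can be as lightweight as a file next to the prompts it orders. The sketch below is illustrative; the feature, phase names, and the date-fns choice are invented for the example:

```shell
# Illustrative sketch: a phase plan where each phase maps to exactly
# one prompt and one testable artifact.
mkdir -p pdd/prompts/features/loans
cat > pdd/prompts/features/loans/PLAN.md <<'EOF'
# Plan: loan management

Phase 1 -> schema.md        (artifact: migration + Loan type)
Phase 2 -> api-endpoints.md (artifact: POST/GET /loans, depends on 1)
Phase 3 -> ui-list.md       (artifact: loans table view, depends on 2)

Searched first: no existing loans module; date math -> use date-fns.
EOF
```

Writing the dependency order down is what surfaces the "Phase 2 needs Phase 1's schema" kind of problem before any code is generated.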

This doesn’t apply to every feature. Simple, single-prompt tasks should skip straight to prompting. But for anything that spans multiple files or layers — search first, plan second, prompt third.


Project Type Flavors

The core PDD structure is universal — it works for any kind of project. But where projects genuinely differ is in the content of those files and the criteria for good output.

```mermaid
mindmap
  root((PDD Core))
    Frontend / UI
    Backend / API
    Mobile
    Data / ML
    DevOps / Infra
    Full-stack
    Library / Package
    CLI / Dev Tools
    Embedded / IoT
    Game Dev
    Blockchain
    Security
    API Platform
    Desktop GUI
    Compiler
    Robotics
```
| Flavor | Key concerns |
| --- | --- |
| Frontend / UI | Design systems, component naming, accessibility, state management |
| Backend / API | Schema design, auth patterns, error handling, parameterized queries |
| Mobile | Platform constraints, offline-first, permissions, app store readiness |
| Data / ML | Dataset provenance, model selection, eval metrics, pipeline idempotency |
| DevOps / Infra | IaC conventions, blast radius, secret management, change safety |
| Full-stack | Client/server boundary, shared types, API contracts |
| Library / Package | Public API design, semver, dependency policy, tree-shaking, multi-environment support |
| CLI / Developer Tools | Argument parsing, exit codes, signal handling, piped output, cross-platform behavior, shell completions |
| Embedded / IoT | Memory constraints, interrupt safety, power budgets, cross-compilation, OTA updates, real-time behavior |
| Game Development | Frame budgets, ECS architecture, asset pipelines, physics/rendering integration, platform certification |
| Blockchain / Smart Contracts | Security patterns (reentrancy, access control), gas optimization, upgradeability, audit readiness, on-chain math |
| Security / Pentesting Tools | Detection quality, false positive management, safe defaults, responsible disclosure, SIEM integration, adversarial input handling |
| API Platform / SDK | Backward compatibility, SDK generation, error design, rate limiting, pagination, webhooks, developer experience |
| Desktop / Native GUI | Window management, OS integration, code signing, auto-updates, cross-platform behavior, memory budgets, accessibility |
| Compiler / Language Tooling | Parsing, AST design, type systems, error recovery, source span tracking, incremental compilation, LSP integration |
| Robotics / ROS | Real-time control loops, sensor fusion, simulation-first development, safety systems, coordinate frames, ROS2 node architecture |

A React app and a Python data pipeline both need a project.md and versioned prompts. What changes is what goes inside them. The Library flavor is composable — a React component library would combine it with the Frontend flavor, a Python ML toolkit with the Data / ML flavor.


Starting a Fresh Project

Before writing a single prompt, set up the skeleton:

```shell
mkdir my-project && cd my-project
git init
mkdir -p pdd/{prompts/{features,templates,experiments},context,evals/{baselines,scripts}} src
touch pdd/context/project.md pdd/context/conventions.md pdd/context/decisions.md README.md
```

```mermaid
flowchart TD
    A["1. Scaffold"] --> B["2. Write Context"]
    B --> S{"Complex feature?"}
    S -- Yes --> R["3. Search"]
    R --> P["4. Plan"]
    P --> C
    S -- No --> C["5. Write Prompt"]
    C --> D["6. Review"]
    D --> E["7. Commit"]
    E --> F{"Decision made?"}
    F -- Yes --> G["Log in decisions.md"]
    F -- No --> C
    G --> C

    style A fill:#3498db,stroke:#2471a3,color:#fff
    style B fill:#3498db,stroke:#2471a3,color:#fff
    style S fill:#f1c40f,stroke:#d4ac0d,color:#333
    style R fill:#1abc9c,stroke:#17a589,color:#fff
    style P fill:#1abc9c,stroke:#17a589,color:#fff
    style C fill:#9b59b6,stroke:#7d3c98,color:#fff
    style D fill:#e67e22,stroke:#ca6f1e,color:#fff
    style E fill:#27ae60,stroke:#1e8449,color:#fff
    style F fill:#f1c40f,stroke:#d4ac0d,color:#333
    style G fill:#1abc9c,stroke:#17a589,color:#fff
```

The quick path for simple features is Context → Prompt → Review → Commit. For complex features, add Search and Plan before prompting — they catch missing dependencies, wrong decomposition, and “don’t build what already exists” moments.

Then invest 30 minutes writing pdd/context/project.md — the permanent day-one briefing described in the Context layer above: what the project is, what it runs on, and what constraints every change must respect.

Follow that with a lean pdd/context/conventions.md — even 10 lines covering naming, file structure, and error handling. You’ll grow it over time.

For each new feature: write a prompt in pdd/prompts/features/<area>/, run it, review the output, and commit both. If you made any architectural decision in the process, capture it in pdd/context/decisions.md.
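Capturing a decision can be a two-minute append. The entry below is a sketch; the date/decision/why format is an assumption rather than a prescribed schema, and the zod example echoes the one earlier in this article:

```shell
# Illustrative sketch: appending one architectural decision record.
mkdir -p pdd/context
cat >> pdd/context/decisions.md <<'EOF'

## 2025-01-15: Use zod for input validation
Why: framework-agnostic, already a transitive dependency.
Rejected: custom validation layer (see "don't build what exists").
EOF
```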

The ongoing discipline: every session starts by asking — is my context still current?


Integrating Into an Existing Project

Retrofitting PDD is trickier, but very doable if you resist doing it all at once. The Init workflow (/project:pdd-init in Claude Code, /pdd-init in Copilot) automates the first step — it scans your existing project, detects the tech stack, conventions, and source layout, then creates the pdd/ structure without touching your code.

```mermaid
gantt
    title PDD Adoption Timeline
    dateFormat X
    axisFormat Week %s

    section Observe
        Log prompts & pain points           :active, w1, 1, 2

    section Initialize
        Run pdd-init & write context files  :w2, 2, 3

    section Apply
        PDD on new features only            :w3, 3, 5
```

Week 1 — observe before changing: Run your normal workflow, but log what you ask the AI, which prompts work well, and what context you re-explain repeatedly.

Week 2 — initialize and build context: Run pdd-init to create the pdd/ structure and detect your project’s stack. Then fill in pdd/context/project.md describing the project as it currently is and pdd/context/decisions.md retroactively — capturing the big decisions already made.

Week 3 onward — apply the full workflow to new work only: Don’t PDD-ify your entire existing codebase. Apply the full workflow only to new features and significant changes.


Rules of Thumb

Don’t build what already exists. Search before prompting — a library, framework built-in, or existing pattern may already solve the problem.

One prompt, one job. Decompose aggressively before prompting. For complex features, write a plan first.

Version your prompts. A prompt that worked last week may not work after a model update.
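Since prompts live in the repo, plain git already versions them; the only extra habit is recording which model a prompt was last verified against. The footer convention and file path below are assumptions, not part of PDD itself:

```shell
# Illustrative sketch: mark a prompt as verified against a model.
# The path and the footer format are invented conventions.
f=pdd/prompts/features/loans/create-loan-endpoint.md
mkdir -p "$(dirname "$f")"
printf '%s\n' '<!-- verified: model-x, 2025-01-15 -->' >> "$f"
# Commit the prompt alongside the code it generated:
#   git add "$f" && git commit -m "prompt: verified against model-x"
# Later, inspect how the prompt evolved across model updates:
#   git log --oneline -- "$f"
```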

Document intent, not just output. Note why you prompted it that way.

Timebox experiments. Exploratory prompts go in /experiments with a date. If they don’t graduate within a week, delete them.

Never treat raw output as done. Verify it builds, passes tests, and has no security issues. Then review it like you’d review a PR.
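The "verify before review" step can be mechanized as a small gate script. This is a sketch: the echo lines stand in for your project's real build, test, and audit commands, which you would substitute in:

```shell
# Illustrative sketch: a verification gate for generated output.
# Replace the placeholder echoes with real commands for your stack.
mkdir -p pdd/evals/scripts
cat > pdd/evals/scripts/verify.sh <<'EOF'
#!/bin/sh
set -e
echo "build..."   # e.g. npm run build
echo "tests..."   # e.g. npm test
echo "audit..."   # e.g. npm audit --audit-level=high
echo "OK: output verified, ready for human review"
EOF
chmod +x pdd/evals/scripts/verify.sh
sh pdd/evals/scripts/verify.sh
```

Because of `set -e`, the script stops at the first failing command, so "OK" only prints when every gate passed — and only then does human review start.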


The Biggest Mistake to Avoid

Building a long, tangled single conversation and treating it as your project. Context windows end. Models get updated. Teammates can’t see your chat history.

Anything important that emerges in a session — a decision, a pattern that worked, a constraint you discovered — needs to be extracted into your pdd/context/ or pdd/prompts/ directories before the session ends. Otherwise it’s gone.


Closing Thought

PDD isn’t magic, and it won’t replace engineering judgment. What it does is give that judgment a structured interface to work through. The developers who get the most out of it are the ones who use it to move faster on things they already understand — not to skip understanding entirely.

The analogy that keeps coming to mind: calculators didn’t make math literacy less important. They raised the stakes for knowing when and what to calculate.

The same applies here. The better your prompts, context, and review discipline — the better the AI works for you.
