The Articulation Method
How I build software
without writing code
Software, websites, and AI tools.
No code. No dev team. Just articulation.
I built a SaaS product, a diagnostic scorecard, a content engine, and three websites in six weeks. Here's the method.
~/projects/scorecard
$ |
The ceiling nobody talks about
Most people's relationship with AI looks like this: open ChatGPT, type a vague question, get a vague answer, feel underwhelmed.
Or they watch someone on YouTube ship an app on Lovable in 20 minutes and assume that's what “building with AI” looks like.
Platforms like Lovable, Replit, and Bolt have done something genuinely useful. They've made the idea of building software feel accessible. But there's a ceiling. You're working inside someone else's constraints, with someone else's defaults, producing something that looks and feels like everything else shipped on those platforms.
What I'm talking about is different. A blank canvas. A terminal. An AI agent that writes production code to your specification, with no wrapper, no template, no guardrails you didn't choose.
The output doesn't feel like an MVP. It feels like something you'd be proud to ship on day one.
The skill gap between “I used a builder tool” and “I built this” isn't technical. It's articulation.
The real leverage isn't technical
Here's what nobody tells you about working with AI: the most valuable skill isn't prompt engineering or knowing how to code. It's knowing your domain deeply enough to describe what needs to exist.
Every operator, founder, and business owner carries years of accumulated knowledge. Pricing decisions, customer patterns, workflow bottlenecks, edge cases. Most of it feels so obvious to you that you'd never think to write it down.
But a machine doesn't have your intuition. It needs all of it made explicit. And the process of making it explicit, through conversation, through dictation, through being pushed with clarifying questions, is where the real breakthroughs happen. You discover things about your own business that you didn't know you knew.
A developer with no domain knowledge builds generic software.
An operator who can articulate their domain builds something that solves their actual problem.
That's the gap this method closes.
Reveal the map before the build
AI is an autonomous driver. Extraordinarily capable, but it needs a map. Your job isn't to write code. Your job is to describe the destination so precisely that the route becomes obvious.
A task estimated at 30 hours by a traditional developer? The AI can execute the build in 30 minutes. But getting to that point means you've probably spent 3 hours hashing out the spec, running initial tests, realising where you didn't want to go, and refining until the map is precise enough that the builder barely needs to ask questions.
That's still a fraction of the traditional timeline. But the time compression doesn't come from AI being fast at typing code. It comes from front-loading the thinking.
The spec is the work. The build is the output.
I call this the Articulation Method. Not because the name matters, but because the skill does. The entire approach rests on one ability: describing what you want to exist with enough precision that an AI can build it.
One document governs everything
Every project starts with a single document. I call it the master spec. It's a living file that captures everything the AI needs to know: what we're building, why, the constraints, the design system, the tone of voice, the technical decisions, what “done” looks like.
This isn't a brief you write once and hand over. It evolves. As you build, you learn things about your own requirements that you couldn't have known at the start. Those learnings feed back into the master spec, making every subsequent build faster and more precise.
From the master spec, you derive sub-specs. Each one covers a specific feature, page, or component. A sub-spec inherits the context of the master, so you're never re-explaining your design system or business logic, but adds the granular detail needed for that particular piece of work.
# The Operator AI Readiness Score — Sprint Spec

## What This Is

A diagnostic scorecard at jackharris.ai/score that assesses how ready an experience business is to leverage AI across its operations. The scorecard:

- Asks 12 questions across 4 categories
- Scrapes the user's website via Firecrawl at the start to understand their business context
- Produces a radar chart heatmap and category scores
- Assigns the user to one of four tiers
- Pipes quiz answers + Firecrawl data to Opus 4.6 for a nuanced, personalised interpretation
- Generates a PDF report emailed via Loops
- Pushes contact + scores to Supabase

## Tech Stack

| Layer     | Tool            |
|-----------|-----------------|
| Framework | Next.js         |
| Hosting   | Vercel          |
| Database  | Supabase        |
| Email     | Loops           |
| Scraping  | Firecrawl       |
| Analysis  | Claude Opus 4.6 |
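To make the scorecard's scoring concrete, here's a minimal sketch of how category scores and tier assignment might work. The category names, the 1–5 answer scale, the equal 0–100 tier bands, and the tier labels are all illustrative assumptions, not the shipped logic.

```typescript
// Illustrative sketch only: category names, the 1–5 answer scale,
// and the tier labels/thresholds are assumptions, not production code.
type Category = "strategy" | "data" | "workflow" | "team";

interface Answer {
  category: Category;
  score: number; // each question answered on a 1–5 scale (assumed)
}

// Hypothetical tier names, lowest to highest.
const TIERS = ["Explorer", "Adopter", "Operator", "Architect"];

// Average the answers in each category, scaled to 0–100.
function categoryScores(answers: Answer[]): Record<Category, number> {
  const totals: Record<Category, { sum: number; n: number }> = {
    strategy: { sum: 0, n: 0 },
    data: { sum: 0, n: 0 },
    workflow: { sum: 0, n: 0 },
    team: { sum: 0, n: 0 },
  };
  for (const a of answers) {
    totals[a.category].sum += a.score;
    totals[a.category].n += 1;
  }
  const out = {} as Record<Category, number>;
  for (const c of Object.keys(totals) as Category[]) {
    const { sum, n } = totals[c];
    out[c] = n === 0 ? 0 : Math.round((sum / (n * 5)) * 100);
  }
  return out;
}

// Overall score is the mean of the four category scores;
// tiers split the 0–100 range into four equal bands.
function tierFor(scores: Record<Category, number>): string {
  const values = Object.values(scores);
  const overall = values.reduce((a, b) => a + b, 0) / values.length;
  return TIERS[Math.min(3, Math.floor(overall / 25))];
}
```

The point isn't the arithmetic. It's that a spec this explicit about questions, categories, and tiers leaves the builder almost nothing to guess at.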
The loop looks like this: master spec governs the project. Sub-specs get spun off for execution. Execution surfaces questions and edge cases. Those feed back into the master. The master gets sharper. The next sub-spec benefits.
This is what makes the method compound. Your first sub-spec might take an hour to refine. By your fifth, the master spec is so rich with context that new features practically write themselves.
The five stages
Every build follows the same rhythm. The tools change. The stages don't.
01 — CAPTURE
Dictate, don’t type
Use voice-to-text and brain dump without editing. Go off on tangents. Speak for 20 seconds or 2 minutes. The goal is to get the idea out of your head with zero friction between thinking and capturing.
You’re not writing a prompt. You’re thinking out loud.
The tool I use
Wispr Flow is a voice-to-text layer that works across every app on your machine. I dictate specs, emails, Slack messages, code comments, everything. For someone who thinks faster than they type and tends to talk around an idea before landing on it, this is the non-negotiable foundation. The entire method starts here.
02 — DISTIL
Let AI find the structure
Paste the raw dictation into Claude Chat with your project context loaded. Claude identifies what you’re actually asking for, pushes back with clarifying questions, and distils the mess into a structured brief.
This is a conversation, not a single prompt. You go back and forth. You argue. You refine. You’re co-authoring the spec.
03 — SPEC
Write the map, not the code
The output of the distillation is what I call a sprint spec: a document detailed enough that a fresh AI instance, or a human developer, could execute it without further briefing.
It includes what we’re building, why, the exact scope, technical constraints, file structures, design system references, and what “done” looks like. This is the artefact. Everything downstream depends on its quality.
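As a shape reference, a sprint spec skeleton might look like the following. The headings are illustrative, not a fixed format; use whatever structure your master spec already establishes.

```markdown
# <Feature> — Sprint Spec

## What This Is
One paragraph: what we're building and why it exists.

## Scope
- In scope: the exact work this sprint covers
- Out of scope: what the builder must not touch

## Constraints
Technical decisions inherited from the master spec: stack, design system, tone.

## File Structure
Where new files live and which existing files change.

## Definition of Done
Concrete, checkable statements of what "done" looks like.
```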
04 — EXECUTE
Hand the spec to the builder
The spec goes to Claude Code, a terminal-based AI coding agent. Depending on scope and your confidence in the spec: plan mode (it proposes actions, you approve each one), act mode (it executes with checkpoints), or full bypass (autonomous execution against a clear, tested spec).
For greenfield builds with a tight spec, bypass all permissions. Let it run. The spec is your safety net.
05 — DIALOGUE
Close the loop
Claude Code feeds back into Claude Chat. Errors, questions, and architectural decisions flow back up. You refine. The spec evolves. This isn’t a linear handoff. It’s a conversation between thinking and building, with you as the editorial layer.
The master spec gets updated. The next sub-spec is sharper. The method compounds.
Run multiple streams at once
Once you trust the method, you stop working sequentially.
One Claude Chat session refines a spec for a new feature. Another is ideating on a completely different problem. Two Claude Code terminals are building different components of the same project simultaneously. A third is running tests on yesterday's build.
You become the conductor, not the player. Your job is editorial: reading output, making decisions, feeding context between streams. The AI does the building. You do the thinking.
This isn't a theoretical workflow. On any given evening, I have four Claude instances running. Two Chat sessions with different project contexts loaded. Two Code terminals executing against different sub-specs. The master spec keeps everything aligned.
Chat: Scorecard Spec
Chat: Content Strategy
Code: Building /results
Code: Running tests
The tools
You don't need to become a developer. But you should understand the building blocks well enough to have an opinion about what you're building on.
For most of what I build, and what I see other operators and founders needing, the stack looks roughly the same:
Next.js
The framework. Handles the website, the app, the API, all in one place.
Vercel
Deployment. Push code, it’s live. No servers to manage.
GitHub
Version control. Your safety net. Every change is tracked, reversible, backed up.
Supabase
The database. Where your data lives. Postgres under the hood.
Resend / Loops
Transactional and marketing email. Clean, modern, built for developers or AI agents acting as developers.
Railway
Backend services that need to run independently. Simple, scales when you need it.
Cloudflare
Edge workers, DNS, security. The infrastructure layer you set once and forget.
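These pieces meet in small runs of glue code. As one hedged example, here's a sketch of the "push contact + scores to Supabase" step from the scorecard spec, using Supabase's auto-generated REST API. The table name, column names, and field shapes are assumptions, not the production schema.

```typescript
// Shape of one completed scorecard run. Field names are illustrative.
interface ScorecardResult {
  email: string;
  tier: string;
  scores: Record<string, number>;
}

// Pure helper: build the row to insert. Column names ("tier",
// "category_scores") are assumptions, not the production schema.
function buildRow(result: ScorecardResult) {
  return {
    email: result.email,
    tier: result.tier,
    category_scores: result.scores,
    created_at: new Date().toISOString(),
  };
}

// Push one row via Supabase's REST API (PostgREST). The table name
// "scorecard_results" is a hypothetical example.
async function pushResult(
  result: ScorecardResult,
  projectUrl: string,
  apiKey: string
): Promise<void> {
  const res = await fetch(`${projectUrl}/rest/v1/scorecard_results`, {
    method: "POST",
    headers: {
      apikey: apiKey,
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify(buildRow(result)),
  });
  if (!res.ok) throw new Error(`Supabase insert failed: ${res.status}`);
}
```

You don't write this by hand. But knowing it's roughly what the AI produces is what lets you have an opinion about the stack.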
This isn't a beginner stack. It's a production stack, the same tools that well-funded startups use. The difference is you're building on it with AI instead of a team of engineers.
Don't let AI push you toward a safe MVP that feels dated the moment you ship it. Start with tools that scale. The build effort is roughly the same either way. The AI doesn't care whether it's writing code for a toy or for production.
What this isn't
This isn’t prompt engineering.
It’s thinking clearly about what you want to build.
This isn’t no-code or low-code.
The AI writes real, production-grade code. You describe the outcome.
This isn’t Lovable or Replit.
There’s no template. No wrapper. No ceiling.
This isn’t for toy projects.
I run a real business on software built this way.
You don’t need to know JavaScript, Python, or any programming language.
You need to know your domain.
The Guide
Want the full method?
The Articulation Method guide covers everything on this page in depth, plus a full case study showing how I built the AI Readiness Scorecard from first commit to production in seven days.
What's inside
- The five-layer hierarchy — why prompting is layer one
- The five-stage method: Capture → Distil → Spec → Execute → Dialogue
- Full case study: the AI Readiness Scorecard, built in 7 days
- Git commit timestamps from the actual build
- The decision framework: when to plan and when to let AI run
Free PDF. 7 pages. Straight to it.