Here’s a number that should worry anyone with a “what if we built this with AI?” idea on their whiteboard: 97 percent of Claude Code projects never see the light of day.
Ninety-seven.
That’s not 97 percent of weekend tinkerers and curious dabblers.
That’s the broad funnel of people who’ve done enough work to actually be using Claude Code – and still bail before shipping.
The reason isn’t that the tools don’t work. They work spectacularly well. The reason is the gap between what AI development looks like in marketing copy and what it actually costs.
The $20 plan lie
Anthropic charges $20 a month for the entry-level Claude plan.
Two hundred for the Max tier. If you’re prototyping a SaaS idea, those numbers feel like a bargain. They’re how the conversation usually starts: “I’ll just spin something up, the AI plan’s twenty bucks, what could it cost?”
What it costs is your $20 plan, then your $200 plan, then API credits the moment you outgrow either, then hosting, then a database, then storage – and then the moment you push real usage through it, actual money. Real money. The kind that compounds.
We spent $45,000 in the first two months building an internal platform. That’s API usage, database hosting, server compute.
Not human salaries – those are separate.
That’s the infrastructure bill for two months of building one platform.
We’re an agency with the team and the budget to absorb that. Most “I’ll spin it up on weekends” projects don’t have a $45,000 ceiling.
They have $1,000 – which gets burned through somewhere around the 50% mark, right when the project starts to feel real but isn’t shippable yet.
And then it dies.
Quietly.
In a folder called /old/projects/that-one-thing.
Vibe coding’s hidden bill
There’s a phrase doing the rounds: vibe coding.
It’s what happens when you sit down with Claude or Cursor and start building, no plan, no architecture, just energy and a problem you want to solve. You let the AI lead, you respond to what it produces, you iterate.
Vibe coding can be fantastic. The barrier to building a working prototype has collapsed. People without a single line of formal CS training can ship things that would have required a team and six months a few years ago.
But vibe coding has a structural cost that doesn’t show up in the demo videos.
Every back-and-forth with the model burns tokens.
Every “actually, let’s redo that whole component” burns more.
Every time you ask Claude to just look at the codebase and figure out what’s wrong – without telling it where to look – you’re paying for the model to read everything, twice.
In our own team, one person blew through a meaningful chunk of our API budget in their first ten minutes on a new tool.
Not because they did anything wrong. Because they asked the AI to scan six months of email when they only needed one specific thread. That’s a $200 prompt where a $2 prompt would have done the same job.
Scale that pattern to a solo developer trying to build a SaaS on weekends and you can see how a $1,000 budget evaporates in a fortnight.
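The arithmetic behind that $200-vs-$2 gap is worth seeing on paper. Here’s a back-of-envelope sketch; the per-token price, tokens-per-word ratio, and context sizes are all illustrative assumptions, not real Anthropic rates or real mailbox sizes:

```python
# Back-of-envelope cost of an unscoped prompt vs a scoped one.
# All constants below are illustrative assumptions, not real vendor pricing.

INPUT_COST_PER_MTOK = 3.00  # assumed $ per million input tokens
TOKENS_PER_WORD = 1.3       # rough heuristic for English text

def prompt_cost(words_of_context: int) -> float:
    """Estimate the input-side cost of one prompt, in dollars."""
    tokens = words_of_context * TOKENS_PER_WORD
    return tokens / 1_000_000 * INPUT_COST_PER_MTOK

# "Scan six months of email" vs "read this one thread":
unscoped = prompt_cost(words_of_context=20_000_000)  # assumed ~20M words of mail
scoped = prompt_cost(words_of_context=2_000)         # one specific thread

print(f"unscoped: ${unscoped:,.2f}   scoped: ${scoped:.4f}")
```

The exact dollar figures don’t matter; the ratio does. The model charges you for everything you put in front of it, so an unscoped “look at all of it” prompt costs thousands of times more than pointing at the one thread you actually need.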
What the 3% who ship have in common
So what separates the projects that ship from the ones that die in the folder?
It isn’t smarter ideas. It isn’t more money – though more money helps. It’s how the work gets broken down before any AI is touched.
The shipped-it 3% don’t try to build the whole thing in one heroic vibe-coding marathon.
They scope a tiny first slice.
They ship that slice.
They scope the next.
They ship that.
Each slice is small enough that if it explodes, the cost is contained – and concrete enough that you can tell when it’s actually working.
Our internal platform looks complete from the outside now. It didn’t get built that way. It got built in compartments – feature by feature, each one shipped before the next one started.
That discipline is the only thing standing between $45,000 of investment and another project in the graveyard.
The skill that just got valuable
There’s a follow-on insight here.
The most valuable person in an AI-native business isn’t the one who writes the cleanest code.
It’s the one who can break a fuzzy problem into the smallest shippable piece, then feed AI just enough context to solve that piece without going off-piste.
Some people call that skill prompt engineering.
That’s underselling it. It’s problem architecture – the ability to look at a tangle and find the right knot to untie first.
It’s also a skill you can develop without a CS degree, which is part of what’s reshaping who gets hired and how. We’ve stopped asking new hires “can you code?” We’re asking “can you frame a problem so an AI can solve it without you sitting on its shoulder for the next four hours?”
The takeaway
If you’re sitting on a whiteboard idea right now thinking “I’ll just spin this up with Claude,” the question isn’t whether the AI can build it.
The AI can almost certainly build this.
The question is whether you’ve scoped the first slice tight enough that you’ll have something real to look at before your budget evaporates.
The 97% who didn’t ship had good ideas.
Most of them, anyway.
They didn’t have a plan – they had a vibe.
The 3% who shipped had a plan that started small: one slice at a time.
Go small first.
Then go again.