I know a secret.

The heavyweight, old-school, phase-gated software development lifecycle that everyone spent the last twenty years running away from? It works. It works really well, actually. Better than it ever did.

I realize that’s heresy. We’ve spent two decades telling each other that waterfall is dead, that Big Design Up Front is a relic, that the only way to build software is to move fast, iterate, and figure it out as you go. And for a while, that was right. The traditional SDLC had a fatal flaw: it was too slow. By the time you finished your requirements phase, the world had moved on. By the time you delivered against the spec, the spec was wrong.

But here’s what nobody’s talking about: AI just eliminated the speed problem.

And with the speed problem gone, all those things we threw away? Requirements gathering. Formal specifications. Implementation planning. Phased delivery with sign-offs. The stuff that Andersen Consulting formalized as Method/1, that Coopers & Lybrand built into their Systems Development Methodology (tightly integrated with their audit and compliance process), that EDS turned into a patented delivery framework. The stuff I was trained on when I started my career in 1990.

It turns out that stuff was actually good. We just couldn’t afford the time it took to do it.

Now we can.

Wait, Seriously?

I know how this sounds. “Waterfall is back” from a guy who spent the last several months writing about AI disruption. Bear with me.

I’m not arguing for six-month requirements phases and three-ring binders full of functional specs that nobody reads. I’m arguing for the discipline underneath those methodologies. The idea that you should understand what you’re building before you build it. That domain experts should validate requirements before engineers write code. That architecture decisions should be deliberate, not emergent. That there should be a plan.

The old methodologies got this right. What they got wrong was the cost. A requirements phase that takes three months and a planning phase that takes two months and an implementation phase that takes six months is a death sentence in a market that moves quarterly. That’s why Agile won. Not because process was wrong, but because heavy process was too expensive.

AI changes that equation completely. What if your requirements phase takes a week instead of three months? What if your implementation plan generates in hours instead of weeks? What if each feature goes from spec to production in days instead of quarters?

That’s not hypothetical. I just did it.

The Project

I’ve been building a production membership management platform for a regional services company. Real business. Real product owners. Real members who will use this system to manage their accounts, browse profiles, and interact with staff.

This isn’t a demo. It’s not a POC sitting in someone’s innovation lab. It’s a production system running on AWS. Lambda, API Gateway, RDS, S3, CloudFront, Cognito for authentication, transactional email, photo processing pipelines, video integration. The works. The kind of system that, two years ago, would have required a team of six to eight engineers working for a year.

I built it with AI. Specifically, Claude Code. And I did it without skipping a single phase of the software development lifecycle.

Ten features shipped. Production infrastructure running. Real product owners reviewing real specs and signing off on real functionality. A junior developer onboarded and contributing through proper PR workflows.

Here’s the part that matters: none of this happened by abandoning the process. It happened because the process is what made the AI effective.

It Started With Requirements, Not Prompts

When people talk about AI-assisted development, they usually mean one thing: generating code. Open Cursor, describe what you want, accept the output, ship it.

That’s the least interesting part of what I’ve been doing.

Before I wrote a single line of code, before I even opened Claude Code, I spent weeks doing the work that software projects have always required. Strategic analysis of the existing platform. A domain model that mapped every entity, relationship, and business rule. Discovery sessions with product owners to understand workflows, edge cases, and the regulatory constraints that govern how the business operates. A questionnaire field reference documenting every data element the legacy system captured.

From that discovery work, I built a requirements catalog: ten epics decomposed into over fifty features, each with a description, acceptance criteria, and priority. Then a phased project plan that the business owners could actually read. Not a Gantt chart. Not a JIRA board. A document that explained what was getting built, in what order, and why. Phase 0 for infrastructure. Phase 1 for admin core. Phase 2 for the member experience. Phase 3 for interactions and monetization. Phase 4 for community features and analytics.

The product owners reviewed that plan. They understood the sequencing. They could see how the pieces fit together. They knew what was coming next and why it depended on what came before. “You can’t open a restaurant to customers until the kitchen is built.” That was the metaphor that clicked for them, and it drove the entire phase structure.

Here’s the thing: I used AI for all of it. Claude was in the room for the strategic analysis, the domain modeling, the requirements decomposition. But I wasn’t using it as a code generator. I was using it the way an enterprise architect uses a whiteboard in a strategy session. As a thinking partner. I was the product owner and the architect, asking the hard questions, challenging the output, pressure-testing the domain model against what the business owners told me in discovery sessions. The AI helped me move faster through work I already knew how to do. It didn’t decide what the work should be.

That distinction matters. The AI was present from day one, but its role changed as the project matured. Early on, it helped me think. Later, it helped me build. The requirements and the project plan are the reason the building phase worked as well as it did.

The Specification Pipeline

With requirements defined and a project plan approved, each feature enters a structured pipeline I built using a tool called spec-kit. It’s a framework that sits on top of Claude Code and enforces a deliberate lifecycle: specification, planning, task generation, implementation, and cross-artifact analysis.

Here’s how a feature actually gets built:

Specification. I feed the feature requirements, already validated by product owners, into the pipeline. The system generates a formal spec: scope, constraints, acceptance criteria, edge cases, integration points. Not a throwaway prompt. A document that goes back to the product owners for another round of review. They don’t care about your tech stack. They care about whether the system does what their members need.

Planning. From the approved spec, the system generates an implementation plan. Architectural decisions, file-by-file change maps, dependency analysis, risk identification. This is where 35 years of experience becomes the quality filter. The AI proposes. I evaluate. I catch the patterns that won’t scale, the integration points it doesn’t understand, the business rules it can’t infer from a prompt.

Task generation. The plan decomposes into ordered, dependency-aware tasks. Each task knows what it depends on, what files it touches, and what the acceptance criteria look like. This isn’t a to-do list. It’s an execution graph.

Implementation. Claude Code executes the tasks. But here’s the thing: it’s executing against a spec that product owners reviewed, a plan that I validated, and tasks that have explicit boundaries. The AI isn’t making architectural decisions. It’s implementing decisions that were already made by humans who understand the domain.

Analysis. After task generation, a cross-artifact consistency check runs against the spec, plan, and task list. It catches gaps, contradictions, and missing dependencies before a single line of code gets written.
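The "execution graph" idea from the task-generation step can be sketched in a few lines of Python. To be clear, this is not spec-kit's internal format; the task names and dependencies below are invented for a hypothetical photo-upload feature, purely to show what dependency-aware ordering looks like.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Illustrative tasks for a hypothetical "member photo upload" feature.
# Each task maps to the set of tasks it depends on.
tasks = {
    "create_s3_bucket": set(),
    "add_photo_model": set(),
    "presigned_upload_endpoint": {"create_s3_bucket", "add_photo_model"},
    "async_resize_pipeline": {"create_s3_bucket"},
    "admin_review_ui": {"presigned_upload_endpoint"},
}

# A valid execution order respects every dependency edge:
# no task appears before something it depends on.
order = list(TopologicalSorter(tasks).static_order())
print(order)
```

The point of the structure isn't the sort itself. It's that once dependencies are explicit, the AI can be handed one bounded task at a time instead of a vague feature description.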

Every feature went through this cycle. All ten of them. And every one shipped cleaner than anything I’ve built with a traditional team in the last decade.

The Product Owners Changed Everything

Here’s what surprised me most: the product owners made the AI better.

These aren’t technical people. They’re business operators who’ve been running this company for years. They know their members. They know their workflows. They know the edge cases that no requirements document has ever captured because nobody thought to ask.

When I put a generated spec in front of them, they didn’t nod politely and sign off. They pushed back. Hard.

“Members under this status can browse but can’t select anyone.” “Frozen members should still be visible to other members with an ‘Inactive’ label. It drives re-engagement when they see what they’re missing. But they can’t take any actions until they reactivate.” “No, members don’t see who viewed their profile. That’s admin-only. Period.”

Each correction went back into the spec. The spec drove the plan. The plan drove the tasks. The tasks drove the implementation. By the time Claude Code was writing code, it was writing code against requirements that had been pressure-tested by people who actually understand the business.

That’s the part the vibe-coding crowd completely misses. AI without domain input generates plausible software. AI with structured domain input from real stakeholders generates correct software. The difference isn’t subtle. It’s the difference between a demo and a production system.

Speed Without Shortcuts

Let me be specific about what “fast” looked like.

I shipped an admin authentication system with role-based access control. A member account creation workflow with staff-driven onboarding. A full member management module. Photo processing with async pipelines. Membership contract recording with legal compliance for state-regulated duration limits. A transactional email system. Member identity and authentication through a separate Cognito pool. A search and discovery interface with demographic filtering. Video integration with a third-party hosting platform. An email provider migration from one service to another.

Ten features. Each one specified, planned, tasked, implemented, tested, and deployed to production on AWS. Each one reviewed by product owners who had opinions and weren’t shy about sharing them.

I wasn’t moving fast by skipping steps. I was moving fast because the steps themselves were accelerated. Writing a spec that used to take a week took a day. But the spec still got written. Generating an implementation plan that used to take days took hours. But the plan still got reviewed. The SDLC didn’t disappear. It compressed.

And here’s what compression gave me that shortcuts never could: confidence. When a feature deployed to production, I knew what it was supposed to do because there was a spec. I knew how it was built because there was a plan. I knew what was tested because the tasks had explicit acceptance criteria.

Try getting that from “just tell the AI what you want and ship it.”

The Complexity Was Real

This wasn’t a CRUD app.

The system runs two separate Cognito user pools. One for admin staff, one for members. Each with different authentication flows, different IAM permissions, and different token validation logic. Lambda functions running behind API Gateway with Mangum adapters. Presigned S3 URLs that break if you pass explicit credentials instead of letting the default credential chain supply the execution role. SES sandbox mode that silently drops emails to unverified recipients. Terraform state that reports “0 changed” on IAM policies you know you modified.

Every one of those gotchas generated a lesson. Every lesson got documented. Every documentation update fed back into the specification process so the next feature didn’t hit the same wall.

The AI didn’t magically know any of this. It hallucinated SES configurations that would have failed in production. It suggested Cognito patterns that don’t work with async FastAPI. It generated Terraform that looked right but violated free-tier constraints.

My job was to catch those hallucinations. And I could catch them because I’d been an architect and a CTO and I’d deployed to AWS enough times to know when something smelled wrong. The AI amplified my experience. It didn’t replace it.

Right or wrong, that’s the pattern I keep seeing: AI is a multiplier, not a substitute. And what it multiplies depends entirely on what you bring to the table.

The Junior Developer Question

Halfway through the project, I onboarded a junior developer. Fresh out of college. Learning the stack, learning the workflow, learning how production software actually gets built.

He works on feature branches. He submits pull requests. I review them. We talk about why certain patterns exist, what the spec requires, where the plan made specific architectural decisions and why.

The specification artifacts became his onboarding documentation. He doesn’t need me to explain what a feature is supposed to do. The spec tells him. He doesn’t need me to explain the implementation approach. The plan tells him. He can focus on learning how to write good code within a structure that’s already been validated.

That’s what a real SDLC gives you. Not bureaucracy. Leverage. The ability to bring people into a project and have them productive in days instead of weeks because the decisions are documented, the rationale is explicit, and the boundaries are clear.

Strip all that away and vibe code your way through a project, and onboarding a junior developer means sitting next to them for two weeks, explaining decisions that live in your head and nowhere else.

Method/1, Meet Claude Code

In 1990, I was a junior consultant at Coopers & Lybrand running SDM engagements. Requirements gathering. Functional specifications. Design documents. Implementation plans. Phase gates with sign-offs. The whole nine yards.

It worked. The systems we built were solid. But it took forever. A mid-size project could burn six months before anyone wrote a line of code. That’s why Agile won. Not because the discipline was wrong, but because the overhead was a dealbreaker.

I’ve spent the last several months writing about the disruption hitting professional services. Collapsing talent arbitrage. Clients who need fewer developers. The death of staff aug as we’ve known it.

This project is the other side of that equation. It’s what becomes possible when you pair AI with the discipline those old methodologies demanded, minus the overhead that made them impractical.

A single experienced practitioner, using Claude Code with a structured specification workflow, built a production system that would have taken a traditional team a year. Not by cutting corners. By compressing cycles. Not by abandoning the process. By making the process the accelerant.

Bam. That’s the model.

The partners at Coopers would barely recognize the tooling. But they’d recognize the discipline. Requirements before design. Design before code. Validation at every phase. The methodology is the same. The clock speed is different.

And it has implications for every PS firm leader who’s trying to figure out what the future looks like. The future isn’t AI replacing the SDLC. It’s the SDLC running at a cadence that would have been impossible without AI. The process doesn’t slow you down. It’s the thing that keeps AI from generating plausible garbage at high speed.

The Bottom Line

The bottom line is this: the firms and practitioners who figure out how to combine AI speed with engineering discipline will build things that the “just ship it” crowd can’t touch.

Production systems. Real infrastructure. Actual users. Business logic that reflects how a company actually operates, not how an AI model guesses it might.

The vibe coders will build demos. The practitioners who kept the process will build businesses.

I’ve been in this industry for over 35 years. I’ve watched every shortcut eventually come due. Skipping requirements leads to rework. Skipping design leads to technical debt. Skipping testing leads to production incidents at 2 AM.

AI doesn’t change those dynamics. It accelerates them. Skip the process with AI and you’ll generate technical debt faster than any human team ever could. Keep the process with AI and you’ll deliver production software at a pace that would have sounded like fantasy three years ago.

I know which side of that bet I’m on. And I’m only halfway through proving it.

The Hard Part Is Next

Ten features shipped. Production infrastructure running. But the project is roughly 50% complete, and the hardest work is still ahead.

Mutual selection workflows. Message moderation queues. Payment processing through Stripe in an industry where most processors won’t touch you. Community events. Analytics and reporting.

And the big one: migrating years of member data from a legacy SQL Server sitting under a desk in an office to a modern cloud-hosted PostgreSQL instance on AWS. Different schema. Different data model. Different everything. Zero data loss. Zero disruption to active members.

That’s the kind of problem where the process I’ve been describing isn’t just helpful. It’s the only thing that keeps you from destroying a business. You don’t vibe code a production data migration. You specify it. You plan it. You test it obsessively. And then you execute it with the confidence that comes from having done the work.
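What "test it obsessively" means for a migration can be sketched as a reconciliation check. The real job is SQL Server to PostgreSQL; this sketch uses in-memory SQLite on both sides, with invented schemas and data, purely to show the pattern: normalize both sides to the same shape, then compare row counts and order-independent checksums before cutover.

```python
import hashlib
import sqlite3

def table_fingerprint(conn, query):
    """Row count plus an order-independent digest of the normalized rows."""
    rows = conn.execute(query).fetchall()
    digests = sorted(hashlib.sha256(repr(r).encode()).hexdigest() for r in rows)
    combined = hashlib.sha256("".join(digests).encode()).hexdigest()
    return len(rows), combined

# Stand-ins for the legacy and target databases (illustrative schema and data).
legacy = sqlite3.connect(":memory:")
legacy.execute("CREATE TABLE members (id INTEGER, email TEXT)")
legacy.executemany("INSERT INTO members VALUES (?, ?)",
                   [(1, "a@example.com"), (2, "b@example.com")])

target = sqlite3.connect(":memory:")
target.execute("CREATE TABLE members (member_id INTEGER, email TEXT)")
target.executemany("INSERT INTO members VALUES (?, ?)",
                   [(1, "a@example.com"), (2, "b@example.com")])

# Normalize both sides to the same column shape, then compare fingerprints.
src = table_fingerprint(legacy, "SELECT id, email FROM members ORDER BY id")
dst = table_fingerprint(target, "SELECT member_id, email FROM members ORDER BY member_id")
assert src == dst, f"migration drift: {src} != {dst}"
print("members reconciled:", src[0], "rows")
```

A check like this runs per table, per migration rehearsal, until it passes cleanly every time. "Zero data loss" is a claim you verify, not a claim you hope for.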

I’ll be writing about how that goes. Stay tuned.


John Doucette is the founder of The Disruption Brief, where he writes about the AI transformation reshaping IT professional services. With over 35 years in the industry, from developer to CTO, he’s focused on helping PS firms navigate disruption before it’s too late. Connect with him on LinkedIn.