I Built an Entire SaaS Platform Using AI
110,000 lines of production code. 236 API endpoints. 4,869 automated tests. 25+ database migrations. 112+ development sprints with formal completion protocols. One founder. Zero employees.
I built JobIntel, a full SaaS platform for detecting ghost jobs, using AI-assisted development. Not as an experiment. Not as a weekend prototype that I am calling a product. As a production system with enterprise-grade architecture, security scanning, infrastructure-as-code, and the kind of test coverage that would survive a code review at any company I have worked for.
This post is about what building with AI actually looks like when you apply real engineering discipline to it. The wins, the failures, and the parts nobody talks about in the breathless LinkedIn posts.
The backstory in 30 seconds
I spent 15+ years running enterprise programs at companies like GitLab and FICO: managing teams of 80+ people, overseeing budgets exceeding $15M, and consulting solo on transformations that touched hundreds of engineers. I wrote three books, including "Enterprise AI Adoption: Challenges & Solutions", a framework for how organizations should approach AI implementation.
Then I tried to build a product with it.
The irony was not lost on me. I had written the playbook for enterprise AI adoption. Now I was the enterprise — a one-person enterprise, using the exact tools I had been analyzing professionally.
What building with AI actually looks like
The development stack: a FastAPI backend, PostgreSQL on RDS, ECS Fargate for container orchestration, CloudFront for delivery, Terraform for infrastructure-as-code, and Claude Code Agent Teams as the AI-assisted development layer. An 8-agent team structure where each agent handled a different responsibility — code generation, testing, review, documentation, security scanning.
Here is what most "I built X with AI" posts leave out: AI-assisted development is not a magic wand. It is a power tool. The distinction matters. A power tool in the hands of someone who knows what they are building produces extraordinary results. The same tool in the hands of someone who does not understand the underlying craft produces extraordinary messes.
The 112+ sprints I ran were not casual. Each had formal completion protocols — the same enterprise discipline I had applied to programs with hundreds of developers, scaled down to a team of one plus AI. Cost tracking, security scanning, dependency audits, test coverage gates. The sprints exist because without them, AI-assisted development drifts toward generating code that passes tests but misses the point.
Where AI was genuinely excellent
Credit where it is due. There are categories where AI-assisted development was not just helpful — it was transformative.
Boilerplate and CRUD patterns. Every SaaS has dozens of create-read-update-delete flows. These are tedious, repetitive, and error-prone when written manually at volume. AI generated them reliably and fast. The 236 API endpoints exist partly because the cost of adding a well-structured endpoint dropped from hours to minutes.
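The real endpoints are FastAPI routes backed by PostgreSQL, but the mechanical shape of these flows is easy to show. Here is a stdlib-only sketch of one CRUD layer; the `jobs` table and its fields are hypothetical, not JobIntel's actual schema:

```python
import sqlite3


class JobRepository:
    """Minimal create-read-update-delete layer over a hypothetical jobs table."""

    def __init__(self, conn: sqlite3.Connection):
        self.conn = conn
        self.conn.execute(
            "CREATE TABLE IF NOT EXISTS jobs ("
            "id INTEGER PRIMARY KEY, title TEXT NOT NULL, company TEXT NOT NULL)"
        )

    def create(self, title: str, company: str) -> int:
        cur = self.conn.execute(
            "INSERT INTO jobs (title, company) VALUES (?, ?)", (title, company)
        )
        return cur.lastrowid

    def read(self, job_id: int):
        return self.conn.execute(
            "SELECT id, title, company FROM jobs WHERE id = ?", (job_id,)
        ).fetchone()  # None if not found

    def update(self, job_id: int, title: str) -> bool:
        cur = self.conn.execute(
            "UPDATE jobs SET title = ? WHERE id = ?", (title, job_id)
        )
        return cur.rowcount == 1

    def delete(self, job_id: int) -> bool:
        cur = self.conn.execute("DELETE FROM jobs WHERE id = ?", (job_id,))
        return cur.rowcount == 1


repo = JobRepository(sqlite3.connect(":memory:"))
job_id = repo.create("Data Engineer", "Acme")
```

Multiply this shape by every entity in a SaaS and you see why the per-endpoint cost collapsing matters.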
Test generation. Give Claude Code a spec and it produces comprehensive tests — edge cases included — faster than I could outline them. The 4,869 tests are not padding. They catch real regressions. The AI was better at thinking through boundary conditions than I expected, probably because it has seen millions of test suites.
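The actual suite runs under pytest against the live API; as a self-contained sketch of the boundary-condition style, here is a hypothetical scoring helper with the kind of edge-case assertions the agents produced unprompted:

```python
def normalize_score(raw: float, lo: float = 0.0, hi: float = 100.0) -> float:
    """Clamp a raw score into [lo, hi]; reject an inverted range."""
    if lo > hi:
        raise ValueError("lo must not exceed hi")
    return max(lo, min(hi, raw))


# Boundary conditions: both endpoints, just outside each, midpoint, error path.
assert normalize_score(0.0) == 0.0
assert normalize_score(100.0) == 100.0
assert normalize_score(-0.001) == 0.0
assert normalize_score(100.001) == 100.0
assert normalize_score(50.0) == 50.0
try:
    normalize_score(1.0, lo=10.0, hi=0.0)
except ValueError:
    pass
else:
    raise AssertionError("inverted range should raise")
```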
Migration scripts. Database schema evolution is tedious bookkeeping. The 25+ Alembic migrations were generated accurately from schema change descriptions. Not glamorous. Extremely valuable.
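The production migrations are Alembic revisions against PostgreSQL; the bookkeeping they automate looks roughly like this minimal sketch, written here with stdlib sqlite3 and hypothetical table names:

```python
import sqlite3

MIGRATIONS = [
    # (version, DDL) pairs, applied in order and recorded in schema_version
    (1, "CREATE TABLE jobs (id INTEGER PRIMARY KEY, title TEXT NOT NULL)"),
    (2, "ALTER TABLE jobs ADD COLUMN salary_min INTEGER"),
]


def migrate(conn: sqlite3.Connection) -> int:
    """Apply any pending migrations; return the resulting schema version."""
    conn.execute("CREATE TABLE IF NOT EXISTS schema_version (version INTEGER)")
    current = conn.execute("SELECT MAX(version) FROM schema_version").fetchone()[0] or 0
    for version, ddl in MIGRATIONS:
        if version > current:
            conn.execute(ddl)
            conn.execute("INSERT INTO schema_version VALUES (?)", (version,))
            current = version
    return current


conn = sqlite3.connect(":memory:")
assert migrate(conn) == 2
assert migrate(conn) == 2  # idempotent: re-running applies nothing new
```

Alembic adds downgrade paths, revision hashes, and autogeneration on top of this core idea, which is exactly the tedium worth delegating.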
Code review and bug detection. The agent team structure meant that code generated by one agent was reviewed by another. This caught issues that a single-pass generation would have missed — inconsistent error handling, missing validation, naming drift across modules.
Repetitive pattern implementation. When the same pattern needed to be applied across 15 modules — say, adding rate limiting or standardizing response formats — AI handled the propagation flawlessly. This is the kind of work that causes human developers to make one mistake on module 12 out of 15 because attention flags.
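To make the propagation concrete: a rate-limiting change like the one above usually reduces to stamping the same decorator onto every handler. This is a minimal sketch, not JobIntel's implementation; the handler names and limits are hypothetical:

```python
import time
from functools import wraps


def rate_limited(max_calls: int, per_seconds: float):
    """Allow at most max_calls invocations per rolling per_seconds window."""
    def decorator(fn):
        calls: list[float] = []

        @wraps(fn)
        def wrapper(*args, **kwargs):
            now = time.monotonic()
            # Drop timestamps that have aged out of the window.
            calls[:] = [t for t in calls if now - t < per_seconds]
            if len(calls) >= max_calls:
                raise RuntimeError(f"rate limit exceeded for {fn.__name__}")
            calls.append(now)
            return fn(*args, **kwargs)
        return wrapper
    return decorator


# The same line is applied to every handler in every module; that uniform
# repetition is where humans slip and AI does not.
@rate_limited(max_calls=2, per_seconds=60.0)
def list_jobs():
    return ["job-1", "job-2"]


@rate_limited(max_calls=2, per_seconds=60.0)
def get_profile():
    return {"plan": "free"}
```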
Where AI failed and I had to intervene
This is the section that matters most, and the one that most builder posts skip.
Architecture decisions. AI can generate code within an architecture. It cannot decide the architecture. The choice to follow 12-Factor methodology, the decision between microservices and a monolith, the database schema design that would scale — these required experience that no model has. AI will confidently propose an architecture. Whether it is the right one for your specific constraints is a judgment call that comes from building systems for 15 years.
Security model design. Rate limiting strategies, input validation depth, authentication flows, authorization boundaries — AI generates plausible security code. Plausible is not the same as secure. Every security-critical path was designed by hand, then implemented with AI assistance, then reviewed manually. The order matters.
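One concrete instance of "plausible is not secure", chosen as a generic illustration rather than code from JobIntel: a token check using `==` compiles, passes every functional test, and still leaks timing information, because string comparison short-circuits at the first mismatched byte. The fix is a constant-time comparison:

```python
import hmac


def check_token_plausible(supplied: str, expected: str) -> bool:
    # Functionally correct, but == returns early on the first differing byte,
    # so response timing reveals how much of the secret an attacker has guessed.
    return supplied == expected


def check_token_secure(supplied: str, expected: str) -> bool:
    # hmac.compare_digest runs in time independent of where the mismatch is.
    return hmac.compare_digest(supplied.encode(), expected.encode())
```

Both functions return identical results on any test you write, which is precisely why hand review of security-critical paths cannot be skipped.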
Business logic trade-offs. How should credibility scoring weight posting age versus salary transparency? What belongs in the free tier versus Pro? Where is the line between useful cross-user intelligence and privacy exposure? These are product decisions disguised as code decisions. AI has no opinion about your business. It will implement whatever you ask, including bad ideas, with equal confidence.
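Those weighting questions end up as literal numbers in code. Here is a hypothetical sketch of what one such trade-off looks like once decided; the weights and the decay horizon are invented for illustration, and they are the product decision no model can make for you:

```python
def credibility_score(posting_age_days: int, has_salary_range: bool) -> float:
    """Hypothetical scoring: fresher postings and salary transparency score higher.

    The 0.7 / 0.3 weights and the 60-day freshness horizon are product
    decisions, not engineering ones.
    """
    freshness = max(0.0, 1.0 - posting_age_days / 60.0)  # decays to 0 at 60 days
    transparency = 1.0 if has_salary_range else 0.0
    return round(0.7 * freshness + 0.3 * transparency, 3)


# A fresh, transparent posting scores highest; a stale, opaque one scores zero.
assert credibility_score(0, True) == 1.0
assert credibility_score(90, False) == 0.0
```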
Knowing when the AI is confidently wrong. This is the hardest skill in AI-assisted development and the one least discussed. Claude Code generates code that compiles, passes tests, and looks correct. Sometimes it is subtly wrong — a business rule that does not match the spec, an optimization that introduces a race condition, an error handler that swallows information you needed. The sprint completion protocols caught these. Pure vibes-based development would not have.
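The swallowed-error case deserves a concrete shape. This is a reconstructed generic example, not JobIntel code: the first version looks defensive and passes happy-path tests; the second keeps the evidence you need when production input misbehaves:

```python
import logging

logger = logging.getLogger("jobintel.sketch")  # hypothetical logger name


def parse_posting_swallow(raw: str) -> dict:
    # Looks defensive; actually hides both the malformed input and the reason.
    try:
        return {"salary": int(raw)}
    except Exception:
        return {}


def parse_posting_surfaced(raw: str) -> dict:
    # Narrow the exception, log the evidence, re-raise with context attached.
    try:
        return {"salary": int(raw)}
    except ValueError as exc:
        logger.warning("unparseable salary field %r", raw)
        raise ValueError(f"bad salary field: {raw!r}") from exc
```

Both versions pass a test suite that only feeds them valid input, which is exactly how this class of bug slips through vibes-based development.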
The enterprise discipline that made it work
The real differentiator was not the AI tooling. It was applying enterprise program management patterns to a solo development context.
Every sprint had a scope document. Every completion had a security scan. Cost was tracked per sprint, not discovered at the end of the month. Dependencies were audited. Test coverage had a floor, not a suggestion. These are patterns I developed managing programs with hundreds of people and millions in budget. They work just as well — arguably better — when the "team" is one person and a set of AI agents.
Without that discipline, AI-assisted development produces something that looks like a product. With it, you get something that actually is one.
The honest summary
I did not build JobIntel "with AI" in the way that phrase is usually used — where AI did the work and I watched. I built it as an experienced engineer using AI as a force multiplier, inside a framework of enterprise discipline that kept the output production-grade.
The AI handled roughly 70–80% of the raw code generation. But the remaining 20–30% that required human judgment (architecture, security, business logic, knowing when to override the machine) is what determined whether this was a real product or a demo.
Could someone without 15 years of enterprise experience replicate this? Parts of it. The CRUD patterns, the test generation, the boilerplate — AI levels the playing field there. But the architectural judgment, the security instincts, the product thinking? Not yet. Maybe not for a while.
The tools are extraordinary. What you bring to them still matters.
JobIntel is the result: a platform that helps job seekers detect ghost jobs at scale and make data-driven decisions about where to invest their time. If you want to see what the output of AI-assisted development looks like in practice, sign up at jobintel.com. The first 5,000 users get Pro free, for life.
Ready to take control of your job search?
Sign up for JobIntel — free.
Get Started Free