A Bonsai tree

To Vibe Code or Not To Vibe Code?
That's the question!

A comprehensive guide to different approaches for AI-supported coding

Have you vibe coded your first project, only to find that it didn't work at all? Was the code just spaghetti?
Did maintaining it or adding features turn out to be a nightmare that even Freddy Krueger would be scared of?
Fear not, there is light at the end of the tunnel!

In this article I will outline some approaches that you can take to bring your AI-supported coding to the next level and generate results that you or the AI tool of your choosing can continue working with.

AI tools are powerful and can produce a lot of great code, but only if we approach them the right way. Join me in exploring the approaches that are available to produce better results than you might be seeing at the moment, without getting totally frustrated with the tools you are using, as Jane Zhang summarized in a recent tweet.

Issues with AI coding tools

The Promise and Problem with Vibe Coding

One of the culprits, as it turns out, is the way we think about a term coined a while ago: vibe coding. And it's not the term itself that's the issue, it's the fact that not everyone has the same understanding of it, as I've noticed when listening to or reading what others say about their approach to it.

Let's first look at Wikipedia's definition of what vibe coding actually is, so that we at least have a common starting point:

Computer scientist Andrej Karpathy, a co-founder of OpenAI and former AI leader at Tesla, introduced the term vibe coding in February 2025. The concept refers to a coding approach that relies on LLMs, allowing programmers to generate working code by providing natural language descriptions rather than manually writing it.

The concept of vibe coding elaborates on Karpathy's claim from 2023 that "the hottest new programming language is English", meaning that the capabilities of LLMs were such that humans would no longer need to learn specific programming languages to command computers.

A key part of the definition of vibe coding is that the user accepts code without full understanding. Programmer Simon Willison said: "If an LLM wrote every line of your code, but you've reviewed, tested, and understood it all, that's not vibe coding in my book—that's using an LLM as a typing assistant."

Sounds great, right? We don't even have to read code anymore; AI produces everything for us, and we can just use the results and be happy. Depending on the type of application you are building, that might even work, either because it's very simple or because you got lucky (AI hallucinations don't always have to be a negative thing).

(Often, videos or social posts only show a first MVP, nothing that's actually fully functional. And even if it is functional, no one shows how well you can actually build on top of this first vibe coded tool.)

Solution

More likely than not, especially if you didn't get the result you wanted, there are some simple rules and approaches you can apply to see a better outcome next time.

Koen summarizes that nicely by comparing your code to a Bonsai. You have to take care of it, otherwise it'll grow into something unmanageable.

Treat your code like a Bonsai

The Code Reviewer

Instead of actually working with the code, consider yourself the reviewer in this approach: You prompt, AI writes the code, you review.

Note that, to be clear, this is not considered vibe coding in the sense of the definition: you use the "let it write the code" part but ignore the "do not look at the code" part. This is the most hands-on approach, and some might prefer to see themselves here, especially if you're used to mainly writing code.

For this approach you don't even need to open your IDE. GitHub Actions integrations like the one Anthropic is offering for Claude Code turn your issues into prompts that result in a PR that Claude Code submits for you to review.

You would then actively review the PR and provide feedback to let Claude Code continue working on it until you are satisfied with the result.

And for this review, there are several approaches again that you can take:

Hands-on / Deep Dive Approach

The most hands-on way would be to actually review the full code, line by line. Without going into whether this is actually more productive than writing the code yourself (that's a whole article in itself), you really need to be proficient in the language you are working with. For developers who want to improve their productivity in a language they've been working with for 10+ years, this might just be the right approach.

Structural Approach

A little higher level, and maybe a good approach for seasoned software engineers working with a new or less familiar language who still want to actually review code: look at the code structure. Files, file names, code separation, classes, class names, functions and function names. Maybe half a look at the tests, and whether the test functions sound like the right things are being tested. If you are familiar with other programming languages, you will probably at least be able to read the code in a totally new language, while the AI tool enables you to write it faster.

Both approaches are valid and depend on your knowledge of the language and frameworks you are working with.

The QA Engineer

Now, to the second approach. This is the medium flight level between the architect and the engineer.

You still want to look at code, but only the tests. You can either write them yourself, if you are familiar with the language, or let AI write them, but then you should actually review them in detail. Make sure that not only the main features are covered, but also corner cases. Be very specific, but don't aim for 100% code coverage; aim for 100% feature coverage and use case coverage.
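To make "feature coverage over code coverage" concrete, here is a minimal sketch of what tests worth reviewing might look like. The function and its tests are hypothetical, not from any real project; the point is that the reviewer checks the happy path plus corner cases, not every internal line:

```python
# Hypothetical function under test: turn a title into a URL slug.
def slugify(title: str) -> str:
    cleaned = "".join(ch if ch.isalnum() else "-" for ch in title.lower())
    while "--" in cleaned:
        cleaned = cleaned.replace("--", "-")
    return cleaned.strip("-")

# Feature coverage: the happy path the requirement describes.
def test_basic_title():
    assert slugify("Hello World") == "hello-world"

# Corner cases a reviewer should insist on: punctuation, repeated
# separators, and empty input.
def test_punctuation_collapses_to_single_dash():
    assert slugify("Hello,  World!") == "hello-world"

def test_empty_input():
    assert slugify("") == ""

test_basic_title()
test_punctuation_collapses_to_single_dash()
test_empty_input()
```

If the AI-generated test file only contains the first test, that's your cue to send it back for another round.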

This is basically the technical version of what will be described in the next section as the architect. And it's especially valuable for those of us who prefer to work test-driven.

This approach works with all kinds of AI tools, from the more focused Cursor AI (which works best for me when working on small, limited contexts) to the bigger picture AI coding tools like Claude Code which can work well with bigger projects.

The Architect

The third approach I want to look into and describe is the architect. Or as I like to call it nowadays: the Kiro approach.

If you have used AWS's Kiro already, you probably know what I'm referring to. It's the same three-step process that many others talk about as well:

  1. Requirements
  2. Design
  3. Tasks

This approach is yet another flight level above the previous ones. And this time we're actually not looking at code anymore. So, in a way, this is the approach that could be described as actual vibe coding. But with a twist.

See, many people think, when they hear about vibe coding, that it's about sending just one short prompt and then AI does everything for us. But this rarely works.

In contrast to that, the architect sends one big prompt, so to speak. A very big one. Because we need to prepare several documents for this approach to work.

This is the right approach for those who prefer to plan instead of looking at code. For those who maybe can't even code themselves and stumbled upon vibe coding for that reason in the first place. This is the approach for the architects among us.

Kiro is just one of many examples, but let's have a look at the documents it created based on my very detailed description of the kind of tool I'm building.

For this example I used one of the tools I've built recently: https://photoboost.app/

I gave it a description of what exactly the web app is, what it does, the results, and also provided the full landing page to offer even more context.

As a result, you get the first thing you need when creating apps as the architect: the requirements document.

The structure of this document depends on which tool you use to create it; maybe you are already familiar with requirements engineering and have your own template. Here is the example for PhotoBoost using Kiro (reduced to just the headlines and the first requirement):

# Requirements Document
## Introduction
## Requirements
### Requirement 1

**User Story:** As a professional seeking a headshot, I want to upload multiple selfies of myself, so that the AI can generate high-quality professional headshots from various angles and expressions.

#### Acceptance Criteria

1. WHEN a user accesses the upload interface THEN the system SHALL allow uploading between 5-20 selfie images
2. WHEN a user uploads images THEN the system SHALL validate that images contain human faces
3. WHEN images are uploaded THEN the system SHALL accept common formats (JPEG, PNG, HEIC) up to 10MB each
4. IF an uploaded image is invalid or corrupted THEN the system SHALL display a clear error message and allow re-upload
5. WHEN all images are uploaded THEN the system SHALL display a preview gallery for user confirmation

Even for a rather simple web app like PhotoBoost, this can easily end up as a long list of requirements. But the more specific, the better. Describe the use cases along with their roles and acceptance criteria in individual user stories.
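As an illustration of how directly such acceptance criteria can translate into code, here is a minimal sketch of a validator covering criteria 1 and 3 above. The limits and formats mirror the requirement; the function name and helper structure are assumptions, not PhotoBoost's actual implementation:

```python
# Sketch of upload validation per Requirement 1 (criteria 1 and 3):
# 5-20 images, JPEG/PNG/HEIC, up to 10 MB each. Hypothetical helper,
# not PhotoBoost's real code.
ALLOWED_EXTENSIONS = {".jpeg", ".jpg", ".png", ".heic"}
MAX_SIZE_BYTES = 10 * 1024 * 1024  # 10 MB per image

def validate_upload(files: list[tuple[str, int]]) -> list[str]:
    """Return a list of human-readable errors; an empty list means valid.

    `files` is a list of (filename, size_in_bytes) pairs.
    """
    errors = []
    if not 5 <= len(files) <= 20:
        errors.append(f"Expected 5-20 images, got {len(files)}.")
    for name, size in files:
        ext = "." + name.rsplit(".", 1)[-1].lower() if "." in name else ""
        if ext not in ALLOWED_EXTENSIONS:
            errors.append(f"{name}: unsupported format (use JPEG, PNG or HEIC).")
        if size > MAX_SIZE_BYTES:
            errors.append(f"{name}: larger than 10 MB.")
    return errors
```

The point of well-written criteria is exactly this: each WHEN/THEN clause maps to a check the AI tool can implement and a test you can verify.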

Product Managers, Product Owners, Scrum Masters and others will also feel very familiar with this approach. Consider this basically the same step you would take when creating a new ticket in Jira or similar tools.

In the next step, we then need to take the requirements and actually come up with the technical details. How this step is done depends on how technical you are. Getting a first draft using Kiro or other tools is certainly not a bad idea, but then you might dive in and adjust details, either by telling your AI tool or simply adjusting the document yourself.

Let's have a look at what such a document could look like. Again, just the headlines; it's not about the details, but about giving you an overview and a template so you get an idea of what kind of document you need to create.

# Design Document
## Overview
## Architecture
### High-Level Architecture
### Technology Stack
## Components and Interfaces
### Frontend Components
#### Core Pages
#### Reusable Components
### Backend API Routes
#### Image Processing Routes
#### User Management Routes
#### Payment Routes
### fal.ai Integration
#### SDK Configuration
#### Headshot Generation Workflow
#### Recommended fal.ai Models
## Data Models
### User Model
### Order Model
### Style Preferences Model
### Generation Job Model
## Error Handling
### Client-Side Error Handling
### Server-Side Error Handling
### Error Response Format
## Testing Strategy
### Unit Testing
### Integration Testing
### End-to-End Testing
### Performance Testing
## Security Considerations
### Data Protection
### API Security
### Privacy Compliance

The design document goes into detail regarding every technical aspect of the app you are planning to build. It's the foundation for the final step in our approach as the architect, the actual task list.

Here we create the implementation plan, which can be seen as the individual prompts that we provide to our AI coding tool. For example, in Kiro, each task can be passed to a sub-agent that then works on it in parallel (as much as possible; not all tasks are 100% independent).

Again, here as an example, the first task for PhotoBoost when creating the implementation plan via Kiro:

# Implementation Plan

1. Initialize SvelteKit project with core dependencies
  - Create new SvelteKit project with TypeScript template
  - Install and configure TailwindCSS for styling
  - Install required dependencies (@fal-ai/serverless-client, stripe, firebase)
  - Set up basic project structure with src/lib folders
  - Configure TypeScript and ESLint settings
  - _Requirements: Foundation for all requirements_

No matter if you use Kiro or a similar tool, the goal of this approach is to make a detailed plan before starting so that your single-shot solution has a higher chance of actually succeeding.

The great thing about this approach is that more of the work can later be done in bulk and in parallel. And if you are not satisfied with the results, it's often quite simple to figure out which part was not well documented:

  1. For example, a feature isn't working as expected? Go back to the requirements document and re-visit the user story describing this feature.
  2. You need a different payment provider? Check the design document about this part and adjust it to your needs.
  3. The Firebase security rules were not configured as part of the cloud storage setup? Maybe check the task list if it was included there.

With this, we've covered the three main approaches or branches for how you can make your next project using AI-supported coding a success.

In the next section, I will give you a little bonus that does not really fit into the above pattern or three distinct roles. But maybe it is just the right approach for you.

Bonus: The Software Engineer turned Many-Shot Vibe Coder

This is a totally different approach and yet it has some similarities to other roles. In fact, it probably combines all of them to some extent.

Whenever I start a new project, I like to try out new approaches for becoming more efficient and getting better results, to improve my own and my clients' satisfaction.

One of the approaches that works quite well for me is many-shot vibe coding, but not the way you might think.

When I'm starting on a project and have a really good idea of what the result is going to look like, and also know how the architecture and code design will pan out, down to the actual files and individual services, I've noticed that shooting many mini prompts can be a valuable approach.

Here is a very specific example I worked on recently: an example implementation of vector search for an app that lets me find a good book to read depending on what kind of story I feel like reading. My starting point was the URL of a list of open source sci-fi books I found on Reddit. From there, my prompts looked like this (summarized):

  1. Using this Reddit post URL, please write a script that scrapes the post and puts it into a text file called reddit_post.txt
  2. Using @reddit_post.txt, please write a script that produces a text file called reddit_urls.txt which contains a list of all URLs mentioned in @reddit_post.txt, one URL per line, nothing else.
  3. Using @reddit_urls.txt, please write a script that parses each link (they all point to https://www.gutenberg.org/) and looks for the 'Plain Text UTF-8' link in it. Put all those links in a file called @gutenberg_links.txt, one link per line, nothing else.
  4. Please write a script that downloads all text files from the @gutenberg_links.txt file and saves them locally.
  5. Please write a script that [..] ... you get the idea!
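To give an idea of what comes back from one of those mini prompts, here is a sketch of what the step-2 script might look like, using only the Python standard library. The actual AI-generated script may differ; the file names come from the prompts above:

```python
# Sketch of the step-2 script: pull every URL out of reddit_post.txt
# and write them to reddit_urls.txt, one per line.
import re

URL_PATTERN = re.compile(r"https?://[^\s)\]\"']+")

def extract_urls(text: str) -> list[str]:
    """Return every http(s) URL found in the text, in order."""
    return URL_PATTERN.findall(text)

def run(in_path: str = "reddit_post.txt",
        out_path: str = "reddit_urls.txt") -> int:
    """Read the post, extract the URLs, write one per line; return the count."""
    with open(in_path, encoding="utf-8") as src:
        urls = extract_urls(src.read())
    with open(out_path, "w", encoding="utf-8") as dst:
        dst.write("\n".join(urls) + "\n")
    return len(urls)
```

Each prompt in the chain produces a script this small and this focused, which is exactly why the chain works: every step is trivial to verify by looking at its output file, not its code.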

The result was a fully functional script that would parse books, vectorize them, put them into a local database, and offer a search script to actually find the right books for me. I didn't look at any code, I didn't have to fix anything that didn't work ... but I had to write a lot of prompts. Really a lot.
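The search script at the end of that pipeline is essentially a nearest-neighbor lookup over embeddings. Here is a minimal sketch of that core idea; the bag-of-words `embed` function is a stand-in for the real embedding model the pipeline used:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Stand-in embedding: a simple bag-of-words vector. The real pipeline
    # would call an actual embedding model; this only shows the search step.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def search(query: str, books: dict[str, str], top_k: int = 3) -> list[str]:
    """Return the titles of the books most similar to the query."""
    q = embed(query)
    ranked = sorted(books,
                    key=lambda title: cosine(q, embed(books[title])),
                    reverse=True)
    return ranked[:top_k]
```

With the real database in place, "find me a story about space travel" becomes a one-liner against `search()`, no reading of book texts required.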

This approach might not be for you if you don't actually want to write all those prompts, but it is yet another approach that you can take.

Final Thoughts

It's important to understand when and where getting help from AI is the right approach, and how much help to get.

For example, I've written this article myself, with my own ideas and my own concept, but I bounced the idea and its structure off ChatGPT. We talked about the approach, and I asked it whether I was maybe overlooking further approaches and what it thought about the overlap between those angles. Questions like these, asked after making your own decisions and coming up with your own thoughts, will help you get a deeper understanding of what you are writing about, because you basically get someone to review the article while it's in the making.

Another option is summaries. After the whole article idea was finished and I had laid out the structure and content, I asked it to make a summary table, and here is what ChatGPT came up with. I'm quite happy with the result.

| Role | Artifact Focus | Human Responsibility | AI Responsibility | Strengths | Risks / Weaknesses |
| --- | --- | --- | --- | --- | --- |
| Deep-Dive Engineer | Code (line-level) | Review every line for correctness, efficiency, maintainability. Spot bugs, anti-patterns, vulnerabilities. | Generate complete implementations. | Maximum assurance of correctness. Developer retains control over quality. | Extremely time-consuming, negates some AI efficiency. Risk of "death by review fatigue." |
| Structural Reviewer | Code (structural-level) | Validate project/file layout, naming conventions, function/class/API design, DB schema. Ensure architectural consistency. | Generate organized codebase with modules, classes, schemas. | Faster than line-by-line review, focuses on long-term maintainability. | Bugs may slip past since internals aren't checked. Relies heavily on tests or runtime validation to catch logic errors. |
| QA | Tests | Define/validate test cases, check coverage & correctness. Rely on passing tests as contract of quality. | Implement code to satisfy test suite. | Scales well, shifts verification into automation. Forces AI to meet objective criteria. | AI-generated tests often shallow. Corner cases may be missed unless human is thorough. False sense of safety if tests are weak. |
| Architect / PM | Specifications & requirements | Write detailed design docs, acceptance criteria, workflows. Validate that implementation matches spec. | Translate structured spec into code. | Best for large projects; shifts effort to upfront planning. Reduces ambiguity. | High upfront effort. If specs are incomplete, AI fills gaps unpredictably. May miss low-level issues not expressed in specs. |

No matter which route you go: if you want to work on a project that isn't done after one first MVP, try one of the above approaches and make your life easier.