The State of AI-Assisted Coding

This is a living document. It reflects my position on AI-assisted software development in early 2026. My views have changed before and will change again as tools evolve. What follows is an honest description of how I currently think about these tools.

Vibe Coding Is Not AI-Assisted Coding

I want to start with a distinction that often gets missed. Vibe coding is prompt-first development. You describe requirements in natural language, give them to a large language model, and accept the output with little structure or constraint. It is fast and often enjoyable. It works well for prototypes and early experiments.

But maintainability suffers.

For me, good code is code that is easy to test and easy to change. Testable code requires well-crafted architecture. Architecture requires explicit decisions. When you simply throw requirements at an LLM, those decisions are left implicit or accidental.

Vibe-coded projects usually work until the first meaningful change. I have seen this many times. You modify one file and several others break in ways that are hard to reason about, because the original system had no clear structure. The code looks reasonable and happens to run, but it is fragile.

This fits into the idea that “bad software is better than no software”. Sometimes that tradeoff is acceptable. But it is not proper engineering, and it does not scale.

LLMs as Non-Deterministic Compilers

A useful mental model is to treat LLMs as compilers from natural language to code, but non-deterministic ones. Small changes in the prompt can produce different results. Sometimes better, sometimes worse, sometimes just different. Reproducibility is weak and reliability is fragile.
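
As a minimal illustration, here is a sketch using the OpenAI Python client; the model name is an arbitrary choice and an API key is assumed. Running the same prompt twice at a non-zero temperature almost never yields identical output:

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = "Write a Python function that parses an ISO 8601 date string."

def generate() -> str:
    resp = client.chat.completions.create(
        model="gpt-4o",   # arbitrary choice; any chat model shows the effect
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,  # non-zero temperature samples, so output varies
    )
    return resp.choices[0].message.content

# Two runs of the same "compilation" rarely agree.
print(generate() == generate())  # almost always False
```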

Natural language itself is part of the problem. People used prose to describe mathematics for centuries before precise notation existed. Programming in natural language runs into the same limitation. LLMs are powerful, but relying on them alone to translate intent into software introduces ambiguity that grows with project size and complexity.

The idea that an LLM can act as a full software engineer, taking high-level instructions and producing correct and maintainable systems over time, does not hold up in practice.

Ambiguity, non-determinism, and loss of context are not things we can simply fix with better prompts. They are properties of the system. The only practical way to deal with them is to place the model inside a deterministic feedback loop.

When an agent writes code, the output should immediately go through compilers, linters, type checkers, and automated tests. Reliability no longer depends on the model getting things right on the first attempt. It depends on the system rejecting incorrect output and forcing iteration until constraints are satisfied. The compiler is no longer just the LLM. It is the LLM plus the verification loop.
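
Here is a minimal sketch of such a loop, with a hypothetical ask_model function standing in for whatever agent or API is in use, and ruff, mypy, and pytest as one possible set of deterministic gates:

```python
import subprocess

def ask_model(prompt: str) -> str:
    """Stand-in for your LLM or agent call; hypothetical."""
    raise NotImplementedError

def check(cmd: list[str]) -> tuple[bool, str]:
    """Run one deterministic gate; return (passed, diagnostics)."""
    proc = subprocess.run(cmd, capture_output=True, text=True)
    return proc.returncode == 0, proc.stdout + proc.stderr

def generate_until_verified(task: str, path: str, budget: int = 5) -> bool:
    prompt = task
    for _ in range(budget):
        with open(path, "w") as f:
            f.write(ask_model(prompt))
        for gate in (["ruff", "check", path],  # linter
                     ["mypy", path],           # type checker
                     ["pytest", "-q"]):        # test suite
            ok, diagnostics = check(gate)
            if not ok:
                # Reject and iterate: correctness lives in the loop, not the model.
                prompt = f"{task}\n\nThe previous attempt failed:\n{diagnostics}"
                break
        else:
            return True   # all gates passed
    return False          # budget exhausted
```

The budget matters: when the loop cannot converge, the right fallback is a human, not another retry.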

Why Context Limits Matter

Context limits are not just a scaling problem. They are a coordination problem.

As projects grow, the original instructions become outdated. No single prompt can capture the full reality of a production system that spans multiple repositories, shared libraries, deployment pipelines, and infrastructure. Distributed systems, integration boundaries, and shared assumptions quickly exceed what any model context can reliably represent.

I have seen this in my own work. Extending or refactoring large systems without careful planning produces brittle and inconsistent results. The model does not know what it does not know. It fills gaps with confident but incorrect assumptions.

Without explicit structure, summaries, and constraints, the model is reasoning about an incomplete and outdated version of the system.

What Works in Practice

AI-assisted coding is not the same as vibe coding. The difference matters.

The most effective tools introduce planning steps. They summarize relevant context, define clear goals, and limit agents to well-scoped parts of the codebase. Changes become controlled and reviewable instead of inferred from old instructions. In practice, this makes a large difference.
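
To make that concrete, here is a sketch of what an explicit plan with an enforced scope might look like; every field and file name in it is invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class Plan:
    goal: str                 # one explicit objective, not a vague wish
    context: str              # a current summary, not the original prompt
    allowed_paths: set[str] = field(default_factory=set)  # the agent's scope

def in_scope(plan: Plan, touched: set[str]) -> bool:
    """A change is reviewable only if it stays inside the planned scope."""
    return touched <= plan.allowed_paths

plan = Plan(
    goal="Add retry logic to the HTTP client",
    context="http_client.py wraps urllib; timeouts live in settings.py",
    allowed_paths={"src/http_client.py", "tests/test_http_client.py"},
)

print(in_scope(plan, {"src/http_client.py"}))                    # True
print(in_scope(plan, {"src/http_client.py", "src/billing.py"}))  # False: reject
```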

The human role remains essential. AI does not replace skill. It amplifies it. A strong engineer produces better results faster. A weaker engineer produces flawed results faster. This amplification becomes dangerous when AI-generated changes are accepted without proper review.

AI speeds up writing code. Judgment determines whether the result stays maintainable.

Workflow and Tooling

My workflow has changed quickly. I started by pasting prompts into a plain ChatGPT session. Then I moved to inline completion tools, and later to agent-based workflows inside the editor. Today I mostly work in planning and agent modes.

One thing that surprised me is that differences between strong models matter less than expected. What matters much more is context quality, structured prompts, and workflow discipline.

Over time I developed an intuition for what to delegate. Some tasks are safe: writing unit tests for simple functions, generating boilerplate, refactoring repeated patterns. In these cases, the model performs well and cleanup is minimal.
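
As an example of the first category, a small pure function with an obvious specification (slugify here is an invented example) is exactly the kind of code whose tests a model writes reliably:

```python
import re

def slugify(title: str) -> str:
    """Lowercase the title and collapse runs of non-alphanumerics into hyphens."""
    return re.sub(r"[^a-z0-9]+", "-", title.lower()).strip("-")

# The kind of unit test that is safe to delegate: no hidden state, obvious spec.
def test_slugify() -> None:
    assert slugify("Hello, World!") == "hello-world"
    assert slugify("  spaces  ") == "spaces"
    assert slugify("Already-Slugged") == "already-slugged"
```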

Other tasks consistently produce poor results: tasks where being mostly correct is still wrong. For those, writing the code myself is faster than fixing AI output.

Learning when to delegate and when to write code directly has become a required skill.

Even when writing code manually, I rely heavily on tab completion. There is no reason to type what the editor can predict. This uses AI, but it does not change how the system is designed or reasoned about. It speeds up typing, not engineering decisions.

A Balanced Position

AI-assisted coding is not a solution for everything. Vibe coding is fast and fun, but rarely sustainable. Real value comes from structured workflows, planning, and careful review. The goal is maintainable, reliable, and reproducible code, not just fast delivery.

Ignoring AI is also a mistake. I know engineers who avoid these tools because they fear dependence or incomplete understanding. Meanwhile, their peers deliver significantly faster without sacrificing quality.

Teams and companies that avoid AI-assisted workflows are already falling behind. These tools are becoming standard. The advantage does not come from blind trust. It comes from understanding the limits, applying structure, and using AI as an accelerator rather than a replacement for thinking.


TL;DR: Vibe coding is fast but not sustainable. LLMs behave like non-deterministic compilers and do not replace engineers. AI works best as an accelerator for skilled developers. Ignoring it puts you behind. Relying on it blindly creates technical debt. The productive path is structured workflows, planning, and knowing when to delegate versus when to write the code yourself.