"I've observed two distinct patterns in how teams are leveraging AI for development. Let's call them the "bootstrappers" and the "iterators." Both are helping engineers (and even non-technical users) reduce the gap from idea to execution (or minimum viable product (MVP))."

"The Bootstrappers: Zero to MVP: Start with a design or rough concept, use AI to generate a complete initial codebase, get a working prototype in hours or days instead of weeks, focus on rapid validation and iteration."

"The Iterators: daily development: Using AI for code completion and suggestions, leveraging AI for complex refactoring tasks, generating tests and documentation, using AI as a 'pair programmer' for problem-solving."

The "bootstrappers" use tools like Bolt, v0, and screenshot-to-code AI, while "iterators" use tools like Cursor, Cline, Copilot, and WindSurf.

But there is "hidden cost".

"When you watch a senior engineer work with AI tools like Cursor or Copilot, it looks like magic, absolutely amazing. But watch carefully, and you'll notice something crucial: They're not just accepting what the AI suggests. They're constantly: Refactoring the generated code into smaller, focused modules, adding edge case handling the AI missed, strengthening type definitions and interfaces, questioning architectural decisions, and adding comprehensive error handling."

"In other words, they're applying years of hard-won engineering wisdom to shape and constrain the AI's output."

The author speculates on two futures for software. One is "agentic AI," where AI keeps getting better and teams of AI agents take on more and more of the work currently done by humans. The other is "software as craft," where humans make high-quality, polished software with the empathy, experience, and deep care for craft that can't be AI-generated.

The article used the term "P2 bugs" without explaining what that means. P2 means "priority 2." The idea is that people focus all their attention on "priority 1" bugs, but fixing all the "priority 2" bugs is what makes software feel "polished" to the end user.

Commentary: My own experience is that AI is useful for certain use cases. If your situation fits those use cases, AI is magic. If it doesn't, AI is useless or of only marginal utility. Because its usefulness depends on the situation, AI doesn't provide the across-the-board 5x productivity improvement that employers expect today. My feeling is that the current generation of LLMs isn't good enough to fix this, but because of that employer expectation, I have to keep trying new AI tools in pursuit of the expected 5x improvement. (If you have achieved a 5x productivity improvement over two years ago on a large codebase -- more than half a million lines of code -- written in a crappy language, get in touch with me; I want to know how you do it.)

The 70% problem: Hard truths about AI-assisted coding

#solidstatelife #ai #genai #llms #codingai