How AI Writes Code
Now we know what AI can and can't do. But how does it actually work? Not the technical details (you don't need those), but the underlying principle.
Tokens Are Like Letters for AI
Remember K01 (Text)? We learned there that AI doesn't "understand sentences." AI predicts what text comes next. It does this by breaking every text into small pieces called "tokens."
A token isn't a word. It's more like a syllable or even a letter. When you say "hello world" to AI, it breaks the phrase into a handful of tokens, for example "hel", "lo", "_world" (the underscore marks the leading space). The exact split depends on the tokenizer, but the idea is: AI doesn't see your text as complete sentences, but as sequences of patterns.
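To see real token boundaries, you can ask a tokenizer directly. A minimal sketch using OpenAI's tiktoken library (one tokenizer among many; other models split text differently):

```python
# pip install tiktoken
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")  # tokenizer used by several GPT models
tokens = enc.encode("hello world")
print(tokens)  # a short list of token ids, e.g. [15339, 1917]

# Decode each id back to its text piece to see the actual boundaries.
for t in tokens:
    print(repr(enc.decode_single_token_bytes(t)))  # b'hello', b' world'
```

Note that this particular tokenizer happens to use just two tokens here; the three-piece split above is only an illustration.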
With code it's the same, with one important difference: code has much stricter rules. "hello world" can be phrased in 1000 ways. But x = x + 1 can only be written in a few ways without being wrong.
This means AI is often more precise with code than with prose. The structure is stricter, and that plays to AI's strength: strict rules make the next token easier to predict.
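A small sketch of that strictness, in Python since the chapter's examples use it. The sentence has endless valid phrasings; the increment has essentially two, and near-misses either change the meaning or refuse to run:

```python
x = 3
x = x + 1   # the canonical form
x += 1      # essentially the only common variant
# x =+ 1    # looks almost right, but means x = (+1): silently wrong
# x + = 1   # one misplaced space: SyntaxError, won't run at all
print(x)    # 5
```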
Why AI Knows Code from GitHub
AI was trained on billions of lines of code. GitHub, StackOverflow, open-source projects — it's all training material. AI has seen how to name functions, how to catch errors, how to build data structures.
That's why AI can write correct, structured code that looks like real, professional code. Not because AI "understands" what code does, but because it's seen the pattern so often that it reproduces the most likely pattern.
It's like a composer who has heard millions of symphonies and then writes a new one. But the analogy breaks down at one point: the composer understands music, while AI does NOT understand code. It just reproduces the most common structures.
This also explains why AI often over-engineers code. GitHub is full of over-engineered code, because professional programmers tend to program defensively. AI learned this defensiveness along with everything else.
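Here is what that defensiveness looks like in practice: two sketches of the same function, invented for this illustration rather than taken from any repository. AI frequently produces something like the second version even when the first would do:

```python
def average(numbers):
    # Plain version: assumes sensible input, does the job in one line.
    return sum(numbers) / len(numbers)

def average_defensive(values):
    # Defensive version, in the style common in professional codebases.
    values = list(values)
    if not values:
        raise ValueError("values must not be empty")
    if not all(isinstance(v, (int, float)) for v in values):
        raise TypeError("all values must be numeric")
    return sum(values) / len(values)
```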
Three Task Types for Code
We learned about three task types in K01 (Text): Multiplier, Enabler, Limits. With code it's exactly the same.
1. Multiplier: Code for Routine Tasks
These are the typical tasks where AI shines:
- "Write me an HTTP server in Python"
- "Create a script that processes CSV files"
- "Generate code to review images"
Why does this work? Because AI knows the structure. These tasks have been done thousands of times, and AI has seen all the patterns. The chance that the code works is high.
The Multiplier Rule: The more often the task appears in training, the better the code.
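To make the first request on the list concrete, here is roughly the kind of standard code such a prompt yields. A minimal sketch using only the Python standard library; the port and the response text are arbitrary choices:

```python
from http.server import HTTPServer, BaseHTTPRequestHandler

class HelloHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        # Answer every GET request with a plain-text greeting.
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"hello world\n")

if __name__ == "__main__":
    HTTPServer(("", 8000), HelloHandler).serve_forever()
```

This pattern appears thousands of times in the training data, which is exactly why it tends to come out correct.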
2. Enabler: Code for Your Specific Requirements
These are tasks where you need to make your specific idea concrete:
- "Write a script that processes my specific file structure"
- "Create a scraper for this particular website"
- "Build a system that implements my business logic"
Here AI must not only understand your requirements but also fill in everything you didn't say. This works, but with limitations: AI will make assumptions, and it often gets them wrong.
The Enabler Rule: You must translate your requirement into patterns AI knows. The more precisely you describe it, the better the result.
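Here is a hypothetical sketch of what AI might produce for the first request on the list. The folder name, the file pattern, and the encoding are all invented here; they stand for the assumptions AI has to make unless your prompt spells them out:

```python
from pathlib import Path

def process_reports(root):
    results = []
    # ASSUMPTION: the files live in <root>/reports/
    # ASSUMPTION: they are plain-text files ending in .txt
    for path in sorted(Path(root, "reports").glob("*.txt")):
        text = path.read_text(encoding="utf-8")  # ASSUMPTION: UTF-8
        results.append((path.name, len(text.splitlines())))
    return results
```

Every line marked ASSUMPTION is a place where your real file structure may differ, and the code will quietly do the wrong thing.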
3. Limits: Things AI (Still) Can't Do
These are tasks where AI hits its limits:
- "Optimize my code for maximum performance"
- "Find the security hole in my system"
- "Understand why this code doesn't work" (when there are subtle bugs)
Why doesn't this work? Because it requires understanding. Performance optimization needs context — what are the bottlenecks? Security needs creativity — what unexpected attack vectors exist? Debugging needs methodology — which hypothesis do you test first?
AI can attempt these tasks, but with much lower success rates. And you don't always notice when it's wrong.
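To show what a subtle bug looks like, here is a classic Python pitfall, chosen purely for illustration. The code runs without any error message and still misbehaves:

```python
def add_item(item, bucket=[]):   # BUG: the default list is created only once
    bucket.append(item)          # ...so every call shares the same list
    return bucket

print(add_item("a"))  # ['a']        as expected
print(add_item("b"))  # ['a', 'b']   surprising: state leaked between calls
```

Spotting this takes a hypothesis ("maybe the default argument is shared?") and a test of that hypothesis, which is exactly the methodology described above.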
Two Questions Before Every Code Prompt
Before you ask AI to write code, ask yourself two questions:
1. Is this a standard problem? Has it been solved millions of times? If yes → Multiplier, AI will be great. If no → harder, AI will make assumptions.
2. Can I test the result? Can I see whether the code works? If yes → I can find and fix bugs. If no → I have to trust blindly.
These two questions decide whether you can trust AI or not.
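Question 2 in practice: a few plain assert lines are often all it takes. The function count_lines here is a made-up stand-in for whatever you asked AI to write:

```python
def count_lines(text):
    # Hypothetical example of AI-generated code under test.
    return len(text.splitlines())

# If you can write checks like these, your answer to question 2 is "yes".
assert count_lines("") == 0
assert count_lines("one\ntwo") == 2
assert count_lines("trailing\n") == 1
print("all checks passed")
```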
Why Code Is Harder Than Text
Text: AI writes words that fit together well. A wrong word still "works"; it just sounds odd.
Code: AI writes symbols that must fit together and carry meaning. A wrong symbol means the program doesn't run, or runs and does the wrong thing. That's a more serious kind of error.
This is why code prompts need to be more precise. With text, AI can "guess" more. With code, context must be clear.
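A small sketch of that difference: one stray character leaves a sentence readable, but makes code refuse to run at all:

```python
good = "total = price * quantity"
bad = "total = price * quantity)"   # a single stray parenthesis

for snippet in (good, bad):
    try:
        compile(snippet, "<example>", "exec")  # ask Python to parse it
        print("parses:", snippet)
    except SyntaxError as err:
        print("broken:", snippet, "->", err.msg)
```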
The Future: Better Code AI
Will AI get better at code in the future? Definitely. There are already systems that not only write code but also check it: they recognize errors, test variations, and even find some of their own mistakes.
But even these won't "understand" in the true sense. They'll just have even better patterns. And that's why you'll always need a human to review the system.
This isn't a limitation of AI. This is the reality of any software: code is complex, and a second pair of eyes is always useful.
AI writes code the same way it writes text: through token prediction. Code's strict structure helps AI. Standard tasks work well. Custom requirements need precision. Your understanding is still needed.