Matt Shumer writes that something big is happening. It’s a long post, with some good points and some things that might be framed a little too dramatically. If you are firmly an AI skeptic, I doubt you will be convinced by Matt, but his post is comprehensive and got me thinking.
I want to focus on this part:
But it was the model that was released last week (GPT-5.3 Codex) that shook me the most. It wasn’t just executing my instructions. It was making intelligent decisions. It had something that felt, for the first time, like judgment. Like taste. The inexplicable sense of knowing what the right call is that people always said AI would never have. This model has it, or something close enough that the distinction is starting not to matter.
First of all, I agree with Matt that GPT-5.3 is a great model. Codex has gotten really good. Token limits are so high that usage is effectively unlimited with ChatGPT Pro.
Let’s think about taste, though. I continue to see proclamations about AI building a complete app in a day with just a few prompts. Technically that’s true (we are going to see a flood of new apps this year), but are those the kind of apps that could become real products?
Even as AI works its way into everyday life for more developers, one thing that won’t change is the iterative process of building good apps. When I start working on something, I don’t know exactly where it’s going to end up until I’ve built, tested, and thrown out multiple ideas, tweaking the design along the way. It takes weeks or months to get there. AI could build “a” version of something on its own, but not the version I want.
No amount of up-front prompting can solve this because at the beginning we don’t fully know what the final product should look like. There’s no question that AI will have a profound impact on many jobs. But AI is rarely a replacement for humans. It’s an accelerant. It helps us iterate faster as we apply our own taste along the way.