Hello everyone. It’s been a while since I wrote a post about AI. A lot has happened in the past few months, but I want to focus on one thing today: the economics of AI subscriptions for coding, and why I think they are about to break. Let’s be real and talk about money.
The math doesn’t add up
AI companies are done with subscriptions. They’re just not worth it to them.
If you think about Codex, the number of tokens you can burn on a $20 subscription is insane. By my count, a single 5-hour limit window saw 4 context compactions on the same chat, which means I consumed around 1 million tokens in one chat before hitting the limit. The weekly limit allows a whole lot more than that.
Now take the output-token price of GPT 5.4 and you’ll see the size of the hole: $15 per million tokens. I pay $20 A MONTH for my subscription. It just doesn’t add up.
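To make the hole concrete, here’s a back-of-the-envelope sketch in Python. The $15-per-million output price, the ~1 million tokens per 5-hour session, and the $20 subscription come from the figures above; the number of heavy sessions per month is purely my own assumption.

```python
# Back-of-the-envelope: what the tokens of one heavy subscriber would cost at API rates.
price_per_million = 15.0        # $ per 1M output tokens (GPT 5.4 figure above)
tokens_per_session = 1_000_000  # ~1M tokens burned per 5-hour session
sessions_per_month = 20         # ASSUMPTION: one heavy session per working day
subscription = 20.0             # $ per month

api_cost = price_per_million * (tokens_per_session / 1_000_000) * sessions_per_month

print(f"API-equivalent cost: ${api_cost:.0f}/month")          # $300/month
print(f"Subscription price:  ${subscription:.0f}/month")      # $20/month
print(f"Implied gap:         {api_cost / subscription:.0f}x") # 15x
```

Even if you halve the assumed usage, the provider is still eating a multiple of the subscription price in raw inference cost.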
It is well known that OpenAI is burning cash like nothing on earth right now, so I’m not too surprised.
Anthropic isn’t playing along
Anthropic, on the other hand, is not willing to play this game.
This week the internet blew up because Anthropic removed Claude Code from the Pro subscription on the pricing page: new Pro subscribers didn’t have access to Claude Code. Anthropic restored access at the end of the day, claiming it was a short test for 2% of the user base. But leaked memos reveal an urge to go back to token-based pricing. Subscriptions are not worth it, unless you are Sam Altman.
I wouldn’t be surprised if Google and OpenAI follow this trend in the future. OpenAI is already targeting corporate customers, because they know that’s where the good revenue comes from. And because they are losing ground to Anthropic.
So my bet is that we will see increasingly more pressure from these AI big techs to either:
- Convert their user base into corporate customers, or
- Charge those willing to pay per million tokens, or push them toward $200 subscriptions
How this will happen exactly is hard to say. I guess we have to sit and wait.
What should we do?
I won’t pay $25 per million tokens to a big tech. I also don’t want to pay $200 a month for a subscription that still has limitations. OpenAI was founded to make AI available to everyone, democratically, for free, unconstrained by the need for profit. Clearly a lie. And I think there are some alternative ways.
More recently, I started checking out OpenCode and took out a $10 subscription for their Go tier. I think OpenCode is playing it smarter here, offering hosting and tooling for AI models that are open-source and open-weight. I’ve been watching Moonshot, Z.ai and Alibaba for a while, but never found a real opportunity to run their models. OpenCode gives us that for a fair price. And recently Moonshot AI, a Chinese AI startup, launched Kimi 2.6. While I was writing this post, DeepSeek launched its long-awaited v4 model.
Given the whole geopolitical situation, and the fact that China doesn’t have access to the latest bleeding-edge chips, it might seem obvious that their models will never catch up with those of US companies. But I truly believe that times of hardship produce more solid solutions. China is catching up with US companies using domestic technology at a fraction of the price. If that is not a sign that the US AI industry is inflated, I don’t know what is.
How do OpenCode and the Chinese LLMs compare?
For a while I will keep testing the Chinese models using my OpenCode subscription. It’s cheap and gets the job done.
Of course, OpenCode is nowhere near the quality and polish of Claude, for example. But I’m looking for an LLM that can use the tooling and write code for me. UI polish is optional.
I have a concrete example for comparison: I’m building an application in Godot. It’s a stitch designer for crochet. A really good one! More on that in another post. I have a specific issue where stitches can’t be extended, only chains. It’s a very specific issue involving some very specific math in a very specific piece of software. Codex, Gemini and Claude failed to fix it for me. Claude hit the 5h limit during the thinking process, which is very annoying. Kimi 2.6 was able to make it work, but with a small inconsistency in the spacing between stitches.
Sometimes, depending on how complex the problem is, an LLM will make a change that does absolutely nothing. Kimi 2.6, unlike the others, seems to always give you something. I’m not saying it’s better; it’s too early for that anyway. But the initial impression is certainly very positive.
That’s it for now. Stay hydrated.