For all of the opacity surrounding them, and for all the prognostication of AI boosters who predict various kinds of societal upheaval upon their wider adoption, large language models (LLMs) are fundamentally simple creatures. Despite what most people might think, and despite the large ecosystems, product catalogs, and proprietary scaffolding that now surround LLMs to give them real-time retrieval, code execution, and other capabilities, they are still fundamentally just next-token predictors.
The ways in which they do next-token prediction – i.e. given words A and B, what word is C? – can be fascinating, uncanny, even startling in their resemblance to human logic and speech. Frontier models now even include "reasoning", in which specialty "reasoning tokens" allow models to trade additional time and compute for accuracy. ChatGPT and Gemini Pro, for instance, allow you to select the "level" of reasoning to use, with higher levels reserved for higher-paying customers.
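To make "next-token predictor" concrete, here is a deliberately toy sketch: a word-level bigram model that, given the previous word, returns the word that most often followed it in a tiny corpus. Real LLMs are deep neural networks operating over subword tokens and vastly more data, but the interface is the same – context in, distribution over next tokens out.

```python
from collections import Counter, defaultdict

# Toy next-token predictor: count which word follows which in a tiny corpus,
# then predict the most frequent follower. This is the bigram-model version
# of what LLMs do at enormous scale with neural networks and subword tokens.
corpus = "the cat sat on the mat the cat sat on the mat the cat slept".split()

counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(word):
    """Return the most frequent word seen after `word`, or None if unseen."""
    following = counts[word]
    return following.most_common(1)[0][0] if following else None

predict_next("the")  # → "cat" (follows "the" 3 times, vs. "mat" 2 times)
```

The point of the toy isn't fidelity; it's that "prediction" here is purely statistical pattern completion, with no model of what a cat or a mat is.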
An LLM is first trained to do simple next-token prediction over the entirety of the Internet's data and then "aligned" via reinforcement learning from human feedback. It is further tuned to maximize accuracy on tasks like coding and app integrations, which is where AI investors now seem to see the most potential return on investment – a sharp turn away from all the talk of AGI in 2024.
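The pretraining objective in that first stage is just cross-entropy on the next token: the model emits a score (logit) per vocabulary item, and training pushes up the probability of whichever token actually came next. A minimal sketch, assuming a toy three-word vocabulary and hand-picked logits:

```python
import math

# Minimal sketch of the pretraining objective: cross-entropy loss on the
# next token. Training nudges logits so the true next token gets higher
# probability, i.e. lower loss.
def next_token_loss(logits, target_index):
    """Negative log-probability of the true next token under softmax(logits)."""
    m = max(logits)  # subtract max before exp for numerical stability
    exps = [math.exp(x - m) for x in logits]
    prob_target = exps[target_index] / sum(exps)
    return -math.log(prob_target)

# Vocabulary: ["cat", "mat", "sat"]; the model strongly favors index 0.
loss_correct = next_token_loss([4.0, 1.0, 0.5], target_index=0)  # small loss
loss_wrong = next_token_loss([4.0, 1.0, 0.5], target_index=2)    # large loss
```

RLHF then layers a second, preference-based objective on top of this one; the sketch covers only the pretraining step.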
Next-token prediction is now tuned primarily to mimic human conversation; it can sound an awful lot like a friend or husband or editor or girlfriend or therapist or PhD advisor.
But LLMs, as impressive as they often seem at first, are still, fundamentally, just. next. token. predictors.
Wait, Don't Yell at Me Yet
Is this a distinction without a difference? As the famous saying goes, we didn't build airplanes by exactly mimicking the flight of birds, so we can't expect artificial intelligence to exactly resemble human intelligence. What matters is whether it can do what we want.
The problem arises when it seems that, outside of a limited scope of talents relating to para-socialization; summarizing and writing emails; simple coding tasks; and authoring low-effort creative content, AI can do very little. And that small parcel of cognition takes an exorbitant amount of energy to produce. It's neither efficient nor economical.
Some say we might be in a bubble. Personally, I think it's a black hole: sucking all the capital, talent, and precious resources into a cavernous maw that leaks a tiny bit of value – just enough value to get a lot of folks using it, but not enough to get a lot of folks paying for it.
To demonstrate that these are not the deluded musings of a Luddite software engineer desperate to believe he will not be automated away by AI in 2026, consider that the Salesforce CEO recently admitted that executives have lost enthusiasm for LLMs and that his company is retooling its Agentforce software to be less reliant on them.
It turns out that 2025 wasn't the year of AGI, but don't worry, 2026 will be.
LLMs are unpredictable in their behavior because they don't truly reason. They learn patterns. The scale at which they learn those patterns means they demonstrate some forms of emergent intelligence. They do seem to learn "concepts" that can be reused for different purposes, such as sharing the idea of "dog" across multiple languages. But they still operate over language and language alone.
This does not an intelligent machine make.
Looking Towards the Future
As I look out over a frosted Park City in the twilight of 2025, and reflect on where we are and what we've come to in this industry, it's clear we are still hyper-fixated on linguistic patterns and on automating away the very things that distinguish us from body snatchers: creative writing, the visual arts, musical composition, and even social interaction.
We have decided to try and land the final blow against the white-collar middle class, to deride and gleefully pontificate on what the world will look like without people who say no, who push back, who dare to suggest anything other than that we are great. We have decided that the bland, the median, the Netflixed and the Disneyfied will sate the world's simple needs, almost like when tech bros decided food was stupid and you should drink some sketchy paste instead.
2025 was the year the tech industry elevated the middle-manager mindset into a new religion of labor reduction and anti-intellectualism, where subject matter experts are not needed and no one has to pay anyone but their friends a living wage. So now we're living through the enshittification of everything.
How we currently train LLMs makes very few, if any, assumptions about how language or the human mind works. It does not organize combinations of different concepts into something brand new in real time, the way humans can. It can't consistently be told something and then remember it, the way many or most humans can. It can't be trusted not to forget everything you've ever taught it the minute you want to train it for a different purpose or introduce it to new concepts, like the world's most expensive Etch A Sketch.
It certainly in no way resembles my own ideas on how to construct a useful AI, which for me is premised on the idea of automating low-risk, high-reward tasks that humans struggle with. Scheduling and optimizing a calendar, remembering details that are important in one moment then promptly forgotten in the next, helping to catalog disparate or disorganized ideas into an ontology.
Helping remember your medications, translating languages or keeping at-risk languages from extinction, the potential accessibility applications, lowering barriers in ways that allow people to still be who they fundamentally are, to allow us to still connect in the most fundamental and important ways.
Everyone is saying that 2026 will be the year that AI revolutionizes everything, but this is what they've said every year for over a decade. I'm betting it's the year that folks realize the AI that could be the most useful to them is not the AI that's being promised by the wealthiest and most privileged in the world.
I look forward to being part of that discussion.