I trained my first language model in 2016.

Back then, we didn't distinguish between small language models (SLMs) and large language models (LLMs). They were just language models, and mine happened to be a deep neural network instead of the simpler statistical models that had been preferred for decades. The training code was written in TensorFlow and it ran on a mid-grade gaming laptop, likely with only four or six gigabytes of video memory. It took a few minutes to train and spat out syntactically convincing, if otherwise meaningless, imitations of Nietzsche, whose works made up the majority of my training corpus.

If you know what any of the above means – and it's fine if you don't – then you know how much has changed since then. LLMs and AI are dinner table topics and there's an entire generation in its infancy that won't remember a time before ChatGPT. Tech boosters are promising a brand new world ushered in by AI, and for many, it's been hard not to believe them.

Yet in that time, during which I have gone from an unemployed machine learning enthusiast to a machine learning engineer with nearly nine years under my belt, it's also true that very little has changed. Language models, despite now being large language models, don't differ fundamentally from the ones I was training in my spare time on my little GPU in 2016.

They train on more data – an entire Internet's worth at this point – and can now spit out full sentences and paragraphs with ever more convincing syntactic fidelity. They've been refined through targeted reinforcement learning, which allows them to better imitate conversation. They chat with us. They write our emails. They do our kids' homework assignments. And, for the unlucky few who are susceptible, they trigger delusions and psychosis.

But they make frequent mistakes – what we've termed "hallucinations" so as to hide the fact that these are very large, very impressive autocorrect programs that are statistically guaranteed to err – and still cannot be trusted to do much beyond simple tasks without human supervision. Most of the studies on AI productivity show marginal improvements or negative effects. GPT-5 is generously referred to as a flop, despite being hyped for years. Agents haven't taken off. And we've ostensibly run out of data, the very thing that allowed us to scale LLMs in the first place.

In the autumn of 2025, most people are not willing to pay for AI tools, and businesses have not seen the reformed labor market – i.e. the workforce reductions – promised by AI in the early days of 2023 and 2024. If you don't remember, and I wouldn't blame you, that's when the tech industry believed human workers would be replaced by AI en masse in a few short years. Imagine, the tech sector wondered, how much money we could make if we didn't have to pay those pesky employees, who are always telling us no and getting our espresso orders wrong? But don't worry: they'll provide universal basic income so you can pay your mortgage and keep your Netflix subscription. Pinky promise.

In the end, all of this would only cost us a few hundred billion dollars and the intrepid work of a small number of visionary leaders. It felt that simple, at least to those who bought into the premise. And yet, despite there being very few breakthroughs that justify the record-setting valuations of the tech companies at the heart of AI, like OpenAI and NVIDIA, many people are acting as though a second industrial revolution is already upon us.

It can, in short, leave one feeling a bit mad. And that's where this blog comes in.

There are plenty of good resources already out there for both optimistic and cynical takes on the future of AI: where it may or may not go, whether we are in a bubble, when it will burst, if we're any closer to AGI. But this blog is less about those concerns and more about what we can do with machine learning and AI now. In particular, what we can do with it that doesn't take an entire fusion reactor's worth of energy or require an enterprise-grade GPU to perform.

Whether you are a business leader, a fellow software developer, or an enthusiast interested in learning about the practical applications of machine learning, particularly the kinds that require no more than a gaming laptop to run, there should be something of interest here for you.

Foucault once wrote that we should see madness – what he called unreason – "...not as reason diseased, or as reason lost or alienated, but quite simply as reason dazzled." I can't think of a better way to describe the moment we are in.

Welcome to Madness and Machines.