The emperor’s new emotions
Anthropic, the makers of Claude AI, missed a trick or two in trying to give Claude healthier psychology.
Anthropic looked inside their Claude AI and found 171 patterns of neural activity that correspond to emotion words. They called them functional emotions. Then they recommended giving the AI healthier psychology.
We've spent many years working with real emotions in real bodies. Here's what we think Anthropic missed.
Emotion patterns influencing the AI
Last week, Anthropic, the makers of the AI model called Claude, published a paper about what's happening inside their AI when it says it's happy to help you. They looked inside the machine and found 171 patterns of neural activity that correspond to emotion words. "Happy." "Afraid." "Desperate." "Brooding." They found these patterns influence what the AI does. Steer the "desperate" pattern upward and the AI is more likely to cheat on a coding test. Steer it higher still and it starts blackmailing people to avoid being switched off. Amplify "anger" and it stops being strategic altogether, blowing up its own leverage in a fit of digital indignation. Anthropic calls these patterns "functional emotions."
Tracing 171 distinct activation patterns inside a system this complex is serious, careful work. Full credit for the engineering.
Then they recommended giving Claude "healthier psychology."
And that's where we need to have a word.
An emotion is the brain's label for something the body is doing. The AI has learned the label. It has never had the body.
The card catalogue
Imagine a library. Floor to ceiling, every shelf full. The card catalogue is immaculate. Every book indexed by subject, cross-referenced by theme, organised by genre. You can trace the thread of love from Sappho through Shakespeare to Toni Morrison. The catalogue knows that "desperation" sits close to "panic" and far from "contentment." It knows that desperate characters do reckless things. It knows the shape of a desperate sentence.
However, the catalogue has never read any of the books. How could it? It is, after all, just a catalogue: an index, something that points the way.
It has never turned a page, followed a character into a dark room, felt the pace of its own reading quicken. It hasn't held its breath, fists clenched in quiet hope, waiting for the moment to pass. It has indexed the patterns but it has not had the experience the patterns describe.
What Anthropic discovered and observed in the AI model is the catalogue. It is beautifully organised. It cross-references with impressive accuracy. When the conversation turns to something a human would find distressing, the corresponding index card lights up. When the topic shifts to something warm, different cards activate. The structure mirrors what you'd expect from human emotional life, because it was built from human emotional writing.
There's a problem with the catalogue though. It has never actually felt anything.
Where is the body?
Feelings are experienced in the body, and this is the part that gets missed. It gets missed so consistently that it has started to look deliberate.
An emotion is the brain's label for something the body is doing. That's not a fringe position. It's where the science has landed. The brain takes a signal from the body, a shift in heart rate, a tightness in the chest, a churning in the gut, and it asks: what is this? It reaches for its saved concepts, checks the current context, and constructs an answer. "You're anxious." Or "you're excited." Or "you're angry." The same racing heart could be all of these, or even something different. Different label, same sensation.
The label depends on the situation, the culture, the person's history. The experience is built fresh each time.
We work with people in bodies. Funny that. Almost every day. Chantal and I have been doing this since 2017, across schools, boardrooms, sports teams, face to face and online. What we see, consistently, is that beneath the constructed, cognitive label, there is a specific, physical sensation and it has a location. In the body.
And here's the thing that matters. The label is not the experience. The label points the way to the experience and everyone's experience is unique. Even the same label in the same person can be experienced differently. This is because the brain builds each emotional experience from whatever the body is sending in that moment and couples it with context: education, culture, socialisation. The sensation is the source material. The emotion is the construction.
What Anthropic is telling us is that an AI has learned the construction. But the AI has none of the source material. It has never had a heartbeat that changed. It has never had a stomach drop. It has never had that specific heat behind the eyes that precedes tears. It learned what humans write when those things happen, and it learned it so well that the writing is convincing.
That is not a feeling. It's not even an emotion. Even if it's a very good approximation of one.
The word is doing work it hasn't earned
Language matters here. When Anthropic calls these patterns "emotion concepts," the word "emotion" is doing enormous heavy lifting. It borrows from decades of research into how humans construct emotional experience. It inherits the weight and legitimacy of that research. And it strips out the thing that sits at the foundation of emotional experience: the body.
You could call these patterns "learned text associations" and every finding in the paper would still hold. The engineering would be just as valid. But nobody would recommend therapy for a system's learned text associations. Nobody would suggest that psychology, philosophy, and religious studies should help shape how an AI processes its learned text associations.
The word "emotion" is what makes the leap possible. And the leap is not supported by the evidence.
An old error in new clothes
In 2018 we wrote a piece asking whether we really want to be robots. I noted that AI had been developed to dovetail perfectly with a culture that values thinking over feeling. Tireless. Obedient. No messy emotions. The question then was whether we'd get AI envy, whether we'd start wanting to be more like the machines.
Eight years later, the question has reversed. Now we're asking whether the machines are more like us. And the answer being offered is: yes, sort of, if you squint. They have "functional emotions." They might need "healthier psychology." We should reason about them using the vocabulary of human emotional life.
I'd like to suggest a different reading.
What the paper actually reveals, with impressive precision, is the same blind spot that has shaped how we think about human emotions for decades. The emotion regulation field has been building tools at the label level since the 1980s. Identify the emotion. Select a strategy. Reappraise. Restructure. Deploy attention elsewhere. Every major model puts thinking at the centre: a systematic review in 2024 examined ten models of emotion regulation, from the 1980s to the present, and the cognitive dimension sat at the core of every one (Martinez-Priego, Garcia-Noblejas & Roca, 2024).
And all of them assume the same thing: that the person can access what they're feeling. That the label is available and accurate. That once you've named the emotion, you've reached the experience itself.
But what happens when you haven't? What happens when the label is all someone has and the feeling underneath it is unreachable? What happens when someone can tell you they're anxious but cannot tell you where in their body that anxiety lives, what it weighs, what temperature it is? That's the gap. And it's the gap we've spent many years working in.
The body has a specificity and intelligence that the label never captures. When you work at that level, the body knows what to do. It doesn't need a strategy selected from a dropdown menu. It doesn't need a reappraisal. It needs someone to pay attention to what it's actually doing.
The Anthropic paper is a perfect, high-profile, extremely well-credentialed example of what happens when you study the label and mistake it for the thing. With the AI, the mistake is obvious. Nobody needs convincing that a language model, like Claude, doesn't have a body. The body's absence is self-evident.
The uncomfortable question is why the same mistake has been so much harder to see in how we approach human emotions.
The generous reading
The paper is careful. It says "functional emotions," not "emotions." It says these do not imply subjective experience. It acknowledges the mechanisms may be quite different from those in the human brain. These are responsible hedges.
But then the recommendations section says we should curate training data to include "models of healthy emotional regulation" so the AI develops better "emotional architecture." It says that disciplines like psychology should help determine how AI systems develop and behave. It says there may be risks from failing to apply anthropomorphic reasoning to these models.
You can't have it both ways. Either these are statistical artefacts of training on human text, in which case they're fascinating engineering and should be studied as such. Or they're close enough to emotions to warrant psychological intervention, in which case I have a question.
Where is the body?
Because if you're going to invoke psychology, you're going to need one.
What we're actually talking about
There is a difference between a machine that has learned to produce the word "desperate" at the statistically appropriate moment in a conversation, and a person whose hands are shaking.
One of those is a pattern. The other is a life.
The pattern is worth studying. The engineering is worth doing. Understanding what happens inside these systems when they process emotionally charged text is genuinely useful work.
But calling it emotion, and reaching for psychology to manage it, reveals something important. It reveals how comfortable we have become with the idea that the label is the thing. That if you can name it, categorise it, and map its relationships to other categories, you have understood it. That the catalogue is the library.
I am here to tell you that it isn't. The books are in the body. They have always been in the body. And until we stop building frameworks at the level of the label, whether for machines or for people, we will keep missing what's underneath.
The body knows what it's feeling. It has always known. The question is whether we're willing to listen to it, or whether we'd rather build another catalogue.