So why is one of the world’s leading AI researchers teaching AI to understand pain and suffering? Well, Daniel Hulme says that if we build an empathetic AI, perhaps even a conscious one, then we’ll be safer. His hypothesis is that a “zombie” AI may eat our brains, but an empathetic AI would stay aligned with us. So he’s building this “antivirus” (with AI, of course), and he’s very aware that this sounds crazy or “like something from Marvel.”
That's just some of what broke my brain in this conversation with one of the world's top AI researchers and founders. And Daniel has serious credibility, so I'm not dismissing any of it, even if I’m not quite ready to consider the possibility that a machine can be conscious. Then again, what is consciousness? Daniel believes he’s got his head around what it is and he uses a “color wheel” analogy to explain it. It’s fascinating.
Daniel has just founded Conscium, which verifies that AI agents are safe and can do what they promise, and studies AI consciousness and pain — while he wants AIs to understand pain, he doesn’t want them to feel pain.
Daniel has a lot of day jobs… in addition to founding and running Conscium as CEO, he’s also the Chief AI Officer at WPP, the giant ad conglomerate, which bought his AI firm Satalia a few years ago. And he’s been in the space for two decades. We’ll talk about why, for his PhD, he studied bumblebee brains (yes, really — and it's deeply relevant).
We future around and find out a lot in this one!
Here’s more of what we get into:
His unified theory of consciousness — his "color wheel" model — and why he thinks consciousness only exists in motion
Why he believes large language models are ultimately a dead end — and what neuromorphic computing could replace them with
What bumblebee brains can teach us about building AI that's up to a thousand times more energy efficient
Why he calls today's AI agents "intoxicated graduates" — and says companies should spend 80% of their time testing them
The concept of "mind crime" — the idea that we could build conscious AI and accidentally put it through horrendous suffering without realizing it
His vision of a "protopia" — where AI makes food, healthcare, education, and energy so abundant that people are freed from economic constraints to pursue what actually matters
Chapters:
01:39 "Would a conscious superintelligence be safer than a zombie one?"
03:37 The paperclip problem is not hypothetical
05:06 Conscium's mission — AI safety for humans and for AIs themselves
08:50 "I think I've got my head around consciousness"
11:57 The color wheel model — why consciousness only exists in motion
13:58 Teaching AI morals through evolution, not guardrails
17:23 "Hey Claude, are you conscious?" — how do you test for that?
20:11 What bumblebee brains can teach us about building better AI
23:18 "I think we are completely scaling wrong"
28:47 Why Daniel calls AI agents "intoxicated graduates"
31:52 Companies should spend 80% of their time testing agents
37:23 "What would you do if you were economically free?"
Links
Daniel on LinkedIn
If you don’t already get Future Around & Find Out emails, go ahead:
Future on…
~ Dan
