Rumman Chowdhury wants to remind you that “AI isn't doing anything.” We do things. AI is not to blame when jobs are cut or medical coverage is denied. People are.

Eight years ago, Rumman coined the term “moral outsourcing” to describe the habit of blaming tech for decisions that people make. Why do the semantics matter? Because, as Rumman puts it:

In world one, where “AI did X,” it's very scary. It's like, “Oh my gosh, this thing that is bigger and smarter than me has descended, and now it's gonna wipe out every job.”

[But if we center on people, then we have agency and accountability and we can say] “no, you built a thing that was broken and flawed.”

Rumman is the founder and CEO of Humane Intelligence, which is building evaluation infrastructure to make generative AI systems safe, trustworthy, and compliant. She also served as the U.S. Science Envoy for Artificial Intelligence under the Biden administration, led AI ethics teams at Twitter and Accenture, and is a Responsible AI Fellow at Harvard.

In this conversation:

  • Why "moral outsourcing" is the sneakiest trick in tech — and how execs use AI as a shield for decisions humans made

  • How to avoid — or at least mitigate — building biased AI

  • Red teaming AI and creating bias bounties

  • The "grandma hack" and other ways regular people accidentally jailbreak AI models

  • How AI companies are quietly rewriting their terms of service to dodge liability when things go wrong

  • Why the benchmarks you see when a new model drops are "basically spelling tests"

  • AI psychosis, parasocial chatbots, and the cold emails Rumman gets once a month from people who think AI is alive

  • What builders can do right now to take back agency — and why Rumman is more excited about agentic AI than anything that came before

Enjoy this episode on: YouTube | Spotify | Apple | etc…

Chapters:

  • 00:00 "The thing I believe in the most is human agency"

  • 02:14 Why builders have more agency than they realize

  • 04:00 What is a bias bounty?

  • 06:41 What 2,000 hackers at DEF CON found

  • 09:40 The grandma hack

  • 11:30 Why guardrails fall apart

  • 14:54 Anthropic's new bug-finding model and the cat-and-mouse game

  • 19:10 Why most evals are "basically spelling tests"

  • 21:30 How to actually evaluate an AI agent

  • 26:20 "Moral outsourcing" and the AI layoff lie

  • 28:45 Inside Rumman's tenure as U.S. AI Science Envoy

  • 32:10 The legal loophole AI companies use to dodge liability

  • 35:35 AI psychosis and the cold emails Rumman gets

  • 38:40 Why Google's AI overview is quietly dangerous

  • 44:35 The problem with "AI literacy"

  • 48:05 Can we trust anything we see anymore?

  • 50:15 What builders can do right now to take back agency


If you don’t already get Future Around & Find Out emails, go ahead and subscribe:

Future on…

~ Dan
