
Are Large Language Models Sentient?
What we actually mean when we ask that question
Google just suspended the engineer Blake Lemoine for publishing conversations with the company’s chatbot development system, LaMDA.
According to Lemoine, these conversations are evidence that the system is sentient. Google disagreed, saying there is plenty of evidence against that claim.
This all strikes me as rather odd, mainly because the question of sentience is an unfalsifiable one. All the evidence in the world can’t prove the presence or absence of it—making it a useless technical question to pose in the first place.
It’s fun for a philosophical faff at the ol’ Parisian salon, sure, but it isn’t worthy of any serious energy—especially not institutional energy.

Many of you might think it is in fact the most important question to ask, and I understand where you’re coming from. The notion of sentience seems crucial for thinking about ethics, fairness, and rights.
Those are important conversations to have. But thinking in terms of sentience isn’t the right way to go about it.
I’ll tell you why—but first, we have to define terms.
Sentience and Mary the super-scientist
What is sentience, anyway?
For the purposes of this discussion, we’ll say sentience is the ability to “feel feelings”—that is, the ability to have subjective experiences, or what philosophers call “qualia”.
To dig into this idea a little deeper, I need to introduce you to Mary.

Mary is a genius—the smartest human being alive or dead, in fact. She decided to study neuroscience at a young age, and this has…