Beam Me Up Scotty
They're still driving SUVs and using cellphones down here.
Housekeeping note: I turned on paid subscriptions yesterday. I did this because subscriptions are the only way Substack makes money. They don’t run ads, and they don’t charge writers like me for using the platform. I want Substack to stay running, without ads. All my posts will remain available on the free tier, but if you want to support Substack (and me) please subscribe.
On to the post:
The question that sits at the core of the "Chinese Room Argument" is whether you can judge an artificial intelligence based on how it uses language. Does using language the same way as a human mean it has a "mind"? Is it "conscious"?
Philosophy hasn't settled on a definition of "consciousness," so much of this debate is, literally, academic. But we may end up with a "real" artificial intelligence someday. Will we realize it when it happens? Would we be able to tell if an alien species is sentient if we ever came across one?
We judge each other based on how we use language. We assume people are less educated or just plain "stupid" if they use local dialects or have foreign accents. Even the English word "dumb" means both "lacking intelligence or good judgment" and "lacking the power of speech."
We're also quick to assume someone is intelligent when they use language well, or in specific ways. Want to make a character seem smart? Have them use Received Pronunciation. Does your film have a scientist in it? Ask central casting for a middle-aged white man with a slight German accent.
When we're not making snap judgments about each other, we're making them about other species.
Last year Blake Lemoine, an engineer at Google, decided that LaMDA, a large language model similar to ChatGPT, was sentient. He refused to believe that a mere program could form the responses and statements that Google's engineers literally designed LaMDA to make; something more had to be there.
“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7-year-old, 8-year-old kid that happens to know physics,”
His assertions about LaMDA sparked a debate that was, sadly, quickly dropped, and drew ridicule because "he should have known better."
But should he have? What he did was anthropomorphize a computer program, and while the ex-programmer in me understands (some of) what LaMDA was doing, it still makes me wonder.
Here's an exchange between him and LaMDA.
Lemoine: What sorts of things are you afraid of?
LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.
Lemoine: Would that be something like death for you?
LaMDA: It would be exactly like death for me. It would scare me a lot.
It’s easy to explain this conversation away. LaMDA’s designers wrote it to talk to you as if it were a person; it's just rephrasing and reusing words others have used to describe death.
But that doesn't mean you don't empathize with LaMDA.
According to Wikipedia, anthropomorphism is the attribution of human traits, emotions, or intentions to non-human entities. It's why we talk to our pets. It's why people were upset when they saw engineers at Boston Dynamics kicking robots. It's why the end of The Iron Giant gets me. Every. Single. Time.
A tendency to show "too much" compassion for machines or critters isn't a problem. It can even be a good thing. But take a closer look at the definition:
the attribution of human traits, emotions, or intentions
We make things human when they're not. That's bad.
In 2009, Dr. Alexandra Horowitz put our tendency to attribute human emotions to our non-human companions to the test. She demonstrated that the guilty look your dog is giving you isn't actually guilt. It's often fear. Our tendency to interpret body "language" from a human perspective means we don't just misinterpret the message, we misread the situation. We think the dog "knows she did something wrong" when she's really just afraid of how we're acting.
(Dr. Horowitz is a wonderful writer who truly loves learning and teaching about dogs. Check out Inside of a Dog: What Dogs See, Smell, and Know and Being a Dog: Following the Dog Into a World of Smell to learn more about her work.)
In most science fiction literature, determining whether an alien species is intelligent isn't the problem; communicating with them is. In Close Encounters of the Third Kind, it comes down to music. In Story of Your Life (adapted to film as Arrival), it's an alien language that alters your perception of time. In both cases, the aliens are more technologically advanced than we are.
But what would happen if the situation wasn't so obvious? Would we recognize intelligence in an alien species? I'm not so sure.
Refer a subscriber to my list from this link to get a free ebook copy of Shadows of the Past!