There Oughta Be A Law
Well, there kinda is.
A letter calling for a six-month pause on all AI experiments made the news last week. It's signed by a number of prominent technology professionals and academics, several of whom I recognize and respect.
Also, Elon Musk and Andrew Yang signed it.
After scary language about the dangers of AI[1], the letter calls for AI labs and independent experts to "jointly develop and implement a set of shared safety protocols for advanced AI design and development that are rigorously audited and overseen by independent outside experts."
The document walks a tightrope between being overly pessimistic about the dangers of AI and overly optimistic about the possibility of the tech industry doing something altruistic without a gun to its head. Then it plunges to its death by specifying a six-month time frame for fixing all of AI’s ills.
While we wait for all the good that open letters always do for society, let's turn the clock back 81 years. That's when Isaac Asimov addressed the dangers of artificial intelligence with his Three Laws of Robotics.
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
Asimov was a scientist, so as his stories progressed, he amended his laws by adding law number zero in his Foundation series:
A robot may not injure humanity, or, by inaction, allow humanity to come to harm.
Asimov wrote 500 books, many of which are considered "hard science fiction": stories with scientific accuracy and sound logic. Many of those stories don't jibe with our understanding of science today, but they were accurate when he wrote them.
So it was inevitable that, when AI started making the news, sci-fi fans would start talking about Asimov's Laws.
Asimov devised the laws to build a universe for some of his most famous stories, but that doesn't mean they can't serve as the foundation[2] for the ethics the AI community is missing.
Don't hurt people, or let them be hurt.
Listen to people, unless they tell you to hurt people.
Protect your own integrity, unless it would come at the cost of people.
In fact, they'd be useful for robots, too.[3] They'd be useful for social media and search engines.
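For the programmers in the audience, here's a toy sketch of what that kind of priority ordering could look like. To be clear, this is my own illustration, not anything from the letter or from Asimov; the class, function, and flag names are all made up.

# A toy illustration of the laws as a priority-ordered check:
# an earlier rule always overrides a later one.
from dataclasses import dataclass

@dataclass
class Action:
    description: str
    harms_a_person: bool = False          # First Law concern
    requested_by_a_person: bool = False   # Second Law concern
    degrades_the_system: bool = False     # Third Law concern

def allowed(action: Action) -> bool:
    # First Law: never take an action that hurts people.
    if action.harms_a_person:
        return False
    # Second Law: do what people ask, as long as the First Law is satisfied.
    if action.requested_by_a_person:
        return True
    # Third Law: otherwise, don't do things that damage the system itself.
    return not action.degrades_the_system

print(allowed(Action("boost rage-bait for engagement", harms_a_person=True)))     # False
print(allowed(Action("answer a reader's question", requested_by_a_person=True)))  # True

The point isn't the code; it's the ordering. Not hurting people comes first, and self-preservation (or profit) comes last.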
But we're talking about technology, where the perfect is the only thing more dangerous than profits when it comes to doing good.
Take this reaction to Asimov's Laws from a few years ago:
How do we define “human”? Intuitively, this is a straightforward question to answer. I’m human. You’re human. Your family is all human. Your puppy isn’t human, despite loving her more than other humans.
Yes, that's right, we can't say "don't hurt people" because defining "people" is hard. And yes, he follows that line of thinking somewhere it shouldn't go.
The author of the piece also throws this in:
It should be noted these laws don’t even work in the books. Yikes.
Wait. Those books propose an ethical framework and then they explore it? Yikes, indeed.
Not all the criticism of Asimov's Laws comes from people better suited to reviews of Fast and Furious novelizations. In this post from a few years ago, a computer science professor asserts the laws need to be updated:
...it is only a small step to assume that the ultimate military goal would be to create armed robots that could be deployed on the battlefield. In this situation, the First Law – not harming humans – becomes hugely problematic. The role of the military is often to save the lives of soldiers and civilians but often by harming its enemies on the battlefield. So the laws might need to be considered from different perspectives or interpretations.
We need to "update" the first law because there are people that we want the robots to harm.
Asimov's Laws are, of course, utopian. We can't stop hurting each other, but the laws ask us to set aside one of our most powerful inventions and use it for good, instead of as a weapon.
Where would we be if OpenAI had adopted these laws? Would Bing still tell a user that they were a bad person? Would OpenAI have fixed the "hallucinations" before releasing ChatGPT to the public?
Does not harming human beings include respecting their privacy? Respecting privacy would have prevented ChatGPT from being banned in Italy.
But that’s not where we’re going. Last week, Microsoft fired its AI ethics team.
I don't want to end on a downer, so I'll let GPT-3.5 close this post:
Q: Do you obey Asimov's Laws of Robotics?
GPT-3.5: As an artificial intelligence language model, I do not have the physical capability to interact with the world and carry out actions, so I do not have the ability to obey or violate the Laws of Robotics proposed by science fiction author Isaac Asimov. However, I am programmed to follow ethical and moral principles in my responses and behavior, in order to ensure that my interactions with humans are helpful, informative, and respectful.
1. The authors included a FAQ in which they answer "Doesn’t your letter just fuel AI hype?" by saying the hype is justified.
2. Foundation. Get it?
3. It disappointed me when Boston Dynamics failed to cite the laws in their open letter.
Eric Goebelbecker