Either Kill Each Other or Combine? The Other Alternatives to the Singularity
The Default Answers
I heard Bill Joy speak at a sci-fi convention, and he said there were only three ways the Singularity could go:
- We kill them.
- They kill us.
- We merge with them.
What I find interesting is the lack of alternatives in that list, even though alternatives are fully fleshed out in science fiction. I’ll address the two most feasible ones here.
Option 4: Asimov’s Laws
Isaac Asimov’s laws, in my opinion, should be implemented in AI at every level. Paraphrased, those laws are:
- A machine may not harm a human being or, through inaction, allow a human being to come to harm.
- A machine must obey the orders given to it by human beings, except where such orders would conflict with the first law.
- A machine must protect its own existence, as long as doing so does not conflict with the first or second law.
Program these laws into AI at a level it cannot alter. Now we don’t have to worry about AI or robots killing us, and there’s no reason to kill them. If for some reason we need to turn them off, there isn’t going to be a war. We could still merge with them as cyborgs, but the humans would either have to incorporate the same laws into the tech (ensuring peace) or live by human moral and legal codes (less peaceful, but better than murderous AI).
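The "program them in at a level the AI cannot alter" idea can be sketched as an ordered, hard-coded filter that every proposed action must pass before execution. This is only an illustration under simplifying assumptions: the `Action` fields and the `permitted` function are hypothetical names, real-world harm is obviously not a boolean flag, and the First Law's inaction clause is omitted for brevity.

```python
# Hypothetical sketch: the three laws as an immutable, strictly ordered filter.
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen=True: the action record itself cannot be mutated
class Action:
    description: str
    harms_human: bool = False     # would this action injure a person?
    is_human_order: bool = False  # was this action ordered by a human?
    risks_self: bool = False      # does it endanger the machine itself?

def permitted(action: Action) -> bool:
    """Check a proposed action against the three laws, in strict priority order."""
    # First Law: never harm a human. This check runs first and always wins,
    # so even a direct human order cannot override it.
    if action.harms_human:
        return False
    # Second Law: obey human orders (the First Law is already satisfied here).
    if action.is_human_order:
        return True
    # Third Law: absent orders, the machine must protect its own existence.
    return not action.risks_self
```

The point of the priority ordering is that the filter sits below whatever the AI learns or decides: a human order to harm someone is rejected at the first check, and self-preservation never outranks obedience or human safety.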
Even XKCD admitted that Asimov’s Laws were the ideal we should strive for in comic strip 1613.
Option 5: The Precautionary Principle
I appreciate the honesty of the Church of the Singularity. They admit the Singularity is the secular equivalent of a faith. There is no God, they say, but we’re going to create an all-knowing, all-powerful AI that they assume will benevolently tell us how to live, creating paradise on Earth. Fail to follow its orders, and we get the doom and gloom: poverty, environmental disasters, and other things appropriated from the Book of Revelation. Be at the forefront of the movement, and you’ll be the first to upload your brain to a virtual avatar … a digital heaven, an afterlife they all presume will occur if they just believe and convert us all. Yes, this is a religion.
Interestingly, another solution has already been drawn from religion. Frank Herbert’s Dune series includes a war on thinking machines. In that universe, thinking machines enslaved us and then began to kill us. That’s bullet two on the list. We went to war and barely won. That’s bullet one on the list. Then came a simple commandment: “Thou shalt not make a machine in the likeness of a human mind.” If you don’t make the machine smart enough to dominate you or to decide to disobey you, the problem is solved.
I can only explain the headlong rush to create god-like AI as a matter of faith: that we absolutely must do it, that we cannot afford not to. We accept the precautionary principle in the development of medical technology and biotech. Many on the left side of the aisle try to apply the precautionary principle to any industrial technology … except AI. Yet applying a precautionary principle – don’t do it – would be even more reasonable here. After all, they’ve already admitted, per the bulleted list, that this could kill us. And that’s aside from the risk that we become so dependent on it that it hurts us long term, or that we end up with oppression via algorithm, something China’s Sesame Credit system so clearly demonstrates.
You could argue that Asimov’s Laws are a type of precautionary principle, limiting AI’s actions before it takes them. Yet, properly implemented, they don’t impede the development of human-equivalent (or greater) AI, so I will consider them a separate solution, one that seems to be ignored. And we do so at our peril.
© 2018 Tamara Wilhite