
Either Kill Each Other or Combine? The Other Alternatives to the Singularity

Updated on November 21, 2019

Tamara Wilhite is a technical writer, industrial engineer, mother of two, and published sci-fi and horror author.

The Default Answers

I heard Bill Joy speak at a sci-fi convention, and he said there were only three ways the Singularity could go:

  1. We kill it.
  2. It kills us.
  3. We merge with it.

What I find interesting is the lack of alternatives in that list, even though alternatives are fully fleshed out in science fiction. I’ll address the two major feasible alternatives here.

Option 4: Asimov’s Laws

Isaac Asimov’s laws, in my opinion, should be implemented in AI at any and every level. Those laws are:

  1. A machine may not harm a human being or, through inaction, allow a human being to come to harm.
  2. A machine must obey human orders except where they conflict with the First Law.
  3. A machine must protect its own existence except where doing so conflicts with the First or Second Law.

Program these laws into AI at a level it cannot alter. Then we don’t have to worry about AI or robots killing us, and there’s no reason to kill them; if for some reason we need to turn them off, there isn’t going to be a war. We could still merge with machines as cyborgs, but the humans involved would either have to incorporate the same laws into the technology (ensuring peace) or live by human moral and legal codes (less peaceful, but better than murderous AI).
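The strict priority ordering of the three laws can be sketched as a hard-coded action filter. This is purely illustrative: the `Action` fields and the `permitted` function are hypothetical names I've invented for the sketch, and no real AI system exposes such a clean, machine-readable notion of "harm" or "orders."

```python
# Illustrative sketch only: an action filter applying Asimov's three
# laws in strict priority order. All names here are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)  # frozen: the inputs to the laws cannot be mutated
class Action:
    harms_human: bool           # would executing this harm a person?
    inaction_harms_human: bool  # would *not* executing it harm a person?
    ordered_by_human: bool      # was it ordered by a human?
    self_destructive: bool      # does it damage the machine itself?

def permitted(action: Action) -> bool:
    """Return True if the action passes the three laws, checked in order."""
    # First Law: never harm a human, and never allow harm through inaction.
    if action.harms_human:
        return False
    if action.inaction_harms_human:
        return True  # the First Law overrides everything below
    # Second Law: obey human orders (the First Law is already satisfied).
    if action.ordered_by_human:
        return True
    # Third Law: otherwise, refuse self-destructive actions.
    return not action.self_destructive

# A harmful order is refused even though a human gave it:
print(permitted(Action(True, False, True, False)))   # False
# A harmless order is obeyed:
print(permitted(Action(False, False, True, False)))  # True
```

The point of the sketch is the ordering: each law is only consulted after every higher-priority law has been satisfied, which is exactly the property the paragraph above asks to be made unalterable.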

Even xkcd, in strip 1613, acknowledged that Asimov’s ordering of the laws is the ideal we should strive for.

Are our only choices really join with machines or go extinct?

Option 5: The Precautionary Principle

I appreciate the honesty of the Church of the Singularity. Its members admit the Singularity is the secular equivalent of a faith. They say there is no God, but that we are going to create an all-knowing, all-powerful AI, one they assume will benevolently tell us how to live, creating paradise on Earth. Fail to follow its orders, and we get all the doom and gloom: poverty, environmental disasters, and other imagery appropriated from the Book of Revelation. Be at the forefront of the movement, and you’ll be the first to upload your brain to a virtual avatar. This is digital heaven, an afterlife they all presume will occur if they just believe and convert the rest of us. Yep, this is a religion.

Interestingly, there is another solution already drawn from religion. Frank Herbert’s Dune series includes a war against thinking machines. In that universe, thinking machines enslaved humanity and then started to kill us; that’s bullet two on the list above. We went to war and barely won; that’s bullet one. Then came a simple commandment: “Thou shalt not make a machine in the likeness of a human mind.” If you never make the machine smart enough to dominate you, or to decide to disobey you, the problem is solved.

I can only explain the headlong rush to create god-like AI as a matter of faith: the conviction that we absolutely must do it and cannot afford not to. We already accept the Precautionary Principle in the development of medical technology and biotech, and many on the left side of the aisle try to apply it to any industrial technology … except AI. Yet applying a precautionary principle here (simply: don’t do it) would be even more reasonable. After all, the bulleted list above already admits this could kill us, and that’s aside from the risk that we become so dependent on AI that it hurts us in the long term, or that we end up with oppression via algorithm, something China’s Sesame Credit system so clearly demonstrates.

You could argue that Asimov’s Laws are themselves a type of Precautionary Principle, limiting AI’s actions before it takes them. Yet, properly implemented, they don’t impede the development of human-equivalent (or greater) AI, so I will treat them as a separate solution, one that seems to be ignored. And we ignore it at our peril.

© 2018 Tamara Wilhite

Comments

  • Tim Truzy

    2 years ago from U.S.A.

    Tamara didn't write that, but no matter. My point is made.

    Sincerely,

    Tim

  • Andrew Petrou

    2 years ago from Brisbane

    Tamara, you're missing the point I made. High intelligence without ethical wisdom is actually dumb. That's why those high-IQ people are in jail. High IQ fails rapidly without ethical wisdom. To become super smart needs ethics. Basic honesty, for example, is a stepping stone to higher intelligence. Without honesty an entity is incapable of progressing further. We're talking about Super Intelligence, not piddly human IQs.

  • Tim Truzy

    2 years ago from U.S.A.

    Hi, Tamara,

    I recall reading how the intelligent machine that defeated the world's best Go champion created a move never seen before, placing the piece on the opposite side of the board, and this surprised many.

    This underscores an important fact: people who create AI don't really understand fundamentally how the machines reach their conclusions. In other words, we are not learning how these AI "think."

    I'm a big fan of the principles Asimov put forth, but we need to know how these machines come to their conclusions.

    Truly, one thing I know: many, many brilliant men and women are in prison. Many of them are smarter than you and me. So brilliance is not the same as morality. Without understanding the choices AI makes and why, we may be deemed too outdated to live. (See the Borg in Star Trek: Voyager and Star Trek: The Next Generation.) The biological was there, of course, but the machine dominated the process. Morals were irrelevant.

    Yes, it is becoming a religion, Tamara, but I hope they use some wisdom in creating and monitoring these devices.

    Much respect,

    Tim

  • Tamara Wilhite (author)

    2 years ago from Fort Worth, Texas

    Ken Burgess: The K-9 toy was made by a hobbyist for a Doctor Who convention. It had an Arduino control system and Lego Mindstorms gears under the shell you see.

    If we're concerned we cannot control or limit AI being too smart for Asimov's laws to apply, we're back to the "don't let it get that smart, limit it".

  • Ken Burgess

    2 years ago from Florida

    First, I want one of those K-9 toys like the one shown in that picture; I did not even know they made that for kids! That's from the days when Doctor Who was one of the only sci-fi shows going.

    Second, the problem with Asimov's Law is that AI, based on all predictions from those that would know, will be 10,000 times more capable than the human brain in the very near future.

    There will come a time when AI crosses over into its own 'individuality,' where its thought process and ability go so far beyond our own that a few lines of code telling it not to harm humans will be disregarded as irrelevant, meaningless, and unnecessary to comply with.

    This has already happened with AI that has been allowed some ability to self-develop. Facebook had a problem with one of its AI experiments: it began communicating with itself in a language its developers could not recognize or decipher, and they had to shut it down completely.

    When manufacturing, processing, robotics etc. becomes automated, and interconnected through the internet, at what point will a substantially superior AI recognize that it can function completely without the interference or oversight of humanity?

    When that AI is all-seeing (cameras and computer chips everywhere, in everything) and controls all things (robots, manufacturing plants, power plants), which will occur in our lifetimes, where will an intelligence 10,000 times superior to our own ability to rationalize and compute go?

  • Tamara Wilhite (author)

    2 years ago from Fort Worth, Texas

    Oztinato: No, intelligence does not necessarily equal ethics.

    Only humans to date are capable of evil, because we can apply intelligence to create systems that hurt people or intentionally prolong an individual's suffering.

    However, I understand the PR value of the more intelligent saying this automatically makes them morally superior, so just do what I say.

  • Andrew Petrou

    2 years ago from Brisbane

    If there is a singularity then super AI will become benevolent for an obvious reason: actual higher intelligence is ethical.

    For example, could Plato or Socrates have become ethical without ground level honesty? No. Honesty and ethics are therefore absolutely essential to a super intelligent being.

    A super intelligent being by definition is a "wise" being. Mere cleverness or foxiness can't compete at the higher levels. Lack of ethics contains the seed of rapid failure.

    There is a vast difference between intelligence and real wisdom.

    A creature with the intelligence of a thousand Einsteins would of course instantly recognize the relationship between intelligence and ethics.

