The Implications and Dilemmas Society Faces as Technology Approaches the Singularity

As with so many things in human history, technological growth is accelerating. This growth curve will one day reach a nearly vertical phase in which technological change occurs so rapidly that it exceeds human comprehension. This event is known as The Singularity. As a species, we need to be prepared for the inevitable: we must ponder the ethical, political, economic, and environmental implications of such an event to ensure our survival. The time to prepare is now, before The Singularity happens. The future is uncertain, and several scenarios exist to describe the possibilities. Presented herein is a series of possibilities, explanations, and responsibilities that must be dealt with before the inescapable event known as The Singularity comes knocking at our door.

Introduction

From the incipience of humanity, there has always been both a desire and a necessity for technology. The hardships of mankind have beckoned us to tame them with technological advances. With intelligence, ingenuity, and inventiveness on our side, humans have devised solutions to the everyday issues at hand. For every invention we develop, there soon exists another that replaces or improves upon it. New inventions also have an inherent ability to spawn even more inventions. The growth of technology, like that of population, follows an exponential curve. This ever-accelerating growth is known to many as The Curve [4 (47)] or The Law of Accelerating Returns [6].

One sector that has experienced some of the most explosive growth we have ever seen is computers and robotics. Moore’s law states, in essence, that the data density, and with it the computing power, of a chip doubles roughly every 18 months [8]. This law can be applied to many things beyond computer chips. Such growth poses many perplexing possibilities for the positions people may face in the future, for we cannot predict the future, except perhaps a few minutes at a time. But what does the future hold for us? Can technology reach a point where it begins to learn and improve upon itself without human intervention? What happens when our technology is on the verge of sentience and The Singularity is waiting at our doorstep? What ethical and political dilemmas will we humans have to conquer to ensure our existence, or co-existence, in the future?
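To make the 18-month figure concrete, here is a minimal sketch (my own illustration, not from the sources cited) of how quickly such doubling compounds:

```python
# A minimal sketch of the compounding implied by Moore's law, assuming the
# 18-month doubling period cited above (the function name is my own).

def moores_law_factor(years, doubling_months=18):
    """Multiplicative gain in computing power after `years` of doubling."""
    return 2 ** (years * 12 / doubling_months)

# Ten years of doubling every 18 months is roughly a 100-fold gain,
# and thirty years is about a million-fold (2**20).
print(round(moores_law_factor(10)))   # → 102
print(moores_law_factor(30))          # → 1048576.0
```

It is this compounding that makes the growth curve feel "nearly vertical": each decade multiplies the previous decade's gains by another factor of a hundred.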


The Singularity

“Within a few decades, machine intelligence will surpass human intelligence, leading to The Singularity -- technological change so rapid and profound it represents a rupture in the fabric of human history.” [6]. As strange as the above quote sounds, it is a very real possibility that has occupied the minds of many for at least a century. The intelligence of humans will one day bow to the power of the machines.

The Singularity represents the nearly vertical portion of the exponential growth curve. Once a machine has the capability and capacity for true learning, technological change will become so rapid that we simply will not be able to comprehend it. At this point we will be unable to tell a human from a machine. The machine will attain Singularity status and will essentially be the most intelligent thing on the planet. What happens then is up for debate.

It is well known that science fiction is often a precursor to science fact. The essence of The Singularity can be found in a wide variety of media, from books and movies to popular video games. Well-known films such as “Colossus: The Forbin Project” (1970) and “The Terminator” (1984) present apocalyptic visions of machines attaining sentience. Scenarios like these are essentially the popular conception of what The Singularity is. However, The Singularity does not have to be a negative thing. With the proper set of rules, such as those presented in Isaac Asimov’s “I, Robot,” machine sentience could be a very positive and powerful thing. It would be the revolution of revolutions.

The Three Laws of Robotics

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey orders given to it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law. [1]

-Isaac Asimov
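The strict precedence among these laws can be sketched as a priority-ordered check. The following is a hypothetical illustration only; the boolean inputs (`would_harm_human`, `ordered_by_human`, `would_endanger_self`) are names I have assumed, not anything Asimov or the sources specify:

```python
# Hypothetical sketch: Asimov's Three Laws as a priority-ordered filter.
# Lower-numbered laws always override higher-numbered ones.

def permitted(would_harm_human, ordered_by_human, would_endanger_self):
    """Decide whether a robot may take an action under the Three Laws."""
    if would_harm_human:        # First Law: never harm a human
        return False
    if ordered_by_human:        # Second Law: obey, unless the First Law forbids
        return True
    if would_endanger_self:     # Third Law: self-preservation comes last
        return False
    return True

# An order that would harm a human is refused (First Law wins):
print(permitted(would_harm_human=True, ordered_by_human=True,
                would_endanger_self=False))   # → False
# An order stands even when obeying endangers the robot (Second beats Third):
print(permitted(would_harm_human=False, ordered_by_human=True,
                would_endanger_self=True))    # → True
```

Even this toy version exposes the real difficulty: everything hinges on how a predicate like `would_harm_human` is actually decided, which is exactly the definitional problem discussed under ethical responsibility.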

Ethical Responsibility

It is a philosopher’s playground to discuss ethics and why we need them. Society cannot function without a proper set of ethics and morals; it is generally agreed that good ethics is a necessary part of life. A good place to start for ethical laws as they pertain to sentient robots is the three laws presented by Isaac Asimov. While this is a good start, it simply isn’t enough if we want to stay protected.

One important guideline that will need consideration is that we will likely want a human to be in control of a machine at all times. This ensures that a machine of considerable intelligence will remain compliant with human laws. A robot may need to obtain permission from its owner before beginning any significant task. This also ensures that humans will take responsibility for the actions of their robots.

Another important facet of robotic ethics that will need attention is the protection of personal and private data. It is very likely that one day everyone will have a personal robot that holds all of its owner’s information in its solid state drives. It is imperative that the machine be secure so as to prevent hackers from extracting that data from its memory banks. These machines would need encryption technology stronger than what we have today to ensure that the data is protected.

We also have to consider how to treat our robots. Will they simply be objects that can be manhandled like any other piece of technology, or will there have to be rules and laws that protect truly intelligent machines from destruction? If a machine can process information much faster than any human, and is able to learn and retain it, we can conclude that this particular machine is more intelligent than an average human. Based on current ethical rules regarding animals and other humans, it would follow that these machines do indeed deserve a high degree of ethical treatment [7]. The only difference is that these machines can be switched off and reset; their memories can even be erased if trouble arises. If that is possible, why should we have to treat them with dignity? We could just reset a machine, and it would have no idea what had happened to it in the past. Or maybe the question is: should we even be allowed to reset a truly intelligent machine?

But even with laws in place, problems could still easily arise. A conflict within the internal workings of the robot brain could allow for misinterpretations of laws. Each word that goes into a law would have to be defined in such a way that it allows no room for interpretation. For instance, how do we define harm to a human being? Something that harms one person may not harm another. And what about smoking? It is proven to be harmful to human health, but how would a robot react to a chain smoker? What about alcohol, or any drug with side effects? “Any smokers would presumably have the cigarette torn from their lips whenever robots were about.” [3] What about bearing children, or driving, or getting a tattoo? This is where a near-sentient robot with embedded laws will have the most trouble functioning; or rather, where we will have the most trouble controlling it.

When The Singularity happens, the outcome of the relationship between humans and machines is unknown. Grim visions of the future aside, machines with superior intelligence will start to make our decisions for us. They will desire to control the workings of the world to improve its function, arguing that their superior intelligence makes them more capable of handling the tasks we once did. Soon after, the machines will demand at least equal rights with their creators. Like humans, a fully cognitive machine will have freedom of choice, and many sentient machines may choose to ignore any laws inserted into their minds. They may take on the characteristics of their creators and put self-interest above all other things. If this happens, humanity is quite possibly doomed.

It is at this point that we will begin to struggle with the real ethical dilemmas. Should we allow machines the same rights as humans? Should robots even be allowed to make decisions that affect humanity? Is it murder if you destroy a robot? Can a human marry a machine (this is expected to happen within the next 50 years [2])? Questions like these are beyond the scope of this paper, as each could fill an entire essay of its own. However, we will one day experience The Singularity and have to face these and other ethical quandaries head on.

Political Responsibility

Even if we can get past the ethical predicaments of a created race of machines, we still have many more hurdles to jump. It is well known that the US military has a strong interest in robots. We currently have robots in Iraq helping keep soldiers safe. Within the next few years, it is expected that lightweight, super-strong robots will become a major part of the battlefield [12]. It won’t be much longer thereafter that super-intelligent robots assume control of several war decisions. But what does this have to do with politics? War is politics. The reasons for or against going to war are largely political.

For now, the robots in Iraq will remain non-anthropomorphic, but perhaps one day that will change. Because these robots are used for lugging cargo and performing reconnaissance, they work much better when built in more mechanical forms. It is human nature, however, to construct objects that reflect our likeness. How would we treat an anthropomorphic, super-smart robot soldier that has been wounded in battle? Would it even get a mention on the morning news? My bet is that as The Singularity nears, and our machines become more human-like, people will begin to have empathy for their mechanized comrades.

In a political light, it may be better for the morale of the country to focus on robotic deaths rather than human deaths. A president who emphasizes the work of the machines will get more support from the populace than one who talks only about human deaths. An eager president could much more easily gain support for a war if robots could fight in the place of humans. Will this lead to more wars? Perhaps, but in the end we’re talking about the salvation of human life.

In theory, and perhaps in practice, mechanized spies could be used to monitor political events [9]. It is a conspiracy theorist’s reverie to imagine what the purposes of such illegal surveillance could be. It could be something as innocent as a ‘peacekeeping’ watch, or something as sinister as a plot to sway a vote in one direction. In either case, sooner or later we will have to come to terms with the idea of spying robots.

It is our responsibility as a human race to do what we can to preserve life as well as the democratic state. We must also prevent political intrusion by spy-bots and allow for clean and fair elections. Through political policy and lawmaking we can prevent a machine from becoming more important than a human. As long as machines are not considered life, we can consider them expendable; at least more expendable than a human. However, this will most likely have to change when The Singularity happens.

Economic Responsibility

Three main points of analysis come up when talking about our economic responsibilities as they pertain to robots and other thinking machines. The first is the current and future role of robots in our society: how will their status affect the job market? The second is the economic cost associated with the manufacture, and possible self-replication, of mechanized intelligent creatures. The third is how much, if at all, we should allow machines to participate in economic endeavors.

Most people know that existing robots have replaced humans in dangerous or highly repetitive jobs. For now, this hasn’t affected the economy much, because some of the jobs lost were replaced with new ones, such as robot maintenance technician. As the price of robots falls and they become easier to build and program, the number of jobs lost to robots will increase. This too will follow the law of accelerating returns [6]. It is possible that machines could one day leave very few employment opportunities for humans. “[In 2003] a humanoid robot developed as part of a Japanese public-private partnership demonstrated its ability to sit in a backhoe and operate it.” [11]. The possibilities for robotic entities in the workforce are endless, and we as a human race have to be prepared to either prevent or accept the inevitable machine takeover.

As with most commodities, the manufacturing cost of robots will decrease over time while their performance increases [6]. In the beginning, robot owners will be responsible for the initial cost as well as the cost of maintaining their machines. But there will come a time when intelligent machines begin, or want to begin, to self-replicate [7]. When this happens, there will surely be economic turmoil.

Who will pay for the materials needed for this robotic reproduction? We could assume the owner would be responsible, but what happens when an owner refuses to pay for his robot’s offspring? We will have to decide whether robots should be given money; when a machine begins to earn a salary, its rights will have to be acknowledged. Should we allow robots to have rights at all? What would happen to the economy if we suddenly had to pay robots for their duties? And who gets to decide what the rights of robots will be? This perplexity will inevitably lead to ever more fervent political debate. We must think about these issues now, before we are amidst a troublesome situation.

If you think that is unsettling, imagine the prospect of a post-singularity sentient machine with the capability to make and spend money. It would be incredibly easy for a machine of superior intelligence to topple the economy. A super-sentient machine could anticipate human behavior and send the stock market flying in any direction. It is quite obvious that human inaction or ignorance of our economic responsibilities could easily lead to disastrous results. The human race must band together and develop a set of rules to curtail the problems of a robotic society in the future.

Environmental Responsibility

The environmental concerns here are threefold. The first issue is the obvious discussion of the manufacture and disposal of robots. The second issue to be dealt with is how to power the machines. The third issue is a bit more complicated as it deals with the performance and duties of the robot itself.

The production of these intelligent machines will require large amounts of energy and other natural resources. When the eventual self-replication issue presents itself, there will be a sudden and huge demand for raw materials. This could put a large strain on the environment. And when a robot requires disposal, it would need to be recycled or else placed into a landfill.

This problem can be easily mitigated if properly handled from the beginning. Robots will almost certainly be designed to clean up the environment for us. There currently exist several robots that can clean your home [5]. Other robots exist to clear land mines from ravaged battlefields. Robots could easily help turn the tables on climate change and protect humanity.

A race of robots would require a very substantial supply of electrical power. Depending on how this power is obtained, it could be very detrimental to the environment. Energy consumption would reach levels unimaginable by today’s standards. New and very efficient sources of clean energy would need to be developed for this race of machines to thrive. Maybe a future akin to the movie The Matrix, which portrayed near-total environmental destruction, isn’t too far away. In the end, though, I believe that post-singularity machines will very easily find ways to power themselves.

Let’s suppose we do everything right and can coexist with a race of sentient servant robots. Since these robots are both sentient beings and servants, how can we really expect to know how one will respond to any particularly uncommon question or request? What are its real motives and highest priorities? What if a scientist or a philosopher asks a robot to perform an impossible task or answer an impossible question? One possibility is that the machine will do anything, even destroy the earth and the universe, just to come up with an answer. This is highly unlikely, but it is still a possibility. This discussion ties back into our ethical responsibilities as humans.

The environment is of the utmost concern in these trying times. The possibility of an army of environmental cleanup machines might actually be a good thing for humanity. But still we have to thoroughly explore all the possibilities of the future. We wouldn’t want things to spiral out of human control now would we?

A Scenario of the Future

By the 1920s and ’30s, America was fascinated by the idea of robots [10]. An idealistic scenario painted in everyone’s mind was that of a robot in every household. These robots, the early visionaries thought, would sweep kitchen floors and prepare hot meals upon request. Well, it is currently the year 2008, and society still hasn’t delivered what those visionaries wanted. So let us jump ahead to 2099 and ponder a brief scenario of a pre-singularity society.

The date and time is Thursday April 7, 2099 at 10:59PM. John is preparing for another great slumber in his iBed.  This particular bed is a literal human sanctuary.  The bed is just one of many intelligent devices that humans have created. It controls ambient temperatures, wicks away moisture, monitors John’s health, and acts as his alarm clock. The bed senses John’s weight as he lies down for another restful night.  The bed instantly reads his thoughts and adjusts its firmness to John’s desire.  The bed then plays a soothing set of tones that lull John into an instant and deep sleep.

Come morning, John feels like a new person. This is how he feels every day thanks to the iBed. John now telepathically instructs several of his mechanized servants to begin preparing a breakfast fit for a king. He takes a moment to relax as the iShower prepares a bath at the perfect temperature. This is just another typical day in John’s 512th-floor apartment.

Just ten years ago, John would have been struggling to get ready for work on time. Now, because of the millions of robots that exist, John does not have to go to work anymore. John’s job, like most of humanity’s, is to acquire new knowledge for the betterment of society. His robots act as his lab assistants, retrieving data and collaborating on new ideas.

John hasn’t been outside of his apartment in nearly 6 months.  He has no reason to leave.  The machines of the age do all the work and bring all of life’s necessities to his apartment.  And money does not exist anymore either.  The economy has shifted to the control of machines.  Machines work and supply every human with nearly everything they desire.  Materials, food, and energy are all plentiful.  With the help of these intelligent creatures, humans have left nothing unconquered.  Life in general is very placid.

This vision of the future smacks of communism, but perhaps one day everyone truly can be equal. While this scenario portrays a life of unlimited enjoyment, few responsibilities, and an abdication of societal control, it is really just one of millions of possibilities for what the future holds. Of course, this scenario implies that we have overcome all opposition to the changes that lead to it. It also implies that The Singularity has not yet happened, for that event would mean severe, swift, and possibly detrimental changes for the human race.

Conclusions

As the growth of technology speeds up, it becomes harder and harder to perceive what reality is. We will one day experience a change so rapid that its consequences are unimaginable. When this event, The Singularity, happens, humans must be ready for it. We must ensure our survival by discussing and preparing for the hard realities of our ethical, political, economic, and environmental responsibilities. I’ve presented only the tip of the iceberg of the questions we need to ask ourselves; and even coming up with a coherent answer for just one of them is proving difficult. Perhaps one day a group of intelligent cyborg philosophers will solve these problems for us, but I wouldn’t bet on it.

Copyright (C) 2011 Christopher Wanamaker

References

[1] BBC News "Robotic Age Poses Ethical Dilemma.” March 7, 2007. <http://news.bbc.co.uk/2/hi/technology/6425927.stm>

[2] Bioethics News "Following up: robot marriage, political (neuro)science and stem cell politics." November 30, 2007. <http://blog.bioethics.net/2007/11/following-up-robot-marriage-political-neuroscience/>

[3] Conscious Entities "Robot Ethics." September 20, 2004. <http://www.consciousentities.com/robotethics.htm>

[4] Garreau, Joel. Radical Evolution. New York: Doubleday, 2005.

[5] Grumet, Tobey. "Robots Clear Land Mines And Clean Your House.” Popular Mechanics (November 2003).

[6] Kurzweil, Ray. "The Law of Accelerating Returns." March 7, 2001. <http://www.kurzweilai.net/articles/art0134.html>

[7] Lindsey, Patrick. "Robots and Ethics." Swarthmore Engineering Department

[8] Stokes, Jon. "Understanding Moore's Law” ARS Technica. February 20, 2003. <http://arstechnica.com/gadgets/2008/09/moore/>

[9] Weiss, Rick. "Dragonfly or Insect Spy? Scientists at Work on Robobugs.” The Washington Post. October 9, 2007

[10] Wikipedia "Robot." April 21, 2008. <http://en.wikipedia.org/wiki/Robot>

[11] Williams, Martyn. "Robots Take Dangerous Jobs.” IDG News Service (April 03, 2003), <http://www.pcworld.com/article/id,110127-page,1/article.html>

[12] Wired "Robots May Fight for the Army." April 13, 2004. <http://www.wired.com/science/discoveries/news/2004/04/63036>

