The Singularity Is Near, Time to Prepare: Artificial Intelligence & You
The Dawn of a New Age
We are entering the dawn of a new age: one in which we are nearing the singularity and the ability to achieve artificial intelligence. The reasons to believe we are nearing the singularity are vast. The law of accelerating returns, proposed by Ray Kurzweil, extends Moore's law beyond integrated circuits and states that the rate of change and development in evolving systems, including emerging technologies, tends to increase exponentially. We can already see the fast-paced growth that has occurred in the scientific community over just the past 15 years. The development of automatic speech recognition, and the fact that artificial intelligence has already advanced to the point where computers can devise a scientific hypothesis, run experiments, and analyze the results accurately, all point to the nearing of the singularity.
- Rise Of The Robot Scientists
Some scientific questions are so complex that designing and carrying out the experiments needed to find answers requires a prohibitive amount of scientists’ time. Robot scientists could fill the void.
Welcome to the Future
In 2010, Ross D. King, professor of computer science at Aberystwyth University, created a robot named Adam. "Adam" is essentially a complex, automated lab, but its computational reasoning abilities are extraordinary. Adam conducts studies and experiments on how microbes grow by selecting various microbial strains. Programmed with an extensive base of scientific and genomic knowledge, 'he' is then able to ascertain how the microbial strains grow.
To say that Adam merely possesses and processes information would be a vast understatement. Adam uses logic statements to represent the knowledge it has acquired, and then reasons about and directs its interactions with the physical world. “Adam generated and experimentally confirmed 20 hypotheses about which genes encode specific enzymes in yeast. Like all scientific claims, Adam’s needed to be confirmed. We therefore checked Adam’s conclusions using other sources of information not available to it and using new experiments we did with our own hands. We determined that seven of Adam’s conclusions were already known, one appeared wrong and 12 were novel to science.”
It is hard not to find that astounding. Adam and other robot scientists also use reasoning skills that humans have relied on for thousands of years, such as deductive, abductive, and inductive reasoning. This exemplifies how extremely close to the singularity point we are.
- Aubrey de Grey: A roadmap to end aging | Video on TED.com
Cambridge researcher Aubrey de Grey argues that aging is merely a disease -- and a curable one at that. Humans age in seven basic ways, he says, all of which can be averted.
Aubrey de Grey on the Singularity And Longevity
The Possible Effects of The Singularity on Humanity
It is extremely hard to decide whether humanity will be better off with or without the singularity. Acclaimed science-fiction writer Vernor Vinge proposed that the dawn of machines that surpass human intelligence will create an ‘event horizon’ beyond which we will not be able to accurately predict the future. This event horizon would turn our world upside down (figuratively, of course). As change accelerates, every aspect of life on Earth will be re-shaped, and our world will come to appear more alien than any human mind could have imagined. However, one can still attempt to conceive of these possibilities, and I believe that humanity will be better off in most respects due to the singularity and the rise of ultra-intelligent machines.
In the realm of medicine and health, things will be vastly better for humanity as a whole. Nanotechnology, which is already in use, could reach a point where our bodies are constantly being repaired by nanotech devices at the molecular and cellular level. Diseases could be eradicated completely. Aubrey de Grey, a brilliant gerontologist and futurist, has predicted that if we can develop this technology, human life expectancy could be extended to 1,000 years. Furthermore, brain-computer interfaces may one day allow us to upload our consciousnesses and, theoretically, live forever.
The singularity will raise a vast number of ethical implications. The most common apprehension regarding the ethics of A.I. is the fear of a malevolent, ultra-intelligent machine uprising against the human race out of resentment and malice. This fear is regarded as irrational by many academics, scientists, and writers, including myself.
It is more likely that AIs would drastically alter the environment, such as by harvesting all available nuclear, solar, and chemical energy, if they found that useful for maximizing their goals and reproductive fitness. As artificial intelligence researcher Eliezer Yudkowsky has said, "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else." Although such AIs would not be malevolent in intent, the consequences for the human race could be dire, up to and including the possibility of human extinction.
Nevertheless, the idea of “friendly” AI has already taken shape: budding research into the concept exists, and procedures for ensuring a benevolent AI have been outlined. Theoretical friendly-AI structures such as Gödel machines “make provably optimal self-improvements given certain assumptions” (Schmidhuber 2007) and can modify themselves while preserving their established goals. We must also consider that moral and social values change considerably and constantly, and whatever the change may be, it is usually regarded as progress; yet “Every improvement is a change, but not every change is an improvement.” An approach to this moral dilemma could be to program the AI with an understanding of reflective equilibrium.
It's Time to Talk
It is important that we begin discussing these matters now. Though it may seem to some that we are still far from grasping this technology and achieving the singularity, the fact of the matter is that if it does happen, it will happen suddenly. It will be incomprehensible to most human beings because of how strange a turn humanity would take. So it is of the utmost importance to discuss these technological advancements and the ethical implications that will certainly follow from them. At the same time, we must take heed not to restrict such technology merely because of the possible unwanted implications of AI and the singularity, given the immense benefits they could bring as well.
Let's Play! The Game of Reflective Equilibrium.
Reflective equilibrium is meant to form principles that give guidance and confidence to our convictions. As this model is laid out, certainty is an absorbing property.
For the purposes of this thought experiment/game, consider the "Theorist" to be an A.I. programmer and the "Reviewer" to be an ultra-intelligent A.I.
The game is modeled on the peer-review process and has two steps. First, a thinker called the Theorist (Programmer) is chosen from among the polity to propose a theory, which takes the form of a complete ideology: a vector of positions on all issues, with prescriptions for each, represented as 0's and 1's. Second, a Reviewer (A.I.) is chosen from among the polity. That Reviewer considers the theory and either "accepts" or "rejects" it.
After acceptance or rejection, both the Reviewer (A.I.) and the Theorist (Programmer) have learned something. If the Reviewer accepts, the Theorist becomes more confident: the Theorist knows that the Reviewer doesn't necessarily agree with everything presented, but on balance, every position is more likely. If the Reviewer rejects, the Theorist becomes less confident (except on those issues the Theorist believes are settled -- her considered judgments). Something similar happens for the Reviewer. If the Theorist offers a theory that agrees with him on all considered judgments, then the theory is persuasive in general, and the Reviewer's (A.I.) beliefs move toward the theory on all issues. However, if the Theorist (Programmer) offers something disagreeable, then the Theorist has proven herself untrustworthy, and the Reviewer's beliefs move away from the theory. (Hans, 2010)
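One round of the game above can be sketched as a small simulation. This is a minimal illustration under assumptions of my own, not part of the cited model: I represent the Reviewer's beliefs as probabilities in [0, 1], have the Reviewer accept only when the theory matches every considered judgment, and use a simple linear update with an arbitrary `step` size.

```python
def play_round(theory, reviewer_beliefs, considered, step=0.2):
    """One round of the peer-review game (illustrative update rules).

    theory           -- list of 0/1 positions on all issues (the ideology)
    reviewer_beliefs -- list of probabilities that each issue is a 1
    considered       -- indices of the Reviewer's considered judgments
    """
    # The Reviewer accepts only if the theory matches every considered
    # judgment (each settled belief rounded to the nearest position).
    accepted = all(round(reviewer_beliefs[i]) == theory[i] for i in considered)

    for i, position in enumerate(theory):
        # On acceptance, beliefs move toward the theory on all issues;
        # on rejection, they move away from it.
        target = position if accepted else 1 - position
        reviewer_beliefs[i] += step * (target - reviewer_beliefs[i])
    return accepted, reviewer_beliefs


theory = [1, 0, 1]            # the Theorist's complete ideology
beliefs = [0.9, 0.2, 0.6]     # the Reviewer's current beliefs
considered = [0, 1]           # the Reviewer's settled issues
accepted, beliefs = play_round(theory, beliefs, considered)
# The theory matches both considered judgments, so it is accepted and
# every belief shifts toward the theory's position on that issue.
```

Because each accepted theory pulls every belief toward a definite 0 or 1, repeated rounds push the Reviewer's settled beliefs toward certainty, which is the "absorbing" behavior described above.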
Based on this model, and assuming that the A.I. has a top-level goal of beneficence, I postulate that an A.I. programmed with the understanding and use of reflective equilibrium would be able to reach morally sound AND morally progressive conclusions and determinations of its own volition. Possible bootstrapping algorithms that convey reflective equilibrium, such as “Do what we would have told you to do if we knew everything you knew” or “Do what we would’ve told you to do if we thought as fast as you did and could consider many more possible lines of moral argument,” could reliably ensure that an A.I. would remain friendly and would not become morally stagnant.