Artificial Intelligence - Our Illegitimate Fears
Famous works of literature and film such as 2001: A Space Odyssey and Terminator show that humans have long romanced the idea of creating a conscious, self-aware intelligence. Unfortunately, in most of the works in which such an intelligence appears, it launches some apocalyptic scheme to destroy all humans. This speaks to an innate fear within humanity, perhaps of a technology that, in the eyes of some, is evolving and developing too quickly. It inspires us to step back and question what exactly we are doing by attempting to create a machine capable of desire, thought, and basic emotion. Are we making a terrible mistake, or are we bringing forth a great gift to existence? In this paper, I will attempt to dispel irrational fear and to point out real tools capable of averting negative consequences in the quest to create anthropomorphic artificial intelligence.
Much like any technological or scientific innovation, artificial intelligence should be regarded with an eye toward its social implications and consequences. One historical example involves a famous physical relation. Einstein first derived his mass-energy equivalence E = mc² in a 1905 paper titled "Does the Inertia of a Body Depend Upon Its Energy Content?" This discovery was a huge leap in innovation and helped us structure a better answer to the question of the very essence of matter. The relation, however, was not strictly necessary for the development of nuclear fission and the subsequent development of the atomic bomb. As physicist and Manhattan Project participant Robert Serber put it: "Somehow the popular notion took hold long ago that Einstein's theory of relativity, in particular his famous energy relation, plays some essential role in the theory of fission. Albert Einstein had a part in alerting the United States government to the possibility of building an atomic bomb, but his theory of relativity is not required in discussing fission. The theory of fission is what physicists call a non-relativistic theory, meaning that relativistic effects are too small to affect the dynamics of the fission process significantly." While Serber is correct in asserting that Einstein was not directly involved in the creation of the atomic bomb to any serious extent, the relation inspired those in the nuclear fields to begin thinking about how one might convert matter to energy.
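To give a rough sense of the scale the relation implies, a short calculation is worth pausing on. The snippet below is purely illustrative and not part of any historical computation; it uses only the standard values of the speed of light and the conventional TNT energy equivalent:

```python
# Energy equivalent of one gram of matter, via E = m * c^2.
c = 299_792_458.0        # speed of light in m/s (exact SI value)
m = 0.001                # mass in kg (one gram)
E = m * c ** 2           # energy in joules, ~8.99e13 J

TNT_TON = 4.184e9        # joules per ton of TNT (conventional value)
tons_tnt = E / TNT_TON   # ~21,500 tons of TNT equivalent

print(f"{E:.3e} J, about {tons_tnt:,.0f} tons of TNT")
```

A single gram of fully converted matter corresponds to roughly twenty kilotons of TNT, which is precisely why the relation so naturally invited speculation about converting matter to energy.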
Einstein's mass-energy relation is simply a prime example of an innovation that initiated a chain of events with social implications arising from applied use. All of science and technology bears this risk of being misused or of accidentally creating some disaster. Should we therefore stop discovering and innovating because of the new risks we encounter with every day of progress? Should we strip away all technology and scientific knowledge from everyday life and retreat to caves? Even if we did, we would still be living in an environment in which it is very difficult to survive. As humans, we inhabit an environment that plagues us with new obstacles each and every day. If we are to thrive and progress into an enriched future, we must continue innovating and moving forward. The development of artificial intelligence, in the quest to answer basic questions that have gone unanswered for many years, is a part of this progression, this forward movement. We should not squander a part of it because of unjustified fear.
If it still seems ridiculous to blame Einstein for the atomic bomb, then it should come as no shock that this line of thinking is an instance of the slippery-slope fallacy, in which a person asserts that one event must inevitably follow from another without any argument for the inevitability of the event in question. In this case, the fallacy is committed when one asserts that Einstein is responsible for the atomic bomb because he inspired thinking about converting matter to energy. While it may be true that Einstein set in motion the events leading up to the construction of the atomic bomb, it is ludicrous to assert that he was responsible for its construction and use. From an ethical standpoint, an initial innovation or discovery is inherently neutral; its applications are what may be considered helpful or harmful. Of course, this line of thinking requires modification when the innovation in question is a conscious being with desires and basic thoughts. We see this in works of fiction such as Terminator or I, Robot, in which the machines have some (perhaps subdued) initial malfunction that causes them to desire the execution of all humans, or some poor reason to "save us from ourselves." It is for this reason that we should design psychological evaluators of some sort to assess the stability of the machine as we begin to bring it to life. Are we justified in terminating it if necessary? Absolutely, as any psychologically unstable being is a danger to itself and the individuals around it. Hopefully we could simply shut it down without having to destroy it, but that is not the point. The point is that, assuming these innovations are possible, we should be able to create them safely in a controlled setting.
From the above paragraphs the question naturally arises: is the development of artificial intelligence a necessary innovation? It is impossible to give a truthful, objective answer, since no human on the planet truly knows what will occur in the distant future. We can, however, look at the thousands of companies in modern society that deploy artificial intelligence techniques in everyday life. From financial companies to hospitals, music, and aviation, artificial intelligence techniques are all around us. The Swedish philosopher Nick Bostrom writes that "Many thousands of AI applications are deeply embedded in the infrastructure of every industry." Perhaps it is the fact that he is correct that scares people so deeply. In a world so heavily dependent on and integrated with artificial intelligence, is it dangerous to apply human principles to artificial life forms everywhere? It certainly may be if we deploy defective technology in the attempt to make it happen. Again, this is why we should take many precautions before actually attempting something of that magnitude. It does not follow that we should abandon the project altogether simply because great risks are involved. Did we choose not to land on the Moon because it was risky? The same should go for this ancient goal that still persists in the field of artificial intelligence.
We can also reflect on the original goals of the field to justify our progression forward: to learn more about our own intelligence, to learn more about the essence of intelligence, and to answer the question of whether we, as humans, can create intelligence "from scratch" on our own. Developing such an entity has been a particularly vivid dream of humanity. We should not abandon our hopes of accomplishing that dream just because works of fiction have portrayed particularly negative consequences of creating conscious, human-like artificial intelligence. We should continue to reach for our dreams and to pursue our potentially never-ending quest to unlock the mysteries hidden deep within the Universe and within us.
From an epistemological standpoint, the fear surrounding this problem is backed by no solid evidence. We simply do not know what our future as a species will be. One small action in the present leads to a long causal chain of effects, each effect existing merely because of its prior, necessitating cause. Taking precautions is incredibly important in any dangerous yet potentially significant project or experiment. Yet we can accomplish nothing in life if we choose to pursue only imaginary routes that bear no obstacles. It is for this reason that fear surrounding the age when "machines completely replace us" is unfounded. We have no idea what will occur. We may destroy ourselves, become cyborgs, leave the planet, or do something far more wild. The point is that we simply do not know. If we are committed to avoiding any future with unknown factors, we might as well give up on persisting as a species.
The question still arises: why is artificial intelligence without the ability to feel, to have desires and goals, and to be self-aware as we understand it not good enough for us? Why must we attempt to instill organic life principles such as instinct, desire, and emotion in the mind of an artificial intelligence? The answer is best understood by looking at humanity's history with technology, where by technology I refer to Merriam-Webster's definition: "a practical application of knowledge." Ever since we formed simple tools out of stone and animal flesh, we have noticed a peculiar phenomenon. The wide acceptance and use of a new tool by a large number of people percolates into the culture of those involved in its usage. Soon, techniques and art forms develop from mere particular ways of using the tool. Each major technological innovation has changed our world in ways we cannot fully be certain of. Technology is a huge part of humanity. It grows and evolves with us in a mutual relationship: we evolve it and it evolves us. So perhaps it can be said that the most important pieces of technology are the ones that extend the human experience. Innovations of thought and sentience in artificial intelligence may do just that for us, giving us a boost into a future full of unknown wonder.
It is true, however, that the development of sentience and thought in artificial intelligence could elicit negative consequences if executed haphazardly, much like any other experiment or project. It is for this reason that necessary precautions should be taken as the field moves ever closer to the goal of a sentient artificial intelligence. Desire, fear, anger, and other passions are strong reasons why certain biological life thrives in the hostile areas of this planet. Comprehension and communication are also essential for such development, as can be seen in any wild animal. To truly make an artificial mind like a human's, we will have to hardcode innate "pseudo-hormonal" desires, instinct, and everything else that governs intuitive thinking. Not only that, we will have to grant it the capacity for more conscious activities such as reasoning (in the more human sense), artistic ability, sensation, and pure thought. Certainly we would not allow the intelligence access to much of anything until we could definitively discern that it is psychologically stable. We could even hardcode "fail-safes" in case of disaster. If we take these precautions, we have no reason to fear sentient machines, and if we can truly overcome all of these obstacles, I think we owe it to ourselves as a species to attempt to fulfill the goal. We should continue dreaming of the day when we successfully create a machine capable of thinking as we do. Machines have become deeply integrated into our society and culture. It is not unreasonable to desire them to stand beside us and live as we do. It is not unreasonable that machines should think as we do, especially given the uncertain future looming on the horizon.
Einstein, Albert (1905). "Does the Inertia of a Body Depend Upon Its Energy Content?". Annalen der Physik 18 (13): 639–641. Bibcode 1905AnP...323..639E. doi:10.1002/andp.19053231314.
Robert Serber, The Los Alamos Primer: The First Lectures on How to Build an Atomic Bomb (University of California Press, 1992), p. 7.
"Learning to Reason Clearly by Understanding Logical Fallacies". Makethestand.com. July 19, 2007.
Nordlander, Tomas Eric (2001). "AI Surveying: Artificial Intelligence in Business" (PDF). MS Thesis, De Montfort University. Retrieved 2007-11-04.
"AI set to exceed human brain power". Web article, CNN.com.
Merriam-Webster. Definition of "technology", sense 1a. Web. http://www.merriam-webster.com/dictionary/technology.