
Artificial Intelligence

Updated on May 6, 2009

The Future of Moral Debate?

For as long as there have been governments, even in their loosest forms (i.e., tribal councils or village leaders), there have been moral debates: issues that were, and still are, argued by opposing sides because of a potential benefit, but also a potential loss. Today is no different from those bygone days, and the news is often rife with such debates; as a society, we struggle with the moral complications of everything from abortion to cloning, from stem-cell research to the benefits of the space program, and everything in between. But with nanotechnology becoming a not-so-far-off reality, and cloning already occurring in a limited, government-regulated form, the future holds an entirely new series of moral debates, ones with firm roots in the writings and arguments of philosophers and psychologists of every age, even as far back as ancient Greece.

But what, you may ask, could this wide-spanning issue be? Consciousness. For almost as long as mankind has been aware of thought and consciousness, there have been those who sought to truly understand it, to map its processes and label each step, hoping to unlock not only the hidden secrets of our minds and how we become the unique individuals we are, but perhaps also the answer to the biggest and most heated debate the world has ever seen: what happens when we die? Even today, with research into quantum physics becoming a more accepted and better-understood area of science, the mind remains a curious thing, an organ so amazing that its very processes boggle the mind.

A bio-chemical machine for intelligence?

With this in mind (pun intended!), consider for a moment the implications of creating a being capable of this level of awareness yourself: a creature, a mind, or perhaps a simple program possessing all the mental capabilities, awareness, and burning theological questions previously thought unique to human beings. Would such an act of godlike creation be possible? Would the creation have a soul, would it disprove the notion entirely, or would it perhaps shed light on yet another spiritual grey area, one that seems to confirm the existence of a spirit, yet in a sense that no one today might expect? Here we tread on ground haunted by theologians, philosophers, and scientists, a realm wholly different from anything we've yet explored: the home of digital sentience, the sort of awareness and consciousness considered to be the hallmark of true artificial intelligence.

But despite the sheer number of "ifs" and the vast stretches of knowledge yet to be discovered about digital sentience and the creation of an aware, conscious being, the notion of artificial intelligence is not a new thing. It has run rampant in science fiction, but the dream is far older: the ancient Greeks imagined it in myth, specifically the Promethean myth, and Jewish folklore imagined it in the form of golems. Since that time, it has been all but beaten to death by stories, novels, and films, until it has practically become a household word; just ask anyone on the street about "A.I.," and you're sure to get some kind of answer relating to both popular culture and sentient machines.

We see such references everywhere in our entertainment. They appear in the form of the Skynet creation of the Terminator series, in the machine minds of The Matrix, and in the androids and artificial lifeforms of Blade Runner, Star Wars, Star Trek, Armitage III, Aliens, and Steven Spielberg's A.I., as well as a host of other films and shows, not to mention mountains of books and stories. If you watch carefully, you're sure to notice an interesting trend, and doubtless the foreshadowing of things to come: each time a digitally sentient creation of man (or sometimes even of an alien species) appears in a plotline, regardless of the medium used for the story, the idea of viewing digitally sentient "life-forms" as equals, as aware creatures perhaps as concerned with the nature of life, reality, and the afterlife as their human counterparts, pops up in some form or another, along with its insidious counterargument. Sometimes the hints are subtle (in Aliens, it takes a major sacrifice before Ripley accepts and trusts the android Bishop), but often these arguments constitute a major part of the piece, blurring the lines between man and machine. (Armitage III and that classic piece of '80s cyberpunk culture, Ghost in the Shell, are great examples of this.)

How will society deal with machines that think and learn as we do?

Our entertainment, with its hit-or-miss portrayal of interesting futures that "may yet come to pass," has set the stage and provided a colorful backdrop for the future of moral debates on sentience; it gives us a firm basis from which to objectively analyze the paradox that lies herein: would we see our own creations, albeit sentient ones, as deserving of our recognition? Should we believe they are as aware, and as eligible for the afterlife, as we are, simply because they learn, think, and grow as any other human being does, or are they merely dolls, mechanical puppets that exist to serve and amuse us, no matter how aware or human they may seem? Would it even be moral to create such a being, and if so, what rights should it be guaranteed? Children are "created" by a form of biochemical construction, and are therefore guaranteed rights that increase as they mature; should the mechanical children of science applied by human hands be given the same rights, or should they be denied simply because they were "manufactured" in a different form, born fully functional, with (we might assume) a period of usefulness before planned obsolescence? Here that age-old adage that continues to encourage ignorance rears its ugly head: "Perhaps some things are better left unknown."

Never before has the wild (and granted, old) notion of AI been possible, but rest assured, it is incredibly close, on the horizon even, so to speak. True, we've had computers and cloned organisms for years now, each seen as a manner of creating a new intelligence, though the former cannot "learn" in the truest sense, and the latter is simply a form of copying a design of evolution or the divine, or both. (Though consider this: do cloned organisms have souls? Are they granted the same kind of consciousness and spiritual awareness as their non-cloned relatives? Would a cloned human be an automaton, or as ordinary as any other child?) Never until now have we had the technology to even fathom how we might create an entirely new form of life, one outside the blueprint of the divine, with all the mental faculties we hold so dear. It sits right alongside nanotechnology: with the ability to stack atoms just out of reach, we may be only a few short decades from realizing the medical boon of synthetic neurons and nervous tissue in general. Imagine, if you will, the uses for such a wondrous technology! With synthetic nervous tissue, extensive bodily damage that results in a loss of feeling in, say, a limb or a section of skin could be repaired, and more importantly, new types of brains and brain-like neural networks could be built. Think about it: once we can build structures capable of functioning like the human brain, structures made of synthetic materials and to the specifications we set, the debates will heat up immediately. Should we use this technology simply because we can? Is it moral to create a machine that can think better than any human in the history of the species, or a simple worker specifically designed to follow directions coded into its mind from "childhood," limited by undying obedience?
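The "brain-like neural networks" mentioned above already exist in a crude software form, and a tiny example makes the idea of a machine that "learns" concrete. The sketch below is purely illustrative and is not from the article; it shows a single artificial neuron (a classic perceptron) learning the logical OR function from examples alone, with weights standing in, very loosely, for the synaptic strengths a synthetic neuron might one day embody:

```python
# A minimal artificial neuron (perceptron) that learns logical OR from
# examples. Illustrative only: the article's "synthetic neurons" are
# speculative hardware, while this is the standard software analogy.

def step(x):
    """Threshold activation: the neuron 'fires' (1) if input exceeds 0."""
    return 1 if x > 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """Nudge two weights and a bias until the neuron reproduces the data."""
    w0, w1, bias = 0.0, 0.0, 0.0
    for _ in range(epochs):
        for (x0, x1), target in samples:
            output = step(w0 * x0 + w1 * x1 + bias)
            error = target - output       # how wrong was the neuron?
            w0 += lr * error * x0         # move each weight toward the answer
            w1 += lr * error * x1
            bias += lr * error
    return w0, w1, bias

# Truth table for OR -- the only "teaching" the neuron receives.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]
w0, w1, b = train_perceptron(data)
predictions = [step(w0 * x0 + w1 * x1 + b) for (x0, x1), _ in data]
print(predictions)  # matches the targets: [0, 1, 1, 1]
```

No rule for OR is ever written into the program; the behavior emerges from repeated correction, which is exactly the distinction the article draws between a machine that merely computes and one that learns.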

And heated the debates will become: machines that can think! Imagine the implications, both moral and economic. Will these creations be our willing slaves, our equals, or perhaps our eventual masters, as many a bad science fiction film has suggested? Will they act peacefully, helping us overcome our natural frailties, or will they resist us, fighting against oppression in a passive or perhaps an aggressive manner? Will we see ourselves in these creatures of silicon and software, we who are the very reason for their existence, perhaps even to the degree of inciting a second uprising akin to the Civil War and the abolition of slavery, or will we see yet another blank and unyielding wall of steel, incapable of bringing us any closer to the answers we seek in our quest to understand ourselves? Perhaps it will be a combination, an amalgam of a little of everything we hope for as well as dread, for the very notion of artificial intelligence and digital sentience is not simply the creation of a machine or an artificial organism that can think; it is the creation of a new individual, outside the conventional means, with an entirely new set of blueprints, an individual as unique and different as any human being is from another, and a genuine product of its environment.


      smg 8 years ago

      You very cleverly document THE question, as well as cover the ground leading up to it: pop culture as a means to continually, subconsciously redefine societal sentiment for and against future dilemmas.

      We WILL be able to accomplish sentience in machines eventually. THE question then will be: is it moral to do so just because we can?