Apocalypse How? The So-Called Singularity: Does Mankind Face a Digital Dystopian Future and a Cyber Societal Collapse?

Part 1 - General Thoughts and Considerations re: the Singularity

Writers, academics, scientists, futurists, talking heads and regular Joes like me who have seriously pondered and/or researched the concept known as "the Singularity" at any length will have been exposed to perhaps a dozen different theories about what exactly the word "Singularity" means. Moreover, the main players in this debate offer widely divergent views on, among other things: (a) whether a "Singularity," as that word is generally understood, could ever come into being in the first place; (b) even if it could in theory, whether it ever actually will; (c) whether it would be a good thing (Ray Kurzweil's position) or would instead mean "The End of the World As We Know It" (aka TEOTWAWKI); (d) whether there are prophylactic measures that could mitigate or even erase the risks posed to humans by computers possessing a so-called "super-human" level of intelligence; and (e) what ethical ramifications and considerations must be weighed if mankind does one day create sentient machines.

Exponential Scientific and Technological Advancement

The idea that scientific and technological progress compounds, with each advance building on those that came before and accelerating those still to come, has been around for centuries. As a general matter, there is no real controversy over the fact that this is a genuine phenomenon observed throughout the course of human history. Even assuming there was ever any such controversy in the past, the exponential nature of mankind's scientific and technological advancement is now essentially universally acknowledged, and it certainly was by the time the concept of the Singularity started working its way into popular culture (and into society generally), in or around the late 1990s or early 2000s.

What I have found to be the most generally accepted definition of the "Singularity" envisions machines (synonymous with "computers" for the purposes of this article) being programmed to design new and improved models of themselves. These new and improved machines would, in turn, begin creating even smarter and more capable machines, and this would continue on and on ad infinitum. In short order, computing power and capability would reach a level of super-intelligence that exceeds the collective intelligence of all of mankind by orders of magnitude.
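To make the compounding dynamic described above concrete, here is a minimal toy sketch, not anything taken from the Singularity literature itself, in which each generation of machine designs a successor that is somewhat more capable and is built in somewhat less time. The growth factors (1.5x capability per generation, a 20% reduction in design time) are arbitrary assumptions chosen purely for illustration.

```python
# Toy illustration of recursive self-improvement: each generation designs a
# successor that is more capable and takes less time to build. The specific
# growth factors below are arbitrary assumptions, used only for illustration.

CAPABILITY_GAIN = 1.5   # assumed: each generation is 1.5x as capable as its parent
TIME_REDUCTION = 0.8    # assumed: each generation is designed in 80% of the previous time

def simulate(generations=10, capability=1.0, design_time=10.0):
    """Print capability and cumulative elapsed time for each machine generation."""
    elapsed = 0.0
    for gen in range(generations):
        elapsed += design_time
        print(f"Generation {gen}: capability {capability:8.1f}, "
              f"built after {elapsed:5.1f} years")
        capability *= CAPABILITY_GAIN
        design_time *= TIME_REDUCTION

if __name__ == "__main__":
    simulate()
```

Even with these modest per-generation gains, capability grows geometrically while the gap between generations shrinks, which is the intuition behind the claim that a self-improving machine could quickly outstrip the collective intelligence of mankind.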

An Existential Threat?

The scientists, technology and computer science experts, futurists and lay writers who have written about the Singularity have established different, sometimes very different, criteria that must be satisfied in order for the Singularity to come into being. Putting those differences aside and assuming it can one day come into being, in my opinion the Singularity would unquestionably represent a substantial, and potentially even an existential, threat to humanity. The manner in which any such worst-case scenario might unfold is another Singularity-related topic on which there are many competing theories.

In any event, if one were to hypothesize a scenario in which the emergence of the Singularity turns out to be an extinction-level event for mankind, I do not envision it unfolding in a manner that even remotely resembles the events of the Terminator movies, where the scrappy remnants of civilization wage total war against their machine overlords. Instead, I believe that a societal collapse caused by the emergence of the Singularity would prove that T. S. Eliot had it right when he wrote: "This is the way the world ends. This is the way the world ends. This is the way the world ends. Not with a bang but a whimper."

Why Would Mankind Knowingly Create an Existential Threat to Itself?

However, and this is a consideration that may forever turn the question of whether super-human Artificial Intelligence poses an existential threat into a chicken-or-the-egg question, none of the developments leading toward the emergence of a Singularity could come into being without human fingerprints all over them. In that regard, I cannot fathom any financial or common-good justification for mankind to make what would unquestionably be a massive investment in funding the creation of anything that even remotely resembles a super-intelligent, sentient machine, and to do so for the sole purpose of creating even more advanced (or "intelligent") machines.

Put differently, I do not see the creation of super-intelligent and/or sentient machines as a condition precedent to achieving the utilitarian goals, for the betterment of mankind, that would motivate non-military research and investment in ever more advanced machines, even machines whose efficiency and utility to the human race are presently unimaginable.

Mankind's role in this process would, I admit, diminish over time as machines become ever more advanced and increasingly capable of contributing to their own advancement. However, I believe that nothing even remotely resembling the Singularity can come into being without human assistance at every step of the process. And if mankind would indeed play an irreplaceable role in creating such machines, and thereafter in maintaining them (or, as it were, keeping these sentient machines "alive"), then I cannot see how we would ever relinquish our control over them. Put differently, the human race would survive (albeit at an infinitesimally small fraction of our current population) even if every computer in the world disappeared instantly, whereas computers could not survive without us.

Below I have listed what I believe to be some of the most relevant questions and considerations regarding the role that mankind would have in the development of super-intelligent machines.

*A computer capable of writing code in order to create an equal or superior computer, in less time than it took to create itself, would itself have to be programmed by humans.

*Why is there an assumption that such computers would be connected to the internet upon their creation, and remain connected indefinitely?

*If humans decide not to connect such computers to the internet, where would their ability to affect and change their environment come from?

*If such a machine were not connected to the internet, or even over a smaller internal network to another machine physically capable of affecting and changing its environment, doesn't that leave telepathy and/or telekinesis as the only methods through which it could do so?

Additional Topics to be Discussed in My Next Article re: the Singularity

If such computers are not capable of telepathy or telekinesis, and their access to the internet or any other form of connectivity is controlled by humans, how could they possibly effectuate changes to their environment on their own? Perhaps most importantly, how would such a computer go about literally, meaning physically, building these superior supercomputers in less time than it took to build the computer itself?

By mere virtue of having superhuman intelligence, these computers will not necessarily have anything to offer when it comes to producing the actual end product of their work. Millions of manufacturing jobs worldwide have been lost to machines capable of playing the role formerly played by human labor in the manufacturing process, and this replacement of human labor is only going to increase over time. However, while the need for human labor has been largely obviated by the development of computers and machines, these new machine "laborers" or "workers" are, as a general matter, "dumb" machines operating under the control of "ordinary" computers. The present-day computers that serve this function do not even remotely resemble the type of machines people envision when discussing the Singularity, whose intelligence, computing power and functionality would exceed that of contemporary computers by orders of magnitude.

And in that regard, it is absurd to hypothesize any of the super-intelligent, sentient computers of the future being used on an automobile assembly line in any capacity. Using a computer with superhuman intelligence to perform this type of menial labor, or even the highest-level management functions, would be about as sensible as assigning Albert Einstein to manual labor on a factory assembly line producing tanks rather than applying his time and energy, in his capacity as a scientist, to developing the Manhattan Project! For the same reasons, there is no conceivable purpose in bestowing any such manufacturing capabilities upon computers with superhuman intelligence.

I hope you enjoyed this post, and please comment with your thoughts and/or attacks on my theories about the Singularity!
