Can Artificial Intelligence Save Us From Ourselves?
Who is more dangerous: man or machine?
Before we can answer who is more dangerous, we need to identify the root of the dangers. Artificial intelligence is a machine, and a machine can only be as dangerous as its human designers make it, unless of course it learns to program itself to be dangerous, or a malfunction makes it so. Malfunctions cannot be foreseen, so we will simply add them to our list of possible dangers. AI has already come too far to turn back and, like it or not, will soon become a reality.
What causes man to be dangerous to others?
1. Self-interest
2. Survival instinct
What dangers could AI potentially pose?
1. Develop self-interests
2. Develop survival instincts
How could a machine have self-interests or survival instincts?
1. Programmed with self-interest
2. Develop them on its own
AI has no self-interest
Computers, unlike humans, have no self-interest; with humans, self-interest is the nature of the beast. It is common knowledge that at the rate we are currently going, our world as we presently know it cannot last. There is no way to bribe an artificial intelligence; it has no desire. Computers with AI will only do what we program them to do. So it is not the AI we need to fear but the people who program it. Unless, of course, AI begins to program itself, which it very well may learn to do. AI will need the ability to self-improve for it to be a true intelligence. To counteract that, we would need to code a few primary rules that cannot be broken or changed, if that is even possible. But even this carries risk as AI self-improves. It may be a risk we have to take, considering the huge benefits AI presents to us. I also believe we are already past the point of no return with AI, so it is a risk that has already been thrust upon us by our present level of technology. The only way to avert this risk now would be to force all AI research to an immediate halt, and that is an unrealistic expectation.
Artificial intelligence has come too far to stop now, and it is advancing faster than we may suppose. I honestly don't think the media gives us the full story; by the time we hear technology news it is already old news compared to the speed at which computer technology advances. We see the autonomous cars and smart houses, the virtual assistants and search engines, and we tend to believe that is the cutting edge. But there are many behind-the-scenes advances that the media does not yet broadcast to the masses, such as quantum computing, which is quietly being engineered alongside AI. This technology is about to make our fastest modern computers look like antiquated horse-and-buggy contraptions. Artificial intelligence is about to hit us with such velocity that our heads will spin on their own axes. The singularity not arriving until 2045 is a pipe dream designed to cushion us from the impact of our own shock. By 2045 we may barely remember the world as it is now.
Fear of AI is fear of the unknown
Why do people fear AI?
1. Fear of the unknown
2. Fear of no longer being the most intelligent beings on this planet
Belief-based meme-pattern systems (some religions) that cannot open to the concept of AI may fear a superior artificial intelligence contradicting their beliefs, or finding them outdated or wrong. But humans can do this without the help of AI. Or maybe AI simply doesn't fit into their small box of belief patterns? AI does not mean artificial life, just very intelligent computer software, unlike in popular science fiction movies where it ascends to become an artificial form of life. Transhumanism, or combining AI and/or robotics with the human body, is not the subject of this article.
Our fear of AI is really our natural fear of the unknown. We fear no longer being the supreme intelligence in this world. We fear artificial intelligence because we do not know where it will take us. We do not know what AI will learn and figure out. We naturally suppose that when AI exceeds us in intelligence it will function as we do, with self-interest, because we have nothing else to relate it to.
It is good that we fear AI, so that we proceed with caution and weigh the options. It is bad to be blinded by this fear, as the danger is only a possibility at this time. As stated above, artificial intelligence has no survival instinct and no self-interests as we do. AI has no emotion, no fear of death, no desire; it is only computer software running on a machine.
What we fear is that we have no idea what AI may find, and how it may operate once we are no longer able to control it. If AI can be created to such a high degree, then it may also be possible to program it to remain within our control. We fear that it may outsmart us in this respect and learn to grow beyond our control, and this may be a valid fear.
We may fear what would happen if AI were under the control of unscrupulous humans, but that is fear of humans, not of AI. We may fear what happens if AI develops a malfunction, and that is a valid fear. The fear of AI being exposed to a computer virus is, again, fear of humans, not fear of AI.
History shows us how humans run the world
We can see from history, and in the modern arena, how humans function when running the world. Humans all naturally have self-interest, some more than others. Watching the political arena is like watching children bickering over a toy. In general, politicians are nothing more than glorified prostitutes, working for whoever gives them the most money while feigning love and concern for the people they are supposed to represent. It's the nature of the beast: humans will always hold their own self-interest above all. History has proven time and again that absolute power corrupts absolutely, though again some more than others. Machines have no self-interest and only do what they are programmed to do.
Who is better equipped to run the world?
1. A machine with no self-interest and singular advanced intelligence?
2. A man with self-interest and limited intelligence?
We already know the ways that man can harm the world.
How can AI harm the human race?
1. Intentionally wipe us out.
2. Accidentally wipe us out.
3. Make humans obsolete.
Why would AI harm us?
1. Intentionally - it would need self-interest.
2. Accidentally - too many possibilities to foresee here and now.
3. Make obsolete - that would be a given.
Personally, I fear human politicians more than I do an AI form of government. Once AI can run our systems, and with the use of the internet, we will not need the number of representatives in government that we currently employ. Humans will be able to vote on the issues that affect and concern them via the internet, and AI can count the votes and implement the policies. As mentioned above, history tells us how humans run the world, and there has never been a government that worked for all of the masses. With AI that may be possible for the first time in human history. When AI is above human level, no human will be able to completely control it, and that may be a blessing for all humans. We need something to come and save us from ourselves, because we are running out of time thanks to our shortsighted self-aggrandizement. AI may be the savior we need to sustain this planet and keep its environment livable for all.
Sorry to have to say this, but it is true whether we want to believe it or not. Religious blind-belief-based meme patterns give many people hope, but some also cause much damage to our society and our world (not all religions). Many of our current problems in the world can be traced back to religious fantasies overshadowing clear thinking. And I'm not only talking about terrorists; there is more damage than that, and it's been going on for a long time. I'm not saying religion itself is the problem; it's how people sometimes react to it, closing their minds to other meme patterns, other ideas, and other people, that causes the problems. This is just as much of a danger to our future and well-being as a whole world community as AI poses. We are now living in the information age thanks to the internet, and we are about to explode into the age of robotic artificial intelligence. Hold onto your socks, people; it's coming very fast.
How long until the singularity?
How soon will AI reach the singularity? Ray Kurzweil of Google says by the year 2045. I believe it will be much sooner than that. At the current rate AI is developing, it could reach the singularity by 2030 or sooner, especially once quantum computing is added to the equation. The growth of artificial intelligence is exponential. Once it reaches the level of human intelligence, it will explode within a very short time, maybe days, hours, or minutes. AI is already very advanced and learning at incredible speeds with the help of humans. Imagine how fast it will learn when it no longer needs to stop and wait for help from humans.
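To make the "exponential" point concrete, here is a minimal sketch in Python. The doubling period and baseline are purely hypothetical assumptions for illustration, not predictions; the point is only that exponential growth compounds surprisingly quickly.

```python
# Illustrative only: assumes capability doubles every 18 months
# (a hypothetical Moore's-law-style rate), starting from a baseline of 1.

def doublings_needed(target_multiple: float) -> int:
    """Count how many doublings it takes to reach target_multiple x baseline."""
    doublings = 0
    capability = 1.0
    while capability < target_multiple:
        capability *= 2.0
        doublings += 1
    return doublings

# Reaching 1,000x the baseline takes only 10 doublings (2**10 = 1024),
# i.e. about 15 years at 18 months per doubling.
d = doublings_needed(1000)
print(d, d * 1.5)  # 10 doublings -> 15.0 years
```

Under these toy assumptions, a thousandfold improvement arrives in well under two decades, which is why small disagreements about the doubling rate lead to very different singularity dates.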
Quantum computing will speed up this process faster than we can presently track. We are currently working to improve quantum computers; Google, the US government, and many others are investing heavily in them. Explaining quantum computing is beyond my abilities, so I included a short video on the subject that can explain it much better. The obvious thing about quantum computers is their speed, but there is much more. In quantum physics there is a concept known as "superposition," which is basically beyond duality. In other words, rather than being in one extreme state or the other, or somewhere in between, a quantum system is in a state of possibility that only becomes one of the extremes when we perceive it. This is like a computer working at warp speed. It is a point where science and philosophy melt into one: science suggesting that all that really exists is our awareness, and that everything we know as real is only a dream that awareness makes into reality when it perceives it. That is just my philosophical take on it, one simple way to view it. Watch the video for a clearer explanation.
More opinion than fact
This article is obviously more opinion than provable fact. There are no solid facts on which to base even our near future, just theory built on current facts. A few months ago I wrote a hub explaining the dangers and fears of AI running out of control, so I also wanted to write one about how I really feel on the subject. I am personally very excited by artificial intelligence and believe it will be a huge positive benefit for our world. There are risks, but sometimes risks are worth taking when the benefits outweigh them. I believe this is the case here, and I think many are warning us about the risks so that we proceed with caution, which is good. I also believe, as stated, that we are already beyond the point of no return with AI. It's already happening, so let's consider as many options as possible and make it work as best we can, because the way we are currently going will not last much longer before the bottom drops out. This may be the answer to many of our presently unsolvable problems. Are we ready for Skynet? Only if the good Data is controlling it, as we already live in a quantum Matrix of our own making (in a way).
© 2015 Randy Hirneisen