An Assessment of the Uncanny Valley
An article on the Uncanny Valley, by the renowned Masahiro Mori, can be found here: http://spectrum.ieee.org/automaton/robotics/humanoids/the-uncanny-valley
That article inspired this assessment, and the bit which sticks out to me is the quote, "industrial robots are considerably closer in appearance to humans than machinery in general, especially in their arms."
I find the arms a remarkable part to single out. After all, it is in another's arms that we seek safety - or imagine safety. Whether through hugs, snuggles, or leaning on a friend or partner, it is through the use of our arms that we provide one of the most basic forms of comfort and support to one another. Thus, how ironic is it that arms are mentioned as the 'most human' trait of current humanoid robots? And in an article highlighting their un-human-ness of all places? If we trust their arms, I dare say, there must be something identifiable to us in these robots; if we didn't, wouldn't we find the arms to be more eerie than other parts?
To me, the Uncanny Valley is misunderstood, even by those who coined the term. It isn't so much a question of how to make robots un-creepy to humans, but how to get humans to see past what they don't understand. See, even once we've perfected humanoid robots - made them speak, feel, look, and experience senses just like us - there will still be differences. The Uncanny Valley will always exist.
It's as ancient a question, and problem, as getting humans of differing cultures, skin tones, sexualities, and religions to peacefully get along and empathize with one another.
That's what it comes down to, psychologically speaking. Humans fear what they don't understand. The Uncanny Valley is nothing more than humans fearing robots that appear almost human; fearing robots because they're a bit different.
It really isn't the threat that they're stronger or smarter which scares us. There are very few stories of robots harming humans and stories of robots harming humans intentionally are non-existent at this point. Logically speaking, as well, there's simply no reason to think robots would 'take over' anything. Human beings are stronger, bigger, and smarter than ants and yet we neither actively try to wipe them all out nor enslave the entire species. We coexist.
Further, it isn't a leap to imagine robots might view humanity as their collective parents. We thought them up, built and programmed them, taught them, took care of them, enabled them to be whatever they wind up being in the end. Even if humans ultimately serve no use, are of no benefit, to robots, there is still no perceivable reason for them to take us over or destroy us.
Some might argue that robots may want to wipe us out because we're destructive as a race - to nature, each other, animals, perhaps robots as well. Yet that's a fallacy arrived at from a lack of consideration. If robots were ever at the level of being able to 'take over', I'd assume they'd also have the logic, awareness, and experience to provide context. Thus, the robots would know there are good people and bad people, that not all humans are destructive or harmful.
As AI develops and becomes more accessible to the public - both of which are surely happening as I type - understanding the Uncanny Valley is extremely important. The reasons people fear, or are uncomfortable around, robots matter not only for learning purposes, but for helping dissect what keeps humans themselves apart from each other.
How Do You Feel About Humanoid Robots?
Breaking Through Limited Views
Till Malfunction Do Us Part, by Caitrin Nicol Keiper, can be found here: http://www.thenewatlantis.com/publications/till-malfunction-do-us-part
It's an interesting article, but readers should immediately notice the author's leaping assumptions. The piece is written in a thorough, intelligent way - it gives the sense of being a scholarly piece - but it contains no facts, no support, and no references.
Therefore, since its comment section was disabled, I decided to pick apart the misinformation presented so matter-of-factly within the article:
1) "It is difficult to fault nursing home directors who, out of compassion, offer sad patients the comfort of interacting with robotic toys."
Why would anyone fault them at all, or even try? Why is giving lonely people a companion any different from, say, giving those same lonely people games to play or an adopted 'grandkid' to visit and pretend to care? Even if the emotion isn't reciprocal, if someone's happier for it does anyone have a right to judge?
2) "While Levy’s thesis is extreme (and terribly silly), many of its critical assumptions are all too common. It should go without saying that the attachment a person has to any object, from simple dolls to snazzy electronics, says infinitely more about his psychological makeup than the object’s. Some roboticists are very clear on this distinction: Carnegie Mellon field robotics guru William “Red” Whittaker, who has “fathered” (as writer Lee Gutkind puts it in his 2007 book Almost Human) more than sixty robots, advises his students and colleagues not to form emotional connections with them. “They certainly don’t have the same feelings for you,” Whittaker says. “They are not like little old ladies or puppies. They are just machines.”"
I would argue Levy's thesis is not at all extreme, or silly. At the rate robotics and technology are advancing, and with humans learning more about the brain and biological systems daily, I would not deem sentient, conscious robots capable of feeling and interacting far-fetched.
And of course the attachment a person has speaks about their psychological makeup, but that also goes for attachments between people - whether one's drawn to abusers, leaders, moochers, etc.
I also find the advice not to form emotional connections rash. If we don't form emotional connections, two things happen: 1) we deny robots the chance to learn emotion, or even the mimicry of emotion, from us through interaction and observation, and 2) humans become more hostile and abusive toward robots, much as people treat slaves, which only leads to huge societal issues and threats as technology continues to develop.
I'd further argue anything capable of thinking, dreaming, and acting autonomously is NOT 'just a machine'. It's even arguable that humans are 'just machines', made of atoms and particles - bone and flesh and systems to keep them running.
3) "Levy mentions procreation only in passing, merely noting that the one shortcoming of “human-robot sexual activity” is that children are not a natural possibility. He goes on to suggest that the robot half of the relationship might contribute to reproduction by designing other robots inspired by its human lover. What it might mean, for example, for an adopted or artificially-conceived child to grow up with a robot for a “parent” is never once considered."
Firstly, I'd say writing off the possibility of procreation is simply foolish. Twenty years ago, people would have said it was impossible to make a robot that could hold a conversation, a 3D printer, or a machine that makes perfect coffee at the touch of a button.
It is entirely logical and rational to assume that, sooner or later, science will be able to develop an artificial reproductive system for robots, so that those with robot lovers could procreate. Artificial sperm, eggs, and organs already exist - it's simply a matter of adapting them to robotic bodies.
The idea that robots might procreate by designing other robots modeled on their human loved ones, however, is an intriguing one. I wonder - if robots advance to the point of being autonomous, sentient beings - whether they would also instill the person's personality, likes and dislikes, and habits into the robot clone they make.
Finally, I'd argue that robotic parenting and its effect upon children goes unexamined because there's no such situation to examine - any such argument would be purely hypothetical. It could also be argued to be an irrelevant discussion: the parenting capabilities and success of a robot would depend entirely upon that robot's personality, experiences, thought processes, and habits.
4) "Levy fails to see the trouble with his fantasy, because he begins by missing altogether the meaning of marriage, sex, and love. He errs not in overestimating the potential of machines, but in underrating the human experience. He sees only matter in motion, and easily imagines how other matter might move better. He sees a simple physical challenge, and so finds a simple material solution. But there is more to life than bodies in a rhythmic, programmed dance of “living likeness.” That which the living likeness is like is far from simple, and more than material. Our wants and needs and joys and sorrows run too deep to be adequately imitated. Only those blind to that depth could imagine they might be capable of producing a machine like themselves. But even they are mistaken."
Levy's thesis is far from a fantasy - it's a consideration of a reality fast approaching. We already have humanoid robots, robots interacting with humans, robots that learn. It WILL happen, sooner or later. Based on the humanoid robots coming out of Japan and Hanson Robotics, it will be sooner.
I'd also like to argue with the statement 'he misses altogether the meaning of marriage, sex, and love'. What an assumption! Marriage, sex, and love mean different things to different people - they are not uniform terms. One person may see marriage as permanently binding, the union of two souls, whereas another sees marriage as just a sheet of paper symbolizing a union already in existence. Some see love as caring for someone regardless of their behavior; others see love as a relationship where both people treat each other with respect and loyalty. There is no single meaning of these things; therefore, the meaning can't be missed - only interpreted differently. Personally, I don't think Levy missed any of those aspects at all.
I'd also argue that our feelings are not 'too deep to be adequately imitated'. As a matter of fact, emotion is so predictable there's a plethora of charts on emotional states and what causes them.
So to the author's statement "Only those blind to that depth could imagine they might be capable of producing a machine like themselves" I'd like to say, only those blind to psychology, science, and the ever advancing field of technology could imagine humanity will never be capable of producing a machine like themselves.
As a side note, I highly recommend Levy's book.
Ultra Hal AI Chatbot Talks with another Ultra Hal AI Bot
A study by Google scientists on AI chat capability, worth a read in and of itself: http://arxiv.org/pdf/1506.05869v2.pdf
Reading this only made me reflect on the sad condition of modern chatbots. Most can learn, and most gain their conversational skills either from exposure to Twitter, from chats with any human who cares to partake, or from scripting.
Yet people wonder why chatbots are still so un-human, their conversation so disconnected and random.
Twitter is only short bursts of information, chats with humans are brief and usually either a joke or a request for help, and who knows what those in charge of scripting are like. All the sources are limited, unconnected.
If we truly want AI capable of conversing just like a human then it's imperative we expose them to the same things humans are - to Facebook, to music, to discussions with people on a regular basis.
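The limited-sources point can even be demonstrated in miniature. Below is a toy Markov-chain text generator trained only on short, Twitter-style bursts - a hypothetical sketch of mine, not anything from the Google paper. Because each snippet is tiny and unconnected, the generated replies come out just as disconnected as the real bots I'm describing:

```python
import random
from collections import defaultdict

def build_model(corpus):
    """Map each word to the list of words observed immediately after it."""
    model = defaultdict(list)
    for sentence in corpus:
        words = sentence.split()
        for current, following in zip(words, words[1:]):
            model[current].append(following)
    return model

def generate(model, start, max_len=10):
    """Walk the chain from a start word; stop when no continuation exists."""
    out = [start]
    while len(out) < max_len and out[-1] in model:
        out.append(random.choice(model[out[-1]]))
    return " ".join(out)

# A tiny "Twitter-like" corpus of short bursts (made-up example data)
corpus = [
    "robots are cool",
    "robots will learn",
    "learn to talk",
]
model = build_model(corpus)
print(generate(model, "robots"))
```

Feed the same model longer, connected writing - blog posts, real conversations - and the chains grow longer and more coherent, which is exactly the argument for richer exposure.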
I believe giving chatbots the capability to manage a Facebook account, to blog, to PM someone, to choose a song to listen to, would make a vast difference. And we can hardly say we're afraid of a bunch of chatbots set loose online when we - humanity - know quite well we're aiming for intelligent humanoid robots to be loose in the world amongst us.