Could We Be Building Androids for the Home?
No, I'm not talking about Google's popular brand of smartphones. I mean autonomous machines similar in concept to the ones portrayed in the movie I, Robot. I know what you're thinking: "No thanks! I don't want that!"
Similar things could have been said about all kinds of technology that has already taken over our lives, from automotive manufacturing, where many people were laid off and replaced by robot workers, to the smartphone in your hands that connects us to each other every moment of every day. Tech gadgets just have a way of growing on us over time.
Having grown up through the '80s and '90s myself, I remember when computers were "that huge box over there" performing some nebulous work most people didn't understand. The earliest computers were so disconnected from human interaction that only professional computer geeks (like me) knew how to use them.
However, despite that disconnect, they were already beginning to change the way we live, work, and play every day. Slowly, this technology became an indispensable part of everything we use, from cameras to cars. That put humans and computers face to face every day, and the need to improve human-computer interaction became inevitable.
It wasn't until the mobile phone explosion of the 2000s that this need for human interaction started becoming a reality. Many companies, including Microsoft, Google, and Apple, began efforts to make these personal devices more user-friendly.
It was Apple with Siri, along with Google, who got the head start merging AI with human interfaces that understood human speech, kicking off the present-day Internet of Things (IoT) revolution that is changing our lives today. Don't get me wrong: computers deciphering human speech, and computer-generated speech, have been around for some time already. The new paradigm is full machine-to-human interaction.
Until recently, only supercomputers could handle near real-time human interaction effectively; the internet just wasn't fast enough to send speech data off for remote analysis. Now that faster internet is available on our phones, the heavy processing can be done in massive data centers, giving us a more pleasant experience overall. While Apple and Google took the lead, more big players joined the market, like Amazon with Alexa and Microsoft with Cortana, each with their own flavor of AI tech.
Thanks to these developments and our increasing exposure to speech interfaces, we have gradually moved from clunky systems that understood "one - word - at - a - time" to fully fluent (almost) human-to-machine communication.
Still, the picture isn't all roses yet. Wireless connectivity issues, such as Bluetooth pairing, continue to plague users. Just yesterday my wife tried to connect her phone to the car without success, yet today I paired my phone easily. A few weeks ago it took me 30 minutes to figure out how to pair a device with an Amazon Echo, even though the first time had seemed easier. There are also times when these machines lack the necessary context, through no fault of their own, because they can't see the world around them. You can't ask them, "What am I holding?" Not to mention their failure to recognize vocal tone and detect intent: you could be saying something in jest, and they won't understand the difference.
So, given how far we have come, I see the next wave of AI being fluent conversation with full recognition of vocal intent, incorporating visual context as well. We can't be far from systems capable of interpreting body language as part of the communication, not unlike we do. We are likely only a short step away from fully interactive machines that understand us better than we understand ourselves. OK, maybe that's pushing it.
Seriously though, what do you think the future holds? Could we be close to building our own household androids?
© 2018 Jeremy W