Will A. I. Outsmart the Human? Are we at risk?

  1. Sulz97 posted 19 months ago


    At present, robotics is being used in most areas of the world. My main concern is "self-learning" robots. Just like in the movie series "Terminator," will our human race be taken over by Artificial Intelligence? Nowadays most robots can be programmed to talk and act like humans. These robots might gain the upper hand by becoming smarter than their inventors. If a situation like this happens in the future, are we ready to face it?


  2. Setank Setunk posted 19 months ago

    We still operate with the same primitive binary or magnetic on/off system we have used for half a century. Functionally it is no different from a machine with relays; they are just smaller. All our improvements are restricted to better coding and quartz slicing for complexity and speed. The speed is needed to compensate for our primitive system of tiny relays, which is exceedingly inefficient. Only living matter can process cognitively, and I do not believe man will survive long enough to generate living matter.
    So I would not worry about it.

  3. tamarawilhite posted 19 months ago

    Cloud-based AI poses no threat unless:
    * we give it control over its own decision-making process, and it can decide that it should kill someone or a lot of people
    * we give it control of anything important
    * we give it a physical body it can use to damage anyone or anything of importance, like a power pole or a car

    So for a Terminator scenario to happen, it has to have both the idea that people need to be killed and the ability to do so. This is why we shouldn't let AIs self-direct their development or give them control over anything important.

    I've written hubs on "why AI will think like us". The real reason any AI may be responsible for mass deaths is if its programming is set by the same people who rationed care in HMOs to save money, or by environmentalists who consider people pollution and of low value. So you won't see AIs committing mass murder unless they are under military control, but if we use some of the same logic many liberals have, you will get:
    * the software system that told a woman in the Pacific Northwest it won't treat her late-stage cancer, then sent her information on euthanasia - but such processes would be coordinated and deliberate, not accidental
    * an AI that says the cheapest way to end a pandemic is to kill the infected, and, if refused by a human, to just quarantine everyone with placebos and burn the bodies
    * a system that assigns every person a value and starts denying medical care to, and sterilizing, those who don't meet the state's standards. When liberals call all conservatives crazy, and a major political pundit sells several million copies of a book claiming that "liberalism is a mental disorder," you could easily see your political or ethnic identity flagged as bad. Then you're subject to mandatory sterilization and lower-tier healthcare, but you can't see the records or challenge the decision, and you may not even know that the pills you are getting are there to sedate you or ease the pain of the condition they won't treat, because they want you to die sooner

    Yes, I write horror and sci-fi - but I also have an engineering degree and a decade of experience in IT.