1-3 of 3 discussions (6 posts)

Stephen Hawking warning about Artificial Intelligence

  1. Stacie L posted 3 years ago

    Stephen Hawking is the world’s most famous physicist, and even he’s worried about being outsmarted. Not by a better mathematician, but by a machine.
    So, it might seem a strange moment for him to point out that while limited forms of artificial intelligence and machine learning have turned out to be very useful, going any further risks a dystopian future and possibly the end of human life as we know it.

    http://www.theglobeandmail.com/technolo … e21888416/
    It seems that the I, Robot world will become a reality, according to the greatest mind of the century... it makes me nervous.

    1. oceansnsunsets posted 3 years ago in reply to this

      Hi Stacie, thanks for sharing this.  How interesting.  Something to look into more, to see what is feared exactly and why, etc.  Food for thought.

  2. Electro-Denizen posted 3 years ago

    I think it would only be very worrying if, unaided by humans, a machine could:
    1) locate and mine the minerals needed for batteries
    2) transport the material to a plant
    3) create the batteries
    4) insert the batteries into previously created machines

    Otherwise, all we need to do is pull the plug. End of story :-))

    Honestly, I think that before we get to a real point of concern over robots taking over the world (robots that all need electricity), we'll reach a point where our electrical grid either won't support massive expansion or gets burned out by unusual solar activity. Everything we do is electricity-based; it's a very strange form of dependency, really.

    1. psycheskinner posted 3 years ago in reply to this

      The robot floating around on that comet has solar panels.

      That said, I doubt a breakthrough in true AI will occur in our lifetimes.

      1. Electro-Denizen posted 3 years ago in reply to this

        That's a fair point!

        I agree, though; in my opinion, true AI would need consciousness... If we could create machines with pituitary glands (God forbid), we'd be on our way with this (horror).

  3. bBerean posted 3 years ago

    Sentience will not be realized, but it doesn't have to be for a program to be dangerous.  Giving up so much control without balanced safeguards is flirting with potential disaster.