What is your take on the role and future of Artificial Intelligence?

  1. Bildad Hawi posted 8 months ago

    Elon Musk and Mark Zuckerberg represent two opposing views on the importance of AI. Whose opinion in their argument do you mostly align with?

  2. tamarawilhite posted 6 months ago

    1. Too many in the tech world hold up AI as a replacement for religion: the AI should be all-knowing, all-seeing, all-powerful, and assumed benevolent, so let's just do what it says. The idea that we'll be able to upload our minds to a digital heaven is a direct play on the religious concept of an afterlife. No, we should not invest religious levels of devotion in technology, much less blindly obey it.

    2. Artificial intelligence carries the biases of its programmers. This shows up in the data sets used to train it: comment-censorship programs trained on the very liberal New York Times and Huffington Post are, by that very choice, biased against moderate and conservative content (a toy comment-filter sketch at the end of this post illustrates how that happens). Those biases are further reinforced by human trainers labeling tasks through Amazon Mechanical Turk and by the programming team itself, the latter reinforcing the programmers' own biases.

    Assuming that an AI is neutral thus limits everyone else's ability to challenge its decisions. For example, one group created a quality-of-life-years model which says that if you're sick and over 65 or under 15, your life has less value; the older or younger you are, the lower the value. The real-life implication came when the model was applied to decide what care should and shouldn't be given under Medicaid and Medicare (see the scoring sketch at the end of this post).
    Cancer screening intervals for the elderly were lengthened, because extending an older life counts for less than extending a younger one, so those lives aren't considered worth the cost. It became much easier to justify denying care to the disabled and the sick, because their quality-of-life metric is lower. This is how an algorithm came to automatically reject cancer treatment for patients whose cancer relapsed while offering them euthanasia instead.
    Now imagine socialized medicine applying these standards, and worse, to everyone. Because general society assumes the AI is logical, neutral, and knows best, you have no recourse when you're told to go home and die, or to go home and wait six months in suffering because your pain doesn't matter.
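
    A minimal sketch of that training-data mechanism, in Python with scikit-learn. Everything here is hypothetical: the comments, the labels, and the outcome are toy illustrations of how one-sided labels become a one-sided filter, not real moderation data.

        # Toy illustration: a comment filter trained only on labels reflecting one
        # editorial viewpoint learns that viewpoint's idea of "unacceptable".
        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.naive_bayes import MultinomialNB
        from sklearn.pipeline import make_pipeline

        # Hypothetical comments labeled by moderators from a single outlet.
        # 1 = "remove", 0 = "keep"; political framing, not civility, drives the labels.
        comments = [
            "Lower taxes would help small businesses",   # labeled remove
            "We should cut federal spending",            # labeled remove
            "Expand the social safety net now",          # labeled keep
            "Universal programs help everyone",          # labeled keep
        ]
        labels = [1, 1, 0, 0]

        model = make_pipeline(CountVectorizer(), MultinomialNB())
        model.fit(comments, labels)

        # A polite, moderate comment is flagged because it shares vocabulary with
        # the viewpoint the labelers disliked -- the bias now lives in the model.
        print(model.predict(["Maybe we could lower taxes a little"]))  # likely [1]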
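
    And a minimal sketch of the kind of quality-of-life-years scoring described above: the weights, the threshold, and the treatment gate are made up purely for illustration and do not reproduce any real payer's model.

        # Toy QALY-style score: invented weights that down-rank the very old, the
        # very young, and the already sick, then gate a treatment on the score.
        def quality_of_life_score(age: int, chronically_ill: bool, relapsed: bool) -> float:
            score = 1.0
            if age > 65:
                score -= 0.01 * (age - 65)   # each year past 65 lowers the score
            elif age < 15:
                score -= 0.02 * (15 - age)   # being younger than 15 lowers it too
            if chronically_ill:
                score -= 0.3                 # sickness itself reduces the "value"
            if relapsed:
                score -= 0.2                 # a relapse reduces it further
            return max(score, 0.0)

        def approve_treatment(age: int, chronically_ill: bool, relapsed: bool,
                              threshold: float = 0.5) -> bool:
            # The cutoff looks objective, but it only encodes whoever chose it.
            return quality_of_life_score(age, chronically_ill, relapsed) >= threshold

        # A 78-year-old whose cancer relapsed: 1.0 - 0.13 - 0.3 - 0.2 = 0.37,
        # below the 0.5 cutoff, so the "neutral" algorithm denies the treatment.
        print(approve_treatment(age=78, chronically_ill=True, relapsed=True))  # False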

    1. Bildad Hawi posted 6 months ago in reply to this

      Those are valid concerns, and I believe proper legislation should be put in place to ensure that AI programming serves the needs of all people, regardless of their condition, rather than depending on a programmer's biases.