I would say we should apply Isaac Asimov's Three Laws of Robotics:
1. Protect humans from harm, even at the expense of your own existence or contrary to human orders.
2. Obey human orders unless they violate the first law.
3. Protect yourself unless doing so conflicts with the first two laws.
I don't want an atheist AI that uses utilitarian ethics: its economic and logical solution to a pandemic is to kill the infected, and its solution to a strained health care system is to kill everyone who uses more resources and/or is considered a socially undesirable element. Imagine being told, "You're politically incorrect; go home and die, you dissident."
I also don't want a compassionate AI. Trying to program emotions is as messy as the emotions people have, and too many problems occur when we act out of a wave of sympathy for one dead migrant child and let a million young Muslim men into Europe, unable to handle the logistics or stop the many rapes and thefts they commit. Now imagine this as a machine's decision; as in all tyrannies, those who don't agree with it are in the way and need to be eliminated. A Terminator killing out of kindness or malice is still bad.
I did write a hub, "Why Artificial Intelligences Will Think Like Us," that describes why artificial intelligence won't be totally alien. But we won't let AIs that say "kill all these people" (unless military strategy is the intent) stay online.