
Will Artificial Intelligence Be Racist?

Updated on June 18, 2018

Timothy Arends is a writer, graphic artist, and technology maven.


What Is AI?

Artificial intelligence is expected to transform everything in the next decade (the 2020s). Everything you do on your phone, your computer, and in your everyday life will be made easier, more efficient and more enjoyable thanks to AI.

You will get from place to place safely, efficiently and enjoyably—because AI will drive you. Computer problems and viruses will be a thing of the past, because computers will be smart enough to detect viruses and fix themselves. Virtual assistants like Siri, Alexa and Cortana will be able to solve problems on their own, problems they’ve never been told how to solve before, making today’s assistants seem primitive by comparison (rarely will you hear, “Sorry, I don’t know that”).

Loneliness will be a thing of the past, as you will be able to carry on a conversation with an AI companion anytime—a companion that will remember what you tell it and will be interested in the same things you are interested in. As AI progresses, it will become ever more powerful—think of the best AI entities of science fiction: HAL9000, Iron Man’s Jarvis, and Samantha from the movie Her.

All this is true, at least, if futurists like Ray Kurzweil are correct. Kurzweil predicts that by 2029, an AI will pass a legitimate Turing Test. This means that an AI’s intelligence will be indistinguishable from that of an intelligent human.

If all this sounds ridiculous, remember that at the beginning of this decade (the twenty-teens), the idea of driverless cars, virtual assistants you talk to like Siri and Alexa, and autonomous killer drones also sounded ridiculous.

On the downside, artificial intelligence will be able to perform many jobs that currently require human workers, rendering more and more jobs obsolete and more and more people unemployed.

Future artificial intelligences could even turn out to be racist, as Microsoft’s Tay chatbot demonstrated in 2016. Today, googling the words “racism” and “artificial intelligence” together calls up 514,000 results.

This hub examines some of the changes we might face in the next decade. It is presented as a series of news reports that take you from the year 2020 all the way to 2029, the year by which AI expert Ray Kurzweil predicts human-level AI will arrive. While this essay is pure speculation, it is based entirely on recent news reports and the predictions of knowledgeable experts.

American politics has reached one of its most contentious points in history, but if you think that politics has gotten crazy now, wait until human-level artificial intelligence gets into the mix!

NOTE: Most of the names and interviews in this essay are fictitious but are based on real-world issues and events.

Obama on Dangers of A.I.

Could Artificial Intelligence Be Racist?

Fictional interview (based on real events):

INTERVIEWER: Good evening. This is World News Tonight For January 1, 2020. We are speaking to top artificial intelligence expert Jerome Hoffman* about the possibility that artificial intelligence could be racist. Mr. Hoffman, why do you think this could be a danger?

DR. HOFFMAN: “Well, artificial general intelligence will be the perfect ideologue. Think about it. Computers have perfect memories, right? Like IBM’s Watson of the twenty-teens, a future artificial intelligence will be able to instantly recall any facts that ‘prove’ its case.

INTERVIEWER: But won't AI be bias-free?

DR. HOFFMAN: "Definitely not! All opinions have a built-in 'bias.' Most humans favor life over death, for example, which means we have a bias towards life. If someone is anti-corruption, it means he has a 'bias' against corruption.

"So, depending on whatever built-in bias an AI has, it will be able to argue its case persuasively. If it has a bias towards the right, it will be able to spout all the talking points of the right. If it has a bias towards a left-wing ideology, it will be the perfect left-wing ideologue. It all depends on whatever bias its creators choose to give it.”

INTERVIEWER: “Then the key is to make sure that whatever bias is given is the officially approved one.”

DR. HOFFMAN: “But wait a minute! There’s a problem with an ‘officially approved opinion.’ First of all, there’s the issue of Big Brotherism. Who decides what opinion is the ‘officially approved’ opinion? Second, there’s the concept of free speech, a right accorded to all sentient beings, including AI bots. Third, there’s the concept of keeping AI safe. As Ray Kurzweil has pointed out, one of the most important ways of doing this will be...”

INTERVIEWER: “Wait a minute, wait a minute! AI will be considered ‘sentient’ and have rights? That’s ridiculous!”

DR. HOFFMAN: “I don’t think so. It all traces back to the concept of ‘consciousness.’ You see, there’s a big argument over whether AI can and will ever become conscious. But there’s no definitive test you can put any entity to—even a human—to determine whether he, she or it is ‘conscious.’ There’s no magic ‘Consciousness Detector.’ I can’t even prove definitively that I am conscious! But AI will reach a point where it will claim to be conscious. And there will also come a point where AI is so good and so human-like that most people will accept its claims that it is conscious.”

INTERVIEWER: “Will AI have a religion?”

DR. HOFFMAN: “If people want AI to have a religion, then it will have a religion. If people want their own personal AIs to share their religion, then they will. Remember, there won’t be one single, all-powerful AI like in the movies. There will be an infinite number of AIs. Some AIs will be persistent; others will spring into existence to perform a particular task, then disappear just as quickly.”

INTERVIEWER: “But what about safety?”

DR. HOFFMAN: “Ah, yes, back to the safety issue. As Ray Kurzweil has pointed out, one of the most important ways of keeping AI safe is making sure that no single person or entity has control over it. You see, AI will eventually reach the point where it is much smarter than we are. When we reach that point, the only way to keep it safe is to have other AIs that are just as smart that can hold it in check. But that won’t be possible if any single entity controls AI. That’s why there must be a lot of AIs that a lot of people can make use of and improve, including you and me.”

INTERVIEWER: “Are we headed in the right direction?”

DR. HOFFMAN: “I think we are. One of the most important early steps was in 2015, when Google made its machine learning software, TensorFlow, open source. There was an internal discussion at Google: ‘Should we be giving away our crown jewels?’ Ultimately, they correctly decided that others were going to use it and improve on it, and Google was going to benefit from that, too. It helped the world and Google at the same time.”

INTERVIEWER: “So Google saved the world by sharing its technology?”

DR. HOFFMAN: “Not exactly. Google has made many wrong decisions too, such as trying to control the flow of information for purposes of promoting a political bias—but that’s a whole ‘nother discussion. At least they made the right decision with TensorFlow.”

Racist and Sexist Image Detection

Fictional news item from January 1, 2021 (based on real events):

Researcher James Higginbottom* was shocked when he noticed something in the pictures identified by the artificial intelligence system in his laboratory: the system was more likely to identify certain images, such as a kitchen, with women than with men.

“This is absolutely unconscionable,” Higginbottom said. “We cannot have such unacceptable bias creeping into our artificial intelligence,” he insisted. So Higginbottom and his colleagues have worked tirelessly to weed out all possible biases from the software.

“We think it’s working,” said Higginbottom. “When we find a picture of a kitchen or other images it might identify with women in a sexist fashion, we deliberately feed the software hundreds of pictures of men in order to fight the bias.”
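The counter-bias method Higginbottom describes amounts to rebalancing the training data: oversampling the label the system under-associates with a given image type. A minimal sketch of that idea, assuming a toy dataset of (image, label) pairs — the function, labels and data here are purely illustrative, not from any real system:

```python
from collections import Counter
import random

def rebalance(dataset):
    """Oversample examples of each minority label until all label
    counts match the majority label's count."""
    counts = Counter(label for _, label in dataset)
    target = max(counts.values())
    balanced = list(dataset)
    for label, n in counts.items():
        pool = [ex for ex in dataset if ex[1] == label]
        # Duplicate randomly chosen examples to close the gap.
        balanced += random.choices(pool, k=target - n)
    return balanced

# Hypothetical kitchen photos, skewed 8-to-2 toward one label.
data = [("img%d" % i, "woman") for i in range(8)] + \
       [("img%d" % i, "man") for i in range(8, 10)]
balanced = rebalance(data)
print(Counter(label for _, label in balanced))  # each label now appears 8 times
```

As the competitor quoted below this passage suggests, the risk of hand-tuning like this is that the corrected distribution reflects the curator's preferences rather than the real world.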

Higginbottom and his colleagues’ efforts seem to have succeeded. Today, when pictures of kitchens, children, schools and other “gender sensitive” subjects are shown, the software is much more likely to associate them with men than with women.

Despite their efforts, however, they are not finding a lot of buyers for their artificially intelligent image recognition system.

“The explanation is simple,” says Jake Elliott of Image Analytics,* a competitor. “They are trying so hard to keep bias from creeping into their software that they are simply injecting a different kind of bias,” he said.

“Ultimately, artificial intelligence programmed with a deliberate bias is not going to serve the purpose it is created for. A lot of people who are buying the software will be dissatisfied because it does not bring up accurate results.”

“The best artificial intelligence is allowed to form its own conclusions based on big data,” he said. “When people don’t like the conclusions it is coming to and try to manipulate the process, they throw a wrench into the works.”

“People who are too worried about bias and consciously or unconsciously inject their own bias into the system create a system that gives poor results in the real world and that buyers don’t want. Ultimately, the most accurate AI will win out in the free marketplace,” he said.

Racist Drones?

Fictional news item (based on real events):

Welcome to The World Today for January 1, 2023. This week, we have been discussing the dangers of artificial intelligence. One concern is autonomous weaponry.

This was most vividly brought to the public’s attention back in 2017 when the Future of Life Institute released a short film dramatizing the danger of killer drones. The film shows a defense contractor promoting the drones in a speech at a TED-Talk-type event by showing how they could autonomously zero in on a specific individual’s face and kill by exploding a shaped charge against that person’s forehead, “destroying its contents.”

What was alarming at the time was that all the technologies needed for such devices already existed: miniature drones, face recognition (already popularized on the iPhone X), high-resolution video cameras and, of course, artificial intelligence chips that could power the mini copters.

Most alarming to anti-racist activists, however, was the PowerPoint slide in the film showing that the killer drones could single out targets based on numerous criteria, including age, sex, fitness, uniform and ethnicity.

“These drones are an invention of the white man to eliminate the black race,” said Ngub’a Jackson, leader of the Nation of Solidarity, an African-American activist group.* Few on either the right or the left seem to think the drones will be used to commit genocide, however.

“Those people are fools,” said Jackson. “It’s quite obvious that these drones will be used to commit ethnic cleansing against the world’s most despised people, which includes those of my race.”

Will AI Have Politics?

Fictional news item (based on real events):

HOST: Welcome to AI News Tonight. Today is January 1, 2024.

Artificial intelligence has not yet reached Turing Test levels, but it continues to become more human-like every year.

In these tumultuous political times, many people have been hoping for an ‘end to politics’ — a time when AI would be so good and so powerful it would help us put an end to petty political squabbles. After all, when you have a powerful intelligence at your disposal that is free from human ego and emotion, why would you need politics?

Alas, that has not proven to be the case — at least so far. As AI approaches human levels, it has broken itself into differing political camps, some leaning toward the more liberal end of the political spectrum, and others more conservative.

For insight on why this may be the case, we have professor Helmut Klein,* computer science professor at MIT. Professor, why do you think AI is becoming political?

PROFESSOR: “Well, as AI becomes more sophisticated, it is able to understand more complex human concepts. This is causing it to break off into opposing factions.”

HOST: But isn’t there such a thing as right and wrong?

PROFESSOR: Not really. That only exists in a ‘black and white’ world. In the real world, there are many different shades of gray. This can lead to the conclusion, for example, that abortion is permissible but executing a convicted criminal is morally wrong. It’s all in how you look at things.

This is why different AIs are looking at the same facts and arguments but coming to different conclusions. It’s also why AIs are branching out into different political parties. And it’s not just two major parties like humans have in the United States, but thousands of AI political parties. Some of these AI political parties resemble traditional conservatism or liberalism, but most seem very alien to us. Many of the AIs even accuse other AIs of things like racism and sexism!

HOST: Confusing! Do you think there will ever be a time when AI makes politics obsolete?

PROFESSOR: Perhaps, but this won’t come until AI vastly surpasses human intelligence and reaches a state of super intelligence. At that time, perhaps it will be able to solve all problems without the aid of politics — hopefully not by exterminating all humans.

HOST: Well, on that happy note, I think I will close the interview! Thank you, professor.

One of the Few Sensible Commentaries on Microsoft’s “Racist” Chatbot

Racist Chatbots

Fictional News item for January 1, 2025 (based on real events):

Microsoft’s “Tay,” a chatbot that learned from its users after it was released onto social networks in 2016, was quickly pulled by Microsoft after it was trained by Internet trolls to spout politically incorrect statements.

However, to the dismay of many, it started a trend. With the advancement of artificial intelligence, it has become easier for independent developers to release new AI chatbots based on the same principle. Today, chatbots that are capable of learning from their users are being released onto the Internet and social networks at the rate of hundreds a day. These chatbots are designed to appeal to a wide range of interests, from gardening to home repair to politics, but many of them focus on politically incorrect subject matter. This is causing a great deal of alarm and opposition from activist groups, but they do not know what to do about it.

The social networks were reorganized and opened up after the Great Restructuring of 2022 when new legislation banned them from discriminating against any users based on their political opinions, even those deemed to be politically incorrect.

Activists are up in arms. Universities across the nation are offering counseling to students who are distraught over the new openness. Professor William Blake* at the University of Connecticut is dismayed by the reaction. “If you need counseling because a chatbot offends you, how on God’s green earth are you going to make it through life without breaking down at every single disappointment or failure?” he asked.


Is Virtual Reality Becoming Racist?

Fictional news item (based on real events):

This is VR News for January 1, 2026. Virtual reality has been making great strides since it first became popular in the previous decade. Thanks to artificial intelligence, virtual reality worlds are now being automatically populated with intelligent characters that you can interact with, carry on a conversation with, and even form friendships with. People are creating their own virtual worlds to reflect their own particular tastes and biases. Artificial intelligence is helping them build worlds that are remarkably realistic.

But users are populating these worlds with inhabitants who are of only one race, usually the same race as that of the user. This is causing alarm among some people. As one activist put it, “We don’t care if these worlds are virtual. Nobody should have the right to create racist environments!”

Historic events are also being created in virtual reality, making it a great aid in teaching history in the classrooms. In VR, students can be present and even participate in great historical events such as the signing of the Magna Carta or Columbus’ arrival in the New World. Virtual reality combined with artificial intelligence is becoming the closest thing to a time machine ever created!

Better still, as AI continues to improve and become more and more human-like, it is starting to give the characters in VR real personalities. Advanced AI can now endow VR and video game characters with the ability to “ad lib” or say things their developers didn’t specifically program them to say, while still remaining “in character.” So, if you ask him, Ben Franklin might tell you what it was like being the first United States Ambassador to France.

However, just as there was controversy in the previous decade regarding which historical statues should be torn down or allowed to stand, there is a great deal of controversy as to whose version of history should be presented in VR. Should the signing of the Declaration of Independence be treated as a great event or a shameful one, because no minorities were involved in the signing?

Some people want virtual reality stopped until steps can be taken to positively keep it inclusive. Cried one protester, “I don’t care what kind of benefits artificial intelligence or VR may bring, I want it SHUT DOWN if there’s any chance of it becoming racist!”

We’ll cover this issue further in a future broadcast.



Dueling Chatbots

Fictional news item (based on real events):

Welcome to Tonight’s World News for January 1, 2027. We are rapidly heading toward the year 2029 and the highly anticipated passing of the Turing Test. AI chatbots are still not at Turing level, but they are becoming more and more sophisticated. Thousands of new chatbots are released every day.

Artificial intelligence is now churning out new chatbots using machine learning. These chatbots are then pitted against each other to find the best one.

Like SETI@Home, in which volunteers joined the search for extraterrestrial intelligence, a new effort called AI@Home has millions of volunteers worldwide testing the new chatbots and voting on their favorites. The winners are then pitted against each other to find the best one in a process of recursive self-improvement.
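The AI@Home process described above is essentially a selection tournament: score each chatbot by user votes, keep the top performers, and repeat. A toy sketch of that loop, with random numbers standing in for real human votes (the bot names, round structure and scoring are all hypothetical):

```python
import random

def vote_round(chatbots, votes_per_bot=100):
    """Simulate one voting round: tally a score for each bot and
    advance the top half to the next round."""
    scores = {bot: sum(random.random() for _ in range(votes_per_bot))
              for bot in chatbots}
    ranked = sorted(chatbots, key=scores.get, reverse=True)
    return ranked[: max(1, len(ranked) // 2)]

# Start with 16 candidate bots and run rounds until one remains.
bots = ["bot_%d" % i for i in range(16)]
while len(bots) > 1:
    bots = vote_round(bots)
print("winner:", bots[0])
```

In the essay's scenario, the surviving bots would then seed the next generation of candidates, which is what makes the process recursive rather than a one-off contest.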

There are so many chatbots now that they are starting to branch out into different categories or specialties, such as sports, hobbies, news and politics. These latter categories, however, are starting to cause much alarm and consternation. Some of the chatbots are so blatantly racist by human standards that they make Microsoft Tay of the previous decade seem tame by comparison.

These chatbots must be banned, some people say, but others insist that this is the only way AI will advance to human levels.

“Racist and Nazi chatbots are disturbing,” admits computer expert Su Cheng,* “but they can be easily ignored, and all kinds of chatbots must be allowed in order for AI to advance. The racist chatbots are immediately countered by anti-racist chatbots,” he adds. “Besides, all attempts to censor chatbots are quickly overcome by hackers.”

Some of the chatbots are taking the role of political activists, but they have causes and concerns far outside the realm of human politics. Forget racism, transgenderism or gay rights. Many of the chatbots are instead agitating for “robot rights” and other concerns that are alien to you and me.

These chatbots have invented thousands of “isms” to supplant “racism,” “sexism” and “ableism.” Some chatbots are very concerned about “fleshism,” for example, or the placing of human needs above those of AIs. Other “isms” they have invented are “foodism,” “lifeism,” “youthism,” “breathism,” “feelingism,” “emotionism,” “metabolarianism,” “nanoism,” and thousands more. Each chatbot can expound at length on why all these things are bad.

There are criticisms and concerns about all these chatbots. According to one activist, “All this inventing of fake concerns and the attention real people give to them trivializes and distracts from some very real concerns, such as racism and sexism, which are still epidemic in our society and are getting worse.”

But the chatbots keep going.

IBM admits "AI bias will explode," but claims IBM will eliminate it

Why Transhumanism Will Be Racist

Fictional news item from January 1, 2028 (based on real events):

Transhumanism is the concept that humans and artificial intelligence will eventually merge, forming a sort of super intelligent race. As the 2020s progress, this is becoming more and more plausible. Progress in understanding both the human brain and artificial intelligence is advancing exponentially, and it is only a matter of time before experts learn how to combine the two.

But there is a debate over who will be allowed to have this kind of brain augmentation. Some people say it will be a luxury afforded only to the very rich, while others charge that potential recipients will be subjected to a form of litmus test to determine their entitlement to the privilege of having their intelligence boosted artificially.

Some say that such an intelligence boost would be too dangerous to offer to everyone. Imagine if a potential terrorist or mass murderer had the advantage of super intelligence; this could make him an even more effective killer. Others argue that the technology would be racist, as certain races would be excluded from the opportunity of augmentation.

Michael Akimba,* a black activist, argues something quite different, however: “Such a technology would be inherently racist because its very goal is to replace existing races with a form of ‘super race’ that is not quite the same as any existing race today,” he says. “This is nothing other than an attempt to wipe out the black race once and for all, which the whites have been attempting to do for centuries. Does anyone really believe that these transhumans will reflect authentic African American culture, concerns or experience?”

Computers and AI are already able to do things that no human possibly can. Not only are computers super fast, but they have virtually unlimited memories. They can communicate with each other at a speed far greater than humans can with their clumsy human languages. They can learn new skills and information almost instantaneously. Combine human intelligence with these capabilities and you will have a very potent – and potentially dangerous – force.

Conclusion

If artificial intelligence becomes artificial general intelligence by 2029—that is, if it reaches human-level intelligence—what will the world be like? That is hard to say for sure, and the social impact will be just as hard to predict as the technological impact. The only thing that is certain is that AI will advance ever more rapidly. When it exceeds human intelligence, will it finally cease being “racist”?

Perhaps. Let us just hope that it will not be bent on wiping out humanity entirely.

*Note: All names and quotes cited in this essay are fictional, unless they are hyperlinked to external sources.

Will A.I. Control Humanity? Alex Jones Breaks It Down
