As the technological realm becomes more pervasive, who can we trust? Each week, Liberty Nation brings new insight into the fraudulent use of personal data, breaches of privacy, and attempts to filter our perception.
According to the U.S. Declaration of Independence, “all Men are created equal.” The U.N. Declaration of Human Rights states that “All human beings are born free and equal in dignity and rights.” While this idea may not always be applied in practice, each individual can try to make it a reality. But what will happen once artificial intelligence is making many of our decisions for us? AI is increasingly used to make calculations that affect our daily lives – but whose interests are being served? Can AI be prejudiced? If so, who will be the favored groups? Will computers hate us? This week we have a discrimination special to explore these issues and more.
Will We All Be Equal According to AI?
In March, Stanford University launched its new Institute for Human-Centered Artificial Intelligence, which will work on “guiding artificial intelligence to benefit humanity.” The program states that in order for AI to serve humanity as a whole, “the creators and designers of AI must be broadly representative of humanity.” The program was quickly panned for its relative lack of black and female staff members, and while on one level that is the product of predictable social justice outrage, there are genuine concerns that AI will end up working for only select groups of users, while others get left behind.
As is typical, the social justice warriors focused only on race and gender, but what other groups have been excluded from the program, and is it even reasonable to expect every type of person to be involved in AI’s development? What about people with physical or mental disabilities, diseases, or unusual characteristics? It would seem none of these groups is represented at Stanford’s institute. While it claims to value cultural differences, how many of the poor and uneducated global masses are represented in this elite program? The stated goals are simply unrealistic. Human variation is far broader than the simple matters of skin color and sex; will AI treat all humans as equal, and is it sensible to expect that we could teach it to do so? Some are predicting that the computing age will herald a new era of social inequality, eugenics, and censorship – how will those who do not fit the “standard,” or even the “ideal,” fare in such a society? Will the artificial intelligence we increasingly rely on be capable of compassion, or of recognizing the value in the “flawed” or “different”?
Google was forced to cancel its planned AI ethics board after protests over its invitation to Kay Coles James, the right-leaning president of the Heritage Foundation think tank. The purpose of the board was to bring together people of diverse perspectives, but with today’s obsession with political correctness, there can be little question that unapproved political views will be minimized as much as possible.
Racist Computers?
Can technology be racist? Numerous studies show that AI will not judge an individual on his or her own merits but will instead fit people into stereotypical patterns – for example, judging black men to be more aggressive than white men even when they are smiling, or assessing black offenders as being at greater risk of reoffending than white ones. As reported by the Daily Mail, “There is good reason to believe that the use of facial recognition could formalize preexisting stereotypes into algorithms, automatically embedding them into everyday life.”
A Russian-developed program designed to judge a beauty contest made headlines in 2016 when it picked nearly all-white winners and only one finalist with dark skin. Far from the “objective” measure of attractiveness that was expected, the program had merely learned to value Caucasian faces because it had been exposed to more Caucasian samples during its development – unsurprising in a majority-white country.
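That outcome is a textbook case of dataset skew: a model trained mostly on one kind of example comes to treat that kind as the norm. The short Python sketch below illustrates the mechanism in miniature – the single “tone” feature, the group proportions, and the scoring rule are all invented for illustration and are not taken from the contest’s actual software.

```python
import random

# Toy illustration of dataset skew - not the contest's actual system.
# Each "face" is reduced to a single invented feature (tone on a 0-1 scale).
random.seed(0)

# Hypothetical training set: 90% of samples from group A, 10% from group B.
group_a = [random.gauss(0.2, 0.05) for _ in range(900)]   # lighter tones
group_b = [random.gauss(0.8, 0.05) for _ in range(100)]   # darker tones
training_faces = group_a + group_b

# "Training" here just means learning the average face seen so far;
# new faces are then scored by how close they sit to that learned average.
learned_ideal = sum(training_faces) / len(training_faces)

def score(face):
    return 1.0 - abs(face - learned_ideal)

print(f"learned ideal tone: {learned_ideal:.2f}")   # pulled toward group A (~0.26)
print(f"group A face score: {score(0.2):.2f}")      # scores high
print(f"group B face score: {score(0.8):.2f}")      # scores low, purely from skew
```

Nothing in that sketch “decides” to prefer one group; the preference falls straight out of what the training data contained.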
Facial recognition software has been far less adept at correctly identifying black faces than lighter-skinned ones. Much to Google’s embarrassment, its image recognition technology has mislabeled black faces as apes or gorillas, struggling to deal with darker hues. In a more life-threatening example, research from Georgia Tech suggested that “machine vision” systems such as those used in driverless cars may be less likely to recognize darker-skinned pedestrians than lighter-skinned ones – although, fortunately, such systems are not yet in widespread use.
Police technology has also been particularly unreliable at correctly identifying black faces, raising concerns over false accusations and faulty evidence. The ACLU has been campaigning against facial recognition as a policing tool, claiming it will result in “discriminatory surveillance.” While the ACLU seems to imply that racial discrimination will be intentional on the part of law enforcement, the larger question is whether humans will even have control over the decisions or biases of AI machines.
As Alex Hern comments at The Guardian, “Such technologies are frequently described as a ‘black box,’ capable of producing powerful results, but with little ability on the part of their creators to understand exactly how and why they make the decisions they do.” If scientists are struggling to understand their creations now, while the technology is in its infancy, how will they manage once it becomes more sophisticated?
Hateful AI
It has been assumed that biased artificial intelligence merely reflects human prejudices unconsciously projected onto the technology – but that may not be the whole story. According to a new study published in the Nature-group journal Scientific Reports, computers might decide they hate you, or any group of creatures, with no human input necessary. Researchers at Cardiff University and MIT ran simulations in which artificial intelligence agents interacted with one another and developed prejudices by “simply identifying, copying and learning this behaviour from one another.” In a press release, the researchers said the study “demonstrates the possibility of AI evolving prejudicial groups on their own,” and that a high cognitive level was not necessary for this process to occur.
Co-author of the study Professor Roger Whitaker said:
“By running these simulations thousands and thousands of times over, we begin to get an understanding of how prejudice evolves and the conditions that promote or impede it. It is feasible that autonomous machines with the ability to identify with discrimination and copy others could in future be susceptible to prejudicial phenomena that we see in the human population…
Many of the AI developments that we are seeing involve autonomy and self-control, meaning that the behaviour of devices is also influenced by others around them. Vehicles and the Internet of Things are two recent examples. Our study gives a theoretical insight where simulated agents periodically call upon others for some kind of resource.”
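For readers curious how prejudice can emerge from imitation alone, the sketch below is a heavily simplified, hypothetical version of that kind of setup – not the researchers’ actual model, which relied on reputation and indirect reciprocity. Here, agents simply donate resources to one another and periodically copy the strategy of whichever agent is doing best; no bias is programmed in, yet a refusal to help outsiders can spread through copying alone.

```python
import random

# Simplified sketch of prejudice spreading by imitation alone -
# not the researchers' model, which used reputation and indirect reciprocity.
random.seed(1)

N, ROUNDS, COST, BENEFIT = 40, 500, 1.0, 0.6   # donating costs the donor

# Each agent belongs to one of two groups and has a "prejudice" level:
# the probability of refusing to donate to an out-group member.
agents = [{"group": i % 2, "prejudice": random.random(), "payoff": 0.0}
          for i in range(N)]

for _ in range(ROUNDS):
    donor, recipient = random.sample(agents, 2)
    same_group = donor["group"] == recipient["group"]
    if same_group or random.random() > donor["prejudice"]:
        donor["payoff"] -= COST
        recipient["payoff"] += BENEFIT
    # Imitation step: a random agent copies the strategy of the current top scorer.
    best = max(agents, key=lambda a: a["payoff"])
    random.choice(agents)["prejudice"] = best["prejudice"]

avg = sum(a["prejudice"] for a in agents) / N
print(f"average prejudice after {ROUNDS} rounds of copying: {avg:.2f}")
```

The point of the exercise is that no one tells the agents to discriminate; whatever strategy happens to pay off gets copied, and discrimination can be one of those strategies.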
There are already signs that some computers may have racial or sexist biases programmed inadvertently into them, but who knows what kind of prejudices computers might independently develop that we humans haven’t even considered yet? Who can guess how they will perceive, value, measure, or choose to group people, or any other species on this Earth? Could some or all of us find ourselves victims of computer discrimination in a society that is becoming ever more intertwined with and reliant on these technologies?
That’s all for this week’s edition of You’re Never Alone. Check back in next Monday to find out what’s happening in the digital realm and how it impacts you.