
Dealing with bias in artificial intelligence

The link between artificial intelligence (AI) and bias is alarming. As AI becomes more and more human-like, it is becoming increasingly clear that human bias can affect the technology in negative, potentially dangerous ways.

It is therefore important to examine how artificial intelligence and bias are linked, and what researchers and developers are doing to reduce the impact of bias in AI applications. There are three basic questions to ask about artificial intelligence and bias:

1. How bias in artificial intelligence affects automated decision-making systems

AI makes decisions on small issues, such as restaurant recommendations, and on critical ones, such as determining which patient receives an organ donation. While the stakes differ, human bias in AI decisions influences outcomes either way: poor product recommendations affect retailers' profits, while medical decisions can have a direct impact on the lives of individual patients.

Vincent C. Müller addresses artificial intelligence and bias in Ethics of Artificial Intelligence and Robotics (The Stanford Encyclopedia of Philosophy, Summer 2021). According to Müller, fairness is a primary concern in policing: the data sets police use to decide, for example, where to focus patrols or which prisoners are likely to reoffend contain human biases.

This type of “predictive policing”, Müller argues, relies heavily on data shaped by cognitive biases, especially confirmation bias, even when the bias is implicit and unknown to the human programmers.
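To see how such a feedback loop can entrench an initial skew even when no one intends it, consider a minimal simulation (hypothetical numbers, not from Müller's article). Two districts have identical true incident rates, but one starts with slightly more recorded incidents; patrols follow the historical record, and incidents are only recorded where patrols go:

    import random

    random.seed(0)

    TRUE_RATE = 0.5               # identical true incident rate in both districts
    records = {"A": 55, "B": 45}  # district A starts with a small recording surplus

    for day in range(50):
        # Confirmation-bias-style dispatching: the district with the larger
        # historical record receives the larger share of the 100 daily patrols.
        top = max(records, key=records.get)
        allocation = {d: (70 if d == top else 30) for d in records}
        for district, patrols in allocation.items():
            # Incidents are only recorded where officers are present, so the
            # heavily patrolled district keeps generating more "evidence".
            records[district] += sum(
                1 for _ in range(patrols) if random.random() < TRUE_RATE
            )

    print(records)  # A's record pulls far ahead despite identical true rates

The data ends up "confirming" the initial skew, because the system never observes the incidents it does not patrol for.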

Christina Pazzanese refers to the work of Michael Sandel, a political philosopher and professor of government, in her Harvard Gazette article “Great promise but potential for peril”.

“The appeal of algorithmic decision-making lies partly in the fact that it seems to offer an objective way of overcoming human subjectivity, bias, and prejudice. But we are discovering that many of the algorithms that decide, for example, who gets parole, or who gets a job or a housing opportunity … replicate and embed the biases that already exist in our society,” said Sandel.

2. Why bias exists in artificial intelligence

To figure out how to eliminate or at least reduce bias in AI decision-making platforms, we need to examine why it exists at all.

Take, for example, the story of a chatbot trained in 2016. The chatbot was set up to hold conversations on Twitter and to connect with users via tweets and direct messages. In other words, the general public played a large role in defining the chatbot's “personality”. Within hours of its release, the chatbot was responding to users with offensive, racist messages, because it was trained on anonymous public data that a group of users immediately co-opted.
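A minimal sketch (hypothetical, not the actual chatbot's code) shows why learning directly from unfiltered user input is risky: a bot that simply stores user phrases and echoes them back can be steered by a coordinated group within minutes.

    import random

    class EchoBot:
        """Toy chatbot that "learns" by storing every user message and
        replying with a randomly chosen phrase it has seen before."""

        def __init__(self):
            self.memory = ["hello!"]

        def chat(self, user_message: str) -> str:
            # No filtering or moderation: every input becomes training data.
            self.memory.append(user_message)
            return random.choice(self.memory)

    bot = EchoBot()
    # A coordinated group floods the bot with one kind of message...
    for _ in range(100):
        bot.chat("some hostile slogan")
    # ...and its replies are now dominated by that content.
    print(bot.chat("hi there"))  # almost certainly: "some hostile slogan"

A real chatbot's learning is far more sophisticated, but the failure mode is the same: whoever controls the input stream shapes the model.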

That chatbot was deliberately and heavily influenced, but the influence is often far less obvious. In their joint Harvard Business Review article, What Do We Do About the Biases in AI?, the authors note that human biases, even ones people do not know they hold, can significantly affect artificial intelligence.

According to the article, biases can creep into algorithms in several ways. They may stem from biased human choices, or they may reflect “historical or social inequalities, even if sensitive variables such as gender, race, or sexual orientation are removed.” As an example, the researchers point to Amazon, which stopped using a recruitment algorithm after finding that it favored applicants based on words such as “executed” or “captured,” which were more common in men's resumes.
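A minimal sketch (synthetic data, hypothetical feature names) illustrates the mechanism: even when the sensitive column is dropped, a model can learn the same bias through a correlated proxy feature, much like the resume wording in the Amazon example.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    n = 10_000

    # Synthetic hiring data: "gender" is the sensitive attribute, and the
    # resume word is a proxy that correlates strongly with it.
    gender = rng.integers(0, 2, n)
    uses_word_executed = (rng.random(n) < np.where(gender == 1, 0.8, 0.1)).astype(int)
    skill = rng.random(n)

    # Historical labels are biased: past hiring favored gender == 1.
    hired = (skill + 0.3 * gender + rng.normal(0, 0.1, n) > 0.7).astype(int)

    # Train WITHOUT the sensitive column: only skill and the proxy word.
    X = np.column_stack([skill, uses_word_executed])
    model = LogisticRegression().fit(X, hired)

    # The model still treats the two groups differently, via the proxy.
    for g in (0, 1):
        rate = model.predict(X[gender == g]).mean()
        print(f"predicted hire rate, gender={g}: {rate:.2f}")

Removing the sensitive variable is therefore not enough; the bias survives in whatever features correlate with it.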

Flawed data sampling is another concern, writes the trio: groups can be over- or under-represented in the training data that teaches AI algorithms to make decisions. For example, facial-analysis technologies examined by MIT researchers Joy Buolamwini and Timnit Gebru showed higher error rates for minorities, especially minority women, potentially due to their underrepresentation in the training data.
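What Buolamwini and Gebru's audit demonstrates in practice is disaggregated evaluation: measuring error rates per demographic group rather than reporting a single aggregate number. A minimal sketch, with made-up predictions and group labels:

    from collections import defaultdict

    # Hypothetical evaluation records: (true_label, predicted_label, group).
    results = [
        (1, 1, "lighter-skinned men"), (0, 0, "lighter-skinned men"),
        (1, 1, "lighter-skinned men"), (1, 1, "lighter-skinned men"),
        (1, 0, "darker-skinned women"), (0, 1, "darker-skinned women"),
        (1, 1, "darker-skinned women"), (1, 0, "darker-skinned women"),
    ]

    stats = defaultdict(lambda: [0, 0])  # group -> [wrong, total]
    for true, pred, group in results:
        stats[group][0] += int(true != pred)
        stats[group][1] += 1

    # A single aggregate accuracy would hide the gap; per-group rates expose it.
    for group, (wrong, total) in stats.items():
        print(f"{group}: error rate {wrong / total:.0%}")

On this toy data the aggregate error rate is 38%, while the per-group rates are 0% and 75%: exactly the kind of gap an aggregate score conceals.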

3. How to reduce bias in artificial intelligence

In the McKinsey Global Institute article Tackling bias in artificial intelligence (and in humans), Jake Silberg and James Manyika outline six guidelines that AI creators can follow to reduce bias in AI:

  • Be aware of the contexts in which AI can help correct bias, and those in which there is a high risk that AI will exacerbate it.
  • Establish processes and practices to test for and mitigate bias in AI systems (a minimal sketch of such a test follows this list).
  • Engage in fact-based conversations about potential biases in human decisions.
  • Fully explore how humans and machines can best work together.
  • Invest more in bias research, make more data available for research while respecting privacy, and take a multidisciplinary approach.
  • Invest more in diversifying the field of artificial intelligence itself.

The researchers acknowledge that these guidelines will not eliminate bias completely, but if applied consistently, they can significantly improve the situation.
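As one concrete example of the second guideline, a team might add an automated check to its test suite that flags a model whose positive-prediction rates diverge too far between groups. A minimal sketch (hypothetical data and tolerance; demographic parity is just one of several fairness metrics):

    def demographic_parity_gap(predictions, groups):
        """Largest difference in positive-prediction rate between any two groups."""
        rates = {}
        for g in set(groups):
            preds = [p for p, grp in zip(predictions, groups) if grp == g]
            rates[g] = sum(preds) / len(preds)
        return max(rates.values()) - min(rates.values())

    # Hypothetical model outputs for one batch of applicants.
    predictions = [1, 0, 1, 1, 0, 1, 0, 0, 0, 0]
    groups      = ["a", "a", "a", "a", "a", "b", "b", "b", "b", "b"]

    gap = demographic_parity_gap(predictions, groups)
    print(f"parity gap: {gap:.2f}")
    if gap > 0.2:  # tolerance the team would choose for its own context
        print("bias check failed: review the model before deployment")

Such a check does not prove a model fair, but it turns "test for bias" from a slogan into a repeatable step in the release process.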
