Algorithmic bias occurs when an algorithm, or a set of rules used by a computer to make decisions, produces unfair or prejudiced results. Here are a few real-world examples:
- Gender Bias in Resume Screening: In 2018, Amazon scrapped an experimental hiring tool that favored male candidates over female ones, reportedly penalizing resumes that contained the word "women's." The algorithm learned this bias from historical hiring data reflecting a male-dominated workforce, and unintentionally perpetuated it.
- Racial Bias in Facial Recognition: Facial recognition systems have repeatedly been shown to be less accurate for people with darker skin tones; the 2018 Gender Shades study found that commercial gender-classification systems misclassified darker-skinned women far more often than lighter-skinned men. A key cause is training data dominated by lighter-skinned faces.
- Biased Healthcare Algorithms: A 2019 study revealed that a healthcare algorithm widely used in the US was biased against Black patients. Because the algorithm used past healthcare spending as a proxy for medical need, and historically less money had been spent on Black patients, it assigned them lower risk scores despite greater health needs, reducing their access to vital care-management programs.
- Unfair Sentencing in Criminal Justice: The COMPAS algorithm, used by US courts to assess the risk of re-offending, was found in a 2016 ProPublica analysis to be biased against Black defendants: they were far more likely than white defendants to be incorrectly labeled high-risk, even when their criminal histories were similar.
In each case, the bias stemmed from flawed or incomplete data, historical inequities encoded in that data, or a lack of diverse representation in it. To address these issues, it is crucial to ensure that training data is representative and diverse, and to routinely test and audit algorithms for fairness.
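One common starting point for such an audit is to compare a model's positive-outcome rates across demographic groups. The sketch below computes per-group selection rates and the disparate impact ratio (the "four-fifths rule" used in US employment law as a rough screening threshold); the group labels and predictions are hypothetical, purely for illustration:

```python
# Minimal fairness-audit sketch: compare positive-prediction rates across
# groups and flag possible disparate impact. Data below is hypothetical.

def selection_rates(predictions, groups):
    """Fraction of positive predictions (1s) for each group."""
    rates = {}
    for g in set(groups):
        preds = [p for p, grp in zip(predictions, groups) if grp == g]
        rates[g] = sum(preds) / len(preds)
    return rates

def disparate_impact(rates):
    """Ratio of lowest to highest group selection rate.

    A ratio below 0.8 is a conventional warning sign (four-fifths rule),
    not proof of unlawful bias on its own.
    """
    return min(rates.values()) / max(rates.values())

# Hypothetical screening outcomes: 1 = selected, 0 = rejected
preds  = [1, 1, 1, 1, 0, 1, 1, 0, 0, 0]
groups = ["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"]

rates = selection_rates(preds, groups)   # {'A': 0.8, 'B': 0.4}
ratio = disparate_impact(rates)          # 0.4 / 0.8 = 0.5, below 0.8
print(rates, round(ratio, 2))
```

A real audit would go further, checking error rates (false positives and false negatives) per group as the COMPAS analysis did, not just selection rates, since a model can look balanced on one metric while failing another.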