AI isn’t unbiased because humans are biased

Unlike humans, computers do not have subjective views on, for example, caste, religion, race, gender and sexuality | Illustration - Eunice Dhivya

“Machine Bias”, screamed the headline. The tagline read: “There’s software used across the country to predict future criminals. And it’s biased against blacks.”

In a revealing exposé in 2016, ProPublica, a US-based Pulitzer Prize-winning non-profit news organisation, analysed COMPAS, software used by US courts and police to forecast which criminals are most likely to re-offend, and found it biased against African Americans.

Guided by inputs from an algorithm, police and judges in America made decisions on defendants and convicts, determining everything from bail amounts to sentences. The report concluded that COMPAS was twice as likely to falsely label black defendants as future criminals as it was white defendants.

Further, it was more likely to falsely label white defendants as low risk. In other words, more of the supposedly high-risk black defendants never committed another crime, while more of the supposedly low-risk white defendants did. The allegedly unbiased algorithm was demonstrably unfair.
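The two findings correspond to two standard error rates, computed separately for each group: the false positive rate (the share of people who never re-offended but were flagged high risk) and the false negative rate (the share of re-offenders who were cleared as low risk). The minimal Python sketch below illustrates the pattern with entirely hypothetical numbers; they are not ProPublica's figures.

    # A minimal sketch of the two error rates at issue, using made-up
    # illustrative numbers, not ProPublica's actual COMPAS data.
    records = [
        # (group, flagged_high_risk, re_offended)
        ("black", True,  False),
        ("black", True,  False),
        ("black", True,  True),
        ("black", False, True),
        ("white", True,  True),
        ("white", False, True),
        ("white", False, True),
        ("white", False, False),
    ]

    def error_rates(group):
        rows = [r for r in records if r[0] == group]
        did_not_reoffend = [r for r in rows if not r[2]]
        did_reoffend = [r for r in rows if r[2]]
        # False positive rate: share of non-re-offenders wrongly flagged high risk.
        fpr = sum(r[1] for r in did_not_reoffend) / len(did_not_reoffend)
        # False negative rate: share of re-offenders wrongly cleared as low risk.
        fnr = sum(not r[1] for r in did_reoffend) / len(did_reoffend)
        return fpr, fnr

    for group in ("black", "white"):
        fpr, fnr = error_rates(group)
        print(f"{group}: false positive rate {fpr:.0%}, false negative rate {fnr:.0%}")

With these toy numbers, black defendants who never re-offended are flagged far more often than white defendants, while white re-offenders are cleared as low risk more often: the mirror-image disparity the report described.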
