The author draws on her decade of experience writing about AI to alert us to the horrors that humans have unleashed while getting technology to mimic human intelligence.

In her book, which has been shortlisted for the inaugural Women’s Prize for Non-Fiction, London-based journalist Madhumita Murgia also showcases how AI can be put to good use.


What is the big deal about Artificial Intelligence (AI)? Why are activists so hell-bent on smelling a conspiracy in something as commonplace and convenient as facial recognition technology? Where is the evidence to back the claim that automation disproportionately affects marginalised communities? London-based journalist Madhumita Murgia has written a banger of a book, Code Dependent: Living in the Shadow of AI, that will speak to anyone whose head is buzzing with these questions and who is genuinely interested in hearing the answers.

Published by Pan Macmillan, Murgia’s book has been shortlisted for the inaugural Women’s Prize for Non-Fiction in 2024. The honour is well-deserved because the author draws on her decade of experience writing about AI and the social impacts of technology to alert us to the horrors that humans have unleashed while getting technology to mimic human intelligence. And she does this through empathetic storytelling instead of drowning us in a sea of statistics.

The deepfake menace: Women are the main victims

Take, for instance, the story of poet, novelist and memoirist Helen Mort from Sheffield in the United Kingdom, who was shocked to discover photographs of herself on a pornography website. Pictures from her Facebook and Instagram accounts were scraped without consent, and then manipulated digitally to create deepfakes that presented her as a victim of gangrape.

Murgia writes, “Deepfakes aren’t unintended consequences of AI. They are tools knowingly twisted by individuals trying to cause harm.” Neither the website nor the police offered Mort any support. What made things worse was the realization that “something so violating, so unfair, was perfectly legitimate in the eyes of the law”. This took a toll on her mental health.

Mort began to wonder which man in her life might have doctored the images to humiliate her. The episode made her suspicious of all the men around her, and poisoned her relationships. She found it tough to fall asleep because she had frequent nightmares about the deepfakes. This is not an isolated incident. During her research, Murgia stumbled upon Henry Ajder’s disturbing study tracking deepfakes for Sensity AI. It shows that, in 2019, roughly 95 percent of online deepfake videos were non-consensual pornography, all of which featured women.

This book will make you think twice, thrice or perhaps a dozen times before you casually post thirst trap selfies on social media to flaunt your physique or a new outfit. You never know which creep might use them as raw material to create deepfakes that haunt you later. This thought is scary because the Internet has become an integral part of our everyday lives, and we are habituated to sharing personal images without thinking about the consequences.

One hopes that, in a future edition of the book, Murgia will also write about the impact of deepfakes on the lives of LGBTQIA+ people, because many of them who do not have access to safe spaces offline use the Internet to find a sense of community and to look for partners.

Other AI targets: Migrants, people on the margins

Apart from women, AI has been scripting disasters for religious, ethnic and racial minorities in various parts of the world. In 2021, a civil rights activist named S.Q. Masood found the Hyderabad police using surveillance cameras to build a face database of citizens. Tellingly, this exercise was being carried out only in slums and not in wealthy neighbourhoods, which meant that the residents targeted were largely Muslims and Dalits.

Masood realized that the technology being deployed against Black people in the United States and Uyghurs in China was being used against Muslims in Hyderabad. Murgia writes, “Minorities, in this case religious, (were) being identified at scale, like criminals, with no legitimate underlying rationale… It took away their ability to roam freely, and gather in groups, without scrutiny.” They were made to feel like criminals without having committed a crime.

The saddest part, as Murgia reveals through her investigations, is that Western corporations building AI systems often exploit migrants and refugees fleeing war or persecution as cheap labour. Once these people sign up to learn digital skills that can be monetized, they are made to work as content moderators, screening violent content that traumatizes them for years. Yet they cannot talk about it openly because of non-disclosure agreements.

How ‘data colonialism’ operates

Researching this book took the author to Nairobi, where she visited the office of a non-profit organization contracted by Meta to employ hundreds of content moderators whose job was “to tag, categorize and remove illicit and distressing content” from Facebook and Instagram. Murgia was not permitted to enter the actual working floor where these data workers “watched bodies dismembered from drone attacks, child pornography, bestiality, necrophilia and suicides” and filtered them out. Continued exposure to such imagery led to nightmares, and some of these workers were on antidepressants. “Others had drifted away from their families, unable to bear being near their own children any longer,” adds Murgia.

The author does an excellent job of tearing down the shiny façade of technological progress that keeps us from seeing the murky back story of human rights violations. She points out that, in Argentina, Microsoft had been building “algorithms to predict which girls were likely to get pregnant in their teens” so that the local government could direct resources to those families and help prevent these pregnancies. Disturbingly, the AI model did not take into account the fact “that the pregnancies could be a result of rape”. Moreover, the data collection process did not include boys or men, thus placing the blame for unwanted pregnancies on girls alone.

This book provides several such examples to show how “data colonialism” operates. In a world where data is wealth, companies headquartered in Silicon Valley are turning to developing countries to hire data workers at low wages and build cheap datasets that can be used to train AI. They sidestep concerns around privacy and consent by partnering with local governments that show scant regard for human dignity.

While this book is weighed down by despair, it never degenerates into pessimism. Murgia also celebrates the ingenuity of people like Armin Samii, the son of Iranian immigrants to the US, who coded an algorithm-auditing tool named UberCheats in response to the frustration he felt while working as a courier for UberEats. After several attempts “to get a human being at Uber to explain discrepancies in his wages”, he created a web application “to extract GPS coordinates from receipts, then calculate how many miles a courier had actually travelled, compared to what Uber claimed they had.” Samii made this app free to use.
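For readers curious about the mechanics, the core of such an audit is a simple distance check. The Python sketch below is not Samii’s actual code, and the waypoints and claimed mileage are hypothetical; it sums great-circle (haversine) distances between the GPS points on a receipt and flags a trip if the platform’s claimed mileage falls short of the distance actually travelled.

    # A minimal sketch of the kind of check UberCheats performed; this is not
    # Samii's actual code, and the trip data below is hypothetical.
    import math

    def haversine_miles(lat1, lon1, lat2, lon2):
        # Great-circle distance in miles between two (latitude, longitude) points.
        r = 3958.8  # Earth's mean radius in miles
        dlat = math.radians(lat2 - lat1)
        dlon = math.radians(lon2 - lon1)
        a = (math.sin(dlat / 2) ** 2
             + math.cos(math.radians(lat1)) * math.cos(math.radians(lat2))
             * math.sin(dlon / 2) ** 2)
        return 2 * r * math.asin(math.sqrt(a))

    def trip_miles(waypoints):
        # Sum the leg-by-leg distances along the GPS points from a receipt.
        return sum(haversine_miles(*a, *b)
                   for a, b in zip(waypoints, waypoints[1:]))

    # Hypothetical trip: three GPS waypoints and the mileage the platform claimed.
    waypoints = [(40.4406, -79.9959), (40.4520, -79.9800), (40.4640, -79.9610)]
    claimed_miles = 1.2

    actual_miles = trip_miles(waypoints)
    if actual_miles > claimed_miles:
        print(f"Possible underpayment: travelled {actual_miles:.2f} miles, "
              f"paid for {claimed_miles:.2f}")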

Living with AI

Apart from being a must-read for anyone interested in the ethical conundrums unleashed by AI, Murgia’s book is also a terrific resource for readers who want to draw inspiration from people all over the world using their knowledge, skills, imagination and networks to fight back. They refuse to sit quietly while corporations treat human beings as resources rather than rights-bearing entities. They remind us that we cannot remain blissfully unaware of the monstrosities our labour contributes to, even if we are mere cogs in a big machine.

A great example is human rights lawyer Cori Crider, whose non-profit Foxglove took the UK Home Office to court for using “a secretive visa-awarding algorithm that they believed was discriminating against applicants on the basis of nationality.” The software sorted visa applications into different priority streams based on levels of perceived risk. Foxglove called it “speedy boarding for white people”.

A thorough reading of this book makes it clear that Murgia is not anti-AI; as a journalist, she has taken on the task of documenting how it is upending the lives of the most disadvantaged people in the world. Yet she does not pass up the opportunity to showcase how AI can be, and has been, put to good use. Through the stories of Ashita Singh in Maharashtra and Ziad Obermeyer in California, Murgia emphasizes how AI can help doctors improve their decision-making when they use it as an assistive tool, not a substitute for human agency.

The book does not claim to be the last word on the subject. It invites us to engage in some soul-searching, aided by a checklist of questions that appears at the end. After all, AI cannot be banished from the world; it is here to stay. We need to find constructive ways to address exploitation, reduce inequities, and support people who speak up about discrimination.
