
IIIT Hyderabad to set up digital mentor for adolescent girls

Natural Language Processing (NLP) researchers at the International Institute of Information Technology Hyderabad (IIITH) have joined hands with social scientists in a first-of-its-kind multi-disciplinary project that seeks to build a digital mentor, shielding adolescent girls from online toxicity while promoting their emotional well-being.

The age of adolescence is a trying one. Ask the parents! But seriously, it’s a deadly concoction of raging hormones and a roller coaster of emotions. Research has long revealed how self-esteem takes a hit during this time, in turn affecting academic performance and other social relationships.

During casual interactions with some city-based highschoolers, IIITH researchers discovered a similar pattern among adolescents, girls in particular, that translated into a gradual decline in academic grades.


But what they found even more alarming was the high incidence of depression among these pre-teens. Informal surveys revealed disturbing statistics: around 35-40% of pre-teen girls reported feeling low and unhappy, while 25-30% said that they were depressed and often sought help. “Around 20% of them were actually clinically depressed, necessitating medical intervention,” says Prof. Vasudeva Varma, head of the Information Retrieval and Extraction Lab.

Further studies to probe possible causes revealed that, apart from spending time at home and school, adolescent girls spent more and more time in the cyber world. With all the trappings of anonymity, the online world serves as a refuge for most of these young girls.

“They are often subject to bold interactions, trolled, cyber-bullied and body-shamed leading to depression,” says Prof. Varma. And with the prevalence and proliferation of social media, coupled with the growing reluctance among youngsters to share information of their online (mis)adventures, it would seem that we are sitting on a ticking time bomb.

Project Angel

When the researchers brainstormed to find a solution to aid troubled teens, they realised that the problem was more complicated than expected, necessitating a collaborative effort across disciplines such as the social and cognitive sciences, in addition to natural language processing.

The name of the project itself draws inspiration from Prof. Raj Reddy, Turing Award winner, one of the pioneers of AI, founding director of the Robotics Institute at Carnegie Mellon University and chairman of the IIIT Hyderabad Governing Council.


In several of his talks on the future of AI, Prof. Reddy predicts the advent of personalized or ‘guardian angel’ apps that will spring into action to assist you when something goes wrong: an app that takes over when an airplane is crashing and corrects its course, or one that steps in to fix the error in a malfunctioning nuclear reactor.

With the focus of the project on the emotional wellbeing of adolescent girls, the idea here is to have a resident ‘angel’ in smart devices that steps in to protect them from online toxicity and steer them towards positivity with appropriate reading recommendations and so on. With an anthropologist and an Adobe researcher specialising in affective computing on board, this multi-disciplinary project seeks to bring in novel insights and analyses not just of language on social media, but also of images posted and shared online.

Identifying Toxicity

The Information Retrieval and Extraction Lab at IIITH has always believed in tackling real-world problems and finding appropriate solutions to them. In its social media analysis work, handling toxic online content is one such problem.

“We’ve been involved in hate-speech recognition for long and recently, there was a project on sexism too. In fact, the sexism project arose as an off-shoot of the hate-speech project to figure out what fine-grained categories of sexist comments exist online,” says Prof. Varma. He and his students developed deep neural networks that not only detected online sexist comments but also automatically labelled and categorized them.
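The article does not describe the lab's actual architecture or its category labels, so as a rough illustration only, the overall detect-then-categorize pipeline can be sketched with a deliberately simple keyword baseline. The category names and keyword lists below are hypothetical placeholders, not the lab's taxonomy, and a real system would use trained neural classifiers instead:

```python
# Toy baseline illustrating a detect-then-categorize pipeline for
# toxic/sexist comments. The labels and keyword lists are invented
# placeholders; real systems use trained deep neural networks.

CATEGORY_KEYWORDS = {
    "body_shaming": {"fat", "ugly", "looks"},
    "role_stereotyping": {"kitchen", "housework"},
    "dismissal": {"emotional", "hysterical"},
}

def categorize(comment: str) -> list[str]:
    """Return every fine-grained category whose keywords appear in the comment."""
    tokens = set(comment.lower().split())
    return [label for label, words in CATEGORY_KEYWORDS.items()
            if tokens & words]

def is_toxic(comment: str) -> bool:
    """Flag a comment if it matches at least one category."""
    return bool(categorize(comment))
```

For example, `categorize("she is too emotional for this job")` returns `["dismissal"]`, while an innocuous comment matches nothing and is not flagged.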

With the inputs of social scientists like Prof. Radhika Krishnan, the team was able to come up with new insights, not just from a technological point of view but also from a social sciences perspective, that will help social scientists and policy-makers study and counter sexism better.

NLP Tool Sets

While looking around for similar well-being efforts elsewhere, the Project Angel team discovered an initiative carried out by the University of Pennsylvania, known as the World Well-being Project (WWBP). The WWBP is a collaborative effort between computer scientists, psychologists, and statisticians who are working on unobtrusive well-being measures by analyzing language in social media.

In a similar fashion, the Project Angel team has begun working on fundamental building blocks such as detecting social biases online and toxicity in the form of body-shaming, the presence of echo chambers, a lack of inclusivity and sexual harassment, among many others.
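The article does not say how the team measures echo chambers, but one crude signal such a building block might compute is, for each user, the fraction of their replies addressed to users who share their own stance. The stance labels and interactions below are invented toy data, purely for illustration:

```python
# Sketch of one possible echo-chamber signal: how often a user's
# replies go to people holding the same stance. Stances and replies
# here are invented toy data, not Project Angel's method.

def echo_score(user, stance, replies):
    """Fraction of `user`'s replies that target same-stance users."""
    targets = [t for src, t in replies if src == user]
    if not targets:
        return 0.0
    same = sum(1 for t in targets if stance[t] == stance[user])
    return same / len(targets)

# Toy data: "a" and "b" share a stance, "c" holds the opposite one.
STANCE = {"a": "pro", "b": "pro", "c": "anti"}
REPLIES = [("a", "b"), ("a", "c"), ("b", "a")]
```

Here `echo_score("b", STANCE, REPLIES)` is 1.0 (all of b's replies stay inside the same-stance group), while a's score is 0.5; a score near 1.0 across many users would suggest an echo chamber.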


“We are essentially building NLP toolsets or models to understand the language of teens across continents, such as Australia and the US because not only is the problem prevalent everywhere but also data from these countries is readily available,” says Prof. Varma.

The team is also working on finding the right points of intervention to bring the required positivity into any of the platforms used by teens. “What this means is that if positive messages are found on Twitter, we are trying to transfer that into an Instagram message so that the message will come from the ‘angel’ present on the Instagram network,” explains Prof. Varma.

Anthropological Insights

Prof. Nimmi Rangaswamy calls herself a human-computer interaction anthropologist. Remarking that working on this project has been a sort of dream come true, she says, “Typically computer scientists use texts and other content that is amenable to machine learning and not the other way around. That is, they don’t look at data because data is significant but they will use data because it can be machine-read! I like the idea of pushing the boundaries of machine learning, probably making it heed some kind of social science logic and some kind of data that is more than machine-readable.”

For Project Angel, Prof. Rangaswamy’s focus is on Instagram. “The young people have all moved on from Facebook; WhatsApp is end-to-end encrypted (though a lot of toxicity is happening there as well), and hence Instagram is the next big thing,” she says.

For starters, Prof. Rangaswamy’s students are following a curated bunch of influencers on Instagram and trying to analyse their posts and the type of comments they are attracting.
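The article does not detail how those comments are analysed; as a hedged sketch of what bucketing the tone of comments under a post could look like, the tiny word lists below stand in for what would, in practice, be trained sentiment and toxicity models:

```python
from collections import Counter

# Tiny illustrative lexicons; real analysis would use trained models
# rather than hand-picked word lists.
POSITIVE = {"love", "beautiful", "inspiring", "great"}
NEGATIVE = {"ugly", "hate", "fake", "gross"}

def label_comment(comment: str) -> str:
    """Label one comment by which lexicon it overlaps with more."""
    tokens = set(comment.lower().split())
    pos, neg = len(tokens & POSITIVE), len(tokens & NEGATIVE)
    if pos > neg:
        return "positive"
    if neg > pos:
        return "negative"
    return "neutral"

def summarize(comments: list[str]) -> Counter:
    """Count comment tones under a single post."""
    return Counter(label_comment(c) for c in comments)
```

For a post attracting `["love it", "so fake", "ok"]`, `summarize` counts one positive, one negative and one neutral comment, giving a per-post tone profile that could be tracked per influencer over time.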

The Right Cause

In keeping with the institute’s policy of research with societal impact, the project aims to assist troubled adolescent girls. All the stakeholders involved in this project are currently working on a voluntary basis.

“There are many problems that we have to solve. It doesn’t fit into anyone’s specific agenda as such, not the corporates or the government. But hopefully someone will connect to this malaise,” says Prof. Varma.
