Online harassment of women has long been a recurring problem on social media platforms. According to a Web Foundation survey, one in two girls reports receiving threatening messages, unsolicited private images, or both online.
On a mission to create a safer space for women, machine learning expert Richi Nayak and fellow researchers from the Queensland University of Technology (QUT) crafted an algorithm that identifies and reports misogynistic posts on social media platforms.
The team trained the algorithm to understand content, context, and intent by equipping it with linguistic capability. They first fed the system one million tweets collected around three keywords: whore, slut, and rape. They then refined that collection down to 5,000 tweets and separated the datasets by context and intent.
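The article does not publish the team's code, but the collection step it describes (pulling tweets that match the target keywords before the smaller labelled subset is built) can be sketched roughly like this; the function names and sample tweets here are illustrative, not from the study:

```python
# Hypothetical sketch of the dataset-building step: collect tweets
# containing the three target keywords, from which a smaller set
# would later be labelled by context and intent.

KEYWORDS = {"whore", "slut", "rape"}

def contains_keyword(tweet: str) -> bool:
    """Return True if the tweet mentions any target keyword."""
    words = tweet.lower().split()
    return any(word.strip(".,!?") in KEYWORDS for word in words)

def build_candidate_set(tweets):
    """Filter a raw tweet stream down to keyword-matching candidates."""
    return [t for t in tweets if contains_keyword(t)]

raw = [
    "lovely weather today",
    "you absolute slut, get offline",       # abusive usage
    "reclaiming the word slut in my talk",  # non-abusive usage
]
candidates = build_candidate_set(raw)
print(len(candidates))  # 2 of the 3 tweets match a keyword
```

Note that the same keyword appears in both an abusive and a non-abusive tweet, which is exactly why the labelling-by-context step that follows is necessary.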
Through deep learning, the system successfully adjusted its understanding of terminology to distinguish between abuse and friendly conversation.
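The team's actual model is a deep learning system, but the core idea (that surrounding words shift the same phrase between abusive and benign readings) can be illustrated with a much simpler bag-of-words classifier. Everything below, including the training examples and the Naive Bayes choice, is a toy assumption for illustration:

```python
# Toy multinomial Naive Bayes over bag-of-words features, standing in
# for the paper's deep learning model to show how context words can
# flip a classification.
import math
from collections import Counter

def tokenize(text):
    return text.lower().split()

class TinyClassifier:
    def __init__(self):
        self.counts = {}       # label -> Counter of word frequencies
        self.docs = Counter()  # label -> number of training documents
        self.vocab = set()

    def train(self, samples):
        for text, label in samples:
            self.docs[label] += 1
            words = tokenize(text)
            self.counts.setdefault(label, Counter()).update(words)
            self.vocab.update(words)

    def predict(self, text):
        best_label, best_score = None, -math.inf
        total_docs = sum(self.docs.values())
        for label, counts in self.counts.items():
            # Log prior plus Laplace-smoothed log likelihood per word.
            score = math.log(self.docs[label] / total_docs)
            denom = sum(counts.values()) + len(self.vocab)
            for word in tokenize(text):
                score += math.log((counts[word] + 1) / denom)
            if score > best_score:
                best_label, best_score = label, score
        return best_label

clf = TinyClassifier()
clf.train([
    ("get back to the kitchen where you belong", "abusive"),
    ("nobody wants you here shut up", "abusive"),
    ("back from the kitchen with fresh cookies for everyone", "friendly"),
    ("great talk today thanks", "friendly"),
])
print(clf.predict("get back to the kitchen"))         # abusive
print(clf.predict("fresh cookies from the kitchen"))  # friendly
```

In this toy setup, "kitchen" appears in both classes, so the verdict hinges on the accompanying words; a deep model learns the same kind of contextual distinction from far richer representations.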
“Take the phrase ‘get back to the kitchen’ as an example — devoid of context of structural inequality, a machine’s literal interpretation could miss the misogynistic meaning,” shared Nayak. “But seen with the understanding of what constitutes abusive or misogynistic language, it can be identified as a misogynistic tweet.”
Currently, the system identifies misogynistic tweets with 75% accuracy. Looking ahead, Nayak and the team hope social media platforms will build on their study to develop an abuse-detection tool.
“At the moment, the onus is on the user to report abuse they receive,” said Nayak. “We hope our machine-learning solution can be adopted by social media platforms to automatically identify and report this content to protect women and other user groups online.”