In an exclusive interview with Chamber UK, Gabriela De Oliveira, Head of Policy, Research, and Campaigns at Glitch, shared her thoughts on the critical challenges of regulating online abuse. Glitch, an award-winning charity, is dedicated to ending online abuse, with a focus on ensuring the safety of women and girls. Gabriela offered insights into the pressing issues surrounding online safety, the accountability of tech companies, and the evolving challenges brought on by artificial intelligence (AI).
The State of Online Abuse: Areas Needing Urgent Attention
Gabriela identified three critical areas in regulating online abuse:
- Mandatory Risk Assessments: The Online Safety Act currently leaves risk assessments for violence against women and girls as optional rather than mandatory for tech companies. Gabriela argues this is a core issue. Glitch’s latest report shows the disturbing prevalence of digital misogynoir—a term coined by Dr. Moya Bailey—highlighting the unchecked dehumanisation of Black women online. Gabriela stressed the need for tech companies to conduct mandatory risk assessments to address this pervasive issue effectively.
- Investment in Prevention: Glitch’s research reveals minimal commitment and investment in preventing online abuse. Instances of violent abuse, particularly against Black women during election periods, demonstrate the urgent need for proactive measures. Gabriela advocates for long-term investment in digital literacy and preventative strategies, including national guidance on digital safety and employer policies. To support this, Glitch proposes that the government allocate 2% of the digital services tax towards prevention efforts, enabling sustained funding for these crucial initiatives.
- Preparedness for AI Harms: Gabriela highlighted the current lack of preparedness for AI-related harms. Deepfake abuse, which disproportionately affects women, is on the rise. The government has committed to bringing forward legislation to address AI-related harms, but more needs to be done to ensure the legislation effectively mitigates risks at the source, particularly during the development and deployment of AI tools.
Holding Platforms Accountable: A Dual Responsibility
When discussing accountability, Gabriela emphasised that both the government and tech companies share responsibility for curbing online abuse. While government legislation aims to hold tech companies to account, Gabriela noted that tech companies often prioritise commercial incentives over user safety. The prevalence of online gender-based violence on these platforms indicates that tech companies are making deliberate choices that allow abuse to persist.
To counteract this, Glitch works directly with tech companies, participating in Trust and Safety Councils. Gabriela highlighted a decline in investment in these areas, with significant cuts to trust and safety teams in recent years. For instance, Twitter’s previous hateful conduct policy, which was effective in addressing certain forms of harmful content, has been completely removed. Gabriela insists that government intervention is necessary to set standards and hold tech companies accountable, ensuring user safety is prioritised over profit.
Freedom of Expression vs. Freedom to Abuse
A common argument against monitoring online abuse is that it may stifle free expression. Gabriela challenged this notion, asserting that “freedom of expression is not freedom to abuse.” Online abuse extends far beyond hurt feelings, encompassing severe acts like death threats and targeted sexist attacks. Such behaviour disproportionately targets women, particularly those from ethnic minorities, and serves to intimidate and silence them, effectively stifling their freedom of expression.
Gabriela stressed that holding public figures accountable should not involve attacks based on identity or threats of violence. She called for a redefinition of free expression that does not excuse or enable abusive behaviour. This stance aligns with Glitch’s mission to empower individuals to thrive online without fear of harassment.
The Next Big Challenge: Generative AI and Deepfake Abuse
Looking to the future, Gabriela identified generative AI as a significant challenge for online safety. Deepfake technology, which can manipulate voice, image, and video, is being misused for non-consensual sexual abuse. Shockingly, a 2023 report by Security Heroes found that 98% of deepfake videos online constitute image-based sexual abuse, with 99% of those targeted being women.
The open-source nature of much of this technology allows for its rapid proliferation, increasing the risk of abuse. Gabriela pointed to the European Union’s AI Act as a bold move toward regulating AI, though gaps remain. Glitch is actively working to bring gender-based violence into focus within the AI Act, aiming to address systemic risks effectively.
Final Thought
Gabriela’s insights reveal a digital landscape facing serious challenges but also opportunities for meaningful change. Mandatory risk assessments, investment in prevention, and preparedness for AI-related harms are paramount in ensuring online safety. While tech companies and governments share responsibility, a collaborative effort is required to establish a safer digital environment.
Glitch’s work underscores the importance of redefining freedom of expression to exclude abusive behaviour and highlights the urgency of proactive measures to protect individuals, particularly women and marginalised groups, from online abuse. The evolving challenges posed by AI only add to the complexity, but with informed legislation and dedicated prevention efforts, progress can be made toward a safer online world.
To watch the full interview, click below!
Chamber UK also exclusively interviewed Baroness Ritchie on the topic of online abuse ahead of last week’s oral questions, which raised the issue of online safety regulation. To view her responses, please visit our YouTube channel (https://www.youtube.com/@chamberuk) for exclusive content.