12 Oct 23

A Deep Dive into Online Content Moderation

Juliana Eniraiyetan

According to predictions from the World Economic Forum, the global community is on track to generate an astonishing 400 billion gigabytes of data every day by 2025 – the equivalent of roughly 6.25 billion 64-gigabyte USB flash drives. Much of this data is user-generated content (UGC): when you tweet about the coffee you had this morning, share a picture of your latest beach trip, or write a comment on a business-focused blog, that counts as UGC. These contributions make up the expansive digital landscape of massive platforms like Facebook, X (formerly known as Twitter), and YouTube. Yet the guardians of content guidelines (the people responsible for online content moderation) often remain hidden in the shadows.

Although the exact systems involved have remained, for the most part, intentionally opaque, content moderation is largely carried out by a vast army of online content moderators, mostly employed by subcontractors in developing countries such as Kenya, India, and the Philippines. Social media platforms in particular serve as breeding grounds for a diverse array of user-generated content contributed by a global community. This diversity extends to the nature of the content uploaded, a notable portion of which is deemed unacceptable by both users and the companies themselves.

Sites and Content Moderation

To police this diverse content, these sites include clauses in their Terms of Service, incorporating community guidelines that outline their moderation policies. For example, YouTube’s Community Guidelines prohibit content such as “nudity or sexual content,” “harmful or dangerous content,” and “hateful content.” However, the responsibility of sorting through this content and preventing its dissemination largely falls on human moderators.

The diverse nature of UGC demands a vigilant eye. Whether it’s hate speech, misinformation, graphic content, or incitement to violence, moderators bear the responsibility of upholding platform guidelines and adhering to regulatory standards. These workers typically work alongside automated moderation tools, and with ongoing discussions about incorporating AI more deeply into content moderation, the importance of the task is apparent.

Content Moderation Approaches

Different moderation approaches each come with their own pros and cons. The main approaches – human, automated, and a hybrid of the two – are described below.

AI’s Role in Content Moderation

Content moderation often involves a delicate balance between human and automated approaches. Human moderation offers advantages in nuanced interpretation and cultural awareness, enabling distinctions between content like family photos and inappropriate material. However, it is challenging and costly to scale up quickly. On the other hand, automated moderation, powered by AI and machine learning models, handles vast content volumes swiftly and doesn’t suffer from psychological stresses.

Nevertheless, it may lack the nuanced judgment of humans and occasionally misclassify content. Many platforms opt for a hybrid approach, combining AI’s efficiency with human judgment for ambiguous cases, striving to strike a balance between the strengths and weaknesses of both methods.
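To make the hybrid idea concrete, here is a minimal, hypothetical sketch in Python of how such a routing step might look. The category names, confidence thresholds, and keyword-based “classifier” are placeholders rather than any platform’s real system; in practice the classify step would call a trained model or a commercial moderation API.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class ModerationResult:
    action: str                 # "remove", "allow", or "escalate_to_human"
    category: Optional[str]     # which guideline category was flagged, if any
    confidence: float           # the classifier's confidence in that label

def classify(text: str) -> Tuple[str, float]:
    """Stand-in for an ML classifier: returns (category, confidence).
    A real system would call a trained model or moderation API here;
    this simple keyword check just keeps the sketch runnable."""
    if "attack" in text.lower():
        return "violent_content", 0.72
    return "none", 0.98

def moderate(text: str,
             remove_threshold: float = 0.90,
             review_threshold: float = 0.60) -> ModerationResult:
    """Hybrid routing: the model acts alone only when it is confident.
    High-confidence violations are removed automatically, mid-confidence
    cases are escalated to a human moderator, and the rest is allowed."""
    category, confidence = classify(text)
    if category != "none" and confidence >= remove_threshold:
        return ModerationResult("remove", category, confidence)
    if category != "none" and confidence >= review_threshold:
        return ModerationResult("escalate_to_human", category, confidence)
    return ModerationResult("allow", None, confidence)

print(moderate("a photo from our beach trip"))        # allowed automatically
print(moderate("planning an attack on the stadium"))  # routed to a human reviewer
```

The key design choice in a pipeline like this is the pair of confidence thresholds: they determine how much content the model handles on its own and how much lands in a human moderator’s queue.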

Ethical Implications and The Impact of Content Moderation

Yet, beneath the surface of the ongoing debate about enhancing moderation tools and increasing transparency, an underemphasized aspect persists—the human toll that content moderation poses on the moderators themselves.

Amidst the harrowing task of meeting numeric targets while grappling with disturbing content such as violent imagery, explicit material, and hate speech, many moderators have reported experiencing symptoms similar to post-traumatic stress disorder (PTSD) and other mental health challenges. Despite this recognition, the lack of viable solutions is evident.

Calls to employ more moderators raise concerns, given the mounting evidence that current content moderation practices pose significant psychological risks to employees. Addressing these concerns, however, requires acknowledging how little information is available about the practices themselves: companies deliberately maintain secrecy and resist external scrutiny, while non-disclosure agreements prevent employees from discussing their work.

Nonetheless, insights have emerged thanks to the work of journalists and researchers. The documentary “The Cleaners,” for instance, features interviews with former moderators employed by a Philippine subcontractor. These interviews reveal the toll of filtering through the darkest corners of the internet, with many former moderators expressing fatigue, distress, and even depression due to their work. One previously undisclosed aspect of their job is the requirement to meet numerical quotas, involving screening thousands of images or videos each day to retain employment.

Furthermore, these moderators typically earn significantly lower wages than their Silicon Valley counterparts, while making split-second decisions on whether to remove questionable content that frequently straddles ambiguous and culturally specific lines.

Despite the under-recognition of these issues, some initiatives are striving to improve the work environment, enhance information for potential employees, and seek compensation for past harm. However, many of these efforts are limited in scope, often only addressing issues within the United States and neglecting contractors or a broader international workforce.

Legal Challenges from Content Moderators

Consequently, major tech companies have faced a growing number of lawsuits related to the working conditions and psychological impact of content moderation. One of the most recent is a $1.6 billion suit brought by Kenyan employees against Meta/Facebook and the subcontractor Sama. OpenAI, the creator of ChatGPT, also appears to be heading into similar legal territory.

These actions have shed light on the challenges these moderators encounter while filtering user-generated content. Lawsuits have highlighted concerns about inadequate training, lack of psychological support, and exposure to disturbing content that can lead to mental health issues. Some cases have alleged that companies failed to inform content moderators of the potential psychological risks associated with their roles. These legal challenges emphasize the need for greater transparency, improved working conditions, and proper support for content moderators, prompting discussions about industry-wide changes to ensure the well-being of those responsible for maintaining online safety and community standards.

In conclusion, the role of content moderators in the tech industry is undeniably crucial in maintaining a safer online environment for users and upholding community guidelines. However, as the demand for user-generated content continues to surge, so do the challenges faced by these moderators. Striking a balance between human judgment, which can discern nuances and cultural context, and AI’s efficiency in handling large volumes of data is key. Tech companies must recognize their responsibility in safeguarding the mental health of these essential workers and strive for greater transparency. Ultimately, the future of content moderation lies in a collaborative effort that ensures online platforms remain safe and welcoming spaces for users around the world.


Interested in joining our diverse team? Find out more about the Rockborne graduate programme here.
