Is AI content moderation better than human content moderation?

Who is better at detecting inappropriate content – computers or humans? To answer this question, we compare AI and human moderators across several dimensions: scale, personal risk, cost, bias and context.

Unitary

As the sheer volume of online content continues to expand, one question comes up time and again – is AI content moderation better than human moderation? Historically, humans have been considered the ‘gold standard’ of content moderation, but is that still the case? Let’s start by comparing the two across several dimensions.

AI vs humans: Scale

The amount of content shared online every day is simply unmanageable for human moderators. One report suggests that every minute, 1.7 million items are shared on Facebook, 66,000 pictures are posted to Instagram and 500 hours of video are uploaded to YouTube. Factor in every other interactive online service and it is obvious that humans alone cannot keep up.

Fortunately, AI algorithms are designed to work at scale. They can process content faster than real time, helping brands and platforms reduce or prevent moderation backlogs. In terms of speed and volume, humans simply cannot compete with AI.

Human moderators drowning in content. Original image via Sandy & Sandy, edited by Unitary.
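To make the scale point concrete, here is a minimal sketch of how a platform might fan moderation out across many concurrent requests. The classify() function is a hypothetical placeholder for a real model or moderation API call, not an actual Unitary interface.

```python
# Minimal sketch of high-throughput moderation, assuming a hypothetical
# classify(item) function that returns a harm score between 0 and 1.
# All names here are illustrative, not a real API.
from concurrent.futures import ThreadPoolExecutor

def classify(item: str) -> float:
    """Placeholder for a real model or moderation API call."""
    return 0.0  # assumption: returns a harm probability

def moderate_stream(items: list[str], max_workers: int = 32) -> list[float]:
    # Fan classification out across worker threads so throughput is bounded
    # by the model or API, not by sequential processing.
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(classify, items))

scores = moderate_stream(["some user comment", "another post"])
```

Throughput then scales with the number of workers and whatever the backing model can sustain, rather than with the speed of any one reviewer.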

AI vs humans: Personal risk

One of the main reasons for moderating content is to prevent harm to service users. In the interests of protecting mental health (and brand reputation), unsuitable content needs to be identified immediately.

However, human moderators are just that – human. Exposing them to harmful content also exposes them to the risk of developing mental health issues such as PTSD. AI, on the other hand, is automated – and highly proficient at identifying and removing exactly this type of harmful subject matter without exposing real people to it. In this regard, AI is much safer than human moderation.

AI vs humans: Cost

Maintaining a moderation team can be expensive – and costs rise in line with content volume. The more data users upload, the more human moderators are needed to process it.

Because they can (theoretically) work at near-infinite scale, AI algorithms can take much of the load off the moderation team. AI allows your business to expand its services without growing headcount, helping to control operating costs.
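In practice, the cost saving usually comes from triage rather than full automation: content the model is confident about is actioned automatically, and only uncertain items consume human moderator time. The sketch below illustrates the idea; the thresholds are assumed, illustrative values, not recommendations.

```python
# A hedged sketch of AI-assisted triage. The model auto-actions content it is
# confident about and routes only uncertain items to human moderators.
def triage(harm_score: float,
           remove_above: float = 0.95,
           approve_below: float = 0.05) -> str:
    if harm_score >= remove_above:
        return "auto_remove"    # high confidence: no human needed
    if harm_score <= approve_below:
        return "auto_approve"   # high confidence: no human needed
    return "human_review"       # the only cases that cost moderator time

assert triage(0.99) == "auto_remove"
assert triage(0.50) == "human_review"
```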

AI vs humans: Bias

Humans naturally have their own concerns, interests and, ultimately, biases. What one person regards as harmful may be passable to another – even when there are clear guidelines in place that define acceptability. For instance, consider how often Wikipedia is embroiled in controversy when content is edited or removed based on an editor’s personal biases.

AI is different. It is not biased in the same way humans are: if a system is designed to execute certain actions, it will do so consistently. Yet it is still important to acknowledge that a lack of diverse representation in the development and training of AI systems can result in biased outcomes. Bias can also be introduced in the design or implementation of the AI system, for example if its creators make assumptions about what is ‘normal’ or ‘expected’ behaviour. This is why it is essential to a) use diverse and representative data, b) monitor and test the AI system for bias, and c) have a robust, ethical and transparent development process.

An illustration of a representative and diverse dataset. Image via DALL-E 2.
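Point b) above, monitoring the system for bias, can be made concrete with a simple check: compare the model’s false-positive rate across subgroups of a labelled test set. This is a minimal sketch with an assumed data layout, not a complete fairness audit.

```python
# Compare how often benign content is wrongly flagged, per subgroup.
# The tuple layout and the notion of "group" are illustrative assumptions.
from collections import defaultdict

def false_positive_rates(examples):
    # examples: iterable of (group, true_label, predicted_label),
    # where labels are True when the item is (or is flagged as) harmful.
    fp = defaultdict(int)   # benign items wrongly flagged, per group
    n = defaultdict(int)    # total benign items, per group
    for group, truth, pred in examples:
        if not truth:
            n[group] += 1
            fp[group] += pred
    return {g: fp[g] / n[g] for g in n if n[g]}

rates = false_positive_rates([
    ("group_a", False, True), ("group_a", False, False),
    ("group_b", False, False), ("group_b", False, False),
])
print(rates)  # {'group_a': 0.5, 'group_b': 0.0}
```

A large gap between groups would warrant investigating the training data and the system’s design assumptions.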


AI vs humans: Context

The one area where humans have traditionally had the edge over AI is context. The human brain is remarkably good at processing content in relation to surrounding factors to arrive at an accurate assessment of its suitability, and humans consistently outperformed early moderation AI engines on this front.

With the arrival of multimodal AI algorithms, the situation is changing. Today, sophisticated systems can process multiple modalities of content simultaneously (such as the text and audio accompanying a video) to assess content much as a human moderator does. Multimodal capabilities are quickly eroding any human ‘advantage’ – at the very least, AI reduces workloads by limiting the number of submissions that require further human analysis.

What is context? Image via Unitary.
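As a rough illustration of the multimodal idea, a late-fusion approach scores each modality separately and then combines the results; taking the maximum reflects the fact that harm in any single modality (say, the audio track) can make the whole item unsuitable. The scoring functions below are hypothetical placeholders, not a real API.

```python
# A hedged sketch of late-fusion multimodal moderation for a video item.
from dataclasses import dataclass

@dataclass
class VideoItem:
    frames: list        # sampled video frames
    transcript: str     # speech-to-text of the audio track
    caption: str        # user-written text accompanying the video

def score_frames(frames) -> float:
    return 0.0  # placeholder for a vision model

def score_text(text: str) -> float:
    return 0.0  # placeholder for a text model

def moderate_video(item: VideoItem) -> float:
    scores = [
        score_frames(item.frames),
        score_text(item.transcript),
        score_text(item.caption),
    ]
    return max(scores)  # flag if any single modality looks harmful
```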

And the winner is…

The question ‘Is AI content moderation better than human moderation?’ is increasingly irrelevant, because humans simply cannot keep pace with the amount of content that needs to be moderated every hour. A better question is ‘Can AI moderation completely replace humans?’. As things stand, AI and human moderators should probably co-exist. However, continued advances in multimodal algorithm development will further reduce reliance on humans – and if this makes the internet a safer place, would that really be a bad thing?

Read more about our approach to building AI systems that can achieve human-level accuracy when classifying content.