
How can Trust & Safety professionals design inclusive online spaces?

Learn how trust and safety professionals can create inclusive online spaces using artificial intelligence. Discover the complexities of building a safe, inclusive online environment for all users.

Unitary


Since the advent of wireless technology, the speed and scope of our communication have changed drastically: we now have numerous environments in which we can send and receive messages and upload information. With this change comes a responsibility, now resting with Trust and Safety (T&S) professionals, to ensure these environments are inclusive.

But what does an inclusive space mean exactly? It means co-creation and trust between users and platforms. Users who ‘trust’ an online space should feel comfortable, confident and free from risk when they exist in that space, regardless of their race, background or any other characteristic. They should feel safe. Behind this fairly simple idea lie layers of complexity, not only on a technological level, but also on an organisational and societal one.

Societal level

Many T&S dilemmas relate to deep philosophical questions surrounding freedom of expression. This is because moderation, a fundamentally human decision-making process, has the power to shape how we experience the online world. However, turning to philosophy makes one thing clear: there is no such thing as ‘absolute’ freedom of speech, because this freedom is generally conditioned on the context in which it is exercised. On online platforms, content always exists within a certain context, so even if the content itself stays the same, a change in context can alter the meaning of a situation entirely.

Same message, different context. Photo by Unitary AI.

But understanding context is no easy task, even for humans. One must be an expert in the culture, community and trends of those generating the content. On one side, under-moderating risks exposing users to harmful material, which may cause them to disengage and push the space towards being less inclusive. On the other, over-moderating may result in serious violations of freedom of expression, and may likewise turn people away from the platform. So where does one draw the line?

A first step is promoting safety and inclusivity by design. The world is diverse and multifarious, and an inclusive system is one that accommodates all of this diversity. However, as The Drum Bot acutely points out, we must avoid turning diversity into an ‘exercise in ticking the boxes’, as this takes ‘the real soul and meaning out of inclusivity.’ To avoid this, inclusive spaces must represent the world as it is, not just a portion of it. Any human moderator inherently brings their own biases, shaped by their individual life circumstances. But algorithmic moderation, although removed from human emotion, can also be fraught with bias. In machine learning, the problem is often rooted in dataset bias and the magnification of that bias in algorithmic outputs. Research on ‘ethical scaling’ from Harvard University offers an interesting viewpoint: for the case of speech detection, the study posits that today’s systems are inadequate, and will remain so unless they start representing marginalised communities in their data-sphere.
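To make concrete how dataset bias shows up in model behaviour, here is a minimal sketch (not Unitary’s production code) of how one might audit a toxicity classifier for disparate false positive rates across communities. The `predict_toxic` callable, the group labels and the example posts are hypothetical placeholders, assumed purely for illustration.

```python
from collections import defaultdict

# Hypothetical audit: compare false positive rates of a toxicity classifier
# across the communities represented in an evaluation set. A high rate for one
# group means benign posts from that community are disproportionately removed.

def false_positive_rates(examples, predict_toxic):
    """examples: iterable of (text, community, is_actually_toxic) tuples."""
    fp = defaultdict(int)      # benign posts flagged as toxic, per community
    benign = defaultdict(int)  # total benign posts, per community
    for text, community, is_toxic in examples:
        if not is_toxic:
            benign[community] += 1
            if predict_toxic(text):
                fp[community] += 1
    return {c: fp[c] / benign[c] for c in benign if benign[c] > 0}

# Usage with a stand-in classifier that naively flags certain phrases --
# exactly the kind of shortcut that penalises in-group language.
examples = [
    ("great match last night", "sports_fans", False),
    ("we out here, no cap", "aave_speakers", False),
    ("you are an idiot", "sports_fans", True),
]
naive_model = lambda text: "no cap" in text or "idiot" in text
print(false_positive_rates(examples, naive_model))
# -> {'sports_fans': 0.0, 'aave_speakers': 1.0}: a disparity worth fixing.
```

The point of the sketch is simply that representativeness can be measured: if an evaluation set covers the relevant communities, disparities like the one above become visible before a model ever reaches production.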

Building datasets that are representative of society is itself an extremely challenging task, but one whose importance cannot be overstated. Very often, deciding who and what is represented comes down to questions of visibility and power. This is why T&S challenges are often not limited to ‘what users see’, but extend to more fundamental questions, often involving the very structure of organisations themselves.

Organisational level

Organisations dealing with T&S sit at the intersection between people and technology, and this is not an easy seat at the table. These teams deal with big questions that require efficient coordination and communication across all levels of the organisation, as well as an understanding of regulatory issues across the globe. While the internet enables a single platform to operate worldwide, a single T&S team has to respond to the laws, cultures and frameworks of each different region.

Digital platforms operating across the globe. Photo by Unitary AI.

While T&S is an issue across every social platform, there can be no ‘one size fits all’ solution, as the specific issues faced by each organisation are often unique, with their own ecosystem of content and users.

In terms of moderation ‘solutions’, T&S teams are usually faced with three options: community moderation, human moderation, or automated (AI) moderation.

Three types of content moderation approaches. Photo by Unitary.

Reddit is a well-known example of a platform that relies heavily on community moderation, where moderators are asked to act in ‘good faith’ and be transparent. That transparency requires moderation guidelines to be clear and available to users. However, most user-generated content (UGC) platforms tend to opt for a mix of human content moderation and AI solutions. Both options come at a price.

Being a human content moderator requires emotional resilience, because you are regularly exposed to potentially harmful material. On a practical level, the sheer scale of moderation means it cannot be a purely human task. Yet the inadequacies of automated solutions, particularly around understanding context, mean that moderation pipelines still rely heavily on human moderators.
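As an illustration of how that human-plus-AI mix often works in practice, here is a minimal sketch of confidence-threshold routing, where a model auto-actions only what it is very sure about and sends the uncertain middle to human reviewers. The thresholds, the `score_harm` callable and the stand-in scorer are assumptions for the example, not a description of any particular platform’s pipeline.

```python
from dataclasses import dataclass

# Hypothetical hybrid moderation: the model handles clear-cut cases,
# humans handle everything the model is unsure about.

AUTO_REMOVE_THRESHOLD = 0.95   # assumed: act automatically only when very confident
AUTO_ALLOW_THRESHOLD = 0.05    # assumed: below this, content is almost certainly benign

@dataclass
class Decision:
    action: str        # "remove", "allow" or "human_review"
    harm_score: float  # model's estimated probability that the content is harmful

def route(content: str, score_harm) -> Decision:
    """score_harm is any model returning P(harmful) in [0, 1]."""
    score = score_harm(content)
    if score >= AUTO_REMOVE_THRESHOLD:
        return Decision("remove", score)
    if score <= AUTO_ALLOW_THRESHOLD:
        return Decision("allow", score)
    # The ambiguous middle band -- where context and nuance matter most --
    # is exactly where human judgement is still needed.
    return Decision("human_review", score)

# Usage with a stand-in scorer.
fake_scorer = lambda text: 0.6 if "beheading" in text else 0.01
print(route("a cartoon of a beheading", fake_scorer))   # -> human_review
print(route("look at this cute puppy", fake_scorer))    # -> allow
```

The design choice here is that automation absorbs volume while humans absorb ambiguity, which is why the quality of the model’s contextual understanding determines how wide that human-review band has to be.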

So why are AI solutions not good enough? A lot of it comes down to boundaries and nuance. Is a cartoon of a beheading acceptable, or is it violence? A trained human moderator may grasp the severity of the image instinctively, while a machine may struggle to interpret its true meaning.

Technological level

Zuckerberg’s remark that ‘it’s easier to detect a nipple than hate speech with AI’ aptly reflects how moderating certain kinds of harm requires a different level of nuance. And this gap matters: content moderation determines what is seen, who is visible and, to some extent, ‘what we know.’ So the question is: how do we design content moderation systems that are fair and inclusive to all? Advances in ML and AI can play a key role in answering it, as they allow for a better contextual understanding of online content.

Towards inclusive and representative datasets. Photo by Unitary.

This contextual understanding relies both on greater model sophistication and on more representative datasets. The datasets the models are trained on need to be representative of our world and reflective of how people actually post online; only then can a model learn to understand context. Developing more inclusive and representative datasets, as well as methods to interrogate and understand them, is a crucial step towards fairer content moderation.
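As a small illustration of what ‘interrogating’ a dataset might look like, here is a sketch that compares how each group or language is represented in a training set against a reference share, such as its share of the user base. The `group` field, the groups themselves and the target shares are made-up assumptions for the example.

```python
from collections import Counter

# Hypothetical dataset audit: compare the share of each language/community in a
# training set against a target share, to spot groups that are under-represented
# before a model is ever trained on the data.

def representation_gap(dataset, target_shares):
    """dataset: list of dicts with a 'group' field; target_shares: group -> expected fraction."""
    counts = Counter(example["group"] for example in dataset)
    total = sum(counts.values())
    return {
        group: counts.get(group, 0) / total - target
        for group, target in target_shares.items()
    }

# Usage with made-up numbers: negative values flag under-representation.
dataset = [{"group": "english"}] * 90 + [{"group": "swahili"}] * 10
target = {"english": 0.6, "swahili": 0.4}
print(representation_gap(dataset, target))
# -> roughly {'english': +0.3, 'swahili': -0.3}: Swahili content is badly under-sampled.
```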

Here at Unitary we build technology that enables safe and positive online experiences. Our goal is to understand the visual internet and create a transparent digital space.

For more information on what we do you can check out our website or email us at contact@unitary.ai.

For more posts like these follow us on Twitter, LinkedIn and Medium.