
A message to users: how do content policies work?

We interviewed a former TikTok policy manager to understand decision-making on a globally important platform, the role of social media in society, and what online platforms might look like with no moderation at all versus in a heavily moderated environment.

Ippolita Magrone

An Interview with a former TikTok policy manager

Marika Tedroff, also known as Malla, is interested in the internet, culture, and policy. Her experience ranges from VC-related work all the way to managing policy at TikTok, a role she recently left. Most recently, she worked on TikTok’s monetization policy team, crafting the moderation guidelines that make up some of TikTok’s global policies. Last month, she wrote a blog post about her experiences at TikTok and why moderation is so hard. One theme that arose is the disconnect between how users view content moderation and the work that happens behind the scenes. We had a chat to dig into this issue further and get her thoughts on the future of Trust & Safety more generally. Here’s an extract from our fascinating discussion, which covered topics ranging from decision-making on a globally important platform to the role of social media in society, along with our musings on what online platforms might look like both with no moderation at all and in a heavily moderated environment.

IPPOLITA: In your last blog post, you highlight that moderation decision-making is “all about binary choices, which is hard because content isn’t binary, so the systems, processes and structures” used by platforms “make little sense.” More transparency is needed around the “how’s and why’s” of content moderation, so that users understand this space more. So my question is: In what ways do you think the industry can shed light on the ‘how’s and why’s’ of moderation systems for users? And do you think this disconnect is also a question of media literacy?

I absolutely think it’s a question of media literacy, among other things. The space in general is complicated; it relies heavily on everybody using their own critical minds to assess what they’re seeing and why things are happening. And in relation to transparency, I think there’s a huge gap between the impressions users get of how things work and how they actually work.

But we also have to remember that there are many reasons why not everything is super transparent, reasons that ultimately help protect users.

At the same time, I do think it is important that users understand there are operational and product issues behind these challenges; it’s not as straightforward as many seem to think. People might attribute issues to value judgements or to the targeting of specific posts or accounts, but this is not what I experienced when I was working there; rather, ops issues played a key role. It’s complex.

Overall, I don’t think transparency alone, or even communicating the ‘why’ and ‘how’, will fix any problems. It’s perhaps more about addressing the users and their concerns and worries, because there are a lot of assumptions about how things work that are not always rooted in reality. People make these assumptions when they don’t have enough information and data about ‘how’ and ‘why’ things are happening. This is true for most companies and industries.

A user's assumptions about the behind-the-scenes of platform operations. Photo by Unitary.

IPPOLITA: You mentioned that without moderators’ work, users would not actually want to spend much time on these platforms, as they wouldn’t be what they are today. But from a policy perspective, why is content classification such a hard task? Do you ever feel like you are in charge of regulating the ‘world of information’? And how do you deal with the responsibility of having to decide what users do and don’t see?

Yeah, it is super hard because ultimately you have to represent a lot of different voices. There are a lot of elements to consider. There are legal restrictions, regulatory factors, public pressures etc. For example, public pressure is why I wrote about body image. There are a lot of discussions online about how social media platforms are harmful for young women’s body image. Obviously it’s not against the law to upload a video saying, ‘you should eat 500 calories a day to look hot.’ But then, young women are feeling awful about themselves because of everything they see online, and obviously platforms could do something about it, right?

However, you always have this other side of it, which is: ‘people should be able to post whatever they want’. And not every person will interpret a video about eating 500 calories a day as something that is harmful or triggering.

It very much comes down to how users perceive it. And it’s hard to make that distinction as a platform, which is supposed to be a neutral place. Because the minute you’re taking a [policy] stance, you’re also saying ‘this is good’ and ‘this is bad’ and ‘this is what society should and shouldn’t be’. But ultimately, on a day-to-day basis, I didn’t feel that kind of pressure, because this whole mission feels separate from what you’re doing day-to-day, which is smaller tasks and making sure everything works.

However, balancing those legal requirements, the public pressure, what users do or don’t want, and where we draw the line between being responsible and not silencing users, that’s really hard to do. Because not everything can fit into this box. So you always have to weigh these risks internally in your mind:

What is the harm of people seeing this versus what is the harm of people being silenced because of it? Which voice should carry the most weight? Should we listen to all the parents being concerned about their children spending too much time on social media? Should we listen to what the users want to publish? Should we listen to the business needs?

There are all these distinct voices, and a lot of noise. As a policy person you have to write something that meets all these different requirements, balancing these various needs.

I always envisioned it as: you’re standing on a line over the water; you’re trying not to fall on either side, you know? You need to balance the weight.

Balancing the weight of content policy decisions. Image by Unitary.

IPPOLITA: To what extent were these policy challenges unique to TikTok? What role do you think the design of a specific platform plays into how content is created and disseminated?

I mean, I don’t know. I hadn’t worked at any other big company before that, so I’m not sure which things were TikTok-specific and which relate to big companies in general. But I think what is unique about TikTok (and other platforms that are increasingly moving in this direction) is the mix of content types and formats. For example, we now have video, live streaming and more. All these different formats bring new moderation challenges. Pictures are easier, because you can quickly make an assessment. When there’s video, there are all kinds of things and more nuances to account for. You need to think about how it is presented. What voice are they using in the video? What caption goes with it? Maybe the caption is the thing that reveals the video is actually sarcastic, or maybe it’s the audio in the background that reveals that. How do we moderate live videos when something is happening as we speak and no one can review it as it goes live? This is something I thought about a lot: how much more difficult it is when video is involved.

As for platform design, as TikTok grew over the last year (it’s still new, which is crazy when you think about how quickly it exploded), it also became clear that it is playing a more prominent role in the reporting of events. Because it grew so quickly, I think that challenge might have been unique to TikTok.
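To make the multi-format point above concrete, here is a minimal, hypothetical sketch of how a moderation decision might have to combine separate signals from a video’s frames, its audio or transcript, and its caption. It is not TikTok’s or Unitary’s actual pipeline; the class, scores and thresholds below are invented purely for illustration.

```python
# Hypothetical illustration only: combining per-modality signals into one
# moderation decision. None of these scores or thresholds reflect any real
# platform's pipeline.
from dataclasses import dataclass

@dataclass
class VideoSignals:
    frame_score: float          # visual classifier output, 0 (benign) to 1 (harmful)
    transcript_score: float     # score from the spoken audio / tone
    caption_score: float        # score from the text caption
    caption_is_sarcastic: bool  # the caption may flip how the visuals should be read

def moderate(signals: VideoSignals, threshold: float = 0.7) -> str:
    """Toy decision rule: a video is only as safe as its riskiest signal,
    unless the caption signals sarcasm, which softens the visual score."""
    frame = signals.frame_score
    if signals.caption_is_sarcastic:
        # Context from one modality (the caption) changes how another
        # modality (the frames) should be interpreted.
        frame *= 0.5
    risk = max(frame, signals.transcript_score, signals.caption_score)
    return "remove_or_escalate" if risk >= threshold else "allow"

# A clip whose frames look borderline but whose caption marks it as satire.
print(moderate(VideoSignals(0.8, 0.2, 0.1, caption_is_sarcastic=True)))   # allow
# The same frames with no sarcastic framing cross the threshold.
print(moderate(VideoSignals(0.8, 0.2, 0.1, caption_is_sarcastic=False)))  # remove_or_escalate
```

Even this toy version shows why pictures are “easier”: with a single signal there is no cross-modal context, like a sarcastic caption or background audio, to reconcile.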

IPPOLITA: You said you had more to say about “why it’s hard to fix ‘obvious’ problems such as misinformation and scams.” Misinformation is clearly a major issue. What are your thoughts on how social media platforms can tackle this?

Yeah, misinformation, fraud and all these things are very difficult for many reasons. I think two of the main drivers are:

a) Facts are increasingly expressed as opinions. It’s very hard to say that something is misinformation if someone is just talking about their own experience, presented in a way that isn’t really a factual claim.

b) Bad actors are never shining their red lights and saying, ‘hey, I’m a fraudulent bad actor and I’m spreading misinformation’.

For platforms, it’s very hard to strike the balance between being proactive and being reactive. For instance, a proactive measure would be that, before an election, the platform decides: ‘we’re going to fact-check specific keywords because we know that misinformation will increase during an election’. A reactive approach would be reviewing, after an election, everything that users have reported as misinformation. You can read more about this on platforms’ public websites.

So finding the path forward is really hard, and I’m not sure what the future will look like. It is a very, very difficult problem because of these hard-to-identify, hard-to-know-when-to-act cases of separating what is true from what is not. Especially when events are unfolding, it’s very difficult; fact-checking something that is happening as we speak is really, really hard.
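As a rough illustration of the proactive-versus-reactive distinction described above, here is a hypothetical sketch; the keyword list, report queue and function names are invented for the example and do not describe any particular platform’s systems.

```python
# Hypothetical sketch of the two approaches described above.
# Keywords, queues, and thresholds are invented for illustration.

ELECTION_KEYWORDS = {"ballot", "polling station", "vote count"}  # example terms

def proactive_review(post_text: str) -> bool:
    """Proactive: flag posts for fact-checking *before* users report them,
    based on keywords expected to attract misinformation around an election."""
    text = post_text.lower()
    return any(keyword in text for keyword in ELECTION_KEYWORDS)

def reactive_review(report_queue: list[dict], min_reports: int = 3) -> list[dict]:
    """Reactive: only look at content *after* users have reported it,
    prioritising posts that accumulate the most reports."""
    reported = [post for post in report_queue if post["report_count"] >= min_reports]
    return sorted(reported, key=lambda post: post["report_count"], reverse=True)

# Usage: a new post is screened proactively, while yesterday's user reports
# are worked through reactively.
print(proactive_review("Breaking: the vote count was changed overnight"))  # True
queue = [{"id": 1, "report_count": 5}, {"id": 2, "report_count": 1}]
print(reactive_review(queue))  # [{'id': 1, 'report_count': 5}]
```

The interview’s point is that neither mode alone is enough: proactive screening anticipates predictable spikes, while the reactive queue catches what the keywords miss.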

IPPOLITA: Definitely. And I think misinformation in particular is quite a tricky area because, you know, it’s about the ‘truth’ of something which is hard to define... Despite these challenges, do you feel optimistic about the future of online safety?

I am generally optimistic. It has become an important topic, and it wasn’t one some years ago. It is becoming a core part of what people think about when they build companies. I’ve seen a lot of new startups in the social media and live chat space incorporating policy and community guidelines from the outset. So I think long term we will see a positive effect from this. But I also think that the platforms and products we’re building are, at times, facilitating the very problems we are trying to fix, meaning we will never really reach a perfect state.

IPPOLITA: Do you have an ideal, ‘dream’ solution in mind? If so, what would it be?

I don’t have a dream solution, but I think it would be cool to see a new social media platform where not everybody is allowed to post content. A place where we put more emphasis on verifying the account and the user at the beginning of the funnel, rather than the content. It would be interesting to see how that impacts safety, content and integrity challenges. Of course, utility and use cases would differ from traditional social media apps, but given that most people don’t actually post content, it might still be interesting if it results in higher-quality content for users.

IPPOLITA: How would you measure who has the right to post?

I don’t know, this is just an idea. There are many different verification processes and approaches to explore. In general, I would love to see more transparency for users (including how things work and what happens to their content) and more communication with users, because otherwise it just creates a lot of frustration and false information going around. And I always think that better understanding will lead to better outcomes for everybody.

IPPOLITA: I agree… in the same way that people learn about history, government and the law at school, I think schools should incorporate learning about platforms and the internet, because ultimately they are part of our society today. For example, students should be equipped with knowledge of how a recommendation system works.

Teaching media literacy in schools. Image via Unitary.
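To give a sense of what that classroom knowledge could look like at its simplest, here is a toy sketch of the core idea behind a recommendation system: score candidate videos against a user’s inferred interests and rank them. Every feature, weight and value here is invented for illustration and does not reflect how any real platform’s recommender works.

```python
# Toy recommender: rank candidate videos by similarity between the user's
# inferred interests and each video's topics. Purely illustrative.
user_interest = {"cooking": 0.9, "football": 0.1, "politics": 0.3}

candidates = [
    {"id": "v1", "topics": {"cooking": 1.0}},
    {"id": "v2", "topics": {"politics": 0.8, "football": 0.2}},
    {"id": "v3", "topics": {"football": 1.0}},
]

def score(video: dict) -> float:
    """Dot product between user interests and video topics: the more a video's
    topics overlap with what the user already engages with, the higher it ranks."""
    return sum(user_interest.get(topic, 0.0) * weight
               for topic, weight in video["topics"].items())

ranked = sorted(candidates, key=score, reverse=True)
print([video["id"] for video in ranked])  # ['v1', 'v2', 'v3']
```

Even a toy like this makes the point tangible: the feed is not neutral; it reflects and reinforces whatever signals the system has already collected about the viewer.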

IPPOLITA: Do you have any final thoughts on this?

Yeah. Just this general idea of knowing that ‘I’m really shaped by this app’, and being reminded of that constantly, because it’s so easy to forget. We look at these kinds of things when we’re tired and in bed, and I don’t think we really realise how much impact they have on the way we shape our worldviews.

For more posts like this, follow us on Twitter, LinkedIn and Medium. Stay tuned for more interviews and discussions with people working across the Trust & Safety and Brand Safety space.

At Unitary we build technology that enables safe and positive online experiences. Our goal is to understand the visual internet and create a transparent digital space.

For more information on what we do you can check out our website or email us at contact@unitary.ai.