We attended TrustCon, a conference dedicated to trust and safety professionals from around the globe who are responsible for keeping the internet safe. It was an invaluable learning experience, and here are some key takeaways.
TrustCon 2023 opened with an air of urgency and hope: over 800 professionals from 300 organisations across 33 countries gathered to address escalating online harms. Panels and workshops covered child safety, disinformation, hate speech, and more.
Our three days there let us meet professionals from across the trust & safety universe, all of them tackling the larger problem of internet safety. We know that content classification is a small yet crucial piece of the puzzle, but it was invaluable to meet so many people working on the big picture. Here is what we learnt:
Online safety is a shared responsibility that calls for collaboration across companies, not a proprietary competitive advantage. No single platform can address online harms alone, and withholding solutions that could protect people makes the overall online ecosystem less safe. This is partly why the field of Trust & Safety feels so much like a community: professionals are keen to help each other and share best practices. The end goal is not ‘to be the safest platform’, but rather ‘to create a safer environment for all.’
More than anything, ‘safety’ is an ethical imperative. It would be grossly immoral for a doctor who found a cure for a previously untreatable disease to keep it to themselves simply to remain the ‘best’. Similarly, keeping users safe should take priority over any business competition.
Child safety, specifically CSAM and age assurance, was a top priority across most panels. This comes as no surprise: a child goes online for the very first time every 0.5 seconds, and balancing children’s independence with their safety seems to be the hardest challenge.
Generative AI’s dual potential was also top of mind. On the one hand, it could help with the task of moderating content, an interesting take shared by Alex Rosenblatt, Founder of SafetyKit, in this LinkedIn post. On the other hand, it is a vehicle for generating mis- and disinformation, deepfakes, and CSAM.
Another challenge is how this harmful content travels. When material such as CSAM and AI-generated content is uploaded via livestreams or long-form videos, it is even harder to detect. With these formats, relying solely on human reviewers is simply not feasible, especially at scale. Currently, automated tools are mostly text-based and lack the contextual understanding necessary to identify more nuanced types of harmful content.
Something as complex as internet safety cannot and should not be solved single-handedly. Now more than ever, civil society, governments, tech vendors, and platforms need to collaborate, each contributing their expertise so the problem can be approached holistically.
As Katie Harbath noted in her newsletter, this year there was more “diversity of online platforms plus vendors.” Tech Policy Press captured a similar sentiment: “some platforms are turning to a growing ecosystem of vendors,” which “constitute a new locus of energy.” As a vendor in this space, our objective is to help platforms safely scale their content moderation. Since the outset, our mission has been to reduce harm by decreasing a) the hours that human moderators have to spend reviewing atrocious content, and b) the amount of harmful content users see. By collaborating with online platforms, organisations, and other professionals, we truly believe that Unitary can make a difference as the safety layer that sits between platforms and users.
Participating in TrustCon 2023 allowed us to connect directly with the online safety community we ultimately aim to serve. We met counterparts from around the globe, each grappling with unique local challenges but united in protecting users. Joining these crucial conversations fuelled our commitment to collaborate across organisations and industries in the mission of building a safer online ecosystem.