
Towards a safe and transparent digital space: A recap of Sasha Haco’s talk at the U.S. Chamber of Commerce

This June, the U.S. Chamber AI Commission hosted a field hearing in London, UK, to discuss global competitiveness, inclusion and innovation in relation to AI. To explore these topics, numerous key leaders in public policy, AI and innovation were invited, including Sasha Haco, CEO of Unitary.

Ippolita Magrone

Today, regulators, policymakers and business leaders are all struggling to keep up with technological advances, which are often so fast-paced that regulation follows rather than precedes them. The field hearing was a great opportunity for Unitary to open up a conversation alongside AI leaders from the University of Oxford, Shell, Yoti, and many more, covering both the risks associated with artificial intelligence and the potential to use AI to benefit society and drive social good.

Final panel at the U.S. Chamber AI Commission, featuring Sasha Haco, Unitary's CEO, alongside various leaders in the industry. Photo via Chamber Technology Engagement Center.

Three takeaways from Sasha's testimony:

Online safety has reached a pivotal moment

Whether we realise it or not, our day-to-day actions are increasingly mediated by digital technologies. Many of the interactions that formerly took place face-to-face now happen in an environment that is subject to different laws. Information online can be searched, stored and replicated. As a result, while 'offline' a harmful or offensive statement might be confined to the situation in which it takes place, online it can be altered, cropped, taken out of context, or spread to new and unintended audiences, greatly increasing the potential reach and impact of the harm.

A harmful video's journey from the moment it is recorded to the point it spreads across the web. Image via Unitary.

In parallel, video is becoming the dominant media type, so much so that it now constitutes 80% of online traffic. Compared to text and images, video poses unique content moderation challenges.

The unregulated proliferation of online content is also being addressed by the UK government, which in 2021 gave a first reading to the Online Safety Bill. Gradually, regulators are realising that digital and social platforms hosting user-generated content (UGC) cannot tackle content moderation on their own: current approaches are unable to deal with the breadth and volume of UGC these platforms host. As a result, regulators are cautiously taking the problem into their own hands to ensure it is handled correctly. This is why, now more than ever, we must have the right ecosystem in place, one that welcomes collaboration between businesses and policymakers, so that we can all drive towards the common goal of a safer online space.

Successful content moderation models that can adhere to regulation must be built to maximise transparency, robustness and explainability. One way to promote transparency and foster collaboration is through the development of open-source tools. An example is our publicly available Detoxify library, which can be used to detect online toxicity and hate speech, as sketched below. By making models such as Detoxify openly accessible, we hope to encourage further research and development in this area and promote broader discussion of these important topics.
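As a rough illustration, here is how Detoxify can be used to score a piece of text, following the library's published usage (a minimal sketch; the exact model variants and output labels may differ across versions):

```python
# pip install detoxify
from detoxify import Detoxify

# Load the pre-trained 'original' model, trained on the Jigsaw
# Toxic Comment dataset; 'unbiased' and 'multilingual' variants
# are also published.
model = Detoxify('original')

# predict() accepts a single string or a list of strings and
# returns a dict mapping labels (e.g. toxicity, insult, threat)
# to scores between 0 and 1.
results = model.predict("You are a wonderful person!")
print(results)
```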

Complexity of meaning and multimodal models

As humans, our ability to understand the meaning of a sentence is something most of us simply perceive as 'common sense.' These taken-for-granted abilities are actually very challenging for machines to replicate. A key barrier for video content moderation is the need to simultaneously understand different modalities, including text, audio, and image.

A confused AI model trying to simultaneously understand different modalities. Image via Unitary.

Our capacity to analyse these modalities all at once is what allows us to really understand content and to supply the 'context' that is often crucial to interpretation.

In terms of meaning, imagine watching a Tarantino film filled with guns and shootings. Visual cues tell us this is fiction; the 'meaning' of the footage is conditioned by its being a movie, and we can distinguish it from a video of a real mass shooting. As for context, take visual content depicting someone taking prescription drugs. We would interpret the same footage very differently if the associated title or caption placed the scene in a medical setting rather than a drug-abuse context. Understanding additional modalities such as the video's title, in combination with the visual footage, allows us to contextualise the video, providing critical information that shapes our understanding.

How a caption can change the context of an image. Image via Unitary.

It becomes clear how the same image or video, with two different captions, gives rise to two completely different interpretations. Text, as a modality, influences the 'context' and overall meaning of the post.
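To make this concrete, an off-the-shelf image-text model such as CLIP can score the same image against different captions. The sketch below uses the Hugging Face transformers library; the image path and captions are placeholders, and this is an illustration of the general idea rather than Unitary's approach:

```python
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Load a publicly available image-text model.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Placeholder image: footage of someone taking prescription drugs.
image = Image.open("pills.jpg")
captions = [
    "a patient takes prescribed medication in a hospital",
    "a person abusing prescription drugs",
]

# Score the same pixels against both captions; the relative
# probabilities show how strongly each textual context matches.
inputs = processor(text=captions, images=image,
                   return_tensors="pt", padding=True)
probs = model(**inputs).logits_per_image.softmax(dim=-1)
print(dict(zip(captions, probs[0].tolist())))
```

The same image receives different scores depending on the accompanying text, which is exactly the contextual signal a moderation system needs to capture.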

Effective content moderation requires a real understanding of online content, and we can make great strides towards this understanding by designing models that incorporate context. This is what Unitary is creating: a highly specialised multimodal machine learning model for contextual content categorisation that enables more 'human' content moderation, but without the human.
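Unitary's production models are not public, but a toy late-fusion classifier conveys the general shape of a multimodal approach: encode each modality separately, then classify the combined representation. All names, dimensions and class counts below are illustrative assumptions:

```python
import torch
import torch.nn as nn

class ToyLateFusionModerator(nn.Module):
    """Hypothetical sketch: fuse text, image and audio embeddings
    (produced upstream by separate pre-trained encoders) and
    classify the result into content-policy categories."""

    def __init__(self, text_dim=768, image_dim=512, audio_dim=128, n_classes=4):
        super().__init__()
        self.fuse = nn.Linear(text_dim + image_dim + audio_dim, 256)
        self.head = nn.Sequential(nn.ReLU(), nn.Linear(256, n_classes))

    def forward(self, text_emb, image_emb, audio_emb):
        # Concatenate per-modality embeddings so the classifier can
        # weigh visual evidence against the caption's context.
        fused = torch.cat([text_emb, image_emb, audio_emb], dim=-1)
        return self.head(self.fuse(fused))

# Example with random tensors standing in for real encoder outputs.
model = ToyLateFusionModerator()
logits = model(torch.randn(1, 768), torch.randn(1, 512), torch.randn(1, 128))
print(logits.shape)  # torch.Size([1, 4])
```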

What is the internet made of?

In part, the internet is a collection of information that in some way or another reflects our society: it is there because someone, somewhere, uploaded it. But just like society, the internet is huge and deep, and it exhibits many of the same challenges and biases that are evident in our offline world. To generate real value from the internet, we must first be able to understand what it consists of. Central to developing better AI models that can have a large-scale positive impact is an understanding of the underlying data. We must establish datasets and benchmarks that are representative of the real world, which starts with the ability to recognise harmful content. This will allow us to ensure that new technologies are developed and applied in a fair and transparent manner. A better understanding of what makes up the internet is the first step towards creating a transparent and safe digital space.

A transparent and safe digital space. Image via Unitary.

There is no doubt that humans are better at perceiving nuance and meaning; for human content moderators, however, this is a painful task, as it means reviewing the vast quantity of disturbing material that is uploaded, stored and replicated online. For these reasons, Unitary is building a service that moderates content in a more 'human' way, without the human cost.

If you are also interested in starting a conversation with the U.S. Chamber AI Commission, click here to respond to their three Requests for Information (RFIs).

For more information on what Unitary does you can check out our website or email us at contact@unitary.ai.

For more posts like these follow us on Twitter, LinkedIn and Medium.