
Brand Safety in a World of Deepfakes & Misinformation

Find out how AI and machine learning are changing the online landscape and learn practical tips for dealing with deepfakes and misinformation.



A person in a room full of screens. Image via Adobe Stock.

Not all AI is the same. However, as a market-leading AI brand safety solution, we know a thing or two about how these systems work and what you can do to keep brands and platforms safe.

In recent months, ChatGPT has taken the internet by storm thanks to its impressive generative capabilities. But as well as bringing AI and machine learning to mainstream attention, it has also created a series of new brand safety headaches.

Generative AI and misinformation

Text generated by ChatGPT is perceived as credible because it reads naturally: you can easily believe that another human being wrote it. Cut and paste that output onto a webpage or a social media feed and the average reader has no idea that it was written by a bot, or whether the content is even factually correct.

All very clever, but generative AI poses a serious threat to brand safety. Hacktivists and internet trolls are using generative AI tools to create videos and text that promote hate, for instance. By using a well-known celebrity's face or a brand's themes, these deepfakes can be extremely convincing, and if people take them seriously, the reputational damage could be severe.

ChatGPT and the future of search

Viral content has always created problems for brand safety specialists, but this is likely to get worse now that malicious users can create deepfake content at scale. Perhaps more worrying is the integration of ChatGPT and similar technologies into popular search engines, which could skew search results in favour of fake content.

Brand safety will have to be more proactive than ever, identifying and responding to AI-generated fakes as quickly as possible.

How to deal with deepfakes

Moderation is already an impossible task because of the sheer volume of content being generated every hour. And now that a realistic deepfake video can be created in minutes, it is not humanly possible to stop malicious content from slipping through.

A picture of Pope Francis wearing a white puffer jacket went viral recently, as internet users were hugely amused by the mash-up of tradition and modernity. However, the "Pope Coat" image turned out to be a deepfake, but only after it had been re-shared many thousands of times. The reason the "Pope Coat" was popular is exactly the reason it should have been identified as a fake much sooner: the image was so unlikely it couldn't be real.

Training users to consider context will be essential in the battle against deepfakes. It won’t stop deepfakes being created, or even fooling some people, but it could at least help the majority of internet users.

One interesting initiative comes from the MIT Media Lab, which launched Detect Fakes, a research project designed to help people detect AI-generated misinformation. When in doubt, they suggest reflecting on the following areas:

Screenshot from MIT Media Lab, project: Detect DeepFakes: How to counteract Misinformation Created by AI.

The future of deepfake detection

As AI-driven deepfake technology continues to improve, will AI need to be deployed against AI? For a long time, images have been regarded as undeniable proof that an event took place. Deepfakes change this, making people question the very nature of what they see online.

The situation with deepfake detection is a tricky one. As deepfakes become increasingly sophisticated and ‘realistic’, researchers strive to develop new techniques to detect them, while adversaries find novel ways to evade detection.

Overall, computer vision algorithms, machine learning models and metadata analysis can be used to examine content for inconsistencies and trace it back to its original source. Nonetheless, these processes are complex, often requiring a mix of technical expertise and sophisticated tooling.
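To make "tracing content back to its source" a little more concrete, here is a minimal sketch of one of the simpler techniques in that family: perceptual (average) hashing, which lets a platform match a re-shared copy of a known fake against a database of flagged images even after re-compression. The tiny 8x8 "images" below are hypothetical stand-ins for real downscaled image data, not part of any specific product.

```python
# A minimal sketch of average hashing (aHash), a basic perceptual-hash
# technique for matching near-duplicate images. Real systems use more
# robust hashes and full image-decoding pipelines; this only illustrates
# the core idea.

def average_hash(pixels):
    """Return a 64-bit hash of an 8x8 grayscale image:
    each bit is 1 if that pixel is brighter than the image's mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Count differing bits; a small distance suggests the same image."""
    return bin(h1 ^ h2).count("1")

# Hypothetical 8x8 grayscale images: a known fake and a slightly
# re-compressed copy (every pixel nudged brighter by a few points).
known_fake = [[(r * 8 + c) * 4 % 256 for c in range(8)] for r in range(8)]
reshared = [[min(255, v + 3) for v in row] for row in known_fake]

d = hamming_distance(average_hash(known_fake), average_hash(reshared))
print(d)  # → 0: a uniform brightness shift leaves the hash unchanged
```

Because the hash compares each pixel to the image's own mean, uniform brightness changes and mild re-compression barely alter it, which is exactly what makes it useful for spotting re-shares of already-identified fakes.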

Moving forward, as the scale and complexity of content continue to increase, relying on AI to maintain a brand-safe environment will be inevitable. AI is faster than human moderators, allowing brands to stay on top of the flood of new content being generated every day.

Although we do not currently offer solutions for deepfakes, learn more about how we can help you supercharge your brand safety efforts with our GARM Plug and Play solution.