Deepfake detection often feels like a vicious cycle: as detection methods are developed, the AI systems generating content evolve and bad actors change tactics, and it's not long before the metrics and thresholds for detecting deepfakes become outdated. In this article we explore how Goodhart's law, 'when a measure becomes a target, it ceases to be a good measure', can help us see the problem in a different light.
Learn what computer vision and natural language processing are, and how they can be used together to help platforms better moderate content online. Discover why Unitary's AI-powered technology is the ultimate solution for online safety.
We’ve all heard about the power and value of algorithms, but what is it exactly that makes multimodal algorithms so special?
Who is better at detecting inappropriate content – computers or humans? To answer this question, we compare AI and human moderators across several dimensions: scale, personal risk, cost, bias and context.
With UK Safer Internet Day approaching, take this online safety test to check whether you can identify harmful content. How well do you know what's appropriate for underage audiences? Put your skills to the test!
In this interview with Captur's founder and CEO, Charlotte Bax, we explore how both computer vision and online safety challenges can show up in unexpected places. Learn more about the power of computer vision in daily operations!
Learn how to ensure brand safety in this conversation exploring the evolution of digital advertising and its implications for brands. Tim Finn, Head of Partnerships at Unitary, draws on 20 years of adtech experience to discuss content moderation and contextual ad targeting.
Learn how trust and safety professionals can create inclusive online spaces using artificial intelligence. Discover the complexities of building a safe, inclusive online environment for all users.