Unitary's goal is to support safer and kinder online connections. Intimate exchanges often carry more than one meaning: playful banter between consenting users can be misclassified as harmful, just as genuinely harmful messages can pass as banter. We believe these subtleties matter, because they shape the quality of our online experiences.
We attended TrustCon, a conference dedicated to trust and safety professionals from around the globe who are responsible for keeping the internet safe. It was an invaluable learning experience, and here are some key takeaways.
Automated content moderation used to be an enterprise-only concern. But upcoming changes to the law mean that every content-sharing platform needs to start paying closer attention to user-generated content.
The situation with algospeak is complex: a given word can have multiple meanings and can be used in a variety of contexts.
We interviewed a former TikTok policy manager to understand decision-making at a globally important platform, the role of social media in society, and what online platforms might look like both without any moderation at all and in a heavily moderated environment.
This June, the U.S. Chamber AI Commission hosted a field hearing in London, UK, to discuss global competitiveness, inclusion and innovation in relation to AI. To explore these topics, numerous key leaders in public policy, AI and innovation were invited, including Sasha Haco, CEO of Unitary.
The issue of online toxicity is one of the most challenging problems on the internet today. We know it can amplify discord and discrimination, from racism and antisemitism to misogyny and homophobia. In some cases, toxic comments online can result in real-life violence [1,2].
Online content is increasingly complex. Current solutions fail to address this complexity and do not work at scale. At Unitary, we are deeply motivated by our mission to build visual understanding AI that can be used to make the internet safer.