The implementation of watermarking for AI-generated content poses several challenges. This Deep Dive explores the complexities of watermarking across different media types, including images, audio, video, and text, and highlights the difficulty of creating robust watermarking techniques that can withstand various attacks.
In discussions about AI regulation, there has been relatively little of direct relevance to trust & safety. Most of the focus is on use by regulated organizations rather than misuse by platform users. However, as large platforms play increasingly significant roles in providing access to AI models, we might expect future regulatory efforts to more directly impact T&S.
Concerns about bias in social media content moderation have prompted draft legislation and calls for clearer explanations of moderation decisions. Studies show that users perceive automated moderation as more impartial when it includes human oversight, and that trust in AI moderation increases with transparency.