Unitary's goal is to support safer and kinder online connections. Intimate exchanges often have more than one meaning: playful banter between consenting users can be misclassified as harmful, and genuine harm can slip through as banter. We believe these subtleties are important, because they shape the quality of our online experiences.
Over lunch one Saturday, your not-so-recently single friend turns around and tells you, with familiar weariness, that she is “back on the apps.” She met her ex on a dating app, you remind her, and they dated for a year and a half, and really liked each other! You met your partner on a dating app, and now you live together! Dating apps are good, actually: they open up your world beyond the same five places you always seem to end up at, and let you meet new interesting people. Even the dates that don’t go anywhere make a good story, you tell her. Sure, she concedes, but…what about the creeps?
While it's true that dating apps can foster genuine connections between people, they also face a significant challenge in preventing harmful behaviour at three different stages: 1) when coming across a profile, 2) within a chat with another person, and 3) once you meet in real life. Traditional content moderation methods can only get you part of the way there, but as apps introduce more ways to showcase yourself to (maybe?) the future love of your life, they will also have to introduce better safeguards. Since dating apps are predicated on human interactions, a blunt instrument won't cut it: we need techniques that can respond to the context of a given situation.
Context is important: it allows us to get closer to the truth of a given situation. Without considering context, even the most advanced moderation solution will struggle to accurately understand content, resulting in false positives and false negatives. Imagine two users in a chat: a nature lover and your friend, a born and bred city girl. Nature boy doesn't have a profile picture, is reserved, and always sends images of the woods (creep alert?!). Meanwhile, Miss City is a selfie queen and an extrovert at heart. One day, while chatting, nature boy uploads a video of himself carving wood with a huge knife, followed by the reassuring text "preparing a bonfire for us tonight 🔥."
At first glance, a traditional content moderation tool might classify this as potentially harmful content: after all, nature boy's profile activity could suggest that he is a creep, picking up girls to murder in the woods. But he might just be an outdoor lover planning the most romantic date he can think of! Most moderation tools struggle with these ambiguities (situations that could mean two things) and in this case would fail to accurately understand the relationship between nature boy's profile data, his messages, and his activity on the app. And your friend, who has been dreaming of a romantic evening away from the chaotic city, would miss out on a sweet date.
At Unitary, we're offering AI-powered solutions that can extract the meaning from user-generated online content, taking the nuances of real-life interactions into account. Our computer vision models analyse signals together, allowing us to distinguish between playful banter amongst consenting users and harmful behaviour like harassment or threats. By improving moderation techniques, our technology allows dating apps to provide better experiences for users looking for meaningful connections. Users are empowered to express themselves freely while being protected from genuine harm.
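To make the intuition concrete, here is a toy sketch of context-aware moderation. This is not Unitary's actual system: the signal names, scores, threshold, and combination rule are all invented for illustration. The point is simply that evidence of benign context can discount a raw harm score that a context-blind classifier would act on alone.

```python
from dataclasses import dataclass

@dataclass
class Signals:
    """Hypothetical per-modality scores in [0, 1] from upstream models."""
    image_harm: float      # e.g. a large knife detected in a video frame
    text_harm: float       # e.g. threatening language in the message
    context_benign: float  # e.g. evidence of a planned, consensual activity

def moderate(signals: Signals, threshold: float = 0.5) -> str:
    """Combine signals so that benign context discounts raw visual alarm."""
    raw_harm = max(signals.image_harm, signals.text_harm)
    combined = raw_harm * (1 - signals.context_benign)
    return "flag_for_review" if combined >= threshold else "allow"

# Nature boy: alarming image, but the chat describes a shared bonfire plan.
print(moderate(Signals(image_harm=0.8, text_harm=0.1, context_benign=0.7)))  # allow
```

A context-blind tool would see only `image_harm = 0.8` and flag the video; here the conversational context pulls the combined score below the review threshold, while a genuine threat (high `text_harm`, no benign context) would still be flagged.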
The result is a dating app experience where real connections can thrive. Our technology opens the door to self-expression, fun, and romance unimpeded by antisocial behaviour. Of course, there's no such thing as risk-free dating; relationships always involve the possibility of getting hurt. But by improving content moderation on dating apps, we hope to drastically reduce the risk of harassment, threats, and other harmful behaviour. The only risk users should have to take is that someone doesn't feel the same way.