
Contextual, policy-based harmful content detection

Just like a human, our models can tell the difference between an image of dried mushrooms with the caption “making risotto tonight” and that same image captioned “let me take you on a trip tonight”. Unitary's detection models consider all signals together (caption, audio, and OCR) to analyse visual content in the context in which it appears.
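As a purely illustrative sketch of the idea (the endpoint, field names, and response shape below are hypothetical and not Unitary's actual API), a context-aware request would bundle every signal for a single piece of content so it can be judged as a whole:

    import requests  # hypothetical example only; not Unitary's real API

    # Hypothetical payload: all signals for one piece of content are scored
    # together, so the same image can be judged differently depending on
    # the caption, audio, and on-screen text that accompany it.
    payload = {
        "image_url": "https://example.com/dried-mushrooms.jpg",
        "caption": "let me take you on a trip tonight",
        "audio_transcript": "",
        "ocr_text": "",
    }

    # Hypothetical endpoint, shown only to illustrate sending all
    # modalities in a single, context-aware request.
    response = requests.post("https://api.example.com/v1/classify", json=payload)
    print(response.json())  # e.g. a high "substances" score for this caption,
                            # versus a low one with "making risotto tonight"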

Offerings

Standard

Our leading off-the-shelf solution, delivering significant increases in accuracy by taking tone, context, and setting into account.

  • Detect NSFW content, violence, substances, hate speech and more
  • Context-aware: understands tone, context and setting
  • Multimodal: takes all signals into account

Premium

Our most accurate solution, custom-fit to your detailed trust and safety guidelines.

  • Custom tuning that learns and implements your specific policy
  • Context-aware: understands tone, context and setting
  • Multimodal: takes all signals into account

Open source: Detoxify

Our mission at Unitary is to create a safer internet for everyone. We created Detoxify with this in mind.

Detoxify is an open source Python library that identifies toxic language in comments across six languages.

Example output for the comment “I hate you”:

  • toxic: 0.951
  • insult: 0.138
  • threat: 0.039
  • obscene: 0.024
  • identity_hate: 0.021
  • severe_toxic: 0.008
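Scores like the ones above can be reproduced in a few lines of Python (a minimal sketch assuming a standard pip install; the exact label names in the returned dictionary depend on which model variant you load):

    # Install with: pip install detoxify
    from detoxify import Detoxify

    # Load the English model trained on the original Jigsaw toxic comment data;
    # Detoxify('unbiased') and Detoxify('multilingual') are the other variants.
    model = Detoxify('original')

    # predict() accepts a single string or a list of strings and returns a
    # dictionary mapping each toxicity label to a score between 0 and 1.
    scores = model.predict("I hate you")
    print(scores)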

Quality moderation at a fraction of the cost.

Unitary’s virtual moderation requires zero integration and works 24/7 at the speed of software. Our AI moderation agents are continuously trained by the best human moderators, delivering even greater cost savings and efficiency as your volumes grow.