
Content Moderation

You have a responsibility to protect your community members from harmful content — but how can you do this systematically, when content is diverse, free-form, and non-stop?

Whether the medium is a publicly available social network or a corporate Slack channel, users have an expectation of safety and security.  And increasingly, laws and regulations insist that platform providers take material steps to protect their user base from harmful content such as threats of violence, misinformation, and hate speech.  But content moderation is a challenge, requiring either inflexible automated business rules or burdensome manual analysis.

You need an AI tool that helps your moderators scale to the volume of content and impart their nuanced understanding of the material and potentially harmful topics.  No programming skills required.

Use Cases

Content Moderation can take many forms.

Fine-Tuned Moderation

Typical automated moderation tools rely on rudimentary techniques such as wordspotting, or require programmer intervention to update filters as new topics emerge.
Pienso can be used to train and deploy advanced deep learning models that yield more nuanced content flagging — accounting for nicknames, euphemisms, and colloquialisms.
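To see why wordspotting falls short, consider a minimal sketch of such a filter. The blocklist, function name, and messages below are hypothetical illustrations, not Pienso's actual filters; they show how exact-match rules miss the variants and euphemisms that a trained model can learn to flag.

```python
# Hypothetical wordspotting filter: flags a message only when a
# blocklisted word appears verbatim. Terms here are illustrative.
BLOCKLIST = {"scam", "threat"}

def wordspot(message: str) -> bool:
    """Return True if any blocklisted word appears exactly in the message."""
    words = set(message.lower().split())
    return bool(words & BLOCKLIST)

print(wordspot("this is a scam"))   # caught: exact word match
print(wordspot("this is a sc4m"))   # missed: simple character substitution
print(wordspot("avoid that rug pull"))  # missed: euphemism, no listed word
```

Every new spelling variant or slang term requires a programmer to extend the blocklist, which is the maintenance burden the fine-tuned approach avoids.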

Threat Identification

Social media posts can provide valuable advance warning of potential violent acts.  But the nuances of social media language, such as tone and sarcasm, make automated recognition and understanding of such threats difficult.

Pienso empowers subject-matter experts (SMEs) to train classification models infused with their nuanced understanding of users and the topics that typically lead to threats or actual violence.

Spam Detection

Spam and phishing techniques are becoming increasingly difficult to detect, as bad actors work to creatively circumvent automated filters and traditional AI.

Pienso makes it possible to react consistently to spam and phishing campaigns by quickly and easily retraining models as bad actors’ strategies evolve.
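The retraining idea can be sketched with a toy incrementally updatable classifier. This is not Pienso's implementation — the class, labels, and messages below are hypothetical — but it shows the underlying pattern: when a new spam strategy appears, feeding the model a few freshly labeled examples updates its behavior without rebuilding from scratch.

```python
import math
from collections import Counter

class RetrainableSpamModel:
    """Toy Naive Bayes spam scorer that accepts new labeled
    examples at any time (illustrative, not Pienso's model)."""

    def __init__(self) -> None:
        self.counts = {"spam": Counter(), "ham": Counter()}
        self.totals = {"spam": 0, "ham": 0}

    def train(self, message: str, label: str) -> None:
        """Incorporate one labeled example into the word counts."""
        for word in message.lower().split():
            self.counts[label][word] += 1
            self.totals[label] += 1

    def score(self, message: str) -> float:
        """Log-odds that the message is spam, with add-one smoothing."""
        vocab = set(self.counts["spam"]) | set(self.counts["ham"])
        v = len(vocab) or 1
        odds = 0.0
        for word in message.lower().split():
            p_spam = (self.counts["spam"][word] + 1) / (self.totals["spam"] + v)
            p_ham = (self.counts["ham"][word] + 1) / (self.totals["ham"] + v)
            odds += math.log(p_spam / p_ham)
        return odds

model = RetrainableSpamModel()
model.train("win free prize now", "spam")
model.train("meeting notes attached", "ham")
print(model.score("free prize") > 0)          # True: matches known spam
# A new phishing strategy emerges; retrain with fresh examples:
model.train("verify your wallet seed phrase", "spam")
print(model.score("wallet seed phrase") > 0)  # True after the update
```

The design point is the update loop, not the particular algorithm: the same retrain-on-new-examples workflow applies to the deep learning models the page describes.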