How to moderate user content in your community application

January 30, 2024

THE PROBLEM:

Between bad-words, obscenity, and swearjar, there is no shortage of npm packages for filtering user-submitted text. The problem is that none of them work reliably at the very thing they’re designed to do. In linguistics this failure is known as the Scunthorpe problem: online content gets blocked because it intentionally or unintentionally contains a string of characters that appears to be profane.

“Hey jerk, why are you such a chicken?”

“Hey did you make jerk chicken? Smells amazing!”

A word-list filter sees the same two words in both messages, but only one of them is an insult.
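To make the failure mode concrete, here is a minimal sketch of the naive approach. The blocklist and the isFlagged helper are hypothetical, not taken from any of the packages above:

```ts
// A naive word-list filter: the approach the Scunthorpe problem breaks.
const blockedWords = ["jerk", "chicken"]; // hypothetical blocklist

function isFlagged(message: string): boolean {
  const lower = message.toLowerCase();
  // Substring matching has no notion of context or intent.
  return blockedWords.some((word) => lower.includes(word));
}

console.log(isFlagged("Hey jerk, why are you such a chicken?")); // true
console.log(isFlagged("Hey did you make jerk chicken? Smells amazing!")); // true: a false positive
```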

Clearly, a hardcoded list of words is an unacceptable solution for a scalable software product. With images the problem gets even harder: are you going to scrape every inappropriate image on the internet and match it against what your users post? You’d need more GPUs than Silicon Valley can provide!

THE SOLUTION:

With the explosion of LLMs and generative AI, we finally have the tools for a realistic, scalable approach to content moderation. Packages like bad-words and obscenity still have a place for filtering the specific words that are never acceptable in a community application, but with AI we no longer have to rely on a hardcoded list alone. OpenModerator is a suite of tools to make moderation easy:

✅ Open-source TypeScript package content-checker, a modernized version of the popular but outdated bad-words JS package (see the usage sketch after this list)

✅ API with AI text and image moderation endpoints (more to come!) integrated with the content-checker package

✅ Next.js 14 demo repo to show you how easy it is to use content-checker
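To give a sense of what this looks like in practice, here is a sketch of both modes: the list-based check follows the bad-words API that content-checker modernizes, while the AI-backed check goes through OpenModerator’s API. The constructor option, method names, and response shape below (openModeratorAPIKey, isProfaneAI, the profane/type fields) are assumptions based on the package’s lineage; verify them against the content-checker README.

```ts
import { Filter } from "content-checker";

// List-based check, inherited from the bad-words API that content-checker modernizes.
const filter = new Filter();
console.log(filter.isProfane("Don't be an ash0le")); // true: plain list/regex matching

// AI-backed check via the OpenModerator API. The option and method names here
// are assumptions; check the content-checker README for the current surface.
const aiFilter = new Filter({ openModeratorAPIKey: process.env.OPEN_MODERATOR_API_KEY });

async function moderate(message: string): Promise<void> {
  const response = await aiFilter.isProfaneAI(message);
  // Assumed response shape: { profane: boolean; type: string[] }
  if (response.profane) {
    console.log(`Blocked (${response.type.join(", ")}): ${message}`);
  } else {
    console.log(`Allowed: ${message}`);
  }
}

// A context-aware model should allow the recipe message that a word list blocks.
moderate("Hey did you make jerk chicken? Smells amazing!").catch(console.error);
```

The payoff of the AI check is exactly the Scunthorpe case above: a model that reads context can tell an insult from a recipe, which no substring match ever will.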