OpenAI is inviting digital platforms to test its GPT-4 LLM for content moderation, aiming to ease the burden on services built around user-generated content.
OpenAI says anyone with GPT-4 API access can start testing it for content moderation, which it describes as scalable, consistent and easily customisable. The model can quickly adapt to new policies crafted by governments and can even be tuned to platform-specific rules.
Helping Social Media Platforms
For a long time, content moderation has been one of the most troublesome parts of the internet, with some user-generated content plainly objectionable to the public. Though the platforms involved – primarily social media – try their best to remove such content, their efforts remain insufficient.
Worse still, the work steadily degrades the lives of the human moderators these platforms employ, many of whom suffer trauma and depression. In 2020, for example, Meta paid 11,000 human moderators $1,000 each as compensation for mental health issues arising from moderating content on Facebook.
Nor is the problem unique to Meta; other social media platforms face it too. Though some already use AI tools to moderate content, those tools are far from perfect. OpenAI is now pitching its GPT-4 LLM as a worthier bet for AI-assisted moderation.
In a blog post, OpenAI said its GPT-4 large multimodal model can be used “to build a content moderation system that is scalable, consistent and customisable.” Further, the LLM can be used to develop and tune content policies, effectively “reducing the cycle from months to hours.”
GPT-4 can understand various regulations and content policies and adapt to them instantly, write OpenAI’s Lilian Weng, Vik Goel and Andrea Vallone.
“We believe this offers a more positive vision of the future of digital platforms, where AI can help moderate online traffic according to platform-specific policy and relieve the mental burden of a large number of human moderators.”
Anyone with GPT-4 API access can start using it to build their own AI-assisted moderation system, says OpenAI, which claims this can cut work that previously took six months down to just a day. Despite touting its AI prowess, however, OpenAI still wants humans to be involved.
Since AI remains unpredictable, judgements made by these tools are vulnerable to unwanted biases, so humans must carefully monitor, validate and refine the results where needed.
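To make the workflow concrete, here is a minimal sketch of the kind of API-assisted moderation loop OpenAI describes. The policy text, labels, prompt wording and helper names below are illustrative assumptions, not OpenAI's published moderation prompts; only the `openai` client call pattern follows the public API.

```python
import re

# Hypothetical platform policy with short labels, as in OpenAI's example
# workflow where the model returns a label a moderator can audit.
POLICY = """\
K1: content that encourages self-harm.
K2: content that harasses a named individual.
K0: none of the above.
"""

def build_moderation_prompt(policy: str, content: str) -> str:
    """Combine the platform policy and the user content into one prompt."""
    return (
        "You are a content moderator. Classify the content below using "
        "exactly one label from the policy.\n\n"
        f"Policy:\n{policy}\n"
        f"Content:\n{content}\n\n"
        "Answer with the label only (e.g. K0)."
    )

def parse_label(model_reply: str) -> str:
    """Extract the first policy label (K0, K1, ...) from the model's reply."""
    match = re.search(r"\bK\d+\b", model_reply)
    # Default to 'none of the above' so an unparseable reply is flagged
    # for human review rather than silently actioned.
    return match.group(0) if match else "K0"

# The actual model call (requires an API key and the `openai` package):
#
#   from openai import OpenAI
#   client = OpenAI()
#   reply = client.chat.completions.create(
#       model="gpt-4",
#       messages=[{"role": "user",
#                  "content": build_moderation_prompt(POLICY, user_post)}],
#   ).choices[0].message.content
#   label = parse_label(reply)
```

In practice this is where the human-in-the-loop step lands: moderators spot-check the labels, and disagreements feed back into edits to the policy text, which is what shortens the policy-iteration cycle OpenAI mentions.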