Have you ever scrolled through your Twitter timeline and had a trigger warning pop up? Or watched a TikToker complain about their video being taken down? Perhaps your Instagram post on COVID-19 had a COVID-19 Information Centre label attached to it. These actions happen thanks to a concept known as content moderation. You’re probably wondering: what exactly is content moderation?
For starters, content moderation belongs to a wider family of related concepts housed under the umbrella of trust and safety. Content moderation is the filtering of content generated and uploaded by you (the user) on platforms such as websites and social media, to ensure that the content is not illegal, inappropriate, or in breach of any other standards set by the platform. Every platform determines its own benchmark for acceptable content and develops policies and guidelines in line with that benchmark. Content moderation is carried out through several methods, including:
1. Automated Moderation
Automated moderation is arguably the type of moderation that we as users are most familiar with. It is carried out using artificial intelligence (AI), usually built with the specific platform in mind. The AI is programmed to flag or remove content that violates any of the platform’s policies and standards, using tools such as keyword filters and image recognition. As the name implies, automated moderation occurs without human prompts and filters out inappropriate content far faster than human reviewers could.
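To picture how a keyword filter might work under the hood, here is a minimal Python sketch. The blocked-word list, function name, and decision labels are purely illustrative assumptions, not any platform’s actual system:

```python
# A minimal sketch of automated keyword filtering, not any real platform's system.
# The blocked keywords and decision labels are illustrative assumptions.
BLOCKED_KEYWORDS = {"scam-link", "hate-term", "spam-offer"}  # hypothetical examples

def automated_moderation(post_text: str) -> str:
    """Return a moderation decision based on simple keyword matching."""
    words = {word.lower().strip(".,!?") for word in post_text.split()}
    if words & BLOCKED_KEYWORDS:
        return "removed"    # the post matches a blocked keyword
    return "published"      # no violation detected, the post goes live

print(automated_moderation("Check out this spam-offer now!"))    # removed
print(automated_moderation("Happy to share my holiday photos"))  # published
```

Real systems combine many such signals (text filters, image recognition, behavioural patterns), but the basic idea of matching content against a policy without a human in the loop is the same.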
2. Pre-Moderation
Pre-moderation occurs when the content you generate as a user is screened by the platform before it is allowed to go live. This type of moderation enables the platform to ensure that everything published meets its policies and guidelines. Pre-moderation can be expensive and time-consuming if the platform attracts a lot of user-generated content and relies on manual moderators. For context, imagine if Twitter had to screen every single tweet!
3. Post-Moderation
As the opposite of pre-moderation, post-moderation occurs when content generated by users is reviewed after it has been published. While post-moderation can be as expensive and time-consuming as pre-moderation, it gives you, the user, the satisfaction of publishing your content immediately, something that is absent in pre-moderation.
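To make the difference between the two approaches concrete, here is a minimal Python sketch with hypothetical queue names and an approval step, contrasting pre-moderation (held until approved) with post-moderation (live immediately, reviewed afterwards):

```python
# A minimal sketch contrasting pre-moderation and post-moderation flows.
# The queue names and approval step are illustrative assumptions.
from collections import deque

pending_review = deque()   # posts waiting for a moderator
live_posts = []            # posts visible to other users

def submit_with_pre_moderation(post):
    # Pre-moderation: the post is held back until a moderator approves it.
    pending_review.append(post)

def submit_with_post_moderation(post):
    # Post-moderation: the post goes live immediately and is queued for later review.
    live_posts.append(post)
    pending_review.append(post)

def approve_next():
    # A moderator clears the next item; under pre-moderation this is when it goes live.
    post = pending_review.popleft()
    if post not in live_posts:
        live_posts.append(post)

submit_with_pre_moderation("draft post")      # not visible yet
submit_with_post_moderation("instant post")   # visible right away
approve_next()                                # "draft post" now goes live too
print(live_posts)                             # ['instant post', 'draft post']
```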
4. Reactive Moderation
If you have ever reported a social media post or had someone report your post, you are definitely familiar with reactive moderation. This type of moderation occurs when platforms allow users to report or flag content published by other users. Reactive moderation depends on users to bring inappropriate or violating content to the attention of platform moderators, who then decide what action to take on that content.
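As a rough illustration, reactive moderation can be thought of as a report counter feeding a review queue. The report threshold and function names below are illustrative assumptions:

```python
# A minimal sketch of reactive moderation: users report posts, and moderators
# review only what has been flagged. Threshold and names are illustrative assumptions.
reports = {}  # post_id -> number of user reports

def report_post(post_id):
    """Called when a user flags a post as inappropriate."""
    reports[post_id] = reports.get(post_id, 0) + 1

def posts_needing_review(threshold=1):
    """Return posts with enough reports to warrant a moderator's attention."""
    return [post_id for post_id, count in reports.items() if count >= threshold]

report_post("post-42")
report_post("post-42")
print(posts_needing_review())  # ['post-42']
```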
5. Distributed Moderation
This is quite similar to reactive moderation and occurs frequently on question-and-answer platforms. This type of moderation relies on a voting feature in which content with more upvotes is more visible than content that has been downvoted or has fewer votes. Distributed moderation can also let users vote on whether content is in line with the community guidelines.
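A distributed-moderation feed can be sketched as a simple vote-based ranking. The score formula (upvotes minus downvotes) and the sample data below are illustrative assumptions, not how any particular platform ranks content:

```python
# A minimal sketch of distributed moderation: community votes decide visibility.
# The scoring formula and sample answers are illustrative assumptions.
answers = [
    {"text": "Detailed answer", "upvotes": 42, "downvotes": 3},
    {"text": "Off-topic reply", "upvotes": 1, "downvotes": 18},
]

def rank_by_votes(items):
    """Sort content so highly upvoted items appear first and heavily downvoted items sink."""
    return sorted(items, key=lambda item: item["upvotes"] - item["downvotes"], reverse=True)

for answer in rank_by_votes(answers):
    print(answer["text"])  # "Detailed answer" is shown before "Off-topic reply"
```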
No type of content moderation is absolute, and none should be used in isolation. Nonetheless, content moderation is important because it keeps you, the user, safe from harmful and abusive content, violent language, online bullying and harassment, and misinformation. When using social media and other online platforms, it is advisable to read through the platform’s policies to ensure that your content does not violate them. Content policies are often referred to as community guidelines or user content and conduct policies, and they are readily available on the platforms’ webpages. Always remember that in all its forms, content moderation keeps you safe online!