It all depends on the platform you want to work with. TikTok, for example, has in-house employees directly responsible for moderating the content its users publish, while Facebook, YouTube, and Instagram invest huge amounts of money in outsourced workers who review shared content. They do this because it is very hard to check the enormous volume of data shared on and between these platforms on an hourly or daily basis. Some platforms, such as Reddit, rely almost entirely on volunteer work. One problem common to communities of human reviewers is mental health: people are asking for additional mental health support because of the content they have to see and hear. That may even be the main reason big companies are investing in algorithms that recognize and block hate speech, abuse of any kind, nudity, and similar content.

On the other side, social networks like Parler are becoming increasingly popular because they promise minimal moderation of the content their users share. Regardless of the popularity it gained, Parler has been suspended from Google's and Apple's app stores and from Amazon Web Services hosting. Simply put, its minimal-moderation policy led to a ban by all the major platforms and their stores, which means a huge reduction in access for numerous users worldwide.
Platform privacy is gaining more attention worldwide every day, which underlines how important it is to understand how platforms review data and what counts as protecting free speech. If you are interested in working as a content moderator, feel free to explore the remote work options at different platforms, their privacy policies, and any past incidents involving them. It shouldn't be a tough process, and I suspect most of them are constantly looking for remote associates in this field.