The coronavirus outbreak has led Facebook to shift the moderation of its most sensitive content, including counter-terrorism and self-harm material, from third-party contractors to the company's full-time employees. As per an official source, the CEO stated his goal of having contractors moderate less emotionally taxing content while taking advantage of working from home amid the coronavirus emergency.
It has been reported that formulating guidelines for contractors working from home, covering mental-health support for moderators and data-privacy protections, would take some time, which could also increase the number of full-time employees handling this task.
Apparently, this decision comes amid mounting pressure on the company over how it polices content violations.
The company has reportedly drawn criticism for encouraging its employees to work from home while content-moderation contractors were still required to report to offices in cities across the globe amid the pandemic scare. However, the social media giant recently put forth a plan to rely more on automated tools to remove offensive content while working with contract vendors to send moderators home with pay.
Additionally, the company warned that, given its reliance on automated takedown software, more videos and other content might be incorrectly removed from its sites for policy violations after this week. Meanwhile, one contractor raised concerns during the lockdown that an AI capable of performing content moderation accurately could cost him and several others their jobs.
As per reports, various Facebook content moderators employed through third-party vendors say their employers have been scrambling to take a firm stance on whether moderators should work remotely or from the office.