When a machine moderates content, it evaluates text and images as data, using an algorithm trained on existing data sets. The process for selecting that training data has come under fire because it has been shown to encode racial, gender, and other biases.
Exactly. The real problem is the lack of human oversight, and the lack of any way to reach a person at all.
And these days, even if you do manage to reach someone, it'll be a call center in India or the Philippines that's only there to handle FAQ-level issues and otherwise politely tell you to fuck off. They can't actually do anything.