Facebook’s improved AI isn’t preventing harmful content from spreading

Facebook claims it is getting better at detecting and removing objectionable content from its platform, even as misleading, false, and otherwise harmful posts continue to reach millions of users’ feeds.


Facebook’s redoubled AI efforts won’t stop the spread of harmful content

Facebook says it is using AI to prioritize potentially problematic posts for human moderators to review, part of an effort to more quickly remove content that violates its community guidelines. The social media giant previously used machine learning models to proactively take down low-priority content, leaving high-priority content reported by users to human reviewers.


Google returns to using human YouTube moderators after AI errors

Google is returning to human moderation on YouTube after repeated errors with its AI system. Moderating a network as large as YouTube is no easy task. Aside from the sheer volume of content uploaded every day, moderators are exposed to the worst of humanity and often end up requiring therapy. They’re the unsung heroes.
