

Invited Talk in Workshop: 1st Workshop on Multimodal Content Moderation

Understanding Health Risks for Content Moderators and Opportunities to Help


Abstract:

Social media platforms must detect a wide variety of unacceptable user-generated images and videos. Such detection is difficult to automate due to high accuracy requirements, continually changing content, and nuanced rules for what is and is not acceptable. Consequently, platforms rely in practice on a vast and largely invisible workforce of human moderators to filter such content when automated detection falls short. However, mounting evidence suggests that exposure to disturbing content can cause lasting psychological and emotional damage to moderators. Given this, what can be done to help reduce such impacts?

My talk will discuss two works in this vein. The first involves the design of blurring interfaces for reducing moderator exposure to disturbing content whilst preserving the ability to quickly and accurately flag it. We find that interactive blurring can reduce psychological impacts on workers without sacrificing moderation accuracy or speed (see demo at http://ir.ischool.utexas.edu/CM/demo/). Following this, I describe a broader analysis of the problem space, conducted in partnership with clinical psychologists responsible for wellness measurement and intervention in commercial moderation settings. This analysis spans both social and technological approaches, reviewing current best practices and identifying important directions for future work, as well as the need for greater academic-industry collaboration.
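To make the general idea of an interactive blurring interface concrete, the sketch below shows one minimal way such a workflow could be structured: content is displayed blurred by default and unblurred only on an explicit moderator action. This is an illustrative assumption, not the implementation behind the demo linked above; the file name, blur radius, and function names are placeholders.

from PIL import Image, ImageFilter  # assumes Pillow is installed

def load_blurred(path: str, radius: int = 12) -> Image.Image:
    """Return a heavily blurred preview of the image at `path`."""
    return Image.open(path).filter(ImageFilter.GaussianBlur(radius))

def reveal(path: str) -> Image.Image:
    """Return the unblurred image, e.g. after an explicit 'reveal' click."""
    return Image.open(path)

# Example flow: show the blurred preview first; reveal the original only if the
# blurred view is insufficient for the moderator to make a flagging decision.
preview = load_blurred("item.jpg")
preview.show()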
