

Workshop

1st Workshop on Multimodal Content Moderation

Mei Chen · Cristian Canton · Davide Modolo · Maarten Sap · Maria Zontak · Chris Bregler

East 17

Sun 18 Jun, 8 a.m. PDT

Keywords: CV for social good

Content moderation (CM) is a rapidly growing need in today's industry, with high societal impact. Automated CM systems can detect discrimination, violent acts, hate, toxicity, and much more across a variety of signals (visual, text/OCR, speech, audio, language, generated content, etc.). Leaving unsafe content on social platforms and devices can cause a variety of harmful consequences, including brand damage to institutions and public figures, erosion of trust in science and government, marginalization of minorities, geopolitical conflict, and suicidal ideation. Beyond user-generated content, content generated by powerful AI models such as DALL-E and GPT presents additional challenges to CM systems.

With the prevalence of multimedia social networking and online gaming, sensitive content detection and moderation is by nature a multimodal problem. The Hateful Memes dataset [1] highlights this: for example, an image of a skunk and the sentence "you smell good" are benign or neutral separately, but can be hateful when interpreted together. Another aspect is the complementary nature of multimodal analysis, where ambiguity in one modality can often be resolved by another. Moreover, content moderation is contextual and culturally multifaceted; for example, different cultures have different conventions about gestures. This requires CM approaches to be not only multimodal, but also context-aware and culturally sensitive.
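To make the fusion point concrete, below is a minimal late-fusion sketch for meme classification, assuming the Hugging Face transformers library and a public CLIP checkpoint; the file name meme.png and the untrained classifier head are placeholders, and this is an illustrative baseline, not a method proposed by the workshop.

    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    # Embed the image and its caption with a shared vision-language model.
    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("meme.png")   # hypothetical path, e.g. a photo of a skunk
    caption = "you smell good"       # benign alone, hateful in combination

    inputs = processor(text=[caption], images=image,
                       return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)

    # Late fusion: concatenate the image and text embeddings (512-d each
    # for this checkpoint) and score them with a small classifier head.
    fused = torch.cat([outputs.image_embeds, outputs.text_embeds], dim=-1)
    head = torch.nn.Linear(fused.shape[-1], 2)  # untrained placeholder head
    logits = head(fused)                        # hateful vs. benign
    print(logits.softmax(dim=-1))

In practice the head would be trained on labeled image-text pairs; on Hateful Memes, simple late fusion is a common baseline, and models with deeper cross-modal interaction generally score higher.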

Despite the urgency and complexity of the content moderation problem, it has not been an area of focus in the research community. By having a workshop at CVPR, we hope to bring attention to this important research and application area, build and grow the community of interested researchers, and generate new discussion and momentum for positive social impact. Through invited talks, panels, and paper submissions, this workshop will build a forum to discuss ongoing efforts in industry and academia, share best practices, and engage the community in working towards socially responsible solutions for these problems.

With organizers from industry and academia, and speakers who are experts across the relevant disciplines investigating technical and policy challenges, we are confident that the Workshop on Multimodal Content Moderation (MMCM) will complement the main conference: strengthening and nurturing a community for interdisciplinary, cross-organization knowledge sharing, pushing the envelope of what is possible, and improving the quality and safety of multimodal sensitive content detection and moderation solutions, to the benefit of society at large.
