

Workshop

First Workshop on Experimental Model Auditing via Controllable Synthesis (EMACS)

Viraj Prabhu · Prithvijit Chattopadhyay · Sriram Yenamandra · Hao Liang · Krish Kabra · Guha Balakrishnan · Judy Hoffman · Pietro Perona

208 B

Thu 12 Jun, 8 a.m. PDT

Keywords:  Synthetic Data  

With the increasing adoption of machine learning models in high-stakes applications, rigorous audits of model behavior have assumed paramount importance. However, traditional auditing methods fall short of being truly experimental, as they rely on wild-caught observational data that has been manually labeled. Enter generative techniques, which have recently shown impressive capabilities in automatically generating and labeling high-quality synthetic data at scale. Critically, many such methods allow for the isolation and manipulation of specific attributes of interest, paving the path towards robust experimental analysis.

This workshop is dedicated to exploring techniques for auditing the behavior of machine learning models – including (but not limited to) performance, bias, and failure modes – through the controlled synthesis (via generation or simulation) of data. Of special interest are algorithms for data generation (images, text, audio, etc.) and benchmarking that provide reliable insights into model behavior by minimizing the impact of potential confounders. We also welcome work on the broader topic of using synthetic or quasi-synthetic data for model debugging, broadly construed, with the goal of providing a venue for interdisciplinary exchange of ideas on this emerging topic.
