Workshop
WorldModelBench: The First Workshop on Benchmarking World Foundation Models
Heng Wang · Prithvijit Chattopadhyay · Ming-Yu Liu · Mike Zheng Shou · Jay Zhangjie Wu · Xihui Liu · Deepti Ghadiyaram · Gowthami Somepalli · Huaxiu Yao · Wenhu Chen · Jiaming Song · Humphrey Shi
108
Thu 12 Jun, 6 a.m. PDT
Keywords: Generative Models
World models are predictive systems that enable Physical AI agents to understand, decide, plan, and analyze counterfactuals through integrated perception, instruction following, controllability, physical plausibility, and future prediction. The past year has seen significant advances from both academic and industrial research teams, with models conditioned on text, images, video, or control signals released both openly and commercially. While these developments enable applications in content creation, autonomous driving, and robotics, the diversity of the models' training methods, data sources, architectures, and input processing makes rigorous evaluation essential. The WorldModelBench workshop addresses this need by fostering discussion of evaluation criteria (physical correctness, prompt alignment, generalizability), metric development, standardized methodologies, and crucial topics including accessible benchmarking, quantitative evaluation protocols, downstream-task assessment, and safety and bias considerations in world models.