Ego-STAR: Spatiotemporal Hints for Egocentric Video Understanding
Abstract
Video reasoning models are a core component of egocentric and embodied agents. However, standard benchmarks evaluate only the final output (e.g., the answer to a question), not the intermediate reasoning steps, and most restrict answers to the text domain. We introduce EgoSTAR, a benchmark for evaluating complex egocentric visual reasoning. We extend recent high-quality video data sources recorded in egocentric and embodied settings with a set of challenging, multi-step multimodal questions and spatiotemporally dense, human-annotated reasoning traces. Benchmarking experiments show that state-of-the-art models still fall well short of human performance. To investigate this gap in detail, we annotate each reasoning trace in the dataset with the objects of interest required to solve the question, for which we also provide spatiotemporal mask annotations. Through extensive evaluations, we find that prompting frontier models with hints of 'where' and 'when' to look yields substantial improvements in performance. EgoSTAR will be released publicly to foster progress in egocentric reasoning.
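To make the 'where' and 'when' hints concrete, the sketch below shows one plausible way such spatiotemporal hints could be serialized into a text prompt for a video question-answering model. This is an illustrative assumption, not the paper's protocol: the hint format, field names, and helper function are hypothetical.

```python
# Illustrative sketch (assumption, not the EgoSTAR prompt format): appending
# spatial ('where') and temporal ('when') hints to a video QA question.

from dataclasses import dataclass
from typing import List, Tuple


@dataclass
class SpatioTemporalHint:
    object_name: str                          # object of interest, e.g. "kettle"
    frame_range: Tuple[int, int]              # (start_frame, end_frame) to attend to
    bbox: Tuple[float, float, float, float]   # normalized (x1, y1, x2, y2) region


def build_hinted_prompt(question: str, hints: List[SpatioTemporalHint]) -> str:
    """Serialize the question plus where/when hints into a single text prompt."""
    lines = [f"Question: {question}", "Hints (where and when to look):"]
    for h in hints:
        start, end = h.frame_range
        x1, y1, x2, y2 = h.bbox
        lines.append(
            f"- {h.object_name}: frames {start}-{end}, "
            f"region x={x1:.2f}-{x2:.2f}, y={y1:.2f}-{y2:.2f}"
        )
    return "\n".join(lines)


if __name__ == "__main__":
    hints = [SpatioTemporalHint("kettle", (120, 180), (0.42, 0.30, 0.61, 0.55))]
    print(build_hinted_prompt("What did the person heat before pouring?", hints))
```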