Poster
UWAV: Uncertainty-weighted Weakly-supervised Audio-Visual Video Parsing
Yung-Hsuan Lai · Janek Ebbers · Yu-Chiang Frank Wang · François Germain · Michael J. Jones · Moitreya Chatterjee
Audio-Visual Video Parsing (AVVP) entails the challenging task of localizing both unimodal events, i.e., those occurring exclusively in either the visual or the acoustic modality of a video, and multimodal events, i.e., those occurring in both modalities concurrently. Moreover, the prohibitive cost of annotating the training data with the class labels of all these events, along with their start and end times, constrains the scalability of AVVP techniques unless they can be trained in a weakly-supervised setting, e.g., one in which only modality-agnostic, video-level labels are assumed to be available in the training data. To this end, recently proposed approaches seek to generate segment-level pseudo-labels to better guide training. However, the lack of inter-segment consistency of these pseudo-labels, and a general bias towards predicting labels that are absent in a segment, limit their performance. This work proposes a novel approach towards overcoming these weaknesses, called Uncertainty-weighted Weakly-supervised Audio-visual Video Parsing (UWAV). Additionally, our approach factors in the uncertainty associated with these estimated pseudo-labels and incorporates a feature-mixup-based training regularization for improved training. Empirical evaluations show that UWAV outperforms the current state-of-the-art for the AVVP task on multiple metrics, across two different datasets, attesting to its effectiveness and generalizability.
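The two ingredients named in the abstract, uncertainty weighting of segment-level pseudo-labels and feature-mixup regularization, can be illustrated with a minimal sketch. Note that this is not the paper's implementation: the confidence heuristic (distance of a pseudo-label probability from 0.5), the weighted binary cross-entropy, and all function names below are illustrative assumptions, not details taken from UWAV.

```python
import numpy as np

rng = np.random.default_rng(0)

def uncertainty_weights(pseudo_probs):
    # Assumed confidence heuristic: a pseudo-label probability near 0.5 is
    # maximally uncertain, so it receives a weight near 0; probabilities
    # near 0 or 1 receive a weight near 1.
    return np.abs(pseudo_probs - 0.5) * 2.0

def weighted_bce(pred_probs, pseudo_probs, weights, eps=1e-7):
    # Binary cross-entropy against the pseudo-labels, with each
    # segment/class term scaled by its confidence weight.
    p = np.clip(pred_probs, eps, 1.0 - eps)
    bce = -(pseudo_probs * np.log(p) + (1.0 - pseudo_probs) * np.log(1.0 - p))
    return float(np.sum(weights * bce) / np.sum(weights))

def feature_mixup(feats, labels, alpha=0.5):
    # Standard mixup applied in feature space: convexly combine each
    # sample with a randomly permuted partner, mixing labels identically.
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(feats))
    return lam * feats + (1 - lam) * feats[perm], \
           lam * labels + (1 - lam) * labels[perm]

# Toy data: 4 temporal segments, 3 event classes, 8-dim features.
pseudo = rng.uniform(size=(4, 3))        # soft segment-level pseudo-labels
pred = rng.uniform(size=(4, 3))          # model's segment-level predictions
weights = uncertainty_weights(pseudo)
loss = weighted_bce(pred, pseudo, weights)

feats = rng.normal(size=(4, 8))
mixed_feats, mixed_labels = feature_mixup(feats, pseudo)
```

Under this sketch, low-confidence pseudo-labels contribute little to the loss, which is one plausible way to blunt the bias towards over-predicting absent labels, while mixup smooths the feature-to-label mapping as a regularizer.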