

EGTR: Extracting Graph from Transformer for Scene Graph Generation

Jinbae Im · JeongYeon Nam · Nokyung Park · Hyungmin Lee · Seunghyun Park

Arch 4A-E Poster #408
Fri 21 Jun 5 p.m. PDT — 6:30 p.m. PDT
Oral presentation: Orals 6C Multi-modal learning
Fri 21 Jun 1 p.m. PDT — 2:30 p.m. PDT


Scene Graph Generation (SGG) is the challenging task of detecting objects and predicting the relationships between them. Since the introduction of DETR, one-stage SGG models built on one-stage object detectors have been actively studied. However, these models rely on complex modeling to predict relationships between objects, while the relationships among object queries inherently learned in the multi-head self-attention of the object detector have been neglected. We propose a lightweight one-stage SGG model that extracts the relation graph from the various relationships learned in the multi-head self-attention layers of the DETR decoder. By fully utilizing these self-attention by-products, the relation graph can be extracted effectively with a shallow relation extraction head. Considering the dependency of the relation extraction task on the object detection task, we propose a novel relation smoothing technique that adaptively adjusts the relation labels according to the quality of the detected objects. With relation smoothing, the model is trained on a continuous curriculum that focuses on object detection at the beginning of training and shifts to multi-task learning as detection performance gradually improves. Furthermore, we propose a connectivity prediction task, which predicts whether a relation exists between an object pair, as an auxiliary task for relation extraction. We demonstrate the effectiveness and efficiency of our method on the Visual Genome and Open Images V6 datasets. Our code is publicly available at
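The relation smoothing idea described above can be illustrated with a minimal Python sketch. This is a hypothetical illustration, not the paper's implementation: the function name `smooth_relation_labels`, its signature, and the use of the product of subject and object detection qualities are all assumptions made here for clarity. The point is only that positive relation targets are softened in proportion to how well the corresponding objects are detected, so that early in training (low quality) the relation loss is down-weighted, and it grows toward the hard labels as detection improves.

```python
def smooth_relation_labels(rel_labels, subj_quality, obj_quality):
    """Adaptively soften relation labels by object detection quality.

    rel_labels:   per object pair, a 0/1 vector over predicate classes
    subj_quality: per-pair quality score of the subject detection, in [0, 1]
    obj_quality:  per-pair quality score of the object detection, in [0, 1]

    Hypothetical sketch: each positive label is scaled by the pair's
    combined detection quality; negative labels stay at zero.
    """
    smoothed = []
    for labels, sq, oq in zip(rel_labels, subj_quality, obj_quality):
        pair_quality = sq * oq  # combined quality of the (subject, object) pair
        smoothed.append([label * pair_quality for label in labels])
    return smoothed
```

With a well-detected pair (quality 1.0) the target stays a hard 1; with a poorly detected pair (quality 0.5) the positive target softens to 0.5, steering the model toward the continuous curriculum described in the abstract.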
