

Poster

Global-Local Tree Search in VLMs for 3D Indoor Scene Generation

Wei Deng · Mengshi Qi · Huadong Ma


Abstract:

Large Vision-Language Models (VLMs), such as GPT-4, have achieved remarkable success across various fields. However, there has been little work studying 3D indoor scene generation with VLMs. This paper treats the task as a planning problem subject to spatial and layout common-sense constraints. To solve the problem with a VLM, we propose a new global-local tree search algorithm. Globally, the method places each object sequentially and explores multiple placements at each placement step, with the problem space represented as a tree. To reduce the depth of the tree, we decompose the scene structure hierarchically, i.e., room level, region level, floor object level, and supported object level. The algorithm independently generates the floor objects in different regions and the supported objects placed on different floor objects. Locally, we also decompose the sub-task, the placement of each object, into multiple steps, and the algorithm searches the tree of the problem space. To leverage the VLM to produce object positions, we discretize the top-down view into a dense grid and fill each cell with diverse emojis to make the cells distinct. We prompt the VLM with the emoji grid, and the VLM produces a reasonable location for the object by naming the emoji of the corresponding cell. Quantitative and qualitative experimental results illustrate that our approach generates more plausible 3D scenes than state-of-the-art approaches. We will release our code and model.
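To make the emoji-grid prompting idea concrete, below is a minimal sketch of how such a grid and prompt could be constructed. This is not the authors' released code; the grid size, emoji pool, function names, and prompt wording are illustrative assumptions based only on the description in the abstract.

```python
# Sketch (assumed, not the paper's implementation): discretize the top-down
# view into a dense grid, label each cell with a distinct emoji, and ask a
# VLM to name the emoji cell where an object should be placed.

# A pool of visually distinct emojis used to label grid cells (assumption).
EMOJI_POOL = ["🍎", "🚗", "🌵", "🎈", "🐟", "📘", "🧀", "🎲", "🪑", "🌙",
              "⚽", "🍩", "🔑", "🎺", "🦋", "🍄", "⏰", "🧲", "🚀", "🎁"]

def build_emoji_grid(rows, cols):
    """Assign one distinct emoji per cell of a rows x cols top-down grid."""
    assert rows * cols <= len(EMOJI_POOL), "need a distinct emoji for every cell"
    return [EMOJI_POOL[r * cols:(r + 1) * cols] for r in range(rows)]

def grid_to_prompt(grid, object_name, room_description):
    """Render the emoji grid as text and ask the VLM for a placement."""
    grid_text = "\n".join(" ".join(row) for row in grid)
    return (
        f"Top-down view of the room ({room_description}), one emoji per cell:\n"
        f"{grid_text}\n"
        f"Which emoji cell is the most plausible location for the {object_name}? "
        f"Answer with the emoji only."
    )

def emoji_to_cell(grid, emoji):
    """Map the VLM's emoji answer back to (row, col) grid coordinates."""
    for r, row in enumerate(grid):
        for c, cell in enumerate(row):
            if cell == emoji:
                return r, c
    return None

if __name__ == "__main__":
    grid = build_emoji_grid(4, 5)
    print(grid_to_prompt(grid, "sofa", "a 4m x 5m living room"))
    # The prompt would be sent to a VLM; the emoji it returns is mapped back
    # to a grid cell, e.g. emoji_to_cell(grid, "🪑") -> (1, 3).
```

In this reading, each candidate cell returned by the VLM would correspond to one branch explored by the global tree search; the actual interface and search details follow the paper.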
