

Poster

Making Visual Sense of Oracle Bones for You and Me

Runqi Qiao · Lan Yang · Kaiyue Pang · Honggang Zhang


Abstract:

Visual perception evolves over time. This is particularly the case for oracle bone scripts, where visual glyphs that seemed intuitive to people of the distant past prove difficult to understand through contemporary eyes. While the semantic correspondence of an oracle can be found via a dictionary lookup, this is not enough for public viewers to connect the dots, i.e., why does this oracle mean that? The common solution relies on a laborious curation process to collect a visual guide for each oracle (Fig. 1), which hinges on the case-by-case effort and taste of curators. This paper delves into one natural follow-up question: can AI take over? Beginning with a comprehensive human study, we show that participants can indeed make better sense of an oracle glyph when given a proper visual guide, and that a guide's efficacy can be approximated via a novel metric termed TransOV (Transferable Oracle Visuals). We then define a new conditional visual generation task based on an oracle glyph and its semantic meaning and, importantly, approach it by circumventing any form of model training given the fatal scarcity of oracle data. At its heart, we leverage a foundation model like GPT-4V to reason about the visual cues hidden inside an oracle and take advantage of an existing text-to-image model for final visual guide generation. Extensive empirical evidence shows that our AI-enabled visual guides achieve TransOV performance comparable to that of guides collected through manual effort. Finally, we demonstrate the versatility of our system under a more complex setting, where it is required to work alongside an AI image denoiser to cope with raw oracle scan inputs (as opposed to processed, clean oracle glyphs). The code, data, and model will be made publicly available.
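To make the training-free pipeline concrete, below is a minimal sketch of the two stages the abstract describes: (1) prompting a GPT-4V-class vision-language model to reason about the visual cues hidden inside an oracle glyph, and (2) passing that reasoning to an off-the-shelf text-to-image model to render the visual guide. The model names (`gpt-4o`, `dall-e-3`), prompts, file names, and helper functions are illustrative assumptions, not the authors' exact configuration.

```python
# Sketch of the two-stage, training-free pipeline (assumed configuration).
import base64

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment


def describe_glyph(glyph_path: str, meaning: str) -> str:
    """Stage 1: elicit the visual cues linking the glyph's shape to its meaning."""
    with open(glyph_path, "rb") as f:
        b64 = base64.b64encode(f.read()).decode()
    resp = client.chat.completions.create(
        model="gpt-4o",  # stand-in for a GPT-4V-class reasoning model
        messages=[{
            "role": "user",
            "content": [
                {"type": "text",
                 "text": f"This oracle bone glyph means '{meaning}'. Describe the "
                         "pictorial elements that connect its shape to that "
                         "meaning, phrased as a concise image-generation prompt."},
                {"type": "image_url",
                 "image_url": {"url": f"data:image/png;base64,{b64}"}},
            ],
        }],
    )
    return resp.choices[0].message.content


def render_guide(cue_description: str) -> str:
    """Stage 2: turn the reasoned cues into a visual guide image (returns a URL)."""
    img = client.images.generate(model="dall-e-3", prompt=cue_description)
    return img.data[0].url


# Hypothetical usage on a cleaned glyph image and its dictionary meaning.
cues = describe_glyph("oracle_glyph.png", "sunrise")
print(render_guide(cues))
```

Note that no model weights are updated anywhere in this sketch, which mirrors the paper's point that the severe scarcity of oracle data rules out conventional training; the evaluation of the resulting guides would then fall to the TransOV metric.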
