

LUWA Dataset: Learning Lithic Use-Wear Analysis on Microscopic Images

Jing Zhang · Irving Fang · Hao Wu · Akshat Kaushik · Alice Rodriguez · Hanwen Zhao · Juexiao Zhang · Zhuo Zheng · Radu Iovita · Chen Feng

Arch 4A-E Poster #289
Award: Highlight
Fri 21 Jun 10:30 a.m. PDT — noon PDT

Abstract: Lithic Use-Wear Analysis (LUWA) using microscopic images is an underexplored vision-for-science research area. It seeks to distinguish the worked material, which is critical for understanding archaeological artifacts, material interactions, tool functionalities, and dental records. However, this challenging task goes beyond the well-studied image classification problem for common objects: the complex wear mechanism and microscopic imaging introduce many confounders, making it difficult even for human experts to identify the worked material successfully. In this paper, we investigate three questions on this unique vision task for the first time: ($\textbf{i}$) How well can state-of-the-art pre-trained models (like DINOv2) generalize to this rarely seen domain? ($\textbf{ii}$) How can few-shot learning be exploited for scarce microscopic images? ($\textbf{iii}$) How do the ambiguous magnification and sensing modality influence classification accuracy? To study these questions, we collaborated with archaeologists and built the first open-source and largest LUWA dataset, containing 23,130 microscopic images with different magnifications and sensing modalities. Extensive experiments show that existing pre-trained models notably outperform human experts but still leave a large gap for improvement. Most importantly, the LUWA dataset provides an underexplored opportunity for the vision and learning communities and complements existing image classification problems on common objects.
