

Poster

From Head to Tail: Efficient Black-box Model Inversion Attack via Long-tailed Learning

Ziang Li · Hongguang Zhang · Juan Wang · Meihui Chen · Hongxin Hu · Wenzhe Yi · Xiaoyang Xu · Mengda Yang · Chenjun Ma


Abstract:

Model Inversion Attacks (MIAs) aim to reconstruct private training data from models, leading to privacy leakage, particularly in facial recognition systems. Although many studies have enhanced the effectiveness of white-box MIAs, less attention has been paid to improving efficiency and utility under limited attacker capabilities. Existing black-box MIAs necessitate an impractical number of queries, incurring significant overhead. Therefore, we analyze the limitations of existing MIAs and introduce Surrogate Model-based Inversion with Long-tailed Enhancement (SMILE), a high-resolution-oriented and query-efficient MIA for the black-box setting. We begin by analyzing the initialization of MIAs from a data-distribution perspective and propose a long-tailed surrogate training method to obtain high-quality initial points. We then enhance the attack's effectiveness by employing a gradient-free black-box optimization algorithm selected by NGOpt. Our experiments show that SMILE outperforms existing state-of-the-art black-box MIAs while requiring only about 5% of their query overhead.
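The abstract's second stage, gradient-free black-box optimization selected by NGOpt, refers to Nevergrad's NGOpt meta-optimizer, which picks a derivative-free algorithm based on problem dimensionality and query budget. Below is a minimal sketch (not the authors' implementation) of how such a query-limited refinement loop could look with Nevergrad's ask/tell interface; the latent dimension, query budget, and the `query_target_confidence` stand-in for the generator-plus-target-model query are assumptions for illustration only.

```python
# Minimal sketch of a query-efficient black-box refinement loop using
# Nevergrad's NGOpt meta-optimizer. The generator, target classifier,
# latent size, and budget are hypothetical placeholders, not SMILE itself.

import numpy as np
import nevergrad as ng

LATENT_DIM = 512      # assumed latent size of the image prior
QUERY_BUDGET = 2000   # assumed total number of black-box queries allowed


def query_target_confidence(latent: np.ndarray, target_class: int) -> float:
    """Placeholder for one black-box query: decode `latent` with a generator,
    send the image to the target classifier, and return its confidence for
    `target_class`. Both models are hypothetical stand-ins here."""
    # image = generator(latent); probs = target_model(image)
    # return probs[target_class]
    rng = np.random.default_rng(abs(hash(latent.tobytes())) % (2**32))
    return float(rng.random())  # dummy value so the sketch runs end-to-end


def attack_loss(latent: np.ndarray, target_class: int) -> float:
    # Minimize the negative confidence of the target identity.
    return -query_target_confidence(latent, target_class)


def black_box_inversion(initial_latent: np.ndarray, target_class: int) -> np.ndarray:
    # Start from a high-quality initial point (in the paper, this comes from
    # the long-tailed surrogate); NGOpt then selects a gradient-free optimizer
    # automatically given the dimensionality and budget.
    param = ng.p.Array(init=initial_latent)
    optimizer = ng.optimizers.NGOpt(parametrization=param, budget=QUERY_BUDGET)

    for _ in range(optimizer.budget):
        candidate = optimizer.ask()                        # propose a latent vector
        loss = attack_loss(candidate.value, target_class)  # one query to the target
        optimizer.tell(candidate, loss)                    # feed the score back

    return optimizer.provide_recommendation().value


if __name__ == "__main__":
    init = np.zeros(LATENT_DIM, dtype=np.float32)  # stand-in for the surrogate-derived initialization
    recovered = black_box_inversion(init, target_class=0)
    print(recovered.shape)
```

In this ask/tell formulation, each `ask`/`tell` pair costs exactly one query to the target model, so the optimizer's budget directly bounds the attack's query overhead, which is the quantity the abstract reports being reduced to roughly 5% of prior black-box MIAs.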
