

Poster

S3GaitNet: Learning Local Features and Size Awareness from LiDAR Point Clouds for 3D Gait Recognition

Chuanfu Shen · Rui Wang · Lixin Duan · Shiqi Yu


Abstract: Point clouds have gained growing interest in gait recognition. However, current methods, which typically convert point clouds into 3D voxels, often fail to extract essential gait-specific features. In this paper, we explore gait recognition within 3D point clouds from the perspectives of architectural design and gait representation modeling. We highlight the significance of local and body-size features in 3D gait recognition and introduce S3GaitNet, a novel framework that combines advanced local representation learning techniques with a novel size-aware learning mechanism. Specifically, S3GaitNet utilizes a Set Abstraction (SA) layer and a Pyramid Point Pooling (P3) layer to learn fine-grained local gait representations directly from 3D point clouds. Both the SA and P3 layers can be further enhanced with size-aware learning to make the model aware of the actual size of the subjects. As a result, S3GaitNet not only outperforms current state-of-the-art methods but also consistently demonstrates robust performance and strong generalizability on two benchmarks. Our extensive experiments validate the effectiveness of size and local features in gait recognition. The code will be publicly available.
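To make the local-feature and size-awareness ideas concrete, below is a minimal sketch (not the authors' implementation) of a PointNet++-style Set Abstraction step over LiDAR points, with a simple size-aware cue: the absolute (metric) neighbor coordinates are kept alongside the centroid-relative ones so the network can observe real subject scale. All class names, hyperparameters, and the choice of random centroid sampling are illustrative assumptions.

```python
# Minimal sketch, assuming a PointNet++-style Set Abstraction layer with a
# size-aware input: absolute metric coordinates are concatenated to the
# centroid-relative ones. Not the paper's actual S3GaitNet code.
import torch
import torch.nn as nn


class SetAbstractionSketch(nn.Module):
    def __init__(self, in_dim=0, out_dim=64, num_centroids=128, radius=0.2, k=16,
                 size_aware=True):
        super().__init__()
        self.num_centroids = num_centroids
        self.radius = radius
        self.k = k
        self.size_aware = size_aware
        # Shared MLP input: relative xyz (+ absolute xyz if size-aware) + extra features
        coord_dim = 3 + (3 if size_aware else 0)
        self.mlp = nn.Sequential(
            nn.Linear(coord_dim + in_dim, out_dim),
            nn.ReLU(),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, xyz):
        # xyz: (B, N, 3) point coordinates in metres
        B, N, _ = xyz.shape
        # Random centroid sampling (real implementations usually use farthest point sampling)
        idx = torch.randint(0, N, (B, self.num_centroids), device=xyz.device)
        centroids = torch.gather(xyz, 1, idx.unsqueeze(-1).expand(-1, -1, 3))  # (B, M, 3)

        # Ball query: take the k nearest points per centroid, mask those outside the radius
        dist = torch.cdist(centroids, xyz)                          # (B, M, N)
        knn_dist, knn_idx = dist.topk(self.k, dim=-1, largest=False)
        neighbors = torch.gather(
            xyz.unsqueeze(1).expand(-1, self.num_centroids, -1, -1),
            2, knn_idx.unsqueeze(-1).expand(-1, -1, -1, 3))         # (B, M, k, 3)

        rel = neighbors - centroids.unsqueeze(2)                    # centroid-relative coords
        feats = torch.cat([rel, neighbors], dim=-1) if self.size_aware else rel
        mask = (knn_dist <= self.radius).unsqueeze(-1).float()      # drop out-of-ball points
        point_feats = self.mlp(feats) * mask                        # (B, M, k, out_dim)
        local_feats = point_feats.max(dim=2).values                 # max-pool each local group
        return centroids, local_feats


if __name__ == "__main__":
    pts = torch.rand(2, 1024, 3)            # two dummy LiDAR point sets
    sa = SetAbstractionSketch()
    centers, feats = sa(pts)
    print(centers.shape, feats.shape)       # (2, 128, 3) (2, 128, 64)
```

The max-pooling over each local neighborhood yields per-centroid descriptors that capture fine-grained local geometry, while retaining absolute coordinates (rather than only normalized relative offsets) is one plausible way to keep the subject's actual body size visible to later layers.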
