Workshop: Efficient Large Vision Models
From Data to Design: Leveraging Frequency Statistics for Efficient Neural Network Architectures
Mustafa Munir
This paper presents a frequency analysis of image datasets and neural networks, particularly Vision Transformers (ViTs) and Convolutional Neural Networks (CNNs), and reveals an alignment property between datasets and network architecture design. Our analysis suggests that the frequency statistics of image datasets and the learning behavior of neural networks are intertwined. Based on this observation, our main contribution is a new framework for network optimization that guides the design process by adjusting a network's depth and width to align the frequency characteristics of untrained models with those of trained models. This framework can be used to design networks with better performance-model size trade-offs. Our results on the ImageNet-1k and CIFAR-100 image classification benchmarks and the MS-COCO object detection and instance segmentation benchmarks show that our method is broadly applicable and improves network architecture performance. Our investigation into the alignment between the frequency characteristics of image datasets and network architectures opens up a new direction in model analysis that can be used to design more efficient networks.
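As a rough illustration of the kind of dataset frequency statistic the abstract refers to (the paper itself does not specify this exact procedure, so the function below is an assumption), one common approach is to compute a radially averaged power spectrum per image with a 2-D FFT and average it over the dataset:

```python
import numpy as np

def radial_power_spectrum(image: np.ndarray, n_bins: int = 32) -> np.ndarray:
    """Radially averaged log-power spectrum of a 2-D grayscale image."""
    # Center the spectrum so low frequencies sit in the middle.
    f = np.fft.fftshift(np.fft.fft2(image))
    power = np.abs(f) ** 2
    h, w = image.shape
    yy, xx = np.indices((h, w))
    # Radial distance of each frequency coefficient from the spectrum center.
    r = np.hypot(yy - h / 2, xx - w / 2)
    bins = np.minimum((r / r.max() * n_bins).astype(int), n_bins - 1)
    # Average power within each radial frequency band.
    total = np.bincount(bins.ravel(), weights=power.ravel(), minlength=n_bins)
    counts = np.bincount(bins.ravel(), minlength=n_bins)
    return np.log1p(total / np.maximum(counts, 1))

# Toy stand-in for an image dataset; real use would load e.g. ImageNet images.
rng = np.random.default_rng(0)
dataset = rng.standard_normal((8, 64, 64))
stats = np.mean([radial_power_spectrum(img) for img in dataset], axis=0)
print(stats.shape)
```

Comparing such spectra for dataset images against the frequency response of untrained and trained models is one plausible way to measure the alignment the paper describes.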