

Poster

HOT: Hadamard-based Optimized Training

Seonggon Kim · Juncheol Shin · Seung-taek Woo · Eunhyeok Park


Abstract: It has become increasingly important to optimize backpropagation to reduce memory usage and computational overhead. Achieving this goal is highly challenging, as multiple objectives must be considered jointly while maintaining training quality. In this paper, we focus on matrix multiplication, which accounts for the largest portion of training costs, and analyze its backpropagation in detail to identify lightweight techniques that offer the best benefits. Based on this analysis, we introduce a novel method, Hadamard-based Optimized Training (HOT). In this approach, we apply Hadamard-based optimizations, such as Hadamard quantization and Hadamard low-rank approximation, selectively and with awareness of the suitability of each optimization for different backward paths. Additionally, we introduce two enhancements: activation buffer compression and layer-wise quantizer selection. Our extensive analysis shows that HOT achieves up to 75% memory savings and a 2.6× acceleration on real GPUs, with negligible accuracy loss compared to FP32 precision.
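To make the core idea concrete, the sketch below illustrates Hadamard quantization on one backward path of a linear layer: the weight gradient dW = Xᵀ·dY. Rotating both operands by an orthonormal Hadamard matrix spreads activation/gradient outliers before low-bit quantization, and the rotation cancels in the product because HᵀH = I. This is a minimal, assumed NumPy illustration of the general technique, not the paper's implementation; the actual HOT method applies such optimizations selectively per backward path and adds activation buffer compression and layer-wise quantizer selection.

```python
import numpy as np

def hadamard(n):
    # Sylvester construction of an orthonormal Hadamard matrix; n must be a power of two.
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H / np.sqrt(n)

def quantize_int8(x):
    # Symmetric per-tensor int8 quantization (illustrative quantizer choice).
    scale = np.abs(x).max() / 127.0 + 1e-12
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
B, C_in, C_out = 64, 64, 128                              # assumed toy sizes (powers of two)
X = rng.standard_normal((B, C_in)).astype(np.float32)     # saved activations
dY = rng.standard_normal((B, C_out)).astype(np.float32)   # incoming output gradient

# Rotate along the batch dimension, then quantize the rotated operands.
H = hadamard(B)
qX, sX = quantize_int8(H @ X)
qdY, sdY = quantize_int8(H @ dY)

# Accumulate in int32 and rescale: (H X)^T (H dY) = X^T H^T H dY = X^T dY.
dW_hot = (qX.astype(np.int32).T @ qdY.astype(np.int32)).astype(np.float32) * (sX * sdY)
dW_ref = X.T @ dY
print("relative error:", np.linalg.norm(dW_hot - dW_ref) / np.linalg.norm(dW_ref))
```

In this sketch the Hadamard rotation leaves the exact product unchanged and only the quantization introduces error, which is why rotating before quantizing typically tightens that error compared with quantizing the raw, outlier-heavy tensors.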
