Paper in Workshop: ReGenAI: Second Workshop on Responsible Generative AI
ECO-AI – Energy-Conscious Optimization for AI Training
Janos Horvath
The rapid advancement of large-scale generative AI models, including diffusion-based image generators and multimodal text-to-image systems, has led to unprecedented capabilities but also substantial energy demands and carbon emissions. In this paper, we systematically analyze the energy footprint of training generative models at scales ranging from 200 million to 3 billion parameters. Our study evaluates the impact of key factors such as training duration (48, 96, and 240 hours), precision mode (FP32 vs. FP16), optimizer choice (AdamW vs. Lion), and batch size, across two distinct energy grid compositions (70% renewable and 30% renewable). Through extensive empirical measurements, we identify critical efficiency trade-offs and propose optimization strategies to mitigate environmental impact. Our findings highlight the importance of energy-conscious AI development, demonstrating that mixed-precision training, strategic optimizer selection, and renewable-aware scheduling can significantly reduce carbon footprints without compromising model performance. This work underscores the urgent need for sustainable AI practices and provides actionable insights for reducing the ecological costs of generative AI.
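As an illustration of how grid composition enters this kind of carbon accounting, the minimal sketch below estimates a training run's emissions as measured energy multiplied by the grid's average carbon intensity. The power draw, per-source intensity values, and the 8-GPU node are illustrative assumptions for exposition, not measurements or methods from the paper.

# Minimal sketch (assumed values, not the paper's measurements): estimating
# training emissions from average power draw, run duration, and grid mix.

# Average carbon intensity (kg CO2e per kWh) for the two grid mixes studied,
# derived here from assumed intensities of ~0.02 kg/kWh for renewable and
# ~0.7 kg/kWh for fossil generation.
GRID_INTENSITY = {
    "70% renewable": 0.7 * 0.02 + 0.3 * 0.7,   # ~0.224 kg CO2e / kWh
    "30% renewable": 0.3 * 0.02 + 0.7 * 0.7,   # ~0.496 kg CO2e / kWh
}

def training_emissions(avg_power_kw: float, hours: float, grid: str) -> float:
    """Return estimated emissions in kg CO2e for one training run."""
    energy_kwh = avg_power_kw * hours
    return energy_kwh * GRID_INTENSITY[grid]

if __name__ == "__main__":
    # Hypothetical 8-GPU node drawing ~3.2 kW on average, run for the three
    # training durations considered in the paper.
    for hours in (48, 96, 240):
        for grid in GRID_INTENSITY:
            kg = training_emissions(avg_power_kw=3.2, hours=hours, grid=grid)
            print(f"{hours:>3} h on {grid}: {kg:7.1f} kg CO2e")

Under these assumed numbers, the same run emits roughly twice as much CO2e on the 30%-renewable grid as on the 70%-renewable one, which is the gap that renewable-aware scheduling aims to exploit.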