

Poster

Automated Proof of Polynomial Inequalities via Reinforcement Learning

Banglong Liu · Niuniu Qi · Xia Zeng · Lydia Dehbi · Zhengfeng Yang


Abstract:

Polynomial inequality proving is fundamental to many mathematical disciplines and finds wide application in diverse fields. Traditional algebraic methods search for a positive-definite representation of a polynomial over a given set of basis polynomials; however, these methods are limited by the truncation degree. To address this issue, this paper proposes a reinforcement-learning-based approach to finding a Krivine-basis representation for proving polynomial inequalities. Specifically, we formulate the inequality proving problem as a linear programming (LP) problem and encode it as a basis selection problem solved with reinforcement learning (RL), yielding a nonnegative Krivine-basis representation. Moreover, a fast multivariate polynomial multiplication method based on the Fast Fourier Transform (FFT) is deployed to make the action-space search more efficient. We have implemented these techniques in a tool called APPIRL (Automated Proof of Polynomial Inequalities via Reinforcement Learning). Experimental evaluation on benchmark problems demonstrates the feasibility and effectiveness of our approach. In addition, APPIRL can be successfully applied to the maximum stable set problem.
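For context, a Krivine-basis certificate of the kind the abstract alludes to typically takes the standard Krivine–Stengle/Handelman form over a polytope with constraints normalized so that 0 ≤ g_i ≤ 1 (this is the textbook form of such LP hierarchies, not necessarily the paper's exact formulation):

```latex
f \;=\; \sum_{\alpha,\beta \in \mathbb{N}^{m}} \lambda_{\alpha\beta}\, \prod_{i=1}^{m} g_i^{\alpha_i}\,(1 - g_i)^{\beta_i},
\qquad \lambda_{\alpha\beta} \ge 0 .
```

Once the exponent pairs $(\alpha,\beta)$ up to a truncation degree are fixed, nonnegativity of $f$ reduces to the existence of nonnegative multipliers $\lambda_{\alpha\beta}$ matching the coefficients of $f$, which is a linear program; selecting which basis products to include is the combinatorial step the abstract delegates to RL.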
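The FFT-based polynomial multiplication mentioned in the abstract can be illustrated in the univariate case (the paper targets the multivariate setting; this is a minimal standalone sketch of the underlying idea, not the authors' implementation): evaluate both polynomials at roots of unity, multiply pointwise, and interpolate back, in O(n log n) time.

```python
import cmath

def fft(a, invert=False):
    """Recursive Cooley-Tukey FFT; input length must be a power of two."""
    n = len(a)
    if n == 1:
        return list(a)
    even = fft(a[0::2], invert)
    odd = fft(a[1::2], invert)
    sign = -1 if invert else 1
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(sign * 2j * cmath.pi * k / n)  # twiddle factor
        out[k] = even[k] + w * odd[k]
        out[k + n // 2] = even[k] - w * odd[k]
    return out

def poly_mul(p, q):
    """Multiply integer coefficient lists p, q (lowest degree first) via FFT."""
    m = len(p) + len(q) - 1          # length of the product
    n = 1
    while n < m:                     # pad to the next power of two
        n *= 2
    fp = fft([complex(c) for c in p] + [0j] * (n - len(p)))
    fq = fft([complex(c) for c in q] + [0j] * (n - len(q)))
    # Pointwise product in the evaluation domain, then inverse transform
    prod = fft([x * y for x, y in zip(fp, fq)], invert=True)
    return [round((c / n).real) for c in prod[:m]]
```

For example, `poly_mul([1, 2], [3, 4])` multiplies (1 + 2x)(3 + 4x) and returns `[3, 10, 8]`, i.e. 3 + 10x + 8x². The multivariate case used in the paper can be reduced to this routine by a suitable flattening (Kronecker substitution), at the cost of longer coefficient vectors.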
