Title: A GPU-Based Halpern Peaceman–Rachford (HPR) Method for Solving Convex Programming
Speaker: Prof. Xinyuan Zhao (赵欣苑), Beijing University of Technology
Time: Friday, October 31, 2025, 13:30
Place: Room F309, Main Campus
Inviter: Prof. Zi Xu (徐姿)
Host: Department of Mathematics, College of Sciences
Abstract:
We study the acceleration of a preconditioned alternating direction method of multipliers (pADMM), in which the proximal terms are convex quadratic functions, for solving linearly constrained convex optimization problems. Our approach begins by reformulating pADMM as a proximal point method (PPM) with a positive semidefinite preconditioner, which may be degenerate due to the lack of strong convexity in the original proximal terms. We then accelerate pADMM by accelerating the resulting degenerate PPM (dPPM). In particular, we first develop an accelerated dPPM by incorporating Halpern iteration, achieving non-asymptotic convergence guarantees. Building on this result, we further design an accelerated pADMM that enjoys non-asymptotic and nonergodic convergence rates under practical stopping criteria, including the Karush–Kuhn–Tucker (KKT) residual and the primal objective gap. Extensive GPU-based experiments on large-scale linear programming and convex composite quadratic programming benchmarks demonstrate the substantial advantages of our Halpern Peaceman–Rachford (HPR) method, a special case of the Halpern-accelerated pADMM applied to the dual formulation, in delivering high-accuracy solutions, outperforming state-of-the-art solvers including the award-winning PDLP, as well as PDQP, SCS, CuClarabel, and Gurobi.
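To illustrate the acceleration scheme mentioned in the abstract, the following is a minimal sketch of Halpern iteration applied to a generic nonexpansive fixed-point operator T, using the standard anchor weights 1/(k+2). This is an assumption-laden toy (the operator, step schedule, and iteration count are chosen for illustration only) and not the speaker's HPR implementation, which applies the scheme to a degenerate PPM reformulation of pADMM on GPU.

```python
import numpy as np

def halpern(T, x0, iters=500):
    """Halpern iteration: x_{k+1} = l_k * x0 + (1 - l_k) * T(x_k),
    with anchor weight l_k = 1/(k+2). For a nonexpansive T this
    converges to a fixed point with an O(1/k) residual rate."""
    x = x0.copy()
    for k in range(iters):
        lam = 1.0 / (k + 2)
        x = lam * x0 + (1.0 - lam) * T(x)
    return x

# Toy nonexpansive operator: a 1/2-averaged planar rotation,
# whose unique fixed point is the origin.
theta = 0.5
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
T = lambda x: 0.5 * (x + R @ x)

x0 = np.array([1.0, 1.0])
x = halpern(T, x0)
# Fixed-point residual ||x - T(x)|| shrinks roughly like 1/k.
print(np.linalg.norm(x - T(x)))
```

The anchor term `lam * x0` is what distinguishes Halpern iteration from the plain fixed-point (Krasnosel'skii–Mann) scheme; it is the ingredient that yields the non-asymptotic, nonergodic rates on the KKT residual highlighted in the abstract.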