Paper Title

Learning to Reformulate for Linear Programming

Paper Authors

Xijun Li, Qingyu Qu, Fangzhou Zhu, Jia Zeng, Mingxuan Yuan, Kun Mao, Jie Wang

Paper Abstract

It has been verified that linear programming (LP) can formulate many real-life optimization problems, whose optimum can be obtained by resorting to corresponding solvers such as OptVerse, Gurobi, and CPLEX. Over the past decades, a series of traditional operations research algorithms have been proposed to obtain the optimum of a given LP in less solving time. Recently, there has been a trend of using machine learning (ML) techniques to improve the performance of such solvers. However, almost no previous work leverages ML techniques to improve solver performance from the front end, i.e., the modeling (or formulation). In this paper, we are the first to propose a reinforcement learning-based reformulation method for LP to improve the performance of the solving process. Using the open-source solver COIN-OR LP (CLP) as an environment, we implement the proposed method on two public research LP datasets and one large-scale LP dataset collected from a practical production planning scenario. The evaluation results suggest that, compared to directly solving the original LP instances, the proposed method effectively reduces both the solving iteration number ($25\%\downarrow$) and the solving time ($15\%\downarrow$) over the above datasets on average.
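
The abstract only summarizes the approach at a high level, so the following is a minimal, hypothetical sketch of the evaluation loop it describes: an agent "reformulates" an LP instance before solving it and is rewarded by the reduction in solver iterations. SciPy's `linprog` (HiGHS) stands in here for the CLP solver used in the paper, and the row-permutation action, box bounds, and reward design are illustrative assumptions, not the authors' actual RL method.

```python
# Hypothetical sketch: reward an LP "reformulation" by the drop in solver
# iterations. SciPy's linprog stands in for CLP; the permutation action and
# reward design are assumptions for illustration only.
import numpy as np
from scipy.optimize import linprog


def solve_iterations(c, A_ub, b_ub):
    """Solve min c^T x s.t. A_ub x <= b_ub, 0 <= x <= 1; return the iteration count."""
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=(0, 1), method="highs")
    return res.nit if res.success else np.inf


class ReformulationEnv:
    """Toy environment: the state is an LP instance, the action is a constraint ordering."""

    def __init__(self, c, A_ub, b_ub):
        self.c, self.A_ub, self.b_ub = c, A_ub, b_ub
        self.base_iters = solve_iterations(c, A_ub, b_ub)

    def step(self, row_order):
        # Reformulate by reordering constraints; the feasible set is unchanged,
        # but the solver may take a different number of iterations.
        iters = solve_iterations(self.c, self.A_ub[row_order], self.b_ub[row_order])
        reward = self.base_iters - iters  # positive when the reformulation helps
        return reward, iters


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    m, n = 40, 25
    A = rng.normal(size=(m, n))
    b = rng.uniform(1.0, 2.0, size=m)  # x = 0 is feasible, so the LP is solvable
    c = rng.normal(size=n)

    env = ReformulationEnv(c, A, b)
    # Random search stands in for the learned RL policy in this sketch.
    reward, iters = max(env.step(rng.permutation(m)) for _ in range(10))
    print(f"original iterations: {env.base_iters}, best reformulated: {iters}")
```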
