Paper Title
Designing Tractable Piecewise Affine Policies for Multi-Stage Adjustable Robust Optimization

Paper Authors

Simon Thomä, Grit Walther, Maximilian Schiffer

Abstract


We study piecewise affine policies for multi-stage adjustable robust optimization (ARO) problems with non-negative right-hand side uncertainty. First, we construct new dominating uncertainty sets and show how a multi-stage ARO problem can be solved efficiently with a linear program when uncertainty is replaced by these new sets. We then demonstrate how solutions for this alternative problem can be transformed into solutions for the original problem. By carefully choosing the dominating sets, we prove strong approximation bounds for our policies and extend many previously best-known bounds for the two-stage problem variant to its multi-stage counterpart. Moreover, the new bounds are, to the best of our knowledge, the first bounds shown for the general multi-stage ARO problem considered. We extensively compare our policies to other policies from the literature and prove relative performance guarantees. In two numerical experiments, we identify beneficial and disadvantageous properties of different policies and present effective adjustments to tackle the most critical disadvantages of our policies. Overall, the experiments show that our piecewise affine policies can be computed orders of magnitude faster than affine policies, while often yielding comparable or even better results.
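To give a feel for the domination idea the abstract relies on, here is a minimal, hypothetical sketch (not the paper's construction): with non-negative right-hand-side uncertainty confined to a box, the componentwise upper corner dominates every scenario, so a single deterministic LP already yields a robust solution. The paper's dominating sets and piecewise affine policies refine this principle for general uncertainty sets and multiple stages; all data below are toy values chosen for illustration.

```python
import numpy as np
from scipy.optimize import linprog

# Toy single-stage robust LP:  min c'x  s.t.  A x >= xi  for all xi in U,
# where U = {xi : 0 <= xi <= xi_bar} is a box of non-negative RHS scenarios.
# Since xi enters only the right-hand side, the corner xi_bar dominates
# every xi in U, and one LP with RHS xi_bar solves the robust problem.
c = np.array([1.0, 2.0])               # cost vector (toy data)
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
xi_bar = np.array([0.5, 0.3, 0.6])     # componentwise worst-case RHS

# linprog minimizes c'x subject to A_ub x <= b_ub, so rewrite A x >= xi_bar
# as -A x <= -xi_bar.
res = linprog(c, A_ub=-A, b_ub=-xi_bar, bounds=[(0, None)] * 2)
assert res.success
x = res.x  # feasible for every scenario xi <= xi_bar
print(x, c @ x)
```

Here the optimal decision is x = (0.5, 0.3): each binding constraint is set by the dominating scenario, and feasibility for all other scenarios in the box follows for free, which is exactly the kind of guarantee a dominating set transfers back to the original problem.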
