Strong Formulations and Algorithms for Regularized A-optimal Design

Seminar Announcement


Wednesday, December 3, 2025

2:00 PM – 3:00 PM


Room 102, School of Information Management and Engineering
Shanghai University of Finance and Economics (Wudong Road Campus)

100 Wudong Road, Yangpu District, Shanghai

Topic

Strong Formulations and Algorithms for Regularized A-optimal Design
Chenhao Wang


Speaker

Yongchun Li

The Chinese University of Hong Kong, Shenzhen

Yongchun Li is an assistant professor in the School of Data Science at the Chinese University of Hong Kong, Shenzhen. From 2024 to 2025, Dr. Li was an assistant professor in the Department of Industrial and Systems Engineering at the University of Tennessee, Knoxville. She received a PhD in Operations Research from Georgia Tech. Her research interests include optimization, machine learning, and statistics. Her research has received several recognitions, including Finalist of the 2025 INFORMS Junior Faculty Interest Group (JFIG) Paper Competition, Runner-up of the 2021 INFORMS Computing Society Student Paper Award, and Winner of the 2020 INFORMS Data Mining Section Student Paper Award.

Abstract


We study the Regularized A-optimal Design (RAOD) problem, which selects a subset of k experiments to minimize the trace of the inverse of the Fisher information matrix, regularized with a scaled identity matrix. RAOD has broad applications in Bayesian experimental design, sensor placement, and cold-start recommendation. We prove its NP-hardness via a reduction from the independent set problem. By leveraging convex envelope techniques, we propose a new convex integer programming formulation for RAOD, whose continuous relaxation dominates those of existing formulations. More importantly, we demonstrate that our continuous relaxation achieves bounded optimality gaps for all k, whereas previous relaxations may suffer from unbounded gaps. This new formulation enables the development of an exact cutting-plane algorithm with superior efficiency, especially in high-dimensional and small-k scenarios. We also investigate scalable forward and backward greedy algorithms for solving RAOD, each with provable performance guarantees for different k ranges. Finally, our numerical results on synthetic and real data demonstrate the efficacy of the proposed exact and approximation algorithms. We further showcase the practical effectiveness of RAOD by applying it to a real-world user cold-start recommendation problem.
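To make the objective and the forward greedy heuristic mentioned in the abstract concrete, here is a minimal sketch. It assumes the standard linear-model setting where each experiment is a row x_i of a design matrix X, the Fisher information of a subset S is X_S^T X_S, and RAOD minimizes trace((X_S^T X_S + λI)^{-1}); the function names and the specific regularization placement are illustrative assumptions, not the paper's exact formulation or guarantees.

```python
import numpy as np

def raod_objective(X, S, lam):
    """Trace of the inverse regularized Fisher information for subset S.

    Assumes the common setting: information matrix X_S^T X_S + lam * I,
    where X_S stacks the selected experiment rows. Illustrative only.
    """
    d = X.shape[1]
    rows = X[list(S)]
    A = rows.T @ rows + lam * np.eye(d)
    return np.trace(np.linalg.inv(A))

def forward_greedy(X, k, lam):
    """Forward greedy: repeatedly add the experiment that most
    reduces the RAOD objective, until k experiments are chosen."""
    S, remaining = set(), set(range(X.shape[0]))
    for _ in range(k):
        best = min(remaining, key=lambda i: raod_objective(X, S | {i}, lam))
        S.add(best)
        remaining.remove(best)
    return sorted(S)
```

Because adding any row only increases the information matrix in the positive semidefinite order, the objective is monotonically non-increasing along the greedy path; the paper's contribution includes quantifying how far such greedy solutions can be from optimal for different ranges of k.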