What can optimization and game theory offer to machine learning?


Lecture Announcement


Friday, July 4, 2025

14:00 – 15:00


Room 102, School of Information Management and Engineering
Shanghai University of Finance and Economics (west of the Third Teaching Building)

No. 100 Wudong Road, Yangpu District, Shanghai

Topic

What can optimization and game theory offer to machine learning?



Speaker

Tianyi Lin

Columbia University

I am an assistant professor in the Department of Industrial Engineering and Operations Research (IEOR) at Columbia University. My research interests lie in optimization and machine learning, game theory, social and economic networks, and optimal transport.

I obtained my Ph.D. in Electrical Engineering and Computer Science at UC Berkeley, where I was advised by Professor Michael I. Jordan and was associated with the Berkeley Artificial Intelligence Research (BAIR) group. From 2023 to 2024, I was a postdoctoral researcher at the Laboratory for Information & Decision Systems (LIDS) at the Massachusetts Institute of Technology, working with Professor Asuman Ozdaglar. Prior to that, I received a B.S. in Mathematics from Nanjing University, an M.S. in Pure Mathematics and Statistics from the University of Cambridge, and an M.S. in Operations Research from UC Berkeley.

Abstract

The vigorous development of linear programming, and then of game theory in a wider sense, began after the Second World War. In 1948, the Econometric Society held a meeting in Wisconsin attended by well-known mathematicians and economists. After George Dantzig presented his solution to linear programming, the mathematician Harold Hotelling countered: "But we all know the world is nonlinear." In 2025, we might extol similar virtues of nonconvex thinking given the revolution of the last 13 years, and question why we study classical theory when many interesting applications are nonconvex.


This talk will take Dantzig's viewpoint to defend classical theory: the world is nonconvex, yet classical optimization tools still let us tackle many of the structured nonconvex models encountered in practice. The talk consists of three parts: (1) we examine several simple yet insightful ideas from the 1950s to the present and explain how they form the basis of adaptive momentum estimation (Adam); (2) we present our results on gradient-free methods with applications to simulation and alignment; (3) we conclude with brief discussions of two recent topics: reasoning and the economics of AI. The second part is based on joint work with P. L. Chen, X. Chen, G. Kornowski, O. Shamir, W. Yin, M. Zampetakis, Z. Zheng, and my advisor M. I. Jordan.
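As background for part (1), the following is a minimal sketch of the standard Adam update rule (exponential moving averages of the gradient and its square, with bias correction); it is included here only as an illustration of the method named in the abstract, not as material from the talk.

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr=1e-3, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update on parameters theta given gradient grad at step t >= 1."""
    m = beta1 * m + (1 - beta1) * grad        # first-moment estimate (momentum)
    v = beta2 * v + (1 - beta2) * grad ** 2   # second-moment estimate (adaptive scaling)
    m_hat = m / (1 - beta1 ** t)              # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)              # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(v_hat) + eps)
    return theta, m, v
```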

WeChat ID | SUFE_RIIS

Scan the QR code to follow us


RIIS

RESEARCH INSTITUTE for INTERDISCIPLINARY SCIENCES