Tinkering with Double-Greedy

Speaker

Google Research

Hosts

Govind Ramnarayan, Quanquan Liu, Sitan Chen
Submodular functions model a wide array of important problems in fields such as theoretical computer science, economics, and machine learning. One of the most basic problems in submodular optimization is to maximize a non-negative submodular function f. When the function is not necessarily monotone, this problem is known to be NP-hard to approximate better than a factor of 1/2 [Feige, Mirrokni, Vondrak '11; Dobzinski and Vondrak '12]. This lower bound is matched by the landmark Double-Greedy algorithm of [Buchbinder, Feldman, Naor, Schwartz '12].

In this talk, I review this embarrassingly simple algorithm and then present two extensions of it to different settings. The first extension is to online no-regret learning, where we receive a new submodular function each round and want to compete with the best fixed solution in hindsight. The second extension concerns a weaker generalization of submodularity to the continuous setting, in which the function is not guaranteed to be coordinate-wise concave. Both extensions are powered by a game-theoretic view of the Double-Greedy algorithm.

Joint work with Tim Roughgarden and Rad Niazadeh.
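For concreteness, here is a minimal Python sketch of the randomized Double-Greedy algorithm of [Buchbinder, Feldman, Naor, Schwartz '12], which achieves the 1/2-approximation in expectation for non-negative, not necessarily monotone submodular maximization. The interface (f as a callable on Python sets) and the names double_greedy and cut are illustrative choices, not code from the talk.

    import random

    def double_greedy(f, ground_set, seed=None):
        # Randomized Double-Greedy [Buchbinder, Feldman, Naor, Schwartz '12]:
        # a 1/2-approximation in expectation for maximizing a non-negative,
        # not necessarily monotone submodular set function f.
        rng = random.Random(seed)
        X = set()                # grows up from the empty set
        Y = set(ground_set)      # shrinks down from the full ground set
        for e in ground_set:
            a = f(X | {e}) - f(X)    # gain from adding e to X
            b = f(Y - {e}) - f(Y)    # gain from removing e from Y
            a_pos, b_pos = max(a, 0.0), max(b, 0.0)
            # Randomize between the two moves in proportion to their gains
            # (submodularity guarantees a + b >= 0, so not both are negative).
            if a_pos + b_pos == 0 or rng.random() < a_pos / (a_pos + b_pos):
                X.add(e)
            else:
                Y.discard(e)
        return X                 # X == Y once every element is decided

    # Toy usage: the cut function of a graph is non-negative, submodular,
    # and non-monotone, so it exercises the non-monotone case.
    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
    cut = lambda S: sum(1 for u, v in edges if (u in S) != (v in S))
    print(double_greedy(cut, range(4), seed=0))

Deterministically taking whichever of the two moves has the larger gain, instead of randomizing, yields the simpler 1/3-approximation variant; the proportional randomization above is what the game-theoretic view of the algorithm interprets.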