Many optimization problems in machine learning rely on noisy, estimated parameters. Neglecting this uncertainty can lead to large fluctuations in performance. We are developing algorithms for these already nonconvex problems that remain robust to such estimation errors.

Many optimization problems in machine learning and data mining, such as data summarization, budget allocation and influence maximization, rely on parameters that are learned from noisy data. For example, influence maximization relies on user-provided influence probabilities that are never known exactly in practice. The outcomes are often highly sensitive to this noise, so in critical decision-making settings (e.g. investment), stronger guarantees are needed, and these guarantees must take the uncertainty into account. Such a method uses not only a single "best guess" of the parameters, but an "uncertainty set" that is highly likely to contain the true parameters. By framing the problem as an adversarial game, robust optimization then seeks a decision that is good for any realization of the parameters within this set. However, this makes optimization hard, especially for problems like influence maximization where even the original, non-robust problem is hard to optimize exactly. Specifically, the resulting optimization problem can be nonconvex in both the influencer's and the adversary's parameters. By carefully exploiting the mathematical structure of such problems, we are nevertheless able to develop robust optimization algorithms with theoretical guarantees, with the end goal of practical, efficient and reliable algorithms that perform well in the face of uncertainty.
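As a concrete illustration (notation assumed here, not taken from the project), the adversarial game can be written as a max-min problem: the decision maker picks a seed set $S$ of at most $k$ users, and an adversary then picks the worst parameters $\theta$ from the uncertainty set $\mathcal{U}$,

\[
\max_{S \subseteq V,\ |S| \le k} \;\; \min_{\theta \in \mathcal{U}} \; f(S;\theta),
\]

where $f(S;\theta)$ is, for example, the expected influence spread of $S$ under influence probabilities $\theta$. The sketch below is purely illustrative, not the project's algorithm: it assumes a toy independent-reach spread model and a small, finite uncertainty set, and uses a simple greedy heuristic on the worst case.

```python
# Toy robust influence maximization over a finite uncertainty set.
# The spread proxy and the greedy-on-the-worst-case heuristic are
# illustrative assumptions, not the project's actual method.

def expected_spread(seeds, prob):
    """Crude spread proxy: user u is reached by seed s with probability
    prob[s][u], independently across seeds."""
    n = len(prob)
    total = 0.0
    for u in range(n):
        miss = 1.0
        for s in seeds:
            miss *= 1.0 - prob[s][u]
        total += 1.0 - miss
    return total

def robust_greedy(uncertainty_set, k):
    """Greedily grow a seed set that maximizes the worst-case spread
    over all parameter settings in the (finite) uncertainty set."""
    n = len(uncertainty_set[0])
    seeds = set()
    for _ in range(k):
        best = max(
            (v for v in range(n) if v not in seeds),
            key=lambda v: min(expected_spread(seeds | {v}, theta)
                              for theta in uncertainty_set),
        )
        seeds.add(best)
    return seeds

# Two plausible estimates of the influence probabilities among 3 users.
theta_1 = [[0.0, 0.4, 0.1], [0.2, 0.0, 0.5], [0.1, 0.3, 0.0]]
theta_2 = [[0.0, 0.1, 0.6], [0.5, 0.0, 0.1], [0.2, 0.2, 0.0]]
print(robust_greedy([theta_1, theta_2], k=1))  # best single seed in the worst case
```

A finite scenario set keeps the inner minimum trivial; the continuous uncertainty sets described above are what make the adversary's problem, and hence the overall max-min optimization, nonconvex and hard.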
