The goal is to advance the theory and methods of machine learning in settings where learning requires nonconvex optimization.
Machine learning has provided high-impact, data-driven technology used in spam filters, recommender systems, object recognition, speech recognition, internet advertising, demand prediction, market analysis, fault detection, and more. As the amount of available data grows, the importance of machine learning is rising, as is the demand for methods that adapt more readily to the data at hand, such as representation learning. Whereas traditional machine learning relies on features designed by human users or experts as a form of prior knowledge, representation learning tries to learn the features from the data as well.

Representation learning, such as deep learning, often requires us to solve nonconvex optimization problems. Despite the recent practical success of deep learning, the lack of theoretical understanding of nonconvex optimization has been a major obstacle to achieving reliability and verifiability in systems with deep learning modules, and to the rigorous study of many proposed deep learning methods. By analyzing the properties of nonconvex optimization in machine learning, we aim to propose new machine learning models and optimization methods.
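As a minimal illustration of why learned representations lead to nonconvex objectives, consider the smallest possible "deep" model: a one-hidden-unit network. The sketch below (a toy example, with all names and data chosen for illustration, not drawn from any particular method of ours) exploits the sign symmetry of such a network to exhibit two distinct global minima whose midpoint has strictly higher loss, directly violating convexity.

```python
import numpy as np

# Toy one-hidden-unit network: f(x; w, v) = v * tanh(w * x).
# Its squared-error training loss is nonconvex in (w, v): the sign
# symmetry (w, v) -> (-w, -v) yields two distinct global minima,
# and the midpoint between them has strictly higher loss.

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=200)
y = np.tanh(2.0 * x)  # targets realizable by the network

def loss(w, v):
    return np.mean((v * np.tanh(w * x) - y) ** 2)

theta_a = (2.0, 1.0)    # a global minimum (zero training loss)
theta_b = (-2.0, -1.0)  # its sign-flipped twin, also zero loss
midpoint = (0.0, 0.0)   # average of the two parameter vectors

chord = 0.5 * (loss(*theta_a) + loss(*theta_b))
print(f"loss at midpoint: {loss(*midpoint):.4f}, chord value: {chord:.4f}")
# A convex loss would satisfy loss(midpoint) <= chord; here it does not.
```

The same symmetry argument applies to networks of any width or depth, which is one elementary reason deep learning objectives cannot be convex in their parameters.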
If you would like to contact us about our work, please refer to our members below and reach out to one of the group leads directly.