Introducing Algorithmic Thinking Theory for Foundation Models

Speaker

Google Research

Host

CSAIL, EECS

The last few months have witnessed tremendous advances in Large Language Model (LLM) reasoning capabilities, with Gemini and GPT achieving gold-medal-level performance at the International Mathematical Olympiad (IMO) [1] and the International Collegiate Programming Contest (ICPC) [2]. Several papers have shown that inference scaling techniques significantly improve the reasoning performance of LLMs, in particular on IMO problems [3]. We will discuss these results, show how one can frame thinking as an optimization problem, relate this framing to the empirical results of [3], and derive optimal (algorithmic) thinking strategies. We will also discuss avenues for refining the model and improving inference scaling methods.
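
To make the notion of inference scaling concrete, here is a minimal sketch of one common instance: best-of-n sampling with majority voting (often called self-consistency). This is only an illustration of the general technique, not the method presented in the talk; the function sample_answer is a hypothetical stand-in for a stochastic LLM call, and the per-sample accuracy of 0.4 is an assumed value.

```python
import random
from collections import Counter

def sample_answer(problem: str, rng: random.Random) -> str:
    """Hypothetical stand-in for one stochastic LLM reasoning pass.

    A real implementation would call a model API; here we simulate a
    model that returns the correct answer with an assumed probability.
    """
    p_correct = 0.4  # assumed per-sample accuracy (illustrative only)
    return "correct" if rng.random() < p_correct else f"wrong-{rng.randint(0, 3)}"

def majority_vote(problem: str, n: int, seed: int = 0) -> str:
    """Draw n independent samples and return the most frequent answer.

    Accuracy improves with n whenever the correct answer is the single
    most likely outcome; this is the basic inference-scaling effect.
    """
    rng = random.Random(seed)
    answers = [sample_answer(problem, rng) for _ in range(n)]
    return Counter(answers).most_common(1)[0][0]

if __name__ == "__main__":
    # Estimate accuracy at several sample budgets n.
    for n in (1, 4, 16, 64):
        wins = sum(majority_vote("example-problem", n, seed=s) == "correct"
                   for s in range(200))
        print(f"n={n:3d}  accuracy={wins / 200:.2f}")
```

Viewed this way, choosing how many samples to draw and how to aggregate them becomes an optimization problem over a fixed compute budget, which is the kind of framing the talk will develop.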

[1] https://deepmind.google/discover/blog/advanced-version-of-gemini-with-deep-think-officially-achieves-gold-medal-standard-at-the-international-mathematical-olympiad/

[2] https://deepmind.google/discover/blog/gemini-achieves-gold-level-performance-at-the-international-collegiate-programming-contest-world-finals/

[3] https://arxiv.org/abs/2507.15855