Why Music Technology is a Two-Way Street, and How Engineering Can Benefit From It


Paris Smaragdis


Justin Solomon
MIT Departments of Music & Theater Arts and the Schwarzman College of Computing, Special Music Technology Seminars.

Abstract: The evolution of music has been intertwined with advances in technology for millennia. While we primarily recognize the impact of new technology on music, there is also a substantial flow of ideas from music toward technology. In this talk, drawing on my own work, I will show three examples of how “musical thinking” has led to new ideas in computing and engineering. I will discuss how polyphony has transformed the way we perform signal processing and machine learning on audio. I will present a musically intuitive way of thinking about signals, leading to graph-based approaches that afford orders-of-magnitude improvements in computational efficiency and invariance to signal representations. Finally, I will describe how modeling a studio engineer’s intuition led us to design meta-learning systems for adaptive filtering that significantly outperform years of fine-tuning advances. By the end of the talk, I hope to have convinced you that taking a music course will make you a better engineer, and that innovation flows both ways!

Bio: Paris Smaragdis is a Professor and Associate Department Head in the Computer Science Department at the University of Illinois at Urbana-Champaign. After earning a music degree from Berklee College of Music, he completed his graduate studies and a postdoc at MIT in 2001, and has since split his time equally between academic and industrial research. His research lies at the intersection of signal processing and machine learning as they relate to sound. His work has been productized many times worldwide and is widely used in both consumer and professional systems, in applications ranging from on-device speech enhancement to professional music software.
He received the TR35 award in 2006 and has twice received the IEEE Signal Processing Society (SPS) Best Paper Award in recent years (2017, 2020). He is an IEEE Fellow (class of 2015) and was an IEEE SPS Distinguished Lecturer (2016-2017). He has served as chair of the IEEE Machine Learning for Signal Processing Technical Committee, the Audio and Acoustic Signal Processing Technical Committee, and the Data Science Initiative, and as a member of the IEEE Signal Processing Society Board of Governors. He is currently the Editor-in-Chief of the IEEE/ACM Transactions on Audio, Speech, and Language Processing. At UIUC he designed the CS+Music degree program (the first joint program between the Engineering and Fine Arts colleges) and founded the Center for Audio Arts and Sciences, which serves as a nexus for audio-oriented faculty from multiple colleges and disciplines.