Visual Understanding of Human Activity: Towards Ambient Intelligence

Speaker: Serena Yeung, Stanford

Title: Visual Understanding of Human Activity: Towards Ambient Intelligence

Abstract:
A long-standing goal of AI has been to build intelligent systems that interact with humans and assist us in every aspect of our lives. Half of this story is creating robotic and autonomous agents. The other half is endowing the physical spaces and environments around us with ambient intelligence. In this talk I will discuss my work on visual understanding of human activity towards the latter goal, presenting recent work along several directions required for ambient intelligence. The first addresses the dense and detailed action labeling needed for full contextual awareness. The second is a reinforcement learning-based approach to learning policies for efficient action detection, an important factor in embedded vision. The third is a method for learning new concepts from noisy web videos, towards the fast adaptability needed for constantly evolving smart environments. Finally, I will discuss the transfer of my work from theory into practice, specifically the implementation of an AI-Assisted Smart Hospital, in which we have equipped units at two partner hospitals with visual sensors, towards enabling ambient intelligence for assistance with clinical care.

Bio:
Serena Yeung is a fifth-year PhD student in the Stanford Vision and Learning Lab, advised by Fei-Fei Li and Arnold Milstein. Her research focuses on developing computer vision algorithms for video understanding and human activity recognition. More broadly, she is passionate about using these algorithms to equip physical spaces with ambient intelligence, in particular a Smart Hospital. Serena is a member of the Stanford Partnership in AI-Assisted Care (PAC), a collaboration between the Stanford School of Engineering and School of Medicine. She interned at Facebook AI Research in 2016 and at Google Cloud AI in 2017. She was also a co-instructor for Stanford's CS231n course on Convolutional Neural Networks for Visual Recognition in 2017.