July 02

Learning geometry-aware representations: 3D object and human pose inference
1:00–2:00 PM ET, 32-D463 (Star)

Abstract: Traditional convolutional networks exhibit unprecedented robustness to intraclass nuisances when trained on big data. However, such data must be augmented to cover geometric transformations. Several recent approaches have shown that data augmentation can be avoided if networks are structured so that feature representations are transformed the same way as the input, a desirable property called equivariance. In this talk we show that global equivariance can be achieved for 2D scaling, rotation, and translation, as well as for 3D rotations. We show state-of-the-art results using an order of magnitude less capacity than competing approaches. Moreover, we show how such geometric embeddings can recover the 3D pose of objects without keypoints and without using ground-truth pose for regression. We finish by showing how graph convolutions enable the recovery of human pose and shape without any 2D annotations.

Bio: Kostas Daniilidis is the Ruth Yalom Stone Professor of Computer and Information Science at the University of Pennsylvania, where he has been on the faculty since 1998. He is an IEEE Fellow. He was director of the GRASP laboratory from 2008 to 2013, Associate Dean for Graduate Education from 2012 to 2016, and Faculty Director of Online Learning from 2012 to 2017. He obtained his undergraduate degree in Electrical Engineering from the National Technical University of Athens in 1986 and his PhD in Computer Science from the University of Karlsruhe in 1992. He is co-recipient of the Best Conference Paper Award at ICRA 2017 and a Best Paper Finalist at IEEE CASE 2015, RSS 2018, and CVPR 2019. Kostas’ main interests today are geometric deep learning, event-based cameras, and action representations as applied to vision-based manipulation and navigation.
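Equivariance, the property the abstract centers on, is easy to state concretely: transforming the input and then applying the network gives the same result as applying the network and then transforming its output. A minimal numpy sketch, not from the talk: plain circular convolution is equivariant to translation, which illustrates the property the speaker generalizes to rotations and scalings.

```python
import numpy as np
from scipy.ndimage import convolve

def feature_map(x, kernel):
    """A single convolutional layer: translation-equivariant by construction."""
    return convolve(x, kernel, mode="wrap")

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
kernel = rng.standard_normal((3, 3))

translate = lambda a: np.roll(a, shift=2, axis=0)  # the input transformation

# Equivariance: f(T(x)) == T(f(x))
lhs = feature_map(translate(x), kernel)
rhs = translate(feature_map(x, kernel))
print(np.allclose(lhs, rhs))  # True: convolution commutes with translation
```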

May 17

Exploiting inter-problem structure in motion planning and control
2:00–3:00 PM ET, 32-D463 (Star)

Abstract: The ability of a robot to plan its own motions is a critical component of intelligent behavior, but it has so far proven challenging to compute high-quality motions quickly and reliably. This limits the speed at which dynamic systems can react to changing sensor input and makes systems less robust to uncertainty. Moreover, planning problems involving many sequential, interrelated tasks, like walking on rough terrain or cleaning a kitchen, can take minutes or hours to solve. This talk will describe methods that exploit experience to solve motion planning and optimal control problems much faster than de novo methods. Unlike typical machine learning settings, the planning and optimal control setting introduces peculiar inter-problem (codimensional) similarity structures that must be exploited to obtain good generalization. This line of work has seen successful application in several domains over the years, including legged robots, dynamic vehicle navigation, multi-object manipulation, and workcell design.

Bio: Kris Hauser is an Associate Professor at Duke University with a joint appointment in the Electrical and Computer Engineering Department and the Mechanical Engineering and Materials Science Department. He received his PhD in Computer Science from Stanford University in 2008 and bachelor's degrees in Computer Science and Mathematics from UC Berkeley in 2003, and he worked as a postdoctoral fellow at UC Berkeley. He joined the faculty at Indiana University from 2009 to 2014, moved to Duke in 2014, and will begin at the University of Illinois Urbana-Champaign in 2019. He is a recipient of a Stanford Graduate Fellowship, a Siebel Scholar Fellowship, the Best Paper Award at IEEE Humanoids 2015, and an NSF CAREER award.
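One simple way to make the "exploit experience" idea concrete is to warm-start a local optimizer from the stored solution of the most similar previously solved problem. A hedged sketch of this pattern, my own illustration rather than Hauser's method; the quadratic objective and the problem parameterization are placeholders:

```python
import numpy as np
from scipy.optimize import minimize

class ExperienceDatabase:
    """Stores (problem parameters, solution) pairs from past planning queries."""
    def __init__(self):
        self.params, self.solutions = [], []

    def add(self, p, sol):
        self.params.append(p)
        self.solutions.append(sol)

    def nearest_solution(self, p):
        # Retrieve the solution of the most similar past problem.
        d = [np.linalg.norm(p - q) for q in self.params]
        return self.solutions[int(np.argmin(d))]

def solve(p, x0):
    # Placeholder optimal-control objective parameterized by p.
    cost = lambda x: np.sum((x - p) ** 2) + 0.1 * np.sum(np.diff(x) ** 2)
    return minimize(cost, x0).x

db = ExperienceDatabase()
p_old = np.array([1.0, 2.0, 3.0])
db.add(p_old, solve(p_old, np.zeros(3)))   # solved de novo once

p_new = np.array([1.1, 2.1, 2.9])          # a similar problem arrives
x_init = db.nearest_solution(p_new)        # warm start from experience
print(solve(p_new, x_init))                # local refinement converges quickly
```

The generalization question the talk raises is exactly when such retrieved solutions remain in the basin of attraction of the new problem's optimum.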

May 16

Digital Twin Knowledge Bases --- Knowledge Representation and Reasoning for Robotic Agents
11:00 AM–12:00 PM ET, 32-D463 (Star)

Abstract: Robotic agents that can accomplish manipulation tasks with the competence of humans have been one of the grand research challenges of artificial intelligence (AI) and robotics for more than 50 years. However, while both fields have made huge progress over the years, this ultimate goal is still out of reach. I believe this is because the knowledge representation and reasoning methods proposed in AI so far are necessary but too abstract. In this talk I propose to address this problem by endowing robots with the capability to internally emulate and simulate their perception-action loops based on realistic images and faithful physics simulations, which are made machine-understandable by casting them as virtual symbolic knowledge bases. These capabilities allow robots to generate huge collections of machine-understandable manipulation experiences, which robotic agents can generalize into commonsense and intuitive-physics knowledge applicable to open varieties of manipulation tasks. The combination of learning, representation, and reasoning will equip robots with an understanding of the relation between their motions and the physical effects they cause at an unprecedented level of realism, depth, and breadth, and enable them to master human-scale manipulation tasks. This breakthrough will be achievable by combining leading-edge simulation and visual rendering technologies with mechanisms to semantically interpret and introspect internal simulation data structures and processes. Robots with such manipulation capabilities can help us better deal with important societal, humanitarian, and economic challenges of our aging societies.

Bio: Michael Beetz is a professor of Computer Science in the Faculty of Mathematics & Informatics at the University of Bremen and head of the Institute for Artificial Intelligence (IAI). He received his diploma degree in Computer Science with distinction from the University of Kaiserslautern. His MSc, MPhil, and PhD degrees were awarded by Yale University in 1993, 1994, and 1996, and his venia legendi by the University of Bonn in 2000. In February 2019 he received an honorary doctorate from Örebro University. He was vice-coordinator of the German cluster of excellence CoTeSys (Cognition for Technical Systems, 2006–2011), coordinator of the European FP7 integrating project RoboHow (web-enabled and experience-based cognitive robots that learn complex everyday manipulation tasks, 2012–2016), and is the coordinator of the German collaborative research centre EASE (Everyday Activity Science and Engineering, since 2017). His research interests include plan-based control of robotic agents, knowledge processing and representation for robots, integrated robot learning, and cognition-enabled perception.
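The idea of casting simulation data as a "virtual symbolic knowledge base" can be pictured as logging simulator states as predicates and answering queries against them. A deliberately toy illustration, my own rather than actual KnowRob/EASE code; the predicate names are invented:

```python
# Toy "virtual knowledge base": simulation states logged as symbolic facts.
facts = set()

def log_sim_state(t, obj, pose, in_contact_with=None):
    """Translate raw simulator state into machine-understandable assertions."""
    facts.add(("at", obj, tuple(round(c, 2) for c in pose), t))
    if in_contact_with:
        facts.add(("contact", obj, in_contact_with, t))

# Imagine these coming from a physics engine at each timestep.
log_sim_state(0.0, "cup", (0.5, 0.2, 0.9))
log_sim_state(1.0, "cup", (0.5, 0.2, 0.0), in_contact_with="floor")

def query(pred, *pattern):
    """Match facts against a pattern; None acts as a wildcard."""
    return [f for f in facts
            if f[0] == pred and all(p is None or p == v
                                    for p, v in zip(pattern, f[1:]))]

# "When did the cup touch the floor?" The robot introspects its own simulation.
print(query("contact", "cup", "floor", None))
```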

May 10

Certifiably-Robust Spatial Perception for Robots and Autonomous Vehicles
2:00–3:00 PM ET, 32-D463 (Star)

Abstract: Spatial perception is concerned with estimating a world model (describing the state of the robot and the environment) from sensor data and prior knowledge. As such, it includes a broad set of robotics and computer vision problems, ranging from object detection and pose estimation to robot localization and mapping. Most perception algorithms require extensive, application-dependent parameter tuning and often fail in off-nominal conditions (e.g., in the presence of large noise and outliers). While many applications can afford occasional failures (e.g., AR/VR, domestic robotics) or can structure the environment to simplify perception (e.g., industrial robotics), safety-critical applications of robotics in the wild, ranging from self-driving vehicles to search & rescue, demand a new generation of algorithms. In this talk I present recent advances in the design of spatial perception algorithms that are robust to extreme amounts of outliers and afford performance guarantees. I first provide a negative result, showing that a general formulation of outlier rejection is inapproximable: in the worst case, it is impossible to design an algorithm (even one “slightly slower” than polynomial time) that approximately finds the set of outliers. While it is impossible to guarantee that an algorithm will reject outliers in worst-case scenarios, our second contribution is to develop certifiably-robust spatial perception algorithms that can assess their own performance on every given problem instance. We consider two popular spatial perception problems, Simultaneous Localization and Mapping and 3D registration, and present efficient algorithms that are certifiably robust to extreme amounts of outliers. As a result, we can solve registration problems in which 99% of the measurements are outliers and succeed in localizing objects where an average human would fail.

Bio: Luca Carlone is the Charles Stark Draper Assistant Professor in the Department of Aeronautics and Astronautics at the Massachusetts Institute of Technology and a Principal Investigator in the Laboratory for Information & Decision Systems (LIDS). He received his PhD from the Polytechnic University of Turin in 2012. He joined LIDS as a postdoctoral associate (2015) and later became a Research Scientist (2016), after spending two years as a postdoctoral fellow at the Georgia Institute of Technology (2013–2015). His research interests include nonlinear estimation, numerical and distributed optimization, and probabilistic inference, applied to sensing, perception, and decision-making in single- and multi-robot systems. He is a recipient of the 2017 IEEE Transactions on Robotics King-Sun Fu Memorial Best Paper Award and the best paper award at WAFR 2016.
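For intuition about outlier-robust registration, a standard baseline (not the certifiable algorithms of the talk, just a common reference point) alternates random minimal samples with inlier counting, solving each candidate alignment in closed form with the Kabsch/SVD method. A sketch under those assumptions:

```python
import numpy as np

def kabsch(A, B):
    """Closed-form best rotation R minimizing ||R @ A - B|| (Kabsch/SVD)."""
    U, _, Vt = np.linalg.svd(A @ B.T)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflections
    return Vt.T @ np.diag([1.0, 1.0, d]) @ U.T

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 100))                 # source points
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
R_true = Q if np.linalg.det(Q) > 0 else -Q        # a proper rotation
B = R_true @ A
B[:, 50:] = 3 * rng.standard_normal((3, 50))      # half the matches are junk

best_R, best_count = None, -1
for _ in range(200):                              # hypothesize-and-test loop
    idx = rng.choice(100, size=3, replace=False)  # minimal correspondence set
    R = kabsch(A[:, idx], B[:, idx])
    count = int(np.sum(np.linalg.norm(R @ A - B, axis=0) < 0.1))
    if count > best_count:
        best_R, best_count = R, count

print(best_count, np.linalg.norm(best_R - R_true))  # ~50 inliers, tiny error
```

Such randomized schemes work well at moderate outlier rates but carry no guarantee; the talk's point is to replace "usually works" with per-instance certificates, even at 99% outliers.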

May 03

Socio-Emotive Intelligence for Long-term Robot Companions: Why and How?
2:00–3:00 PM ET, 32-D463 (Star)

Abstract: In this talk, I’d like to engage the Robotics @ MIT community in questioning whether robots need socio-emotive intelligence. To answer this question, we first need to think about a new dimension for evaluating the AI algorithms and systems we build: measuring their impact on people’s lives in real-world contexts. I will highlight a number of provocative research findings from our recent long-term deployments of social robots in schools, homes, and older-adult living communities. We employ an affective reinforcement learning approach to personalize the robot’s actions, modulating each user’s engagement and maximizing the benefit of the interaction. The robot observes users’ verbal and nonverbal affective cues to understand the user’s state and to receive feedback on its actions. Our results show that interaction with a robot companion influences users’ beliefs, learning, and how they interact with others. Affective personalization boosts these effects and helps sustain long-term engagement. During our deployment studies, we observed that people treat and interact with artificial agents as social partners and catalysts. We also learned that the effect of the interaction strongly correlates with the social-relational bond the user has built with the robot. So, to answer the question “does a robot need socio-emotive intelligence,” I argue that we should only draw conclusions based on the impact it has on the people living with it: is it helping us flourish in the direction in which we want to thrive?

Bio: Hae Won Park is a Research Scientist at the MIT Media Lab and a Principal Investigator of the Social Robot Companions Program. Her research focuses on socio-emotive AI and the personalization of social robots that support long-term interaction and relationships between users and their robot companions. Her work spans a range of applications, including education for young children and wellbeing benefits for older adults. Her research has been published at top robotics and AI venues and has received awards for best paper (HRI 2017), innovative robot applications (ICRA 2013), and pecha-kucha presentation (ICRA 2014). Hae Won received her PhD from Georgia Tech in 2014, at which time she also co-founded Zyrobotics, an assistive education robotics startup that was recognized as the best 2015 US robotics startup by Robohub and was a finalist for the Intel Innovation Award.
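The personalization loop described above can be pictured as a bandit problem: the robot picks an action, reads engagement from affective cues as a reward, and updates per-user value estimates. A deliberately simplified epsilon-greedy sketch, my own illustration; the action set and the reward signal are placeholders, not the deployed system:

```python
import random

actions = ["encourage", "challenge", "take_turn", "celebrate"]
q = {a: 0.0 for a in actions}   # per-user value estimates
n = {a: 0 for a in actions}     # visit counts
EPS = 0.1

def observed_engagement(action):
    """Placeholder for the real signal: gaze, smiles, verbal affect cues."""
    base = {"encourage": 0.6, "challenge": 0.4,
            "take_turn": 0.5, "celebrate": 0.7}
    return base[action] + random.gauss(0, 0.1)

for step in range(500):
    a = (random.choice(actions) if random.random() < EPS
         else max(q, key=q.get))              # explore vs. exploit
    r = observed_engagement(a)                # affective feedback as reward
    n[a] += 1
    q[a] += (r - q[a]) / n[a]                 # incremental mean update

print(max(q, key=q.get))  # the action this particular user responds to best
```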

April 26

Shared Autonomy for Robots in Dynamic Environments: Advances in Learning Control and Representations
2:00–3:00 PM ET, 32-D463 (Star)

Abstract: The next generation of robots is going to work much more closely with humans and other robots, and to interact significantly with the environment around them. As a result, the key paradigms are shifting from isolated decision-making systems to shared control, with significant autonomy devolved to the robot platform and end-users in the loop making only high-level decisions. This talk will briefly introduce powerful machine learning technologies, ranging from robust multi-modal sensing, shared representations, and scalable real-time learning and adaptation to compliant actuation, that are enabling us to reap the benefits of increased autonomy while still feeling securely in control. This also raises a fundamental question: while the robots are ready to share control, what is the optimal trade-off between autonomy and control that we are comfortable with? Domains where this debate is relevant include unmanned space exploration, self-driving cars, offshore asset inspection & maintenance, deep-sea and autonomous mining, shared manufacturing, exoskeletons and prosthetics for rehabilitation, and smart cities, to list a few.

Bio: Sethu Vijayakumar is Professor of Robotics in the School of Informatics at the University of Edinburgh and Director of the Edinburgh Centre for Robotics. He holds the prestigious Senior Research Fellowship of the Royal Academy of Engineering, co-funded by Microsoft Research, and is also an Adjunct Faculty member of the University of Southern California (USC), Los Angeles. Professor Vijayakumar, who holds a PhD (1998) from the Tokyo Institute of Technology, is world-renowned for the development of large-scale machine learning techniques for the real-time control of several iconic, high-degree-of-freedom anthropomorphic robotic systems, including the SARCOS and HONDA ASIMO humanoid robots, the KUKA-DLR robot arm, and the iLIMB prosthetic hand. His latest project (2016) involves a collaboration with the NASA Johnson Space Center on the Valkyrie humanoid robot being prepared for unmanned robotic pre-deployment missions to Mars. He is the author of over 180 highly cited publications in robotics and machine learning and the winner of the IEEE Vincent Bendix award, the Japanese Monbusho fellowship, the 2013 IEEE Transactions on Robotics Best Paper Award, and several other paper awards from leading conferences and journals. He has led several UK, EU, and international projects in robotics, attracted over £38M in research grants over the last 8 years, and has been appointed to grant review panels for DFG (Germany), NSF (USA), and the EU. He is a Fellow of the Royal Society of Edinburgh and a keen science communicator with a significant annual outreach agenda. He is the recipient of the 2015 Tam Dalyell Award for excellence in engaging the public with science, serves as a judge on BBC Robot Wars, and was involved in the UK-wide launch of the BBC micro:bit initiative for STEM education. Since September 2018, he has also served as Programme co-Director of The Alan Turing Institute, the UK’s national institute for data science and artificial intelligence, driving its Robotics and Autonomous Systems agenda.
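The "optimal trade-off between autonomy and control" question is often formalized as arbitration: blending the human's command with the robot's autonomous policy by a confidence weight. A minimal sketch of generic linear blending, not Vijayakumar's specific controllers; the confidence schedule is an invented placeholder:

```python
import numpy as np

def blend(u_human, u_robot, alpha):
    """Arbitrated command: alpha=0 is full manual, alpha=1 is full autonomy."""
    return (1 - alpha) * u_human + alpha * u_robot

def confidence(goal_probability):
    """Raise autonomy as the robot grows confident about the user's intent."""
    return float(np.clip(2 * goal_probability - 0.5, 0.0, 1.0))

u_h = np.array([0.8, 0.1])       # noisy human joystick command
u_r = np.array([0.5, 0.4])       # robot's plan toward the inferred goal
for p in (0.3, 0.6, 0.9):        # robot's belief in its goal estimate
    print(p, blend(u_h, u_r, confidence(p)))
```

The design question the talk raises is precisely how this alpha schedule should behave so that end-users still feel securely in control.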

April 19

Formalizing Teamwork in Human-Robot Interaction
2:00–3:00 PM ET, 32-D463 (Star)

Abstract: Robots out in the world today work for people but not with people. Before robots can work closely with ordinary people as part of a human-robot team in a home or office setting, they need to acquire a new mix of functional and social skills. Working with people requires a shared understanding of the task, capabilities, intentions, and background knowledge. For robots to act jointly as part of a team with people, they must engage in collaborative planning, which involves forming a consensus through an exchange of information about goals, capabilities, and partial plans. Often, much of this information is conveyed through implicit communication. In this talk, I formalize components of teamwork involving collaboration, communication, and representation. I illustrate how these concepts interact in the application of social navigation, which I argue is a first-class example of teamwork. In this setting, participants must avoid collision by legibly conveying intended passing sides via nonverbal cues like path shape. A topological representation using the braid groups enables the robot to reason about a small, enumerable set of passing outcomes. I show how implicit communication of topological group plans achieves rapid convergence to a group consensus, and how a robot in the group can deliberately influence the ultimate outcome to maximize joint performance, yielding pedestrian comfort with the robot.

Bio: Ross A. Knepper is an Assistant Professor in the Department of Computer Science at Cornell University, where he directs the Robotic Personal Assistants Lab. His research focuses on the theory and algorithms of human-robot interaction in collaborative work. He builds systems to perform complex tasks where partnering a human and a robot is advantageous for both, such as factory assembly or home chores. Knepper has built robot systems that can assemble IKEA furniture, ask for help when something goes wrong, interpret informal speech and gesture commands, and navigate in a socially competent manner among people. Before Cornell, Knepper was a Research Scientist at MIT. He received his Ph.D. in Robotics from Carnegie Mellon University in 2011.
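To make the "small enumerable set of passing outcomes" concrete: for a robot and a few oncoming pedestrians, each pairwise encounter resolves to passing left or right, so the joint outcomes can be enumerated and scored. A toy sketch of my own; the cost terms merely stand in for the braid-group machinery of the talk:

```python
from itertools import product

pedestrians = ["p1", "p2", "p3"]

def cost(assignment):
    """Toy joint cost: prefer passing right, penalize mixed signals."""
    c = sum(0.0 if side == "right" else 0.4 for side in assignment.values())
    if len(set(assignment.values())) > 1:   # weaving between people is illegible
        c += 1.0
    return c

# Enumerate all topologically distinct joint passing outcomes: 2^3 = 8.
outcomes = [dict(zip(pedestrians, sides))
            for sides in product(["left", "right"], repeat=3)]
best = min(outcomes, key=cost)
print(best)   # the plan the robot then conveys implicitly via path shape
```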

April 12

Machine Learning for Robotics: Safety and Performance Guarantees for Learning-Based Control
2:00–3:00 PM ET, 32-D463 (Star)

Abstract: The ultimate promise of robotics is to design devices that can physically interact with the world. To date, robots have been deployed primarily in highly structured and predictable environments. However, we envision the next generation of robots (ranging from self-driving and self-flying vehicles to robot assistants) operating in unpredictable and generally unknown environments alongside humans. This challenges current robot algorithms, which have been largely based on a priori knowledge about the system and its environment. While research has shown that robots are able to learn new skills from experience and adapt to unknown situations, these results have been limited to learning single tasks and have been demonstrated in simulation or lab settings. The next challenge is to enable robot learning in real-world application scenarios. This will require versatile, data-efficient, online learning algorithms that guarantee safety when placed in a closed-loop system architecture. It will also require answering the fundamental question of how to design learning architectures for dynamic and interactive agents. This talk will highlight our recent progress in combining learning methods with formal results from control theory. By combining models with data, our algorithms achieve adaptation to changing conditions during long-term operation, data-efficient multi-robot, multi-task transfer learning, and safe reinforcement learning. We demonstrate our algorithms in vision-based off-road driving and drone flight experiments, as well as on mobile manipulators.

Bio: Angela Schoellig is an Assistant Professor at the University of Toronto Institute for Aerospace Studies and an Associate Director of the Centre for Aerial Robotics Research and Education. She holds a Canada Research Chair in Machine Learning for Robotics and Control, is a principal investigator of the NSERC Canadian Robotics Network, and is a Faculty Affiliate of the Vector Institute for Artificial Intelligence. She conducts research at the intersection of robotics, controls, and machine learning. Her goal is to enhance the performance, safety, and autonomy of robots by enabling them to learn from past experiments and from each other. She is a recipient of a Sloan Research Fellowship (2017), an Ontario Early Researcher Award (2017), and a Connaught New Researcher Award (2015). She is one of MIT Technology Review’s Innovators Under 35 (2017), a Canada Science Leadership Program Fellow (2014), and one of Robohub’s “25 women in robotics you need to know about” (2013). Her team won the 2018 North American SAE AutoDrive Challenge sponsored by General Motors. Her PhD at ETH Zurich (2013) was awarded the ETH Medal and the Dimitris N. Chorafas Foundation Award. She holds both an M.Sc. in Engineering Cybernetics from the University of Stuttgart (2008) and an M.Sc. in Engineering Science and Mechanics from the Georgia Institute of Technology (2007). More information can be found at: www.schoellig.name.
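A common way to "guarantee safety when placed in a closed-loop system architecture" is a safety filter: the learned policy proposes an action, and a model-based layer minimally modifies it so the state stays in a safe set. A simplified projection-style sketch on a one-dimensional integrator, not Schoellig's specific formulation:

```python
import numpy as np

X_MAX, U_MAX, DT = 1.0, 1.0, 0.1

def learned_policy(x):
    """Stand-in for a learned controller; it happily targets an unsafe state."""
    return 3.0 * (1.2 - x)

def safety_filter(x, u):
    """Minimally modify u so the next state stays inside |x| <= X_MAX."""
    u = float(np.clip(u, -U_MAX, U_MAX))        # respect actuator limits
    x_next = x + DT * u                         # one-step integrator model
    if abs(x_next) > X_MAX:                     # action would leave the safe set
        u = (np.sign(x_next) * X_MAX - x) / DT  # project onto its boundary
    return u

x = 0.8
for _ in range(5):
    u = safety_filter(x, learned_policy(x))
    x += DT * u
    print(round(x, 3))  # approaches 1.0 but never crosses it
```

The learned component can then be updated freely from data, since the filter, not the learner, carries the safety guarantee.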

April 05

Models of Robotic Manipulation
2:00–3:00 PM ET, 32-D463 (Star)

Abstract: Some of my earliest work focused on robot grasping: on decomposing the grasping process into phases, examining the contacts and motions occurring in each phase, and developing conditions under which a stable grasp might be obtained while addressing initial pose uncertainty. That was the start of my interest in the physics of manipulation, which has continued to this day. The early work was inspired partly by robot programming experiences, especially applications in manufacturing automation. It is possible that materials handling will play a similar role, as the primary application of research in autonomous manipulation, changing our perceptions of the challenges and opportunities of our field.

Bio: Matt Mason earned his PhD in Computer Science and Artificial Intelligence from MIT in 1982. He has worked in robotics for over forty years. For most of that time, Matt has been a Professor of Robotics and Computer Science at Carnegie Mellon University (CMU). He was Director of the Robotics Institute from 2004 to 2014. Since 2014, Matt has split his time between CMU and Berkshire Grey, where he is Chief Scientist. Berkshire Grey is a Boston-based company that produces innovative materials-handling solutions for eCommerce and logistics. Matt is a Fellow of the AAAI and the IEEE. He won the IEEE R&A Pioneer Award and the IEEE Technical Field Award in Robotics and Automation.

March 22

Safety and Resilience in Multi-Agent Systems: Theory and Algorithms for Adversarially-Robust Multi-Robot Teams and Human-Robot Collaboration
2:00–3:00 PM ET, 32-D463 (Star)

Abstract: Planning, decision-making, and control for uncertain multi-agent systems has been a popular topic of research with numerous applications, e.g., in robotic networks operating in dynamic, unknown, or even adversarial environments, with or without humans present. Despite significant progress over the years, challenges such as constraints (in terms of state and time specifications), malicious or faulty information, environmental uncertainty, and scalability are typically not treated well enough by existing methods. In the first part of this talk, I will present some of our recent results and ongoing work on the safety and resilience of multi-agent systems in the presence of adversaries. I will discuss (i) our approach to achieving safe, resilient consensus in the presence of malicious information and its application to resilient leader-follower robot teams under bounded inputs, and (ii) our method for safe multi-agent motion planning and de-confliction using finite-time controllers and estimators in the presence of bounded uncertainty. In the second part of the talk, I will present (iii) our results on human-robot collaboration involving the unsupervised, on-the-fly learning of assistive information (camera views) by teams of co-robots in human multi-tasking environments.

Bio: Dimitra Panagou received the Diploma and PhD degrees in Mechanical Engineering from the National Technical University of Athens, Greece, in 2006 and 2012, respectively. Since September 2014 she has been an Assistant Professor in the Department of Aerospace Engineering, University of Michigan. Prior to joining the University of Michigan, she was a postdoctoral research associate with the Coordinated Science Laboratory, University of Illinois at Urbana-Champaign (2012–2014), a visiting research scholar with the GRASP Lab, University of Pennsylvania (June 2013, fall 2010), and a visiting research scholar with the Mechanical Engineering Department, University of Delaware (spring 2009). Dr. Panagou's research program emphasizes the exploration, development, and implementation of control and estimation methods to address real-world problems via provably correct solutions. Her research spans the areas of nonlinear systems and control; control of multi-agent systems and networks; distributed systems and control; motion and path planning; switched and hybrid systems; constrained decision-making and control; and navigation, guidance, and control of aerospace vehicles. She is particularly interested in the development of provably correct methods for the robustly safe and secure (resilient) operation of autonomous systems in complex missions, with applications in unmanned aerial systems, robot/sensor networks, and multi-vehicle systems (ground, marine, aerial, space). Dr. Panagou is a recipient of a NASA Early Career Faculty Award and an AFOSR Young Investigator Award, and a member of the IEEE and the AIAA. More details: http://www-personal.umich.edu/~dpanagou/research/index.html
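A classic building block for "safe, resilient consensus in the presence of malicious information" is the W-MSR rule: each agent discards the F largest and F smallest neighbor values relative to its own before averaging, bounding the influence of up to F adversaries per neighborhood. A small sketch of this standard textbook rule, not necessarily Panagou's exact algorithm:

```python
def wmsr_update(own, neighbors, F):
    """One W-MSR step: ignore the F largest values above one's own and the
    F smallest below it, then average the rest together with one's own."""
    high = sorted(v for v in neighbors if v > own)
    low = sorted(v for v in neighbors if v < own)
    same = [v for v in neighbors if v == own]
    kept = low[F:] + same + high[:max(len(high) - F, 0)] + [own]
    return sum(kept) / len(kept)

# Five agents; agent "adv" keeps broadcasting a malicious constant.
values = {"a": 0.0, "b": 1.0, "c": 2.0, "d": 3.0, "adv": 100.0}
topology = {n: [m for m in values if m != n] for n in values}  # complete graph

for _ in range(30):
    new = {n: wmsr_update(values[n], [values[m] for m in topology[n]], F=1)
           for n in values if n != "adv"}
    values.update(new)  # synchronous update; the adversary never changes

print({n: round(v, 2) for n, v in values.items() if n != "adv"})
# Honest agents converge inside the honest range despite "adv".
```

Convergence of such rules depends on the network being sufficiently robust, which is where the theory in talks like this one comes in.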

March 08

Toward robust manipulation in complex scenarios
2:00–3:00 PM ET, 32-D463 (Star)

Abstract: Over the last few years, advances in deep learning and GPU-based computing have enabled significant progress in several areas of robotics, including visual recognition, real-time tracking, object manipulation, and learning-based control. This progress has turned applications such as autonomous driving and delivery tasks in warehouses, hospitals, or hotels into realistic application scenarios. However, robust manipulation in complex settings is still an open research problem. Various research efforts show promising results on individual pieces of the manipulation puzzle, including manipulator control, touch sensing, object pose detection, task and motion planning, and object pickup. In this talk, I will present our recent efforts to integrate such components into a complete manipulation system. Specifically, I will describe a mobile robot manipulator that moves through a kitchen, can open and close cabinet doors and drawers, detect and pick up objects, and move these objects to desired locations. Our baseline system is designed to be applicable in a wide variety of environments, relying only on 3D articulated models of the kitchen and the relevant objects. I will discuss the design choices behind our approach, the lessons we have learned so far, and various research directions toward enabling more robust and general manipulation systems.

Bio: Dieter Fox is Senior Director of Robotics Research at NVIDIA. He is also a Professor in the Paul G. Allen School of Computer Science & Engineering at the University of Washington, where he heads the UW Robotics and State Estimation Lab. Dieter obtained his Ph.D. from the University of Bonn, Germany. His research is in robotics and artificial intelligence, with a focus on state estimation and perception applied to problems such as mapping, object detection and tracking, manipulation, and activity recognition. He has published more than 200 technical papers and is co-author of the textbook “Probabilistic Robotics”. He is a Fellow of the IEEE and the AAAI and has received several best paper awards at major robotics, AI, and computer vision conferences. He was an editor of the IEEE Transactions on Robotics, program co-chair of the 2008 AAAI Conference on Artificial Intelligence, and program chair of the 2013 Robotics: Science and Systems conference.
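The component integration the abstract describes can be pictured as a sense-plan-act loop wired through well-defined interfaces. A schematic sketch showing structure only, with invented names; this is not NVIDIA's actual stack:

```python
from dataclasses import dataclass, field

@dataclass
class WorldModel:
    object_poses: dict = field(default_factory=dict)  # from pose detection
    articulation: dict = field(default_factory=dict)  # doors, drawers, joints

class Perception:
    """Stand-in for detection and tracking; the mug appears once the drawer opens."""
    def __init__(self):
        self.drawer_open = False
    def update(self, world):
        if self.drawer_open:
            world.object_poses["mug"] = (0.4, 0.1, 0.9)

class TaskPlanner:
    """Task-and-motion layer: choose the next subtask from the world model."""
    def next_subtask(self, world, goal):
        return ("pick", goal) if goal in world.object_poses \
               else ("open_drawer", "drawer_1")

class Controller:
    """Stand-in for manipulator control; outcomes feed back into replanning."""
    def execute(self, subtask, perception):
        print("executing:", subtask)
        if subtask[0] == "open_drawer":
            perception.drawer_open = True
        return True

world = WorldModel(articulation={"drawer_1": "prismatic"})
perception, planner, control = Perception(), TaskPlanner(), Controller()

for _ in range(2):                      # the sense-plan-act integration loop
    perception.update(world)
    subtask = planner.next_subtask(world, goal="mug")
    control.execute(subtask, perception)
```

The robustness questions in the talk live precisely at these interfaces: what each module assumes about the others, and what happens when a component fails.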