December 15, 2017, 2:00–3:00 PM (America/New_York)
Dirty Data, Robotics, and Artificial Intelligence
Abstract: Large training datasets have revolutionized AI research, but enabling similar breakthroughs in other fields, such as robotics, requires a new understanding of how to acquire, clean, and structure emergent forms of large-scale, unstructured sequential data. My talk presents a systematic approach to handling such dirty data in the context of modern AI applications. I start by introducing a statistical formalization of data cleaning in this setting, including research on (1) how common data cleaning operations affect model training, (2) how data cleaning programs can be expected to generalize to unseen data, and (3) how to prioritize limited human intervention in rapidly growing datasets. Then, using surgical robotics as a motivating example, I present a series of robust Bayesian models for automatically extracting hierarchical structure from highly varied and noisy robot trajectory data, facilitating imitation learning and reinforcement learning on short, consistent sub-problems. I show how the combination of clean training data and structured learning tasks enables learning highly accurate control policies in tasks ranging from surgical cutting to debridement.

Bio: Sanjay Krishnan is a Computer Science PhD candidate in the RISELab and in the Berkeley Laboratory for Automation Science and Engineering at UC Berkeley. His research studies problems at the intersection of database theory, machine learning, and robotics. Sanjay's work has received a number of awards, including the 2016 SIGMOD Best Demonstration award, the 2015 IEEE GHTC Best Paper award, and a Sage Scholar award. https://www.ocf.berkeley.edu/~sanjayk/
Star (32-D463)
December 05
Cancelled
Toward Personal Robots and Daily Life
This event has been cancelled
November 28
November 28, 2017, 11:00 AM–12:00 PM (America/New_York)
Robots at Sea
Abstract: Underwater robotics is undergoing a transformation. Advances in AI and machine learning are enabling a new generation of underwater robots to make intelligent decisions (where to sample? how to navigate?) by reasoning about their environment (what is the shipping and water forecast?). At USC, we are engaged in a long-term effort to explore ideas and develop algorithms that will lead to persistent, autonomous underwater robots. In this talk, I will discuss some of our recent results focusing on two problems in adaptive sampling: underwater change detection and biological sampling. Time permitting, I will also present our work on hazard avoidance, allowing underwater robots to operate in regions where there is substantial ship traffic.

Bio: Gaurav S. Sukhatme is the Fletcher Jones Professor of Computer Science and Electrical Engineering at the University of Southern California (USC). He currently serves as the Executive Vice Dean of the USC Viterbi School of Engineering. His research is in networked robots with applications to aquatic robots and on-body networks. Sukhatme has published extensively in these areas and served as PI on numerous federal grants. He is a Fellow of the IEEE and a recipient of the NSF CAREER award and the Okawa Foundation research award. He is one of the founders of the RSS conference, serves on the RSS Foundation Board, and has served as program chair of three major robotics conferences (ICRA, IROS and RSS). He is the Editor-in-Chief of the Springer journal Autonomous Robots.
32-G449
November 14
November 14, 2017, 11:00 AM–12:00 PM (America/New_York)
Multicellular Machines: A Bio-inspired approach to automated electromechanical design and fabrication
Abstract: Designing and building robots is a labor-intensive process that requires experts at all stages. This reality is due, in part, to the fact that the robot design space is unbounded. To address this issue, I have borrowed a simple but powerful design concept from multicellular organisms: the regular tiling of a relatively small number of individual cell types yields assemblies with spectacular functional capacity. This capability comes at the cost of substantial complexity in design synthesis and assembly, which nature has addressed via evolutionary search and developmental processes. I will describe my application of these ideas to electromechanical systems, which has led to the development of electromechanical “cell” types, automated assembly methods, and design synthesis tools. The inspiration for this work comes from ongoing collaborations with ecologists and evolutionary biologists. As part of this effort, I have developed wildlife monitoring tools that provide unprecedented volumes of data, enabling previously intractable scientific studies of small organisms. Sensor mass, which is dominated by energy storage, is the primary constraint for these applications, and I will discuss a time-of-arrival tracking system that is three orders of magnitude more energy-efficient than equivalent position tracking methods.

Bio Sketch: Dr. Robert MacCurdy is a Postdoctoral Associate with Daniela Rus at MIT and will be an assistant professor at the University of Colorado Boulder in January 2018. He is developing new methods to automatically design and manufacture robots. As part of this work, he developed an additive manufacturing process, Printable Hydraulics, that incorporates liquids into 3D-printed parts as they are built, allowing hydraulically actuated robots to be automatically fabricated. Rob did his PhD work with Hod Lipson at Cornell University, where he developed materials and methods to automatically design and build electromechanical systems using additive manufacturing and digital materials. Funded by an NSF graduate research fellowship and a Liebmann Fund fellowship, this work demonstrated systems capable of automatically assembling functional electromechanical devices, with the goal of printing robots that literally walk out of the printer. Rob is also committed to developing research tools that automate the study and conservation of wildlife, work that he began while working as a research engineer at Cornell’s Lab of Ornithology. He holds a B.A. in Physics from Ithaca College, a B.S. in Electrical Engineering from Cornell University, and an M.S. and PhD in Mechanical Engineering from Cornell University.
32-G449 Patil/Kiva
November 07
November 7, 2017, 11:00 AM–12:00 PM (America/New_York)
Mobile Manipulators for Intelligent Physical Assistance
Abstract: Since founding the Healthcare Robotics Lab at Georgia Tech 10 years ago, my research has focused on developing mobile manipulators for intelligent physical assistance. Mobile manipulators are mobile robots with the ability to physically manipulate their surroundings. They offer a number of distinct capabilities compared to other forms of robotic assistance, including being able to operate independently from the user, being appropriate for users with diverse needs, and being able to assist with a wide variety of tasks, such as object retrieval, hygiene, and feeding. My lab has worked with hundreds of representative end users, including older adults, nurses, and people with severe motor impairments, to better understand the challenges and opportunities associated with this technology. In my talk, I will provide evidence for the following assertions: 1) many people will be open to assistance from mobile manipulators; 2) assistive mobile manipulation at home is feasible for people with profound motor impairments using off-the-shelf computer access devices; and 3) permitting contact and intelligently controlling forces increases the effectiveness of mobile manipulators. I will conclude with a brief overview of some of our most recent research.

Bio: Charles C. Kemp (Charlie) is an Associate Professor at the Georgia Institute of Technology in the Department of Biomedical Engineering with adjunct appointments in the School of Interactive Computing and the School of Electrical and Computer Engineering. He earned a doctorate in Electrical Engineering and Computer Science (2005), an MEng, and a BS from MIT. In 2007, he joined the faculty at Georgia Tech, where he directs the Healthcare Robotics Lab ( http://healthcare-robotics.com ). He is an active member of Georgia Tech’s Institute for Robotics & Intelligent Machines (IRIM) and its multidisciplinary Robotics Ph.D. program. He has received a 3M Non-tenured Faculty Award, the Georgia Tech Research Corporation Robotics Award, a Google Faculty Research Award, and an NSF CAREER award. He was a Hesburgh Award Teaching Fellow in 2017. His research has been covered extensively by the popular media, including the New York Times, Technology Review, ABC, and CNN.
32-G449 Patil/Kiva
October 31
The Dexterity Network: Deep Learning to Plan Robust Robot Grasps using Datasets of Synthetic Point Clouds, Analytic Grasp Metrics, and 3D Object Models
Jeff Mahler
UC Berkeley
October 31, 2017, 11:00 AM–12:00 PM (America/New_York)
Abstract: Reliable robot grasping across a wide variety of objects is challenging due to imprecision in sensing, which leads to uncertainty about properties such as object shape, pose, mass, and friction. Recent results suggest that deep learning from millions of labeled grasps and images can be used to rapidly plan successful grasps across a diverse set of objects without explicit inference of physical properties, but training typically requires tedious hand-labeling or months of execution time. In this talk I present the Dexterity Network (Dex-Net), a framework to automatically synthesize training datasets containing millions of point clouds and robot grasps labeled with robustness to perturbations, by analyzing contact models across thousands of 3D object CAD models. I will describe generative models for datasets of both parallel-jaw and suction-cup grasps. Experiments suggest that Convolutional Neural Networks trained from scratch on Dex-Net datasets can be used to plan grasps for novel objects in clutter with high precision on a physical robot.

Bio: Jeff Mahler is a Ph.D. student at the University of California, Berkeley, advised by Prof. Ken Goldberg, and a member of the AUTOLAB and the Berkeley Artificial Intelligence Research Lab. His current research is on the Dexterity Network (Dex-Net), a project that aims to train robot grasping policies from massive synthetic datasets of labeled point clouds and grasps generated using stochastic contact analysis across thousands of 3D object CAD models. He has also studied deep learning from demonstration and control for surgical robots. He received the National Defense Science and Engineering Fellowship in 2015 and cofounded the 3D scanning startup Lynx Laboratories in 2012 as an undergraduate at the University of Texas at Austin.
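The learning component described in the abstract is a convolutional network that maps a depth-image observation of a candidate grasp to an estimate of grasp robustness. The following minimal PyTorch sketch illustrates that idea only; the layer sizes, 32x32 input crop, and scalar gripper-depth feature are illustrative assumptions, not the published Dex-Net architecture.

```python
# Minimal sketch of a grasp-quality CNN in the spirit of Dex-Net.
# Layer sizes, input resolution, and the "grasp depth" feature are
# illustrative assumptions, not the published architecture.
import torch
import torch.nn as nn

class GraspQualityNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5), nn.ReLU(),   # 32x32 depth crop -> 28x28
            nn.MaxPool2d(2),                               # -> 14x14
            nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(),   # -> 10x10
            nn.MaxPool2d(2),                               # -> 5x5
        )
        self.head = nn.Sequential(
            nn.Linear(32 * 5 * 5 + 1, 128), nn.ReLU(),
            nn.Linear(128, 1),                             # logit of grasp robustness
        )

    def forward(self, depth_crop, grasp_depth):
        # depth_crop: (B, 1, 32, 32) depth image centered on the grasp axis
        # grasp_depth: (B, 1) gripper depth relative to the camera
        features = self.conv(depth_crop).flatten(1)
        return torch.sigmoid(self.head(torch.cat([features, grasp_depth], dim=1)))

# Training would minimize binary cross-entropy against the synthetic
# robustness labels computed from the analytic contact models.
model = GraspQualityNet()
p_success = model(torch.randn(4, 1, 32, 32), torch.rand(4, 1))
```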
32-G449 Patil/Kiva
October 17
Synthesis for robots: guarantees and feedback for complex behaviors
Hadas Kress-Gazit
Cornell
October 17, 2017, 11:00 AM–12:00 PM (America/New_York)
Abstract: Getting a robot to perform a complex task, for example completing the DARPA Robotics Challenge, typically requires a team of engineers who program the robot in a time-consuming and error-prone process and who validate the resulting robot behavior through testing in different environments. The vision of synthesis for robotics is to bypass the manual programming and testing cycle by enabling users to provide specifications (what the robot should do) and automatically generating, from the specification, robot control that provides guarantees for the robot’s behavior. In this talk I will describe the work done in my group towards realizing the synthesis vision. I will discuss what it means to provide guarantees for physical robots, the types of feedback we can generate, the specification formalisms that we use, and our approach to synthesis for different robotic systems such as modular robots and multi-robot systems.

Bio: Hadas Kress-Gazit is an Associate Professor at the Sibley School of Mechanical and Aerospace Engineering at Cornell University. She received her Ph.D. in Electrical and Systems Engineering from the University of Pennsylvania in 2008 and has been at Cornell since 2009. Her research focuses on formal methods for robotics and automation, and more specifically on synthesis for robotics: automatically creating verifiable robot controllers for complex high-level tasks. Her group explores different types of robotic systems, including modular robots, soft robots and swarms, and synthesizes (pun intended) ideas from different communities such as robotics, formal methods, control, hybrid systems and computational linguistics. She received an NSF CAREER award in 2010, a DARPA Young Faculty Award in 2012, and the Fiona Ip Li '78 and Donald Li '75 Excellence in Teaching award in 2013. She lives in Ithaca with her partner and two kids.
32-G449 Patil/Kiva
October 03
A Control and Estimation Framework for Robotic Swarms in Uncertain Environments
Spring Berman
Arizona State University
October 3, 2017, 11:00 AM–12:00 PM (America/New_York)
Abstract: Robotic “swarms” comprising tens to thousands of robots have the potential to greatly reduce human workload and risk to human life. In many scenarios, the robots will lack global localization, prior data about the environment, and reliable communication, and they will be restricted to local sensing and signaling. We are developing a rigorous control and estimation framework for swarms that are subject to these constraints. This framework will enable swarms to operate largely autonomously, with user input consisting only of high-level directives. In this talk, I describe our work on various aspects of the framework, including scalable strategies for coverage, mapping, scalar field estimation, and cooperative manipulation. We use stochastic and deterministic models from chemical reaction network theory and fluid dynamics to describe the robots’ roles, state transitions, and motion at both the microscopic (individual) and macroscopic (population) levels. We also employ techniques from algebraic topology, nonlinear control theory, and optimization, and we model analogous behaviors in ant colonies to identify robot controllers that yield similarly robust performance. We are validating our framework on small mobile robots, called “Pheeno,” that we have designed to be low-cost, customizable platforms for multi-robot research and education.

Bio: Spring Berman is an assistant professor of Mechanical and Aerospace Engineering at Arizona State University (ASU), where she directs the Autonomous Collective Systems (ACS) Laboratory. She received the B.S.E. degree in Mechanical and Aerospace Engineering from Princeton University in 2005 and the Ph.D. degree in Mechanical Engineering and Applied Mechanics from the University of Pennsylvania (GRASP Laboratory) in 2010. From 2010 to 2012, she was a postdoctoral researcher in Computer Science at Harvard University. Her research focuses on controlling swarms of resource-limited robots with stochastic behaviors to reliably perform collective tasks in realistic environments. She was a recipient of the 2014 DARPA Young Faculty Award and the 2016 ONR Young Investigator Award. She currently serves as the associate director of the newly established ASU Center for Human, Artificial Intelligence, and Robotic Teaming.
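To make the macroscopic (population-level) modeling idea concrete: models in the chemical-reaction-network style track the expected number of robots in each behavioral state with rate equations. The sketch below is purely illustrative; the two states and rate constants are invented for the example and are not the models presented in the talk.

```python
# Illustrative macroscopic (population-level) model of a swarm, in the spirit of
# chemical-reaction-network abstractions: robots switch stochastically between
# "searching" and "working" states. States and rates are invented for illustration.
import numpy as np
from scipy.integrate import solve_ivp

K_FIND = 0.3    # rate at which a searching robot finds a task site (1/s)
K_DONE = 0.1    # rate at which a working robot finishes and resumes searching (1/s)

def population_dynamics(t, x):
    searching, working = x
    return [-K_FIND * searching + K_DONE * working,
             K_FIND * searching - K_DONE * working]

# Start with 100 searching robots; the total population is conserved.
sol = solve_ivp(population_dynamics, (0.0, 60.0), [100.0, 0.0], dense_output=True)
print("steady-state estimate:", sol.y[:, -1])   # searching:working tends toward K_DONE:K_FIND
```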
32-G449 Patil/Kiva
September 19
Sufficient Symbols to Make Optimization-Based Manipulation Planning Tractable
Marc Toussaint
University of Stuttgart
September 19, 2017, 11:00 AM–12:00 PM (America/New_York)
Abstract: Not only is combined Task and Motion Planning a hard problem, it also relies on an appropriate symbolic representation to describe the task level, and finding such a representation is perhaps an even more fundamental problem. I will first briefly report on our work that considered symbol learning and, relatedly, manipulation skill learning on its own. However, I now believe that the choice of appropriate abstractions should depend on their use in higher-level planning. I will introduce Logic-Geometric Programming as a framework in which the role of the symbolic level is to make optimization over complex manipulation paths tractable. This is analogous to enumerating categorical aspects of an otherwise infeasible problem, such as homotopy classes in path planning or local optima in general optimization. I will then report on recent results we have obtained with this framework for combined task and motion planning and for human-robot cooperative manipulation planning.

Bio: Marc Toussaint is currently a visiting scholar at CSAIL until summer 2018. He has been a full professor for Machine Learning and Robotics at the University of Stuttgart since 2012. Before that, he was an assistant professor leading an Emmy Noether research group at FU & TU Berlin. His research focuses on the combination of decision theory and machine learning, motivated by fundamental research questions in robotics. Specific interests include combining geometry, logic and probabilities in learning and reasoning, and appropriate representations and priors for real-world manipulation learning.
32-G449 (Patil/Kiva)
August 29
Thesis Defense: SLAM-aware, Self-Supervised Perception in Mobile Robots (Sudeep Pillai)
Sudeep Pillai
MIT EECS
August 29, 2017, 11:00 AM–1:00 PM (America/New_York)
Thesis Defense
Sudeep Pillai, MIT EECS
Title: SLAM-aware, Self-Supervised Perception in Mobile Robots
Date: 29 August 2017
Time: 11 AM
Location: Patil/Kiva Seminar Room, 32-G449

Abstract: Simultaneous Localization and Mapping (SLAM) is a fundamental capability in mobile robots, and has typically been considered in the context of aiding mapping and navigation tasks. In this thesis, we advocate for the use of SLAM as a supervisory signal to further the perceptual capabilities of robots. Through the concept of SLAM-supported object recognition, we develop the ability for robots equipped with a single camera to leverage their SLAM-awareness (via monocular visual-SLAM) to better inform object recognition within their immediate environment. Additionally, by maintaining a spatially cognizant view of the world, we find our SLAM-aware approach to be particularly amenable to few-shot object learning. We show that a SLAM-aware, few-shot object learning strategy can be especially advantageous to mobile robots, and is able to learn object detectors from a reduced set of training examples.

Implicit in realizing modern visual-SLAM systems is the choice of map representation. It is imperative that the map representation be usable by multiple components in the robot's perception stack while being constantly optimized as more measurements become available. Motivated by the need for a unified map representation in vision-based mapping, navigation and planning, we develop an iterative and high-performance mesh-reconstruction algorithm for stereo imagery. We envision that in the future, these tunable mesh representations can potentially enable robots to quickly reconstruct their immediate surroundings while being able to directly plan in them and maneuver at high speeds.

While most visual-SLAM front-ends explicitly encode application-specific constraints for accurate and robust operation, we advocate for an automated solution to developing these systems. By bootstrapping the robot's ability to perform GPS-aided SLAM, this thesis develops, to the best of our knowledge, the first self-supervised visual-SLAM front-end capable of performing visual ego-motion and vision-based loop-closure recognition in mobile robots. We propose a novel generative model solution that is able to predict ego-motion estimates from optical flow, while also allowing for the prediction of induced scene flow conditioned on the ego-motion. Following a similar bootstrapped learning strategy, we explore the ability to self-supervise place recognition in mobile robots and cast it as a metric learning problem, with a GPS-aided SLAM solution providing the relevant supervision. Furthermore, we show that the newly learned embedding can be particularly powerful in discriminating visual scene instances from each other for the purpose of loop-closure detection. We envision that such self-supervised solutions to vision-based task learning will have far-reaching implications in several domains, especially facilitating life-long learning in autonomous systems.

Thesis Supervisor: John J. Leonard
Thesis Committee: Antonio Torralba, Leslie Kaelbling, and Nicholas Roy
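One way to read the loop-closure portion of the abstract is as metric learning with SLAM-derived labels: image pairs that the GPS-aided SLAM solution places near the same location are pulled together in an embedding space, and pairs from distant places are pushed apart. The sketch below illustrates that general recipe with a toy encoder and a triplet loss; the architecture, loss form, and margin are assumptions for illustration, not the networks used in the thesis.

```python
# Sketch of self-supervised place recognition as metric learning: a GPS-aided
# SLAM estimate tells us which image pairs were taken near the same place, and
# we train an embedding so that same-place pairs are closer than different-place
# pairs. The encoder and margin below are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

encoder = nn.Sequential(               # toy image encoder -> 64-D embedding
    nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
    nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(32, 64),
)

def triplet_loss(anchor, positive, negative, margin=0.5):
    # positive: image labeled "same place" by the GPS-aided SLAM solution
    # negative: image from a distant place
    d_pos = F.pairwise_distance(anchor, positive)
    d_neg = F.pairwise_distance(anchor, negative)
    return F.relu(d_pos - d_neg + margin).mean()

imgs = torch.randn(3, 8, 3, 64, 64)             # (anchor, positive, negative) batches
loss = triplet_loss(*(F.normalize(encoder(x)) for x in imgs))
loss.backward()
```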
Seminar Room G449 (Patil/Kiva)
July 20
Legged robots and mobile manipulation
Marco Hutter
ETH Zurich
July 20, 2017, 2:00–3:00 PM (America/New_York)
Abstract: This talk provides insight into our recent work on four-legged robots that can operate under harsh conditions. I will outline some design aspects and general concepts that were followed to create a platform for dynamic legged locomotion. This includes work on compact, precisely torque-controllable and impact-robust actuator modules, as well as on an overall system architecture aiming at high mobility and versatility. I will present a number of control, environment perception, and motion planning tools for static and dynamic locomotion in non-flat terrain, and discuss all results in the context of different experiments that were conducted under realistic conditions in the field. Moreover, I will show how these concepts can be transferred from locomotion to manipulation, as well as to different scales and actuation types such as autonomous hydraulic excavators.

Short Bio: Marco Hutter is an assistant professor for Robotic Systems at ETH Zurich and a Branco Weiss Fellow. He is part of the national competence centers for robotics (NCCR Robotics) and digital fabrication (NCCR dfab), and a member of the Intel Network on Intelligent Systems. His group is participating in several research projects, industrial collaborations, and international competitions that target the application of highly mobile autonomous vehicles in challenging environments such as search and rescue, industrial inspection, or construction operation. Marco’s research interests are in the development of novel machines and actuation concepts together with the underlying control, planning, and optimization algorithms for locomotion and manipulation.
Seminar Room G449 (Patil/Kiva)
May 16
Adversarial learning for generative models and inference
Aaron Courville
Université de Montréal
May 16, 2017, 11:00 AM–12:00 PM (America/New_York)
Generative Adversarial Networks (GANs) pose the learning of a generative model as an adversarial game between a discriminator, trained to distinguish true and generated samples, and a generator, trained to try to fool the discriminator. Since their introduction in 2014, GANs have been the subject of a surge of research activity, due to their ability to produce realistic samples of highly structured data such as natural images.

In this talk I will present a brief introduction to GANs and discuss some of our recent work on improving the stability of GAN training. I will also describe our recent work on adversarially learned inference (ALI), which jointly learns a generation network and an inference network using a GAN-like adversarial process. In ALI, the generation network maps samples from stochastic latent variables to the data space, while the inference network maps training examples in data space to the space of latent variables. An adversarial game is cast between these two networks, and a discriminative network is trained to distinguish between joint latent/data-space samples from the generative network and joint samples from the inference network.
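The adversarial game described in the first paragraph can be written as a short alternating-update loop. The sketch below shows the standard non-saturating GAN objective on toy 2-D data; the tiny MLPs and hyperparameters are illustrative assumptions, not the models discussed in the talk.

```python
# Minimal sketch of the GAN adversarial game: the discriminator D is trained to
# separate real from generated samples, the generator G to fool D. Network sizes
# and the toy 2-D data are illustrative only.
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(8, 64), nn.ReLU(), nn.Linear(64, 2))   # latent -> sample
D = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 1))   # sample -> logit
opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
bce = nn.BCEWithLogitsLoss()

for step in range(1000):
    real = torch.randn(128, 2) * 0.5 + 2.0        # stand-in "true" data distribution
    fake = G(torch.randn(128, 8))

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0.
    loss_d = bce(D(real), torch.ones(128, 1)) + bce(D(fake.detach()), torch.zeros(128, 1))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator step (non-saturating loss): push D(G(z)) toward 1.
    loss_g = bce(D(fake), torch.ones(128, 1))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
```

ALI extends this game by having the discriminator judge joint (latent, data) pairs from the generation and inference networks rather than data samples alone.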
32-G449 Patil/Kiva
May 02
Robotic Manipulation Without Geometric Models
Robert Platt
Northeastern University
May 2, 2017, 11:00 AM–12:00 PM (America/New_York)
Abstract: Most approaches to planning for robotic manipulation take a geometric description of the world and the objects in it as input. Unfortunately, despite successes in SLAM, estimating the geometry of the world from sensor data can be challenging. This is particularly true in open world scenarios where we have little prior information about the geometry or appearance of the objects to be handled. This is a problem because even small modelling errors can cause a grasp or manipulation operation to fail. In this talk, I will describe some recent work on approaches to robotic manipulation that eschew geometric models. Our recent results show that these methods excel on manipulation tasks involving novel objects presented in dense clutter.

Bio: Dr. Robert Platt is an Assistant Professor of Computer Science at Northeastern University. Prior to coming to Northeastern, he was a Research Scientist at MIT and a technical lead at NASA Johnson Space Center, where he helped develop the control and autonomy subsystems for Robonaut 2, the first humanoid robot in space.
32-G449 Patil/Kiva
April 25
Dynamics, Design, and Control of Legged Robots that Rapidly Run and Climb
Jonathan Clark
Florida State University
April 25, 2017, 11:00 AM–12:00 PM (America/New_York)
Abstract: Finely tuned robotic limb systems that explicitly exploit their body’s natural dynamics have begun to rival the most accomplished biological systems on specific performance criteria, such as speed over smooth terrain. The earliest successful robot implementations, however, used only very specialized designs with a very limited number of active degrees of freedom. While more flexible, higher degree-of-freedom designs have been around for some time, they have usually been restricted to comparatively slow speeds or manipulation of lightweight objects. The design of fast, dynamic multi-purpose robots has been stymied by the limitations of available mechanical actuators and the complexity of the design and control of these systems. This talk will describe recent efforts to understand how to effectively design robotic limbs to enable dynamic motions in multiple modalities, specifically high-speed running on horizontal and vertical surfaces.

Bio: Jonathan Clark received his BS in Mechanical Engineering from Brigham Young University and his MS and PhD from Stanford University. Dr. Clark worked as an IC Postdoctoral Fellow at the GRASP lab at the University of Pennsylvania, and is currently an associate professor at the FAMU/FSU College of Engineering in the Department of Mechanical Engineering. During Dr. Clark’s career he has worked on a wide range of dynamic legged robotic systems, including the Sprawl and RHex families of running robots, as well as the world’s first dynamic and fastest legged climbing robot, Dynoclimber. In 2014, he received an NSF CAREER award for work on rotational dynamics for improved legged locomotion. His recent work has involved the development of multi-modal robots that can operate in varied terrain by running, climbing and flying. He currently serves as the associate director of the Center of Intelligent Systems, Control, and Robotics (CISCOR) and the director of the STRIDe lab.
32-G449 Patil/Kiva
April 18
April 18, 2017, 11:00 AM–12:00 PM (America/New_York)
Enhancing Human Capability with Intelligent Machine Teammates
Every team has top performers: people who excel at working in a team to find the right solutions in complex, difficult situations. These top performers include nurses who run hospital floors, emergency response teams, air traffic controllers, and factory line supervisors. While they may outperform the most sophisticated optimization and scheduling algorithms, they often cannot tell us how they do it. Similarly, even when a machine can do the job better than most of us, it can’t explain how. In this talk I share recent work investigating effective ways to blend the unique decision-making strengths of humans and machines. I discuss the development of computational models that enable machines to efficiently infer the mental state of human teammates and thereby collaborate with people in richer, more flexible ways. Our studies demonstrate statistically significant improvements in people’s performance on military, healthcare and manufacturing tasks when aided by intelligent machine teammates.

Bio: Julie Shah is an Associate Professor of Aeronautics and Astronautics at MIT and director of the Interactive Robotics Group, which aims to imagine the future of work by designing collaborative robot teammates that enhance human capability. As a current fellow of Harvard University's Radcliffe Institute for Advanced Study, she is expanding the use of human cognitive models for artificial intelligence. She has translated her work to manufacturing assembly lines, healthcare applications, transportation and defense. Before joining the faculty, she worked at Boeing Research and Technology on robotics applications for aerospace manufacturing. Prof. Shah has been recognized by the National Science Foundation with a Faculty Early Career Development (CAREER) award and by MIT Technology Review on its 35 Innovators Under 35 list. Her work on industrial human-robot collaboration was also in Technology Review’s 2013 list of 10 Breakthrough Technologies. She has received international recognition in the form of best paper awards and nominations from the ACM/IEEE International Conference on Human-Robot Interaction, the American Institute of Aeronautics and Astronautics, the Human Factors and Ergonomics Society, the International Conference on Automated Planning and Scheduling, and the International Symposium on Robotics. She earned degrees in aeronautics and astronautics and in autonomous systems from MIT.
32-G449 Patil/Kiva
April 11
Toward Resilient Robot Autonomy through Learning, Interaction and Semantic Reasoning
Sonia Chernova
Georgia Tech
April 11, 2017, 11:00 AM–12:00 PM (America/New_York)
Abstract: Robotics is undergoing an exciting transition from factory automation to the deployment of autonomous systems in less structured environments, such as warehouses, hospitals and homes. One of the critical barriers to the wider adoption of autonomous robotic systems in the wild is the challenge of achieving reliable autonomy in complex and changing human environments. In this talk, I will discuss ways in which innovations in learning from demonstration and remote access technologies can be used to develop and deploy autonomous robotic systems alongside and in collaboration with human partners. I will present applications of this research paradigm to robot learning, object manipulation and semantic reasoning, as well as explore some exciting avenues for future research in this area.

Bio: Sonia Chernova is the Catherine M. and James E. Allchin Early-Career Assistant Professor in the School of Interactive Computing at Georgia Tech, where she directs the Robot Autonomy and Interactive Learning research lab. She received B.S. and Ph.D. degrees in Computer Science from Carnegie Mellon University, and held positions as a Postdoctoral Associate at the MIT Media Lab and as Assistant Professor at Worcester Polytechnic Institute prior to joining Georgia Tech in August 2015. Prof. Chernova’s research focuses on developing robots that are able to effectively operate in human environments. Her work spans robotics and artificial intelligence, including semantic reasoning, adjustable autonomy, human computation and cloud robotics. She is the recipient of the NSF CAREER, ONR Young Investigator and NASA Early Career Faculty awards.
32-G449 Patil/Kiva
April 04
Getting More from What You've Already Got: Improving Stereo Visual Odometry Using Deep Visual Illumination Estimation
Jon Kelly
UTIAS
April 4, 2017, 11:00 AM–12:00 PM (America/New_York)
Abstract: Visual navigation is essential for many successful robotics applications. Visual odometry (VO), an incremental dead reckoning technique, in particular, has been widely employed on many platforms, including the Mars Exploration Rovers and the Mars Science Laboratory. However, a drawback of this visual motion estimation approach is that it exhibits superlinear growth in positioning error with time, due in large part to orientation drift.

In this talk, I will describe recent work in our group on a method to incorporate global orientation information from the sun into a visual odometry (VO) pipeline, using data from the existing image stream only. This is challenging in part because the sun is typically not visible in the input images. Our work leverages recent advances in Bayesian Convolutional Neural Networks (BCNNs) to train and implement a sun detection model (dubbed Sun-BCNN) that infers a three-dimensional sun direction vector from a single RGB image. Crucially, the technique also computes a principled uncertainty associated with each prediction, using a Monte Carlo dropout scheme. We incorporate this uncertainty into a sliding window stereo VO pipeline where accurate uncertainty estimates are critical for optimal data fusion.

I will present the results of our evaluation on the KITTI odometry benchmark, where significant improvements are obtained over ‘vanilla’ VO. I will also describe additional experimental evaluation on 10 km of navigation data from Devon Island in the Canadian High Arctic, at a Mars analogue site. Finally, I will give an overview of our analysis of the sensitivity of the model to cloud cover, and discuss the possibility of model transfer between urban and planetary analogue environments.

Bio: Dr. Kelly is an Assistant Professor at the University of Toronto Institute for Aerospace Studies, where he directs the Space & Terrestrial Autonomous Robotic Systems (STARS) Laboratory. Prior to joining U of T, he was a postdoctoral researcher in the Robust Robotics Group at MIT. Dr. Kelly received his PhD degree in 2011 from the University of Southern California, under the supervision of Prof. Gaurav Sukhatme. He was supported at USC in part by an Annenberg Fellowship. Prior to graduate school, he was a software engineer at the Canadian Space Agency in Montreal, Canada. His research interests lie primarily in the areas of sensor fusion, estimation, and machine learning for navigation and mapping, applied to both robots and human-centred assistive technologies.
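The "principled uncertainty" mentioned above comes from Monte Carlo dropout: dropout is left active at test time, and repeated stochastic forward passes are treated as samples from an approximate posterior over predictions. A minimal sketch of that scheme follows; the toy network below is an assumption for illustration and is not the published Sun-BCNN architecture.

```python
# Sketch of Monte Carlo dropout for predictive uncertainty, as used conceptually
# in Sun-BCNN: dropout stays active at test time and repeated stochastic forward
# passes give a sample mean and variance for the predicted sun direction.
# The tiny network below is illustrative, not the published architecture.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SunDirectionRegressor(nn.Module):
    def __init__(self, p_drop=0.5):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.dropout = nn.Dropout(p_drop)
        self.head = nn.Linear(32, 3)              # 3-D sun direction vector

    def forward(self, rgb):
        direction = self.head(self.dropout(self.backbone(rgb)))
        return F.normalize(direction, dim=1)      # unit vector

@torch.no_grad()
def mc_dropout_predict(model, rgb, num_samples=50):
    model.train()                                 # keep dropout stochastic at test time
    samples = torch.stack([model(rgb) for _ in range(num_samples)])   # (T, B, 3)
    return samples.mean(dim=0), samples.var(dim=0)                    # per-axis mean/variance

model = SunDirectionRegressor()
mean_dir, var_dir = mc_dropout_predict(model, torch.randn(1, 3, 224, 224))
```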
32-G449 Patil/Kiva
March 28
Robust Navigation: From UAVs to Robot Swarms
Grace Gao
UIUC
March 28, 2017, 11:00 AM–12:00 PM (America/New_York)
Abstract: Robust navigation is critical and challenging for the ever-growing applications of robotics. Take Unmanned Aerial Vehicles (UAVs) as an example: the boom in applications of low-cost multi-copters requires UAVs to navigate in urban environments at low altitude. Traditionally, a UAV is equipped with a GPS receiver for outdoor flight. It may suffer from GPS signal blockage and multipath issues, making GPS-based positioning erroneous or unavailable. Moreover, GPS signals are vulnerable to attacks, such as jamming or spoofing. These attacks either disable GPS positioning, or more deliberately mislead the UAV with wrong positioning. In this talk, we present our recent work on robust UAV navigation. We deeply fuse GPS information with Lidar, camera vision and inertial measurements at the raw signal level. In addition, we turn the unwanted multipath signals into an additional useful signal source. Instead of one GPS receiver, we use multiple receivers, either on the same UAV platform or across a wide area, to further improve navigation accuracy, reliability and resilience to attacks.

The second part of the talk will address our work on navigating a swarm of 100 robots, designed and built in our lab. We call them “Shinerbots,” because they are inspired by the schooling behaviors of Golden Shiner Fish. We will demonstrate the successful navigation and environment exploration of our Shinerbot swarm.

Bio: Grace Xingxin Gao is an assistant professor in the Aerospace Engineering Department at the University of Illinois at Urbana-Champaign. She obtained her Ph.D. degree in Electrical Engineering from the GPS Laboratory at Stanford University. Prof. Gao has won a number of awards, including the RTCA William E. Jackson Award and the Institute of Navigation Early Achievement Award. She was named one of 50 GNSS Leaders to Watch by GPS World Magazine. She has won Best Paper/Presentation of the Session Awards 11 times at ION GNSS+ conferences. She received the Dean's Award for Excellence in Research from the College of Engineering, University of Illinois at Urbana-Champaign. For her teaching, Prof. Gao has been on the List of Teachers Ranked as Excellent by Their Students at the University of Illinois multiple times. She won the College of Engineering Everitt Award for Teaching Excellence at the University of Illinois at Urbana-Champaign in 2015. She was chosen as the American Institute of Aeronautics and Astronautics (AIAA) Illinois Chapter’s Teacher of the Year in 2016.
32-G449 Patil/Kiva
March 07
March 7, 2017, 11:00 AM–12:00 PM (America/New_York)
Human-in-the-Loop: Deep Learning for Shared Autonomy in Naturalistic Driving
Abstract: Localization, mapping, perception, control, and trajectory planning are components of autonomous vehicle design that have each seen considerable progress in the previous three decades, and especially since the first DARPA Robotics Challenge. These are areas of robotics research focused on perceiving and interacting with the external world through outward-facing sensors and actuators. However, semi-autonomous driving is in many ways a human-centric activity, where the at-times distracted, irrational, drowsy human may need to be included in the loop of safe and intelligent autonomous vehicle operation through driver state sensing, communication, and shared control.

In this talk, I will present deep neural network approaches for various subtasks of supervised vehicle autonomy, with a special focus on driver state sensing, and how those approaches helped us in (1) the collection, analysis, and understanding of human behavior over 100,000 miles and 1 billion video frames of on-road semi-autonomous driving in Tesla vehicles and (2) the design of real-time driver assistance systems that bring the human back into the loop of safe shared autonomy.

Bio: Lex Fridman is a postdoc at MIT, working on computer vision and deep learning approaches in the context of self-driving cars with a human in the loop. His work focuses on large-scale, real-world data, with the goal of building intelligent systems that have real-world impact. Lex received his BS, MS, and PhD from Drexel University, where he worked on applications of machine learning, computer vision, and decision fusion techniques in a number of fields including robotics, active authentication, activity recognition, and optimal resource allocation on multi-commodity networks. Before joining MIT, Lex was at Google working on deep learning and decision fusion methods for large-scale behavior-based authentication. Lex is a recipient of a CHI-17 best paper award.
32-G449 (Patil/Kiva)
February 28
Deep Learning for Robot Navigation and Perception
Wolfram Burgard
University of Freiburg
February 28, 2017, 11:00 AM–12:00 PM (America/New_York)
Abstract: Autonomous robots are faced with a series of learning problems to optimize their behavior. In this presentation I will describe recent approaches developed in my group, based on deep learning architectures, for different perception problems including object recognition and segmentation using RGB(-D) images. In addition, I will present terrain classification approaches that utilize sound and vision. For all approaches, I will describe extensive experiments quantifying in which way they extend the state of the art.

Bio: Wolfram Burgard is a professor of computer science at the University of Freiburg and head of the research lab for Autonomous Intelligent Systems. His areas of interest lie in artificial intelligence and mobile robots. His research mainly focuses on the development of robust and adaptive techniques for state estimation and control. Over the past years, Wolfram Burgard and his group have developed a series of innovative probabilistic techniques for robot navigation and control, covering aspects such as localization, map-building, SLAM, path planning, and exploration. Wolfram Burgard has coauthored two books and more than 300 scientific papers. In 2009, he received the Gottfried Wilhelm Leibniz Prize, the most prestigious German research award. In 2010, he received an Advanced Grant of the European Research Council. Since 2012, he has been the coordinator of the Cluster of Excellence BrainLinks-BrainTools funded by the German Research Foundation. Wolfram Burgard is a Fellow of ECCAI, AAAI, and IEEE.
32-G449 (Patil/Kiva)