December 04

11:00 AM–12:00 PM (America/New_York)

Motion Planning and Control for Robot and Human Manipulation

Abstract: In this talk I will describe our progress on motion planning and control for two very different manipulation problems: (1) dexterous manipulation by robots and (2) control of arm neuroprosthetics for humans with spinal cord injuries. The first part of the talk will focus on manipulation modes commonly used by humans but mostly avoided by robots, such as rolling, sliding, pushing, pivoting, tapping, and in-hand manipulation. These manipulation modes exploit controlled motion of the object relative to the manipulator to increase dexterity. In the second part of the talk I will describe control of a functional electrical stimulation neuroprosthetic for the human arm. The goal of the project is to allow people with high spinal cord injury to recover the use of their arms for activities of daily living. Beginning with traditional methods for system identification and control of robot arms, I will describe how we have adapted the approach to identification and control of an electrically stimulated human arm.

Bio: Kevin Lynch is Professor and Chair of the Mechanical Engineering Department at Northwestern University. He is a member of the Neuroscience and Robotics Lab (nxr.northwestern.edu) and the Northwestern Institute on Complex Systems (nico.northwestern.edu). His research focuses on dynamics, motion planning, and control for robot manipulation and locomotion; self-organizing multi-agent systems; and functional electrical stimulation for restoration of human function. Dr. Lynch is Editor-in-Chief of the IEEE Transactions on Robotics. He is co-author of the textbooks "Modern Robotics: Mechanics, Planning, and Control" (Cambridge University Press, 2017, http://modernrobotics.org), "Embedded Computing and Mechatronics" (Elsevier, 2015, http://nu32.org), and "Principles of Robot Motion" (MIT Press, 2005). He is the recipient of Northwestern's Professorship of Teaching Excellence and the Northwestern Teacher of the Year award in engineering. He earned a BSE in electrical engineering from Princeton University and a PhD in robotics from Carnegie Mellon University.

32-G449 Patil / Kiva

November 27

11:00 AM–12:00 PM (America/New_York)

HASEL Artificial Muscles: Versatile High-Performance Actuators for a New Generation of Life-like Robots

Abstract: Robots today rely on rigid components and electric motors based on metal and magnets, making them heavy, unsafe near humans, expensive, and ill-suited for unpredictable environments. Nature, in contrast, makes extensive use of soft materials and has produced organisms that drastically outperform robots in terms of agility, dexterity, and adaptability. The Keplinger Lab aims to fundamentally challenge current limitations of robotic hardware, using an interdisciplinary approach that synergizes concepts from soft matter physics and chemistry with advanced engineering technologies to introduce intelligent materials systems for a new generation of life-like robots. One major theme of research is the development of new classes of actuators, a key component of all robotic systems, that replicate the sweeping success of biological muscle: a masterpiece of evolution featuring astonishing all-around actuation performance, the ability to self-heal after damage, and seamless integration with sensing.

This talk is focused on the lab's recently introduced HASEL artificial muscle technology. Hydraulically Amplified Self-healing ELectrostatic (HASEL) transducers are a new class of self-sensing, high-performance muscle-mimetic actuators, which are electrically driven and harness a mechanism that couples electrostatic and hydraulic forces to achieve a wide variety of actuation modes. Current designs of HASEL are capable of exceeding an actuation stress of 0.3 MPa, a linear strain of 100%, a specific power of 600 W/kg, a full-cycle electromechanical efficiency of 30%, and a bandwidth of over 100 Hz; all of these metrics match or exceed the capabilities of biological muscle. Additionally, HASEL actuators can repeatedly and autonomously self-heal after electric breakdown, enabling robust performance.

Further, this talk introduces a facile fabrication technique that uses an inexpensive CNC heat-sealing device to rapidly prototype HASELs. New designs of HASEL incorporate mechanisms to greatly reduce operating voltages, enabling the use of lightweight and portable electronics packages to drive untethered soft robotic devices powered by HASELs. Modeling results predict the impact of material parameters and scaling laws of these actuators, laying out a roadmap toward future HASEL actuators with drastically improved performance. These results highlight opportunities to further develop HASEL artificial muscles for wide use in next-generation robots that replicate the vast capabilities of biological systems.

Bio: Christoph Keplinger is an Assistant Professor of Mechanical Engineering and a Fellow of the Materials Science and Engineering Program at the University of Colorado Boulder, where he also holds an endowed appointment as Mollenkopf Faculty Fellow. Building upon his background in soft matter physics (PhD, JKU Linz) and in mechanics and chemistry (postdoc, Harvard University), he leads a highly interdisciplinary research group at Boulder with a current focus on (I) soft, muscle-mimetic actuators and sensors, (II) energy harvesting, and (III) functional polymers. His work has been published in top journals including Science, Science Robotics, PNAS, Advanced Materials, and Nature Chemistry, and has been highlighted in popular outlets such as National Geographic. He has received prestigious US awards such as a 2017 Packard Fellowship for Science and Engineering, and international awards such as the 2013 EAP Promising European Researcher Award from the European Scientific Network for Artificial Muscles. He is the principal inventor of HASEL artificial muscles, a new technology that will help enable a next generation of life-like robotic hardware; in 2018 he co-founded Artimus Robotics to commercialize the HASEL technology.

32-G449 Patil / Kiva

November 20

11:00 AM–12:00 PM (America/New_York)

Handheld Kinesthetic Devices and Reinforcement Learning for Haptic Guidance

Abstract: Screens, headphones, and now virtual and augmented reality headsets can provide people with instructions and guidance through visual and auditory feedback. Yet those senses are often overloaded, motivating the display of information through the sense of touch. Haptic devices, which display forces, vibrations, or other touch cues to a user's hands or body, can be private and intuitive, and they leave the other senses free. In this talk, I will discuss several novel handheld haptic devices that provide clear, directional, kinesthetic cues while allowing the user to move through a large workspace. Using these devices, we study the anisotropies and variability in human touch perception and movement. Using modeling and reinforcement learning techniques, haptic devices can adapt to the user's responses and provide effective guidance and intuitive touch interactions. These devices have applications in medical guidance and training, navigation, sports, and entertainment. Such holdable devices could enable haptic interfaces to become as prevalent and impactful in our daily lives as visual or audio interfaces.

Bio: Julie Walker is a Ph.D. candidate in Mechanical Engineering at Stanford University. She is a member of the Collaborative Haptics and Robotics in Medicine Lab, led by Professor Allison Okamura. She received a master's degree from Stanford University and a bachelor's degree in Mechanical Engineering from Rice University. She has worked in haptics and human-robot interaction research since 2012, studying haptic feedback for prosthetic hands, robotic surgery, and teleoperation. Her Ph.D. thesis work focuses on haptic guidance through novel handheld devices, particularly for medical applications. She has received an NSF Graduate Research Fellowship and a Chateaubriand Fellowship.

32-G449 Patil / Kiva

November 13

11:00 AM–12:00 PM (America/New_York)

Insect-scale mechanisms: from flying robots to piezoelectric fans

Abstract: In recent years, there has been heightened interest in developing sub-gram hovering vehicles, in part for their predicted high maneuverability (based on the relative scaling of torques and inertias). In this regime, the efficiency of electromagnetic motors drops substantially, and piezoelectrics are generally the actuator of choice. These typically operate in an oscillatory mode, which is well matched with flapping wings. However, at such a small size, integrating on-board power and electronics is quite challenging (particularly given the high voltages required for piezoelectrics), and such vehicles have thus been limited to flying tethered to an off-board power supply and control system. In this talk, I will discuss recent advances in the Harvard RoboBee to overcome these challenges, including non-linear resonance modeling, improved manufacturing, and multi-wing designs.

I will also discuss fabrication of an alternative mechanism for converting piezoelectric vibration to airflow. This is of interest as a low-profile fan for CPU cooling, a growing issue as electronic devices pack increasing power consumption (and thus heat) into smaller spaces. Additionally, a thruster based on this technology could achieve higher thrust-per-area and speed than flapping wings or propellers (at the expense of efficiency). Its extremely modular nature is also attractive in such an application.

When we operate robots near resonance, particularly with very non-linear systems and/or multiple mechanically interacting actuators, control can be extremely challenging. In these scenarios, knowledge of the instantaneous deflections or velocities of each actuator is crucial. Toward this end, I will describe our work on monitoring the actuators' current to obtain accurate velocity data regardless of external loading, without the need for any additional sensors.

Bio: Noah T. Jafferis obtained his PhD in the Electrical Engineering Department at Princeton University in 2012 and is currently a Postdoctoral Research Associate in Harvard University's Microrobotics Lab. Noah was home-schooled until entering Yale University at the age of 16, where he received his B.S. in Electrical Engineering in 2005. At Princeton, Noah's research included printing silicon from nanoparticle suspensions and the development of a "flying carpet" (traveling-wave-based propulsion of a thin plastic sheet). His current research at Harvard includes nonlinear resonance modeling, scaling, and system optimization for flapping-wing vehicles; piezoelectric actuators and motors (manufacturing and modeling for optimal power density, efficiency, and lifetime); a fan/thruster using piezoelectrically actuated peristaltic pumping; solar power for autonomous operation of insect-scale robots; and self-sensing actuation. His many research interests include micro/nano-robotics, bio-inspired engineering, 3D integrated circuits, MEMS/NEMS, piezoelectrics, 3D printing, energy harvesting, and large-area/flexible electronics.

32-G449 Patil / Kiva

November 06

11:15 AM–12:15 PM (America/New_York)

Doing for our robots what evolution did for us

Abstract: We, as robot engineers, have to think hard about our role in the design of robots and how it interacts with learning, both in "the factory" (that is, at engineering time) and in "the wild" (that is, when the robot is delivered to a customer). I will share some general thoughts about strategies for robot design and then talk in detail about some work I have been involved in, both in the design of an overall architecture for an intelligent robot and in strategies for learning to integrate new skills into the repertoire of an already competent robot.

Joint work with: Tomas Lozano-Perez, Zi Wang, Caelan Garrett, and a fearless group of summer robot students.

Bio: Leslie is a Professor at MIT. She has an undergraduate degree in Philosophy and a PhD in Computer Science from Stanford, and was previously on the faculty at Brown University. She was the founding editor-in-chief of the Journal of Machine Learning Research. Her research agenda is to make intelligent robots using methods including estimation, learning, planning, and reasoning. She is not a robot.

32-G449 Patil / Kiva

October 30

11:00 AM–12:00 PM (America/New_York)

3D Scene Understanding with an RGB-D Camera

Abstract: Three-dimensional scene understanding is important for computer systems that respond to and/or interact with the physical world, such as robotic manipulation and autonomous navigation. For example, they may need to estimate the 3D geometry of the surrounding space (e.g., in order to navigate without collisions) and/or to recognize the semantic categories of nearby objects (e.g., in order to interact with them appropriately). In this talk, I will describe recent work on 3D scene understanding by the 3D Vision Group at Princeton University. I will focus on three projects that infer 3D structural and semantic models of scenes from partial observations with an RGB-D camera. The first learns to infer depth (D) from color (RGB) in regions where the depth sensor provides no return (e.g., because surfaces are shiny or far away). The second learns to predict the 3D structure and semantics within volumes of space occluded from view (e.g., behind a table). The third learns to infer the 3D structure and semantics of the entire surrounding environment (i.e., inferring an annotated 360-degree panorama from a single image). For each project, I will discuss the problem formulation, scene representation, network architecture, dataset curation, and potential applications.

This is joint work with Angel X. Chang, Angela Dai, Kyle Genova, Maciej Halber, Matthias Niessner, Shuran Song, Fisher Yu, Andy Zeng, and Yinda Zhang.

Bio: Thomas Funkhouser is the David M. Siegel Professor of Computer Science at Princeton University. He received a PhD in computer science from UC Berkeley in 1993 and was a member of the technical staff at Bell Labs until 1997, before joining the faculty at Princeton. For most of his career, he focused on research problems in computer graphics, including foundational work on 3D shape retrieval, analysis, and modeling. His most recent research has focused on 3D scene understanding in computer vision and robotics. He has published more than 100 research papers and received several awards, including an ACM SIGGRAPH Computer Graphics Achievement Award, ACM SIGGRAPH Academy membership, an NSF CAREER Award, a Sloan Foundation Fellowship, the Emerson Electric E. Lawrence Keyes Faculty Advancement Award, and University Council Excellence in Teaching Awards.

32-G882 (Hewlett Room)

October 23

11:00 AM–12:00 PM (America/New_York)

Machine Learning for Robot Perception, Planning, and Control

Abstract: The main goal of this talk is to illustrate how machine learning can start to address some of the fundamental perceptual and control challenges involved in building intelligent robots. I'll discuss how to learn dynamics models for planning and control, how to use imitation to efficiently learn deep policies directly from sensor data, and how policies can be parameterized with task-relevant structure. I'll show how some of these ideas have been applied to a new high-speed autonomous "AutoRally" platform built at Georgia Tech and an off-road racing task that requires impressive sensing, speed, and agility to complete. Along the way, I'll show how theoretical insights from reinforcement learning, imitation learning, and online learning help us to overcome practical challenges involved in learning on real-world platforms. I will conclude by discussing ongoing work in my lab related to machine learning for robotics.

Bio: Byron Boots is an Assistant Professor of Interactive Computing in the College of Computing at the Georgia Institute of Technology. He concurrently holds an adjunct appointment in the School of Electrical and Computer Engineering at Georgia Tech and a visiting faculty appointment at NVIDIA Research. Byron received his M.S. and Ph.D. in Machine Learning from Carnegie Mellon University and was a postdoctoral scholar in Computer Science and Engineering at the University of Washington. He joined Georgia Tech in 2014, where he founded the Georgia Tech Robot Learning Lab, affiliated with the Center for Machine Learning and the Institute for Robotics and Intelligent Machines. Byron is the recipient of several awards, including Best Paper at ICML, Best Paper at AISTATS, Best Paper Finalist at ICRA, Best Systems Paper Finalist at RSS, and the NSF CAREER Award. His main research interests are in theory and systems that tightly integrate perception, learning, and control.

32-G449 Patil / Kiva

October 16

11:00 AM–12:00 PM (America/New_York)

The Mechanical Side of Artificial Intelligence

Abstract: Artificial intelligence typically focuses on perception, learning, and control methods to enable autonomous robots to make and act on decisions in real environments. In contrast, our research is focused on the design, mechanics, materials, and manufacturing of novel robot platforms that make perception, control, or action easier or more robust in natural, unstructured, and often unpredictable environments. Key principles in this pursuit include bioinspired designs, smart materials for novel sensors and actuators, and the development of multi-scale, multi-material manufacturing methods. This talk will illustrate this philosophy by highlighting the creation of two unique classes of robots: soft-bodied autonomous robots and highly agile aerial and terrestrial robotic insects.

Bio: Robert Wood is the Charles River Professor of Engineering and Applied Sciences in the Harvard John A. Paulson School of Engineering and Applied Sciences, a founding core faculty member of the Wyss Institute for Biologically Inspired Engineering, and a National Geographic Explorer. Prof. Wood completed his M.S. and Ph.D. degrees in the Dept. of Electrical Engineering and Computer Sciences at the University of California, Berkeley. He is the winner of multiple awards for his work, including the DARPA Young Faculty Award, NSF CAREER Award, ONR Young Investigator Award, Air Force Young Investigator Award, Technology Review's TR35, and multiple best paper awards. In 2010 Wood received the Presidential Early Career Award for Scientists and Engineers from President Obama for his work in microrobotics. In 2012 he was selected for the Alan T. Waterman Award, the National Science Foundation's most prestigious early career award. In 2014 he was named one of National Geographic's "Emerging Explorers". Wood's group is also dedicated to STEM education, using novel robots to motivate young students to pursue careers in science and engineering.

32-G449 Patil / Kiva

September 25

11:00 AM–12:00 PM (America/New_York)

How to Make, Sense, and Make Sense of Contact in Robotic Manipulation

Abstract: Dexterous manipulation is a key open problem for many new robotic applications, owing in great measure to the difficulty of dealing with transient contact. From an analytical standpoint, intermittent frictional contact (the essence of manipulation) is difficult to model, as it gives rise to non-convex problems with no known efficient solvers. Contact is also difficult to sense, particularly with sensors integrated into a mechanical package that must also be compact, highly articulated, and appropriately actuated (i.e., a robot hand). Articulation and actuation present their own challenges: a dexterous hand comes with a high-dimensional posture space that is difficult to design, actuate, and control. In this talk, I will present our work addressing these challenges: analytical models of grasp stability (with realistic energy-dissipation constraints), design and use of sensors (tactile and proprioceptive) for manipulation, and hand posture subspaces (for design optimization and teleoperation). These are stepping stones toward achieving versatile robotic manipulation, needed by applications as diverse as logistics, manufacturing, disaster response, and space robots.

Bio: Matei Ciocarlie is an Associate Professor of Mechanical Engineering at Columbia University. His current work focuses on robot motor control, mechanism and sensor design, planning, and learning, all aiming to demonstrate complex motor skills such as dexterous manipulation. Matei completed his Ph.D. at Columbia University in New York; before joining the faculty at Columbia, he was a Research Scientist and Group Manager at Willow Garage, Inc., a privately funded Silicon Valley robotics research lab, and then a Senior Research Scientist at Google, Inc. In recognition of his work, Matei has been awarded the Early Career Award by the IEEE Robotics and Automation Society, a Young Investigator Award by the Office of Naval Research, a CAREER Award by the National Science Foundation, and a Sloan Research Fellowship by the Alfred P. Sloan Foundation.

32-G449 Patil / Kiva

May 29

Who Doesn't Want Another Arm?

Kenneth Salisbury
Computer Science and Surgery, Stanford University
11:00 AM–12:00 PM (America/New_York)

Abstract: Our laboratory has been developing wearable robot arms. They are designed to augment your capabilities and dexterity through physical cooperation. These "Third Arms" are typically waist-mounted and designed to work in the volume directly in front of you, cooperating with your arms' actions. We are not developing fast or strong robots; rather, we focus on the interaction design issues and the variety of task opportunities that arise.

Putting a robot directly in your personal space enables new ways for human-robot cooperation. Ours are designed to contact the environment with all their surfaces. This enables "whole-arm manipulation" as well as end-effector-based actions. How should you communicate with such a robot? How do you teach it physical tasks? Can it learn by observing things you do and anticipate helpful actions?

In this talk I will describe our work on wearables and discuss the design process leading to the current embodiments. Be prepared to tell me what your favorite third-arm task is!

Bio: Professor Salisbury received his Ph.D. from Stanford in 1982. That fall he arrived at MIT for a one-year post-doc. He says he was having so much fun that he ended up spending the next 16 years at the 'tute. He then spent four years at Intuitive Surgical helping develop the first-generation da Vinci robot. He then returned to Stanford to become a Professor in the Departments of Computer Science and Surgery. He and his students have been responsible for a number of seminal technologies, including the Salisbury Hands, the PHANToM Haptic Interface, the MIT WAM/Barrett Arm, the da Vinci Haptic Interface, the Silver Falcon Medical Robot, implicit-surface and polyhedral haptic rendering techniques, the JPL Force Reflecting Hand Controller, and other devices. Kenneth is an inventor or co-inventor on over 50 patents covering robotics, haptics, sensors, rendering, UI, and other topics. His current research interests include arm design, active physical perception, and high-fidelity haptics. In his spare time, he plays the flute a lot and makes things.

32-G449 Patil / Kiva

May 01

11:00 AM–12:00 PM (America/New_York)

Avian Inspired Design

Abstract: Many organisms fly in order to survive and reproduce. My lab focuses on understanding bird flight to improve flying robots, because birds fly farther, longer, and more reliably in complex visual and wind environments. I use a multidisciplinary lens that integrates biomechanics, aerodynamics, and robotics to advance our understanding of the evolution of flight more generally across birds, bats, insects, and autorotating seeds. The development of flying organisms as individuals and their evolution as species are shaped by the physical interaction between organism and surrounding air. The organism's architecture is tuned for propelling itself and controlling its motion. Flying animals and plants maximize performance by generating and manipulating vortices. These vortices are created close to the body as it is driven by the action of muscles or gravity, and are then 'shed' to form a wake (a trackway left behind in the fluid). I study how the organism's architecture is tuned to exploit these and other aeromechanical principles, comparing the function of bird wings to that of bat, insect, and maple seed wings. The experimental approaches range from making robotic models to training birds to fly in a custom-designed wind tunnel as well as in visual flight arenas, and include inventing methods to 3D-scan birds and measure, nonintrusively, the aerodynamic forces they generate with a novel aerodynamic force platform. The studies reveal that animals and plants have converged on the same solution for generating high lift: a strong vortex that runs parallel to the leading edge of the wing and sucks it upward. Why this vortex remains stably attached to flapping animal and spinning plant wings is elucidated and linked to kinematics and wing morphology. While wing morphology is quite rigid in insects and maple seeds, it is extremely fluid in birds. I will show how such 'wing morphing' significantly expands the performance envelope of birds during flight, and will dissect the mechanisms that enable birds to morph better than any aircraft can. Finally, I will show how these findings have inspired my students to design new flapping and morphing aerial robots.

Bio: Professor Lentink's multidisciplinary lab studies how birds fly in order to develop better flying robots, integrating biomechanics, fluid mechanics, and robot design. He has a BS and MS in Aerospace Engineering (Aerodynamics, Delft University of Technology) and a PhD in Experimental Zoology cum laude (Wageningen University). During his PhD he visited the California Institute of Technology for nine months to study insect flight. His postdoctoral training at Harvard focused on bird flight. His publications range from technical journals to cover articles in Nature and Science. He is an alumnus of the Young Academy of the Royal Netherlands Academy of Arts and Sciences, a recipient of the Dutch Academic Year Prize and the NSF CAREER award, was recognized in 2013 as one of 40 scientists under 40 by the World Economic Forum, and is the inaugural winner of the Steven Vogel Young Investigator Award from the journal Bioinspiration & Biomimetics for early-career brilliance.

32-G449 Patil/Kiva

April 24

11:00 AM–12:00 PM (America/New_York)

Building Trust in Decision Support Systems for Aerospace

This seminar is jointly hosted by the New Trends in Aerospace Seminar Series in the Department of Aeronautics and Astronautics and the Robotics Seminar Series in CSAIL.

Abstract: Starting in the 1970s, decades of effort went into building human-designed rules for providing automatic maneuver guidance to pilots to avoid mid-air collisions. The resulting system was later mandated worldwide on all large aircraft and significantly improved the safety of the airspace. Recent work has investigated the feasibility of using computational techniques to help derive optimized decision logic that better handles various sources of uncertainty and balances competing system objectives. This approach has resulted in a system called the Airborne Collision Avoidance System (ACAS) X, which significantly reduces the risk of mid-air collision while also reducing the alert rate, and which is in the process of becoming the next international standard. Using ACAS X as a case study, this talk will discuss lessons learned about building trust in advanced decision support systems. It will also outline research challenges in bringing greater levels of automation into safety-critical systems.

Bio: Mykel Kochenderfer is an Assistant Professor of Aeronautics and Astronautics at Stanford University. He is the director of the Stanford Intelligent Systems Laboratory (SISL), conducting research on advanced algorithms and analytical methods for the design of robust decision-making systems. Of particular interest are systems for air traffic control, unmanned aircraft, and other aerospace applications where decisions must be made in uncertain, dynamic environments while maintaining safety and efficiency. Prior to joining the faculty, he was at MIT Lincoln Laboratory, where he worked on airspace modeling and aircraft collision avoidance, with his early work leading to the establishment of the ACAS X program. He received a Ph.D. from the University of Edinburgh and B.S. and M.S. degrees in computer science from Stanford University. He is the author of "Decision Making under Uncertainty: Theory and Application" (MIT Press). He is a third-generation pilot.

32-G449 Patil/Kiva

April 10

11:00 AM–12:00 PM (America/New_York)

One Fish, Two Fish: The Role of Robotics in Fisheries Stock Assessment

Abstract: Fisheries stocks have been decimated around the world. In this talk, we examine the role of robotics in helping us assess fish stocks. Unlike traditional methods, robotics holds the promise of yielding assessment methods that do not rely on actually catching fish. The challenges of robotic fish counting, however, include imaging underwater, fish avoidance and attraction, and the role of camouflage in fish predator-prey interactions. This talk looks at how we are coming to grips with these issues, and how the insights gained in tackling these problems have spilled over into other areas of robotics, including edge cases in autonomous driving.

Bio: Hanumant Singh is a Professor at Northeastern University, where he is also the Director of the multidisciplinary Center for Robotics at NU. He received his Ph.D. from the MIT-WHOI Joint Program in 1995, after which he worked on the staff at WHOI until 2016, when he joined Northeastern. His group designed and built the Seabed AUV as well as the Jetyak Autonomous Surface Vehicle, dozens of which are in use for scientific and other purposes across the globe. He has participated in 60 expeditions in all of the world's oceans in support of Marine Geology, Marine Biology, Deep Water Archaeology, Chemical Oceanography, Polar Studies, and Coral Reef Ecology.

32-G449 Patil/Kiva

April 03

11:00 AM–12:00 PM (America/New_York)

Marine Robotics: Planning, Decision Making, and Learning

Abstract: Underwater gliders, propeller-driven submersibles, and other marine robots are increasingly being tasked with gathering information (e.g., in environmental monitoring, offshore inspection, and coastal surveillance scenarios). However, in most of these scenarios, human operators must carefully plan the mission to ensure completion of the task. Strict human oversight not only makes such deployments expensive and time-consuming but also makes some tasks impossible, due to the heavy cognitive load on the operator or the requirement for reliable communication between the operator and the vehicle. We can mitigate these limitations by making robotic information gatherers semi-autonomous: the human provides high-level input to the system, and the vehicle fills in the details of how to execute the plan. In this talk, I will show how a general framework that unifies information-theoretic optimization and physical motion planning makes semi-autonomous information gathering feasible in marine environments. I will leverage techniques from stochastic motion planning, adaptive decision making, and deep learning to provide scalable solutions in a diverse set of applications such as underwater inspection, ocean search, and ecological monitoring. The techniques discussed here make it possible for autonomous marine robots to "go where no one has gone before," allowing for information gathering in environments previously outside the reach of human divers.

Bio: Geoff Hollinger is an Assistant Professor in the Collaborative Robotics and Intelligent Systems (CoRIS) Institute at Oregon State University. His current research interests are in adaptive information gathering, distributed coordination, and learning for autonomous robotic systems. He has previously held research positions at the University of Southern California, Intel Research Pittsburgh, the University of Pennsylvania's GRASP Laboratory, and NASA's Marshall Space Flight Center. He received his Ph.D. (2010) and M.S. (2007) in Robotics from Carnegie Mellon University and his B.S. in General Engineering along with his B.A. in Philosophy from Swarthmore College (2005). He is a recipient of the 2017 Office of Naval Research Young Investigator Program (YIP) award.

32-G449 Patil/Kiva

March 20

Add to Calendar 2018-03-20 11:00:00 2018-03-20 12:00:00 America/New_York Everyday Activity Science and Engineering (EASE) Abstract: Recently we have witnessed the first robotic agents performing everyday manipulation activities such as loading a dishwasher and setting a table. While these agents successfully accomplish specific instances of these tasks, they only perform them within the narrow range of conditions for which they have been carefully designed. They are still far from achieving the human ability to autonomously perform a wide range of everyday tasks reliably in a wide range of contexts. In other words, they are far from mastering everyday activities. Mastering everyday activities is an important step for robots to become the competent (co-)workers, assistants, and companions that are widely considered a necessity for dealing with the enormous challenges our aging society is facing. We propose Everyday Activity Science and Engineering (EASE), a fundamental research endeavour to investigate the cognitive information processing principles employed by humans to master everyday activities and to transfer the obtained insights to models for autonomous control of robotic agents. The aim of EASE is to boost the robustness, efficiency, and flexibility of the various information processing subtasks necessary to master everyday activities by uncovering and exploiting the structures within these tasks. Everyday activities are by definition mundane, mostly stereotypical, and performed regularly. The core research hypothesis of EASE is that robots can achieve mastery by exploiting the nature of everyday activities. We intend to investigate this hypothesis by focusing on two core principles. The first principle is narrative-enabled episodic memories (NEEMs), which are data structures that enable robotic agents to draw knowledge from a large body of observations, experiences, or descriptions of activities. 
The NEEMs are used to find representations that can exploit the structure of activities by transferring tasks into problem spaces that are computationally easier to handle than the original spaces. These representations are termed pragmatic everyday activity manifolds (PEAMs), analogous to the concept of manifolds as low-dimensional local representations in mathematics. The exploitation of PEAMs should enable agents to achieve the desired task performance while preserving computational feasibility. The vision behind EASE is a cognition-enabled robot capable of performing human-scale everyday manipulation tasks in the open world based on high-level instructions, and of mastering them. Bio: Michael Beetz is a professor of Computer Science in the Faculty of Mathematics & Informatics at the University of Bremen and head of the Institute for Artificial Intelligence (IAI). IAI investigates AI-based control methods for robotic agents, with a focus on human-scale everyday manipulation tasks. With openEASE, a web-based knowledge service providing robot and human activity data, Michael Beetz aims to improve interoperability in robotics and lower the barriers to robot programming. Accordingly, the IAI group provides most of its results as open-source software, primarily in the ROS software library. Michael Beetz received his diploma degree in Computer Science with distinction from the University of Kaiserslautern. His MSc, MPhil, and PhD degrees were awarded by Yale University in 1993, 1994, and 1996, and his Venia Legendi by the University of Bonn in 2000. Michael Beetz was a member of the steering committee of the European network of excellence in AI planning (PLANET) and coordinated the research area “robot planning”. He is an associate editor of the AI Journal. His research interests include plan-based control of robotic agents, knowledge processing and representation for robots, integrated robot learning, and cognitive perception. 32-G449 Patil/Kiva

March 13

March 06

Add to Calendar 2018-03-06 11:00:00 2018-03-06 12:00:00 America/New_York Planning and Decision Making for Autonomous Spacecraft and Space Robots This seminar is jointly hosted by the New Trends in Aerospace Seminar Series in the Department of Aeronautics and Astronautics and the Robotics Seminar Series in CSAIL. Abstract: In this talk I will present planning and decision-making techniques for safely and efficiently maneuvering autonomous aerospace vehicles during proximity operations, manipulation tasks, and surface locomotion. I will first address the "spacecraft motion planning problem" by discussing its unique aspects and presenting recent results on planning under uncertainty via Monte Carlo sampling. I will then turn the discussion to higher-level decision making; in particular, I will discuss an axiomatic theory of risk and how one can leverage such a theory for a principled and tractable inclusion of risk-awareness in robotic decision making, in the context of Markov decision processes and reinforcement learning. Throughout the talk, I will highlight a variety of space-robotic applications my research group is contributing to (including the Mars 2020 and Hedgehog rovers, and the Astrobee free-flying robot), as well as applications to the automotive and UAV domains. This work is in collaboration with NASA JPL, NASA Ames, NASA Goddard, and MIT. Bio: Dr. Marco Pavone is an Assistant Professor of Aeronautics and Astronautics at Stanford University, where he is the Director of the Autonomous Systems Laboratory and Co-Director of the Center for Automotive Research at Stanford. Before joining Stanford, he was a Research Technologist within the Robotics Section at the NASA Jet Propulsion Laboratory. He received a Ph.D. degree in Aeronautics and Astronautics from the Massachusetts Institute of Technology in 2010. 
His main research interests are in the development of methodologies for the analysis, design, and control of autonomous systems, with an emphasis on autonomous aerospace vehicles and large-scale robotic networks. He is a recipient of a Presidential Early Career Award for Scientists and Engineers, an ONR YIP Award, an NSF CAREER Award, a NASA Early Career Faculty Award, and a Hellman Faculty Scholar Award, and was named a NASA NIAC Fellow in 2011. His work has been recognized with best paper nominations or awards at the Field and Service Robotics Conference, at the Robotics: Science and Systems Conference, and at NASA symposia. 32-G449 Patil/Kiva