Agile Robotics Forklift Demo at Fort Lee (June 2010)
On June 15 and 16, 2010, the MIT Agile Robotics team demonstrated three robots at a busy SSA (Supply Support Activity, essentially an outdoor warehouse and supply depot) at Fort Lee, Virginia: a robot forklift, a robot rover, and a two-armed robot porter. The robot forklift demonstration included: understanding and execution of verbal commands; interpretation of tablet gestures and speech; seamless handoff of control from autonomous to manned operation and back again; interpretation of a narrated guided tour of palletized supplies; use of visual memory to find and fetch specific pallets; detection of spoken or shouted "Stop" commands; autonomous approach, lifting, transport, and placement of palletized supplies; and spoken confirmation of commands. The robot rover demonst
Albert Huang: Lane Estimation for Autonomous Vehicles using Vision and LIDAR
Autonomous ground vehicles, or self-driving cars, require a high level of situational awareness in order to operate safely and efficiently in real-world conditions. A system that can quickly and reliably estimate the roadway and its lanes from local sensor data would be a valuable asset to fully autonomous vehicles and to driver-assistance systems alike. To be most useful, it must accommodate a variety of roadways, environments with a range of weather and lighting conditions, and highly dynamic scenes with other vehicles and moving objects. Lane estimation can be modeled as a curve estimation problem, in which sensor data provides partial and noisy observations of curves.
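The curve-estimation framing can be illustrated with a minimal sketch, not drawn from the thesis itself: noisy lane-boundary detections are treated as samples of an unknown polynomial curve, and least squares recovers the curve's parameters. The quadratic model, noise level, and coordinates below are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: lane estimation as curve fitting.
# Noisy "detections" (stand-ins for vision/LIDAR returns) are modeled
# as samples of an unknown lane boundary y = f(x), with f quadratic.
rng = np.random.default_rng(0)
true_coeffs = np.array([0.02, -0.5, 3.0])   # assumed ground-truth curve

x = np.linspace(0.0, 30.0, 60)              # longitudinal distance (m)
y = np.polyval(true_coeffs, x) + rng.normal(scale=0.15, size=x.size)

# Least-squares fit recovers the curve from partial, noisy observations.
est_coeffs = np.polyfit(x, y, deg=2)
```

A real system must also handle outliers (e.g., shadows misread as lane paint), which is why robust estimators such as RANSAC are typically layered on top of a plain least-squares fit.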
The project's goal is to enhance an ordinary powered wheelchair using sensors to perceive the wheelchair's surroundings, a speech interface to interpret commands, a wireless device for room-level location determination, and motor-control software to effect the wheelchair's motion. The robotic wheelchair learns the layout of its environment (hospital, rehabilitation center, home, etc.) through a narrated, guided tour given by the user or the user's caregivers. Subsequently, the wheelchair can move to any previously-named location under voice command (e.g., "Take me to the cafeteria"). This technology is appropriate for people who have lost mobility due to brain injury or the loss of limbs, but who retain speech.
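The tour-then-command pattern can be sketched in a few lines. This is a hypothetical simplification, not the project's implementation: during the tour, each spoken room name is tagged with the wheelchair's current pose; later, a voice command is resolved to a taught pose that becomes a navigation goal. All names, poses, and function signatures here are illustrative.

```python
# Minimal sketch (hypothetical API) of the narrated-tour idea:
# spoken names are bound to poses, then used as navigation goals.

locations = {}  # name -> (x, y) pose recorded during the tour


def on_tour_utterance(name, current_pose):
    """During the tour, bind the spoken room name to the current pose."""
    locations[name.lower()] = current_pose


def goal_for_command(utterance):
    """Resolve 'Take me to the cafeteria' to a previously taught pose."""
    for name, pose in locations.items():
        if name in utterance.lower():
            return pose
    return None  # unknown destination; a real system would ask back


# Teaching phase (poses are illustrative):
on_tour_utterance("Cafeteria", (12.0, 3.5))
on_tour_utterance("Therapy room", (2.0, 8.0))

# Command phase:
goal = goal_for_command("Take me to the cafeteria")
```

The real system additionally needs speech recognition, room-level localization from the wireless device, and a motion planner to actually reach the goal; the dictionary lookup above only captures the name-to-place binding.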
This paper describes a method for bringing two videos (recorded at different times) into spatiotemporal alignment, then comparing and combining corresponding pixels for applications such as background subtraction, compositing, and increasing dynamic range. We align a pair of videos by searching for frames that best match according to a robust image registration process. This process uses locally weighted regression to interpolate and extrapolate high-likelihood image correspondences, allowing new correspondences to be discovered and refined. Image regions that cannot be matched are detected and ignored, providing robustness to changes in scene content and lighting and enabling a variety of new applications.
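The locally weighted regression step can be sketched on a toy version of the temporal-alignment subproblem. This is an assumed simplification, not the paper's method: given a few confidently matched frame pairs between video A and video B, a Gaussian-weighted linear fit interpolates the mapping at an unmatched frame. The matched frame times, kernel choice, and bandwidth are illustrative.

```python
import numpy as np

# Sketch: locally weighted linear regression interpolating a sparse
# frame-to-frame correspondence between two videos (illustrative data).
matches_a = np.array([0.0, 10.0, 20.0, 30.0, 40.0])   # frame times in A
matches_b = np.array([5.0, 15.5, 24.8, 35.2, 45.0])   # matching times in B


def lwr_predict(x_query, xs, ys, bandwidth=10.0):
    """Predict y at x_query with Gaussian-weighted linear regression."""
    w = np.exp(-0.5 * ((xs - x_query) / bandwidth) ** 2)
    X = np.stack([np.ones_like(xs), xs], axis=1)
    W = np.diag(w)
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ ys)
    return beta[0] + beta[1] * x_query


# Which frame of B corresponds to (unmatched) frame 25 of A?
b_at_25 = lwr_predict(25.0, matches_a, matches_b)
```

Because the fit is local, a few erroneous correspondences only perturb predictions near them, which is one reason locally weighted regression suits noisy, partially matched data better than a single global fit.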