Toward robotic autonomy in data-scarce and visually challenging environments

Speaker

Jungseok Hong
University of Minnesota Twin Cities

Host

John Leonard
MIT CSAIL

Abstract:
Recent advances in field robotics allow robots to be deployed in potentially dangerous unstructured environments (e.g., underwater, space, and disaster scenes) instead of risking human safety. In these spaces, it can be extremely difficult for robots to localize and map their surroundings, forcing them to rely on human operators via tethered or wireless communication. This complicates operations and limits the scope of robotic missions. Effective human-robot collaboration can ease these difficulties, further enabling robot autonomy and human efficiency in complex environments.

My long-term research goal is to enable robots to perform tasks autonomously in unstructured environments through robust visual perception and human-robot collaboration. My research focuses on four areas: (1) visual perception in data-scarce and visually challenging underwater environments, (2) localization and exploration in unstructured environments, (3) algorithms for effective human-robot collaboration, and (4) open-source underwater robotic systems.
Additionally, I work to address perception challenges in agricultural and grasping robots, demonstrating that my research is applicable across environments and applications. My expertise with robots from these varied domains (underwater, agriculture, and manipulation) offers the potential for future collaborative research with disciplines beyond computer science, including agriculture, healthcare, and oceanography.

In this talk, I will highlight my research on underwater object detection, perception for human-robot interaction, and an open-source underwater robotic system. I will also briefly discuss open research problems in these areas.

Bio: Jungseok Hong is a Ph.D. candidate in Robotics in the Department of Computer Science and Engineering and the Minnesota Robotics Institute at the University of Minnesota Twin Cities, and a member of the IRV Lab, advised by Professor Junaed Sattar. His research is at the intersection of robotics, computer vision, and machine learning. He primarily focuses on improving robotic perception in data-scarce and visually challenging environments using deep learning and probabilistic approaches.