Localization and mapping is a fundamental competence for the design of any mobile robot system. Our research focuses on the design of localization and mapping systems that are robust to data-association errors, scale gracefully in complexity, and generalize across multiple sensory modalities. Graphical models are leveraged for estimation, Bayesian models are used for multi-sensor fusion, and systems are evaluated across indoor and outdoor scenes. Current research considers the use of vision as a primary modality for mapping and localization, and also includes the integration of vision, IMU, and RGB-D sensors. Finally, recent research includes opportunistic localization.
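The Bayesian fusion mentioned above can be illustrated with the simplest case: combining two independent Gaussian estimates of the same quantity from different sensors. This is a minimal sketch, not the actual system; the sensor names and numbers are illustrative assumptions.

```python
def fuse_gaussian(mu1, var1, mu2, var2):
    """Fuse two independent Gaussian estimates of the same quantity.

    The fused mean is the inverse-variance-weighted average of the two
    means; the fused variance is smaller than either input variance,
    reflecting the information gained by combining sensors.
    """
    var = 1.0 / (1.0 / var1 + 1.0 / var2)
    mu = var * (mu1 / var1 + mu2 / var2)
    return mu, var

# Illustrative example: a vision-based estimate x = 2.0 m (variance 0.04)
# fused with an odometry estimate x = 2.2 m (variance 0.16).
mu, var = fuse_gaussian(2.0, 0.04, 2.2, 0.16)
print(mu, var)  # the fused mean lies closer to the lower-variance sensor
```

The same inverse-variance weighting underlies the update step of Kalman-filter-style estimators commonly used for multi-sensor localization.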
The next challenge in mobile robotics is to endow systems with cognitive capabilities. Cognition implies the competence to represent knowledge about the external world in terms of objects, events, and agents; to autonomously acquire such knowledge; and to reason about the world to facilitate action generation. Our research focuses on several aspects of cognitive robots, such as recognition of objects and activities, and dialogue generation to enable interaction with humans as part of learning and the execution of tasks.
Sensor Based Manipulation
Traditionally, robot manipulators have achieved their accuracy through the use of excellent mechanisms and strong models for control. This has enabled the design of robots with accuracies below 1 mm. To achieve repeatable accuracies better than 0.1 mm, sensors must be integrated into the outer feedback loop. A number of different sensory modalities can be utilized, such as force-torque, tactile sensing, and computer vision. We are particularly interested in vision and range data for non-contact sensing, and in the use of force-torque sensing for control in contact configurations. The objective here is to integrate multiple models into hybrid dynamic control models that optimize accuracy and robustness.
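The idea of an outer sensory feedback loop can be sketched as follows. A hypothetical sensor measures the residual task-space error after the inner, model-based controller has positioned the arm, and a proportional correction updates the setpoint. The function names (`measure`, `command`), the gain, and the tolerances are all illustrative assumptions, not part of any specific system described above.

```python
def outer_loop_correction(target, measure, command, gain=0.5,
                          tol=1e-4, max_iters=100):
    """Drive the sensed task-space error below `tol`.

    measure(setpoint) returns the sensed error of the achieved pose
    relative to `target` (e.g., from vision or force-torque sensing);
    command(setpoint) sends a new setpoint to the inner model-based loop.
    """
    setpoint = target
    for _ in range(max_iters):
        error = measure(setpoint)
        if abs(error) < tol:
            return setpoint
        setpoint = setpoint - gain * error  # proportional correction
        command(setpoint)
    return setpoint

# Illustrative use: the mechanism has a systematic 0.02 (arbitrary units)
# bias that the inner loop cannot see; the outer loop compensates for it.
bias = 0.02
log = []
result = outer_loop_correction(
    target=1.0,
    measure=lambda p: (p + bias) - 1.0,  # sensed error of achieved pose
    command=log.append)
```

The point of the sketch is that the sensor closes the loop around the whole mechanism, so systematic model errors are removed rather than calibrated away.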
Human Robot Interaction
The acceptance of robots by non-experts is essential to the wide adoption and utilization of robot systems, and Human-Robot Interaction (HRI) is central to such acceptance. This requires consideration of all aspects of HRI, from design through social interaction to physical interaction. In our research we focus in particular on physical HRI and the interplay between design and interaction.
Georgia Tech entered the DARPA Urban Challenge on autonomous driving in urban settings. This was part of a longer-term effort to study autonomously driving vehicles, initially for non-public-road applications such as homeland security, search-and-rescue, and convoying. Autonomous driving includes aspects of sensor fusion, situation awareness, planning in dynamic environments, and extreme robustness. The platform is ideal for empirical studies of these problems under realistic conditions.