Localization and Mapping
Localization and mapping is a fundamental competence for any mobile robot system. The research focuses on the design of localization and mapping systems that are robust to data-association errors, have scalable complexity, and generalize across multiple sensory modalities. Graphical models are leveraged for estimation, Bayesian models are used for multi-sensory fusion, and systems are evaluated across indoor and outdoor scenes. Current research considers vision as the primary modality for mapping and localization.
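The Bayesian multi-sensory fusion mentioned above can be illustrated with a minimal sketch (not the group's actual system): two noisy Gaussian estimates of the same quantity, e.g. a precise laser range and a coarse vision-based depth, are combined with an inverse-variance-weighted update. The sensor values and noise levels are hypothetical.

```python
# Illustrative Bayesian fusion of two Gaussian estimates of one quantity.
# Values below (laser vs. vision depth, in meters) are made up for the example.

def fuse(mean_a, var_a, mean_b, var_b):
    """Combine two Gaussian estimates; the fused variance is always
    smaller than either input variance, and the fused mean lies
    between the two inputs, weighted toward the more precise one."""
    var = 1.0 / (1.0 / var_a + 1.0 / var_b)
    mean = var * (mean_a / var_a + mean_b / var_b)
    return mean, var

# A precise laser reading (2.05 m, var 0.01) fused with a coarse
# vision-based depth estimate (2.30 m, var 0.25).
mean, var = fuse(2.05, 0.01, 2.30, 0.25)
```

The fused estimate stays close to the low-variance laser reading while still incorporating the vision measurement, which is the basic mechanism behind Kalman-style sensor fusion.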
Mixed Palletizing
Almost every item in a store has been on a truck at least once. For transportation, items must be placed on pallets, preferably packed so that a minimum number of pallets is used, to minimize transportation costs. The problem of placing a multitude of packages on a pallet is often referred to as the mixed palletizing problem. It is a special case of the more general knapsack problem, or the space-carving problem, which has been widely studied in computer science.
The problem is known to be NP-hard, and a number of heuristics have been developed to provide approximate solutions. In collaboration with NIST, the Virtual Manufacturing Challenge has been designed to study the problem in simulation and in real-world scenarios. New methods for mixed palletizing are studied as high-complexity planning problems, using heuristics ranging from sampling to branch-and-bound.
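The greedy flavor of such packing heuristics can be sketched with the classic first-fit-decreasing rule, shown here on a 1D simplification (package volumes against a pallet capacity). Real mixed palletizing is a 3D problem with stability and ordering constraints; this sketch only illustrates the heuristic idea, and the volumes used are arbitrary examples.

```python
# First-fit decreasing: a classic bin-packing heuristic, shown in 1D.
# Real mixed palletizing is 3D; this only conveys the greedy strategy.

def first_fit_decreasing(volumes, capacity):
    """Place each volume (largest first) on the first pallet with enough
    remaining capacity; open a new pallet when none fits. Returns the
    number of pallets used."""
    pallets = []  # remaining capacity of each open pallet
    for v in sorted(volumes, reverse=True):
        for i, remaining in enumerate(pallets):
            if v <= remaining:
                pallets[i] -= v
                break
        else:  # no open pallet had room
            pallets.append(capacity - v)
    return len(pallets)

# Eight packages against a pallet capacity of 1.0; FFD uses 4 pallets here,
# matching the volume lower bound (total volume 3.1).
n = first_fit_decreasing([0.5, 0.7, 0.5, 0.2, 0.4, 0.2, 0.5, 0.1], 1.0)
```

First-fit decreasing is a simple baseline; the sampling and branch-and-bound methods mentioned above explore the placement space more systematically at higher computational cost.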
Every year we organize the Virtual Manufacturing Competition in association with ICRA. See www.vma-competition.com for details.
Cognitive Robotics
The next challenge in mobile robotics is to endow systems with cognitive capabilities. Cognition implies the competence to represent knowledge about the external world in terms of objects, events, and agents, to autonomously acquire such knowledge, and to reason about the world to facilitate action generation. The research focuses on several aspects of cognitive robots, such as recognition of objects and activities, and dialog generation to enable interaction with humans as part of learning and executing tasks.
Sensor Based Manipulation
Traditionally, robot manipulators have achieved their accuracy through excellent mechanisms and strong models for control. This has enabled the design of robots with accuracies below 1 mm. To achieve repeatable accuracies better than 0.1 mm, there is a need to integrate sensors into an outer feedback loop. A number of sensory modalities can be utilized, such as force-torque, tactile, and computer vision. We are particularly interested in vision and range data for non-contact sensing, and in force-torque sensing for control in contact configurations. The objective here is to integrate multiple models into hybrid dynamic control models that optimize accuracy and robustness.
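The role of the outer sensor feedback loop can be sketched with a minimal 1D example, assuming a proportional correction driven by a sensed offset (e.g. from vision). The gain and the 2% mechanism error below are illustrative values, not measurements from any actual system; the point is only that closing the loop on an external sensor drives residual error below what the open-loop mechanism alone achieves.

```python
# Minimal sketch of an outer sensor-based feedback loop in 1D.
# Each iteration senses the remaining offset and commands a proportional
# correction; the mechanism executes it with a 2% model error.
# Gain and error values are illustrative, not real system parameters.

def servo_to(target_mm, gain=0.5, steps=20):
    position = 0.0
    for _ in range(steps):
        error = target_mm - position     # offset measured by the sensor
        position += gain * error * 0.98  # commanded move, 2% mechanism error
    return position

final = servo_to(100.0)  # converges to well within 0.1 mm of the target
```

Even with the persistent mechanism error, the per-step error shrinks by a constant factor, so the loop converges geometrically; this is the basic argument for sensor-in-the-loop accuracy beyond the mechanism's intrinsic precision.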
Human Robot Interaction
Acceptance of robots by non-experts is essential to broad adoption and utilization of robot systems, and human-robot interaction (HRI) is central to such acceptance. This requires consideration of all aspects of HRI, from design through social interaction to physical interaction. In our research we focus in particular on physical HRI and the interplay between design and interaction.
Autonomous Driving
Georgia Tech entered the DARPA Urban Challenge on autonomous driving in urban settings. This was part of a longer-term effort to study autonomously driving vehicles, initially for non-public-road applications such as homeland security, search-and-rescue, and convoying. Autonomous driving involves aspects of sensory fusion, situation awareness, planning in dynamic environments, and extreme robustness. The platform is ideal for empirical studies of these problems under realistic conditions.