Haptic virtual fixtures have been shown to improve operator performance in several teleoperated tasks. However, the rendering of such fixtures is often specific to a particular setup, or extends only to a single class of sensor-based rendering. Handling information from multiple auxiliary sensors is challenging, but doing so opens new possibilities for immersion and realism during teleoperation. This is particularly useful when the remote environment is not well known and on-the-fly, sensor-based virtual fixture rendering is desired. Our goal is to synthesize an optimal haptic virtual fixture from arbitrary sensor parameters and the operator's intention or task.
Several immediate problems arise. When multiple sensors are used, conflicting information must be resolved in a principled way to best exploit the data. Overlaying sensors of different types and with varying parameters is a real-time sensor fusion problem. Constructing haptic virtual fixtures from these heterogeneous sources further complicates the task, and sensor limitations may render it intractable.
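To make the fusion-and-rendering pipeline concrete, the following is a minimal sketch, not the method proposed here: it combines distance estimates from several sensors by inverse-variance weighting (one common way to resolve conflicting readings) and uses the fused distance to render a simple proportional forbidden-region fixture force. All function names, the boundary, and the stiffness value are illustrative assumptions.

```python
def fuse_distances(readings):
    """Inverse-variance weighted fusion of distance estimates.

    readings: list of (distance_m, variance) tuples, one per sensor.
    Returns (fused_distance, fused_variance). Lower-variance (more
    trusted) sensors dominate the fused estimate.
    """
    weights = [1.0 / var for _, var in readings]
    total = sum(weights)
    fused = sum(w * d for (d, _), w in zip(readings, weights)) / total
    return fused, 1.0 / total


def fixture_force(distance, boundary=0.10, stiffness=200.0):
    """Repulsive force (N) of a forbidden-region virtual fixture.

    A spring-like force pushes the operator back when the fused
    distance to an obstacle falls inside the boundary; zero outside.
    Boundary (m) and stiffness (N/m) are placeholder values.
    """
    penetration = boundary - distance
    return stiffness * penetration if penetration > 0.0 else 0.0


# Example: a precise sensor at 8 cm and a noisier one at 9 cm.
d, var = fuse_distances([(0.08, 1e-4), (0.09, 4e-4)])
force = fixture_force(d)
```

In practice the fusion step would run at the haptic update rate and handle sensor dropouts and differing fields of view; this sketch only illustrates the shape of the problem.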
This work is directly aimed at increasing the immersion and realism experienced by a teleoperator. As telerobots continue to be deployed in critical, potentially life-threatening scenarios, it is imperative that the operator remain as situationally aware as possible. Integrating multiple sensors with haptic virtual fixtures can enhance both the safety and the efficiency of teleoperated tasks.