The Problem
Human interaction with the physical world is increasingly mediated by intelligent machines, such as surgical robots and active prostheses and orthoses. While machines can be engineered to respond to a given input with high accuracy, current models of human-computer interaction, in particular how humans learn to operate these machines, remain under investigation. Our goal is to amplify this interaction by designing machines that adapt to and learn from their human partners, accelerating operator learning.
Our Solution
We sought to elucidate the feedforward and feedback control strategies that human operators use to supervise these machines. We investigated this question by simulating robotic teleoperation with a path-following video game.
Our preliminary results show that human operators learn and invert the system dynamics while performing these teleoperation simulations. These data suggest that operators engage in model inversion: defining a desired output and internally planning the input that will drive the system to the intended target, as illustrated in the sketch below.
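For intuition, here is a minimal sketch (not the model or parameters used in the study) of this feedforward-plus-feedback structure: an internal model of a simple first-order plant is inverted to plan the input that produces the desired trajectory, and a proportional feedback term corrects the residual tracking error. The plant coefficients, feedback gain, and reference trajectory are illustrative assumptions.

```python
import numpy as np

# Hypothetical first-order plant: y[k+1] = a*y[k] + b*u[k]
a, b = 0.9, 0.5

# Desired trajectory (the path the operator is asked to follow)
k = np.arange(200)
r = np.sin(2 * np.pi * k / 100)

y = np.zeros_like(r)
kp = 0.8  # assumed feedback gain

for t in range(len(k) - 1):
    # Feedforward via model inversion: choose the input that makes the
    # internal model land exactly on the next reference point r[t+1]
    u_ff = (r[t + 1] - a * r[t]) / b
    # Feedback: proportional correction of the current tracking error
    u_fb = kp * (r[t] - y[t])
    u = u_ff + u_fb
    # Plant update
    y[t + 1] = a * y[t] + b * u

print("RMS tracking error:", np.sqrt(np.mean((r - y) ** 2)))
```

In this sketch, the feedforward term alone would track the reference perfectly if the internal model matched the plant; the feedback term matters when the model is imperfect or the plant is disturbed, which is the trade-off the experiments probe.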
These results may provide insight into how to design control interfaces that adapt to their human operators with high fidelity and support reliably safe teleoperation.
Impact
The results from this study will help predict how human operators interact with novel systems and provide a starting point for ensuring safe operation of those systems. In addition, this method can be extended to the clinic to measure motor learning in individuals with motor impairments.
Publications
M. Yamagami, D. Howell, E. Roth, S. A. Burden. Contributions of feedforward and feedback control in a manual trajectory-tracking task. 2nd IFAC Conference on Cyber-Physical & Human Systems, Miami, USA, Dec. 14-15, 2018.
Affiliated Students and Faculty: Momona Yamagami, Ben Chasnov, Sam Burden
Related Media: