The Multimodal User Supervised Interface and Intelligent Control (MUSIIC) project focuses on a multimodal human-machine interface that addresses users' need to manipulate familiar objects in an unstructured environment. Controlling a robot poses a challenging telemanipulation problem for individuals with significant physical limitations. MUSIIC addresses it with a unique user interface that integrates the user's spoken commands and pointing gestures with autonomous planning techniques (knowledge bases and 3-D vision). The resulting test-bed offers the opportunity to study telemanipulation by individuals with physical disabilities, and the approach generalizes to other telemanipulation settings, including remote and time-delayed ones. This paper focuses on the knowledge-driven planning mechanism that is central to the MUSIIC system.
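The core idea of the interface, fusing a spoken verb with a deictic (pointing) gesture resolved against a 3-D scene model, can be sketched as follows. This is a minimal illustration, not the MUSIIC implementation; every name and data structure here is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class PointingGesture:
    # 3-D location of the pointed-at spot, as the vision subsystem might report it
    x: float
    y: float
    z: float

@dataclass
class Command:
    action: str   # verb recognized from speech, e.g. "pick up"
    target: str   # object identified at the pointed-at location

def resolve_target(gesture: PointingGesture, scene: dict) -> str:
    """Return the name of the known object nearest the pointed-at 3-D location."""
    def dist2(pos):
        return sum((a - b) ** 2 for a, b in zip(pos, (gesture.x, gesture.y, gesture.z)))
    return min(scene, key=lambda name: dist2(scene[name]))

def fuse(utterance: str, gesture: PointingGesture, scene: dict) -> Command:
    """Combine the spoken verb with the deictic gesture into one command."""
    return Command(action=utterance, target=resolve_target(gesture, scene))

# Toy scene: object names mapped to 3-D positions (hypothetical data)
scene = {"cup": (0.2, 0.1, 0.0), "book": (0.9, 0.4, 0.0)}
cmd = fuse("pick up", PointingGesture(0.25, 0.12, 0.0), scene)
print(cmd.action, cmd.target)  # pick up cup
```

In the actual system, the resolved command would then be handed to the knowledge-driven planner, which expands it into manipulation steps using object knowledge rather than requiring the user to specify low-level motions.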