
A virtual reality-based dual-mode robot teleoperation architecture

Published online by Cambridge University Press:  07 May 2024

Marco Gallipoli*
Affiliation:
Department of Industrial Engineering, University of Naples Federico II, Naples, Italy
Sara Buonocore
Affiliation:
Department of Industrial Engineering, University of Naples Federico II, Naples, Italy
Mario Selvaggio
Affiliation:
Department of Electrical Engineering and Information Technology, University of Naples Federico II, Naples, Italy
Giuseppe Andrea Fontanelli
Affiliation:
Herobots s.r.l., Naples, Italy
Stanislao Grazioso
Affiliation:
Department of Industrial Engineering, University of Naples Federico II, Naples, Italy
Giuseppe Di Gironimo
Affiliation:
Department of Industrial Engineering, University of Naples Federico II, Naples, Italy
*
Corresponding author: Marco Gallipoli; Email: mar.gallipoli@studenti.unina.it

Abstract

This paper proposes a virtual reality-based dual-mode teleoperation architecture to assist human operators in remotely operating robotic manipulation systems in a safe and flexible way. The architecture, implemented via a finite state machine, enables the operator to switch between two operational modes: the Approach mode, where the operator indirectly controls the robotic system by specifying its target configuration via the immersive virtual reality (VR) interface, and the Telemanip mode, where the operator directly controls the robot end-effector motion via input devices. The two independent control modes have been tested on the task of reaching a glass on a table by a sample population of 18 participants. Participants were divided into two groups to distinguish users with previous experience of VR technologies from novices. The results of the user study presented in this work show the potential of the proposed architecture in terms of usability, physical and mental workload, and user satisfaction. Finally, a statistical analysis showed no significant differences in these three metrics between the two groups, demonstrating that the proposed architecture is easy to use for people both with and without previous experience in VR.

Information

Type
Review Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0/), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2024. Published by Cambridge University Press

Figure 1. Scheme of the proposed teleoperation architecture composed of two sides: the local VR side (upper part of the image) features a local workstation with marker tracking, interaction, and visualization modules for the human operator; the remote robot side (lower part of the image) features a remote workstation that implements the proposed dual-mode teleoperation control architecture and communicates with the robot cabinet responsible for the low-level real-time robot control.


Figure 2. State machine of the proposed dual-mode teleoperation architecture. Approach and Telemanip are the main states besides the Idle one. In the Approach State, the operator can move around and place the virtual end-effector in the desired pose, plan a trajectory for the robot within the Plan traj state, and subsequently visualize it through the Cmd traj state. In the Telemanip State, instead, the user can realign the input devices or command the real robot end-effector velocity in the Cmd vel state. Switching among states is triggered by pressing the input device buttons.
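The button-triggered state machine described above can be sketched as a simple transition table. This is an illustrative reconstruction, not the authors' implementation: the state names follow Figure 2, while the button event names and the exact transition set are assumptions.

```python
from enum import Enum, auto


class State(Enum):
    IDLE = auto()
    APPROACH = auto()
    PLAN_TRAJ = auto()
    CMD_TRAJ = auto()
    TELEMANIP = auto()
    CMD_VEL = auto()


# Allowed transitions, keyed by (current state, button event).
# Event names are hypothetical placeholders for the controller buttons.
TRANSITIONS = {
    (State.IDLE, "select_approach"): State.APPROACH,
    (State.IDLE, "select_telemanip"): State.TELEMANIP,
    (State.APPROACH, "plan"): State.PLAN_TRAJ,
    (State.PLAN_TRAJ, "execute"): State.CMD_TRAJ,
    (State.CMD_TRAJ, "done"): State.APPROACH,
    (State.APPROACH, "back"): State.IDLE,
    (State.TELEMANIP, "engage"): State.CMD_VEL,
    (State.CMD_VEL, "release"): State.TELEMANIP,
    (State.TELEMANIP, "back"): State.IDLE,
}


class DualModeFSM:
    def __init__(self):
        self.state = State.IDLE

    def on_button(self, event: str) -> State:
        # Button presses that are not valid in the current state are ignored.
        self.state = TRANSITIONS.get((self.state, event), self.state)
        return self.state
```

A full Approach cycle then reads naturally as a sequence of button events: select Approach from the disk menu, plan the trajectory, execute it, and return to Idle.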


Figure 3. Task execution phases. In phase I, the operator sees the Idle state and opens the disk menu to choose the next state. In phase II, the Approach State is enabled, and the user can send the desired pose and control the Transparent arm and the Opaque arm. In phase III, the user directly controls the Opaque arm to accomplish the task.


Algorithm 1 Telemanip State


Figure 4. 3D representation of the BRILLO scenario. As shown in ref. [4], it has been recreated in CoppeliaSim; the bartender robot consists of two KUKA LBR 14 R820 arms and two Schunk EGL 90 PN grippers.


Figure 5. Reference frames in the vision tracker module. The markers' poses are defined with respect to the RRF. The ARF is centered on the ArUco marker, the CRF on the focal plane of the camera, and the GRF is located at the center of the glass base. The camera measures the relative pose of the ARF, which is rigidly attached to the GRF.
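Because the ARF is rigidly attached to the GRF, the glass pose in the robot reference frame follows from chaining homogeneous transforms: RRF→CRF (calibration), CRF→ARF (camera measurement), and ARF→GRF (fixed offset). A minimal sketch with NumPy, using hypothetical numeric values in place of the real calibration and measurement data:

```python
import numpy as np


def transform(R: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T


# Hypothetical example values; identity rotations keep the arithmetic readable.
T_rrf_crf = transform(np.eye(3), np.array([0.5, 0.0, 1.2]))    # camera in robot frame (calibration)
T_crf_arf = transform(np.eye(3), np.array([0.0, 0.1, 0.8]))    # marker pose measured by the camera
T_arf_grf = transform(np.eye(3), np.array([0.0, 0.0, -0.05]))  # glass base w.r.t. the marker (fixed)

# Chain the transforms: glass pose expressed in the robot reference frame.
T_rrf_grf = T_rrf_crf @ T_crf_arf @ T_arf_grf
```

With identity rotations the translations simply accumulate, which makes the chained result easy to verify by hand.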


Figure 6. Communication framework. ROS and Unity publish and subscribe to data on multiple topics. The topics are divided by message type (geometry, sensor, string, number) and organized according to the information they transmit. On the left are the topics published by ROS; on the right, those published by Unity.
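The topic-based exchange between the two sides can be illustrated with a minimal in-process publish/subscribe bus. This is a stand-in sketch, not the paper's actual ROS/Unity bridge: the topic name and message layout below are hypothetical, and a plain dict replaces the real geometry message type.

```python
from collections import defaultdict
from typing import Any, Callable


class TopicBus:
    """Minimal in-process stand-in for the ROS/Unity topic exchange."""

    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, topic: str, callback: Callable[[Any], None]) -> None:
        self._subscribers[topic].append(callback)

    def publish(self, topic: str, message: Any) -> None:
        # Deliver the message to every callback registered on this topic.
        for cb in self._subscribers[topic]:
            cb(message)


bus = TopicBus()
received = []
# The Unity side subscribes to the robot's end-effector pose (topic name hypothetical).
bus.subscribe("/robot/ee_pose", received.append)
# The ROS side publishes a geometry-style message, modeled here as a plain dict.
bus.publish("/robot/ee_pose", {"position": [0.4, 0.0, 0.3], "orientation": [0, 0, 0, 1]})
```

In the real system, each side would declare its publishers and subscribers against the shared topic list of Figure 6, with message types enforced by ROS rather than by convention.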


Figure 7. Controller bindings: (I) Trackpad button, (II) Grip button, (III) System button, and (IV) Trigger button.


Figure 8. Experimental phases. The BRILLO case study can be divided into three main phases: the training phase, which involved showing a video to the participants to help them understand how to use the HRI; the test execution phase, in which participants tested the system one by one; and finally, the assessment surveys, during which participants completed their questionnaires.


Table I. SUS score classes.


Table II. NASA score classes.


Table III. SAT answer score.


Figure 9. SUS score. Two diagrams illustrate the results: the distribution of scores overall (on the left) and broken down into “YES VR” and “NO VR” categories (on the right). The x-axis indicates the score classes, while the y-axis shows the corresponding percentages.


Figure 10. NASA TLX score. The results are shown in two diagrams: the distribution of scores overall (on the left) and split into “YES VR” and “NO VR” categories (on the right). The x-axis represents the score classes, while the y-axis displays the corresponding percentages.


Figure 11. SAT score. The results are presented in two diagrams: the distribution of scores overall (on the left) and categorized into “YES VR” and “NO VR” (on the right). The x-axis denotes the score classes, while the y-axis exhibits the corresponding percentages.


Figure 12. Box plots providing a visual representation of the statistical study carried out to evaluate the significance of previous experience in VR (YES vs. NO) on the three metrics evaluated in this work: SUS score (left), NASA TLX (center), and SAT score (right).