Design and validation of a novel online platform to support the usability evaluation of wearable robotic devices

Published online by Cambridge University Press:  24 January 2023

Jan T. Meyer*
Affiliation:
Rehabilitation Engineering Laboratory, Department of Health Sciences and Technology, ETH Zürich, Zürich, Switzerland
Natalie Tanczak
Affiliation:
Rehabilitation Engineering Laboratory, Department of Health Sciences and Technology, ETH Zürich, Zürich, Switzerland Future Health Technologies, Singapore-ETH Centre, Campus for Research Excellence and Technological Enterprise (CREATE), Singapore, Singapore
Christoph M. Kanzler
Affiliation:
Rehabilitation Engineering Laboratory, Department of Health Sciences and Technology, ETH Zürich, Zürich, Switzerland Future Health Technologies, Singapore-ETH Centre, Campus for Research Excellence and Technological Enterprise (CREATE), Singapore, Singapore
Colin Pelletier
Affiliation:
Rehabilitation Engineering Laboratory, Department of Health Sciences and Technology, ETH Zürich, Zürich, Switzerland
Roger Gassert
Affiliation:
Rehabilitation Engineering Laboratory, Department of Health Sciences and Technology, ETH Zürich, Zürich, Switzerland Future Health Technologies, Singapore-ETH Centre, Campus for Research Excellence and Technological Enterprise (CREATE), Singapore, Singapore
Olivier Lambercy
Affiliation:
Rehabilitation Engineering Laboratory, Department of Health Sciences and Technology, ETH Zürich, Zürich, Switzerland Future Health Technologies, Singapore-ETH Centre, Campus for Research Excellence and Technological Enterprise (CREATE), Singapore, Singapore
*
*Author for correspondence: Jan T. Meyer, Email: relab.publications@hest.ethz.ch

Abstract

Wearable robotic devices (WRD) are still struggling to fulfill their vast potential. Inadequate daily life usability is one of the main hindrances to increased technology acceptance. Improving usability evaluation practices during the development of WRD could help address these limitations. In this work, we present the design and validation of a novel online platform aiming to fill this gap, the Interactive Usability Toolbox (IUT). This platform consists of a public website that offers an interactive, context-specific search within a database of 154 user research methods and educational information about usability. In a dedicated study, the effect of this platform on supporting usability evaluation was investigated. Twelve WRD experts were asked to complete the task of defining usability evaluation protocols for two specific use cases. The platform was provided to support one of the use cases. The quality and composition of the proposed protocols were assessed by (i) two blinded reviewers, (ii) the participants themselves, and (iii) the study coordinators. We showed that using the IUT significantly affected the proposed evaluation focus, shifting protocols from mainly effectiveness-oriented to more user-focused studies. The protocol quality, as rated by the external reviewers, remained equivalent to that of protocols designed with conventional strategies. A mixed-method usability evaluation of the platform yielded an overall positive impression, with detailed suggestions for further improvements. The IUT is expected to positively affect the evaluation and development of WRD through its educational value, its context-specific recommendations supporting ongoing benchmarking endeavors, and its emphasis on the value of qualitative user research.

Information

Type
Research Article
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2023. Published by Cambridge University Press
Figure 1. Overview of the Wizard tool of the Interactive Usability Toolbox: (a) interface of the Wizard, with a step-by-step guide for defining the WRD context of use; (b) the last step of the Wizard, the definition of the evaluation focus by selecting a maximum of five usability attributes; (c) result of the Wizard search, with evaluation items listed by their Context Fit and Recorded Use. Individual items can be added to a selection and later viewed in bulk on a summary page.

Figure 2. Overview of the study design: A cross-over study design was applied, with the Interactive Usability Toolbox as intervention (IUT, green color). Participants were randomly allocated to Group A or B, either starting with (+IUT) or without (−IUT) the toolbox. All data collection points are marked in red, and the outcome measures applied at each data collection point are listed as bullet points.

Table 1. Participant demographics and wearable robotic device experience (n = 12)

Table 2. External reviewer quality grading (1 = lowest, 10 = highest) and ranking (1–12) of usability evaluation protocols (n = 24)

Table 3. Perspectives of participants on quality criteria of usability evaluation protocols (11-point Likert scale, n = 12)

Figure 3. Participants’ views on the usability evaluation focus: The focus of the proposed protocol was rated by the participants by allocating a total of 100 points across the three usability dimensions effectiveness (EFT), satisfaction (SAT), and efficiency (EFI); * = p < .05.

Figure 4. Likert scale results on learning effects and knowledge transfer: The participants of Group A (UC1: −IUT; UC2: +IUT) and Group B (UC1: +IUT; UC2: −IUT) rated their level of agreement with four statements on knowledge transfer and learning effects.

Table 4. Analysis of evaluation protocol composition (n = 24)

Figure 5. Likert scale results from the custom usability questionnaire: The level of agreement with eight statements about the impact of the IUT on the WRD evaluation study task was rated from 1 = strongly disagree to 10 = strongly agree.

Supplementary material: PDF

Meyer et al. supplementary material

Download Meyer et al. supplementary material (PDF, 114.5 KB)