Characterising individual dexterity in physical product interaction: A data-driven approach

Published online by Cambridge University Press: 08 May 2026

Isabelle Ormerod*
Affiliation: University of Bristol, UK
Mike Fraser
Affiliation: University of Bristol, UK
Chris Snider
Affiliation: University of Bristol, UK
*Corresponding author: Isabelle Ormerod, isabelle.ormerod@bristol.ac.uk

Abstract

Dexterity is a critical consideration when designing physical and interactive products, shaped as it is by each user’s unique proprioceptive and musculoskeletal traits. The extent to which these individual differences manifest during physical product interactions, and the methods needed to quantify them effectively, remain largely unexplored. Measuring subtle characteristics of hand–object interactions could help researchers and practitioners better understand how users interact with products, paving the way for more refined, accessible, bespoke or adaptive products tailored to individuals’ dexterity and usage. This paper investigates (1) individual differences within object interactions for single-handed, highly dexterous tasks and (2) the feasibility of data-driven measurement of dexterous interaction. A study explores the ability of data-driven techniques to identify individual differences and characterise dexterous interaction in (i) an unconstrained hand–object interaction scenario and (ii) a constrained hand–tool–object manipulation scenario. Despite a reduction in performance variance during the constrained task, the classification of user actions remained heavily dependent on participant-specific features. Models trained on group data failed to generalise to new users, highlighting the significant inter-participant variability in dexterous strategies, even under constrained conditions. Our results demonstrate that user-specific data capture could aid personalised product development, and we provide recommendations for implementation in future work.
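
The generalisation gap described above can be illustrated with a minimal sketch, assuming a scikit-learn-style pipeline; the feature matrix X, action labels y and per-trial participant IDs (groups) are hypothetical placeholders, not data from the study. Within-participant folds mix every user’s trials, whereas a leave-one-group-out split holds out one whole participant per fold, so the gap between the two scores exposes how participant-specific the learned features are.

```python
# Minimal sketch (not the authors' code): within-participant
# cross-validation versus leave-one-participant-out evaluation.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import (LeaveOneGroupOut, StratifiedKFold,
                                     cross_val_score)

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 12))          # placeholder hand-pose features
y = rng.integers(0, 4, size=200)        # placeholder action classes
groups = np.repeat(np.arange(10), 20)   # placeholder participant IDs

clf = RandomForestClassifier(random_state=0)

# Folds mix trials from every participant.
within = cross_val_score(clf, X, y, cv=StratifiedKFold(n_splits=5))

# Each fold holds out one whole participant, so the model must
# generalise to a user it has never seen.
logo = cross_val_score(clf, X, y, groups=groups, cv=LeaveOneGroupOut())

print(f"within-participant accuracy:     {within.mean():.2f}")
print(f"leave-one-participant-out score: {logo.mean():.2f}")
```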

Information

Type
Research Article
Creative Commons
CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2026. Published by Cambridge University Press

Figure 1. (a) Hand-tracking camera and monitor setup. (b) MediaPipe hand-tracking marker locations.
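
For reference, hand landmarks of the kind shown in Figure 1(b) can be extracted per frame with the MediaPipe Hands solution. The sketch below assumes the legacy Python solutions API and a hypothetical video file; the single-hand and confidence settings are illustrative, not the study’s configuration.

```python
# Minimal sketch: per-frame extraction of the 21 MediaPipe hand
# landmarks from a recorded trial video (filename is hypothetical).
import cv2
import mediapipe as mp

hands = mp.solutions.hands.Hands(
    static_image_mode=False,      # video mode: track across frames
    max_num_hands=1,
    min_detection_confidence=0.5,
)

cap = cv2.VideoCapture("trial_video.mp4")
trajectories = []
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # MediaPipe expects RGB input; OpenCV decodes frames as BGR.
    result = hands.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if result.multi_hand_landmarks:
        lm = result.multi_hand_landmarks[0].landmark
        # 21 (x, y, z) points, normalised to the image frame.
        trajectories.append([(p.x, p.y, p.z) for p in lm])
cap.release()
hands.close()
```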

Figure 2. Multiple camera views of the participant.

Figure 3. Study and scenario flow chart.

Figure 4. (a) Dexterity Test Board with 4 of each object. (b) Example action for object 9.

Figure 5. Example 3D movement visualisation for Scenario 1: Participant 5.

Figure 6. (a) Test equipment used in Scenario 2: large tweezers (left) and small tweezers (right). (b) The grip trainer used by participants in the fatiguing exercise.

Figure 7. Example 3D movement visualisation for Scenario 2: large tweezers.

Figure 8. Pre-processing diagram. After filtering, the video streams were labelled using BORIS. Three separate analysis streams were undertaken: classification, linear discriminant analysis and participant performance analysis.
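
The caption does not specify the filter applied before labelling, so the following is only a plausible sketch: a zero-phase low-pass Butterworth filter smoothing each landmark-coordinate trajectory, with the 30 Hz frame rate and 6 Hz cutoff as assumed values.

```python
# Minimal sketch (assumed filter, not the paper's stated method):
# zero-phase Butterworth smoothing of landmark trajectories.
import numpy as np
from scipy.signal import butter, filtfilt

def lowpass(traj, fs=30.0, cutoff=6.0, order=4):
    """Smooth an (n_frames, n_channels) trajectory sampled at fs Hz."""
    b, a = butter(order, cutoff / (fs / 2), btype="low")
    return filtfilt(b, a, traj, axis=0)  # filtered forwards and backwards

# Placeholder: 300 frames of 21 landmarks x 3 coordinates.
traj = np.cumsum(np.random.default_rng(1).normal(size=(300, 63)), axis=0)
smoothed = lowpass(traj)
```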

Table 1. Classification rates for Scenario 1

Table 2. Silhouette scores for Scenario 1

Figure 9. Linear discriminant analysis by participant for Scenario 1, shown for right-handed (RH) participants.

Figure 10. Linear discriminant analysis by object for Scenario 1, shown for RH participants.
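
Figures 9 and 10 apply the same embedding idea under two labellings. A minimal sketch of that comparison, on hypothetical features rather than the study’s data, is shown below; tight participant clusters combined with overlapping object clusters would indicate that user identity dominates the feature space.

```python
# Minimal sketch (not the authors' code): LDA projections of the same
# features using participant IDs versus object IDs as class labels.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(2)
X = rng.normal(size=(400, 12))           # placeholder interaction features
participants = rng.integers(0, 8, 400)   # placeholder participant IDs
objects = rng.integers(0, 10, 400)       # placeholder object IDs

# Two 2-D embeddings of identical data, separated by different labels.
by_participant = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, participants)
by_object = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, objects)
```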

Table 3. Test statistics and significance values for dominant versus non-dominant hand object interactions in Scenario 1

Table 4. NASA-TLX significance values for Scenario 1 between dominant and non-dominant hands
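
The caption does not name the statistical test, so the sketch below assumes a paired non-parametric comparison (a Wilcoxon signed-rank test) of hypothetical NASA-TLX subscale ratings between dominant and non-dominant hands; it illustrates the shape of the analysis, not the authors’ exact procedure.

```python
# Minimal sketch (assumed test): paired Wilcoxon signed-rank comparison
# of placeholder NASA-TLX subscale scores for the two hands.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(3)
dominant = rng.integers(2, 12, 15)                # placeholder ratings
non_dominant = dominant + rng.integers(1, 6, 15)  # placeholder ratings

stat, p = wilcoxon(dominant, non_dominant)
print(f"W = {stat}, p = {p:.3f}")
```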

Figure 11. Example confusion matrix results for the leave-one-group-out (LOGO) analysis split in Scenario 1.
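
A matrix of this kind can be assembled by pooling the held-out predictions across LOGO folds. The sketch below assumes scikit-learn and the same hypothetical feature matrix, labels and participant IDs as in the earlier sketch.

```python
# Minimal sketch: pooled confusion matrix over leave-one-group-out folds.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import confusion_matrix
from sklearn.model_selection import LeaveOneGroupOut, cross_val_predict

rng = np.random.default_rng(4)
X = rng.normal(size=(200, 12))          # placeholder features
y = rng.integers(0, 4, size=200)        # placeholder action classes
groups = np.repeat(np.arange(10), 20)   # placeholder participant IDs

# Every prediction comes from a fold where that participant was held out.
pred = cross_val_predict(RandomForestClassifier(random_state=0),
                         X, y, groups=groups, cv=LeaveOneGroupOut())
print(confusion_matrix(y, pred))
```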

Table 5. Classification rates for Scenario 2

Figure 12. Linear discriminant analysis by participant for left- and right-handed participants.

Figure 13. Linear discriminant analysis by task for left- and right-handed participants.

Table 6. Silhouette scores for Scenario 2
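
Silhouette scores such as those in Tables 2 and 6 quantify cluster separation: values near +1 indicate tight, well-separated clusters, while values near 0 indicate overlap. A minimal sketch, assuming the score is computed over an LDA embedding with participant IDs as cluster labels (the paper’s exact pipeline may differ):

```python
# Minimal sketch: silhouette score of an LDA embedding, treating each
# participant's trials as one cluster (hypothetical data).
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.metrics import silhouette_score

rng = np.random.default_rng(5)
X = rng.normal(size=(400, 12))          # placeholder features
participants = rng.integers(0, 8, 400)  # placeholder participant IDs

embedding = LinearDiscriminantAnalysis(n_components=2).fit_transform(X, participants)
print(silhouette_score(embedding, participants))
```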

Table 7. Statistical test and significance values for coarse performance metrics in Scenario 2

Table 8. NASA-TLX test statistics and significance values for task permutations presented in Scenario 2

Table 9. Two-way ANOVA significance values for all hand joint angles (tweezer size versus fatigue)
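
A two-way ANOVA of the kind summarised in Table 9 can be set up as follows; the joint-angle values, sample size and factor codings are placeholders, with statsmodels assumed as the analysis library.

```python
# Minimal sketch (placeholder data): two-way ANOVA of a joint angle
# with tweezer size and fatigue state as crossed factors.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(6)
df = pd.DataFrame({
    "angle": rng.normal(45, 5, 120),              # placeholder angle (deg)
    "tweezer": np.tile(["small", "large"], 60),
    "fatigue": np.repeat(["pre", "post"], 60),
})

model = smf.ols("angle ~ C(tweezer) * C(fatigue)", data=df).fit()
print(anova_lm(model, typ=2))  # main effects and interaction terms
```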

Figure 14. Example confusion matrix results for the LOGO analysis split in Scenario 2.