
Interactive machine learning framework enabling affordable and accurate prototyping for supporting decision-making

Published online by Cambridge University Press:  27 August 2025

Qiyu Li*
Affiliation:
Texas A&M University, USA
Daniel McAdams
Affiliation:
Texas A&M University, USA

Abstract:

This study proposes an interactive, machine-learning-based framework for early-stage design that addresses a familiar trade-off: physical prototypes are accurate but costly, while virtual prototypes are affordable but less reliable. The neural-network-based, human-in-the-loop framework combines pre-training and fine-tuning to reduce reliance on extensive physical prototyping while maintaining model accuracy. Using projectile motion as an example, the framework demonstrates its ability to guide design by iteratively updating models based on limited experimental data and human expertise. The results show that the framework achieves performance comparable to models trained on larger datasets, offering a cost-effective route to accurate design models.
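The pre-train/fine-tune strategy described above can be sketched in code. The following is an illustrative example only, not the authors' implementation: a small network is pre-trained on abundant, cheap "virtual prototype" data (projectile range without air drag) and then only its output layer is fine-tuned on a handful of "physical" samples (range with a crude drag correction). The drag model, network size, and all constants are assumptions made for this sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
G = 9.81  # gravitational acceleration, m/s^2

def range_no_drag(v, theta):
    """Ideal projectile range on flat ground (the cheap virtual prototype)."""
    return v**2 * np.sin(2.0 * theta) / G

def range_with_drag(v, theta, k=0.005):
    """Stand-in for measured range: ideal range shrunk by a crude drag factor
    (an assumption for this sketch, not the paper's physical model)."""
    return range_no_drag(v, theta) * (1.0 - k * v)

# --- abundant virtual data for pre-training ---
v = rng.uniform(5.0, 30.0, (2000, 1))
theta = rng.uniform(0.1, 1.4, (2000, 1))
X = np.hstack([v, theta])
y = range_no_drag(v, theta)

# standardize inputs/outputs using the pre-training statistics
x_mu, x_sd = X.mean(0), X.std(0)
y_mu, y_sd = y.mean(), y.std()
Xn, yn = (X - x_mu) / x_sd, (y - y_mu) / y_sd

# --- tiny 2-32-1 tanh network trained by full-batch gradient descent ---
W1 = rng.normal(0.0, 0.5, (2, 32)); b1 = np.zeros(32)
W2 = rng.normal(0.0, 0.5, (32, 1)); b2 = np.zeros(1)

def forward(Xn):
    H = np.tanh(Xn @ W1 + b1)
    return H, H @ W2 + b2

def train(Xn, yn, steps, lr, freeze_hidden):
    """Gradient descent; with freeze_hidden=True only the output layer is
    updated, mimicking frozen vs. trainable layers in transfer learning."""
    global W1, b1, W2, b2
    n = len(Xn)
    for _ in range(steps):
        H, pred = forward(Xn)
        err = pred - yn                       # (n, 1) residuals
        gW2, gb2 = H.T @ err / n, err.mean(0)
        if not freeze_hidden:
            dH = (err @ W2.T) * (1.0 - H**2)  # back-prop through tanh
            W1 -= lr * (Xn.T @ dH / n); b1 -= lr * dH.mean(0)
        W2 -= lr * gW2; b2 -= lr * gb2

# 1) pre-train on the cheap virtual data
train(Xn, yn, steps=4000, lr=0.05, freeze_hidden=False)

def drag_set(n):
    """Draw n 'experimental' samples that include the drag effect."""
    v = rng.uniform(5.0, 30.0, (n, 1))
    theta = rng.uniform(0.1, 1.4, (n, 1))
    Xd = np.hstack([v, theta])
    return (Xd - x_mu) / x_sd, (range_with_drag(v, theta) - y_mu) / y_sd

Xf, yf = drag_set(40)    # limited physical data for fine-tuning
Xt, yt = drag_set(400)   # held-out drag test set

def mse(Xn, yn):
    return float(np.mean((forward(Xn)[1] - yn) ** 2))

mse_pre = mse(Xt, yt)    # pre-trained (drag-free) model on drag data
# 2) fine-tune only the output layer on the small physical dataset
train(Xf, yf, steps=2000, lr=0.05, freeze_hidden=True)
mse_ft = mse(Xt, yt)     # fine-tuning should reduce error on drag data
```

The design choice mirrors the abstract's premise: the expensive layers of knowledge (the shared input-to-feature mapping) come almost free from the virtual prototype, and the few physical measurements are spent only on correcting the output mapping.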

Information

Type
Article
Creative Commons
CC BY-NC-ND
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Copyright
© The Author(s) 2025
Figure 1. Prototype selection matrix (adapted from Ulrich and Eppinger (2016))

Figure 2. Human-in-the-loop machine learning (adapted from Mosqueira-Rey et al. (2023))

Figure 3. Proposed framework

Figure 4. Learning strategy (blue: frozen layers; orange: trainable layers)

Figure 5. Projectile motion (left: without air drag; right: with air drag)

Table 1. Proposed method steps for projectile motion

Figure 6. Model performance with different datasets

Table 2. SHD for models with different angles in the first experiment

Table 3. Comparison of the two models