
Can large language models support machine learning implementation in product development? A comparative analysis and perspectives

Published online by Cambridge University Press:  27 August 2025

Sebastian Sonntag*
Affiliation:
Universität Duisburg-Essen, Germany
Janosch Luttmer
Affiliation:
Universität Duisburg-Essen, Germany
Arun Nagarajah
Affiliation:
Universität Duisburg-Essen, Germany

Abstract:

Recent advancements in machine learning (ML) offer substantial potential for enhancing product development. However, adoption in companies remains limited: framing domain-specific problems as ML tasks and selecting suitable ML algorithms require expertise that many companies lack. This study investigates the use of large language models (LLMs) as recommender systems for facilitating ML implementation. Using a dataset derived from peer-reviewed publications, the LLMs were evaluated for their ability to recommend ML algorithms for product development-related problems. The results indicate moderate success, with GPT-4o achieving the highest accuracy, recommending suitable ML algorithms in 61% of cases. Key limitations include inaccurate recommendations and difficulty in identifying multiple sub-problems. Future research will explore prompt engineering to improve performance.

Information

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Copyright
© The Author(s) 2025

Figure 1. Required translation of the product-related problem formulation into the corresponding ML-related problem formulation


Figure 2. Study design


Figure 3. Review process


Figure 4. Prompt design based on the information provided in the abstracts


Table 1. Comparison of the achieved TF- and IA-rates of the LLMs


Table 2. Comparison of TF-rates for zero-shot and one-shot approaches


Figure 5. Classes of failures