
Enhancing designer creativity through human–AI co-ideation: a co-creation framework for design ideation with custom GPT

Published online by Cambridge University Press:  09 September 2025

Pan Wang
Affiliation: Industrial Design Engineering, Delft University of Technology, Netherlands
Yash Khinvasara
Affiliation: Industrial Design Engineering, Delft University of Technology, Netherlands
Geesje Josine Creijghton
Affiliation: Industrial Design Engineering, Delft University of Technology, Netherlands
Tessa Scholing
Affiliation: Industrial Design Engineering, Delft University of Technology, Netherlands
Yihua Wang
Affiliation: Industrial Design Engineering, Delft University of Technology, Netherlands
Zhibin Zhou
Affiliation: School of Design, The Hong Kong Polytechnic University, Hong Kong, China
Peter R.N. Childs
Affiliation: Dyson School of Design Engineering, Imperial College London, London, UK
Yuan Yin*
Affiliation: Dyson School of Design Engineering, Imperial College London, London, UK
*Corresponding author: Yuan Yin; Email: y.yin19@ic.ac.uk

Abstract

The emergence of large language models (LLMs) provides an opportunity for AI to operate as a co-ideation partner during the creative process. However, designers currently lack a comprehensive methodology for engaging in co-ideation with LLMs, and few frameworks describe the process of co-ideation between a designer and ChatGPT. This research therefore aimed to explore how LLMs can act as codesigners and influence the creative ideation processes of industrial designers, and whether a designer's ideation performance could be improved by employing the proposed framework for co-ideation with a custom GPT. A survey was first conducted to detect how LLMs influenced the creative ideation processes of industrial designers and to understand the problems that designers face when using ChatGPT to ideate. A framework based on context mapping was then proposed to guide co-ideation between humans and a custom GPT (named Co-Ideator). Finally, a design case study, followed by a survey and an interview, was conducted to evaluate the ideation performance of the custom GPT and framework against traditional ideation methods; the effect of the custom GPT on co-ideation was also compared with a condition in which no artificial intelligence (AI) was used. The findings indicated that when users co-ideated with the custom GPT, the novelty and quality of their ideas exceeded those produced with traditional ideation.

Information

Type
Research Article
Creative Commons
Creative Commons License - CC BY
This is an Open Access article, distributed under the terms of the Creative Commons Attribution licence (http://creativecommons.org/licenses/by/4.0), which permits unrestricted re-use, distribution and reproduction, provided the original article is properly cited.
Copyright
© The Author(s), 2025. Published by Cambridge University Press

Introduction

Ideation is a critical phase of the design process. It is a creative process that involves exploring the solution space for a given design problem. Although various ideation methodologies have been proposed to enhance this process, it remains a challenge for designers to effectively generate novel and useful ideas (Chou, Reference Chou2014; Daly et al., Reference Daly, Seifert, Yilmaz and Gonzalez2016). Co-ideation allows designers to employ empathy and collaborate with other stakeholders to enhance creativity (Sanders and Stappers, Reference Sanders and Stappers2008; Talgorn et al., Reference Talgorn, Hendriks, Geurts and Bakker2022). Moreover, by involving stakeholders throughout the creative process, designers can discover potential solutions that may otherwise remain hidden.

Artificial intelligence (AI) can augment human creativity (Miller, Reference Miller2019; Anantrasirichai and Bull, Reference Anantrasirichai and Bull2022; Barrat, Reference Barrat2023; Grilli and Pedota, Reference Grilli and Pedota2024). For example, DesignAid is a generative AI tool that asks users to provide short phrases describing their ideas and then searches for related textual ideas (Cai et al., Reference Cai, Rick, Heyman, Zhang, Filipowicz, Hong and Malone2023). B-Link uses a network-based computational model to guide thinking processes and aid creativity (Chen et al., Reference Chen, Shi, Han and Childs2017). AskNatureNet draws on bio-inspired design knowledge to support divergent thinking (Chen et al., Reference Chen, Cai, Jiang, Luo, Sun, Childs and Zuo2024). FashionQ provides three interactive visualization tools (StyleQ, TrendQ, and MergeQ) to support ideation in fashion design (Jeon et al., Reference Jeon, Jin, Shih and Han2021).

The emergence of large language models (LLMs) provides an opportunity for AI to operate as a co-ideation partner during the creative process (Freese, Reference Freese2023; Li et al., Reference Li, Li and Su2024; Muller et al., Reference Muller, Houde, Gonzalez, Brimijoin, Ross, Moran and Weisz2024). With the help of textual prompts, people without programming or computer science expertise can communicate with AI across multiple languages (Shah et al., Reference Shah, Smith and Vargas-Hernandez2003). It is therefore worth investigating how LLMs influence the creative ideation processes of designers when they are positioned as codesigners. Take ChatGPT, a representative LLM: existing research has found that designers already use ChatGPT to generate ideas (Hamilton et al., Reference Hamilton, Brisco and Grierson2024). For example, ChatGPT can be used to generate innovative concepts (Filippi, Reference Filippi2023) and develop more novel ideas (Hamilton et al., Reference Hamilton, Brisco and Grierson2024). ChatGPT can also be used to generate personas and simulate interviews with fictional users (Kocaballi, Reference Kocaballi2023).

However, designers currently lack a comprehensive methodology for engaging in co-ideation with LLMs (He et al., Reference He, Stankovic, Niforatos and Kortuem2024; Baltà-Salvador et al., Reference Baltà-Salvador, El-Madafri, Brasó-Vives and Peña2025; Wang et al., Reference Wang, Lu, Campanyà, Chen and Chen2025). Moreover, designers face challenges when they use LLMs to generate solutions for new concepts (Zhou et al., Reference Zhou, Li and Yu2024; Chen et al., Reference Chen, Xia, Jiang, Tan, Sun and Zhang2025; Shin et al., Reference Shin, Polyanskaya, Lucero and Oulasvirta2025). In addition, existing research has focused on how AI can be used as a tool rather than as an active co-ideation partner (Lin et al., Reference Lin, Andersen, Schmidt, Schoenmakers, Hofmeyer, Pauwels and IJsselsteijn2024; Ma, Reference Ma2024; Sońta-Drączkowska et al., Reference Sońta-Drączkowska, Cichosz, Klimas and Pilewicz2024). Few frameworks describe the process of co-ideation between a designer and ChatGPT.

To address these gaps, this research aimed to explore how LLMs, when positioned as codesigners, influence the creative ideation processes of industrial designers, and whether a designer's ideation performance could be improved by employing the proposed framework for co-ideation with ChatGPT. Specifically, (i) the research sought to understand the problems that designers face when using ChatGPT to ideate; (ii) the study proposed a framework to guide co-ideation between humans and a custom GPT; (iii) Co-Ideator, a custom GPT, was developed based on the proposed framework; and (iv) an evaluation was conducted to compare the ideation performance of the framework and the custom GPT (Co-Ideator) with traditional ideation methods.

To address the research aim, a survey was first conducted to understand the problems that designers face when using ChatGPT to ideate. Then, a comprehensive framework and a custom GPT (Co-Ideator), which together describe the co-ideation between designers and the custom GPT, were proposed. A design case study was then carried out to compare the performance of the proposed framework (Co-Ideator) with traditional ideation methods, followed by a survey and an interview. Each idea was measured using four criteria: novelty, quantity, variety, and quality. This study hypothesized that a designer who co-ideates with the custom GPT according to this framework performs better than a designer who ideates individually without any additional means.

Literature review

Existing literature in the domain of ideation and co-ideation was reviewed, including recent developments such as GPT-4o, how design methods have evolved, what new dimensions AI brings to co-ideation, and the interplay between AI and designers, especially when LLMs are used as a tool rather than a collaborative partner.

Ideation approaches

Ideation is defined as the process of generating an initial idea before that idea is conceptualized, which may be useful for attaining desired states or outcomes (Briggs and Reinig, Reference Briggs and Reinig2007; Briggs and Reinig, Reference Briggs and Reinig2010; Gonçalves and Cash, Reference Gonçalves and Cash2021). Based on the level of computer automation (Chen et al., Reference Chen, Wang, Dong, Shi, Han, Guo, Childs, Xiao and Li2019), three kinds of human–AI collaboration methods were summarized: 1) traditional, 2) program-based, and 3) data-driven ideation.

Traditional ideation methods are the least automated category and primarily rely on design principles and methodologies rooted in design thinking (Kim and Ryu, Reference Kim and Ryu2014; Verganti et al., Reference Verganti, Dell’Era and Swan2021). This category includes design heuristics, which focus on identifying ideation strategies used during problem-solving of engineering design problems (Daly et al., Reference Daly, Yilmaz, Christian, Seifert and Gonzalez2012). Albers et al. (Reference Albers, Bursac and Rapp2016) presented creativity from the perspective of product generation engineering (PGE). They summarized the variation types of “carryover,” “embodiment,” and “principle variation” as analogy-based mechanisms that can be used to create new technical products. Goel and Bhatta (Reference Goel and Bhatta2004) introduced model-based analogy (MBA), describing how the class of generic teleological mechanisms (GTMs) in design is learned, used, and applied; their research also explained how Function-Behaviour-Structure (FBS) models enable the learning of GTMs. Another approach was proposed by Vattam et al. (Reference Vattam, Wiltgen, Helms, Goel and Yen2011), who created an interactive knowledge-based design environment (DANE). DANE can be applied to biological systems, where a design challenge is addressed through biology-related functions. Wolverton and Hayes-Roth (Reference Wolverton and Hayes-Roth1994) presented a cross-domain semantic network in which users can capture distant analogies; the ability to recall distant analogies is associated with creativity. Another analogy-based technique (the RADAR model) was proposed by Crean and O’Donoghue (Reference Crean and O’Donoghue2002), in which semantically related and unrelated domains can be retrieved. Christensen and Schunn (Reference Christensen and Schunn2007) identified three functions of analogies in engineering design: problem identification, problem-solving, and concept explanation.

Program-based ideation facilitates the generation of ideas and supports the ideation process through computer-aided approaches. Self et al. (Reference Self, Evans and Kim2016) demonstrated that digital sketching tools significantly enhanced the level of attention during conceptual design compared to traditional pen-and-paper sketching. Mohan et al. (Reference Mohan, Shah, Narsale and Khorshidi2014) devised an instrument capable of capturing the sequence of search and solution strategies in relation to ideation states, and provided various ideation methods such as the Theory of Inventive Problem Solving (TRIZ) and biomimetics. Huo et al. (Reference Huo, Vinayak and Ramani2017) developed Window-Shaping, which employed a tangible mixed-reality (MR) interaction metaphor to facilitate design ideation; the method can directly create three-dimensional shapes of physical objects. Wang et al. (Reference Wang, Ohsawa, Hu and Xu2014) proposed a context-awareness systematic approach for cultivating, constructing, integrating, and evaluating ideas within a dynamic discovery process. Chakrabarti et al. (Reference Chakrabarti, Sarkar, Leelavathamma and Nataraju2005) introduced an idea-generation model that employed analogies from both natural and artificial domains; this model was implemented in software to automate the search for relevant ideas from databases in order to solve specific problems. However, the data available to this software were limited, and the ideation process was elusive. Han’s Combinator is an image-stimuli computer program for ideation designed on the basis of combinational creativity theory (Han et al., Reference Han, Shi, Chen and Childs2018). The Combinator aimed to assist both novice and experienced designers in generating ideas by presenting associated images in an overlapped manner, thereby addressing the challenges faced in rapidly evolving product design markets. Within the software, users can input a keyword and specify the number of nouns they wish to combine and the manner in which the images should be merged. The program then automatically generates combined images to inspire designers according to these settings. However, the inspirational potential of such a combination approach was limited, since the images are merely juxtaposed rather than synthesized.

Data-driven ideation approaches have the highest level of automation and rely exclusively on data analysis to complete the creative process. These approaches emphasize the central role of data in the design of creative tools (Chan et al., Reference Chan, Fu, Schunn, Cagan, Wood and Kotovsky2011). For example, Chan et al. (Reference Chan, Fu, Schunn, Cagan, Wood and Kotovsky2011) conducted a case study using a U.S. patent database to demonstrate the effectiveness of incorporating analogical data in innovative design. Similarly, Pinel et al. (Reference Pinel, Varshney and Bhattacharjya2014) and Varshney et al. (Reference Varshney, Pinel, Varshney, Bhattacharjya, Schörgendorfer and Chee2019) proposed a big data approach for a computational creativity system, focusing on the generation of culinary recipes and menus. Lin et al. (Reference Lin, Yeh, Hung and Chang2013) developed the personalized creativity learning system (PCLS), which utilized data mining techniques to provide personalized learning paths for optimizing creativity performance. Ojha et al. (Reference Ojha, Lee and Lee2015) developed I-get, which utilized a data mining algorithm, fast image search in huge databases (FISH), to generate perceptual pictorial metaphors and novel ideas.

Various bio-inspired tools have also been proposed. Chen et al. (Reference Chen, Cai, Jiang, Luo, Sun, Childs and Zuo2024) developed AskNatureNet, which supports divergent thinking by introducing analogies from nature. Helfman Cohen et al. (Reference Helfman Cohen, Reich and Greenberg2014) developed a biomimetic ideation system that supports divergent thinking through structured innovation methods such as TRIZ. These tools work by injecting external stimuli to break cognitive patterns. Context mapping differs by prompting internal reflection through conversational and context-aware probing. Instead of offering ready-made analogies, it pushes designers to reframe their assumptions mid-process.

Collective creativity by co-ideation

Co-ideation is a subset of co-creation. Co-creation refers to collective creativity shared by two or more people (Sanders and Stappers, Reference Sanders and Stappers2008). Coddington et al. (Reference Coddington, Giang, Graham, Prince, Mattila, Thong and Kocsis2016) also described co-creation as “a term that traverses a philosophy, method, and mindset of collective creativity” (Coddington et al., Reference Coddington, Giang, Graham, Prince, Mattila, Thong and Kocsis2016, p. 1). The core principle of co-ideation is that everyone is inherently creative and can contribute during the creative process (Russo-Spena and Mele, Reference Russo-Spena and Mele2012; Boudhraa et al., Reference Boudhraa, Dorta, Milovanovic and Pierini2021; Talgorn et al., Reference Talgorn, Hendriks, Geurts and Bakker2022). As design has become increasingly user centric over the years, co-ideation has become prominent. Co-ideation specifically addresses the stage of the design process where ideas are generated for a particular design problem, and humans leverage diverse insights and experiences to foster more creative outcomes (Westerlund et al., Reference Westerlund, Leminen and Habib2018). Common co-ideation techniques include brainstorming sessions, context mapping methods, focus groups, toolkits, and participatory story-building. Out-of-the-box thinking is also encouraged; during this process, idea enrichment can be achieved through participatory story-building, which includes exploring user needs, emotions, and usage contexts (Talgorn et al., Reference Talgorn, Hendriks, Geurts and Bakker2022).

As for human–AI co-ideation, researchers have also found that it can contribute to creative processes (Shin et al., Reference Shin, Koch, Lucero, Dalsgaard and Mackay2023; Baltà-Salvador et al., Reference Baltà-Salvador, El-Madafri, Brasó-Vives and Peña2025). Chiou et al. (Reference Chiou, Hung, Liang and Wang2023) explained this positive effect as the combination of human and AI input in the design process contributing to a fresh form of self-expression and communication. Yu-Han and Chun-Ching (Reference Yu-Han and Chun-Ching2023) reported a positive correlation between participants’ confidence in the information provided by generative AI and an increase in the number of nonredundant ideas generated through co-ideation. Shen et al. (Reference Shen, Shen, Wu and Zhang2025) proposed IdeationWeb, a human–AI co-ideation framework aimed at tracking the evolution of design ideas in human–AI co-creation.

The integration of AI in the design field and the impact of AI on the designers’ performances

Design methods integrate cross-disciplinary knowledge and evolve with technology (Cross, Reference Cross2006). Existing design methods prioritize problem-space exploration, adapt iteratively, and increasingly embrace human-centered approaches (Garvin, Reference Garvin2021). The digital era has transformed design methods to enable faster iterations and more intricate solutions, which is essential for successful design. Giaccardi and Redström (Reference Giaccardi and Redström2020) thus envision that, in the future, design will be a co-creative process with AI, fostering a reciprocal relationship in which both designers and intelligent systems shape outcomes. Verganti et al. (Reference Verganti, Vendraminelli and Iansiti2020) argued that this collaboration can reinforce design fundamentals, as AI can eliminate previously limiting factors such as scale, scope, and learning. They advocated a shift from human-centric models to including AI as an active participant in design. This transformation requires updated methodologies and theoretical frameworks that recognize the collaborative interplay between humans and technology.

LLMs have seen significant advancements, particularly in their application to facilitating ideation in human–AI collaboration. With the increasing popularity of LLMs, their potential to augment ideation frameworks in the design process has been highlighted (Ekvall and Winnberg, Reference Ekvall and Winnberg2023; Open AI, 2023; Shaer et al., Reference Shaer, Cooper, Mokryn, Kun and Ben Shoshan2024). Research on GPT-4o and its predecessors showed that LLMs can assist in generating and refining ideas, which is essential for creative processes such as design ideation (Shin et al., Reference Shin, Choi, Cho, Admoni, Lim, Kim, Hong, Lee and Kim2024; Xu et al., Reference Xu, Yin, Gu, Mar, Zhang, E and Dow2024). LLMs have been used in various applications to enhance productivity, generate new content, and provide insights, thereby supporting the ideation process in design and other creative fields.

Zhu and Luo (Reference Zhu and Luo2023) explored the use of GPT models for automating early-stage design concept generation. They highlighted how GPT models can effectively leverage knowledge and reason from textual data to produce novel and useful design concepts, thereby enhancing idea generation and fostering greater innovation. Kocaballi (Reference Kocaballi2023) investigated the application of ChatGPT in human-centered design processes by using ChatGPT to perform multiple design tasks (generating user personas, simulating interviews, creating new design concepts, and evaluating user experiences). The results indicated that ChatGPT can effectively contribute to these areas and has the potential to serve as a valuable tool in the designer’s toolkit by enhancing the ideation framework. Shin et al. (Reference Shin, Koch, Lucero, Dalsgaard and Mackay2023) further supported the role of LLMs in the ideation process. Their study examined how LLMs can be integrated into brainwriting sessions to improve the quality and quantity of generated ideas. The results showed that the use of LLMs helped mitigate common brainstorming barriers by facilitating a more inclusive and parallel idea-generation process and enriching the overall ideation framework. Furthermore, ChatGPT has acted as a brainstorming partner to human designers and allowed humans to reflect on multiple aspects of the ideation process, which can lead to higher productivity in co-creation (Liu et al., Reference Liu, Han, Ma, Zhang, Yang, Tian, He, Li, He, Liu, Wu, Zhao, Zhu, Li, Qiang, Shen, Liu and Ge2023). Filippi (Reference Filippi2023) examined the possible differences in and impacts on the usefulness, novelty, and variety of concepts influenced by the interactions between participants and ChatGPT 3.5. The results showed that ChatGPT proved beneficial for the novelty of ideas.

LLMs have also been compared to traditional search engines, which are the most accessible ideation tools for the public (Johansen, Reference Johansen2023). Zheng et al. (Reference Zheng, Wang, Zhou and Koh2024) compared the generation of new product ideas using ChatGPT versus Google Search and found that individuals who used ChatGPT generated new ideas more efficiently and with higher novelty and diversity. Zheng et al. (Reference Zheng, Wang, Zhou and Koh2024) therefore also proposed that improving the way humans interact with AI can lead to better and more creative results.

Study 1: understand the problems designers face when using ChatGPT to ideate

Study 1 is a preliminary study which aims to investigate the problems that designers face when using ChatGPT in the ideation processes through a survey.

Protocol of study 1

Sixty-four participants were recruited. All respondents were master’s students in industrial design, and all had experience in using ChatGPT for ideation. In this study, participants needed to complete a one-question survey: “What is/are the major challenge(s) that occur when you use ChatGPT to generate ideas?” (Figure 1). Notably, when participants were selected, they were asked whether they had experience in co-generating ideas with ChatGPT. Against this background, we directly asked the question “What is/are the major challenge(s) that occur when you use ChatGPT to generate ideas?” However, we acknowledge that some participants may have misunderstood the question and not interpreted it as referring to the challenges that occur when co-generating ideas with ChatGPT, which may reduce the reliability of the results.

Figure 1. The question of the one-question survey.

Results of study 1

The results from the survey provide insightful data on the challenges that designers face when co-ideating with ChatGPT (Table 1). Of the participants, 52.38% reported that ChatGPT generates repetitive ideas: although ChatGPT can generate a large number of suggestions, the diversity of concepts was constrained, potentially hindering the creative process. A further 43.75% reported that ChatGPT generated unoriginal ideas, suggesting that the AI frequently falls short of producing novel solutions. In addition, 42.19% of respondents reported that ChatGPT cannot comprehend particular issues and has limited capacity to customize responses to particular and complex design challenges.

Table 1. The outcome of the survey on “What is/are the major challenge(s) that occur when you use ChatGPT to generate ideas?”

When using ChatGPT to generate ideas, participants also reported that ChatGPT may generate superficial solutions (37.50%); it tends to provide generic rather than deep and thoughtful suggestions. This limitation could affect the quality and depth of ideation, making it essential for designers to substantially refine and build upon the ChatGPT-generated outputs. Unimaginative and irrelevant ideas were also notable concerns, reported by 26.56% and 23.44% of respondents, respectively: ChatGPT was occasionally unable to align with the designer’s creative vision and context-specific needs. Design fixation (21.88%) and the perception of the tool being boring (12.50%) were less frequently mentioned but still relevant, limiting ChatGPT’s ability to support sustained engagement and creative exploration.

Discussion of Study 1

These findings of Study 1 highlighted critical areas for improvement in integrating AI tools such as ChatGPT into the ideation stage of the design process. The results suggested that, in co-ideation processes, AI needs to be developed with more advanced algorithms that encourage originality and relevance, and its contextual understanding also needs to be improved. Another area for improvement is feedback incorporation, which can be used to refine and diversify the output. By tackling these issues, LLMs can better support and enhance the creative capacities of designers.

Human–AI co-creation framework for design ideation

Based on the findings from Study 1, it was realized that the standard ChatGPT may not work well in human–AI co-ideation processes and that a custom GPT would be better suited to such processes. A custom GPT, in the context of OpenAI’s GPT Builder, refers to a specialized version of the Generative Pre-trained Transformer designed for a specific application or to fulfill particular user requirements. The GPT Builder enables users to tailor the model’s responses, behaviors, and knowledge areas, creating a bespoke AI tool optimized for their unique use cases. These advancements offer opportunities to refine and customize the user experience, thereby facilitating smoother and more effective collaboration between humans and AI and allowing greater control over the interaction. To develop a custom GPT that can be used for human–AI co-ideation, researchers first need to consider the workflow (framework) of such custom GPT tools.
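For readers who want a concrete sense of how instruction-driven behavior of this kind can be reproduced outside the GPT Builder interface, the sketch below approximates a Co-Ideator-style assistant with the OpenAI Chat Completions API. It is illustrative only: the instruction text, function name, and model name are assumptions made for this example, not the configuration used in this study.

```python
# Illustrative sketch only: a custom GPT is configured in OpenAI's GPT Builder UI,
# but the same idea (a model steered by standing instructions) can be approximated
# with the Chat Completions API. Instruction text and model name are placeholders.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

CO_IDEATOR_INSTRUCTIONS = (
    "You are Co-Ideator, a synthetic design partner. Interview the designer about "
    "a design problem, probing present, past, and future experiences with short "
    "'how' and 'why' questions, and keep every reply under 40 words."
)

def ask_co_ideator(history: list[dict], user_message: str) -> str:
    """Send one conversational turn, keeping the standing instructions in the system role."""
    messages = [{"role": "system", "content": CO_IDEATOR_INSTRUCTIONS}] + history
    messages.append({"role": "user", "content": user_message})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

# Example first turn:
# reply = ask_co_ideator([], "Let's co-ideate designs for a household trash can.")
```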

The human–AI co-creation framework for design ideation was thus proposed (Figure 2). The detailed explanation of the framework is given in Sections “Proposed framework for co-ideation with custom GPT Co-Ideator” and “How to use Co-Ideator in a co-ideation process”. This framework was implemented in a custom GPT (Co-Ideator), with a focus on improving the interactive process of human–AI collaboration to boost human creativity. In the following sections, the development process of the framework and Co-Ideator is explained.

Figure 2. The Co-Ideator workflow starts with thought-provoking questions in the order of present–past–future of the designers’ experience, perception, and expectations toward the product/service. A set of ideas is produced from the constructive dialogue between the human designer and (AI) Co-Ideator.

Framework for co-ideation: adopting context mapping as a method

Traditional design methodologies need to be adapted to accommodate the dynamic interaction of human–AI co-ideation. Visser et al. (Reference Visser, Stappers, Van der Lugt and Sanders2005) emphasized the path of expression in design, where the method of context mapping plays a crucial role. By engaging designers and users in a collaborative process, it unveils deep insights into user experiences, emotions, and needs by tapping into the users’ present and past experiences. This technique facilitates a bridge between abstract ideas and concrete solutions, allowing designers to navigate through the complexities of human-centered design. Through generative research and participatory practices, context mapping enriches the ideation process, ultimately leading to innovative and empathetic design solutions that resonate with users’ lives and aspirations. The method is deeply rooted in the idea of finding latent needs from the user’s experiences (Visser et al., Reference Visser, Stappers, Van der Lugt and Sanders2005). The method has repeatedly proven to be effective in extracting rich information from users and developing insights and design directions.

Figure 3 shows a design pathway (context mapping) proposed by Visser et al. (Reference Visser, Stappers, Van der Lugt and Sanders2005). Context mapping treats present experiences as a bridge between the past and future of the participating user. Specifically, context mapping is about exploring and understanding the full domain of people’s experiences, including people’s memories (past), current experiences (present), and dreams or aspirations (future). Context mapping starts with inquiring about present experiences; people then move to past experiences and, after a short bridging probe about the present, jump to future desires to unlock the deep desires of the user. In other words, context mapping emphasizes that experience is not just what people explicitly recall or state: it is a combination of what they remember, what they currently feel, and what they hope for. Present experiences are a bridge between the past and future of the user, and context mapping aims to access all these layers (past, present, future) to inspire the design. For example, in the flow of the interview, the interviewer uses an interview script to maintain the path of expression of the users’ experience. This path of context mapping was innovatively reimagined in the proposed framework for human–AI co-ideation (Figure 2). Co-Ideator takes the role of a synthetic design partner and asks the designer important questions about the design brief and the designer’s opinion, experience, and expectations in the order present–past–future (adopted from the context mapping codesign process). This framework ensures that the designer thinks beyond superficial design solutions by making the AI model push the imaginative boundaries of the designer through critical questions.

Figure 3. The path of expression. The gray context (the gray lines and letters) is about exploring and understanding the full domain of people’s experiences, including people’s memories (past), current experiences (present), and dreams or aspirations (future). Specifically, context mapping starts with enquiring about present experiences; people then move to past experiences and, after a short bridging probe about the present, jump to future desires to unlock the deep desires of the user. In other words, context mapping emphasizes that experience is not just what people explicitly recall or state: it is a combination of what they remember, what they currently feel, and what they hope for. Present experiences are a bridge between the past and future of the user, and context mapping aims to access all these layers (past, present, future) to inspire the design. For example, in the flow of the interview, the interviewer uses an interview script to maintain the path of expression of the users’ experience. This path of context mapping was innovatively re-imagined in the proposed framework for human-AI co-ideation. The blue arrow is a visual aid to explain the image in a linear manner.

Proposed framework for co-ideation with custom GPT Co-Ideator

Based on this proposed context-mapping framework for co-ideation, a custom GPT took the role of Co-Ideator alongside human designers and acted as the interviewer guiding them. This interviewer role can stimulate the creativity of designers and also aligns with the principles of participatory design, fostering a more immersive and collaborative environment (Figure 4). Context mapping as a method involves an interview with the participants and is a form of participatory design. Co-Ideator is the synthetic design partner and engages in dialogue with the human designer; the framework is a design activity with one human and one AI designer. In any participatory design process, there is an interviewer, and, as in context mapping, the interviewer asks provocative questions of the participant (who is participating in the design process). The custom GPT, Co-Ideator, was thus developed (Figure 5). Note that Figure 5 is an example of the interaction between a user and Co-Ideator, whereas Figure 2 is a general summary and visual representation of the co-ideation workflow. How to use Co-Ideator in a co-ideation process is explained in detail in Section “How to use Co-Ideator in a co-ideation process”.

Figure 4. Adopting the interviewing method from context mapping into our framework.

Figure 5. Workflow of Co-Ideator. (a) Home page. (b) Initial chatting page. The Co-Ideator was instructed to follow a sequence of probing questions to provoke thoughts from the designer. (c) Critical questions asked by Co-Ideator. After applying the prototype of the prompt to the custom GPT, Co-Ideator can assist designers in delving deeper into their ideas by asking “how” and “why” questions to uncover implicit or latent thoughts. For example, a designer might start with “I find the material of the dustbin too weak…” The Co-Ideator can reply “Why do you find the material weak?” The designer may then answer “It can get knocked over easily and trash can fall out.” After that, Co-Ideator poses a new question: “How do you think you can make it stronger?” (d) Visualized results.

First, a prototype of the prompt was developed. Following the results of our survey in Study 1 (Table 1) and learnings from prompt engineering research (Section “Framework for co-ideation: adopting context mapping as a method”), the prototype of the prompt was written to adapt the context-mapping method to the co-ideation process. The prompt consisted of a framework for ideation as well as a novelty scale. The Co-Ideator was instructed to follow a sequence of probing questions to provoke thoughts from the designer. After applying the prototype of the prompt to the custom GPT, Co-Ideator can assist designers in delving deeper into their ideas by asking “how” and “why” questions to uncover implicit or latent thoughts. For example, a designer might start with “I find the material of the dustbin too weak…” The Co-Ideator can reply “Why do you find the material weak?” The designer may then answer, “It can get knocked over easily and trash can fall out.” After that, Co-Ideator poses a new question: “How do you think you can make it stronger?”

Co-Ideator is a custom GPT (Figure 6). It is constructed utilizing a comprehensive prototype of the prompt, which is essentially an extensive set of instructions and guidelines crafted to shape the GPT’s behavior, response style, and domain focus. The prototype of the prompt was meticulously designed to encapsulate the desired functionalities, interaction styles, and domain knowledge that the Co-Ideator should exhibit. By converting this prototype of the prompt into a set of structured instructions, the underlying GPT model was fine-tuned to align with the envisioned role of Co-Ideator as a collaborative, creative, and critical-thinking partner in the ideation and design processes.

Figure 6. Mechanism of Co-Ideator.

Co-Ideator operates as an AI-powered collaborator that engages with users in the exploration and refinement of ideas or design problems. It acts as an insightful interviewer and a thought-provoking partner, adept at challenging concepts to foster creative friction and deeper understanding. Through a series of targeted questions and feedback, the Co-Ideator assists users in dissecting and expanding their ideas, providing a structured framework for creativity and problem-solving. Its functionalities are designed to stimulate discussion, encourage lateral thinking, and guide users toward novel and feasible solutions, all while maintaining a user-centric and empathetic approach to collaboration.

How to use Co-Ideator in a co-ideation process

Co-Ideator is a custom GPT that helps the user explore the topic in greater depth. How to use Co-Ideator in a co-ideation process is displayed in Figure 5. The process starts with questions that help users identify what they do not like about the product and why they think it needs to be changed. At some point, users proceed to discuss ideas. The goal is to generate five ideas that are unique, diverse, feasible, and relevant with the help of Co-Ideator.

After loading the Co-Ideator, the standard starting point was the question “Let’s start with you describing your design problem in 80–100 words.” (Question 1). The custom GPT then followed these rules: if the response to Question 1 satisfied Condition 1 (the user poses a question), Co-Ideator answers the question and then asks a follow-up question that acts as a probe, using open-ended “how” and “why” questions; if the response to Question 1 satisfied Condition 2 (the user answers Co-Ideator’s question), the answer is validated and a follow-up question is asked.

Exactly which questions are asked depends on the design task and problem, but the questions generated by the custom GPT should follow the rules below (after the user describes the problem); a condensed sketch of how these rules could be combined into a single instruction block is given after the lists. One question is asked at a time:

  1. Ask about present experiences revolving around the product. Pose 4–6 follow-up “why” questions that emerge from my answer. At some point, move to my past experiences.

  2. Ask about my past experiences. Ask 2–3 questions about the past, with the same follow-up logic emerging from the first question.

  3. Ask about what future I would like to see for the product. Throw in a speculative situation like “what if…” and ask me to answer in about 20–30 words.

All your responses can be a maximum of 40 words.

There are also some rules of thumb for Co-Ideator:

  1. If Co-Ideator senses saturation in the user’s responses, ask if users would like to visualize the idea.

  2. When Co-Ideator senses that the user is going around in circles or is unable to move on from a certain topic/idea, ask a tangential what-if question to enable lateral thinking.

  3. Refer to the articles attached to understand the reasoning of the method we use with the user. If the user asks a tricky question or says something that is vague, refer to the documents for your next action. But do not let the user know what’s in the documents.
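As an illustration, the rules above could be condensed into a single instruction block supplied to the custom GPT. The wording below is a sketch that simply restates the published rules; the exact prompt used in Co-Ideator is not reproduced in this paper, so the phrasing is an assumption.

```python
# Illustrative only: the published Co-Ideator rules, condensed into one standing
# instruction block. The actual prompt wording used in the study is not reproduced
# here; this text is an assumption that restates the rules described above.
CO_IDEATOR_RULES = """
Opening: "Let's start with you describing your design problem in 80-100 words."
Then, one question per turn:
1. Present: ask about current experiences with the product; pose 4-6 follow-up
   "why" questions that emerge from the designer's answers.
2. Past: ask 2-3 questions about past experiences, with the same follow-up logic.
3. Future: ask what future the designer wants for the product; include one
   speculative "what if..." question to be answered in 20-30 words.
General rules:
- If the designer asks a question, answer it; if they answer yours, validate the
  answer before asking the next probe.
- Keep every response under 40 words.
- If the responses feel saturated, offer to visualize the idea.
- If the designer circles on one topic, ask a tangential what-if question.
"""
```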

Study 2: framework validation study

This custom GPT framework was then validated and used to detect the effect of AI in co-ideation processes compared with traditional design methods. This study included a case study followed by a survey and an interview.

Protocol of validation study

Participants

Forty-eight participants were recruited. All participants were master’s students of Industrial Design Engineering from the Delft University of Technology. They all have up to three years of experience in the design field and have a proper understanding of how to use ChatGPT.

Design study

The 48 participants were randomly divided into six equal groups (Table 2). Each group needed to complete one of the design tasks (design a trash can, design a kitchen blender, or design an alarm; Table 3), with or without the help of Co-Ideator.

Table 2. Distribution of participants across design tasks in treatment and control groups

Table 3. The design tasks and challenges

Participants in the control group (Groups 4, 5, and 6) needed to produce a minimum of four ideas for the given design challenge individually, without any additional help, within 30 minutes (Daly et al., Reference Daly, Seifert, Yilmaz and Gonzalez2016). Each idea included (i) a title for the solution and (ii) a short description of the solution (around 40 words) describing the idea the participant came up with. Researchers collected the generated ideas by asking all participants to submit them through an online survey. During the design task, white sheets were given to the participants to support their ideation process. The online survey with the participants’ ideas was used in the data analysis; the white sheets only served as support and were not analyzed.

The treatment group (Groups 1, 2, and 3) was asked to use Co-Ideator, powered by ChatGPT 4.0, to complete one of the design tasks and produce a minimum of four ideas for the given design challenge by ideating with the help of Co-Ideator within 30 minutes. Each idea included (i) a title for the solution and (ii) a short description of the solution (around 40 words) describing the idea the participant came up with. Researchers collected the generated ideas by asking all participants to submit them through an online survey. During the design task, white sheets were given to the participants to support their ideation process. The online survey with the participants’ ideas was used in the data analysis; the white sheets only served as support and were not analyzed. The prompts participants used to communicate with Co-Ideator were recorded by the researchers.

To start the Co-Ideator, participants first needed to copy and paste the following text into the ChatGPT textbox: “Hi there, Co-Ideator! Today we will co-ideate some designs for the following design challenge: XXXXX” (Table 3, task-related “Design tasks challenges” column). After this initial prompt, questions were asked based on the rules in Section “How to use Co-Ideator in a co-ideation process”, and participants were allowed to use the Co-Ideator freely. Additional question prompts were given as support to use as follow-up questions (Table 4).

Table 4. A list of options to further direct the conversation in an effective way by specifying the commands for ChatGPT

Survey study

After the design tasks, in addition to the survey used to upload their design ideas, each participant needed to complete a final survey reporting their experience with the experiment, the custom GPT, and the Co-Ideator. The data from this survey were used in the discussion to validate and support the results of the quantitative study. Specifically, the control group was asked to report their feelings about finishing the design tasks by themselves, for example “Did you feel like you were stuck at some point?” The treatment group was asked to report their feelings about using Co-Ideator to finish the design tasks, for example “Do you feel that ChatGPT helped you come up with more unique ideas than you would on your own?”, “Do you feel like you were stuck at some point?”, “Would you like to use the Co-Ideator in the future?”, “Do you feel ownership of the ideas you generated?”, “Do you feel like you have co-created these ideas with Co-Ideator?”, and “Did this co-creation tool make you feel more creative during the ideation phase?” All questions used a 7-point Likert scale.

Interview study

After the design and survey studies, four participants were interviewed to gather qualitative data regarding their experiences and perceptions of using the custom GPT in the co-ideation process. The interviewees were asked to engage with the chatbot and provide feedback, expectations, and their expertise on the matter of human–AI co-ideation specific to their experience. The question examples asked in the interview were shown in Figure 7.

Figure 7. Interview questions for qualitative study.

Idea assessment study

To evaluate the ideation outcomes, three raters from the Industrial Design Engineering faculty of the Delft University of Technology rated each idea on the three identified metrics (novelty, quality, and variety) using a 7-point Likert scale. They were in the master’s program “Strategic Product Design,” and all had at least four years of design experience. Prior to the rating process, all participant names were concealed to ensure anonymity. The raters did not know which ideas were generated with or without the custom GPT.

The collected data were analyzed using a standard set of metrics from the design science literature. These metrics, first introduced by Shah et al. (Reference Shah, Smith and Vargas-Hernandez2003) and subsequently adapted by Chan et al. (Reference Chan, Fu, Schunn, Cagan, Wood and Kotovsky2011), were novelty, quality, variety, and quantity (Shah et al., Reference Shah, Smith and Vargas-Hernandez2003; Nelson et al., Reference Nelson, Wilson, Rosen and Yen2009; Hernandez et al., Reference Hernandez, Shah and Smith2010; Srivathsavai et al., Reference Srivathsavai, Genco, Otto and Seepersad2010; Kerne et al., Reference Kerne, Webb, Smith, Linder, Lupfer, Qu and Damaraju2014). The novelty criterion assigns a score between “Overly explored” (1 point) and “Never seen before” (7 points), indicating how original and distinctive the design is. The functional integrity and viability of the design are assessed using the quality criterion, which assigns values ranging from “Not feasible at all” (1 point) to “Easily realized” (7 points). The variety criterion assigns a score between “Extremely limited” (1 point) and “Highly diverse” (7 points), indicating how diverse each participant’s design ideas are; a wide range of concepts with little similarity corresponds to a high diversity score. The average of the ratings received from the three raters was used to determine the final score for each criterion. As quantity is simply the number of ideas per participant, which researchers can calculate directly, it is not included in the idea assessment study.
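As a minimal illustration of this aggregation step, the sketch below averages three rater scores per idea and criterion; the idea identifiers and scores are hypothetical placeholders, not the study data.

```python
# Minimal sketch with made-up data: each idea receives one 1-7 rating per rater and
# per criterion; the final score per criterion is the mean of the three raters.
from statistics import mean

ratings = {  # idea_id -> criterion -> [rater1, rater2, rater3]
    "idea_01": {"novelty": [5, 6, 5], "quality": [4, 4, 5], "variety": [3, 4, 3]},
    "idea_02": {"novelty": [2, 3, 2], "quality": [6, 6, 5], "variety": [4, 4, 4]},
}

final_scores = {
    idea: {criterion: mean(scores) for criterion, scores in per_criterion.items()}
    for idea, per_criterion in ratings.items()
}
print(final_scores)  # e.g. {'idea_01': {'novelty': 5.33, ...}, ...}
```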

Results of Study 2

Idea assessment results

For the statistical data analysis, all the ratings provided by the three raters were used. Examples of high and low rating scores for novelty, quality, and variety criteria are shown in Supplementary Materials S1–S3.

The adjacent agreement percentages between raters were 94.7% for novelty, 94.3% for quality, and 100% for variety (Table 5). Fleiss’ Kappa was calculated to evaluate the consistency of the three raters for each metric. Fleiss’ kappa, κ (Fleiss, Reference Fleiss1971; Fleiss et al., Reference Fleiss, Levin and Paik2003), is a measure of inter-rater agreement used to determine the level of agreement between two or more raters. All of the computed Fleiss’ Kappa values were greater than 0.50, indicating that the scoring results were acceptable and reliable; the values are shown in Table 5. To assess the normality of the ratings, the Shapiro–Wilk test was used. The Shapiro–Wilk test was significant (p < 0.05), indicating that the data are not normally distributed. Therefore, the nonparametric Mann–Whitney U test was used to determine whether there is a significant difference in central tendency between the treatment and control groups, and conclusions are based on the mean rank differences. Figure 8 displays the results of the scores. Mean values were reported instead of medians because the rating distributions did not appear strongly skewed.

Table 5. Percentages of adjacent agreement and Coefficient of inter-rater reliability between 3 raters (Fleiss’ Kappa)

Figure 8. Results of the Mann–Whitney U test for novelty (Mann–Whitney U = 4306.5, Z = −2.051, p = 0.040), quality (Mann–Whitney U = 2621.5, Z = −6.278, p < 0.001), and variety (Mann–Whitney U = 4559.5, Z = −1.500, p = 0.133).
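The analysis pipeline described above can be sketched as follows, using statsmodels for Fleiss’ Kappa and SciPy for the Shapiro–Wilk and Mann–Whitney U tests; the rating arrays below are hypothetical placeholders rather than the study data.

```python
# Sketch of the reported analysis pipeline on hypothetical arrays of 1-7 ratings.
# Library calls: statsmodels' aggregate_raters / fleiss_kappa and SciPy's
# shapiro / mannwhitneyu; the variable contents are made up for illustration.
import numpy as np
from scipy.stats import shapiro, mannwhitneyu
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Inter-rater agreement: rows = ideas, columns = the three raters' novelty scores.
rater_matrix = np.array([[5, 6, 5], [2, 3, 2], [4, 4, 5], [6, 6, 6]])
counts, _ = aggregate_raters(rater_matrix)      # ideas x categories count table
print("Fleiss' kappa:", fleiss_kappa(counts))

# Normality check on the pooled scores, then a Mann-Whitney U test between groups.
treatment = np.array([5.0, 5.3, 6.0, 4.7, 5.7])   # mean novelty per treatment idea
control = np.array([3.3, 4.0, 3.7, 4.3, 3.0])     # mean novelty per control idea
print("Shapiro-Wilk p:", shapiro(np.concatenate([treatment, control])).pvalue)
u_stat, p_value = mannwhitneyu(treatment, control, alternative="two-sided")
print("Mann-Whitney U:", u_stat, "p:", p_value)
```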

A Mann–Whitney U test was conducted to determine if there were differences in the distribution of novelty scores between the treatment and control groups. The results indicated a statistically significant difference in novelty scores between the groups (Mann–Whitney U = 4306.5, Z = −2.051, p = 0.040). Therefore, we reject the null hypothesis and conclude that the distribution of novelty is not the same across the treatment and control groups. In addition, the scores of the treatment group (mean rank = 110.28) were significantly higher than those of the control group (mean rank = 93.64).

A Mann–Whitney U test was conducted to determine if there were differences in the distribution of quality scores between the treatment and control groups. The results indicated a statistically significant difference in quality scores between the groups (Mann–Whitney U = 2621.5, Z = −6.278, p < 0.001). Therefore, we reject the null hypothesis and conclude that the distribution of quality is not the same across the treatment and control groups. In addition, the scores of the treatment group (mean rank = 126.80) were significantly higher than those of the control group (mean rank = 76.96).

A Mann–Whitney U test was conducted to determine if there were differences in the distribution of variety scores between the treatment and control groups. The results indicated no statistically significant difference in variety scores between the groups (Mann–Whitney U = 4559.5, Z = −1.500, p = 0.133), leading to the retention of the null hypothesis. This suggests that the distribution of variety scores is similar between the treatment and control groups. This result indicates that Co-Ideator may help overcome initial fixation and improve the novelty and quality of ideas; however, once a new idea space is found, the user might need to redo the brainstorming task to achieve not only a new (novel) idea space but also a diverse one. In addition, the scores of the treatment group (mean rank = 107.80) were not significantly higher than those of the control group (mean rank = 96.14).

In terms of the quantity of ideas, the treatment group produced a total of 102 ideas, while the control group produced 100 ideas. Notably, the study did not examine quantity in depth in the later evaluation, because it is not the core research focus; instead, the study focuses on how the framework affects the quality of the ideas, evaluated through novelty, quality, and variety. However, we acknowledge that quantity is an interesting metric for evaluation and worth investigating in the future.

Interview results

The data analysis followed the traditional format of qualitative research, as shown in Figure 9. The interviews were first recorded, and important details from the conversations were noted during transcription. Following transcription, each participant’s spoken sentence was assigned a code during the coding process. The codes were then grouped to find possible themes or conversation points. Five recurrent themes were identified by closely examining the clustered codes and quotes: adaptive design partner, adversarial teammate, supporting ideation and visualization, a trustworthy and cooperative partnership, and swinging between reality and anticipation. The discussion revolves around these themes, which shed light on the advantages and disadvantages of using GPT models as codesign tools. It is notable that these five themes describe the relationship between designers and Co-Ideator. The themes did not form a hypothesis (such as novelty improvement); instead, they were the foundations for building the chatbot and can be considered design requirements for the chatbot to achieve significant improvements in the quantitative assessment metrics.

  1. Theme 1: Adaptive design partner. In the ideation phase, the adaptability of the Co-Ideator GPT is essential for fostering a collaborative relationship between the designer and the AI. This model, informed by context-aware training methodology, mirrors and responds to the designer’s thought processes, providing a platform for deeper reflection and exploration beyond the conventional responses seen in earlier AI versions like ChatGPT 3.5. The expectation is for the Co-Ideator to not only align with the designer’s methods but also to take an active role in offering creative prompts and deepening the inquiry, enhancing the ideation process. For example, when a designer expresses a preference for a specific style of moodboard, the AI quickly aligns with this preference, offering relevant suggestions and seeking further clarification to refine its output.

  2. Theme 2: Adversarial teammate. Interacting with Co-Ideator GPT unveils a nuanced journey of emotional and cognitive experiences for designers, characterized by a blend of emotional intelligence and creative challenges. Initially, the AI’s challenging stance may provoke frustration, pushing designers out of their habitual creative strategies. However, this friction often transitions into a constructive force as designers come to value the AI’s perspectives and questioning nature, which promotes divergent thinking and breaks creative stalemates. To further broaden the imaginative boundaries, the Co-Ideator should be treated as an adversary to create a reflection-in-action (Schön, Reference Schön2017) effect throughout the process.

  3. Theme 3: Supporting ideation and visualization. This aspect emphasizes the significant role of AI, especially in its advanced multimodal capacities, in enriching the design process. As a collaborative tool, AI extends beyond ideation, assisting in crystallizing vague concepts into concrete outcomes that align with the designer’s vision. The emotional spectrum experienced by designers, ranging from relief to appreciation, underscores the AI’s role in facilitating creative exploration and execution. The multimodal function of the Co-Ideator proves to be essential in representing the lead designers’ ideas visually. The latest engine of ChatGPT 4 is now also capable of reading objects in an image and interpreting artistic styles at the very least.

  4. Theme 4: A trustworthy and cooperative partnership. The development of trust and a cooperative spirit underscores the evolving perception of AI as a reliable entity in the design workflow. Designers’ growing acknowledgment of AI’s value as a co-creative partner reflects a trend toward more integrated and collaborative design practices.

  5. Theme 5: Swinging between reality and anticipation. The dialogues between designers and ChatGPT reveal a complex interplay of expectations and realities, with varying emotional and pragmatic responses. Designers advocate for a more focused and relevant AI interaction, tailored to specific design contexts. They envision an AI that not only responds but actively engages in the creative dialogue, akin to a human collaborator. The anticipation for AI’s role in creative ideation is marked by a desire for AI to evoke lateral thoughts and engage critically with the content. While designers appreciate AI’s current contributions, they also highlight a need for AI to balance affirmation with critical inquiry, pointing to a pivotal area for AI’s evolution in design. Expectations extend to AI’s future capabilities in aiding brainstorming, conceptual mapping, and mood board creation, suggesting a more nuanced and proactive role in the design process.

Figure 9. Qualitative data analysis workflow.

Discussion

This discussion aims to deepen our understanding of custom GPT’s capabilities and limitations in the context of co-ideation. Co-Ideator can provoke the designer to think and reflect on their answers as they design. This aligns with Schön’s reflection-in-action (Schön, 2017), where designers critically reassess and reshape ideas during the act of designing through a continuous conversation with the situation. By prompting “why” and “how” questions in response to user inputs, Co-Ideator externalizes that reflective loop, effectively acting as an interrogator.

A review of the novelty and quality ratings given by the three experts shows that the treatment group performed significantly better than the control group on both criteria (novelty: Mann–Whitney U = 4306.5, Z = −2.051, p = 0.040; quality: Mann–Whitney U = 2621.5, Z = −6.278, p < 0.001). The variety criterion showed no significant difference between the groups (Mann–Whitney U = 4559.5, Z = −1.500, p = 0.133). It can therefore be concluded that the proposed framework for co-ideation with ChatGPT contributes to better performance on novelty and quality, while yielding similar performance on variety. These results were obtained with Mann–Whitney U tests, which compare the mean ranks of the two groups (Figure 8). The largest between-group difference was observed for the quality score, with an average mean rank difference of 50, followed by novelty, with a lower average mean rank difference of 17. Because the Mann–Whitney U test is rank-based, no conclusions can be drawn about the absolute magnitude of the performance differences; the mean ranks do, however, indicate that the quality difference is larger than the novelty difference.
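To make this rank-based comparison concrete, the sketch below shows how such a test could be run in Python with SciPy on hypothetical expert ratings; the scores, group sizes, and variable names are illustrative assumptions, not the study’s data.

```python
# Minimal sketch of the rank-based comparison described above (hypothetical data).
from scipy.stats import mannwhitneyu, rankdata

# Hypothetical 1-5 expert ratings for ideas from each group (not the study's data).
treatment_novelty = [4, 5, 3, 4, 5, 4, 3, 5, 4, 4]
control_novelty = [3, 3, 4, 2, 3, 4, 3, 2, 3, 4]

# Two-sided Mann-Whitney U test on the novelty ratings.
u, p = mannwhitneyu(treatment_novelty, control_novelty, alternative="two-sided")

# Mean ranks pooled across both groups, mirroring the "mean rank difference" reported above.
ranks = rankdata(treatment_novelty + control_novelty)
mean_rank_treatment = ranks[: len(treatment_novelty)].mean()
mean_rank_control = ranks[len(treatment_novelty):].mean()

print(f"U = {u:.1f}, p = {p:.3f}")
print(f"Mean rank difference = {mean_rank_treatment - mean_rank_control:.1f}")
```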

The interview results indicate that the AI acted as an adversarial teammate: it introduced productive friction, challenged designers’ habitual thinking patterns, and promoted divergent thinking. This supports the creative process by converting initial frustration into a positive force. Designers want the AI to become more engaged and proactive in creative dialogues, balancing affirmation with critical inquiry; this balance is crucial for effective brainstorming and conceptual mapping.

Impact of human–AI co-ideation on the novelty of designers’ ideas

Because the control group did not use the tool, it did not answer the evaluation survey; this section therefore only discusses the impact of human–AI co-ideation on the treatment group. The results of the quantitative study indicated a significant improvement in the novelty of ideas generated by the treatment group, suggesting that the co-ideation framework effectively enhances the originality and creativity of the ideas produced. Participants in the qualitative sessions frequently mentioned how Co-Ideator prompted unconventional lines of thinking, which often resulted in more innovative solutions. Figure 10 presents the responses of the 24 treatment-group participants to the survey question: “Do you feel that ChatGPT helped you to come up with more unique ideas, whereas you would do it yourself?” Participants rated their experience on a scale from 1 (not helpful) to 7 (extremely helpful), indicating how much they felt ChatGPT contributed to generating unique ideas compared to working alone. The majority of participants (54.2%) rated ChatGPT’s contribution as 5 or above, indicating a strong perception that the AI tool significantly enhanced their ability to generate unique ideas. Notably, the distribution peaks at 5 and 6, each accounting for 25% of responses, which suggests a high level of satisfaction with ChatGPT’s performance and helps explain the higher mean rank of the treatment group on novelty.
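As a small illustration of how such summary figures are derived from Likert-style responses, the snippet below computes the share of ratings of 5 or above and the modal responses; the 24 response values are hypothetical placeholders, not the actual survey data.

```python
# Illustrative summary of 1-7 survey ratings (hypothetical responses, n = 24).
from collections import Counter

responses = [5, 6, 5, 4, 6, 7, 5, 6, 3, 5, 6, 4,
             2, 5, 6, 7, 4, 5, 6, 3, 4, 6, 5, 4]

counts = Counter(responses)
share_high = sum(n for rating, n in counts.items() if rating >= 5) / len(responses)
max_count = max(counts.values())
modes = sorted(rating for rating, n in counts.items() if n == max_count)

print(f"Share of ratings >= 5: {share_high:.1%}")
print(f"Mode(s) of the distribution: {modes}")
```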

Figure 10. Survey question: helpfulness of ChatGPT.

Impact of human–AI co-ideation on the quality of designers’ ideas

The analysis indicated that the co-ideation setup resulted in significantly higher-quality ideas, particularly regarding feasibility and implementation potential. This enhanced quality in the treatment group can be attributed to ChatGPT’s ability to generate responses grounded in existing data. Since we measure quality based on feasibility, it stands to reason that the ideas generated by ChatGPT are of higher quality because they are based on preexisting, viable concepts, making them more feasible. Another possible reason for the higher quality of ideas generated by the treatment group could be that participants in the control group found it more challenging to understand the design question, despite both groups receiving the same explanation. This conclusion is based on the responses from the final survey, as shown in Table 6. In the survey’s open-ended question, participants were asked to share their thoughts about the experiment, and many from the control group mentioned difficulties in comprehending the design task. A lack of understanding can lead to lower-quality ideas, as participants may struggle to fully engage with the challenge.

Table 6. Feedback on Co-ideation session with and without ChatGPT

Impact of human–AI co-ideation on the variety of designers’ ideas

While the treatment group showed improvements in novelty and quality, the variety of ideas generated did not differ significantly between the two groups. This suggests that although ChatGPT enhances the uniqueness and feasibility of ideas, the diversity of ideas remains consistent regardless of the method used.

A potential factor that could help explain this similarity for the treatment group is that, as previous research has pointed out, ChatGPT has a tendency to overuse certain phrases (Ray, 2023). This can make its output seem repetitive and could have resulted in lower variety within the set of ideas generated by a single participant. For the control group, this phenomenon parallels design fixation in humans, which Jansson and Smith (1991) describe as “the blind, sometimes counterproductive adherence to a limited set of ideas in the design process.” Psychological studies have shown that idea generation is often constrained by presented examples and initial ideas, leading to a reduction in both novelty and variety (Viswanathan and Linsey, 2012). Therefore, the observed similarity in the variety of ideas between the treatment and control groups may be due to ChatGPT’s tendency to repeat certain phrases and the human habit of design fixation; both AI and human brainstorming processes can be limited by sticking to familiar patterns and existing ideas. An additional explanation for the variety results may lie in the AI–human interaction dynamics. The dynamics between Co-Ideator and designers described in Figure 6 may limit variety because the interaction inadvertently focused on depth through iterative questioning, encouraging exploration within a common idea space after zooming in on the problem statement.
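One lightweight way to probe the phrase-repetition explanation would be to count recurring word bigrams across each participant’s idea set, as sketched below. This is only a rough proxy for repetitiveness, not the variety metric used by the expert raters, and the example ideas are hypothetical.

```python
# Rough proxy for phrase repetition across one participant's ideas (illustrative only).
import re
from collections import Counter

def repeated_bigrams(ideas, min_count=2):
    """Return word bigrams that occur at least `min_count` times across the idea set."""
    counts = Counter()
    for idea in ideas:
        tokens = re.findall(r"[a-z]+", idea.lower())
        counts.update(zip(tokens, tokens[1:]))
    return {bigram: n for bigram, n in counts.items() if n >= min_count}

# Hypothetical ideas from a single participant.
ideas = [
    "A smart dustbin with a reinforced base",
    "A smart dustbin that sorts waste automatically",
    "A modular bin with a reinforced base and wheels",
]
print(repeated_bigrams(ideas))  # e.g. {('smart', 'dustbin'): 2, ('reinforced', 'base'): 2, ...}
```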

Impact of human–AI co-ideation on the quantity of designers’ ideas

In the experiment, all participants had 30 minutes to generate ideas. Only a very small difference was found between the two groups in the total number of ideas generated: the treatment group generated 103 ideas versus 100 ideas for the control group, an almost negligible difference. In the final survey, we asked both groups whether they felt stuck at some point during the ideation phase, since this could lead to a lower quantity of ideas. The responses are distributed on a scale from 1 (never felt stuck) to 7 (always felt stuck). Figure 11, which illustrates the responses of the control group, shows that most participants rated their experience as 4 or lower (62.4%), suggesting that while some participants encountered moments of difficulty, the majority did not frequently feel stuck. Conversely, Figure 12 shows the responses of the treatment group using ChatGPT, where 70.9% of participants rated their experience as 4 or lower. This difference of only 8.5 percentage points can help explain the small, non-significant difference in quantity.

Figure 11. Participant responses on feeling stuck during ideation (control group).

Figure 12. Participant responses on feeling stuck during ideation (treatment group).

Limitations & further research

The objective of this research was to test whether the ideation performance of a designer could be improved by a proposed framework for co-ideation with a custom GPT. The hypothesis was that a designer who co-ideates with the custom chatbot built on GPT-4o would outperform a designer who ideates individually without any additional means. This study provided insightful findings on the co-ideation process between designers and ChatGPT, but it was subject to several limitations. First, the training data used by ChatGPT introduce potential biases that could affect the outcome of the ideation sessions. The AI’s responses are shaped by the data it has been trained on, which may not fully represent the diversity of human creativity and problem-solving approaches (Brad, 2023). This limitation could distort the novelty and variety of the ideas generated. Second, the scope of the framework used in this study is limited to some degree and may not fully capture the complex dynamics of human–AI collaboration. While the framework provides a structured approach to co-ideation, it may oversimplify the nuanced interactions between designers and AI, potentially overlooking important aspects of the creative process (Crowder et al., 2019).

Another limitation arises from the relatively small sample size and the specific demographic of the participants, who were all master’s students from the Industrial Design Engineering faculty at TU Delft. This homogeneity may limit the generalizability of the findings to a broader population of designers with different backgrounds and experiences (Allmark, 2004). Future research should include a larger and more diverse sample of participants to enhance the generalizability of the results. Including designers from various educational backgrounds, professional experiences, and cultural contexts would provide a more holistic understanding of the co-ideation process. Additionally, exploring the integration of multimodal AI tools, which combine text, image, and other forms of input, could enrich co-ideation with AI and ultimately enhance the creative potential of designers across various fields.

In addition, the framework itself was not re-evaluated or iterated after the results were obtained, and no follow-up investigation was conducted into why it did not perform better on certain criteria; such iteration based on the findings is, by nature, a never-ending process. Future researchers may build on our findings, make use of newer models, and consider building a tool with a more nuanced user interface.

The final consideration relates to the constantly evolving field of AI. The mental model for human–AI co-creation may shift as AI continues to develop rapidly, potentially necessitating new inquiry and adaptation of the co-creation framework. This research presents a co-ideation framework that emphasizes the interaction between the designer and a custom GPT acting as Co-Ideator. ChatGPT is specifically tailored and optimized for conversational use, generating human-like responses based on its extensive information and knowledge (Fui-Hoon Nah et al., 2023). In this research field, human-centered AI collaboration is crucial for effectively leveraging generative AI applications (Fui-Hoon Nah et al., 2023). The evolving role of AI and the dynamics of interaction will undoubtedly lead to future research opportunities.

Anticipating future developments, this study contributes to the understanding of co-creation with generative conversational LLMs, specifically a custom GPT built on GPT-4o, and highlights prompt engineering as a promising future direction. Based on the results of this study, we expect that, by further refining the prompts and structure that define the human–computer dynamics, there is strong future potential for increased creativity and designer performance through co-ideation with LLMs.

Contribution

The contributions of the study are as follows. First, this research investigated the interaction between designers and AI to understand the problems designers face when using ChatGPT to ideate and, in this way, to identify design requirements for establishing effective human–AI co-ideation tools; these requirements can serve as guidelines for the development of AI tools for collaborative human–AI design. Additionally, a framework was proposed to describe the co-ideation between designers and the custom GPT, and a custom GPT (Co-Ideator) was developed that can be used to improve a designer’s ideation performance. Finally, the study demonstrated the effect of the proposed framework and the custom GPT tool on increasing designers’ ideation performance in terms of novelty and quality, which supports the potential of AI in enhancing creative processes.

It is notable that the standard ChatGPT suffered from hallucinations and limited memory, even after the long single-shot prompt was broken down into shorter blocks to form a multi-shot prompt. This was also observed during the initial self-testing while configuring the prompt. Custom GPTs offer features such as “custom instructions” that help bypass these issues.
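For readers who want to experiment with a similar setup outside the ChatGPT interface, the sketch below approximates the instructions-plus-multi-shot structure via the OpenAI Python SDK; the instruction text, example dialogue, and model name are assumptions for illustration and do not reproduce the authors’ actual Co-Ideator configuration.

```python
# Hypothetical approximation of a Co-Ideator-style setup using the OpenAI Python SDK.
# The study configured a custom GPT through ChatGPT's builder; this is not that prompt.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

co_ideator_instructions = (
    "You are a co-ideation partner for industrial designers. "
    "Probe the designer's present, past, and future experiences with the product, "
    "ask 'why' and 'how' follow-up questions to surface latent thoughts, "
    "and challenge assumptions before proposing ideas."
)

# Multi-shot structure: short message blocks instead of one long single-shot prompt.
messages = [
    {"role": "system", "content": co_ideator_instructions},
    {"role": "user", "content": "I find the material of the dustbin too weak."},
    {"role": "assistant", "content": "Why do you find the material weak?"},
    {"role": "user", "content": "It can get knocked over easily and trash can fall out."},
]

response = client.chat.completions.create(model="gpt-4o", messages=messages)
print(response.choices[0].message.content)
```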

Conclusion

This research aimed to explore, when LLMs are positioned as codesigners, how they influence the creative ideation processes of industrial designers, and whether the ideation performance of a designer could be improved by a proposed framework for co-ideation with a custom GPT compared with traditional ideation methods. To achieve these aims, this study first examined how LLMs influenced the creative ideation processes of industrial designers and identified the problems designers face when using ChatGPT to ideate through a survey. Then, a framework based on context mapping was proposed to guide the co-ideation between humans and a custom GPT, and the custom GPT (Co-Ideator) was developed on this basis. Finally, a design case study followed by a survey and an interview was conducted to evaluate the ideation performance of the framework compared with traditional ideation methods. The findings indicated that, when users employed co-ideation with the custom GPT, the novelty and quality of their ideation outperformed traditional ideation, while no statistically significant difference in variety was observed between the two methods. Furthermore, users employing co-ideation with the custom GPT performed slightly better on the quantity of generated ideas than users who used traditional ideation. This study identified design requirements for establishing effective human–AI co-ideation tools, which can serve as guidelines for the development of AI tools for collaborative human–AI design. Based on this study, a framework was also proposed to describe the co-ideation between designers and the custom GPT and to improve a designer’s performance. Finally, the study demonstrated the effect of the proposed framework and the custom GPT tool on increasing designers’ ideation performance in terms of novelty and quality, supporting the potential of AI in enhancing creative processes.

Supplementary material

The supplementary material for this article can be found at http://doi.org/10.1017/S0890060425100127.

References

Albers, A, Bursac, N and Rapp, S (2016) PGE – product generation engineering – case study of the dual mass flywheel. In DS 84: Proceedings of the DESIGN 2016 14th International Design Conference, pp. 791–800.
Allmark, P (2004) Should research samples reflect the diversity of the population? Journal of Medical Ethics 30(2), 185–189. https://doi.org/10.1136/jme.2003.004374
Anantrasirichai, N and Bull, D (2022) Artificial intelligence in the creative industries: A review. Artificial Intelligence Review 55(1), 589–656. https://doi.org/10.1007/s10462-021-10039-7
Baltà-Salvador, R, El-Madafri, I, Brasó-Vives, E and Peña, M (2025) Empowering engineering students through artificial intelligence (AI): Blended human–AI creative ideation processes with ChatGPT. Computer Applications in Engineering Education 33(1), e22817. https://doi.org/10.1002/cae.22817
Barrat, J (2023) Our Final Invention: Artificial Intelligence and the End of the Human Era. Hachette UK.
Boudhraa, S, Dorta, T, Milovanovic, J and Pierini, D (2021) Co-ideation critique unfolded: An exploratory study of a co-design studio ‘crit’ based on the students’ experience. CoDesign 17(2), 119–138. https://doi.org/10.1080/15710882.2019.1572765
Brad, S (2023) Enhancing creativity in deep learning models with SAVE-inspired activation functions. In IFIP Advances in Information and Communication Technology, pp. 147–171. https://doi.org/10.1007/978-3-031-42532-5_12
Briggs, RO and Reinig, BA (2007) Bounded Ideation Theory: A New Model of the Relationship between Idea Quantity and Idea Quality during Ideation. https://doi.org/10.1109/hicss.2007.108
Briggs, RO and Reinig, BA (2010) Bounded ideation theory. Journal of Management Information Systems 27(1), 123–144. https://doi.org/10.2753/MIS0742-1222270106
Cai, A, Rick, SR, Heyman, JL, Zhang, Y, Filipowicz, A, Hong, M, … and Malone, T (2023, November) DesignAID: Using generative AI and semantic diversity for design inspiration. In Proceedings of the ACM Collective Intelligence Conference, pp. 1–11.
Chakrabarti, A, Sarkar, P, Leelavathamma, B and Nataraju, BS (2005) A functional representation for aiding biomimetic and artificial inspiration of new ideas. Artificial Intelligence for Engineering Design, Analysis and Manufacturing 19(2). https://doi.org/10.1017/s0890060405050109
Chan, J, Fu, K, Schunn, CD, Cagan, J, Wood, KL and Kotovsky, K (2011) On the benefits and pitfalls of analogies for innovative design: Ideation performance based on analogical distance, commonness, and modality of examples. Journal of Mechanical Design 133(8). https://doi.org/10.1115/1.4004396
Chen, L, Cai, Z, Jiang, Z, Luo, J, Sun, L, Childs, P and Zuo, H (2024) AskNatureNet: A divergent thinking tool based on bio-inspired design knowledge. Advanced Engineering Informatics 62, 102593. https://doi.org/10.1016/j.aei.2024.102593
Chen, L, Shi, F, Han, J and Childs, PR (2017, August) A network-based computational model for creative knowledge discovery bridging human-computer interaction and data mining. In International Design Engineering Technical Conferences and Computers and Information in Engineering Conference (Vol. 58219, p. V007T06A001). American Society of Mechanical Engineers.
Chen, LQ, Wang, P, Dong, H, Shi, F, Han, JW, Guo, Y, Childs, PR, Xiao, J and Li, VC (2019) An artificial intelligence based data-driven approach for design ideation. Journal of Visual Communication and Image Representation 61, 10–22. https://doi.org/10.1016/j.jvcir.2019.02.009
Chen, L, Xia, D, Jiang, Z, Tan, X, Sun, L and Zhang, L (2025) A conceptual design method based on concept–knowledge theory and large language models. Journal of Computing and Information Science in Engineering 25(2). https://doi.org/10.1115/1.4066773
Chiou, LY, Hung, PK, Liang, RH and Wang, CT (2023, July) Designing with AI: An exploration of co-ideation with image generators. In Proceedings of the 2023 ACM Designing Interactive Systems Conference, pp. 1941–1954. https://doi.org/10.1145/3563657.3596001
Chou, JR (2014) An ideation method for generating new product ideas using TRIZ, concept mapping, and fuzzy linguistic evaluation techniques. Advanced Engineering Informatics 28(4), 441–454.
Christensen, B and Schunn, CD (2007) The relationship of analogical distance to analogical function and preinventive structure: The case of engineering design. Memory & Cognition 35(1), 29–38. https://doi.org/10.3758/bf03195939
Coddington, A, Giang, C, Graham, A, Prince, A, Mattila, P, Thong, C and Kocsis, A (2016) Knowledgeability culture: Co-creation in practice. Proceedings of DRS. https://doi.org/10.21606/drs.2016.134
Crean, B and O’Donoghue, D (2002) RADAR: Finding analogies using attributes of structure. In Lecture Notes in Computer Science. Springer Science+Business Media. https://doi.org/10.1007/3-540-45750-x_3
Cross, N (2006) Designerly Ways of Knowing. Springer London, pp. 1–13.
Crowder, JA, Carbone, J and Friess, S (2019) Human–AI collaboration. In Springer eBooks, pp. 35–50. https://doi.org/10.1007/978-3-030-17081-3_4
Daly, SR, Seifert, CM, Yilmaz, S and Gonzalez, R (2016) Comparing ideation techniques for beginning designers. Journal of Mechanical Design 138(10), 101108. https://doi.org/10.1115/1.4034087
Daly, SR, Yilmaz, S, Christian, JF, Seifert, CM and Gonzalez, R (2012) Design heuristics in engineering concept generation. Journal of Engineering Education 101(4), 601–629. https://doi.org/10.1002/j.2168-9830.2012.tb01121.x
Ekvall, H and Winnberg, P (2023) Integrating ChatGPT into the UX Design Process: Ideation and Prototyping with LLMs.
Filippi, S (2023) Measuring the impact of ChatGPT on fostering concept generation in innovative product design. Electronics 12(16), 3535. https://doi.org/10.3390/electronics12163535
Fleiss, JL (1971) Measuring nominal scale agreement among many raters. Psychological Bulletin 76(5), 378–382. https://doi.org/10.1037/h0031619
Fleiss, JL, Levin, B and Paik, MC (2003) Statistical Methods for Rates and Proportions, 3rd Edn. Hoboken, NJ: Wiley. https://doi.org/10.1002/0471445428.fmatter
Freese, S (2023) AI in Co-Creation: The Usability and Impact of AI Tools for Co-Creation in Participatory Design to Generate Innovative and User-Centric Design Solutions.
Fui-Hoon Nah, F, Zheng, R, Cai, J, Siau, K and Chen, L (2023) Generative AI and ChatGPT: Applications, challenges, and AI-human collaboration. Journal of Information Technology Case and Application Research 25(3), 277–304. https://doi.org/10.1080/15228053.2023.2233814
Garvin, M (2021) Close Encounters of the HCI Kind: An Ethnography of Human-Centered Approaches in Space Technology (Doctoral dissertation).
Giaccardi, E and Redström, J (2020) Technology and more-than-human design. Design Issues 36(4), 33–44. https://doi.org/10.1162/desi_a_00612
Goel, AK and Bhatta, SR (2004) Use of design patterns in analogy-based design. Advanced Engineering Informatics 18(2), 85–94. https://doi.org/10.1016/j.aei.2004.09.003
Gonçalves, M and Cash, P (2021) The life cycle of creative ideas: Towards a dual-process theory of ideation. Design Studies 72, 100988. https://doi.org/10.1016/j.destud.2020.100988
Grilli, L and Pedota, M (2024) Creativity and artificial intelligence: A multilevel perspective. Creativity and Innovation Management 33(2), 234–247. https://doi.org/10.1111/caim.12580
Hamilton, V, Brisco, R and Grierson, H (2024, September) How can AI support the creation of novel ideas in product design. In 26th International Conference on Engineering and Product Design Education: Rise of the Machines: Design Education in the Generative AI Era, pp. 133–138. https://doi.org/10.35199/EPDE.2024.23
Han, JW, Shi, F, Chen, LS and Childs, PR (2018) The combinator – A computer-based tool for creative idea generation based on a simulation approach. Design Science 4. https://doi.org/10.1017/dsj.2018.7
He, T, Stankovic, A, Niforatos, E and Kortuem, G (2024) DesignMinds: Enhancing video-based design ideation with vision-language model and context-injected large language model. arXiv preprint arXiv:2411.03827.
Helfman Cohen, Y, Reich, Y and Greenberg, S (2014) Biomimetics: Structure–function patterns approach. Journal of Mechanical Design 136(11), 111108. https://doi.org/10.1115/1.4028169
Hernandez, NV, Shah, JJ and Smith, SM (2010) Understanding design ideation mechanisms through multilevel aligned empirical studies. Design Studies 31(4), 382–410.
Huo, K, Vinayak and Ramani, K (2017) Window-Shaping. https://doi.org/10.1145/3024969.3024995
Jansson, D and Smith, S (1991) Design fixation. Design Studies 12(1), 3–11.
Jeon, Y, Jin, S, Shih, PC and Han, K (2021, May) FashionQ: An AI-driven creativity support tool for facilitating ideation in fashion design. In Proceedings of the 2021 CHI Conference on Human Factors in Computing Systems, pp. 1–18. https://doi.org/10.1145/3411764.3445093
Johansen, K (2023, December) Need-to-Know Search Engine Facts & Statistics for 2023. SEO.com. https://www.seo.com/blog/search-engine-statistics/
Kerne, A, Webb, AM, Smith, SM, Linder, R, Lupfer, N, Qu, Y and Damaraju, S (2014) Using metrics of curation to evaluate information-based ideation. ACM Transactions on Computer-Human Interaction (ToCHI) 21(3), 1–48. https://doi.org/10.1145/2591677
Kim, J and Ryu, H (2014) A design thinking rationality framework: Framing and solving design problems in early concept generation. Human–Computer Interaction 29(5–6), 516–553.
Kocaballi, AB (2023) Conversational AI-Powered Design: ChatGPT as Designer, User, and Product. https://arxiv.org/abs/2302.07406
Li, J, Li, J and Su, Y (2024, May) A map of exploring human interaction patterns with LLM: Insights into collaboration and creativity. In International Conference on Human-Computer Interaction, pp. 60–85. Cham: Springer Nature Switzerland.
Lin, PY, Andersen, K, Schmidt, R, Schoenmakers, S, Hofmeyer, H, Pauwels, P and IJsselsteijn, W (2024, July) Text-to-image AI as a catalyst for semantic convergence in creative collaborations. In Proceedings of the 2024 ACM Designing Interactive Systems Conference, pp. 2753–2767.
Lin, C, Yeh, Y, Hung, Y and Chang, R (2013) Data mining for providing a personalized learning path in creativity: An application of decision trees. Computers & Education 68, 199–210. https://doi.org/10.1016/j.compedu.2013.05.009
Liu, Y, Han, T, Ma, S, Zhang, J, Yang, Y, Tian, J, He, H, Li, A, He, M, Liu, Z, Wu, Z, Zhao, L, Zhu, D, Li, X, Qiang, N, Shen, D, Liu, T and Ge, B (2023) Summary of ChatGPT-related research and perspective towards the future of large language models. Meta-Radiology, 100017.
Ma, SY (2024) Exploring Ambiguity in Generative AI Images and its Impact on Collaborative Design Ideation (Doctoral dissertation, Industrial Engineering and Innovation Sciences, Eindhoven University of Technology).
Miller, AI (2019) The Artist in the Machine: The World of AI-Powered Creativity. MIT Press. https://doi.org/10.7551/mitpress/11585.001.0001
Mohan, M, Shah, JJ, Narsale, SS and Khorshidi, M (2014) Capturing ideation paths for discovery of design exploration strategies in conceptual engineering design. In Springer eBooks, pp. 589–604. https://doi.org/10.1007/978-94-017-9112-0_32
Muller, M, Houde, S, Gonzalez, G, Brimijoin, K, Ross, SI, Moran, DAS and Weisz, JD (2024, June) Group brainstorming with an AI agent: Creating and selecting ideas. In International Conference on Computational Creativity.
Nelson, BA, Wilson, JO, Rosen, D and Yen, J (2009) Refined metrics for measuring ideation effectiveness. Design Studies 30(6), 737–743. https://doi.org/10.1016/j.destud.2009.07.002
Ojha, A, Lee, HS and Lee, M (2015) I-get. https://doi.org/10.1145/2814940.2815006
Pinel, F, Varshney, LR and Bhattacharjya, D (2014) A culinary computational creativity system. In Atlantis Thinking Machines, pp. 327–346. Atlantis Press. https://doi.org/10.2991/978-94-6239-085-0_16
Ray, PP (2023) ChatGPT: A comprehensive review on background, applications, key challenges, bias, ethics, limitations and future scope. Internet of Things and Cyber-Physical Systems 3, 121–154. https://doi.org/10.1016/j.iotcps.2023.04.003
Russo-Spena, T and Mele, C (2012) “Five Co-s” in innovating: A practice-based view. Journal of Service Management 23(4), 527–553. https://doi.org/10.1108/09564231211260404
Sanders, EB and Stappers, PJ (2008) Co-creation and the new landscapes of design. CoDesign 4(1), 5–18. https://doi.org/10.1080/15710880701875068
Schön, DA (2017) The Reflective Practitioner: How Professionals Think in Action. Routledge.
Self, JA, Evans, MI and Kim, EJ (2016) A comparison of digital and conventional sketching: Implications for conceptual design ideation. Journal of Design Research 14(2), 171. https://doi.org/10.1504/jdr.2016.077028
Shaer, O, Cooper, A, Mokryn, O, Kun, AL and Ben Shoshan, H (2024, May) AI-augmented brainwriting: Investigating the use of LLMs in group ideation. In Proceedings of the CHI Conference on Human Factors in Computing Systems, pp. 1–17. https://doi.org/10.1145/3613904.3642414
Shah, JJ, Smith, S and Vargas-Hernandez, N (2003) Metrics for measuring ideation effectiveness. Design Studies 24(2), 111–134. https://doi.org/10.1016/s0142-694x(02)00034-0
Shen, H, Shen, L, Wu, W and Zhang, K (2025, April) IdeationWeb: Tracking the evolution of design ideas in human-AI co-creation. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, pp. 1–19. https://doi.org/10.1145/3706598.3713375
Shin, H, Choi, S, Cho, JY, Admoni, S, Lim, H, Kim, T, Hong, H, Lee, M and Kim, J (2024) Towards an evaluation of LLM-generated inspiration by developing and validating inspiration scale. In Proceedings of the HEAL: Human-Centered Evaluation and Auditing of Language Models CHI Workshop.
Shin, JG, Koch, J, Lucero, A, Dalsgaard, P and Mackay, WE (2023, April) Integrating AI in human-human collaborative ideation. In Extended Abstracts of the 2023 CHI Conference on Human Factors in Computing Systems, pp. 1–5. https://doi.org/10.1145/3544549.3573802
Shin, J, Polyanskaya, A, Lucero, A and Oulasvirta, A (2025) No evidence for LLMs being useful in problem reframing. arXiv preprint arXiv:2503.01631.
Sońta-Drączkowska, E, Cichosz, M, Klimas, P and Pilewicz, T (2024) Co-creating innovations with users: A systematic literature review and future research agenda for project management. European Management Journal.
Srivathsavai, R, Genco, N, Hölttä-Otto, K and Seepersad, CC (2010, January) Study of existing metrics used in measurement of ideation effectiveness. In International Design Engineering Technical Conferences and Computers and Information in Engineering Conference (Vol. 44137, pp. 355–366). https://doi.org/10.1115/DETC2010-28802
Talgorn, E, Hendriks, M, Geurts, L and Bakker, C (2022) A storytelling methodology to facilitate user-centered co-ideation between scientists and designers. Sustainability 14(7), 4132. https://doi.org/10.3390/su14074132
Varshney, LR, Pinel, F, Varshney, KR, Bhattacharjya, D, Schörgendorfer, A and Chee, Y (2019) A big data approach to computational creativity: The curious case of Chef Watson. IBM Journal of Research and Development 63(1), 7:1–7:18. https://doi.org/10.1147/jrd.2019.2893905
Vattam, S, Wiltgen, B, Helms, MJ, Goel, AK and Yen, J (2011) DANE: Fostering creativity in and through biologically inspired design. In Springer eBooks, pp. 115–122. https://doi.org/10.1007/978-0-85729-224-7_16
Verganti, R, Dell’Era, C and Swan, KS (2021) Design thinking: Critical analysis and future evolution. Journal of Product Innovation Management 38(6), 603–622. https://doi.org/10.1111/jpim.12610
Verganti, R, Vendraminelli, L and Iansiti, M (2020) Innovation and design in the age of artificial intelligence. Journal of Product Innovation Management 37(3), 212–227. https://doi.org/10.1111/jpim.12523
Visser, FS, Stappers, PJ, Van der Lugt, R and Sanders, EB (2005) Contextmapping: Experiences from practice. CoDesign 1(2), 119–149.
Viswanathan, VK and Linsey, JS (2012) Physical models and design thinking: A study of functionality, novelty and variety of ideas. Journal of Mechanical Design 134(9), 091004. https://doi.org/10.1115/1.4007148
Wang, WF, Lu, CT, Campanyà, NP, Chen, BY and Chen, MY (2025) AIdeation: Designing a human-AI collaborative ideation system for concept designers. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems, pp. 1–28. https://doi.org/10.1145/3706598.3714148
Wang, H, Ohsawa, Y, Hu, X and Xu, F (2014) Idea discovery: A context-awareness dynamic system approach for computational creativity. In Studies in Computational Intelligence, pp. 99–111. Springer Nature. https://doi.org/10.1007/978-4-431-55209-3_7
Westerlund, M, Leminen, S and Habib, C (2018) Key constructs and a definition of living labs as innovation platforms. Technology Innovation Management Review 8(12), 51–62. https://doi.org/10.22215/timreview/1205
Wolverton, M and Hayes-Roth, B (1994) Retrieving semantically distant analogies with knowledge-directed spreading activation. In National Conference on Artificial Intelligence, pp. 56–61. https://www.aaai.org/Papers/AAAI/1994/AAAI94-009.pdf
Xu, X, Yin, J, Gu, C, Mar, J, Zhang, S, E, JL and Dow, SP (2024, March) Jamplate: Exploring LLM-enhanced templates for idea reflection. In Proceedings of the 29th International Conference on Intelligent User Interfaces, pp. 907–921. https://doi.org/10.1145/3640543.3645196
Yu-Han, C and Chun-Ching, C (2023, July) Investigating the impact of generative artificial intelligence on brainstorming: A preliminary study. In 2023 International Conference on Consumer Electronics-Taiwan (ICCE-Taiwan), pp. 193–194. IEEE. https://doi.org/10.1109/ICCE-Taiwan58799.2023.10226617
Zheng, J, Wang, KC, Zhou, J and Koh, TK (2024) Large language model in ideation for product innovation: An exploratory comparative study. Social Science Research Network. https://doi.org/10.2139/ssrn.4729982
Zhou, Z, Li, Y and Yu, J (2024) Exploring the application of LLM-based AI in UX design: An empirical case study of ChatGPT. Human–Computer Interaction, pp. 1–33.
Zhu, Q and Luo, J (2023) Generative transformers for design concept generation. Journal of Computing and Information Science in Engineering 23(4), 041003.