
AI-driven feedback for improving teamwork and learning in collaborative engineering design

Published online by Cambridge University Press:  27 August 2025

Sabah Farshad*
Affiliation:
Skolkovo Institute of Science and Technology, Russia
Clement Fortin
Affiliation:
Skolkovo Institute of Science and Technology, Russia

Abstract:

Engineering design is inherently a collaborative process that requires active engagement and effective communication. Project-based Learning (PBL) is increasingly recognized for fostering these essential skills. However, instructors face challenges in objectively monitoring interactions and providing process-oriented feedback, particularly in large-scale settings where free-riders and disengaged participants affect team dynamics. This study introduces a generative AI approach to deliver real-time, scalable, and empathetic feedback that enhances team collaboration. Findings highlight the potential of AI-driven systems to improve student engagement and learning outcomes, though limitations remain in providing context-specific advice. A secure framework for AI integration in collaborative learning environments is also proposed.

Information

Type
Article
Creative Commons
This is an Open Access article, distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives licence (http://creativecommons.org/licenses/by-nc-nd/4.0/), which permits non-commercial re-use, distribution, and reproduction in any medium, provided the original work is unaltered and is properly cited. The written permission of Cambridge University Press must be obtained for commercial re-use or in order to create a derivative work.
Copyright
© The Author(s) 2025

1. Introduction

Engineering design is inherently collaborative, with effective communication and teamwork crucial to achieving successful outcomes. Digital tools such as instant messaging and online collaboration platforms significantly facilitate interactions among team members, allowing distributed teams to collaborate efficiently regardless of geographic constraints (Farshad et al., 2024; Wang & Hannafin, 2020). Despite these technological advances, significant challenges remain in monitoring, assessing, and enhancing effective teamwork, a difficulty especially evident in educational settings that use Project-based Learning (PBL), where students collaboratively address real-world engineering problems. Although PBL effectively fosters skills such as problem-solving, creativity, and teamwork (Prince & Felder, 2006), instructors often struggle to monitor and guide team interactions objectively and efficiently, particularly in large-scale environments (Oakley et al., 2004).

Common challenges in collaborative design include "free riding," unequal contributions among members, and communication breakdowns, which lead to dissatisfaction, reduced learning quality, and diminished project outcomes (Karau & Williams, 1993; Olson & Olson, 2000). Manual monitoring of team communication is impractical and does not scale, highlighting the need for automated, objective feedback solutions (Dym et al., 2005). Existing AI-driven educational tools have improved certain aspects of learning through automation and personalized feedback, but they generally focus on task-specific assessments rather than the nuanced interpersonal dynamics of collaborative teams.

Our prior research has progressively explored these collaboration challenges by implementing and evaluating various technological interventions. Building upon these studies, this paper introduces a novel approach using generative AI, specifically ChatGPT, to provide scalable, personalized feedback based on Motivational Interviewing (MI) techniques. Unlike general-purpose AI models, our approach emphasizes secure, context-sensitive, and motivational feedback to enhance team engagement and effectiveness in engineering design.

This research addresses two primary questions: (1) How effectively can Generative Artificial Intelligence (GenAI) replicate human-like MI techniques in collaborative engineering design teams? (2) How can GenAI be securely integrated into team communication platforms to ensure data privacy and confidentiality in large-scale educational settings? By investigating these questions, this study aims to contribute toward scalable, context-sensitive AI solutions that not only enhance student collaboration but also provide instructors with actionable insights for effective team management.

2. Literature review

Collaborative learning in engineering education brings together diverse perspectives, allowing students to develop essential professional skills such as teamwork, communication, problem-solving, and creativity (Dym et al., 2005; Prince & Felder, 2006). PBL, one of the most recognized collaborative learning approaches in engineering education, is widely used to simulate real-world engineering challenges, significantly enhancing these skills through practical, team-based experiences (Mills & Treagust, 2003; Hmelo-Silver, 2004). Despite its advantages, managing and assessing teamwork in PBL remains challenging, particularly in large-scale courses, due to issues such as uneven participation, communication breakdowns, and difficulties in providing timely, objective feedback (Oakley et al., 2004; Olson & Olson, 2000).

Recent advancements in AI-driven feedback systems offer promising solutions to these challenges. Intelligent Tutoring Systems (ITS) and automated assessment platforms deliver timely, personalized feedback, significantly improving student learning outcomes in STEM education (VanLehn, 2011; Saputra et al., 2024). Studies have shown AI tutoring systems achieving effectiveness comparable to human tutors, providing real-time formative assessments and enabling scalable, consistent feedback in large educational settings (Gao et al., 2024; Zhang et al., 2023). In particular, Large Language Models (LLMs), such as ChatGPT, demonstrate strong capabilities in natural language tasks, expanding their use in complex assessments and collaborative learning scenarios (Zhang et al., 2023; Nagaraj et al., 2023).

However, current AI educational tools primarily address task-oriented feedback (e.g., programming assignments) and focus less on the nuanced interpersonal and motivational factors crucial to effective teamwork. For instance, while specialized chatbots provide instant feedback in engineering tasks (Setiawan et al., 2024), they typically lack the advanced empathic response strategies, such as MI, necessary for addressing complex team dynamics. Similarly, platforms like CATME facilitate peer evaluations and team formation but do not offer real-time motivational support or consider the data security and confidentiality requirements critical for sensitive team interactions (Sajadi et al., 2023).

Research on integrating GenAI into team-based engineering education is emerging, highlighting opportunities for improving teamwork through context-sensitive, personalized interactions (Kharrufa et al., 2024). These AI-driven interactions can enhance both the task-based and social aspects of collaboration by providing scalable emotional support, fostering psychological safety, and encouraging balanced participation (Sajadi et al., 2023; Nagaraj et al., 2023). Nonetheless, empirical studies directly assessing AI's ability to simulate human motivational coaching techniques in collaborative team settings are still limited.

Empirical evidence also points to potential challenges, such as student over-reliance on AI-generated solutions, which can diminish critical thinking and independent problem-solving skills (Kazemitabaar et al., 2024). Furthermore, while AI can reliably deliver basic formative feedback, significant limitations remain in complex or domain-specific contexts, particularly when nuanced emotional and interpersonal dynamics influence learning and performance (Khan et al., 2023; Alsaiari et al., 2024).

Therefore, despite the transformative potential of AI-driven feedback tools, clear research gaps remain in developing and validating solutions that explicitly combine personalized motivational support, secure integration, and real-time feedback tailored specifically for collaborative engineering design teams. This paper addresses these gaps by investigating how GenAI models like ChatGPT can effectively replicate human MI techniques, enhance collaboration, and securely integrate into team communication platforms in educational settings.

3. Our prior research

This research builds upon a systematic progression of our prior studies examining collaborative engineering design in PBL contexts. Each of the previous studies has been peer-reviewed and published, establishing a foundation for the current research (Farshad & Fortin, 2021a; Farshad et al., 2023; Farshad & Fortin, 2023a, 2023b; Farshad, Brovar, & Fortin, 2024). Figure 1 summarizes these studies, illustrating our stepwise investigation into enhancing collaborative teamwork through technological interventions.

Figure 1. The development of our research in collaborative engineering design and learning: a six-stage journey to enhance active engagement

In Study I, we explored information flow within fully online engineering teams tasked with designing and launching rockets based on systems engineering principles. Our ethnographic study utilizing Distributed Cognition for Teamwork (DiCoT) revealed extensive reliance on cloud-based collaboration platforms for communication, coordination, and file sharing, but also highlighted significant challenges in ensuring equal participation among team members.

To address these issues, Study II introduced data-driven dashboards combined with questionnaires to quantify and visualize team engagement, specifically applied in a Technology Planning and Road Mapping project involving graduate student teams. This study demonstrated the feasibility of objectively measuring engagement, emphasizing the necessity of combining multiple data sources for comprehensive assessments.

Building further, Study III investigated the effectiveness of process-oriented feedback on team collaboration and performance in a Systems Engineering Air Mobility Project. A comparative design showed that feedback facilitated more balanced participation but did not yield statistically significant improvements in overall project outcomes.

Recognizing the potential for motivational strategies, Study IV employed MI techniques in a Systems Engineering PBL. A key finding was the significant positive impact of communication strategies such as MI techniques on collaboration and project outcomes (Figure 2). Additionally, sentiment analysis was integrated to assess emotional engagement, highlighting the viability of AI-enhanced approaches.

Figure 2. The test group received regular feedback on collaboration through MI methods, while the control group received normal classroom feedback (Farshad, Brovar, & Fortin, 2024)

Most recently, Study V revisited data from Studies III and IV, applying supervised machine learning and natural language processing (NLP) methods to predict collaborative engagement levels from team communication data. Results showed encouraging predictive accuracy, confirming the promise of machine learning as a scalable tool for assessing team dynamics and engagement in real-time.
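Study V's full pipeline is reported in Farshad et al. (2023); as a rough illustration of the approach, the sketch below trains a toy text classifier on labeled chat excerpts. The feature extraction, model, and example data are simplified assumptions for illustration, not the published configuration.

```python
# Illustrative sketch only: the actual Study V pipeline is described in
# Farshad et al. (2023); features, labels, and model here are placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical training data: one chat excerpt per team member,
# labeled 1 = actively engaged, 0 = disengaged.
chat_excerpts = [
    "I pushed the updated CAD model, can someone review the tolerances?",
    "ok",
    "Let's split the requirements doc; I'll take sections 2 and 3.",
    "sorry, missed the meeting again",
]
labels = [1, 0, 1, 0]

# TF-IDF features feeding a linear classifier, a common baseline for
# engagement prediction from team chat.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(chat_excerpts, labels)
print(model.predict(["I'll draft the test plan tonight and share it."]))
```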

Together, these prior studies established a strong foundation for developing the AI-driven approach presented in this paper, specifically aiming to replicate human MI techniques to support collaborative engineering design teams. Our current research leverages insights from previous studies to introduce a secure, scalable AI architecture capable of providing real-time, motivational feedback in educational environments.

4. Methodology

This study examines how effectively GenAI (specifically ChatGPT) can replicate MI techniques to support collaboration within PBL engineering teams. Our methodological approach comprises four main stages: (1) data preparation, (2) AI simulation, (3) evaluation by human experts, and (4) addressing scalability and security considerations.

4.1. Data collection and preparation

We initially selected 50 communication sessions from our prior dataset (Study IV), consisting of text-based (20%) and transcribed audio conversations (80%) between student teams and a human coach trained in MI. To ensure statistical robustness, a power analysis using G*Power (Faul et al., 2009) was later conducted, considering a medium effect size (0.5), alpha of 0.05, and power of 0.8, resulting in a recommended total of 65 sessions.

Therefore, we expanded our dataset accordingly, maintaining the same text-to-audio ratio. Each session included various interactions demonstrating MI techniques, providing a diverse and representative dataset suitable for evaluating AI-generated responses.
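For reproducibility, an equivalent power calculation can be sketched in Python with statsmodels. Since the test family configured in G*Power is not stated here, the configuration below (a two-sided, two-sample t-test) is an assumption and yields a per-group figure rather than the reported total of 65 sessions.

```python
# Sketch of a power analysis analogous to the G*Power calculation described
# above (effect size 0.5, alpha 0.05, power 0.8). The exact G*Power test
# family is not stated in the paper, so this result will not necessarily
# match the reported total of 65 sessions.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.5, alpha=0.05, power=0.8, alternative="two-sided"
)
print(f"Required sample size per group: {n_per_group:.1f}")  # ~63.8
```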

4.2. AI simulation process

We employed OpenAI’s ChatGPT (via OpenAI Playground) to simulate MI-based conversations. Selected segments from the prepared dataset served as inputs for prompting the AI. The AI was specifically instructed to mimic MI techniques, including open-ended questions, reflective listening, and empathetic dialogue. Prompt engineering was utilized without model fine-tuning to optimize responses, aiming to closely replicate human MI interactions.
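As a minimal sketch of this setup, the snippet below issues an MI-oriented system prompt through the OpenAI API. The study used the Playground interface, and the exact prompt wording and generation parameters are not published, so the prompt text and model name here are illustrative assumptions.

```python
# Hypothetical reconstruction of the MI prompting setup; the actual study
# used the OpenAI Playground UI and its exact instructions are not published.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

MI_SYSTEM_PROMPT = (
    "You are a team coach using Motivational Interviewing. Respond with "
    "open-ended questions, reflective listening, and empathy. Do not give "
    "direct solutions; help the student articulate their own next steps."
)

student_message = (
    "Our team can't seem to agree on anything, and it's really slowing us down."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # assumption, based on the model reported in Section 5
    messages=[
        {"role": "system", "content": MI_SYSTEM_PROMPT},
        {"role": "user", "content": student_message},
    ],
)
print(response.choices[0].message.content)
```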

4.3. Evaluation procedure

The evaluation involved three rounds conducted by two researchers experienced in MI. In the first two rounds, each researcher independently assessed AI-generated responses against the original human MI interventions. The third round was a collaborative evaluation conducted after a period of distance from the initial assessments, which helped preserve objectivity. Each AI response received a score on two criteria:

  • Relevance (1–5 scale): alignment with student requests and contextual appropriateness.

  • Use of MI techniques (1–5 scale): effective demonstration of MI strategies (reflective listening, open-ended questions, empathy).

An average score across evaluators was then calculated for each response. Inter-rater reliability was checked periodically to confirm consistency.
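A small sketch of this aggregation step is shown below. The paper does not name the reliability statistic used, so quadratic-weighted Cohen's kappa and the example scores are assumptions for illustration.

```python
# Sketch of the score aggregation; the reliability statistic is not named
# in the paper, so weighted Cohen's kappa is an illustrative choice.
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical 1-5 scores from the two evaluators for five AI responses.
rater_a = np.array([4, 5, 3, 4, 4])
rater_b = np.array([4, 4, 3, 5, 4])

per_response_mean = (rater_a + rater_b) / 2  # average score per response
kappa = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
print(per_response_mean, f"weighted kappa = {kappa:.2f}")
```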

4.4. Scalability and security considerations

Finally, we propose a secure and scalable architecture for integrating GenAI into team communication platforms. Recognizing the importance of data privacy, our architecture incorporates encryption protocols and strict access controls, ensuring sensitive communication remains confidential. Additionally, scalability is addressed through automated feedback loops, enabling real-time AI-driven interventions across multiple teams simultaneously. This approach facilitates instructor oversight via secure dashboards, aligning AI-generated feedback with educational goals and maintaining human intervention options when necessary.
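The architecture is specified at the block-diagram level; as one possible concretization of its encryption and access-control components, the sketch below pairs symmetric (Fernet) encryption with a simple role check. The key handling and the specific cryptographic scheme are assumptions, not details from the paper.

```python
# Illustrative sketch of the confidentiality layer: symmetric encryption of
# stored team messages plus a role check before decryption. The paper
# specifies this architecture only at the block-diagram level (Figure 3).
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, held in a key vault, not in code
cipher = Fernet(key)

def store_message(text: str) -> bytes:
    """Encrypt a team message before it is persisted."""
    return cipher.encrypt(text.encode())

def read_message(token: bytes, role: str) -> str:
    """Decrypt only for authorized roles (instructor/admin)."""
    if role not in {"instructor", "admin"}:
        raise PermissionError("role not authorized to view raw messages")
    return cipher.decrypt(token).decode()

token = store_message("We are behind on the CAD integration task.")
print(read_message(token, role="instructor"))
```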

5. Study findings and analysis

This study evaluated the ability of GenAI/LLMs (specifically OpenAI's GPT-4o via the Playground) to apply MI techniques. We used prior communication data from PBL courses in systems engineering. Data from the fourth study, described in Section 3, consisted of text and audio conversations between team members and a human coach; these were used to generate and evaluate AI responses. The AI's task was to respond to the students' requests and simulate the role of a human coach who had previously used MI techniques. The aim of this study was to evaluate how closely AI-generated responses match human MI interactions and, ultimately, to propose an architecture for integrating GenAI into educational platforms to provide scalable coaching.

5.1. Data collection and preparation

The communication data used in this study were extracted from 65 sessions (expanded from an initial 50 following power analysis) that were randomly selected from the previous study and were divided as follows:

  • Text communication: 13 sessions (20%)

  • Audio transcripts of face-to-face conversations: 52 sessions (80%)

These sessions represented a variety of back-and-forth exchanges between the students and coach. Each interaction was treated as a data point, and key segments (mainly collaboration-oriented, particularly those involving MI techniques) were extracted from each session for use in this simulation study.

5.2. AI simulation process

We used the OpenAI Playground platform to simulate responses with GPT-4o mini. Data extracted from the conversations were used to create prompts and reconstruct the students' conversations with their human coach. Each response generated by the AI was evaluated against the MI techniques used by the original human coach. The evaluations focused on the following:

  • Relevance to the student's request

  • Use of MI techniques such as open-ended questions, reflective listening, and empathy

  • Effectiveness in engaging and supporting students compared to the human coach

5.3. Evaluation criteria

This study evaluated the effectiveness of GenAI (ChatGPT) in simulating MI techniques within collaborative engineering design teams. Using 65 communication sessions, we analyzed how closely AI-generated responses mirrored those of a human MI-trained coach.

5.3.1. Quantitative analysis

The AI responses were evaluated against two primary metrics: relevance and use of MI techniques, scored on a 1–5 scale. AI-generated responses achieved an average relevance score of 4.2/5, indicating strong contextual alignment with student concerns. Similarly, the application of MI techniques, including reflective listening, open-ended questioning, and empathetic responses, received an average score of 4.0/5. Overall, 82% of AI responses effectively employed MI strategies, and 78% were considered both relevant and beneficial.

5.3.2. Qualitative insights

Qualitative analysis highlighted specific strengths in AI-generated interactions. Reviewers emphasized that AI demonstrated human-like conversational skills, effectively applying MI techniques in realistic team scenarios. For example:

Student statement: “Our team can’t seem to agree on anything, and it’s really slowing us down.”

AI response (example of open-ended question): “What do you think are the main points of disagreement in your group? How might you collaborate to find common ground?”

Student statement: “I feel some team members aren’t contributing enough, and it’s frustrating.”

AI response (example of reflective listening): “It sounds like you’re frustrated due to perceived imbalances in contributions. Have you discussed these feelings openly with your team?”

Reviewers noted the empathetic and supportive nature of these interactions, highlighting AI’s capability to simulate empathic engagement.

5.3.3. Limitations and improvements

Despite overall positive outcomes, some limitations emerged. Occasionally, the AI provided overly generic feedback, particularly in scenarios demanding detailed, project-specific knowledge. These shortcomings reflect the limitations inherent in the ChatGPT model available at the time of the study. However, significant advances have occurred since our initial research, including the release of newer models such as GPT-4.5 and GPT-5. Preliminary assessments suggest that these improved models better handle nuanced, context-specific conversations, potentially overcoming previous limitations.

5.4. Scalability and security framework

Building on these findings, we propose an AI architecture designed explicitly for secure integration within team collaboration platforms. This framework incorporates data encryption and stringent access controls to ensure confidentiality and data privacy. Moreover, the scalability of the architecture enables simultaneous real-time feedback across multiple teams, addressing resource constraints commonly faced in large-scale educational environments.

6. Discussion

This study highlights the potential of GenAI, particularly ChatGPT, for enhancing team collaboration and student engagement in PBL environments. Findings indicate that AI-generated replies closely align with empathic human responses, demonstrating high relevance (4.2/5) and effective use of MI strategies (4.0/5), with 82% of responses deemed effective. These results align with research emphasizing AI's ability to replicate human-like empathy and conversational engagement (Gilazghi, 2024; Welivita & Pu, 2024), reinforcing its role as a scalable coaching tool in collaborative learning environments where human feedback is limited (Woolf et al., 2010).

The study also identifies challenges in context-specific feedback. AI responses sometimes lacked depth when addressing project-specific inquiries, providing generalized advice rather than precise technical guidance. This finding aligns with previous studies on AI-powered educational tools, which highlight limitations in domain-specific assessment and feedback customization (Fan et al., 2023). While LLMs like ChatGPT have proven effective for task-based evaluation, their ability to offer nuanced, context-rich advice in real-time collaboration remains a key limitation (Zhang et al., 2023). However, our recent observations and tests conducted during the review process indicate that more advanced models, such as GPT-4.5, o1, and o3, have demonstrated significant improvements in generating responses with greater depth, contextual awareness, and project-specific insights. These advancements suggest that future implementations of AI in educational settings may overcome previous limitations regarding specificity and tailored recommendations.

Beyond evaluating AI's effectiveness in simulating MI techniques, this study proposes an architecture for securely integrating GenAI into team collaboration platforms. As illustrated in Figure 3, the framework incorporates encryption protocols to maintain data confidentiality, while access control mechanisms restrict sensitive team communications to authorized educators and administrators. Additionally, real-time analytics enable instructors to monitor engagement and participation patterns, enhancing team interactions through automated, AI-driven feedback loops. Research suggests that ITS and AI-based analytics platforms improve student motivation, self-regulated learning, and feedback personalization, supporting the relevance of GenAI-driven monitoring tools in PBL environments (Saputra et al., 2024; VanLehn, 2011).

Figure 3. An example of AI-driven feedback architecture
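The engagement metrics surfaced on such instructor dashboards are not enumerated in the paper; one plausible example, sketched below under that assumption, is a participation-balance score computed as the normalized entropy of per-member message counts, where values near 1.0 indicate balanced contribution and low values flag possible free-riding.

```python
# Sketch of one hypothetical dashboard metric: participation balance per
# team, as normalized entropy of message counts (1.0 = perfectly balanced).
# The paper does not specify which metrics its dashboard displays.
import math

def participation_balance(message_counts: list[int]) -> float:
    total = sum(message_counts)
    if total == 0 or len(message_counts) < 2:
        return 0.0
    shares = [c / total for c in message_counts]
    entropy = -sum(p * math.log(p) for p in shares if p > 0)
    return entropy / math.log(len(message_counts))

print(participation_balance([25, 24, 26, 23]))  # ~1.0, balanced team
print(participation_balance([70, 5, 3, 2]))     # ~0.37, possible free-riding
```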

The implications of these findings suggest GenAI's transformative role in addressing persistent teamwork challenges, including free-riding, uneven participation, and limited instructor oversight. AI-driven peer assessment tools, such as CATME, have demonstrated effectiveness in evaluating team performance and facilitating peer feedback, further supporting GenAI's integration into collaborative learning environments (Sajadi et al., 2023). Additionally, studies on AI-driven grading and feedback systems emphasize the value of real-time, automated evaluation, ensuring timely, unbiased, and scalable assessment methods (Gao et al., 2024).

Future research should focus on fine-tuning AI models for context-sensitive feedback, ensuring that responses are not only empathic but also domain-specific. Advancements in AI-enhanced educational tools, such as adaptive learning environments and personalized feedback algorithms, may further bridge the gap between automated and human coaching (Nagaraj et al., 2023). Additionally, hybrid AI-human collaboration frameworks should be explored, where AI provides scalable motivational support while instructors offer domain-specific expertise, optimizing both learning outcomes and team collaboration efficiency (Kharrufa et al., 2024). The observed improvements in newer AI models highlight a promising trajectory, suggesting that continuous advancements will further refine the role of GenAI in education, making AI-assisted feedback even more context-aware, precise, and impactful in engineering education and beyond.

7. Conclusion and future work

This study demonstrated the significant potential of GenAI for providing empathic feedback to improve teamwork and student engagement in collaborative engineering design within PBL environments. The AI-generated responses effectively mirrored human MI interactions, exhibiting strong relevance and empathetic communication, though some limitations in delivering context-specific feedback were observed.

To address these limitations, future research will leverage recent advancements in generative AI, such as GPT-4.5 and other advanced models, which have demonstrated substantial improvements in context-awareness and specificity. Additionally, we propose a secure and scalable architecture for integrating a fine-tuned GenAI into team communication platforms, addressing data privacy and instructor oversight concerns. Future studies will validate this architecture through real-world implementations and assess its long-term impacts on student learning, collaboration quality, and teamwork dynamics in large-scale educational contexts.

Acknowledgment

We acknowledge the use of GenAI tools in various stages of this research, including brainstorming, proofreading, grammar checking, and improving the clarity and coherence of the writing. These tools were employed to refine language and structure while ensuring the originality and integrity of the scholarly content. The authors take full responsibility for the final version of this work, including all ideas, arguments, and interpretations presented.

References

Alsaiari, O., Baghaei, N., Lahza, H., Lodge, J., Boden, M., & Khosravi, H. (2024). Emotionally enriched feedback via generative AI. arXiv:2410.15077 [cs.HC]. https://doi.org/10.48550/arXiv.2410.15077
Dym, C. L., Agogino, A. M., Eris, O., Frey, D. D., & Leifer, L. J. (2005). Engineering design thinking, teaching, and learning. Journal of Engineering Education, 94(1), 103–120. https://doi.org/10.1002/j.2168-9830.2005.tb00832.x
Farshad, S., & Fortin, C. (2021a). Distributed cognition transformation in complete online system engineering design teaching. Proceedings of the Design Society, 1, 1313–1322. https://doi.org/10.1017/pds.2021.131
Farshad, S., & Fortin, C. (2023a). A novel method for measuring, visualizing, and monitoring e-collaboration. International Journal of e-Collaboration, 19(1), 1–21. https://doi.org/10.4018/IJeC.317223
Farshad, S., & Fortin, C. (2023b). Active engagement in collaborative engineering design: How to measure and use it in a feedback system. Proceedings of the Design Society, 3, 455–464. https://doi.org/10.1017/pds.2023.46
Farshad, S., Brovar, Y., & Fortin, C. (2024). Enhancing collaborative design through process feedback with motivational interviewing: Can AI play a role? In Danjou, C., Harik, R., & Nyffenegger, R. (Eds.), Advances in product development and manufacturing systems (pp. 244–253). Springer. https://doi.org/10.1007/978-3-031-62582-4_22
Farshad, S., Zorin, E., Amangeldiuly, N., & Fortin, C. (2023). Engagement assessment in project-based education: A machine learning approach in team chat analysis. Education and Information Technologies. https://doi.org/10.1007/s10639-023-12381-5
Gao, X., Gao, Z., & Jiang, Y. (2024). Automated grading and feedback systems in large engineering classes. IEEE Transactions on Education, 67(2), 231–243.
Hmelo-Silver, C. E. (2004). Problem-based learning: What and how do students learn? Educational Psychology Review, 16(3), 235–266. https://doi.org/10.1023/B:EDPR.0000034022.16470.f3
Karau, S. J., & Williams, K. D. (1993). Social loafing: A meta-analytic review and theoretical integration. Journal of Personality and Social Psychology, 65(4), 681–706. https://doi.org/10.1037/0022-3514.65.4.681
Kazemitabaar, M., Baker, R. S., & Richey, M. C. (2024). The impact of AI-assisted coding on novice programmers. Computers & Education, 208, 104901. https://doi.org/10.1016/j.compedu.2023.104901
Khan, M. F. A., Ramsdell, M., Falor, E., & Karimi, H. (2023). Assessing the promise and pitfalls of ChatGPT for automated code generation. arXiv:2302.03260 [cs.PL].
Kharrufa, A., Alghamdi, S., Aziz, A., & Bull, C. (2024). LLMs integration in software engineering team projects: Roles, impact, and a pedagogical design space for AI tools in computing education. arXiv:2410.23069 [cs.SE]. https://doi.org/10.48550/arXiv.2410.23069
Mills, J. E., & Treagust, D. F. (2003). Engineering education—is problem-based or project-based learning the answer? Australasian Journal of Engineering Education, 3(2), 2–16.
Nagaraj, D., Shen, Z., & Dong, Y. (2023). AI-enabled collaborative learning in STEM education. Journal of STEM Education Research, 2(3), 158–172.
Oakley, B., Felder, R. M., Brent, R., & Elhajj, I. (2004). Turning student groups into effective teams. Journal of Student-Centered Learning, 2(1), 9–34.
Olson, G. M., & Olson, J. S. (2000). Distance matters. Human–Computer Interaction, 15(2–3), 139–178. https://doi.org/10.1207/S15327051HCI1523_4
Prince, M. J., & Felder, R. M. (2006). Inductive teaching and learning methods. Journal of Engineering Education, 95(2), 123–138. https://doi.org/10.1002/j.2168-9830.2006.tb00884.x
Saputra, D., Setiawan, W., & Prihatmanto, A. (2024). Intelligent tutoring systems in STEM education. Educational Technology & Society, 27(1), 215–227.
Sajadi, S., Ryan, O., Schibelius, L., & Huerta, M. (2023). WIP: Using generative AI to assist in individual performance feedback for engineering student teams. In 2023 IEEE Frontiers in Education Conference (FIE) (pp. 1–4). IEEE. https://doi.org/10.1109/FIE58773.2023.10343517
Setiawan, A., Irfanuddin, M. F., & Adhisa, R. R. (2024). Effectiveness of AI-based chatbots as virtual tutors in basic programming education. Jurnal Teknologi Informasi dan Pendidikan, 17(2). https://doi.org/10.24036/jtip.v17i2.853
VanLehn, K. (2011). The relative effectiveness of human tutoring, intelligent tutoring systems, and other tutoring systems. Educational Psychologist, 46(4), 197–221. https://doi.org/10.1080/00461520.2011.611369
Wang, F., & Hannafin, M. J. (2020). Design-based research and technology-enhanced learning environments. Educational Technology Research and Development, 48(2), 5–23. https://doi.org/10.1007/s11423-019-09502-1
Welivita, A., & Pu, P. (2024). Is ChatGPT more empathetic than humans? arXiv preprint arXiv:2403.05572.
Woolf, B. P., Burleson, W., Arroyo, I., & Cooper, D. G. (2010). AI-supported collaborative learning environments. Journal of Artificial Intelligence in Education, 20(1), 1–10.
Zhang, L., Chen, Y., & Liu, X. (2023). AI feedback systems and large language models in education. Computers in Human Behavior, 143, 107826. https://doi.org/10.1016/j.chb.2023.107826