
2 - User Interaction for Human–AI Interaction and Collaboration

Published online by Cambridge University Press

Dan Wu
Affiliation:
Wuhan University, China
Shaobo Liang
Affiliation:
Wuhan University, China

Summary

Human-centered design concepts must be integrated at the core of both algorithmic research and application development. Doing so requires an understanding of human–AI interaction and collaboration from the user's perspective. To that end, this chapter first describes the process of human–AI interaction and collaboration and then proposes a theoretical framework for it. Within this framework, current research hotspots are identified at two levels: interaction quality and interaction mode. At the level of interaction quality, user mental modeling, interpretable AI, trust, and anthropomorphism attract the most academic attention. The level of interaction mode encompasses interaction paradigms, role assignment, interaction boundaries, and interaction ethics. To further advance this research, the chapter identifies three areas for future exploration: cognitive frameworks for human–AI interaction, adaptive learning, and the complementary strengths of humans and AI.

Information

Type: Chapter
Publisher: Cambridge University Press
Print publication year: 2025



