Dimensions of Trust in AI Coaching Assistants Among Elite Athletes: Interviews with Users and Non-Users

Authors

    Nadereh Saadati Department of Psychology and Counseling, KMAN Research Institute, Richmond Hill, Ontario, Canada
    Mariana Torres * Department of Electrical and Computer Engineering, Monterrey Institute of Technology and Higher Education (ITESM), Monterrey, Mexico mariana.torres@tec.mx

Keywords:

Trust in AI, coaching assistants, elite athletes, qualitative research, transparency, ethics, sports technology

Abstract

This study explored the dimensions of trust in AI coaching assistants among elite athletes, examining the perspectives of both users and non-users. A qualitative research design was adopted, employing semi-structured interviews with 17 elite athletes from Mexico representing diverse sports, including athletics, football, swimming, and martial arts. Participants were selected purposively to capture the experiences of both current or past users (n = 9) and non-users (n = 8) of AI coaching assistants. Interviews were conducted until theoretical saturation was reached; each session lasted 45–70 minutes and was recorded with consent, transcribed verbatim, and pseudonymized. Data were analyzed thematically using Braun and Clarke’s framework, with NVivo 14 software supporting systematic coding and theme development. Four overarching themes emerged: (1) perceived reliability, where athletes valued accurate and consistent feedback but were wary of technical failures; (2) transparency and understanding, with trust enhanced by clear explanations and sound data governance but undermined by opaque “black box” processes; (3) emotional and relational aspects, where athletes appreciated private, supportive feedback but emphasized the irreplaceable motivational role of human coaches; and (4) ethical and contextual considerations, including concerns about fairness, accessibility, accountability, and professional acceptance. Illustrative quotations showed how athletes negotiate trust across these dimensions, reflecting both enthusiasm and caution in adopting AI coaching assistants. Trust in AI coaching assistants among elite athletes is thus shaped by technical, relational, and ethical factors. The findings underscore that AI is most effective when framed as a complement to, rather than a substitute for, human coaching. Building trust requires transparency, fairness, accessibility, and institutional support to ensure the responsible and equitable integration of AI into elite sports contexts.




Published

2024-10-01

Submitted

2024-08-12

Revised

2024-09-15

Accepted

2024-09-27

How to Cite

Saadati, N., & Torres, M. (2024). Dimensions of Trust in AI Coaching Assistants Among Elite Athletes: Interviews with Users and Non-Users. Game Nexus, 1(2), 1-11. https://game-nexus.org/index.php/gamenexus/article/view/7
