TrustAI (Social-Psychological and Cognitive Factors of Trust in AI Social Agents and AI-Generated Information)
General description
With the rapid adoption of AI technologies in everyday life, more and more people are turning to AI agents for advice in sensitive areas—from physical and mental health to decisions affecting quality of life. However, the mechanisms underlying trust in such information remain poorly understood, particularly in the Russian-language context and in text-based communication.
The TrustAI project aims to identify the socio-psychological and cognitive processes through which users construct trust not in the reliability of information per se, but in the AI agent itself as a source. Empirical data suggests that trust can be formed independently of the actual accuracy of recommendations: users tend to accept unreliable information from a "trusted" agent and reject correct information when the source is untrusted. Thus, trust is considered an independent psychological construct that precedes and modulates content evaluation.
The project's goal is to identify the socio-psychological and cognitive determinants of trust in AI agents in a medical context.
Project objectives:
- Analyze existing theories and scales of trust in technology;
- Identify key perceived characteristics of AI relevant to trust;
- Conduct qualitative research (semi-structured interviews and think-aloud protocols);
- Lay the foundation for developing an original AI trust questionnaire.
The project is implemented in two phases:
- Theoretical and analytical phase: a systematic review of the literature and an analysis of the applicability of existing scales (Human–Computer Trust Scale, Anthropomorphism Scale, etc.).
- Empirical phase: a series of semi-structured interviews (N ≈ 30–35) with users of AI chatbots, including think-aloud sessions during interaction with the agent. Data will be analyzed thematically and narratively, drawing on grounded theory.
Scientific and applied significance
The project contributes to the development of the interdisciplinary field of human-AI interaction, seeking to integrate existing trust models into a unified framework. The results will form the basis of a new psychometric tool and will allow for the formulation of recommendations for the design of AI interfaces and the improvement of digital literacy in healthcare. Publication is planned in peer-reviewed journals on social psychology and human-computer interaction.
Project news:
- A theoretical review of trust models for AI agents has been accepted for presentation and publication at the 28th International Conference on Human-Computer Interaction (HCI International 2026, July 26-31, 2026).
- The student project team has begun work, conducted a literature review, and finalized the project's research questions. Guidelines and a protocol for conducting in-depth interviews are being developed.