Japanese Medical Trainees Split Between AI Optimism and AI Anxiety in Nationwide Survey
When a medical student or resident encounters an AI tool in clinical training - a diagnostic algorithm, a documentation assistant, an imaging analysis system - their response is not purely technical. It is shaped by how they feel about AI, what they expect from it, and how much they trust it. Those attitudes determine whether AI tools get used well, poorly, or not at all. They also predict resistance to training and gaps in adoption that no amount of software improvement can fix.
Measuring those attitudes accurately requires a validated instrument - a questionnaire with known psychometric properties that produces reliable, interpretable scores. In 2024, such a tool was developed for general use: the 12-item ATTARI-12 scale, covering affective, cognitive, and behavioral dimensions of attitudes toward artificial intelligence. For Japanese medical education, however, a gap remained: no validated Japanese version existed, and cultural factors, including uncertainty avoidance and specific social norms around technology, may shape how Japanese respondents interpret and answer questions about AI in ways that affect scale validity.
Building the Japanese version
A team from Juntendo University led by Project Assistant Professor Hirohisa Fujikawa, in collaboration with Dr. Kayo Kondo from Durham University in the United Kingdom, followed established international guidelines for translating and culturally adapting measurement instruments. The process involves forward and backward translation, review by bilingual content experts, and pilot testing to identify items that may function differently across cultural contexts.
The resulting J-ATTARI-12 was administered in a nationwide online survey conducted between June and July 2025, reaching medical students and residents from multiple universities and hospitals across Japan. A total of 326 participants were included in the final analysis.
The analysis used a split-half validation approach: half the sample was used for exploratory factor analysis (EFA) to identify how responses clustered, and the other half was used for confirmatory factor analysis (CFA) to test whether the structure identified in the first group held up. This approach is more rigorous than testing the whole sample in one step, which risks overfitting the analysis to sample-specific noise.
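The split-half step itself is a simple random partition of the sample. The sketch below illustrates the idea in plain Python; the function name, the fixed seed, and the use of participant indices are assumptions for illustration, not details from the study.

```python
import random

def split_half(participant_ids, seed=2025):
    """Randomly partition participants into two halves:
    one for exploratory factor analysis (EFA), one for
    confirmatory factor analysis (CFA).

    Illustrative sketch only - the study's actual splitting
    procedure and seed are not reported here.
    """
    ids = list(participant_ids)
    rng = random.Random(seed)  # fixed seed makes the split reproducible
    rng.shuffle(ids)
    mid = len(ids) // 2
    return ids[:mid], ids[mid:]  # (EFA half, CFA half)

# With the study's n = 326, each half gets 163 participants
efa_half, cfa_half = split_half(range(326))
print(len(efa_half), len(cfa_half))  # 163 163
```

Because the EFA and CFA halves share no participants, the confirmatory test is an honest check on fresh data rather than a re-description of the sample the structure was derived from.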
Two factors, not one
The EFA identified a two-factor structure. The first factor captured what the researchers termed "AI anxiety and aversion" - responses reflecting discomfort, distrust, and concern about the consequences of AI in medical practice. The second captured "AI optimism and acceptance" - positive expectations, openness to adoption, and willingness to engage with AI tools.
The CFA confirmed that this two-factor model fit the data significantly better than a single-factor model, which would have treated all attitudes toward AI as a single dimension. In practice, this distinction matters: a trainee can hold positive views about AI's technical capabilities while simultaneously feeling anxious about loss of clinical autonomy or data privacy. A single-factor scale would obscure this nuance.
Internal consistency reliability was high, with Cronbach's alpha meeting accepted thresholds for both factors, and convergent validity was supported by a moderate positive correlation between J-ATTARI-12 scores and attitudes toward robots - a related construct that provides an independent check on what the scale is actually measuring.
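For readers unfamiliar with the statistic, Cronbach's alpha can be computed directly from item scores. The pure-Python sketch below shows the standard formula; the toy data are invented for illustration and have no connection to the study's results.

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a set of questionnaire items.

    items: list of k lists, each holding one item's scores across
    all respondents. Formula: alpha = k/(k-1) * (1 - sum of item
    variances / variance of respondents' total scores).
    """
    def var(xs):  # sample variance (n-1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    k = len(items)
    totals = [sum(col) for col in zip(*items)]  # each respondent's total score
    return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

# Invented toy data: 3 items, 4 respondents, answering fairly consistently
scores = [[4, 2, 5, 3], [5, 2, 4, 3], [4, 1, 5, 4]]
print(round(cronbach_alpha(scores), 2))  # 0.92
```

The more consistently respondents answer the items of a factor in the same direction, the closer alpha climbs toward 1; values around 0.7 or above are conventionally treated as acceptable.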
How this gets used
"Educators can use this scale to evaluate AI-related training and identify learners who may feel uncertain or hesitant about using AI. It also allows researchers to track how attitudes evolve as AI becomes more integrated into healthcare," said Dr. Fujikawa.
Juntendo University plans to deploy the J-ATTARI-12 in a new "Medicine and AI" curriculum launching in 2026, using baseline scores to identify students who need additional support or different pedagogical approaches. The scale is also expected to facilitate cross-national comparisons of medical trainee attitudes toward AI - research that requires equivalent instruments across languages and cultures.
What the study cannot address
The survey captured attitudes at a single point in time during mid-2025. Whether the two-factor structure is stable across different cohorts, different points in training, or different healthcare settings has not been tested. The nationwide reach was achieved through online recruitment across multiple institutions, which introduces potential selection bias - students and residents willing to complete a survey about AI attitudes may already be more engaged with the topic than the average trainee.
The scale also measures self-reported attitudes rather than actual behavior. Whether high AI-optimism scores predict greater use and skill development with AI tools in clinical practice, and whether high anxiety scores predict avoidance, are empirical questions that prospective follow-up studies will need to address.
"The successful adoption of AI in healthcare depends on clinicians' acceptance as much as on technological performance. Making these attitudes visible enables better education and more responsible implementation," said Dr. Fujikawa.
Lead researcher: Project Assistant Professor Hirohisa Fujikawa, Department of General Medicine, Juntendo University
Study: Published in JMIR Medical Education, Volume 12, e81986, January 14, 2026. DOI: https://doi.org/10.2196/81986
Sample: 326 medical students and residents from multiple Japanese institutions, June-July 2025