Medicine 2026-03-03 4 min read

AI Screened 1,476 Patient Records in a Week and Found Trial Candidates Doctors Had Missed

A Cleveland Clinic study shows large language models can identify rare disease trial participants with 96% accuracy - and find significantly more Black patients than traditional recruitment.

Clinical trials have a recruitment problem. Finding eligible participants - people with the right diagnosis, the right disease stage, the right absence of confounding conditions - is slow, expensive, and often inequitable. It typically relies on physicians remembering which of their patients might qualify and manually flagging them, a process that favors patients who are already most visible in the healthcare system.

A study from Cleveland Clinic and health technology company Dyania Health, focused on a Phase 3 trial for a rare heart condition called transthyretin amyloid cardiomyopathy (ATTR-CM), tested whether a large language model could do that screening better. The results - drawn from a review of 1,476 patient records in a single week - suggest the answer is yes, and that the technology may also help address one of clinical research's most persistent failures: the underrepresentation of Black patients in trials for conditions that disproportionately affect them.

What the AI Was Asked to Do

ATTR-CM is caused by a misfolded protein that deposits in the heart, progressively stiffening it. The wild-type form of the disease - which is not inherited - predominantly affects men over 60 and is significantly more prevalent among Black Americans due to a genetic variant called Val122Ile that is carried by roughly 3-4% of people of African ancestry. Despite this, Black patients have historically been underrepresented in ATTR-CM clinical trials.

The AI system reviewed electronic medical records and evaluated each patient against the trial's eligibility criteria - a complex set of inclusion and exclusion rules that can run to dozens of data points per patient. Over the course of one week, the system processed 1,476 records, answered 7,700 trial-specific questions with 96.2% accuracy, and achieved a 99% negative predictive value for correctly excluding ineligible patients. That last figure matters particularly: a screener that is accurate overall but quietly generates false negatives - wrongly ruling out eligible patients - would undermine the purpose of the exercise.
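For readers unfamiliar with the metric, negative predictive value is the share of patients the screen ruled out who were truly ineligible. A minimal sketch of the arithmetic - with hypothetical counts, not the study's actual confusion matrix, which it does not publish:

```python
# Illustrative only: standard confusion-matrix metrics for a screening tool.
# The counts below are hypothetical, chosen to total roughly 1,476 records.

def screening_metrics(tp: int, fp: int, tn: int, fn: int) -> dict:
    """Compute metrics from true/false positives and negatives."""
    total = tp + fp + tn + fn
    return {
        "accuracy": (tp + tn) / total,   # share of all eligibility calls that were right
        "npv": tn / (tn + fn),           # of patients excluded, share truly ineligible
        "sensitivity": tp / (tp + fn),   # share of truly eligible patients the screen caught
    }

# Hypothetical example: 40 true positives, 6 false positives,
# 1,420 true negatives, 10 false negatives.
print(screening_metrics(tp=40, fp=6, tn=1420, fn=10))
# -> accuracy ~0.989, npv ~0.993, sensitivity 0.80
```

The point of the example is the denominator: NPV is computed only over the patients the system excluded, so a 99% figure means that of every hundred patients ruled out, roughly one was actually eligible - the specific failure mode that would defeat the purpose of automated screening.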

The 29 Patients Who Would Have Been Missed

Of the 46 patients the AI flagged as potential trial candidates, 30 were eventually contacted by the research team. Of those, 29 had not been identified through any routine clinical recruitment pathway. They were not on a specialist's radar, had not been referred to the trial by their cardiologist, and would likely never have been asked to participate under a conventional screening approach.

That is a significant finding. It means the AI was not just doing what human recruiters do, only faster - it was reaching patients the existing system was structurally likely to miss. The reasons patients fall through are varied: not everyone with a relevant diagnosis sees a heart failure specialist, not all physicians actively monitor trial opportunities for their patients, and the cognitive load on clinicians managing complex caseloads makes systematic trial matching essentially impossible at scale.

A Different Patient Profile

The demographic profile of the AI-identified patients differed substantially from those found through routine recruitment. Among AI-identified candidates, 36.6% were Black - compared to 7.1% through standard screening. That gap reflects a structural problem in how clinical research recruitment typically works: it tends to surface patients who are already most embedded in academic medical center networks, who have established relationships with specialists, and who are therefore more likely to be white, more affluent, and living closer to the research institution.

The AI reviewed records without those biases. It did not know which patients had a relationship with a prominent cardiologist or which had been seen multiple times by specialists. It evaluated eligibility criteria and nothing else, which produced a more representative pool.
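To make concrete what "eligibility criteria and nothing else" means, here is a minimal sketch of criterion-by-criterion screening. The study does not describe Dyania Health's actual architecture; the criteria and function names below are hypothetical, and naive keyword checks stand in for the language model that would actually read the chart:

```python
# Hypothetical sketch: reduce each inclusion/exclusion rule to a yes/no
# question over the record text. The keyword checks are illustrative
# stand-ins for model calls, not the study's implementation.

def documents_attr_cm(record: str) -> bool:
    # Stand-in for a model reading the chart; a real system must handle
    # abbreviations ("ATTR-CM"), synonyms, and negated mentions.
    return "amyloid cardiomyopathy" in record.lower()

def no_exclusionary_history(record: str) -> bool:
    # Hypothetical exclusion criterion, again a naive stand-in.
    return "heart transplant" not in record.lower()

CRITERIA = [documents_attr_cm, no_exclusionary_history]

def screen(record: str) -> bool:
    """Flag a patient only if every criterion holds. Note what is absent:
    nothing about referrals, specialist relationships, or how visible the
    patient is in the health system enters the decision."""
    return all(criterion(record) for criterion in CRITERIA)

note = "72M with wild-type transthyretin amyloid cardiomyopathy, NYHA class II."
print(screen(note))  # True - flagged for human review, not auto-enrolled
```

Even in this toy form, the design choice is visible: a flag is a prompt for the research team to reach out, not an enrollment decision - consistent with the study, where flagged patients were contacted and assessed by human staff.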

Only 60% of the AI-identified patients had any prior connection to a heart failure specialist, compared to 92.8% of patients found through traditional methods. The implication is that AI screening could help trials access the broader population of people with a condition, not just the subset already navigating specialist care.

What This Does Not Solve

Identifying a patient as potentially eligible is not the same as enrolling them. The study does not report how many of the 29 previously overlooked patients ultimately joined the trial, or whether enrollment rates differed between AI-identified and traditionally recruited patients. Those questions matter for assessing the real-world impact of the technology.

There are also legitimate questions about AI systems processing sensitive medical records at scale - privacy protections, audit trails, and oversight mechanisms need to be robust, particularly when the underlying data involves populations that have historically been subjected to exploitative research practices. The study does not detail the data governance framework in place, which is information clinical research administrators would want before adopting a similar approach.

Still, the core performance figures are striking. A system that screens nearly 1,500 complex records in a week with 96% accuracy - and finds patients that traditional methods miss - addresses a genuine bottleneck in clinical research. For rare diseases, where eligible trial participants may number in the hundreds nationally, the ability to find more of them faster could meaningfully accelerate the development of treatments that patients are waiting for.

Source: Study by Cleveland Clinic and Dyania Health examining AI-assisted screening for a Phase 3 transthyretin amyloid cardiomyopathy trial. Contact: Alicia Reale, Cleveland Clinic; realeca@ccf.org; 216-408-7444.