(Press-News.org) At a glance:
A new study reveals that pathology AI models for cancer diagnosis perform unequally across demographic groups.
The researchers identified three explanations for the bias and developed a tool that reduced it.
The findings highlight the need to systematically check for bias in pathology AI to ensure equitable care for patients.
Pathology has long been the cornerstone of cancer diagnosis and treatment. A pathologist carefully examines an ultrathin slice of human tissue under a microscope for clues that indicate the presence, type, and stage of cancer.
To a human expert, looking at a swirly pink tissue sample studded with purple cells is akin to grading an exam without a name on it — the slide reveals essential information about the disease without providing other details about the patient.
Yet the same isn’t necessarily true of pathology artificial intelligence models that have emerged in recent years. A new study led by a team at Harvard Medical School shows that these models can somehow infer demographic information from pathology slides, leading to bias in cancer diagnosis among different populations.
Analyzing several major pathology AI models designed to diagnose cancer, the researchers found unequal performance in detecting and differentiating cancers across populations based on patients’ self-reported gender, race, and age. They identified several possible explanations for this demographic bias.
The team then developed a framework called FAIR-Path that helped reduce bias in the models.
“Reading demographics from a pathology slide is thought of as a ‘mission impossible’ for a human pathologist, so the bias in pathology AI was a surprise to us,” said senior author Kun-Hsing Yu, associate professor of biomedical informatics in the Blavatnik Institute at HMS and HMS assistant professor of pathology at Brigham and Women’s Hospital.
Identifying and counteracting AI bias in medicine is critical because it can affect diagnostic accuracy, as well as patient outcomes, Yu said. FAIR-Path’s success indicates that researchers can improve the fairness of AI models for cancer pathology, and perhaps other AI models in medicine, with minimal effort.
The work, which was supported in part by federal funding, is described Dec. 16 in Cell Reports Medicine.
Testing for bias
Yu and his team investigated bias in four standard AI pathology models being developed for cancer evaluation. These deep-learning models were trained on sets of annotated pathology slides, from which they “learned” biological patterns that enable them to analyze new slides and offer diagnoses.
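The four models aren't detailed here, but most deep-learning pathology systems share a common shape: a whole-slide image is split into small tiles, each tile is embedded by a neural network, and the tile embeddings are pooled into a single slide-level prediction. The sketch below is a generic, hypothetical illustration of that architecture in PyTorch, not the code of any model in the study.

```python
# Generic illustration (not any of the four models in the study): a whole-slide
# classifier that embeds image tiles with a small CNN and attention-pools them
# into one slide-level prediction over cancer types or subtypes.
import torch
import torch.nn as nn

class SlideClassifier(nn.Module):
    def __init__(self, n_cancer_types, feat_dim=512):
        super().__init__()
        self.tile_encoder = nn.Sequential(              # stand-in for a pretrained CNN backbone
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU(),
        )
        self.attention = nn.Linear(feat_dim, 1)          # weighs how much each tile matters
        self.classifier = nn.Linear(feat_dim, n_cancer_types)

    def forward(self, tiles):                            # tiles: (num_tiles, 3, H, W) from one slide
        feats = self.tile_encoder(tiles)                 # (num_tiles, feat_dim)
        weights = torch.softmax(self.attention(feats), dim=0)
        slide_feat = (weights * feats).sum(dim=0)        # attention-pooled slide representation
        return self.classifier(slide_feat)               # logits over diagnostic classes

# usage: logits = SlideClassifier(n_cancer_types=2)(torch.randn(64, 3, 128, 128))
```

In practice the tile encoder would be a large pretrained backbone and a slide would yield thousands of tiles; the toy backbone here only keeps the example self-contained.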
The researchers fed the AI models a large, multi-institutional repository of pathology slides spanning 20 cancer types.
They discovered that all four models showed biased performance, providing less accurate diagnoses for patients in specific groups based on self-reported race, gender, and age. For example, the models struggled to differentiate lung cancer subtypes in African American and male patients, and breast cancer subtypes in younger patients. The models also had trouble detecting breast, renal, thyroid, and stomach cancer in certain demographic groups. These performance disparities appeared in around 29 percent of the diagnostic tasks the models performed.
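The article doesn't spell out how those diagnostic tasks were scored. One plausible audit, sketched below under assumed inputs (per-case model scores, true labels, and self-reported group membership for each task), computes a per-group AUC, measures the gap between the best- and worst-served groups, and flags tasks where the gap exceeds a chosen threshold; the 0.05 threshold and the helper names are hypothetical, not the authors' protocol.

```python
# Minimal sketch of a fairness audit, under assumed inputs; not the study's exact method.
import numpy as np
from sklearn.metrics import roc_auc_score

def audit_task(y_true, y_score, groups, min_gap=0.05):
    """Per-group AUC for one diagnostic task; flag the task if the gap is large.

    y_true, y_score: binary labels and model scores per case (NumPy arrays);
    groups: self-reported demographic group per case.
    """
    auc_by_group = {}
    for g in np.unique(groups):
        mask = groups == g
        if len(np.unique(y_true[mask])) < 2:          # AUC needs both classes present
            continue
        auc_by_group[g] = roc_auc_score(y_true[mask], y_score[mask])
    gap = max(auc_by_group.values()) - min(auc_by_group.values())
    return auc_by_group, gap, gap >= min_gap          # flag a sizable disparity

def disparity_rate(tasks):
    """Fraction of tasks flagged; each task is a dict with 'y', 'score', 'group' arrays."""
    flags = [audit_task(t["y"], t["score"], t["group"])[2] for t in tasks]
    return float(np.mean(flags))
```

Averaging the flags over all tasks gives the kind of fraction the study reports (roughly 29 percent in their evaluation), though the authors' exact criteria may differ.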
This diagnostic inaccuracy, Yu said, happens because these models extract demographic information from the slides — and rely on demographic-specific patterns to make a diagnosis.
The results were unexpected “because we would expect pathology evaluation to be objective,” Yu added. “When evaluating images, we don’t necessarily need to know a patient’s demographics to make a diagnosis.”
The team wondered: Why didn’t pathology AI show the same objectivity?
Searching for explanations
The researchers landed on three explanations.
The first explanation is unequal training data: because it is easier to obtain samples from patients in certain demographic groups, the AI models are trained on unequal sample sizes. As a result, the models have a harder time making an accurate diagnosis for samples that aren't well represented in the training set, such as those from minority groups based on race, age, or gender.
Yet “the problem turned out to be much deeper than that,” Yu said. The researchers noticed that sometimes the models performed worse in one demographic group, even when the sample sizes were comparable.
Additional analyses pointed to a second explanation: differential disease incidence. Some cancers are more common in certain groups, so the models become better at making a diagnosis in those groups. As a result, the models may have difficulty diagnosing cancers in populations where they aren't as common.
Third, the AI models pick up on subtle molecular differences in samples from different demographic groups. For example, the models may detect mutations in cancer driver genes and use them as a proxy for cancer type, and thus be less effective at making a diagnosis in populations in which these mutations are less common.
“We found that because AI is so powerful, it can differentiate many obscure biological signals that cannot be detected by standard human evaluation,” Yu said.
As a result, the models may learn signals that are more related to demographics than disease. That, in turn, could affect their diagnostic ability across groups.
Together, Yu said, these explanations suggest that bias in pathology AI stems not only from the variable quality of the training data but also from how researchers train the models.
Finding a fix
After assessing the scope and sources of the bias, Yu and his team wanted to fix it.
The researchers developed FAIR-Path, a simple framework based on an existing machine-learning concept called contrastive learning. Contrastive learning involves adding an element to AI training that teaches the model to emphasize the differences between essential categories — in this case, cancer types — and to downplay the differences between less crucial categories — here, demographic groups.
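The exact form of the FAIR-Path objective isn't given in this article. The sketch below shows one common way to implement the idea with a supervised contrastive loss, where an anchor's positives are slides of the same cancer type drawn from a different demographic group, so the model is rewarded for representing the disease rather than the demographics. Function and variable names are illustrative assumptions, not the authors' code.

```python
# Sketch of a fairness-oriented supervised contrastive loss (assumed formulation).
import torch
import torch.nn.functional as F

def fairness_contrastive_loss(embeddings, cancer_labels, group_labels, temperature=0.1):
    """embeddings: (N, D); cancer_labels, group_labels: (N,) integer tensors for one batch."""
    z = F.normalize(embeddings, dim=1)                     # unit-length embeddings
    logits = z @ z.T / temperature                         # pairwise similarities
    n = z.size(0)
    self_mask = torch.eye(n, dtype=torch.bool, device=z.device)

    same_cancer = cancer_labels.unsqueeze(0) == cancer_labels.unsqueeze(1)
    diff_group = group_labels.unsqueeze(0) != group_labels.unsqueeze(1)
    positives = same_cancer & diff_group & ~self_mask      # same disease, different demographics

    logits = logits.masked_fill(self_mask, float("-inf"))  # never contrast a sample with itself
    log_prob = logits - torch.logsumexp(logits, dim=1, keepdim=True)

    pos_per_anchor = positives.sum(dim=1)
    has_pos = pos_per_anchor > 0                           # skip anchors with no valid positive
    pos_log_prob = log_prob.masked_fill(~positives, 0.0).sum(dim=1)
    return -(pos_log_prob[has_pos] / pos_per_anchor[has_pos]).mean()
```

In training, a term like this would typically be added to the usual diagnostic cross-entropy loss with a small weighting coefficient.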
When the researchers applied the FAIR-Path framework to the models they’d tested, it reduced the diagnostic disparities by around 88 percent.
“We show that by making this small adjustment, the models can learn robust features that make them more generalizable and fairer across different populations,” Yu said.
The finding is encouraging, he added, because it suggests that bias can be reduced even without training the models on completely fair, representative data.
Next, Yu and his team are collaborating with institutions around the world to investigate the extent of bias in pathology AI in settings with different patient demographics and different clinical and pathology practices. They are also exploring ways to extend FAIR-Path to settings with limited sample sizes. Additionally, they would like to investigate how bias in AI contributes to demographic disparities in health care and patient outcomes.
Ultimately, Yu said, the goal is to create fair, unbiased pathology AI models that can improve cancer care by helping human pathologists quickly and accurately make a diagnosis.
“I think there’s hope that if we are more aware of and careful about how we design AI systems, we can build models that perform well in every population,” he said.
Authorship, funding, disclosures
Additional authors on the study include Shih-Yen Lin, Pei-Chen Tsai, Fang-Yi Su, Chun-Yen Chen, Fuchen Li, Junhan Zhao, Yuk Yeung Ho, Tsung-Lu Michael Lee, Elizabeth Healey, Po-Jen Lin, Ting-Wan Kao, Dmytro Vremenko, Thomas Roetzer-Pejrimovsky, Lynette Sholl, Deborah Dillon, Nancy U. Lin, David Meredith, Keith L. Ligon, Ying-Chun Lo, Nipon Chaisuriya, David J. Cook, Adelheid Woehrer, Jeffrey Meyerhardt, Shuji Ogino, MacLean P. Nasrallah, Jeffrey A. Golden, Sabina Signoretti, and Jung-Hsien Chiang.
Funding was provided by the National Institute of General Medical Sciences and the National Heart, Lung, and Blood Institute at the National Institutes of Health (grants R35GM142879, R01HL174679), the Department of Defense (Peer Reviewed Cancer Research Program Career Development Award HT9425-231-0523), the American Cancer Society (Research Scholar Grant RSG-24-1253761-01-ESED), a Google Research Scholar Award, a Harvard Medical School Dean’s Innovation Award, the National Science and Technology Council of Taiwan (grants NSTC 113-2917-I-006-009, 112-2634-F-006-003, 113-2321-B-006-023, 114-2917-I-006-016), and a doctoral student scholarship from the Xin Miao Education Foundation.
Ligon was a consultant for Travera, Bristol Myers Squibb, Servier, IntegraGen, L.E.K. Consulting, and Blaze Bioscience; received equity from Travera; and has research funding from Bristol Myers Squibb and Lilly. Vremenko is a cofounder and shareholder of Vectorly.
The authors prepared the initial manuscript and used ChatGPT to edit selected sections to improve readability. After using this tool, the authors reviewed and edited the content as needed and take full responsibility for the content of the published article.