Medicine 2026-03-13 4 min read

AI tool detects intimate partner violence risk years before patients seek help

NIH-funded machine learning models trained on routine medical records predicted IPV with 88% accuracy, flagging cases an average of 3.7 years before patients enrolled at a domestic violence center.

For decades, healthcare's approach to intimate partner violence has depended on patients disclosing abuse - telling a doctor, a nurse, or a social worker that someone is hurting them. But IPV affects millions of people in the United States, and many never disclose. Fear, stigma, financial dependence, and the very real danger of retaliation keep patients silent. By the time they do seek help, they may have endured years of physical and psychological harm.

What if the medical record itself could flag the risk?

A team led by researchers at Harvard Medical School and Mass General Brigham, funded by the National Institutes of Health, has built machine learning models that do exactly that. Using data routinely collected during medical visits - diagnoses, medications, clinical notes, radiology reports, and social determinants of health - the models identified patients at risk of IPV with up to 88% accuracy. More strikingly, they detected that risk an average of 3.7 years before patients enrolled at a hospital-based domestic violence intervention center.

Three models, one clear winner

The research team designed three distinct AI models to account for the variability in how different healthcare systems collect and record data. The first used only structured data - the kind stored in neat database tables, including diagnoses, medication lists, and a social deprivation index based on zip code. The second used unstructured data - free-text clinical notes, radiology reports, and emergency department documentation. The third, called Holistic AI in Medicine (HAIM), fused both data types.
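The fusion idea can be illustrated in miniature: tabular features and features derived from free text are concatenated into one vector, which a single classifier then scores. The sketch below is purely illustrative - the keyword flags, weights, and feature names are invented for this example and are not the authors' HAIM pipeline, which uses learned models over real clinical data.

```python
import math

# Hypothetical keyword flags standing in for text-derived features
# (the real system uses learned representations of clinical notes).
RISK_TERMS = {"injury", "fracture", "anxiety"}

def text_features(note):
    """Crude bag-of-flags from a free-text note (illustrative only)."""
    words = set(note.lower().split())
    return [1.0 if t in words else 0.0 for t in sorted(RISK_TERMS)]

def fuse(tabular, note):
    """Late fusion: concatenate tabular features with text-derived ones."""
    return list(tabular) + text_features(note)

def score(features, weights, bias=0.0):
    """Logistic score in [0, 1] from a linear model over fused features."""
    z = bias + sum(w * x for w, x in zip(weights, features))
    return 1.0 / (1.0 + math.exp(-z))

# Toy example: two tabular features (ED visit count, chronic-pain flag)
# plus three text flags. Weights are made up, not learned.
weights = [0.8, 1.2, 0.5, 0.9, 0.7]
x = fuse([3.0, 1.0], "presenting with wrist fracture and anxiety")
risk = score(x, weights, bias=-4.0)
```

The point of fusing is that a signal weak in either modality alone - a borderline injury pattern in the notes plus frequent ED visits in the tabular data - can still push the combined score over a flagging threshold.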

The models were trained on records from nearly 850 female patients who had visited a domestic abuse intervention and prevention center at a U.S. academic health center between 2017 and 2022, along with approximately 5,200 demographically matched controls who had not reported IPV.

All three models performed well when tested on a separate group of 168 patients and over 1,000 controls. But the fusion model achieved the highest accuracy at 88%. When evaluated using time-stamped archived records, it predicted 80.5% of cases in advance - on average more than 3.7 years before patients sought care at the intervention center. Both the tabular model and the fusion model could flag risk years ahead, though each had different strengths: the tabular model achieved slightly earlier recognition, while the fusion model detected more cases overall.
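The lead-time figure can be made concrete: for each correctly flagged patient, the date of the model's first risk flag in the archived record is compared with the enrollment date, and those gaps are averaged. A toy sketch with invented dates (not study data):

```python
from datetime import date

def lead_years(flag_date, enroll_date):
    """Years between a model's first risk flag and center enrollment."""
    return (enroll_date - flag_date).days / 365.25

# Illustrative flag/enrollment date pairs, not real patient records.
pairs = [
    (date(2018, 1, 10), date(2021, 6, 1)),
    (date(2017, 5, 3), date(2022, 2, 15)),
]
leads = [lead_years(flag, enroll) for flag, enroll in pairs]
mean_lead = sum(leads) / len(leads)
```

Averaged over the study's retrospective test set, this kind of gap is what produced the reported 3.7-year figure.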

What the data reveals

The study identified several clinical patterns associated with higher IPV risk: mental health disorders, chronic pain, and frequent emergency department visits. Conversely, patients who regularly accessed preventive services like mammograms and immunizations had lower risk - suggesting that engagement with routine healthcare may itself be a protective signal.

The inclusion of radiology data proved particularly valuable. Previous research by senior author Bharti Khurana, an emergency radiologist at Mass General Brigham, had shown that women who frequently undergo imaging studies in the emergency department and have specific injury patterns are more likely to later report IPV. The new AI models built on that finding, incorporating radiological patterns as one input among many.

A decision support tool, not a diagnostic

The researchers were emphatic that these models are not designed to diagnose IPV or make clinical decisions. They are decision support tools intended to help healthcare providers initiate earlier, safer, more informed conversations with patients who may be at risk.

Khurana described the approach as a shift from reactive disclosure to proactive risk recognition within routine clinical care. Rather than waiting for patients to volunteer information about abuse, clinicians would receive a risk flag and be equipped with guidance on how to approach the conversation supportively.

The research team developed companion resources at their project website to help clinicians navigate these conversations. The goal, Khurana emphasized, is never to force disclosure but to create conditions where patients feel safe enough to accept help if they choose to.

Limitations and what comes next

The models were developed and validated using data from patients who had already sought care for or disclosed IPV. This introduces a selection bias: the AI may be less accurate for individuals who are experiencing violence but have not yet interacted with domestic violence services. The control group may also contain false negatives - patients who were experiencing IPV but never reported it - which could reduce apparent model accuracy.

The study population was predominantly female, reflecting the higher documented prevalence of IPV among women. Whether the models would perform similarly for male patients or for populations with different demographic profiles remains untested.

The team plans to develop a decision-support tool embedded directly in electronic medical record systems, providing real-time IPV risk evaluations during clinical encounters. Expanding the training data to include larger, more diverse patient populations over longer time periods should improve both accuracy and generalizability.

The short answer to whether AI can meaningfully improve IPV detection: the early evidence says yes, but the tool's real-world impact will depend on whether healthcare systems adopt it - and whether clinicians are trained to act on the information it provides.

Source: NIH National Institute of Biomedical Imaging and Bioengineering (NIBIB). Gu J, Villalobos Carballo K, Ma Y, Bertsimas D, and Khurana B. Published in npj Women's Health. DOI: 10.1038/s44294-025-00126-3. Funded by NIBIB grant R01EB032384.