Medicine 2026-03-17

Multi-view AI reads heart ultrasounds the way cardiologists do - by looking at more than one angle

A new neural network architecture combines multiple echocardiogram views simultaneously, outperforming single-view AI for detecting three major cardiac conditions.

A cardiologist reading an echocardiogram never looks at just one slice. They scroll through dozens of views, mentally stitching together a three-dimensional picture of a beating heart from flat, grayscale images. But until now, the AI systems built to assist them have been stuck doing something much simpler - analyzing a single two-dimensional view at a time.

That limitation is significant. The left ventricle, for instance, can look perfectly normal from one angle while hiding serious dysfunction in a wall only visible from another. A team at UC San Francisco decided to fix this by building a neural network architecture that processes multiple echocardiogram views simultaneously - the way a human reader would.

Stitching together a 3D heart from 2D slices

The UCSF team, led by cardiologist Geoffrey Tison and first author Joshua Barrios, designed what they call a "multiview" deep neural network (DNN). Rather than feeding the model a single echocardiogram view and asking it to render a verdict, their architecture ingests multiple views at once and learns to extract disease-relevant relationships between them.
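The general idea can be sketched in a few lines. This is a toy NumPy illustration of the multiview pattern - per-view feature encoders whose outputs are fused before a single shared classifier - not the paper's actual architecture; the dimensions, linear encoders, and function names here are all illustrative stand-ins (real echo models would use convolutional networks over video).

```python
import numpy as np

rng = np.random.default_rng(0)

def encode_view(frames, weights):
    """Toy per-view encoder: linear projection + ReLU.
    `frames` stands in for a flattened 2D echo view."""
    return np.maximum(frames @ weights, 0.0)

# Illustrative dimensions: 3 views, each flattened to 64 values,
# encoded to 16 features, fused into one abnormal-vs-normal score.
n_views, view_dim, feat_dim = 3, 64, 16
encoders = [rng.normal(size=(view_dim, feat_dim)) for _ in range(n_views)]
fusion_w = rng.normal(size=(n_views * feat_dim,))

def multiview_predict(views):
    """Fuse all views into one representation before classifying,
    so the classifier can weigh cross-view feature combinations."""
    feats = [encode_view(v, w) for v, w in zip(views, encoders)]
    fused = np.concatenate(feats)         # joint multi-view representation
    logit = fused @ fusion_w              # single shared classifier
    return 1.0 / (1.0 + np.exp(-logit))   # probability of "abnormal"

views = [rng.normal(size=view_dim) for _ in range(n_views)]
p = multiview_predict(views)
```

The key design point is that the classifier sees all views jointly, so during training its weights can come to depend on combinations of features from different imaging planes - something no single-view model can learn.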

They trained the system to detect three cardiovascular conditions: left and right ventricular abnormalities, diastolic dysfunction, and valvular regurgitation. Each of these requires information scattered across different echo views - exactly the kind of task where a single-view model falls short.

The results, published March 17 in Nature Cardiovascular Research, showed that the multiview DNN consistently outperformed any single-view model across all three diagnostic tasks. The team validated their findings using echocardiogram data from both UCSF and the Montreal Heart Institute, demonstrating that the improvement held across different patient populations and imaging practices.

Why one view tells only part of the story

Consider assessing left ventricular function. The standard apical four-chamber view (A4c) captures the inferoseptal and anterolateral walls well. But the anterior and inferior walls? Those require a perpendicular view called the apical two-chamber (A2c). A patient could show completely normal wall motion in one view and significant dysfunction in the other.

This is the fundamental problem with single-view AI in echocardiography. Each view contains only partial information, and the clinical picture emerges from combining them. The multiview architecture was explicitly designed to let the model learn these cross-view relationships - the interplay between features visible in different imaging planes.

A cheaper alternative that still beats single-view models

The researchers also tested a simpler approach: training three separate single-view DNNs and averaging their predictions. This ensemble method improved over any individual single-view model and required less computational power than the full multiview architecture. It offers a practical middle ground for clinical settings where computing resources are limited.

But the multiview DNN still came out on top. Its integrated architecture captured relationships between views that simple averaging could not.
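The contrast with the ensemble baseline is easy to see in code. In this hedged sketch (again with stand-in linear models, not the study's networks), each single-view model scores only its own view, and the outputs are simply averaged - no cross-view features are ever computed.

```python
import numpy as np

rng = np.random.default_rng(1)

def single_view_predict(view, w):
    """Stand-in for one trained single-view model: returns a probability."""
    return 1.0 / (1.0 + np.exp(-(view @ w)))

view_dim = 64
weights = [rng.normal(size=view_dim) * 0.1 for _ in range(3)]  # three "models"
views = [rng.normal(size=view_dim) for _ in range(3)]          # three echo views

# Ensemble baseline: run each single-view model on its own view,
# then average the per-view probabilities.
per_view_probs = [single_view_predict(v, w) for v, w in zip(views, weights)]
ensemble_prob = float(np.mean(per_view_probs))
```

Averaging combines the models' conclusions, but each conclusion was reached in isolation - which is why the integrated multiview architecture, trained end to end on all views at once, still had an edge.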

Beyond the echo lab

The architecture is not limited to cardiac ultrasound. Any medical imaging modality that captures complementary information across multiple views - CT, MRI, even X-rays taken from different angles - could potentially benefit from the same approach. Barrios noted that the multiview design "can also be applied to other medical imaging modalities where multiple views contain complementary information."

Still, there are limits to what this study demonstrates. The team validated on data from two institutions, which is better than one but far from the dozens of sites needed to prove generalizability across the full range of imaging equipment, sonographer skill levels, and patient populations seen in real-world cardiology. The diagnostic tasks were also binary classifications - abnormal versus normal - rather than the nuanced grading that clinicians perform in practice.

The multiview approach also requires that all relevant views are available and of adequate quality, which is not always the case in clinical practice. Poor acoustic windows, incomplete studies, and variable sonographer technique remain real challenges that no neural network architecture alone can solve.

The road to clinical integration

Heart disease remains the leading cause of adult death globally, and echocardiography is one of the most widely used diagnostic tools. Standard echo studies routinely capture hundreds of 2D images. An AI system that can synthesize information across those views - the way a trained cardiologist does instinctively - has obvious clinical appeal.

The UCSF team's work does not represent a finished clinical product. It is a proof of concept that multiview architectures outperform single-view ones for cardiac diagnosis, with validation at two centers. The next steps would likely involve larger multi-site trials, integration with clinical workflows, and regulatory review. But the underlying principle - that AI should look at the whole picture, not just one frame - seems difficult to argue with.

Source: Barrios, J. et al. "Multiview deep neural networks for echocardiographic diagnosis." Nature Cardiovascular Research, published March 17, 2026. Research conducted at UC San Francisco with validation data from the Montreal Heart Institute. Funded by the National Institutes of Health (K23HL135274, R56HL161475, DP2HL174046).