Medicine 2026-02-25 3 min read

Warning Labels on False Cancer Cure Claims Cut Sharing in 1,051-Person Trial

An online experiment tested whether flagging false cancer treatment claims on simulated social media posts reduced willingness to share them

False claims about cancer treatments circulate on social media at scale. Some promise that commercial supplements cure tumors. Others offer dietary regimens or alternative therapies as definitive cures - claims that can delay evidence-based treatment for patients already navigating a devastating diagnosis. The question of whether platform-level interventions can interrupt this spread without triggering backlash has practical urgency.

A study published in PLOS ONE tested one specific approach: labeling posts containing false cancer treatment claims as potentially inaccurate, then measuring whether the label changed participants' intent to share the content. The experiment enrolled 1,051 U.S. adults recruited online, a comparatively large sample for studies of this question.

How the Experiment Worked

Participants were shown simulated social media posts - mock-ups designed to resemble the format of platforms they would recognize - containing cancer treatment claims verified as false or misleading by health authorities. Some participants saw posts with warning flags indicating the content had been identified as potentially false. Others saw the same posts without flags. Researchers then measured how likely participants said they were to share the post with their own network.

The flagged posts produced lower sharing intent than the unflagged versions. The intervention did not eliminate willingness to share - some participants reported they would share flagged content regardless - but the aggregate effect was a measurable reduction. The study was funded by a UNC Lineberger Comprehensive Cancer Center Developmental Award supported by Cancer Center Core Support Grant P30 CA016086.

What This Kind of Study Can and Cannot Establish

Online experiments testing sharing intent carry known limitations. Participants know they are in a study, which may make them more reflective about their sharing decisions than they would be while scrolling their actual feeds. Stated intent to share is not the same as actual sharing behavior - studies in laboratory settings consistently find larger effects for information interventions than field experiments on live platforms do.

The study also cannot determine whether flagging false content might produce backfire effects in some populations - prior research on vaccine misinformation has found that corrections can, in some circumstances, strengthen belief in the original claim among individuals with high prior commitment to that belief. Cancer treatment misinformation attracts an audience that varies widely in health literacy, trust in medical institutions, and prior exposure to alternative health content. Whether the flagging effect holds across that spectrum requires subgroup analysis that a 1,051-person sample can only partly support.

The Broader Context of Health Misinformation

Social media platforms have implemented a range of interventions to reduce misinformation spread, including content removal, account suspension, reduced algorithmic amplification, and labeling. Research on the comparative effectiveness of these approaches is inconsistent, partly because platforms rarely publish data on what they test internally, and partly because the same intervention may perform very differently across topics, populations, and platform architectures.

Cancer misinformation is a specific case with high stakes. Patients who pursue unproven treatments in place of standard care face measurable survival penalties. The ability to reach those patients with accurate information before they make irreversible decisions is a genuine clinical priority. This study's finding that warning flags reduce sharing intent among a general U.S. adult sample is encouraging, though the effect's durability in real-world platform settings and its interaction with algorithmic amplification require further investigation.

Source: "Intervening and reducing sharing of false cancer treatments on social media: Online experiment." PLOS ONE, February 25, 2026. Available at: https://plos.io/4cccZdV. Funded by UNC Lineberger Comprehensive Cancer Center (P30 CA016086). Author country: U.S. Media contact: Hanna Abdallah, PLOS - onepress@plos.org