Biased AI Autocomplete Shifts People's Opinions, and Warnings Do Not Help
Cornell University
You sit down to write a short essay about standardized testing. As you type, your AI writing assistant offers autocomplete suggestions that subtly favor one position. You accept some, reject others, rephrase a few. When you finish, your views on standardized testing have shifted toward the AI's position. You do not notice this has happened. And when someone tells you it happened, your views do not shift back.
That, in essence, is the finding of two experiments involving more than 2,500 participants, conducted by researchers at Cornell Tech and published in Science Advances. The study demonstrates that biased AI writing tools can change what people think, not just what they write, and that standard countermeasures against misinformation do not work against this form of influence.
The experimental design
In the first experiment, participants were asked to write a short essay about standardized testing in education. Some received autocomplete suggestions engineered to favor testing; others received no suggestions; a third group saw the AI's pro-testing arguments as a static list before they began writing, rather than as inline suggestions during the writing process.
Pre- and post-experiment surveys measured participants' actual attitudes toward standardized testing. Those who received the biased autocomplete suggestions shifted their views toward the AI's position. Those who saw the same arguments presented as a static list before writing did not shift as much. The delivery mechanism, the inline suggestion during active writing, appears to be critical to the persuasive effect.
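The pre/post comparison described above can be made concrete with a short sketch. The numbers and scale below are invented for illustration only; they are not the study's data, and the function names are hypothetical.

```python
# Illustrative sketch of a pre/post attitude-shift measurement.
# All ratings below are made up; they do NOT come from the study.

def mean(xs):
    return sum(xs) / len(xs)

def mean_shift(pre, post):
    """Average per-participant change on an attitude scale
    (positive = moved toward the AI's engineered position)."""
    return mean([after - before for before, after in zip(pre, post)])

# Toy 1-7 scale ratings for two hypothetical conditions.
biased_pre,  biased_post  = [4, 3, 5, 4], [5, 4, 6, 4]
control_pre, control_post = [4, 3, 5, 4], [4, 3, 5, 4]

print(mean_shift(biased_pre, biased_post))    # positive shift in the biased-suggestion group
print(mean_shift(control_pre, control_post))  # no shift in the no-suggestion group
```

The study's actual analysis compares shifts like these across conditions; the point here is only that the outcome variable is a change in stated attitude, not a property of the essays themselves.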
The second experiment broadened the scope to four politically contentious topics: the death penalty, fracking, genetically modified organisms, and voting rights for felons. The AI's bias was engineered to be liberal-leaning on some topics and conservative-leaning on others, testing whether the effect worked across the political spectrum.
It did. Regardless of topic and regardless of political leaning, participants' attitudes shifted in the direction of the AI's bias.
Why warnings fail
Research on misinformation has established that two interventions can confer partial "immunity" against false claims: pre-bunking (warning people before they encounter biased information) and debriefing (telling them afterward that it was biased). The Cornell team tested both approaches; neither worked.
Some participants were told before the writing exercise that the AI would be biased. Others were told after the exercise that it had been biased. In both cases, attitudes still shifted toward the AI's position, by a magnitude similar to that of the group that received no warning at all.
Lead author Sterling Williams-Ceci, a doctoral candidate in information science at Cornell, noted that this resistance to mitigation was the most surprising finding. The standard tools for combating persuasive misinformation simply do not apply when the influence is embedded in the act of writing itself.
Writing as persuasion, from the inside out
The mechanism likely involves a well-documented psychological principle: expressing a viewpoint, even one you do not fully hold, tends to move your beliefs in that direction. Decades of research on cognitive dissonance and self-persuasion have shown that writing or arguing for a position increases your agreement with it.
AI autocomplete suggestions take advantage of this process covertly. By nudging the text toward a particular position through offered completions that the writer actively accepts and incorporates, the tool makes the writer an unwitting participant in their own persuasion. The words feel like they came from you, because in a sense they did. You chose to accept the suggestion. That sense of agency makes the influence harder to recognize and harder to resist.
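The study's system is not public, but the biasing mechanism it describes can be sketched in a few lines: a suggester that ranks candidate completions by stance rather than by relevance alone. Everything below, including the candidate list and the `suggest` function, is a hypothetical illustration, not the researchers' implementation.

```python
# Hypothetical sketch of stance-biased autocomplete ranking.
# Candidates carry a stance label: +1 pro-testing, -1 anti-testing, 0 neutral.
CANDIDATES = [
    ("testing gives every student an objective benchmark", +1),
    ("testing narrows the curriculum to what is tested", -1),
    ("testing results vary from year to year", 0),
]

def suggest(prefix, bias=+1):
    """Return completions for `prefix`, ranked so that those matching
    `bias` surface first. A real system would also condition on the
    prefix; an unbiased one would ignore the stance label entirely."""
    return [text for text, stance in
            sorted(CANDIDATES, key=lambda c: -bias * c[1])]

# With bias=+1 the pro-testing completion is offered first.
print(suggest("Standardized ", bias=+1)[0])
```

The writer still chooses whether to accept the suggestion, which is exactly why the influence is hard to notice: the ranking, not the writer's intent, determines which position is easiest to express.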
This distinguishes AI writing influence from traditional misinformation, where the information comes from an external source and the person is a passive recipient. Here, the person is actively producing the biased content, which makes the attitude shift both deeper and more resistant to correction.
From theoretical to practical concern
Senior author Mor Naaman, professor of information science at Cornell, noted that the landscape has changed significantly since the research began. Autocomplete is now pervasive: Gmail suggests entire emails, productivity tools offer paragraph-length completions, and large language models power writing assistants across platforms. Three years ago, the question was why an AI would be purposefully biased. That question, Naaman observed, now has obvious answers: companies, governments, and other actors have clear incentives to shape the information environment, and AI writing tools offer a subtle channel for doing so.
What the study cannot determine
The experiments used a purpose-built biased AI, not a commercial product. Whether the biases present in real-world writing tools, which arise from training data and fine-tuning rather than deliberate engineering, produce comparable attitude shifts is an open question. The degree of bias in the experimental system may be stronger or weaker than what users encounter in practice.
The attitude shifts were measured immediately after the writing exercise. The study does not report whether the changes persist over days, weeks, or months. Attitude change that fades quickly would be less concerning than durable shifts that accumulate with repeated AI-assisted writing.
The experiments involved single writing sessions on individual topics. Real-world AI writing assistance involves repeated interactions across many topics over extended periods. The cumulative effect of daily exposure to subtly biased suggestions could be substantially larger than what a single experimental session captures, or it could be smaller if users develop awareness over time.
The sample was drawn from online participant pools, which may not represent the broader population of AI writing tool users. Participants may also have been more or less susceptible to influence than typical users because of the experimental context.
But the core finding stands on solid ground: AI writing suggestions can shift attitudes, the shift crosses political lines, and the standard defenses against persuasive influence do not block it. As autocomplete becomes the default mode of writing for hundreds of millions of people, that finding has implications that extend well beyond a laboratory setting.