Technology 2026-03-11 4 min read

AI Chatbots Are Making Everyone Sound the Same, and That May Weaken Collective Intelligence

Computer scientists and psychologists argue in Trends in Cognitive Sciences that LLMs are flattening the diversity of human expression, reasoning, and perspective.

Cell Press

When millions of people polish their writing with the same AI tool, the writing starts to sound the same. That observation, documented in multiple recent studies, is the starting point for an opinion paper published in Trends in Cognitive Sciences that argues the problem goes deeper than style. The authors, a team of computer scientists and psychologists, contend that large language models are not just homogenizing how people write. They are reshaping how people think.

The flattening of voice

First author Zhivar Sourati of the University of Southern California put the core claim plainly: when individual differences in writing, reasoning, and worldview are mediated through the same LLMs, distinct linguistic styles, perspectives, and reasoning strategies become standardized across users. The output of an AI-assisted writer sounds more like the output of the AI than like the original voice of the writer.

This is not speculation. The paper cites multiple studies showing that LLM outputs are less varied than human-generated writing. When people use chatbots to refine their prose, the writing loses stylistic individuality, and people report feeling less creative ownership over the result. The AI does not just suggest better sentences. It subtly redefines what counts as good writing, and users defer to its judgment.
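The kind of homogenization those studies measure can be illustrated with a toy metric: treat each document as a bag of words and compute the average cosine similarity across all pairs, where a higher average means a more uniform corpus. The sample sentences and the metric below are invented for illustration; the cited studies use far richer stylometric and semantic measures.

```python
# Sketch: homogenization as average pairwise document similarity.
# Toy metric and toy data only -- not the methodology of the cited studies.
from collections import Counter
from itertools import combinations
import math

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words count vectors."""
    dot = sum(a[w] * b[w] for w in a.keys() & b.keys())
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def mean_pairwise_similarity(texts: list[str]) -> float:
    """Average cosine similarity over all document pairs.
    Higher values indicate a more homogeneous corpus."""
    bags = [Counter(t.lower().split()) for t in texts]
    pairs = list(combinations(bags, 2))
    return sum(cosine(a, b) for a, b in pairs) / len(pairs)

# Three distinct human voices vs. the same idea run through one "polisher".
original = [
    "honestly the deadline worries me a bit",
    "we are cutting it close on time, folks",
    "not sure we land this before Friday",
]
polished = [
    "I am concerned we may not meet the deadline.",
    "I am concerned we may not meet the timeline.",
    "I am concerned we may not meet the schedule.",
]

print(mean_pairwise_similarity(original))  # low: varied phrasing
print(mean_pairwise_similarity(polished))  # high: converged phrasing
```

The "polished" set scores far higher because the sentences share almost all of their words, which is exactly the flattening of voice the paper describes, reduced to a single number.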

Whose perspective wins

The homogenizing effect has a directional bias. LLMs are trained on data that overrepresents the language, values, and reasoning styles of Western, educated, industrialized, rich, and democratic societies. The outputs tend to reflect that skew. When billions of users worldwide interact with these models, the result is a gravitational pull toward a narrow slice of human experience.

Sourati described the mechanism: because LLMs capture and reproduce statistical regularities in their training data, and that data overrepresents dominant languages and ideologies, their outputs mirror a skewed version of reality. People using these tools in non-Western contexts, or people whose cognitive styles differ from the mainstream, find their thinking nudged toward the model's center of gravity.

The concern extends beyond language. Studies cited in the paper show that after interacting with biased LLMs, people's opinions shift toward the model's positions. The AI does not just shape expression. It shapes belief.

Groups lose what individuals gain

Here is the paradox the authors highlight: individuals often generate more ideas, and more detailed ones, when they use LLMs. At the individual level, the tools seem to enhance productivity. But when groups of people use the same LLMs, the collective output is less creative and less diverse than what the same group would produce without AI assistance.

This makes sense from a cognitive diversity standpoint. If every member of a team is channeling their ideas through the same model, the model acts as a bottleneck, filtering out the idiosyncratic thinking that gives groups their problem-solving advantage. The diversity that emerges from genuinely different minds approaching the same problem, the very thing that makes collective intelligence work, gets compressed into a narrower range of AI-mediated thought.

The social pressure compounds the effect. When most people in a group express ideas in a certain way because an AI suggested it, outliers face implicit pressure to conform. As Sourati noted, if a lot of people around you are thinking and speaking in a certain way and you do things differently, you feel pressure to align with them.

Reasoning narrows along with style

The paper argues that the homogenizing effect reaches into reasoning itself. LLMs favor linear, step-by-step modes of problem-solving, particularly the "chain-of-thought" approach that has become standard in AI development. This emphasis reduces the use of intuitive, associative, or abstract reasoning styles, which are sometimes more efficient or creative than linear approaches.

Users also tend to defer to model-suggested directions rather than charting their own path. Rather than actively steering their work, people select options that seem "good enough" from what the model offers, gradually shifting agency from the user to the tool. Over time, this can alter not just the output but the cognitive habits of the person using the tool.

What the authors recommend

The researchers argue that AI developers should intentionally incorporate diversity in language, perspectives, and reasoning into their models. Crucially, they specify that this diversity should be grounded in real human variation, not random noise. The goal is not to make AI outputs unpredictable but to make them reflect the actual range of human thought rather than a narrow, statistically dominant subset.

They also call for changes in how people interact with AI tools, suggesting that users should be more deliberate about maintaining their own cognitive styles rather than defaulting to whatever the model produces.

What the paper does not prove

This is an opinion paper, not an empirical study. It synthesizes existing research and argues for a specific interpretation, but it does not present new data. The claim that LLMs are reducing collective intelligence is based on extrapolation from individual studies, several of which have their own limitations in sample size, context, and measurement.

The long-term effects of AI on cognitive diversity have not been measured at population scale. It is possible that exposure to AI-mediated content triggers a backlash, with some users deliberately cultivating distinctive voices and approaches. It is also possible that the homogenizing effects are real but self-limiting, plateauing as users develop more sophisticated relationships with AI tools.

The paper also does not fully grapple with the economic and practical pressures driving AI adoption. People use chatbots because they are fast, cheap, and effective for many tasks. Asking users to resist the convenience of AI-polished output in the name of cognitive diversity is a harder sell than the authors acknowledge. Still, the core observation, that cognitive diversity is a collective resource and AI may be depleting it, is worth taking seriously.

Source: Sourati, Z., et al. "The homogenizing effect of large language models on human expression and thought." Trends in Cognitive Sciences, March 11, 2026. Institution: University of Southern California. Funded by the Air Force Office of Scientific Research.