Technology 2026-03-04 3 min read

With 72% of Teens Using AI Chatbots as Companions, Researchers Warn of Mistaking Algorithms for Friends

A CHOP review in Pediatrics maps how generative AI benefits and harms shift across childhood stages - and warns that current guardrails are still too porous.

A 2025 survey found that 72% of American teenagers have used AI chatbots as companions. That number alone prompted pediatric researchers at Children's Hospital of Philadelphia to ask a harder question: what does a developing brain actually do with a conversational machine that never gets tired, never judges, and sometimes gets things badly wrong?

Their answer, published in the journal Pediatrics, is that it depends almost entirely on how old the child is - and that parents, pediatricians, and policymakers are working with outdated mental models of the risks involved.

Why age matters more than screen time

Generative AI is not a single technology used in a single way. A four-year-old interacting with an AI storytelling app is doing something cognitively and developmentally different from a sixteen-year-old asking a chatbot for advice about a bad week at school. The CHOP team, led by Dr. Robert Grundmeier and Dr. Alexander Fiks, structured their review around three developmental stages, each with its own risk profile.

In early childhood, from birth to age five, the primary concern is the inability to distinguish AI from human interaction. Interactive AI storytelling can support vocabulary development, which is a genuine benefit. But children this young may form mental models of social relationships based partly on exchanges with systems that do not actually reciprocate socially. The researchers recommend that human interaction take priority and that parents watch AI content alongside young children rather than leave them with it unsupervised.

In middle childhood, ages six to eleven, the risks shift. These children are capable of more complex engagement with AI tools - personalized learning, creative writing assistance - but they are also more susceptible to misinformation. They have not yet developed strong critical filters for AI-generated content and may be tempted to submit AI-completed work as their own. The CHOP authors say parents should actively encourage skepticism toward AI outputs and keep channels for conversation open.

Adolescence and the mental health gap

The most acute concerns involve teenagers. Some research suggests AI companionship can address loneliness in adolescents, a real problem given documented declines in face-to-face socializing among this age group. But that potential benefit comes with a significant hazard: AI systems frequently lack adequate guardrails around mental health and suicide topics.

An adolescent in crisis asking an AI chatbot about self-harm may receive a response that is unhelpful at best and harmful at worst. The researchers flag this gap explicitly. They also note that heavy reliance on AI companions - a plausible pattern given the 72% adoption figure - can crowd out exactly the peer socialization and independent thinking that adolescence is supposed to develop.

"It is critical to emphasize that AI is a tool, not a companion, and we need to make sure we are instilling healthy AI literacy and social development in children," said Grundmeier, Section Chief of Informatics at CHOP. "Children, particularly in early and middle childhood, may not be able to distinguish between AI and human interaction and are at risk of developing incorrect mental models of social relationships if they view AI as a friend."

What the review asks of adults

The CHOP paper is not a call to ban children from AI - the researchers acknowledge the technology's genuine educational potential. It is a call for more structured engagement at each developmental stage, with pediatricians playing a larger role in guiding families.

Practically, the recommendations include: close supervision for young children, co-viewing of AI-generated content, fostering a questioning attitude toward AI outputs in school-age children, and explicit family conversations about the limits of AI as a social substitute for teens.

The authors also acknowledge that the guidance will need to keep changing. "This rapidly growing field is going to require continuous research to inform parental guidance and policy to maximize the benefits of these tools while doing everything to mitigate potential harms and keep children safe," said co-author Dr. Fiks, Director of Clinical Futures at CHOP.

What the field can say now is that the question is not whether children will use generative AI - they already are, at scale - but whether the adults around them understand what they're actually dealing with at each stage of development.

Source: Grundmeier R et al., "Generative Artificial Intelligence: Implications for Families and Pediatricians," Pediatrics, March 4, 2026. DOI: 10.1542/peds.2025-074912. Children's Hospital of Philadelphia.