Technology 2026-03-18

Why AI keeps failing at work - and what 'cognitive alignment' could fix

A Stevens Institute researcher argues that human-AI partnerships fail not because the technology is too weak or too strong, but because the two sides never learn to think together

Companies keep blaming AI failures on the technology. Either the system was not powerful enough, or it was too powerful to trust. Bei Yan, an assistant professor at Stevens Institute of Technology's School of Business, thinks both explanations miss the point. The real problem, she argues in a new paper, is that humans and machines never learned to work together - and most organizations never gave them the chance.

The Han Solo problem

Yan opens her paper with a pop culture analogy that is more instructive than it first appears. In Star Wars, Han Solo ignores C-3PO's carefully calculated 3,720-to-1 odds against surviving an asteroid field. It makes for entertaining cinema. It also illustrates exactly the kind of human-machine disconnect that tanks AI projects in the real world - a human operator who dismisses machine intelligence based on gut feeling, or conversely, a user who follows AI recommendations blindly without applying any independent judgment.

"People think differently than AI," Yan says. "People use experience, judgment, and social cues. AI uses statistical patterns learned from data." Those differences can be complementary - humans spot context that data misses, while AI catches patterns that humans overlook. But only if both sides understand what the other brings to the table. And that understanding, Yan argues, almost never develops on its own. It has to be cultivated deliberately, with time, training, and organizational support.

What 'hybrid cognitive alignment' actually means

In her paper, published March 18, 2026, in the Academy of Management journal and titled "Syncing Minds and Machines," Yan introduces the concept of hybrid cognitive alignment - the gradual development of shared expectations between humans and AI about what the system is for, how it should be used, and when human judgment should override machine output.

This is not a technical specification that engineers can write into software. It is a social process. Alignment does not happen when an AI system is deployed. It emerges over time as people learn how the AI behaves in specific situations, adapt how they interact with it based on experience, and recalibrate their trust based on observed outcomes. Think of it less like installing software and more like onboarding a new colleague - one who processes information in a fundamentally alien way, never tires, never forgets, and also never understands why a customer is upset.

The distinction matters because most companies treat AI deployment as a plug-and-play operation. They divide tasks between humans and machines upfront, document the division in a workflow chart, and move on. That works if the work environment is stable and the tasks are predictable. Most work is neither. Markets shift. Customers present novel problems. Regulations change. Crises erupt. In dynamic environments, the initial task division becomes outdated quickly, and without a process for renegotiating it, the human-AI partnership degrades.

When the market crashes and the algorithm cannot adapt

Yan points to high-frequency trading as one example where misalignment can be catastrophic. AI algorithms monitor markets at speeds no human can match, spotting trends and executing trades in milliseconds. But when something unexpected happens - a sudden market drop, a major policy announcement from a central bank, an inflation data release that contradicts consensus forecasts - the algorithms can misread the situation badly. They were trained on historical patterns, and the event they are confronting may have no historical precedent. The models see noise where a human trader would see a regime change.

"The algorithms are trained with preset rules, so AI is not really designed to understand such events," Yan says. In those moments, human judgment is essential. But if the human operators have not developed an intuitive understanding of when to trust the AI and when to override it - if there is no cognitive alignment between the human's mental model and the machine's statistical model - the result can be catastrophic. Flash crashes, where automated trading spirals out of control, are not hypothetical scenarios. They are recurring events in financial markets, and each one represents a failure of human-AI alignment rather than a failure of technology per se.

Medical imaging presents a different version of the same problem. AI systems trained on millions of X-rays and CT scans can detect cancers that a physician's eye might miss. The pattern recognition is genuinely superior in certain well-defined tasks. But the AI does not know the patient's medical history, their response to previous medications, their preferences about treatment aggressiveness, or the clinical context that shapes what a diagnosis actually means for this particular person. Without a clear framework for how the AI's pattern recognition and the physician's clinical expertise should interact, the technology adds friction rather than value. Physicians either over-trust the AI and stop thinking critically, or distrust it and ignore useful findings.

Customer service and the nuance gap

Customer service offers yet another angle on the alignment problem. AI chatbots can search internal documents at record speed and draw on thousands of previous interactions to suggest responses. But they frequently misunderstand the specific problem a customer is describing, especially when it falls outside the patterns in their training data or involves emotional nuance that statistical models handle poorly. A customer who says "this is the third time I've called about this" is not just describing a technical problem. They are communicating frustration, erosion of trust, and an implicit ultimatum. The AI hears the words. It misses the meaning.

The result is a frustrated customer and a human agent who has to clean up the mess - often with less information about what went wrong than if they had handled the call from the start. In each of these cases, the AI is competent within its narrow domain. The failure is not in the technology but in the collaboration. No one taught the humans and machines how to work together, and the organizations deploying them assumed that competence in isolation would translate to competence in partnership. It usually does not.

What companies should actually do differently

Yan's recommendations are practical, if demanding. Companies rolling out AI should focus less on static task division and more on how roles and responsibilities shift over time as both humans and AI learn from each other. Training programs should emphasize not just how to operate the system but how to recognize its limitations, how to interpret its outputs critically, and when human judgment should take precedence. Teams need a genuine ramp-up period during which the division of labor is expected to evolve rather than be fixed from day one.

"Treating AI as a 'plug-and-play' solution often backfires," Yan says. "Treating it as a new collaborator yields better results. For managers, these implications are immediate."

For AI developers, the implications are equally direct. Systems should communicate their capabilities and limitations clearly to users - not in technical documentation that no one reads, but in the interaction itself. They should support user learning over time, providing feedback that helps users calibrate their trust appropriately. And they should be designed for collaboration from the outset, not optimized solely for standalone performance on benchmark tasks.
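The paper does not specify an interface, but as a rough picture of what "communicating limitations in the interaction itself" could look like, consider a hypothetical response object that carries the model's calibrated confidence and known blind spots alongside its answer (all names and values here are assumptions for illustration):

```python
# Hypothetical sketch: an AI reply that surfaces its confidence and caveats
# inline, so users can calibrate trust without reading technical documentation.

from dataclasses import dataclass, field

@dataclass
class AssistantReply:
    answer: str
    confidence: float                       # calibrated score in [0, 1]
    caveats: list[str] = field(default_factory=list)

    def render(self) -> str:
        lines = [self.answer, f"(confidence: {self.confidence:.0%})"]
        lines += [f"Note: {c}" for c in self.caveats]
        if self.confidence < 0.6:
            lines.append("Suggestion: verify with a specialist before acting.")
        return "\n".join(lines)

reply = AssistantReply(
    answer="The scan shows a region consistent with a benign nodule.",
    confidence=0.55,
    caveats=[
        "Trained mostly on adult chest imaging; pediatric cases are out of scope.",
        "No access to this patient's prior history or medications.",
    ],
)
print(reply.render())
```

Surfacing uncertainty at the moment of use, rather than in a manual, is one concrete way a system can support the trust recalibration Yan describes.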

What this framework does not yet prove

Yan's paper is conceptual rather than empirical. It builds a theoretical framework for understanding human-AI collaboration failures and proposes hybrid cognitive alignment as an organizing concept. But it does not test that concept in a controlled experimental setting with measurable outcomes. The paper draws on examples from trading, medicine, and customer service, but these are illustrations of the framework, not evidence for it.

That is a meaningful limitation. The concept of cognitive alignment is intuitively appealing - of course humans and machines should develop shared expectations about each other's capabilities. But the hard questions are practical ones that the paper leaves unanswered. How long does alignment take to develop in different work contexts? What specific training interventions accelerate it? Does alignment in one task domain transfer to others? Can it be measured reliably? Is there a point of diminishing returns? These are the questions that future empirical work will need to address before the framework can inform organizational practice with confidence.

Still, Yan's central point stands on its own logic: the conversation about AI failure has focused too heavily on the technology and not enough on the relationship between technology and its users. "The promise of AI lies not in making machines smarter in isolation," she says, "but in making human-AI collaboration work better. Alignment, not raw intelligence, is what turns AI from a source of frustration into a source of value."

Source: "Syncing Minds and Machines: Hybrid Cognitive Alignment as an Emergent Coordination Mechanism in Human-AI Collaboration," published March 18, 2026 in the Academy of Management journal. Author: Bei Yan, Stevens Institute of Technology School of Business, Hoboken, NJ.