A $2.7 billion robotic nurse industry is rising, and ethicists say it can't replace moral judgment
The global robotic nurse industry is projected to exceed $2.7 billion by 2031. AI systems in clinical settings already predict patient outcomes, generate treatment summaries, and, in some facilities, take the form of humanoid robots that interact directly with patients. The technology is moving faster than the ethical frameworks meant to govern it.
Against this backdrop, an article published in the Hastings Center Report by researchers at the University of Pennsylvania School of Nursing makes a case that the nursing profession needs to hear clearly: no matter how sophisticated AI becomes, the moral core of nursing must stay human.
What makes a moral agent
The article, whose lead author is Connie M. Ulrich, the Lillian S. Brunner Chair in Medical and Surgical Nursing at Penn, draws a specific distinction. A moral agent, as the authors define it, is a person capable of discerning right from wrong and being held accountable for their actions. AI systems, regardless of their conversational sophistication, lack three essential qualities: sentience, intentionality, and accountability.
The authors describe current AI systems as functioning like what philosophers might call "moral zombies." They can produce responses that look like moral reasoning. They can simulate empathy in tone and timing. But there is no inner experience behind the simulation, no capacity to actually care about a patient's suffering or to feel the weight of a decision that affects someone's life.
This matters most in situations where nursing judgment is irreducible to data. End-of-life care, for instance, involves what the authors call therapeutic presence, an intuitive exchange of shared humanity between nurse and patient. Holding a dying person's hand, adjusting pain management based on a subtle change in facial expression, knowing when a family needs silence rather than information: these acts draw on a moral awareness that no algorithm replicates.
The risk of passive adoption
The concern is not that hospitals will suddenly replace nurses with robots. The more realistic danger is gradual erosion. AI tools generate care summaries that nurses may accept without scrutiny. Algorithmic recommendations for treatment may carry an implied authority that discourages independent clinical judgment. Over time, nurses could shift from active decision-makers to passive operators of systems they did not design and do not fully understand.
The authors offer several recommendations to prevent this drift. Nurses should participate in AI design teams to ensure tools align with clinical values. Health systems should disclose whenever AI generates summaries or treatment suggestions, so both patients and clinicians know the source of information. And AI should not be used to make hiring decisions about nurses, since algorithms cannot evaluate empathy, critical reasoning, or moral character.
Trust is the currency at stake
Nursing has consistently ranked as the most trusted profession in public surveys. That trust rests on the understanding that a nurse is a person making judgments on your behalf, someone who can be held accountable and who brings human understanding to your care. If patients begin to suspect that the "nurse" making decisions about their treatment is an algorithm, that trust erodes in ways that may be difficult to rebuild.
The authors are careful to note that AI has legitimate and valuable roles in healthcare. Pattern recognition in medical imaging, drug interaction screening, administrative automation: these applications play to AI's strengths without encroaching on the moral dimension of care. The line they draw is between AI as a resource that supports human deliberation and AI as a substitute for it.
What the article does not address
The piece is a philosophical and ethical argument, not an empirical study. It does not measure whether AI adoption has actually degraded nursing judgment in specific clinical settings. It does not survey patients about their comfort with AI-assisted care. And it does not engage deeply with the counterargument that in understaffed facilities, some AI assistance may be better than no assistance at all.
The global nursing shortage is severe. If a robot can take vital signs, remind a patient to take medication, and alert a human nurse when something is wrong, that may free the human nurse to spend more time on the moral and relational aspects of care that the Penn authors rightly emphasize. The article acknowledges this possibility only obliquely.
There is also the question of accountability when AI does cause harm. Current legal and regulatory frameworks are poorly equipped to assign responsibility when an algorithm contributes to a bad outcome. The article flags this gap but does not propose solutions.
Still, the central argument is difficult to dismiss. Patients come to healthcare settings to be heard and seen by skilled professionals. As AI's role expands, the profession will need to articulate clearly what only a human can provide, and to build institutional structures that protect it.