AI at Work Is Creating Psychosocial Risks Faster Than Regulations Can Adapt
Universitat Oberta de Catalunya (UOC)
When a warehouse worker learns that an algorithm now schedules their shifts, evaluates their productivity, and flags them for falling below an efficiency threshold they cannot see or understand, something beyond automation has entered the workplace. The algorithm is not just performing a task. It is making decisions about a person's working life using criteria that person may never be allowed to examine.
A new study from the Universitat Oberta de Catalunya (UOC) in Barcelona surveys the landscape of these changes and catalogs the occupational health risks that follow when artificial intelligence moves from performing tasks to organizing work and evaluating workers.
Not the first technology to change work, but different this time
"Industrial mechanization and electricity profoundly changed the way people worked, but artificial intelligence marks an entirely new turning point," said Xavier Baraza, dean of the UOC's Faculty of Economics and Business and co-author of the study with Professor Joan Torrent. Previous technologies automated physical tasks. AI automates decisions.
The study, published as open access in the journal Encyclopedia, examines current AI applications across the labor market. In industrial settings, AI manages assembly lines, logistics operations, and quality control. In service sectors, chatbots handle routine customer interactions. In human resources, algorithms screen job applicants, schedule shifts, and track performance metrics.
The efficiency gains are real. Handing repetitive and high-precision tasks to AI reduces human error rates and frees workers for more complex assignments. In occupational health and safety, predictive models are already analyzing accident data to identify patterns that can inform preventive measures. In advanced manufacturing environments, collaborative robots work alongside humans on tasks that would be dangerous or ergonomically harmful for people working alone.
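The study does not describe any specific model, but a minimal sketch suggests what this kind of predictive analysis looks like in practice. Everything here is hypothetical for illustration: the features (shift length, night work, time since safety training), the toy data, and the review threshold are invented, and the model choice is an arbitrary simple classifier.

```python
# Illustrative only: a toy predictive model of the kind the study describes,
# where historical incident records inform preventive measures. The features,
# data, threshold, and model choice here are hypothetical, not from the study.
from sklearn.linear_model import LogisticRegression

# Each row: [shift_length_hours, night_shift (0 or 1), days_since_training]
X = [
    [8, 0, 30], [12, 1, 200], [8, 0, 45], [10, 1, 180],
    [12, 1, 365], [8, 0, 15], [10, 0, 90], [12, 1, 300],
]
y = [0, 1, 0, 0, 1, 0, 0, 1]  # 1 = an incident was recorded on that shift

model = LogisticRegression(max_iter=1000).fit(X, y)

# Estimate incident risk for an upcoming shift from its planned conditions.
upcoming = [[12, 1, 250]]
risk = model.predict_proba(upcoming)[0][1]
print(f"Estimated incident risk: {risk:.0%}")
if risk > 0.5:  # hypothetical review threshold
    print("Flag shift for preventive review")
```

The preventive logic is the point: the model surfaces risky shift patterns before an incident, so a safety officer can intervene, rather than scoring workers after the fact.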
But the study's focus is on what comes alongside those gains.
Three psychosocial risks emerging from AI deployment
The researchers identify three primary psychosocial risks that AI introduces to the workplace:
Technostress is the first. When organizations deploy new AI tools without adequate training time or support, workers experience stress from the pressure to adapt. The pace of AI deployment often outstrips the pace of workforce preparation, creating a gap between what workers are expected to do and what they feel equipped to handle.
Surveillance anxiety is the second. Smart cameras, biometric sensors, and productivity-tracking algorithms can create the sensation of being watched constantly. The study notes that this perception of excessive monitoring erodes trust, damages workplace culture, and makes workers feel they are losing their privacy and autonomy. The monitoring may be well-intentioned, aimed at safety or efficiency, but its psychological effects can be corrosive regardless of intent.
Algorithmic opacity is the third. When workers do not understand how AI systems make decisions that affect their performance evaluations, schedules, or job security, the result is uncertainty and mistrust. A human manager can explain their reasoning. An algorithm that assigns a performance score based on dozens of weighted variables typically cannot, at least not in terms a worker can meaningfully interrogate.
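To make the opacity concrete, here is a minimal sketch of such a scoring function. The metric names, weights, and the number of variables are all invented for illustration; the structure is what matters: many weighted inputs collapse into one number, and the weights are not visible to the person being scored.

```python
# Illustrative only: a hypothetical performance score of the kind the study
# calls opaque. All metric names, weights, and values here are invented.
import random

random.seed(42)

# 40 tracked metrics with weights the worker never sees.
weights = {f"metric_{i:02d}": random.uniform(-1.0, 1.0) for i in range(40)}

def performance_score(worker_metrics: dict[str, float]) -> float:
    """Collapse many weighted observations into a single opaque score."""
    return sum(weights[name] * value for name, value in worker_metrics.items())

worker = {name: random.uniform(0.0, 1.0) for name in weights}

# The worker receives only the final number; explaining it would mean
# untangling 40 interacting weights, which is the opacity the researchers describe.
print(f"Performance score: {performance_score(worker):.2f}")
```

Even in this trivial form, answering "why did my score drop?" requires access to the weights and the raw metrics, which is exactly what workers in the study's scenarios lack.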
The regulation gap
The study finds that existing regulatory frameworks for occupational health and safety were designed before AI entered the workplace and have not fully adapted. Baraza argues that the answer is not a wave of new legislation. "There is a sound framework in place for the protection of workers that is still fully effective. The challenge isn't so much enacting completely new laws as adapting and interpreting the existing framework for new scenarios."
The key, according to the researchers, is applying preventive criteria at the design stage of AI systems, not waiting until workplace harms have already materialized. This means involving occupational health experts in the specification and deployment of AI tools, not just IT departments and management.
The European Union's AI Act, which began phased implementation in 2025, represents the most comprehensive regulatory attempt to date. But translating broad regulatory principles into specific workplace protections requires detailed guidance that is still being developed across member states.
The study's limitations
This is a review and analysis paper, not an empirical study with original data collection. It synthesizes existing literature and regulatory analysis rather than measuring the prevalence or severity of AI-related workplace harms directly. The psychosocial risks it identifies are well-supported by prior research, but the study does not quantify how common they are across industries, countries, or organizational sizes.
The analysis focuses primarily on the European context, where labor protections and regulatory approaches differ significantly from those in the United States, Asia, and other regions. The risks may manifest differently in jurisdictions with weaker worker protection frameworks.
The study also does not address the flip side in detail: the extent to which AI-driven workplace changes reduce other risks, such as physical injuries averted when dangerous tasks are automated, and whether those gains offset the psychosocial harms. A complete accounting would need to weigh both sides.
Where the conversation goes next
The researchers plan to move from broad analysis to specific case studies, examining how AI is being used in particular work contexts and measuring its actual effects on health, safety, and work organization. "Our goal is to produce useful evidence to help companies, institutions and public officials make better decisions," Torrent said.
The underlying tension is unlikely to resolve quickly. AI tools deliver measurable efficiency gains that organizations are eager to capture. The psychosocial costs fall on individual workers and are harder to measure, slower to manifest, and easier to dismiss. Bridging that asymmetry, making the human costs of AI deployment as visible and quantifiable as its productivity benefits, is the challenge the study outlines but cannot yet resolve.