Science | 2026-02-20

NSF-Funded Project Uses Role-Playing Games to Study Research Security Vulnerabilities

The University of Illinois REDTEAM project will run structured role-playing simulations with faculty, administrators, and cybersecurity experts to identify how human decision-making creates security risks in academic research environments.

The dominant language of research security tends to be technical: firewalls, export controls, data classification, access management. These are real concerns, but they share a blind spot - they treat security as a systems problem while the most consequential vulnerabilities often emerge from human choices made under conditions of competing pressure, incomplete information, and institutional ambiguity.

A new National Science Foundation award supports a project at the University of Illinois School of Information Sciences that takes a different approach. Called REDTEAM (Research Environment Defense Through Expert Attack Modeling), the project will use structured role-playing game workshops to examine how researchers, administrators, and security professionals actually make decisions when facing realistic security dilemmas - and where those decisions create exploitable gaps.

Why Role-Playing Games

Red-teaming is an established methodology in cybersecurity and intelligence: a team attempts to attack or subvert a system in order to identify its weaknesses before actual adversaries do. Traditional red-teaming exercises focus on technical systems. REDTEAM extends the concept to human and organizational dynamics in academic research environments.

The choice of structured role-playing games as the vehicle for this extension is deliberate. Conventional security training relies on policy review, compliance checklists, and tabletop discussion exercises. These formats tend to elicit ideal behavior - what participants know they should do - rather than realistic decision-making under the pressures that actually characterize research environments: deadline pressure, funding incentives, the collaborative norms of academic science, and ambiguous situations where security requirements and scientific openness appear to conflict.

Role-playing games place participants inside scenarios as specific characters with defined roles and interests. When a faculty member playing "PI under grant deadline pressure" or an administrator playing "research compliance officer with limited resources" has to make a decision that pits security against productivity, the game surfaces trade-offs that policy documents do not acknowledge and compliance training does not prepare people for.

"Role-playing games allow us to bring players' attention to how they make decisions," said David Dubin, teaching associate professor and co-principal investigator. "Peer-empowered RPGs like Fiasco invite participants to think more critically about assumptions and incentives than do games governed by adjudicators or traditional discussion-based exercises." The team has selected Fiasco, an award-winning collaborative RPG designed by Jason Morningstar, as one of its frameworks - a game specifically structured to explore how people's decisions lead to unintended consequences.

Who Will Participate

The project will convene two intensive workshops drawing participants from multiple universities across the research ecosystem. The participant mix is explicitly interdisciplinary: faculty researchers, cybersecurity professionals, research administrators, and compliance officers. This combination is important because security vulnerabilities in academic environments typically emerge from the intersections between these groups - from miscommunications, from different threat models, and from institutional incentive structures that do not align security priorities with research productivity goals.

Anita Nikolich, director of research and technology innovation and principal investigator on the project, framed the underlying problem: "Research security isn't just a technical problem, it's a human problem. If we want effective security programs, we have to understand the real pressures researchers face and design solutions grounded in how science actually happens."

The Stakes Are Specific

Academic research security has become a priority concern as geopolitical competition over advanced technologies intensifies. International research collaborations, which are essential to the functioning of science, can also create channels for technology transfer and intelligence gathering that adversaries exploit. Export control regulations, disclosure requirements for foreign funding, and restrictions on certain types of data sharing have all been tightened in recent years - but compliance frameworks lag behind the actual behavioral landscape they are trying to govern.

REDTEAM will not produce a technical fix. Its output is insights into the social dynamics and human decision-making patterns that shape risk, and frameworks for designing security programs that account for these realities rather than assuming they can be overridden by policy alone. Whether role-playing game workshops can generate insights transferable to actual security improvements - rather than producing interesting findings that remain in academic reports - is a question the project will need to take seriously as it moves toward implementation.

Source: University of Illinois School of Information Sciences. NSF-funded REDTEAM project, PI: Anita Nikolich. Media contact: Cindy Brya, brya@illinois.edu, 217-333-8312.