New framework addresses privacy, dignity risks posed by modern AI systems
In a new article, researchers introduce the capabilities approach-contextual integrity (CA-CI) framework, which addresses privacy and dignity risks posed by modern artificial intelligence (AI) systems, especially foundation models whose capabilities evolve across contexts and purposes. In a case study, they demonstrate how CA-CI can operationalize the European Union (EU) AI Act's fundamental rights impact assessments, harm thresholds, and anticipatory governance.
The article, by researchers at Carnegie Mellon University and the University of Michigan, is published in IEEE Security & Privacy.
“By grounding AI oversight in both contextual norms and universal dignity requirements, our framework offers a practical and robust approach to operationalizing ethics in AI governance,” explains Kirsten Martin, dean of Carnegie Mellon’s Heinz College of Information Systems and Public Policy, who coauthored the study. Kat Roemmich, a research associate at the University of Michigan, led the study; Florian Schaub, associate professor of information and of electrical engineering and computer science at the University of Michigan, is also a coauthor.
The widespread use of AI systems carries risks to privacy and challenges to governance that grow with models’ complexity, autonomy, and cross-domain integration. Regulators, providers, and users struggle to manage risks within systems that learn and generalize autonomously. As these systems evolve, the once-assumed observability, traceability, and contextual stability of information flows erode, while the potential for breach, misuse, and harms to dignity increases.
Addressing these challenges requires a governance framework that can evaluate the normative appropriateness of AI systems beyond narrow tasks and stable contexts, a challenge the authors addressed by integrating contextual integrity with the capabilities approach. Specifically, CA-CI:
- Extends and strengthens Helen Nissenbaum’s contextual integrity (a theory of privacy) by elevating purpose to a constitutive parameter of information flows, enabling better detection of scope creep and cross-context reuse (sketched schematically after this list), and
- Incorporates dignity thresholds from Martha Nussbaum’s capabilities approach, defining minimum conditions required for a dignified human life; these capability thresholds function as universal standards for assessing when AI systems cause significant harm.
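To make the first point concrete, here is a minimal, hypothetical Python sketch of a purpose-augmented information flow in the spirit of CA-CI. The `Flow` fields, the `assess` logic, and the example norms are illustrative assumptions, not code or data from the article.

```python
# Illustrative sketch only: a toy encoding of CA-CI-style information flows
# in which purpose is a constitutive parameter. All names are hypothetical.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flow:
    """An information flow described by contextual integrity's five standard
    parameters, plus purpose and context."""
    sender: str
    recipient: str
    subject: str
    info_type: str
    transmission_principle: str
    purpose: str   # elevated to a constitutive parameter under CA-CI
    context: str   # the social context in which the flow occurs

def assess(flow: Flow, norms: list[Flow]) -> str:
    """Compare a candidate flow against established contextual norms.
    A purpose mismatch within a known context signals scope creep; moving a
    known information type into a new context signals cross-context reuse."""
    if flow in norms:
        return "appropriate: matches an established norm"
    if any(n.context == flow.context and n.purpose != flow.purpose for n in norms):
        return "flagged: scope creep (new purpose within a known context)"
    if any(n.context != flow.context and n.info_type == flow.info_type for n in norms):
        return "flagged: cross-context reuse (known information, new context)"
    return "flagged: no matching norm; requires review"

# Usage: the same clinical record, reused for ad targeting, is flagged.
norm = Flow("patient", "clinic", "patient", "diagnosis",
            "confidentiality", "treatment", "healthcare")
reuse = Flow("patient", "clinic", "patient", "diagnosis",
             "confidentiality", "ad targeting", "healthcare")
print(assess(reuse, [norm]))
```

Because purpose is part of the flow itself rather than an afterthought, reusing identical data for a new purpose no longer matches any established norm and is flagged as scope creep.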
The EU’s General Data Protection Regulation (GDPR) enshrines a purpose limitation principle, requiring data to be “collected for specified, explicit, and legitimate purposes and not further processed in a manner incompatible with those purposes.” It also mandates data protection impact assessments for high-risk data processing that may affect fundamental rights and freedoms.
The EU’s AI Act, passed in 2024, extends this logic, prohibiting AI practices deemed to present an unacceptable risk to fundamental rights, health, or safety. It also requires certain users of high-risk systems to conduct fundamental rights impact assessments before use and after relevant system changes, and it requires providers to maintain continuous, purpose-specific risk assessments throughout the system’s life cycle.
But the act lacks a clear standard for determining what constitutes a violation of dignity beyond broad references to fundamental rights, according to the authors. This ambiguity hinders evaluators in determining when a given practice crosses the moral boundary of dignity and, by extension, violates the derivative human rights it grounds. As a result, the enforceability of dignity as a foundational normative principle becomes increasingly tenuous.
Meeting this challenge requires a normative governance framework for privacy and data protection that can substantively assess dignity risks across evolving socio-technical contexts throughout the AI life cycle. In applying CA-CI to key requirements of the EU’s AI Act, the authors show how the framework:
- Enables context-sensitive assessment of dignity risks within fundamental rights impact assessments,
- Defines principled thresholds for what counts as significant harm (sketched schematically after this list), and
- Supports anticipatory governance by identifying dignity-based risks that have not yet been recognized or codified.
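As a rough illustration of the second point, the sketch below tests whether an assessed AI practice pushes any capability below a dignity threshold. The capability names echo Nussbaum's central capabilities, but the 0-to-1 scale, the scores, and the threshold values are hypothetical assumptions, not figures from the article or the capabilities literature.

```python
# Illustrative sketch only: dignity thresholds as minimum capability levels.
# The scale, scores, and threshold values are invented for demonstration.
THRESHOLDS = {
    "bodily_integrity": 0.5,
    "practical_reason": 0.5,          # forming one's own conception of the good
    "affiliation": 0.5,               # social bases of self-respect
    "control_over_environment": 0.5,
}

def significant_harm(impact: dict[str, float]) -> list[str]:
    """Return the capabilities a practice pushes below threshold; any
    non-empty result would count as significant harm under this toy model."""
    return [cap for cap, floor in THRESHOLDS.items()
            if impact.get(cap, 1.0) < floor]

# Usage: a hypothetical emotion-inference system that erodes self-respect
# and autonomous reasoning crosses two thresholds.
impact = {"affiliation": 0.3, "practical_reason": 0.4, "bodily_integrity": 0.9}
violations = significant_harm(impact)
print("significant harm:" if violations else "below concern:", violations)
```

The point of such thresholds is that they do not vary by context: whatever the domain, a practice that drives a capability below its floor crosses the line into significant harm.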