Contestability Loops for Public AI infographic.

Contestable Artificial Intelligence

This website documents research by Kars Alfrink on contestable AI, conducted as a PhD project between 2018 and 2024 and now ongoing.

About Contestable AI

For artificial intelligence systems to respect people’s autonomy, they must allow for meaningful human intervention throughout their lifecycles. Our research advances new knowledge in the design of contestation mechanisms for AI systems. We focus on two key dimensions: enabling individuals to challenge specific AI decisions that affect them, and creating pathways for collective contestation of AI system design and development, with particular attention to datasets and models.

We recognize that addressing AI system fairness, accountability, and transparency transcends purely technical solutions. These challenges demand a nuanced understanding of both technological and social factors, considered within local contexts. Design research, with its methodological toolkit and human-centered approach, is well-suited to navigate these socio-technical challenges.

Our work employs a practice-based, action-oriented methodology centered on prototyping contestation mechanisms for both new and existing AI systems. We conduct field research with local governments implementing AI for algorithmic decision-making in urban public administration. Through this work, we continually expand our portfolio of contextual examples of contestable AI while developing generative, intermediate-level design knowledge. This knowledge base supports other researchers and designers in creating AI systems that preserve and enhance human agency and control.

LATEST PUBLICATION

Envisioning Contestability Loops: Evaluating the Agonistic Arena as a Generative Metaphor for Public AI

Public sector organizations increasingly use artificial intelligence to augment, support, and automate decision-making. However, such public AI can potentially infringe on citizens’ right to autonomy. Contestability is a system quality that protects against this by ensuring systems are open and responsive to disputes throughout their life cycle. While a growing body of work is investigating contestable AI by design, little of this knowledge has so far been evaluated with practitioners. To make explicit the guiding ideas underpinning contestable AI research, we construct the generative metaphor of the Agonistic Arena, inspired by the political theory of agonistic pluralism. Combining this metaphor and current contestable AI guidelines, we develop an infographic supporting the early-stage concept design of public AI system contestability mechanisms. We evaluate this infographic in five workshops paired with focus groups with a total of 18 practitioners, yielding ten concept designs. Our findings outline the mechanisms for contestability derived from these concept designs. Building on these findings, we subsequently evaluate the efficacy of the Agonistic Arena as a generative metaphor for the design of public AI and identify two competing metaphors at play in this space: the Black Box and the Sovereign.

Alfrink, K., Keller, I., Yurrita Semperena, M., Bulygin, D., Kortuem, G., & Doorn, N. (2024). Envisioning Contestability Loops: Evaluating the Agonistic Arena as a Generative Metaphor for Public AI. She Ji: The Journal of Design, Economics, and Innovation, 10(1), 53–93. https://doi.org/10/gtzwft


Contestable Camera Cars:
A Speculative Design Exploration of Public AI That Is Open and Responsive to Dispute

Contestable Camera Cars concept video.

READ THE ARTICLE
Alfrink, K., Keller, I., Doorn, N., & Kortuem, G. (2023). Contestable Camera Cars: A Speculative Design Exploration of Public AI That Is Open and Responsive to Dispute. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–16. https://doi.org/10/gr5wcx


Contestable AI by Design: Towards a Framework

Practices

Practices contributing to contestable AI

During business and use-case development, ex-ante safeguards are put in place to protect against potential harms. During design and procurement of training and test data, agonistic development approaches enable stakeholder participation, making room for and leveraging conflict towards continuous improvement. During building and testing, quality assurance measures ensure that stakeholder interests are centered and progress towards shared goals is tracked. During deployment and monitoring, further quality assurance measures ensure system performance is tracked on an ongoing basis, and the feedback loop with future system development is closed. Finally, throughout, risk mitigation intervenes in the system context to reduce the odds of failure, and third-party oversight strengthens the role of external reviewers to enable ongoing outside scrutiny.

Features

Features contributing to contestable AI

System developers create built-in safeguards to constrain the behavior of AI systems. Human controllers use interactive controls to correct or override AI system decisions. Decision subjects use interactive controls, explanations, intervention requests, and tools for scrutiny to contest AI system decisions. Third parties also use tools for scrutiny and intervention requests for oversight and contestation on behalf of individuals and groups.

READ THE ARTICLE
Alfrink, K., Keller, I., Kortuem, G., & Doorn, N. (2022). Contestable AI by Design: Towards a Framework. Minds and Machines. https://doi.org/10/gqnjcs


Kars Alfrink presenting on contestable AI at Responsible Sensing Lab anniversary event 2023-02-16 (recording by Pakhuis de Zwijger).