Contestability Loops for Public AI infographic.

Contestable Artificial Intelligence

To ensure artificial intelligence (AI) systems respect human rights to autonomy and dignity, they must allow human intervention throughout their lifecycle.

This Ph.D. research project aims to develop new knowledge for the design of mechanisms that (1) enable people to contest individual algorithmic decisions made by AI systems; and (2) enable people to collectively contest the design and development of AI systems, particularly with regard to datasets and models.

AI system fairness, accountability, and transparency are not problems that can be solved by technical means alone. Effective solutions require careful consideration of technological and social factors together and should take local contexts into account. These are challenges that design research is uniquely equipped to meet.

The project takes a practice-based, action-oriented approach. The main research activity is prototyping mechanisms for contestation in new and existing AI systems in the lab and the field, focusing on local governments using AI for algorithmic decision-making in urban public administration.

We aim to present a portfolio of examples of contestable AI in context, along with generative, intermediate-level design knowledge that aids others in researching and designing AI systems that respect human rights.

LATEST PUBLICATION

Envisioning Contestability Loops: Evaluating the Agonistic Arena as a Generative Metaphor for Public AI

Public sector organizations increasingly use artificial intelligence to augment, support, and automate decision-making. However, such public AI can potentially infringe on citizens’ right to autonomy. Contestability is a system quality that protects against this by ensuring systems are open and responsive to disputes throughout their life cycle. While a growing body of work is investigating contestable AI by design, little of this knowledge has so far been evaluated with practitioners. To make explicit the guiding ideas underpinning contestable AI research, we construct the generative metaphor of the Agonistic Arena, inspired by the political theory of agonistic pluralism. Combining this metaphor and current contestable AI guidelines, we develop an infographic supporting the early-stage concept design of public AI system contestability mechanisms. We evaluate this infographic in five workshops paired with focus groups with a total of 18 practitioners, yielding ten concept designs. Our findings outline the mechanisms for contestability derived from these concept designs. Building on these findings, we subsequently evaluate the efficacy of the Agonistic Arena as a generative metaphor for the design of public AI and identify two competing metaphors at play in this space: the Black Box and the Sovereign.

Alfrink, K., Keller, I., Yurrita Semperena, M., Bulygin, D., Kortuem, G., & Doorn, N. (2024). Envisioning Contestability Loops: Evaluating the Agonistic Arena as a Generative Metaphor for Public AI. She Ji: The Journal of Design, Economics, and Innovation, 10(1), 53–93. https://doi.org/10/gtzwft


Contestable Camera Cars:
A Speculative Design Exploration of Public AI That Is Open and Responsive to Dispute

Contestable Camera Cars concept video.

READ THE ARTICLE
Alfrink, K., Keller, I., Doorn, N., & Kortuem, G. (2023). Contestable Camera Cars: A Speculative Design Exploration of Public AI That Is Open and Responsive to Dispute. Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems, 1–16. https://doi.org/10/gr5wcx


Contestable AI by Design: Towards a Framework

Practices

Practices contributing to contestable AI

During business and use-case development, ex-ante safeguards are put in place to protect against potential harms. During the design and procurement of training and test data, agonistic development approaches enable stakeholder participation, making room for conflict and leveraging it toward continuous improvement. During building and testing, quality assurance measures ensure that stakeholder interests are centered and that progress toward shared goals is tracked. During deployment and monitoring, further quality assurance measures ensure that system performance is tracked on an ongoing basis and that the feedback loop with future system development is closed, as sketched below. Finally, throughout the lifecycle, risk mitigation intervenes in the system context to reduce the odds of failure, and third-party oversight strengthens the role of external reviewers to enable ongoing outside scrutiny.
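As an illustration of that closed feedback loop, the minimal sketch below shows how dispute records gathered during deployment might be aggregated into review items for the next development cycle. The function name, data shape, and threshold are assumptions made for illustration only; they are not part of the framework.

    from collections import Counter

    def close_feedback_loop(dispute_log: list[dict], threshold: int = 5) -> list[str]:
        """Turn disputes logged during deployment into review items
        for the next development cycle (hypothetical sketch)."""
        # Count how often each ground for dispute recurs.
        counts = Counter(d["grounds"] for d in dispute_log)
        # Recurring grounds above the threshold become backlog items,
        # closing the loop between monitoring and development.
        return [
            f"Review model/data: recurring dispute '{grounds}' ({n} cases)"
            for grounds, n in counts.items()
            if n >= threshold
        ]

    disputes = [{"grounds": "outdated address data"}] * 6 + [{"grounds": "misread license plate"}] * 2
    print(close_feedback_loop(disputes))
    # ["Review model/data: recurring dispute 'outdated address data' (6 cases)"]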

Features

Features contributing to contestable AI

System developers create built-in safeguards to constrain the behavior of AI systems. Human controllers use interactive controls to correct or override AI system decisions. Decision subjects use interactive controls, explanations, intervention requests, and tools for scrutiny to contest AI system decisions. Third parties also use tools for scrutiny and intervention requests for oversight and contestation on behalf of individuals and groups.
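To make these feature types concrete, here is a minimal, hypothetical sketch of how they might fit together in software. Every name in it (Decision, InterventionRequest, ContestableSystem, and so on) is an illustrative assumption, not an interface defined by the framework.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Decision:
        """An algorithmic decision bundled with a human-readable explanation."""
        subject_id: str
        outcome: str
        explanation: str                     # explanations support scrutiny by decision subjects
        overridden_by: Optional[str] = None

    @dataclass
    class InterventionRequest:
        """A dispute raised by a decision subject or a third party."""
        decision: Decision
        raised_by: str
        grounds: str

    class ContestableSystem:
        def __init__(self) -> None:
            self.audit_log: list[Decision] = []   # tools for scrutiny: an inspectable record
            self.disputes: list[InterventionRequest] = []

        def decide(self, subject_id: str, score: float) -> Decision:
            # Built-in safeguard: refuse to automate borderline cases and
            # defer them to a human controller instead.
            if 0.4 < score < 0.6:
                outcome, explanation = "deferred", "Score too close to the threshold for automation."
            else:
                outcome = "approved" if score >= 0.6 else "denied"
                explanation = f"Automated decision based on score {score:.2f}."
            decision = Decision(subject_id, outcome, explanation)
            self.audit_log.append(decision)
            return decision

        def override(self, decision: Decision, controller: str, new_outcome: str) -> None:
            # Interactive control: a human controller corrects or overrides the system.
            decision.outcome = new_outcome
            decision.overridden_by = controller

        def contest(self, decision: Decision, raised_by: str, grounds: str) -> InterventionRequest:
            # Intervention request: decision subjects, or third parties acting on
            # their behalf, dispute a decision and trigger human review.
            request = InterventionRequest(decision, raised_by, grounds)
            self.disputes.append(request)
            return request

    system = ContestableSystem()
    d = system.decide("citizen-42", score=0.55)                     # deferred by the safeguard
    system.override(d, controller="caseworker-7", new_outcome="approved")
    system.contest(d, raised_by="citizen-42", grounds="Input data were out of date.")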

READ THE ARTICLE
Alfrink, K., Keller, I., Kortuem, G., & Doorn, N. (2022). Contestable AI by Design: Towards a Framework. Minds and Machines. https://doi.org/10/gqnjcs


Kars Alfrink presenting on contestable AI at the Responsible Sensing Lab anniversary event, 2023-02-16 (recording by Pakhuis de Zwijger).