Contestable AI by Design

To ensure artificial intelligence (AI) systems respect the human rights to autonomy and dignity, they must allow for human intervention during development and after deployment.

This PhD research project aims to develop new knowledge for the design of mechanisms that (1) enable people to meaningfully contest individual automated decisions made by AI systems; and (2) enable people to collectively contest the design and development of AI systems, particularly as it pertains to datasets and models.

AI system transparency, fairness and accountability are not problems that can be solved by purely technical means. Effective solutions require a careful consideration of technical and social factors together, and should take local contexts into account. These are challenges that design research is uniquely equipped to meet.

The project takes a practice-based, action-oriented approach. The main research activity is prototyping mechanisms for contestation in new and existing AI systems, in the lab and in the field, with a special focus on the use of AI by local government for automated decision-making in public urban infrastructure.

We aim to present a portfolio of examples of contestable AI in context, along with generative, intermediate-level design knowledge that should aid others in the research and design of AI systems that respect human rights.

Latest publication

As the use of AI systems continues to increase, so do concerns over their lack of fairness, legitimacy and accountability. Such harmful automated decision-making can be guarded against by ensuring AI systems are contestable by design: responsive to human intervention throughout the system lifecycle. Contestable AI by design is a small but growing field of research. However, most available knowledge requires a significant amount of translation to be applicable in practice. A proven way of conveying intermediate-level, generative design knowledge is in the form of frameworks. In this article we use qualitative-interpretative methods and visual mapping techniques to extract from the literature sociotechnical features and practices that contribute to contestable AI, and synthesize these into a design framework.

Alfrink, K., Keller, I., Kortuem, G., & Doorn, N. (2022). Contestable AI by Design: Towards a Framework. Minds and Machines. https://doi.org/10/gqnjcs

Framework diagrams and summary

Download a poster summary of the framework here (PDF).

Features contributing to contestable AI

System developers create built-in safeguards to constrain the behavior of AI systems. Human controllers use interactive controls to correct or override AI system decisions. Decision subjects use interactive controls, explanations, intervention requests, and tools for scrutiny to contest AI system decisions. Third parties likewise use tools for scrutiny and intervention requests to exercise oversight and contest decisions on behalf of individuals and groups.
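To make these features more concrete, here is a minimal, hypothetical sketch of how they might surface in an automated decision pipeline. All names in it (Decision, ContestableSystem, request_intervention, and so on) are illustrative assumptions, not part of the published framework; actual implementations will vary with context.

```python
# Hypothetical sketch: one way the framework's features could map onto code.
from dataclasses import dataclass, field
from typing import Callable, Optional


@dataclass
class Decision:
    subject_id: str
    outcome: str
    explanation: str                      # plain-language rationale shown to the decision subject
    overridden_by: Optional[str] = None   # set when a human controller corrects the outcome


@dataclass
class ContestableSystem:
    model: Callable[[dict], str]                     # the automated decision function
    safeguard: Callable[[str], bool]                 # built-in constraint on permissible outcomes
    audit_log: list = field(default_factory=list)    # record enabling scrutiny by third parties

    def decide(self, subject_id: str, features: dict) -> Decision:
        outcome = self.model(features)
        if not self.safeguard(outcome):              # built-in safeguard constrains system behavior
            outcome = "refer_to_human"
        decision = Decision(subject_id, outcome,
                            explanation=f"Decided '{outcome}' from features {sorted(features)}")
        self.audit_log.append(("decision", decision))
        return decision

    def override(self, decision: Decision, controller: str, new_outcome: str) -> None:
        # Interactive control: a human controller corrects or overrides a decision.
        decision.outcome, decision.overridden_by = new_outcome, controller
        self.audit_log.append(("override", decision))

    def request_intervention(self, decision: Decision, grounds: str) -> None:
        # Channel for decision subjects (or third parties) to contest a decision.
        self.audit_log.append(("intervention_request", decision.subject_id, grounds))


# Usage: an eligibility check that a subject contests and a controller overrides.
system = ContestableSystem(
    model=lambda f: "deny" if f["score"] < 0.5 else "grant",
    safeguard=lambda outcome: outcome in {"grant", "deny"},
)
d = system.decide("subject-42", {"score": 0.4})
system.request_intervention(d, grounds="Score is based on outdated data.")
system.override(d, controller="caseworker-7", new_outcome="grant")
```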

Practices contributing to contestable AI

During business and use-case development, ex-ante safeguards are put in place to protect against potential harms. During design and procurement of training and test data, agonistic development approaches enable stakeholder participation, making room for and leveraging conflict towards continuous improvement. During building and testing, quality assurance measures ensure stakeholder interests are centered and progress towards shared goals is tracked. During deployment and monitoring, further quality assurance measures ensure system performance is tracked on an ongoing basis, and the feedback loop with future system development is closed. Finally, throughout the lifecycle, risk mitigation intervenes in the system context to reduce the odds of failure, and third-party oversight strengthens the role of external reviewers to enable ongoing outside scrutiny.