To ensure artificial intelligence (AI) systems respect the human rights to autonomy and dignity, they must allow for human intervention both during development and after deployment.

This PhD research project aims to develop new knowledge for the design of mechanisms that (1) enable people to meaningfully contest individual automated decisions made by AI systems; and (2) enable people to collectively contest the design and development of AI systems, particularly as it pertains to datasets and models.

AI system transparency, fairness and accountability are not problems that can be solved by purely technical means. Effective solutions require considering technical and social factors together, and should take local contexts into account. These are challenges that design research is uniquely equipped to meet.

Transparency is unidirectional; contestability is bidirectional.

The project takes a practice-based, action-oriented approach. The main research activity is prototyping mechanisms for contestation in new and existing AI systems, both in the lab and in the field, with a special focus on the use of AI by local government for automated decision-making in public urban infrastructure.

We aim to present a portfolio of examples of contestable AI in context, along with generative, intermediate-level design knowledge that should aid others in the research and design of AI systems that respect human rights.