About
This infographic offers guidance to design practitioners seeking to make public AI systems more contestable. It was originally created as part of the Envisioning Contestability Loops study (Alfrink et al., 2024).
Infographic concept by Kars Alfrink. Design by Leon de Korte. Supported by Responsible Sensing Lab.
Description
The infographic shows a generic public AI system. It also shows several mechanisms that can be added to create contestability loops. We walk through each in turn.
First, we have a schematic public human-AI system. We are taking a socio-technical view: the ‘system’ consists not only of technology but also of humans and their practices. This graphic presupposes that a system is already in place; it does not depict its initial design and development.
As a first step, data comes into the system. Using a model, or a set of rules, the AI then turns this data into a prediction. Then, one of two things happens: either the system fully automatically translates the prediction into a decision, or a human decides based on the prediction (and perhaps additional information). In both cases, the decision significantly impacts a citizen. We call this person the decision subject.
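To make this flow concrete, here is a minimal sketch of the two decision paths. It assumes an entirely hypothetical risk-scoring setup; the names Prediction, Decision, and decide are illustrative and do not come from the infographic or the study.

```python
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Prediction:
    subject_id: str
    score: float      # the model's output, e.g. a risk score (assumed)
    rationale: str    # a summary of how the score was reached

@dataclass
class Decision:
    subject_id: str
    outcome: str      # e.g. "grant" or "deny"
    decided_by: str   # "system" or the id of the human controller

def decide(pred: Prediction,
           human_controller: Optional[Callable[[Prediction], Decision]] = None) -> Decision:
    """Translate a prediction into a decision.

    If a human controller is in the loop, they decide based on the
    prediction (and perhaps additional information); otherwise the
    system translates the prediction into a decision fully automatically.
    """
    if human_controller is not None:
        return human_controller(pred)  # human-in-the-loop path
    outcome = "grant" if pred.score >= 0.5 else "deny"
    return Decision(pred.subject_id, outcome, decided_by="system")
```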
Now we move on to the contestability mechanisms. First, interactive controls intervene in the prediction-to-decision step. Humans, whether controllers or subjects, may have access to additional information that the AI does not. They can supplement the prediction with this information and have it updated.
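One possible reading of this mechanism in code: the sketch below assumes a toy predict function and shows a prediction being supplemented with extra information and recomputed before any decision is made. All names here are hypothetical.

```python
def predict(evidence: dict) -> float:
    """Stand-in for the model: averages whatever numeric evidence it gets."""
    values = [v for v in evidence.values() if isinstance(v, (int, float))]
    return sum(values) / len(values) if values else 0.0

def supplement_and_update(prediction: dict, extra_info: dict) -> dict:
    """A controller or subject adds information the AI did not have;
    the prediction is recomputed before any decision is taken."""
    evidence = {**prediction["evidence"], **extra_info}
    return {"subject_id": prediction["subject_id"],
            "score": predict(evidence),
            "evidence": evidence}
```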
Next, we look at contestation after a decision has been made: so-called intervention requests. These can be broken down into explanations, channels for voice, arenas for debate, and the obligation to respond. First, a subject needs to be given an explanation of how a decision was made and why it is desirable. Then, a subject must have access to channels through which they can voice their objection. This appeal should lead to a dialogical exchange of viewpoints with a system representative in a so-called arena. Finally, system operators should be obliged to respond to objections. The obligation to respond also implies that decisions must be reversible or repairable.
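As a rough illustration of this procedure (not a specification from the study), an appeal could be modelled as a record that collects the explanation, the subject's objection, the dialogue in the arena, and the operator's response. The Appeal class and its fields are hypothetical.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Appeal:
    subject_id: str
    decision_id: str
    explanation: str                  # how the decision was made and why it is desirable
    objection: str                    # the subject's voiced objection
    dialogue: List[str] = field(default_factory=list)  # exchange of viewpoints in the arena
    response: Optional[str] = None    # the operator's obliged response
    decision_reversed: bool = False   # decisions must be reversible or repairable

def respond(appeal: Appeal, response: str, reverse_decision: bool) -> None:
    """The system operator responds to the objection; if it is upheld,
    the contested decision is reversed or repaired."""
    appeal.response = response
    appeal.decision_reversed = reverse_decision
```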
Connected to the previous decision-appeal loop is a second-order monitoring loop. Here, a record of all decision appeals is kept. This record is analyzed for patterns that indicate systemic shortcomings. If such a pattern is suspected, a human operator is alerted to investigate. It is then up to the human to decide on further action. A systemic flaw can require revising the technology or, further upstream, the policy.
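A sketch of what such a monitoring loop might look like, assuming each appeal record names the aspect of the system it objects to; the log, the threshold, and the function names are all assumptions made for illustration.

```python
from collections import Counter
from typing import Iterable, List

APPEAL_LOG: List[dict] = []  # record of all decision appeals

def log_appeal(decision_id: str, objected_aspect: str) -> None:
    """Keep a record of every appeal and the aspect of the system it objects to."""
    APPEAL_LOG.append({"decision_id": decision_id, "aspect": objected_aspect})

def detect_systemic_patterns(log: Iterable[dict], threshold: int = 10) -> List[str]:
    """Flag aspects that attract many objections, so a human operator can be
    alerted to investigate and decide on further action (revising the
    technology or, further upstream, the policy)."""
    counts = Counter(entry["aspect"] for entry in log)
    return [aspect for aspect, n in counts.items() if n >= threshold]
```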
The following mechanism is about global contestability. Tools for scrutiny are public resources that explain and justify the system as a whole. These can be used by subjects, or by the broad category of ‘third party’ actors such as journalists and civil society organizations, to hold the system and its operators to account. This mechanism also connects to policy and system development.
Since this infographic explicitly deals with public AI systems, we also have a mechanism for policy and system development. Citizens have access to various political tools for influencing systems. By means of representative democracy, they can elect representatives who shape the policies that ultimately lead to systems. However, citizens can also participate more directly in policy and technology development. This mechanism produces the policies that directly govern human controller behavior or are translated into technology.
The flow at the bottom shows the overarching motivation for all these mechanisms. It shows how, under the influence of ongoing contestation, systems are pushed over time toward an increasingly accountable and legitimate state.