(UN)DOING: (post)digital // ABOUT

Who would have thought, not so long ago, that Artificial Intelligence would be used to make decisions that directly affect people's lives? In 2021, it is. AI systems are used to decide, for example, who will be hired, who is a potential criminal, or who will be released from prison. These tools, often presented to us as neutral and objective technical systems, are in reality populated by the same social biases as the humans who create them. As Cathy O’Neil explains, “under the guise of math, fairness, and objectivity”, these algorithms “reinforce and magnify the old biases and power dynamics that we hoped they would eliminate”.

AI and the promise of (un)fairness seeks to expose the gender and racial biases hidden under the apparent neutrality of Artificial Intelligence systems. Drawing on examples documented by researchers such as Kate Crawford and Cathy O’Neil (search engines, predictive policing, and risk-assessment tools), the project seeks to reveal how these systems end up reinforcing social inequalities.

Strolling through a virtual 3D environment, the user becomes a spectator of these examples and interactively explores the elements within the space, becoming aware of some of the current applications and implications of AI systems. While AI is often seen as something abstract and complex, or associated with fictional or futuristic imaginaries, the project tries to connect these concepts to reality, giving them a tangible expression and allowing the user to ‘live’ the experience of AI's impact in the real world.