From schools to health professionals, from employers to governing institutions, the world around us increasingly uses AI technologies and algorithms to judge us and make decisions about our lives. But are these machines objective and fair in judging us? Our answer is no. These technologies are often used to make decision-making more efficient and objective and to ‘avoid human error’. Yet, paradoxically, they are filled with systemic ‘errors’, ‘biases’ and ‘inaccuracies’ when it comes to human profiling. The Human Error Project combines anthropological theory with critical data and AI research, and aims to investigate the fallacy of algorithms when it comes to reading humans. The aim of our research team – and of this website – is to shed light on the fact that the race for AI innovation is often shaped by stereotypical and reductionist understandings of human nature, and by newly emerging conflicts over what it means to be human.
