From schools to health professionals, from employers to governing institutions, the world around us is increasingly using AI technologies and algorithms to judge us and make decisions about our lives. But are these machines objective and fair in judging us? Our answer is no. These technologies are often deployed to make decision making more efficient and objective and to 'avoid human error'. Yet, paradoxically, they are riddled with systemic 'errors', 'biases' and 'inaccuracies' when it comes to human profiling. The Human Error Project combines anthropological theory with critical data and AI research to investigate the fallacy of algorithms when it comes to reading humans. The aim of our research team – and this website – is to shed light on the fact that the race for AI innovation is being defined by new and emerging conflicts over how we understand human nature and what it means to be human.