Socially acceptable AI and fairness trade-offs in predictive analytics

Researchers Involved

Dr. Michele Loi

Dr. Eleonora Viganò

Research Areas

Artificial intelligence (AI)
fairness
justice
social acceptance

Timeframe

2021 - 2024

Artificial intelligence involved in important decisions must be socially acceptable. To this end, an interdisciplinary team of experts will create a “fairness lab” that facilitates exchange among professionals and supports their training.

The use of artificial intelligence (AI), for example in personnel decisions within companies, can lead to social injustice. The aim of our interdisciplinary project is to develop a methodology for designing fair AI applications. This methodology will help stakeholders configure AI for specific purposes in a socially acceptable way, and will make it possible to train software developers in ethical topics. The project combines philosophical, technical and social-science questions: What does fairness mean? How is fairness perceived? How can fairness be implemented in AI? In doing so, the project connects the ethical discourse on AI with its technological implementation.
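As a minimal illustration of the kind of fairness question the project studies, one common statistical criterion is demographic parity: equal selection rates across demographic groups. The sketch below (with invented toy data, not from the project) measures the gap between two groups' selection rates in a hypothetical hiring setting:

```python
# Illustrative sketch (toy data, not from the project): measuring
# demographic parity, one common statistical fairness criterion,
# on hypothetical hiring decisions.

def selection_rate(decisions):
    """Fraction of positive (hire) decisions in a list of 0/1 values."""
    return sum(decisions) / len(decisions)

def demographic_parity_gap(decisions_a, decisions_b):
    """Absolute difference in selection rates between two groups.
    A gap of 0 means demographic parity is perfectly satisfied."""
    return abs(selection_rate(decisions_a) - selection_rate(decisions_b))

# Toy decisions (1 = hired, 0 = rejected) for two demographic groups.
group_a = [1, 1, 0, 1, 0, 1, 0, 0]   # selection rate 0.5
group_b = [1, 0, 0, 0, 1, 0, 0, 0]   # selection rate 0.25

print(demographic_parity_gap(group_a, group_b))  # 0.25
```

Whether such a gap counts as unfair, and whether demographic parity is even the right criterion for a given context, is precisely the ethical and social question the project addresses.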

Background

In the field of artificial intelligence, data-based decision-making systems increasingly affect people’s social lives. This raises the question of how to design these systems so that they are compatible with social norms of fairness and justice. The concrete design of such systems requires a considered combination of ethics, technology and social decision-making processes.

Aim

Our aim is to create a so-called “fairness lab” in collaboration with experts in ethics, information technology and management: a software environment that visualises the fairness effects of AI applications and allows them to be configured. Developers can thus enter into dialogue with both users and those affected in order to jointly design a socially acceptable solution, and companies can create socially accepted and ethically responsible algorithms. The methodology will be tested on existing human-resources cases.
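One kind of comparison such a lab could visualise is the same model decisions scored under two fairness criteria at once, since these criteria can conflict. The sketch below (all names and data are invented for illustration) shows a case where demographic parity holds between two groups while equal opportunity, which requires equal true-positive rates, is violated:

```python
# Hypothetical sketch (names and toy data invented for illustration) of
# the kind of comparison a fairness lab could visualise: one set of
# decisions scored under two fairness criteria simultaneously.

def rate(values):
    """Fraction of 1s in a list of binary values (0.0 if empty)."""
    return sum(values) / len(values) if values else 0.0

def group_metrics(predictions, labels):
    """Selection rate and true-positive rate for one group."""
    preds_on_positives = [p for p, y in zip(predictions, labels) if y == 1]
    return {
        "selection_rate": rate(predictions),             # demographic parity
        "true_positive_rate": rate(preds_on_positives),  # equal opportunity
    }

# Toy decisions (1 = selected) and ground-truth labels for two groups.
pred_a, true_a = [1, 1, 0, 0], [1, 0, 1, 0]
pred_b, true_b = [1, 1, 0, 0], [1, 1, 0, 0]

m_a = group_metrics(pred_a, true_a)
m_b = group_metrics(pred_b, true_b)

# Demographic parity holds (equal selection rates), yet equal opportunity
# is violated (different true-positive rates): the criteria conflict here.
print(m_a)  # {'selection_rate': 0.5, 'true_positive_rate': 0.5}
print(m_b)  # {'selection_rate': 0.5, 'true_positive_rate': 1.0}
```

Making such trade-offs visible to developers, users and those affected is the point of the dialogue the lab is meant to support.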

Relevance

Our aim is to create a practical tool for developing AI applications that are fair by design. This will be achieved by systematically linking ethical aspects (what does fairness mean in the given context?), social-science aspects (what is perceived as fair, and in what context?) and technological aspects (how can a concept of fairness be implemented?).