- Project number: F 2602
- Institution: Federal Institute for Occupational Safety and Health (BAuA)
- Status: Ongoing Project
- Planned end: 2029-06-30
Description:
AI systems have the potential to make recurring managerial processes more efficient; recruiting, for example, can be optimised through automated recommendations. However, this also gives rise to risks, such as violations of fundamental rights, with tangible consequences for those affected. The research project contributes to occupational safety by helping to ensure that AI tools for management tasks are designed fairly and in a way that protects fundamental rights.
The aim of the project is therefore to analyse the characteristics of the data sets used to train high-risk AI systems. The focus is on evaluating methods that can identify biases within data sets. A classic example of bias is the unequal distribution of genders across certain professions and their correspondingly unequal representation in data sets. Existing biases must be taken into account so that unwanted historical trends are not perpetuated. Knowledge of bias detection is essential for ensuring that AI systems used in algorithmic management comply with regulatory requirements.
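To make the notion of representation bias concrete, the following minimal sketch measures how far each gender's share within an occupation deviates from a population baseline. The data, column names and baseline are purely hypothetical; the sketch illustrates the kind of check that bias-detection methods formalise, not a method used by the project.

```python
# Minimal sketch (hypothetical toy data): representation bias as the
# deviation of a group's share within a subgroup from a baseline.
import pandas as pd

# Illustrative training data; occupations, genders and counts are assumptions.
df = pd.DataFrame({
    "occupation": ["engineer"] * 10 + ["nurse"] * 10,
    "gender":     ["m"] * 8 + ["f"] * 2 + ["m"] * 2 + ["f"] * 8,
})

# Share of each gender within each occupation.
shares = (
    df.groupby("occupation")["gender"]
      .value_counts(normalize=True)
      .rename("share")
)
print(shares)
# A share far from the population baseline (here 0.5) flags a
# representation imbalance that a model trained on this data may reproduce.
```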
The project investigates established fairness metrics (e.g., "disparate impact") and their application to real and synthetic data sets. New metrics that integrate previous research approaches from the field of algorithmic fairness are also being developed, tested and validated. In addition, innovative methods for reducing bias are being applied. Partners from academia and industry are involved in testing these approaches.
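As an illustration of the disparate-impact metric named above, the following sketch computes the ratio of favourable-outcome rates between two groups. The toy decisions, group labels and the 0.8 reference threshold (the "four-fifths rule" commonly cited in the fairness literature) are illustrative assumptions, not project results.

```python
# Minimal sketch of the disparate-impact ratio:
# P(Y=1 | unprivileged group) / P(Y=1 | privileged group).
from typing import Sequence

def disparate_impact(outcomes: Sequence[int], groups: Sequence[str],
                     unprivileged: str, privileged: str) -> float:
    """Ratio of favourable-outcome rates between the two groups."""
    def rate(group: str) -> float:
        selected = [y for y, g in zip(outcomes, groups) if g == group]
        return sum(selected) / len(selected)
    return rate(unprivileged) / rate(privileged)

# Hypothetical hiring recommendations: 1 = recommended, 0 = rejected.
outcomes = [1, 0, 1, 1, 0, 0, 1, 0]
groups   = ["f", "f", "m", "m", "f", "f", "m", "m"]

di = disparate_impact(outcomes, groups, unprivileged="f", privileged="m")
print(f"disparate impact ratio: {di:.2f}")  # 0.33 for this toy data
# Values below roughly 0.8 are commonly read as evidence of adverse impact.
```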
The research aims to provide practical, principled solutions that make AI systems more transparent, trustworthy and fair. The results will support the introduction of AI tools in line with the European AI Act and lay the groundwork for a safe digital working environment.