Bias and Fairness in AI-Based Employee Attrition Prediction Using Random Forest

Authors

  • Idowu Adesoji Oladipupo, Department of Applied Data Science, Teesside University, Middlesbrough TS1 3BA, United Kingdom (ORCID: https://orcid.org/0009-0005-2648-8168)
  • Dr Serifat Adedamola Folorunso, Department of Applied Data Science, Teesside University, Middlesbrough TS1 3BA, United Kingdom (ORCID: https://orcid.org/0000-0002-1825-0097)
  • Sadiq Olusegun Balogun, Leeds Institute for Data Analytics, Leeds LS2 9JT, United Kingdom
  • Fatima Iganya Suleiman, Department of Applied Data Science, Teesside University, Middlesbrough TS1 3BA, United Kingdom
  • Olufunke Catherine Olayemi, Department of Computer Science, Teesside University, Middlesbrough TS1 3BX, United Kingdom
  • Joseph Maugbe Jacob, Department of Applied Data Science, Teesside University, Middlesbrough TS1 3BA, United Kingdom

DOI:

https://doi.org/10.64389/icds.2026.02162

Keywords:

Artificial intelligence (AI), Employee attrition, Algorithmic bias, Fairness in machine learning, Workforce analytics, Human resource technology, Random Forest, Gender disparity

Abstract

Artificial intelligence is increasingly employed to predict employee attrition, enabling organisations to improve talent retention and workforce planning. However, without explicit consideration of fairness, these models risk embedding and amplifying societal biases. This study examines bias in AI-based attrition prediction using a Random Forest classifier applied to the IBM HR Analytics employee attrition dataset. Although the model demonstrates high predictive performance (92.3 percent) and an area under the curve of 0.97, subgroup analysis reveals disparities in prediction performance across gender. Fairness assessments based on equal accuracy, demographic parity, and equality of opportunity show that predictions for female employees achieve higher precision and recall than those for male employees, suggesting differential predictive performance across gender groups. The findings highlight organisational risks associated with such disparities, including the risk of unjust decision-making, reduced employee trust, and hindered diversity and inclusion efforts. To mitigate these challenges, the study recommends fairness-aware strategies such as balanced sampling, established fairness metrics, post-processing approaches (e.g., equalised odds), and continuous model auditing. This research underscores the ethical importance of aligning AI systems in human resource management with principles of equity, transparency, and accountability.
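The subgroup audit described in the abstract, evaluating equal accuracy, demographic parity (selection rate), and equality of opportunity (true-positive rate) separately for each gender group, can be sketched as follows. This is a minimal illustration only: it uses synthetic stand-in data rather than the IBM HR Analytics dataset, and the feature construction, group encoding, and model settings are assumptions, not the authors' actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import precision_score, recall_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the attrition data (an assumption for illustration).
rng = np.random.default_rng(42)
n = 2000
X = rng.normal(size=(n, 5))              # stand-in features
gender = rng.integers(0, 2, size=n)      # 0 = male, 1 = female (synthetic)
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 1).astype(int)  # synthetic attrition label

X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, gender, test_size=0.3, random_state=0, stratify=y)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
pred = clf.predict(X_te)

# Per-group metrics corresponding to the fairness criteria named in the abstract.
report = {}
for g_val, name in [(0, "male"), (1, "female")]:
    mask = g_te == g_val
    report[name] = {
        # equal accuracy: accuracy computed within each group
        "accuracy": float((pred[mask] == y_te[mask]).mean()),
        # demographic parity: rate at which the model predicts attrition
        "selection_rate": float(pred[mask].mean()),
        # equality of opportunity: true-positive rate (recall) per group
        "recall": float(recall_score(y_te[mask], pred[mask], zero_division=0)),
        "precision": float(precision_score(y_te[mask], pred[mask], zero_division=0)),
    }

for name, metrics in report.items():
    print(name, {k: round(v, 3) for k, v in metrics.items()})
```

Comparing the per-group values (e.g., the gap in recall between the female and male groups) is what reveals the kind of differential predictive performance the study reports; the post-processing remedy it mentions, equalised odds, would then adjust decision thresholds per group to narrow those gaps.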

Downloads

Download data is not yet available.

Published

2026-01-26

Issue

Section

Articles

How to Cite

Oladipupo, I. A., Folorunso, S. A., Balogun, S. O., Suleiman, F. I., Olayemi, O. C., & Jacob, J. M. (2026). Bias and Fairness in AI-Based Employee Attrition Prediction Using Random Forest. Innovation in Computer and Data Sciences, 2(1), 11-23. https://doi.org/10.64389/icds.2026.02162