Digital Technologies Research and Applications

Article

Algorithmic Bias in Automated Decision-Making: A Statistical Study with Legal and Regulatory Implications

Dar, A. A., Jahan, S. A., Khan, M. S., Azad, I., Farooque, M. M. J., Sindhuja, S., & Abidha, A. K. (2026). Algorithmic Bias in Automated Decision-Making: A Statistical Study with Legal and Regulatory Implications. Digital Technologies Research and Applications, 5(2), 1–14. https://doi.org/10.54963/dtra.v5i2.2308

Authors

  • Amir Ahmad Dar

    Department of Statistics, Lovely Professional University, Jalandhar 144411, India
  • Shaik Afsar Jahan

    Department of Statistics, Lovely Professional University, Jalandhar 144411, India
  • Mohammad Shahfaraz Khan

    College of Economics and Business Administration, University of Technology and Applied Sciences-Salalah, Salalah 211, Oman
  • Imran Azad

    College of Economics and Business Administration, University of Technology and Applied Sciences-Salalah, Salalah 211, Oman
  • Murtaza M. Junaid Farooque

    Department of MIS, College of Commerce and Business Administration, Dhofar University, Salalah 211, Oman
  • S. Sindhuja

    Department of Mathematics, SRM Institute of Science and Technology, Chennai 600089, India
  • A. K. Abidha

    Department of Mathematics and Actuarial Science, B. S. Abdur Rahman Crescent Institute of Science and Technology, Chennai 600048, India

Received: 19 January 2026; Revised: 23 February 2026; Accepted: 9 March 2026; Published: 9 April 2026

Algorithmic decision systems are increasingly deployed in high-stakes domains such as credit, recruiting, and the allocation of government resources. Although these systems are often presented as objective and efficient, concerns persist that they may perpetuate structural inequalities. This paper examines the effect of a fairness-aware pre-processing technique, reweighing, on the performance of a predictive system in a controlled simulation environment. Using a synthetically generated credit approval dataset with embedded structural disadvantage, we compare the performance of a logistic regression classifier with and without reweighing. Fairness is measured using demographic parity disparity (DPD), disparate impact ratio (DIR), and equalized odds difference (EO), alongside predictive accuracy. In a single test scenario (seed = 42), reweighing does not improve all fairness metrics uniformly. However, when robustness is analyzed across 50 independent random seeds, we find modest average reductions in demographic parity disparity and equalized odds difference under reweighing, with little change in predictive accuracy. Threshold sensitivity analysis further shows that fairness metrics are sensitive to the choice of decision threshold. These results indicate that fairness-aware pre-processing can yield systematic improvements in expectation, although trade-offs between fairness metrics and predictive performance remain context-dependent.
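A minimal sketch of the pipeline the abstract describes, under stated assumptions: the paper's exact data-generating process is not reproduced here, so the synthetic feature names, coefficients, and bias strength below are illustrative placeholders. Reweighing follows the Kamiran–Calders scheme (instance weight P(A=a)·P(Y=y)/P(A=a, Y=y)), and the fairness metrics are computed directly rather than via a dedicated fairness toolkit.

```python
# Illustrative sketch only: the synthetic data-generating process (feature
# names, coefficients, disadvantage strength) is assumed, not taken from the paper.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 5000
group = rng.integers(0, 2, n)                 # protected attribute: 0 = disadvantaged
income = rng.normal(0, 1, n) + 0.8 * group    # structural disadvantage embedded
y = (income + 0.5 * rng.normal(0, 1, n) > 0).astype(int)  # credit approval label

X = np.column_stack([income, group])
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, test_size=0.3, random_state=42)

def reweigh(a, y):
    """Kamiran-Calders weights: P(a) * P(y) / P(a, y) per (group, label) cell."""
    w = np.empty(len(y), dtype=float)
    for av in np.unique(a):
        for yv in np.unique(y):
            cell = (a == av) & (y == yv)
            w[cell] = (a == av).mean() * (y == yv).mean() / cell.mean()
    return w

def fairness(y_pred, a):
    """Demographic parity disparity and disparate impact ratio."""
    r1, r0 = y_pred[a == 1].mean(), y_pred[a == 0].mean()
    dir_ = min(r0, r1) / max(r0, r1) if max(r0, r1) > 0 else 1.0
    return abs(r1 - r0), dir_

def eq_odds_diff(y_true, y_pred, a):
    """Max gap in group-conditional TPR/FPR (equalized odds difference)."""
    return max(
        abs(y_pred[(y_true == yv) & (a == 1)].mean()
            - y_pred[(y_true == yv) & (a == 0)].mean())
        for yv in (0, 1))

base = LogisticRegression().fit(X_tr, y_tr)
rw = LogisticRegression().fit(X_tr, y_tr, sample_weight=reweigh(g_tr, y_tr))

for name, model in [("baseline", base), ("reweighed", rw)]:
    pred = model.predict(X_te)                # default 0.5 decision threshold
    dpd, dir_ = fairness(pred, g_te)
    eo = eq_odds_diff(y_te, pred, g_te)
    print(f"{name}: acc={model.score(X_te, y_te):.3f} "
          f"DPD={dpd:.3f} DIR={dir_:.3f} EO={eo:.3f}")
```

The abstract's robustness analysis would repeat this loop over 50 seeds and average the metrics; its threshold sensitivity analysis would replace `model.predict` with a sweep over cutoffs applied to `model.predict_proba`.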

Keywords:

Algorithmic Bias; Fairness-Aware Machine Learning; Demographic Parity; Disparate Impact; Equalized Odds; AI Governance
