Vol. 2 No. 1 (2026): Applied Decision Analytics

Governing Fair Algorithmic Hiring Under the EU AI Act: A Multi-Layer Decision Architecture for High-Risk Recruitment Systems

Veranika Khlud
RTU, BIA, ITDA

Published 2026-04-21

Keywords

  • Algorithmic hiring
  • EU AI Act
  • AI governance
  • Organizational justice
  • Algorithmic fairness
  • Human oversight

How to Cite

Khlud, V. (2026). Governing Fair Algorithmic Hiring Under the EU AI Act: A Multi-Layer Decision Architecture for High-Risk Recruitment Systems. Applied Decision Analytics, 2(1), 271–282. https://doi.org/10.66972/ada21202624

Abstract

Artificial intelligence is increasingly used to support recruitment decisions, and the European Union's (EU) Artificial Intelligence (AI) Act treats many employment-related AI systems as high-risk because they may significantly affect career opportunities, livelihoods, and fundamental rights. However, existing debates on fair algorithmic hiring often remain fragmented, treating fairness as either a technical bias-mitigation problem, a legal compliance obligation, or a matter of applicant perception. This fragmentation leaves organizations without an integrated governance architecture that connects regulatory obligations, decision accountability, human oversight, data controls, and procedural justice. The purpose of this conceptual paper is to develop a multi-layer decision architecture for governing fair algorithmic hiring under the EU AI Act. The article applies a theory-building methodology based on conceptual synthesis, integrating the AI Act's requirements for high-risk systems with organizational justice theory, algorithmic fairness research, and the human resource management governance literature. The study develops a six-layer governance architecture consisting of strategic accountability, lifecycle risk management, data and model assurance, human oversight, candidate-facing transparency and redress, and continuous monitoring and organizational learning. In addition, the article formulates seven theoretical propositions explaining how governance mechanisms can translate compliance obligations into distributive, procedural, interpersonal, and informational justice in recruitment. The article concludes that fair algorithmic hiring cannot be achieved through isolated audits, bias testing, or disclosure statements alone. Instead, it requires an integrated decision architecture that allocates rights and responsibilities across the recruitment pipeline, preserves meaningful human judgment, enables candidate voice and appeal, and embeds legal and ethical safeguards into ongoing organizational routines.

