Explainable AI for Public Sector Decision Making: A Systematic Literature Review
Keywords:
Explainable Artificial Intelligence (XAI), Public Sector Decision Making, Algorithmic Transparency, Accountability in Governance, Systematic Literature Review

Abstract
The growing adoption of Artificial Intelligence (AI) in government has intensified the need for transparent, accountable, and trustworthy decision-making systems. This study conducts a systematic literature review to examine how Explainable AI (XAI) is applied within the public sector, identify the dominant techniques used, and analyze their benefits and challenges. Using PRISMA guidelines, studies were collected from major academic databases including Scopus, Web of Science, IEEE Xplore, SpringerLink, ACM Digital Library, and Google Scholar. The findings reveal that XAI development in government contexts has grown significantly over the past decade, with SHAP, LIME, decision trees, counterfactual explanations, and rule-based models emerging as the most frequently used methods. These techniques support public-sector decision making by enhancing transparency, strengthening accountability, reducing bias, improving auditability, and fostering public trust. However, persistent challenges remain, including technical complexity, trade-offs between accuracy and interpretability, limited AI literacy among officials, lack of standard frameworks, and legal or ethical risks. The review highlights the need for more domain-specific XAI guidelines, user-centered explanation tools, and integrated evaluation frameworks. This research contributes a comprehensive synthesis of current XAI applications in government and outlines a future research agenda to support the development of responsible, explainable, and ethically aligned AI for public administration.
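Among the techniques the review identifies, rule-based models are the most directly interpretable: the decision logic itself doubles as the explanation. The sketch below illustrates this idea with a hypothetical benefits-eligibility check; the rules, thresholds, and field names are invented for illustration and do not come from the reviewed studies or any real program.

```python
# Minimal sketch of a rule-based (inherently interpretable) decision model,
# one of the XAI approaches discussed in the review. All rules and thresholds
# here are hypothetical illustrations, not drawn from any actual policy.

def assess_benefit_claim(income: float, dependents: int, employed: bool):
    """Return a decision plus the fired rules as a human-readable trace."""
    trace = []
    if income > 30000:
        trace.append("Rule 1: income > 30000 -> ineligible")
        return "ineligible", trace
    trace.append("Rule 1: income <= 30000 -> continue")
    if dependents >= 2:
        trace.append("Rule 2: dependents >= 2 -> eligible")
        return "eligible", trace
    if not employed:
        trace.append("Rule 3: unemployed -> eligible")
        return "eligible", trace
    trace.append("Rule 4: default -> ineligible")
    return "ineligible", trace

decision, why = assess_benefit_claim(income=25000, dependents=1, employed=False)
print(decision)   # the trace in `why` is the explanation handed to the official
for line in why:
    print(line)
```

Because every outcome is accompanied by the exact rules that produced it, such models support the auditability and contestability goals the review highlights, at the cost of the flexibility that post-hoc methods such as SHAP or LIME provide for opaque models.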
License
Copyright (c) 2024 Roland Vincent Karl

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

