A Unified Theoretical-Practical Framework for Explainable Machine Learning in Critical Public Sector Applications

Authors

  • Hengki Tamando Sihotang, Sistem Informasi, Universitas Putra Abadi Langkat, Indonesia
  • Romasinta Simbolon, Institute of Computer Science (IOCScience), Indonesia

Keywords:

Explainable Machine Learning (XML), Public Sector AI, Algorithmic Transparency, Accountability Frameworks, Responsible Artificial Intelligence

Abstract
The rapid adoption of machine learning (ML) in the public sector has increased the need for transparent, accountable, and trustworthy algorithmic decision-making, particularly in high-stakes domains such as social welfare, healthcare, security, and public administration. However, existing approaches to explainable machine learning (XML) remain fragmented, focusing primarily on technical explanation techniques without integrating the institutional, ethical, and user-centered requirements of government environments. This research develops a unified theoretical-practical framework that operationalizes explainability across the entire ML lifecycle for critical public-sector applications. The study adopts a qualitative, multi-stage research design that combines theoretical synthesis, framework construction, and empirical validation through expert assessment and case-based evaluation. The results demonstrate that explainability is a multidimensional construct that extends beyond algorithmic transparency to include contextual risk assessment, adaptive explanation delivery, and governance mechanisms such as auditability, human oversight, and documentation standards. The proposed framework integrates four interconnected layers (context analysis, model design and transparency, explanation delivery, and oversight and governance), providing a structured pathway for implementing explainable ML systems that meet public-sector standards of fairness, legitimacy, and accountability. Expert feedback and case evaluations confirm that the framework enhances interpretability, reduces misinterpretation risks, and supports more informed decision-making among stakeholders. This research contributes to the advancement of responsible AI in government by offering a comprehensive model that bridges technical methods with policy and practice, paving the way for more transparent and trustworthy ML adoption in public-sector services.



Published

2024-09-30

How to Cite

Sihotang, H. T., & Simbolon, R. (2024). A Unified Theoretical-Practical Framework for Explainable Machine Learning in Critical Public Sector Applications. Jurnal Teknik Informatika C.I.T Medicom, 16(4), 211–220. Retrieved from https://www.medikom.iocspublisher.org/index.php/JTI/article/view/1350