A Systematic Literature Review on the Theoretical Foundations of Machine Learning in Intelligent Computing Systems
Keywords:
Machine Learning Theory, Intelligent Computing Systems, Theoretical Foundations, Systematic Literature Review, Statistical and Computational Learning
Abstract
This study presents a comprehensive review of the theoretical foundations that underpin modern intelligent computing systems, integrating perspectives from statistical learning theory, computational learning theory, optimization theory, information theory, probabilistic modeling, neural computation, and cognitive and bio-inspired approaches. Using a systematic review methodology supported by structured search strings and rigorous data extraction, the study identifies core theoretical constructs, including VC dimension, PAC learning, sample complexity, entropy, mutual information, Bayesian inference, convergence principles, and universal approximation, that collectively shape the development, capabilities, and limitations of intelligent systems. The analysis shows how these theories complement one another in addressing challenges of generalization, learnability, optimization efficiency, uncertainty modeling, and biological plausibility. The findings highlight that existing theoretical frameworks provide strong foundations but remain limited in explaining the behavior of the high-dimensional, non-convex, black-box models common in deep learning. The review contributes an integrated conceptual map that clarifies how different theories support robust system design and identifies gaps for future research, including the scalability of theoretical guarantees, unified frameworks for hybrid systems, and a deeper mathematical understanding of modern neural architectures. Overall, the study offers a coherent synthesis that strengthens theoretical grounding and guides future work on reliable intelligent computing systems.
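As a minimal illustration of two of the information-theoretic constructs named in the abstract (entropy and mutual information), the sketch below computes both for a toy joint distribution; it is an assumption-free textbook definition, not an artifact of the reviewed study itself.

```python
import math

def entropy(probs):
    """Shannon entropy H(X) = -sum p * log2(p), in bits (zero terms skipped)."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def mutual_information(joint):
    """I(X;Y) = H(X) + H(Y) - H(X,Y) for a joint distribution
    given as a 2-D list of probabilities that sums to 1."""
    px = [sum(row) for row in joint]            # marginal of X
    py = [sum(col) for col in zip(*joint)]      # marginal of Y
    hxy = entropy([p for row in joint for p in row])
    return entropy(px) + entropy(py) - hxy

# Toy example: a fair bit X copied exactly to Y (perfect dependence)
joint = [[0.5, 0.0],
         [0.0, 0.5]]
print(mutual_information(joint))  # 1.0 bit: observing Y reveals X completely
```

For an independent pair, e.g. `joint = [[0.25, 0.25], [0.25, 0.25]]`, the same function returns 0 bits, matching the intuition that mutual information quantifies how much one variable reduces uncertainty about another.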
License
Copyright (c) 2024 Henry Quinn Payton, Thomas Shiloh

This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.

