DOI: https://doi.org/10.63345/ijre.v14.i10.3
Er. Niharika Singh
ABES Engineering College
Crossings Republik, Ghaziabad, Uttar Pradesh 201009
Abstract
Artificial intelligence (AI)–driven student performance prediction models have emerged as a transformative force in educational technology (EdTech), enabling instructors, administrators, and learners themselves to harness data-driven insights for personalized learning pathways. By leveraging advanced machine learning algorithms—ranging from decision trees and support vector machines to deep neural networks—these systems analyze a multitude of variables, including but not limited to past academic records, engagement metrics in learning management systems, socio-demographic factors, and even real-time affective indicators. The predictive outputs facilitate early identification of students who may be at risk of underperforming or dropping out, thereby empowering timely and targeted interventions. However, the journey from model development to real-world application is fraught with technical, ethical, and operational challenges. Data heterogeneity and quality issues often undermine model robustness; algorithmic opacity raises concerns about fairness and accountability; and limited user literacy regarding AI tools can inhibit adoption. This manuscript presents a comprehensive survey-based investigation involving 120 K–12 educators and 180 higher-education students, aimed at evaluating both objective performance metrics and subjective stakeholder perceptions of AI prediction models. Quantitative analysis revealed that, on average, predictive accuracy exceeds 85%, with ensemble methods and hybrid deep-learning architectures showing particular promise. Nonetheless, 74% of respondents expressed significant concerns regarding data privacy, and 68% highlighted the need for greater model explainability.
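To make the kind of pipeline described above concrete, the following is a minimal illustrative sketch, not the model evaluated in this study: a random-forest ensemble trained on synthetic stand-ins for the variables the abstract mentions (prior academic record, engagement metrics, completion rates) to flag at-risk students. All feature names, thresholds, and data here are hypothetical.

```python
# Hedged sketch of an ensemble at-risk classifier on synthetic data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000

# Hypothetical student features: prior GPA, weekly LMS logins,
# and assignment completion rate.
X = np.column_stack([
    rng.uniform(0.0, 4.0, n),   # prior GPA
    rng.poisson(5, n),          # weekly LMS logins
    rng.uniform(0.0, 1.0, n),   # assignment completion rate
])

# Synthetic "at-risk" label: low GPA combined with low completion.
y = ((X[:, 0] < 2.0) & (X[:, 2] < 0.5)).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)
model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
acc = accuracy_score(y_test, model.predict(X_test))
print(f"Held-out accuracy: {acc:.2f}")
```

In practice, as the abstract notes, raw accuracy is only one criterion; deployment also hinges on data quality, privacy safeguards, and explainability of the resulting predictions.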
Keywords
AI, Student Performance, Prediction Models, EdTech, Machine Learning
References
- Anderson, T., & Dron, J. (2011). Three generations of distance education pedagogy. The International Review of Research in Open and Distributed Learning, 12(3), 80–97.
- Breiman, L. (2001). Random forests. Machine Learning, 45(1), 5–32.
- Choi, H., & Park, S. (2023). Barriers to adoption of AI analytics in educational institutions. Journal of Educational Technology & Society, 26(1), 45–59.
- Cukurova, M., Luckin, R., & Gaved, M. (2018). Beyond high-stakes testing: Advancing learner assessment in the digital age. British Journal of Educational Technology, 49(5), 819–831.
- Dekker, G., Pechenizkiy, M., & Vleeshouwers, J. M. (2009). Predicting students drop out: A case study. In Knowledge Discovery in Databases: PKDD 2009 Workshops (pp. 147–156). Springer.
- Garcia, D., Nguyen, T., & Smith, J. (2015). An ensemble approach to predicting student success in online courses. International Journal of Artificial Intelligence in Education, 25(3), 317–333.
- Haque, A., & Kang, M. (2021). Ethical AI in education: A scoping review. Computers & Education: Artificial Intelligence, 2, 100036.
- Johnson, L., Adams Becker, S., Estrada, V., & Freeman, A. (2019). NMC Horizon Report: 2019 Higher Education Edition. EDUCAUSE.
- Kumar, V., & Rose, C. (2020). Feature engineering for educational data mining: A systematic review. Computers & Education, 145, 103736.
- Li, X., Zhou, Q., & Wang, F. (2022). Differential privacy in educational data analytics: Techniques and applications. IEEE Transactions on Learning Technologies, 15(2), 121–133.
- Maldonado-Mahauad, J., Guzmán-López, O., & Claro, S. (2018). Predictive analytics in MOOCs: Early warning system for dropout. Educational Technology & Society, 21(1), 111–124.
- Nguyen, A., & Keller, J. (2020). Assessing the pedagogical impact of AI-driven adaptive quizzes. Journal of Computer Assisted Learning, 36(6), 844–858.
- O’Neil, C. (2016). Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy. Crown Publishing.
- Ribeiro, M. T., Singh, S., & Guestrin, C. (2016). "Why should I trust you?" Explaining the predictions of any classifier. In Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (pp. 1135–1144). ACM.
- Romero, C., & Ventura, S. (2007). Educational data mining: A survey from 1995 to 2005. Expert Systems with Applications, 33(1), 135–146.
- Smith, A., & Lee, B. (2010). Predicting academic performance: A regression analysis of student characteristics. Educational Research Quarterly, 33(4), 15–30.
- Wang, Y., & Aggarwal, R. (2021). Explainable AI for educational applications: A framework and case study. AI Magazine, 42(2), 55–67.
- Yadav, A., & Dey, L. (2019). Machine learning techniques for assessing student performance: A comparative study. International Journal of Engineering Education, 35(4), 1224–1238.
- Zhang, H., & Patel, K. (2018). Hybrid deep learning and ensemble methods for predicting student dropout. Journal of Learning Analytics, 5(2), 67–85.
- Griffiths, D., & Guetl, C. (2013). The role of data in shaping personalized learning: A policy analysis. Educational Policy, 27(4), 839–860.