Sneha Reddy
Independent Researcher
Telangana, India
Abstract
This study examines the multifaceted relationship between Online Internal Assessment Systems (OIAS) and student satisfaction in higher education, addressing an urgent gap in contemporary pedagogical research. With the rapid shift toward digital learning environments—accelerated by events such as the COVID‑19 pandemic—universities have widely adopted OIAS for continuous, formative evaluation. These systems promise benefits such as automated grading, flexible scheduling, and immediate feedback, yet their actual impact on learner perceptions remains insufficiently understood. Employing a cross‑sectional survey of 450 undergraduate and postgraduate students across three diverse Indian universities, this research measures perceptions of system usability, feedback quality, transparency of grading processes, and availability of technical support services.
Through rigorous statistical analyses—comprising descriptive statistics, exploratory factor analysis, and hierarchical multiple regression—our findings reveal nuanced insights: perceived usability emerges as the single strongest predictor of overall satisfaction (β = 0.42, p < 0.001), followed closely by the quality of feedback provided (β = 0.35, p < 0.001). Transparency of grading algorithms (β = 0.21, p < 0.01) and dependable technical support (β = 0.18, p < 0.05) also contribute significantly, though to a lesser extent. Subgroup analyses highlight modest differences by program level and gender, indicating that postgraduates and female students tend to report slightly higher satisfaction with feedback mechanisms.
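The hierarchical regression described above can be illustrated with a minimal sketch on synthetic data. The standardized coefficients below are taken from the reported results; the sample size, predictor independence, and noise level are invented for illustration and do not reproduce the study's actual dataset or analysis pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000  # illustrative sample size, not the study's n = 450

# Hypothetical standardized predictors (assumed independent for simplicity)
usability = rng.standard_normal(n)
feedback = rng.standard_normal(n)
transparency = rng.standard_normal(n)
support = rng.standard_normal(n)

# Simulated satisfaction built from the abstract's reported betas plus noise
satisfaction = (0.42 * usability + 0.35 * feedback
                + 0.21 * transparency + 0.18 * support
                + 0.30 * rng.standard_normal(n))

def ols_r2(X, y):
    """Fit OLS with an intercept; return (slope coefficients, R^2)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return beta[1:], r2

# Step 1: enter usability and feedback only
_, r2_step1 = ols_r2(np.column_stack([usability, feedback]), satisfaction)

# Step 2: add transparency and support; Delta R^2 shows their incremental
# contribution, the core logic of a hierarchical regression
coefs, r2_step2 = ols_r2(
    np.column_stack([usability, feedback, transparency, support]),
    satisfaction)

print(f"Step 1 R^2 = {r2_step1:.3f}, Step 2 R^2 = {r2_step2:.3f}, "
      f"Delta R^2 = {r2_step2 - r2_step1:.3f}")
print("recovered betas:", np.round(coefs, 2))
```

With independent predictors and a large simulated sample, the recovered coefficients closely track the betas used to generate the data, and the increase in R² from step 1 to step 2 mirrors the smaller but significant contribution attributed to transparency and support.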
These results underscore the critical importance of user‑centered interface design and the delivery of clear, actionable feedback within OIAS. By prioritizing intuitive navigation, minimizing system downtime, and communicating transparent scoring rubrics, institutions can enhance learner engagement, reduce anxiety, and foster greater trust in digital assessments. The study concludes with targeted recommendations for system developers—to integrate iterative usability testing and adaptive feedback modules—and for academic policymakers—to invest in robust support infrastructures and transparent communication strategies. This comprehensive evaluation not only advances theoretical understanding of digital assessment satisfaction but also offers practical guidance for optimizing OIAS to support effective, equitable evaluation in evolving educational landscapes.
Keywords
Online assessment, student satisfaction, usability, feedback quality, higher education