DOI: https://doi.org/10.63345/ijre.v14.i10.5
Prof. Dr. Sanjay Kumar Bahl
Indus International University
Haroli, Una, Himachal Pradesh – 174301, India
Abstract
The accelerated shift to remote teaching, driven by global crises and technological advancements, has fundamentally transformed educational assessment practices. This study provides a comprehensive examination of subjective versus objective assessment methods within remote learning environments, elucidating their respective merits, drawbacks, and optimal integration strategies. Through an online survey of 100 higher education stakeholders—comprising both instructors and students—this research captures nuanced perceptions of fairness, workload, learning outcomes, and engagement. Quantitative analysis employs descriptive statistics and inferential testing to identify significant differences in preferences and perceived effectiveness, while qualitative thematic analysis distills key experiential insights. Findings reveal that objective assessments, such as automated quizzes and standardized tests, excel in efficiency, consistency, and immediate feedback, thereby supporting formative evaluation and large-scale deployment. Conversely, subjective assessments—including essays, project portfolios, and peer reviews—demonstrate superior capacity to assess higher-order thinking, creativity, and authentic application of knowledge, albeit at the cost of increased instructor workload and potential evaluator bias. Participants overwhelmingly endorse a hybrid approach that strategically combines objective checkpoints with in-depth subjective assignments to balance scalability, integrity, and pedagogical depth. This manuscript concludes with actionable recommendations for remote course designers, including rubric standardization, peer-assessment frameworks, and technology-mediated feedback tools, and discusses the study's scope, methodological constraints, and avenues for future longitudinal research.
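The abstract reports descriptive statistics and inferential testing over the 100 survey responses without naming the specific tests used. Purely as an illustration, the sketch below shows one plausible analysis of such data in Python; the CSV file, column names, and the choice of non-parametric tests are assumptions for this example, not the study's actual pipeline.

```python
# Illustrative sketch only: the survey schema and tests below are assumed,
# not taken from the paper. Respondents are assumed to rate the perceived
# effectiveness of each assessment type on a 1-5 Likert scale.
import pandas as pd
from scipy import stats

# Hypothetical survey export: one row per respondent, with a "role" column
# ("instructor" or "student") and two Likert ratings.
df = pd.read_csv("survey_responses.csv")

# Descriptive statistics by stakeholder group.
print(df.groupby("role")[["objective_rating", "subjective_rating"]].describe())

# Between-group comparison: do instructors and students rate the
# effectiveness of subjective assessments differently? A Mann-Whitney U
# test is used because Likert responses are ordinal.
instructors = df.loc[df["role"] == "instructor", "subjective_rating"]
students = df.loc[df["role"] == "student", "subjective_rating"]
u_stat, p_value = stats.mannwhitneyu(instructors, students, alternative="two-sided")
print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_value:.4f}")

# Within-respondent comparison: are objective and subjective assessments
# rated differently by the same person? Wilcoxon signed-rank test for
# paired ordinal data.
w_stat, p_paired = stats.wilcoxon(df["objective_rating"], df["subjective_rating"])
print(f"Wilcoxon W = {w_stat:.1f}, p = {p_paired:.4f}")
```

Non-parametric tests are shown here because Likert responses are ordinal; a study with this design might equally report t-tests or chi-square statistics, depending on how the responses were coded.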
Keywords
Remote Teaching, Subjective Assessment, Objective Assessment, Online Learning, Assessment Design