DOI: https://doi.org/10.63345/ijre.v14.i10.4
Prof. (Dr) Punit Goel
Maharaja Agrasen Himalayan Garhwal University
Uttarakhand, India
ORCID: https://orcid.org/0000-0002-3757-3123
Abstract
Rubric-based grading in online project-based learning (PBL) environments has garnered significant attention for its potential to address long-standing challenges in assessment transparency, consistency, and learner engagement. This paper reports the context, methods, findings, and implications of implementing analytic rubrics in a fully online engineering design course. We conducted a convergent mixed-methods study involving 120 undergraduate engineering students and five instructors. The analytic rubric, comprising five well-defined criteria (Problem Definition, Technical Solution, Prototype Quality, Documentation, and Reflection), was iteratively developed through collaborative workshops and expert validation. Quantitative analyses revealed consistently high inter-rater reliability (Cronbach’s α = .89–.93; ICC = .88–.92) across three project assignments, indicating robust scoring consistency. Qualitative data from focus groups and reflective journals indicated that students perceived rubric use as enhancing fairness, clarifying expectations, and promoting self-regulated learning behaviors. Instructors reported an initial increase in rubric development workload (approximately 6–8 hours per rubric) but observed a 20% reduction in grading time per project over the semester. Iterative rubric refinement, driven by thematic feedback on descriptor clarity and performance-level granularity, further improved reliability and user satisfaction. Key recommendations include embedding rubric training in instructor professional development, adopting a cyclical rubric refinement process, and exploring technological supports such as rubric-integrated learning management system (LMS) tools. This study contributes empirical evidence to the PBL and online assessment literature by demonstrating that thoughtfully designed and continuously refined analytic rubrics can foster equitable, transparent, and pedagogically rich assessment practices in digital learning contexts.
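The reliability statistics reported above can be computed from a projects-by-raters score matrix. The sketch below is illustrative only: the study does not publish its raw scores or state which ICC form it used, so the data here are hypothetical and ICC(2,1) (Shrout & Fleiss, 1979; two-way random effects, absolute agreement, single rater) is shown as one common choice alongside Cronbach’s α.

```python
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_subjects, k_raters) score matrix."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)      # variance of each rater's scores
    total_var = scores.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def icc2_1(scores: np.ndarray) -> float:
    """ICC(2,1) from two-way ANOVA mean squares for an (n_targets, k_raters) matrix."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)  # per-project means
    col_means = scores.mean(axis=0)  # per-rater means
    msr = k * ((row_means - grand) ** 2).sum() / (n - 1)  # between projects
    msc = n * ((col_means - grand) ** 2).sum() / (k - 1)  # between raters
    resid = scores - row_means[:, None] - col_means[None, :] + grand
    mse = (resid ** 2).sum() / ((n - 1) * (k - 1))        # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical data: six projects scored by three raters on a 20-point rubric.
scores = np.array([
    [17, 18, 16],
    [14, 15, 15],
    [19, 19, 18],
    [11, 12, 12],
    [16, 15, 17],
    [13, 14, 13],
], dtype=float)

print(f"Cronbach's alpha: {cronbach_alpha(scores):.2f}")
print(f"ICC(2,1): {icc2_1(scores):.2f}")
```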
Keywords
Rubric-Based Grading, Online Learning, Project-Based Learning, Assessment Reliability, Student Engagement