Breaking the Assessment Bottleneck: Finding Human-AI Balance in Feedback at Scale
From CITL
At-scale programs face a dilemma: how to maintain educational quality and a great student experience while keeping programs affordable and sustainable. This presentation demonstrates how AI-assisted grading, in which AI helps graders focus their time on the students who need the most assistance, can resolve this tension. Drawing on three experiments at the Gies College of Business, it shows how institutions can deliver quality education at scale without unsustainable cost increases.
Traditional at-scale programs face critical trade-offs: hiring additional graders increases costs, while instructional-team burnout and delayed or limited feedback compromise the student experience. These constraints encourage reliance on multiple-choice or team assignments, which reduce grading costs but limit learning outcomes. Our experiments show how AI-assisted grading breaks this cycle, enabling meaningful individual assessments at scale without proportional cost increases.
During this session, Adam King and Julia Sabin shared research findings on student perceptions, results from full deployment in at-scale courses, and outcomes from AI-generated customized feedback. The talk includes a demonstration of the Sense AI grading platform, showing how the technology supports human graders in analyzing student work and providing feedback at scale with appropriate oversight.
Attendees explored ethical considerations and implementation strategies, gaining actionable insights for using AI to make at-scale programs more affordable while improving educational quality.