Assessment in a Time of Coronavirus and Closed Campuses

Today is March 15, 2020, and, by now, most law schools have either announced a transition to fully online teaching or set a date when they will begin doing so.  Although many schools have said that this situation is temporary and will last for no more than a few weeks, my personal prediction is that most schools will not resume face-to-face teaching this semester.  This post invites faculty and administrators to think now about the consequences for assessment during this challenging time.

Although my usual interest is in programmatic assessment, here I am writing specifically about course-based assessment.  On the one hand, the next six weeks or so may be an opportunity for faculty to provide more formative assessments to students, such as low-stakes quizzes, essays, and discussion posts. Such activities are a way to keep students engaged with the material.

However, there is a looming assessment issue that will require some attention sooner rather than later: how to engage in the typical end-of-semester summative assessments, such as final exams and, for skills classes, final activities. The questions that a law school must answer are several and complex.

Contaminating Student Assessment with Class Participation and Attendance “Bumps”

Professor Jay Silver (St. Thomas Law [FL]) has an interesting essay on Inside Higher Ed, The Contamination of Student Assessment.  In it, he argues that we undermine our efforts to utilize reliable summative assessment mechanisms when we provide “bumps” for class participation, attendance, and going to outside events. Here’s an excerpt:

In the era of outcomes assessment, testing serves to measure, more than ever, whether students have assimilated particular knowledge and developed certain skills. A student’s mere exposure to information and instruction in skills does not, in today’s assessment regime, reflect a successful outcome. The assessments crowd wants proof that it sank in, and grades are the unit of measurement.

Accordingly, extra credit for attendance at, say, even the most erudite and inspiring guest lecture outside class corrupts grades as a pure measurement of performance.

Sure, the lecture can be of value, whether demonstrable or not, in the intellectual development of the student, and giving credit for going to it is an effective incentive to attend. Nonetheless, that type of extra credit contaminates grades as a measure of performance, as it can allow the grades of students who attend extra-credit events to leapfrog over the grades of those who outperformed them on the exam but did not attend.

The next contaminant of grades as measurements of performance is the upgrade for stellar attendance. There is, of course, no guarantee that the ubiquitous attendee wasn’t an accomplished daydreamer or a back-row socialite. And if they were present and genuinely plugged in to each class, their diligence should show up in their exam performance, so extra credit merely gilds the lily.

Using the stick as well as the carrot, some professors do the opposite: they downgrade students for disruptive behavior or chronically poor preparation or attendance. Like a doctor using a hammer to anesthetize a patient, downgrades aimed at controlling behavior produce collateral damage. Colleges have better tools — like meetings with the dean of students — to address conduct-related problems.

Finally, we come to what may well be the most common nonperformance variable incorporated into grades: the class participation upgrade that so many of us rely on to break the deafening silence we’d otherwise encounter in casting pearls of wisdom upon the class. Class participation upgrades that recognize and reward the volume, rather than quality, of a student’s classroom contributions pollute performance-based assessment.

Participation upgrades for remarks that consistently advance the class discussion are a more complex issue. …

I encourage readers to check out the full article, which raises some interesting points. It has caused me to rethink whether to give “bumps” for class participation. (I’ve never given bumps for attending outside events, and class attendance for our students is compulsory.)  A broader point: although those of us in the “assessment crowd” think and write a lot about formative assessment, we should also recognize the importance of utilizing reliable and effective methods for summative assessment—a point that Professor Silver notes throughout the article.