A Simple, Low-Cost Assessment Process?

Professor Andrea Curcio (Georgia State) has published A Simple Low-Cost Institutional Learning-Outcomes Assessment Process, 67 J. Legal Educ. 489 (2018). It’s an informative article, arguing that, in light of budgetary pressures, faculty should use AAC&U-style rubrics to assess competencies across a range of courses. The results can then be pooled and analyzed. In her abstract on SSRN, Professor Curcio states:

The essay explains a five-step institutional outcomes assessment process:

1. Develop rubrics for institutional learning outcomes that can be assessed in law school courses;
2. Identify courses that will use the rubrics;
3. Ask faculty in designated courses to assess and grade as they usually do, adding only one more step – completion of a short rubric for each student;
4. Enter the rubric data; and
5. Analyze and use the data to improve student learning.

The essay appendix provides sample rubrics for a wide range of law school institutional learning outcomes. This outcomes assessment method provides an option for collecting data on institutional learning outcomes assessment in a cost-effective manner, allowing faculties to gather data that provides an overview of student learning across a wide range of learning outcomes. How faculties use that data depends upon the results as well as individual schools’ commitment to using the outcomes assessment process to help ensure their graduates have the knowledge, skills and values necessary to practice law.

This is an ideal way to conduct assessment because it measures students’ actual performance in their classes rather than on a simulated exercise that is unconnected to any course and on which, therefore, they may not give full effort. The article is particularly valuable to the field because it includes sample rubrics for a range of learning outcomes that law schools are likely to measure. It’s definitely worth a read!

My only concern is with getting faculty buy-in. Professor Curcio states, “In courses designated for outcomes measurement, professors add one more step to their grading process. After grading, faculty in designated courses complete an institutional faculty-designed rubric that delineates, along a continuum, students’ development of core competencies encompassed by a given learning outcome. The rubric may be applied to every student’s work or to that of a random student sample.”

NLJ: Feedback on Feedback

Karen Sloan of the National Law Journal reports on a symposium issue of the University of Detroit Mercy Law Review about formative assessment. She compares two studies that seem to reach different conclusions on the subject.

First up is an article by a group of law professors at Ohio State, led by Ruth Colker, who conducted a study offering a voluntary practice test to students in Constitutional Law. Those who opted to take the voluntary test and receive a mock grade did better on the final exam. Those students also did better in their other subjects than non-participants.

The second article was by David Siegel of New England. He examined whether individualized outreach to low-performing students would improve their end-of-semester grades. In his study, he sent e-mails to students in his course who scored low on quizzes and followed up with one-on-one meetings. His control group consisted of students who scored slightly higher on the quizzes but received no individualized feedback or one-on-one meetings. He found no statistically significant difference between the final grades of the two groups.

From this, Ms. Sloan concludes:

There’s enough research out there on the benefits of formative assessments to put stock in the conclusion the Ohio State professors reached, that more feedback on tests and performance helps. But I think Siegel’s study tells us that the manner and context of how that feedback is delivered makes a difference. It’s one thing to have a general conversation with low performing students. But issuing a grade on a practice exam—even if it doesn’t count toward their final grade—I suspect is a real wake-up call to students that they may need to step up and make some changes.

I agree 100% with Ms. Sloan’s takeaway. One additional point: the two studies are really measuring two different things. Professor Colker’s was about formative assessment, while Professor Siegel’s was about the efficacy of early alerts; after all, every student in his class took the quiz and got the results. I also note that Professor Siegel’s “control group” wasn’t really one, since its members received higher grades on the first quiz, albeit only slightly higher. It may be that this group benefited simply from taking the quiz and seeing their scores. An interesting way to re-run the study would be to do as Professor Colker and her colleagues did at Ohio State: invite students from all grade ranges to participate in the extra feedback. Of course, there is still the problem of correlation versus causation. It may be that the students in Professor Colker’s study were simply more motivated, and that motivation, rather than the feedback itself, is the true driver of the improvement in grades. Nevertheless, these are two important studies and welcome additions to the conversation about assessment in legal education. (LC)