As reported today on TaxProf, Professor Sarah Schendel (Suffolk) has a new article on SSRN, “What You Don’t Know (Can Hurt You): Using Exam Wrappers to Foster Self-Assessment Skills in Law Students.” She describes exam wrappers as a “one page post-exam exercise” that asks students to self-assess their “exam preparation and exam taking skills” and “prompt[s] them to consider changes to their techniques.” Exam wrappers have been used in a number of disciplines, including physics, chemistry, and second language acquisition; however, they are not widespread in legal education.
At TaxProf, Dean Caron has links to Detroit Mercy’s recent symposium on formative assessment. Many of the articles look interesting!
A colleague and I were just chatting about time-efficient ways to incorporate more assessment activities into our writing courses, and we began talking about the value of self-assessment in the writing process. Here are some quick resources on the subject:
- Joi Montiel, Empower the Student, Liberate the Professor: Self-Assessment by Comparative Analysis, 39 S. Ill. U. L.J. 249 (2015).
- Olympia Duhart & Anthony Niedwiecki, Using Legal Writing Portfolios and Feedback Sessions as Tools to Build Better Writers, 24 Second Draft 8-9 (Fall 2010).
- Texas A&M Writing Center, Self-Assessment
- Northwestern University, The Writing Place, Performing a Writing Self-Assessment
- Stanford University, Teaching Commons, Student Self-Assessment
- Andrade, H. & Valtcheva, A. (2009). Promoting learning and achievement through self-assessment. Theory Into Practice, 48, 12-19.
- Nielsen, K. (2014). Self-assessment methods in writing instruction: A conceptual framework, successful practices and essential strategies. Journal of Research in Reading, 37(1).
Karen Sloan of the National Law Journal reports on a symposium issue of the University of Detroit Mercy Law Review about formative assessment. She compares two studies that seem to reach different conclusions on the subject.
First up is an article by a group of Ohio State law professors, led by Ruth Colker, who conducted a study offering a voluntary practice test to students in Constitutional Law. Students who opted to take the practice test and receive a mock grade did better on the final exam. They also did better in their other courses than non-participants.
The second article was by David Siegel of New England. He examined whether individualized outreach to low-performing students would improve their end-of-semester grades. In his study, he sent e-mails to students in his course who scored low on quizzes and held follow-up meetings with them. His control group consisted of students who scored slightly higher on the quizzes but received no individualized feedback or one-on-one meetings. He found no statistically significant difference between the final grades of the two groups.
From this, Ms. Sloan concludes:
There’s enough research out there on the benefits of formative assessments to put stock in the conclusion the Ohio State professors reached, that more feedback on tests and performance helps. But I think Siegel’s study tells us that the manner and context of how that feedback is delivered makes a difference. It’s one thing to have a general conversation with low performing students. But issuing a grade on a practice exam—even if it doesn’t count toward their final grade—I suspect is a real wake-up call to students that they may need to step up and make some changes.
I agree 100% with Ms. Sloan’s takeaway. One additional point: the two studies really measured two different things. Professor Colker’s was about formative assessment, while Professor Siegel’s was about the efficacy of early alerts; after all, every student in his class took the quiz and received the results. I also note that Professor Siegel’s “control group” wasn’t truly a control, since its members scored higher on the first quiz, albeit only slightly. It may be that this group benefited simply from taking the quiz and seeing their scores. An interesting way to re-run the study would be to do as Professor Colker and her colleagues did at Ohio State: invite participants from all grade ranges to receive the extra feedback. Of course, there’s still the problem of correlation versus causation. It may be that the students in Professor Colker’s study were simply more motivated, and that motivation is the true driver of the improvement in grades. Nevertheless, these are two important studies and welcome additions to the conversation about assessment in legal education. (LC)