Linda Suskie has a new blog post up about the difference between course and program learning goals. She begins by cutting through some of the jargon and vocabulary to summarize learning goals as:
Learning goals (or whatever you want to call them) describe what students will be able to do as a result of successful completion of a learning experience, be it a course, program or some other learning experience. So course learning goals describe what students will be able to do upon passing the course, and program learning goals describe what students will be able to do upon successfully completing the (degree or certificate) program.
I encourage readers to check out the full post from Ms. Suskie.
Dean Vikram Amar (Illinois) has an excellent post on Above the Law about exam writing. He offers four thoughts based on his experience as a professor, associate dean, and dean. First, Dean Amar talks about the benefits of interim assessments:
Regardless of how much weight I attach to midterm performance in the final course grade, and even if I use question types that are faster to grade than traditional issue spotting/analyzing questions — e.g., short answer, multiple-choice questions, modified true/false questions in which I tell students that particular statements are false but ask them to explain in a few sentences precisely how so — the feedback I get, and the feedback the students get, is invaluable.
Second, Dean Amar articulates an argument in favor of closed-book exams.
At TaxProf, Dean Caron has links to Detroit Mercy’s recent symposium on formative assessment. Many of the articles look interesting!
A colleague and I were just chatting about time efficient ways to incorporate more assessment activities in our writing courses, and we began talking about the value of self-assessment in the writing process. Here are some quick resources on the subject:
- Joi Montiel, Empower the Student, Liberate the Professor: Self-Assessment by Comparative Analysis, 39 S. Ill. U. L.J. 249 (2015).
- Olympia Duhart & Anthony Niedwiecki, Using Legal Writing Portfolios and Feedback Sessions as Tools to Build Better Writers, 24 Second Draft 8-9 (Fall 2010).
- Texas A&M Writing Center, Self-Assessment
- Northwestern University, The Writing Place, Performing a Writing Self-Assessment
- Stanford University, Teaching Commons, Student Self-Assessment
- Andrade, H. & Valtcheva, A. (2009). Promoting learning and achievement through self-assessment. Theory Into Practice, 48, 12-19.
- Nielsen, K. (2014). Self-assessment methods in writing instruction: A conceptual framework, successful practices and essential strategies. Journal of Research in Reading, 37(1).
Karen Sloan of the National Law Journal reports on a symposium issue of the University of Detroit Mercy Law Review about formative assessment. She compares two studies that seem to reach different conclusions on the subject.
First up is an article by a group of Ohio State law professors, led by Ruth Colker, who conducted a study offering a voluntary practice test to students in Constitutional Law. Students who opted to take the practice test and receive a mock grade did better on the final exam, and they also did better in their other subjects than non-participants.
The second article was by David Siegel of New England. He examined whether individualized outreach to low-performing students would improve their end-of-semester grades. In his study, he sent e-mails to students in his course who scored low on quizzes and followed up with one-on-one meetings. His control group was students who scored slightly higher on the quizzes but received neither the individualized feedback nor the meetings. He found no statistically significant difference between the final grades of the two groups.
From this, Ms. Sloan concludes:
There’s enough research out there on the benefits of formative assessments to put stock in the conclusion the Ohio State professors reached, that more feedback on tests and performance helps. But I think Siegel’s study tells us that the manner and context of how that feedback is delivered makes a difference. It’s one thing to have a general conversation with low performing students. But issuing a grade on a practice exam—even if it doesn’t count toward their final grade—I suspect is a real wake-up call to students that they may need to step up and make some changes.
I agree 100% with Ms. Sloan’s takeaway. One additional point: the two studies are really measuring two different things. Professor Colker’s was about formative assessment, while Professor Siegel’s was about the efficacy of early alerts. After all, all students in his class took the quiz and got their results. I also note that Professor Siegel’s “control group” wasn’t really one, since its members received higher grades on the first quiz, albeit only slightly higher ones. It may be that this group benefitted simply from taking the quiz and seeing their scores. An interesting way to re-run the study would be to do as Professor Colker and her colleagues did at Ohio State: invite students from all grade ranges to participate in the extra feedback. Of course, there’s still the problem of cause-and-effect versus correlation. It may be that the students in Professor Colker’s study were simply more motivated, and it is this fact—motivation—that is the true driver of the improvement in grades. Nevertheless, these are two important studies and additions to the conversation about assessment in legal education. (LC)
Over at TaxProf, Dean Caron reports on a University of Minnesota study that found that students who were randomly assigned to 1L sections that had a class with individualized, formative assessments performed better in their other courses than those who did not. Daniel Schwarcz and Dion Farganis authored the study, which appears in the Journal of Legal Education.
From the overview section of the study:
The natural experiment arises from the assignment of first-year law students to one of several “sections,” each of which is taught by a common slate of professors. A random subset of these professors provides students with individualized feedback other than their final grades. Meanwhile, students in two different sections are occasionally grouped together in a “double-section” first-year class. We find that in these double-section classes, students in sections that have previously or concurrently had a professor who provides individualized feedback consistently outperform students in sections that have not received any such feedback. The effect is both statistically significant and hardly trivial in magnitude, approaching about one-third of a grade increment after controlling for students’ LSAT scores, undergraduate GPA, gender, race, and country of birth. This effect corresponds to a 3.7-point increase in students’ LSAT scores in our model. Additionally, the positive impact of feedback is stronger among students whose combined LSAT score and undergraduate GPA fall below the median at the University of Minnesota Law School.
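For readers curious what “controlling for” those factors looks like in practice, here is a rough sketch in Python. It is not the authors’ code, and the dataset and column names are invented for illustration; it just shows the general shape of a grade regression with a feedback indicator plus the controls the overview lists.

```python
# Hypothetical sketch of the kind of model the overview describes:
# grade regressed on whether the student's section had a feedback
# professor, controlling for credentials and demographics.
import pandas as pd
import statsmodels.formula.api as smf

# Invented dataset: one row per student per double-section exam.
grades = pd.read_csv("double_section_grades.csv")

model = smf.ols(
    "grade ~ feedback_section + lsat + ugpa + C(gender) + C(race) + C(us_born)",
    data=grades,
).fit()

# Per the study, the coefficient on the feedback indicator approaches
# one-third of a grade increment.
print(model.summary())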
What’s particularly interesting is how this study came about. Minnesota’s use of “double sections” created a natural control group to compare students who previously had formative assessment with those who did not.
The results should come as no surprise. Intuitively, students who practice and get feedback on a new skill should outperform students who do not. This study advances the literature by providing empirical evidence for this point in a law school context. The study is also significant because it shows that individualized, formative assessment in one class can benefit students in their other classes.
There are policy implications from this study. Should associate deans assign professors who practice formative assessment evenly across 1L sections so that all students benefit? Should all classes be required to have individualized, formative assessments? What resources are needed to promote greater use of formative assessments—smaller sections and teaching assistants, for example?
I just finished slogging through 85 final exams in my Evidence course, and it got me thinking about how I would teach the course if it were offered in a small format of, say, 20 students. Evidence at our school is a “core” course, one of five classes from which students must take at least four (the others are Administrative Law, Business Organizations, Tax, and Trusts and Estates). Naturally, therefore, it draws a big enrollment. I love teaching big classes because the discussions are much richer, but the format hampers my ability to give formative assessments. This semester, I experimented with giving out-of-class, multiple-choice quizzes after each unit. They served several purposes: they gave students practice with the material, and they allowed me to see students’ strengths and weaknesses. I was able to backtrack and go over concepts that students had particular difficulty mastering.
But having read 255 individual essays (85 exams times three essays each), I’m left convinced that students would benefit from additional feedback on essay writing. In lieu of a final exam, I’d love to give students a series of writing assignments throughout the semester. They could even take the form of practice writing documents, like motions. But to be effective, this change requires a small class. So that got me thinking: how would I change my teaching if my Evidence course had 20 students instead of 85?
Although the ABA standards concern themselves primarily with programmatic assessment—that is, whether a school has a process to determine if students are achieving the learning goals we set for them and then uses the results to improve the curriculum—they also speak to course-level assessment. While the ABA standards do not require formative assessment in every class (see Interpretation 314-2), the curriculum must contain sufficient assessments to ensure that students receive “meaningful feedback.”
Thus, I was delighted to learn from the ASP listserv that the Institute for Law Teaching and Emory Law School will be hosting a conference on course-level formative assessment in large classes on March 25, 2017, in Atlanta, Georgia. More information at the link above.
A new study out of BYU attempts to answer the question of whether time-pressured law school exams are really typing speed tests. It’s summarized at TaxProf, and the full article is here. From the abstract on SSRN:
What, if any, is the relationship between speed and grades on first year law school examinations? Are time-pressured law school examinations typing speed tests? Employing both simple linear regression and mixed effects linear regression, we present an empirical hypothesis test on the relationship between first year law school grades and speed, with speed represented by two variables: word count and student typing speed. Our empirical findings of a strong statistically significant positive correlation between total words written on first year law school examinations and grades suggest that speed matters. On average, the more a student types, the better her grade. In the end, however, typing speed was not a statistically significant variable explaining first year law students’ grades. At the same time, factors other than speed are relevant to student performance.
In addition to our empirical analysis, we discuss the importance of speed in law school examinations as a theoretical question and indicator of future performance as a lawyer, contextualizing the question in relation to the debate in the relevant psychometric literature regarding speed and ability or intelligence. Given that empirically, speed matters, we encourage law professors to consider more explicitly whether their exams over-reward length, and thus speed, or whether length and assumptions about speed are actually a useful proxy for future professional performance and success as lawyers.
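To make the abstract’s methodology a bit more concrete, here is a minimal sketch of the two approaches it names—simple linear regression and a mixed-effects model. The column names and dataset are my own invention, not the authors’ data or code; the mixed-effects model adds a random intercept per student, since each 1L writes several exams.

```python
# Minimal sketch (assumed column names, not the study's code) of the two
# models the abstract describes.
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical dataset: one row per exam answer.
exams = pd.read_csv("exam_data.csv")

# Simple linear regression: grade as a function of total words written.
ols_fit = smf.ols("grade ~ word_count", data=exams).fit()
print(ols_fit.summary())

# Mixed-effects regression: word count and typing speed as fixed effects,
# with a random intercept for each student.
mixed_fit = smf.mixedlm(
    "grade ~ word_count + typing_speed",
    data=exams,
    groups=exams["student_id"],
).fit()
print(mixed_fit.summary())
```

Under this setup, a significant positive coefficient on word count combined with a null result on typing speed would match the pattern the authors report.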
The study raises important questions of how we structure exams. I know of colleagues who impose word count limits (enforceable thanks to exam software), and I think I may be joining the ranks. More broadly, are our high-stakes final exams truly measuring what we want them to?