Although the ABA standards concern themselves primarily with programmatic assessment—that is, whether a school has a process to determine if students are achieving the learning goals we set for them and then uses the results to improve the curriculum—they also speak to course-level assessment. While the ABA standards do not require formative assessment in every class (see Interpretation 314-2), the curriculum must contain sufficient assessments to ensure that students receive “meaningful feedback.”
Thus, I was delighted to learn from the ASP listserv that the Institute for Law Teaching and Emory Law School will be hosting a conference on course-level formative assessment in large classes on March 25, 2017, in Atlanta, Georgia. More information at the link above.
A new study out of BYU attempts to answer the question. It’s summarized at TaxProf and the full article is here. From the abstract on SSRN:
What, if any, is the relationship between speed and grades on first year law school examinations? Are time-pressured law school examinations typing speed tests? Employing both simple linear regression and mixed effects linear regression, we present an empirical hypothesis test on the relationship between first year law school grades and speed, with speed represented by two variables: word count and student typing speed. Our empirical findings of a strong statistically significant positive correlation between total words written on first year law school examinations and grades suggest that speed matters. On average, the more a student types, the better her grade. In the end, however, typing speed was not a statistically significant variable explaining first year law students’ grades. At the same time, factors other than speed are relevant to student performance.
In addition to our empirical analysis, we discuss the importance of speed in law school examinations as a theoretical question and indicator of future performance as a lawyer, contextualizing the question in relation to the debate in the relevant psychometric literature regarding speed and ability or intelligence. Given that empirically, speed matters, we encourage law professors to consider more explicitly whether their exams over-reward length, and thus speed, or whether length and assumptions about speed are actually a useful proxy for future professional performance and success as lawyers.
The study raises important questions about how we structure exams. I know of colleagues who impose word-count limits (enforceable thanks to exam software), and I think I may join their ranks. More broadly, are our high-stakes final exams truly measuring what we want them to measure?
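For readers curious what the study’s “simple linear regression” of grades on word count looks like in practice, here is a minimal sketch. The numbers are entirely hypothetical—invented for illustration, not drawn from the BYU data—and the slope is an assumption built into the simulation:

```python
# Illustrative sketch only: a simple linear regression of exam grade on
# word count, in the spirit of the study's analysis. All data here are
# simulated with made-up parameters, not the study's actual dataset.
import random

random.seed(0)

# Hypothetical exams: word counts, and grades that rise with length plus noise.
word_counts = [random.randint(1500, 4500) for _ in range(100)]
grades = [2.0 + 0.0004 * w + random.gauss(0, 0.15) for w in word_counts]

n = len(word_counts)
mean_x = sum(word_counts) / n
mean_y = sum(grades) / n

# Ordinary least squares: slope = cov(x, y) / var(x)
sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(word_counts, grades))
sxx = sum((x - mean_x) ** 2 for x in word_counts)
slope = sxy / sxx
intercept = mean_y - slope * mean_x

print(f"estimated grade gain per 1,000 extra words: {slope * 1000:.3f}")
print(f"intercept (predicted grade at zero words): {intercept:.2f}")
```

A positive estimated slope is what a “more words, higher grade” correlation looks like; the study’s further step—testing typing speed as a separate variable in a mixed-effects model—would add student-level predictors to this same framework.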
Eunice Park (Western State) has a short piece on SSRN, featured in the SSRN Legal Writing eJournal and published in the AALS Teaching Methods Newsletter, about assessing cultural competency in a legal writing appellate advocacy exercise. Cultural competency is listed in Interpretation 302-1 as an example of a “professional skill” that would satisfy Standard 302’s requirement that a school’s learning outcomes include “[o]ther professional skills needed for competent and ethical participation as a member of the legal profession.”
Professor Park writes:
Legal writing courses provide an ideal setting for raising awareness of the importance of sensitivity to diverse cultural mores. One way is by creating an assignment that demonstrates how viewing determinative facts from a strictly Western lens might lead to an unfair outcome.
In writing a recent appellate brief problem, I introduced cultural competence as a learning outcome by integrating culturally-sensitive legally significant facts into the assignment.
She goes on to describe the appellate brief problem and how it helped meet the goal of enhancing students’ cultural competency.
At Best Practices for Legal Education, Steven Friedland (Elon) has an interesting post about shifting focus from “teaching” to “learning,” paying particular attention to student motivation. As one of the comments noted, this in turn has implications for formative assessment.
I’ve taught a number of doctrinal, writing, clinical, and skills courses. Here are a few examples of learning outcomes I’ve used in recent course syllabi. I don’t offer these as models, but rather as examples of how I flesh out outcomes in a variety of course types. I include commentary below on particular objectives. Continue reading →
The new ABA standards are largely focused on programmatic assessment: measuring whether students, in fact, have learned the knowledge, skills, and values that we want them to achieve by the end of the J.D. degree. This requires a faculty to gather and analyze aggregated data across the curriculum. Nevertheless, the ABA standards also implicate individual courses and the faculty who teach them.
According to the ABA Managing Director’s guidance memo on learning outcomes assessment, “Learning outcomes for individual courses must be published in the course syllabi.” Continue reading →