I just finished slogging through 85 final exams in my Evidence course, and it got me thinking about how I would teach the course if it were offered in a small format of, say, 20 students. Evidence at our school is a “core” course, one of five classes from which students must take at least four (the others are Administrative Law, Business Organizations, Tax, and Trusts and Estates). Naturally, therefore, it draws a big enrollment. I love teaching big classes because the discussions are much richer, but the format hampers my ability to give formative assessments. This semester, I experimented with giving out-of-class, multiple-choice quizzes after each unit. They served several purposes: they gave students practice with the material, and they allowed me to see students’ strengths and weaknesses. I was able to backtrack and go over concepts that students had particular difficulty mastering.
But having read 255 individual essays (85 exams times three essays each), I’m convinced that students would benefit from additional feedback on essay writing. In lieu of a final exam, I’d love to give students a series of writing assignments throughout the semester. They could even take the form of practice litigation documents, like motions. But to be effective, this change requires a small class. So that got me thinking: how would I change my teaching if my Evidence course had 20 students instead of 85?
The ABA Standards set forth the minimum learning outcomes that every law school must adopt. They include “written and oral communication in the legal context.”
“Written communication” as a learning outcome is “low-hanging fruit” for law school assessment committees. For a few reasons, this is an easy area to begin assessing students’ learning on a program level:
- Per the ABA Standards, there must be a writing experience in the 1L year and in at least one of the upper-level semesters. (Some schools, such as ours, have several writing requirements.) This provides many opportunities to look at student growth over time by assessing the same students’ work as 1Ls and again as 2Ls or 3Ls. In theory, there should be improvement over time!
- Writing naturally generates “artifacts” to assess. Unlike other competencies, which may require the generation of special, artificial exams or other assessments, legal writing courses are already producing several documents per student to examine.
- Legal writing faculty are a naturally collaborative group, if I do say so myself! Even in schools without a formal structure (so-called “directorless” programs), my experience is that legal writing faculty work together on common problems/assignments, syllabi, and rubrics. This allows for assessment across sections. I also find that legal writing faculty, given the nature of their courses, think a lot about assessment generally.
Oral communication is another matter; it is a more difficult outcome to assess. Apart from a first-year moot court exercise, most schools don’t require courses in oral skills, although that may be changing with the ABA’s new experiential learning requirement. Still, I think there are some good places in the curriculum to look for evidence of student learning on this outcome. Trial and appellate advocacy courses, for example, require significant demonstration of the skill, although at some schools only a few students may take advantage of these opportunities. Clinics are a goldmine, as are externships. For these courses, surveying faculty about students’ oral communication skills is one way to gather evidence of student learning; however, that is an indirect measure. A better way to assess this outcome is to use common rubrics for particular assignments or experiences. For example, after students appear in court on a clinic case, the professor could rate them using a commonly applied rubric. Those rubrics could serve both to grade the individual students and to assess student learning more generally.