I’m delighted to welcome Ezra Goldschlager (La Verne) for a guest post on the language of assessment:
***
When the ABA drafted and approved Standard 314, requiring law schools to “utilize … assessment methods” to “measure and improve student learning,” it missed an opportunity to focus law schools’ attention and energy in the right place: continuous quality improvement. The ABA’s choice of language in 314 (requiring schools to engage in “assessment”) and in the similarly framed 315 (requiring law schools to engage in “ongoing evaluation” of their programs of legal education) will guide law schools down the wrong path, for a few reasons.
Calling it “assessment” gives it an air of “otherness”; it conjures a now decades-old dialogue about a shift from teaching to learning, and brings with it all of the associated preconceptions (and baggage). “Assessment” is a loaded term of art, imbued with politics and paradigm shifts.
Because it is rooted in this history, “assessment” can overwhelm. Asking faculty to improve their teaching is no small request, but it does not impose the burden that asking them to “engage in assessment” does. I can try to help my students learn better without first accounting for a substantial body of literature; ask me to “do assessment,” though, and I may stumble at the starting line, at least vaguely aware of that literature, brimming with best practices and commandments that require serious study.
Calling it “assessment,” that is, labeling it as something stand-alone (apart from our general desire to get better at our profession), makes it easy (and natural) for faculty to consider it something additional, outside their ordinary responsibilities. Administrators can do “assessment,” we may conclude; I will “teach.”
When administrators lead faculties in “assessment,” the focus on measurement is daunting. Faculty buy-in is a chronic problem for assessment efforts, in part because of suspicion about what is motivating the measurement and what might be done with the results. That suspicion is only natural: we expect, in general, that measurement is done for a reason, and when administrators tell faculty to get on board with assessment, faculty must wonder: to what end?
Finally, and probably most important, calling it “assessment” makes the improvement part of the process seem separate and perhaps optional. We don’t usually go to the doctor just for a diagnosis; we want to get better. A patient who asks her doctor only to “diagnose,” however, leaves a question hanging: does she want help improving, too? Calling it “assessment” puts the focus on only part of a cycle that must be completed if any of the assessing is to be worthwhile. “Don’t just assess, close the loop,” faculty are admonished. That admonishment might be less necessary if the instruction in the first place were not simply to “assess.”