About Larry Cunningham

Law professor, associate dean, and director of the Center for Trial and Appellate Advocacy at St. John's Law School. Former prosecutor and defense attorney.

Guest Post (Ezra Goldschlager): Don’t Call it Assessment – Focusing on “Assessment” is Alienating and Limiting

I’m delighted to welcome Ezra Goldschlager (LaVerne) for a guest post on the language of assessment:

***

When the ABA drafted and approved Standard 314, requiring law schools to “utilize … assessment methods” to “measure and improve student learning,” it missed an opportunity to focus law schools’ attention and energy in the right place: continuous quality improvement. The ABA’s choice of language in 314 (requiring schools to engage in “assessment”) and the similarly framed 315 (requiring law schools to engage in “ongoing evaluation” of programs of legal education) will guide law schools down the wrong path for a few reasons.

Calling it “assessment” gives it an air of “otherness”; it conjures a now decades-old dialogue about a shift from teaching to learning, and brings with it all of the associated preconceptions (and baggage). “Assessment” is a loaded term of art, imbued with politics and paradigm shifts.

Because it is rooted in this history, “assessment” can overwhelm. While asking faculty to improve their teaching may not sound trivial, it does not impose the same burden that asking faculty to “engage in assessment” can. I can try to help my students learn better without thinking about accounting for a substantial body of literature, but ask me to “do assessment” and I may stumble at the starting line, at least vaguely aware of this body of literature, brimming with best practices and commandments requiring me to engage in serious study.

Calling it “assessment,” that is, labeling it as something stand-alone (apart from our general desires to get better at our professions), makes it easy (and natural) for faculty to consider it something additional and outside of their general responsibilities. Administrators can do “assessment,” we may conclude; I will “teach.”

When administrators lead faculties in “assessment,” the focus on measurement is daunting. Faculty buy-in is a chronic problem for assessment efforts, and that’s in part because of suspicion about what’s motivating the measurement or what might be done with the results. This suspicion is only natural — we expect, in general, that any measurement is done for a reason, and when administrators tell faculty to get on board with assessment, they must wonder: to what end?

Finally, and probably most important, calling it “assessment” makes the improvement part of the process seem separate and perhaps optional. We don’t usually go to doctors just for a diagnosis; we want to get better. Asking one’s doctor to “diagnose,” however, does leave the question hanging: does my patient want me to help her improve, too? Calling it “assessment” puts the focus on only part of a cycle that must be completed if we are to make any of the assessing worthwhile. “Don’t just assess, close the loop,” faculty are admonished. That admonishment might not be as necessary if the instruction, in the first place, were not just to “assess.”

Assessment Institute – coming up in October!

I’ve written before about the importance of learning from other disciplines’ experiences with assessment.  A great place to do so is the annual Assessment Institute in Indianapolis, put on by IUPUI.  This year’s conference is October 21-23, 2018.  It’s a big conference (about 1000 attendees are expected) with a lot of interesting programs and panels.  It’s a terrific place to get ideas and see what other disciplines are up to.

This year’s Institute has two programs related to legal education and many more concerning graduate and professional education. The law school presentations are:

  • Building a Bridge Between Experiential Skills Development and Skills Assessment on Professional Licensing Exams – In this session, we will explore the relationship between student participation in experiential skills programs and scores earned on the skills assessment component of a professional licensing exam. Results from a longitudinal research study of University of Cincinnati Law students’ participation in clinics, externships, and clerkships and corresponding scores on the bar exam performance test will be presented. Participants will be encouraged to share the clinical and skills assessment contained in licensing exams for their associated fields, as well as approaches to better align clinical skills development and testing of clinical skills as a requirement for professional licensure.  The presenters are with the University of Cincinnati.
  • Isn’t the Bar Exam the Ultimate Assessment?: Learning Outcomes, ABA Standard 302, and Law Schools – The American Bar Association, the accrediting entity for American law schools, has recently adopted Standard 302, which requires law schools to “establish” learning outcomes related to knowledge of substantive and procedural law, legal analysis and reasoning, legal research, oral and written communication, professional responsibility (ethics), and other professional skills. The overwhelming majority of law school graduates also take a bar examination, the licensing exam required for admission to practice as a lawyer. How do these relate to each other? And how do both of them relate to law school exams and grades? Come find out!  The presenters are Diane J. Klein, University of La Verne College of Law; and Linda Jellum, Mercer School of Law.

The early bird registration deadline is Friday, September 14, 2018.

Keeping Track of Assessment Follow-Up

Once a school implements its assessment plan, it will begin collecting a lot of data, distilling it into results, and hopefully identifying recommendations for improving student learning based on those results.  That is a lot of data and information, and it’s easy for the work of a school’s assessment committees to end up sitting on a shelf, forgotten with the passage of time.  Assessment is not about producing reports; it’s about converting student data into meaningful action.

I developed a template for schools to use in keeping track of their assessment methods, findings, and recommendations.  You don’t need fancy software, like Weave, to keep track of a single set of learning outcomes (university-level metrics are another matter). A simple Excel spreadsheet will do (a minimal sketch of a script that generates such a spreadsheet appears after the list below).  For each learning outcome, list the following:

  • The year the outcome was assessed.
  • Who led the assessment team or committee.
  • The methods used to complete the assessment.
  • The committee’s key findings.
  • Recommendations based on the report.
  • For each recommendation:
    • Which administrator or committee is responsible for follow-up.
    • The status of that recommendation: whether it was implemented and when.
    • Color code based on status (green = implemented; yellow = in progress; red = no action to date).
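For those who prefer to generate the spreadsheet programmatically rather than by hand, here is a minimal sketch in Python, assuming the openpyxl library is available. The column names, the sample row, and the file name are my own illustrations, not part of the template itself:

```python
# A rough sketch of the tracking spreadsheet described above, generated with
# openpyxl. The columns, sample row, and colors are illustrative only.
from openpyxl import Workbook
from openpyxl.styles import PatternFill

# Status -> fill color: green = implemented, yellow = in progress, red = no action.
STATUS_FILLS = {
    "Implemented": PatternFill("solid", start_color="C6EFCE"),
    "In progress": PatternFill("solid", start_color="FFEB9C"),
    "No action to date": PatternFill("solid", start_color="FFC7CE"),
}

HEADERS = [
    "Learning outcome", "Year assessed", "Committee lead", "Methods",
    "Key findings", "Recommendation", "Responsible party", "Status",
]

# Hypothetical example row; real rows would come from the committee's report.
ROWS = [
    ("Outcome 1: Legal analysis", 2018, "Prof. A",
     "Rubric review of a sample of final exams",
     "Students struggled with counter-arguments",
     "Add counter-argument exercises to the first-year writing course",
     "Curriculum Committee", "In progress"),
]

wb = Workbook()
ws = wb.active
ws.title = "Assessment Tracking"
ws.append(HEADERS)
for row in ROWS:
    ws.append(row)
    # Color-code the Status cell of the row just added.
    status_cell = ws.cell(row=ws.max_row, column=len(HEADERS))
    fill = STATUS_FILLS.get(status_cell.value)
    if fill:
        status_cell.fill = fill

wb.save("assessment_tracking.xlsx")
```

A hand-maintained spreadsheet works just as well; the point is simply that every recommendation stays tied to a responsible party and a color-coded status.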

This simple format allows the dean and faculty to ensure that the assessment process produces tangible results.  In the template, I included examples of methods, findings, and recommendations for one of seven learning outcomes.  (These are made-up findings and recommendations that I created as an example.  They don’t necessarily reflect those of St. John’s.)  Feel free to use and adapt it at your school.  (LC)

Contaminating Student Assessment with Class Participation and Attendance “Bumps”

Professor Jay Silver (St. Thomas Law [FL]) has an interesting essay on Inside Higher Ed, The Contamination of Student Assessment.  In it, he argues that we undermine our efforts to utilize reliable summative assessment mechanisms when we provide “bumps” for class participation, attendance, and going to outside events. Here’s an excerpt:

In the era of outcomes assessment, testing serves to measure, more than ever, whether students have assimilated particular knowledge and developed certain skills. A student’s mere exposure to information and instruction in skills does not, in today’s assessment regime, reflect a successful outcome. The assessments crowd wants proof that it sank in, and grades are the unit of measurement.

Accordingly, extra credit for attendance at, say, even the most erudite and inspiring guest lecture outside class corrupts grades as a pure measurement of performance.

Sure, the lecture can be of value, whether demonstrable or not, in the intellectual development of the student, and giving credit for going to it is an effective incentive to attend. Nonetheless, that type of extra credit contaminates grades as a measure of performance, as it can allow the grades of students who attend extra-credit events to leapfrog over the grades of those who outperformed them on the exam but did not attend.

The next contaminant of grades as measurements of performance is the upgrade for stellar attendance. There is, of course, no guarantee that the ubiquitous attendee wasn’t an accomplished daydreamer or a back-row socialite. And if they were present and genuinely plugged in to each class, their diligence should show up in their exam performance, so extra credit merely gilds the lily.

Using the stick as well as the carrot, some professors do the opposite: they downgrade students for disruptive behavior or chronically poor preparation or attendance. Like a doctor using a hammer to anesthetize a patient, downgrades aimed at controlling behavior produce collateral damage. Colleges have better tools — like meetings with the dean of students — to address conduct-related problems.

Finally, we come to what may well be the most common nonperformance variable incorporated into grades: the class participation upgrade that so many of us rely on to break the deafening silence we’d otherwise encounter in casting pearls of wisdom upon the class. Class participation upgrades that recognize and reward the volume, rather than quality, of a student’s classroom contributions pollute performance-based assessment.

Participation upgrades for remarks that consistently advance the class discussion is a more complex issue. …

I definitely encourage readers to check out the full article.  It raises some interesting points. The article has caused me to rethink whether to give “bumps” for class participation. (I’ve never given bumps for attending outside events, and class attendance for our students is compulsory.)  A broader point: although those of us in the “assessment crowd” think and write a lot about formative assessment, we should also recognize the importance of utilizing reliable and effective methods for summative assessment—a point that Professor Silver notes throughout the article.

 

Quick Resources on Self-Assessment

A colleague and I were just chatting about time-efficient ways to incorporate more assessment activities into our writing courses, and we began talking about the value of self-assessment in the writing process.  Here are some quick resources on the subject:

Publishing Learning Objectives in Course Syllabi

With the Fall semester about a month away (eek!), many faculty are turning their attention to refreshing their courses and preparing their syllabi. This is an opportune time to repost my thoughts on course-level student learning outcomes, which the ABA requires us to publish to our students. Much ink has been spilled on what verbs are proper to use in our learning outcomes; as I noted in August 2016, I hope that we in legal education can take a more holistic view.


The new ABA standards are largely focused on programmatic assessment: measuring whether students, in fact, have learned the knowledge, skills, and values that we want them to achieve by the end of the J.D. degree. This requires a faculty to gather and analyze aggregated data across the curriculum. Nevertheless, the ABA standards also implicate individual courses and the faculty who teach them.

According to the ABA Managing Director’s guidance memo on learning outcomes assessment, “Learning outcomes for individual courses must be published in the course syllabi.” 


New Article: Building a Culture of Assessment in Legal Education

On SSRN, I have posted a draft article entitled “Building a Culture of Assessment in Law Schools.” It is available at https://ssrn.com/abstract=3216804.

Here’s the abstract:

A new era of legal education is upon us: Law schools are now required to assess learning outcomes across their degrees and programs, not just in individual courses. Programmatic assessment is new to legal education, but it has existed in higher education for decades. To be successful, assessment requires cooperation and buy-in from faculty. Yet establishing a culture of assessment in other disciplines has not been easy, and there is no reason to believe that it will be any different in legal education. A survey of provosts identified faculty buy-in as the single biggest challenge towards implementing assessment efforts. This article surveys the literature on culture of assessment, including conceptual papers and quantitative and qualitative studies. It then draws ten themes from the literature about how to build a culture of assessment: (1) the purpose of assessment, which is a form of scholarship, is improving student learning, not just for satisfying accreditors; (2) assessment must be faculty-driven; (3) messaging and communication around assessment is critical, from the reasons for assessment through celebrating successes; (4) faculty should be provided professional development, including in their own graduate studies; (5) resources are important; (6) successes should be rewarded and recognized; (7) priority should be given to utilizing faculty’s existing assessment devices rather than employing externally developed tests; (8) the unique needs of contingent faculty and other populations should be considered; (9) to accomplish change, stakeholders should draw on theories of leadership, business, motivation, and the social process of innovation; and (10) student affairs should be integrated with faculty and academic assessment activities. These themes, if implemented by law schools, will help programmatic assessment to become an effective addition to legal education and not just something viewed as a regulatory burden.

What is unique about this paper is that it draws almost exclusively from literature outside of legal education. Since programmatic assessment is new to many law schools, we can learn a great deal from those in other fields who have gone before us. The “scholarship of assessment” articles are particularly fascinating, since they employ rigorous empirical methods to ascertain the best practices for building a culture of assessment.

I welcome thoughts and reactions at Larry.Cunningham@stjohns.edu!

A Simple, Low-Cost Assessment Process?

Professor Andrea Curcio (Georgia State) has published A Simple Low-Cost Institutional Learning-Outcomes Assessment Process, 67 J. Legal Educ. 489 (2018). It’s an informative article, arguing that, in light of budgetary pressures, faculty should use AAC&U-style rubrics to assess competencies across a range of courses. The results can then be pooled and analyzed.  In her abstract on SSRN, Professor Curcio states:

The essay explains a five-step institutional outcomes assessment process: 1. Develop rubrics for institutional learning outcomes that can be assessed in law school courses; 2. Identify courses that will use the rubrics; 3. Ask faculty in designated courses to assess and grade as they usually do, adding only one more step – completion of a short rubric for each student; 4. Enter the rubric data; and 5. Analyze and use the data to improve student learning. The essay appendix provides sample rubrics for a wide range of law school institutional learning outcomes. This outcomes assessment method provides an option for collecting data on institutional learning outcomes assessment in a cost-effective manner, allowing faculties to gather data that provides an overview of student learning across a wide range of learning outcomes. How faculties use that data depends upon the results as well as individual schools’ commitment to using the outcomes assessment process to help ensure their graduates have the knowledge, skills and values necessary to practice law.

This is an ideal way to conduct assessment, because it involves measuring students’ actual performance in their classes, rather than on a simulated exercise that is unconnected to a course and in which, therefore, they may not give full effort. This article is particularly valuable to the field because it includes sample rubrics for a range of learning outcomes that law schools are likely to measure. It’s definitely worth a read!
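To get a feel for steps 4 and 5 (entering and then analyzing the pooled rubric data), nothing fancier than a shared spreadsheet and a short script is needed. Here is a minimal sketch in Python; the file name, column names, and rubric levels are hypothetical and are not drawn from Professor Curcio’s article:

```python
# A minimal sketch of pooling faculty-completed rubric scores and summarizing
# them by learning outcome. The CSV layout and rubric levels are hypothetical.
import csv
from collections import Counter, defaultdict

LEVELS = ["Beginning", "Developing", "Competent", "Exemplary"]  # assumed continuum

def summarize(path: str) -> None:
    # Tally how many students landed at each rubric level, per learning outcome.
    counts: dict[str, Counter] = defaultdict(Counter)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):  # expects columns: outcome, course, level
            counts[row["outcome"]][row["level"]] += 1

    # Report the distribution of levels for each outcome.
    for outcome, level_counts in counts.items():
        total = sum(level_counts.values())
        print(outcome)
        for level in LEVELS:
            n = level_counts.get(level, 0)
            print(f"  {level:<12} {100 * n / total:5.1f}%  (n={n})")

if __name__ == "__main__":
    summarize("rubric_scores.csv")
```

Pooled this way, the rubric data yields exactly the kind of curriculum-wide overview of student learning that the abstract describes.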

My only concern is with getting faculty buy-in.  Professor Curcio states, “In courses designated for outcomes measurement, professors add one more step to their grading process. After grading, faculty in designated courses complete an institutional faculty-designed rubric that delineates, along a continuum, students’ development of core competencies encompassed by a given learning outcome. The rubric may be applied to every student’s work or to that of a random student sample.”

NLJ: Feedback on Feedback

Karen Sloan of the National Law Journal reports on a symposium issue of the University of Detroit Mercy Law Review about formative assessment.  She compares two studies that seem to reach different conclusions on the subject.

First up is an article by a group of law professors at Ohio State, led by Ruth Colker, who conducted a study offering a voluntary practice test to students in Constitutional Law.  Those who opted for the voluntary test and mock grade did better on the final exam.  Those students also did better in their other subjects than non-participants.

The second article was by David Siegel of New England.  He examined whether individualized outreach to low-performing students would benefit their end-of-semester grades.  In his study, he sent e-mails to students in his course who scored low on quizzes.  He also had follow-up meetings with them.  His control group consisted of students who scored slightly higher on the quizzes but didn’t receive any individualized feedback or have one-on-one meetings.  He found no statistically significant difference between the final grades of the two groups.

From this, Ms. Sloan concludes:

There’s enough research out there on the benefits of formative assessments to put stock in the conclusion the Ohio State professors reached, that more feedback on tests and performance helps. But I think Siegel’s study tells us that the manner and context of how that feedback is delivered makes a difference. It’s one thing to have a general conversation with low performing students. But issuing a grade on a practice exam—even if it doesn’t count toward their final grade—I suspect is a real wake-up call to students that they may need to step up and make some changes.

I agree 100% with Ms. Sloan’s takeaway.  One additional point: the two studies are really measuring two different things. Professor Colker’s was about formative assessment, while Professor Siegel’s was about the efficacy of early alerts. After all, all students in his class took the quiz and got the results. I also note that Professor Siegel’s “control group” wasn’t really one, since its members scored higher on the first quiz, albeit only slightly. It may be that this group benefitted simply from taking the quiz and seeing their scores.  An interesting way to re-run the study would be to do as Professor Colker and her colleagues did at Ohio State: invite students from all grade ranges to participate in the extra feedback.  Of course, there’s still the problem of cause-and-effect versus correlation.  It may be that the students in Professor Colker’s study were simply more motivated, and it is this fact—motivation—that is the true driver of the improvement in grades.  Nevertheless, these are two important studies and additions to the conversation about assessment in legal education. (LC)