Minnesota Study: Formative Assessment in One First-Year Class Leads to Higher Grades in Other Classes

Over at TaxProf, Dean Caron reports on a University of Minnesota study finding that students who were randomly assigned to 1L sections in which one class included individualized, formative assessment performed better in their other courses than students who received no such feedback.  Daniel Schwarcz and Dion Farganis authored the study, which appears in the Journal of Legal Education.

From the overview section of the study:

The natural experiment arises from the assignment of first-year law students to one of several “sections,” each of which is taught by a common slate of professors. A random subset of these professors provides students with individualized feedback other than their final grades. Meanwhile, students in two different sections are occasionally grouped together in a “double-section” first-year class. We find that in these double-section classes, students in sections that have previously or concurrently had a professor who provides individualized feedback consistently outperform students in sections that have not received any such feedback. The effect is both statistically significant and hardly trivial in magnitude, approaching about one-third of a grade increment after controlling for students’ LSAT scores, undergraduate GPA, gender, race, and country of birth. This effect corresponds to a 3.7-point increase in students’ LSAT scores in our model. Additionally, the positive impact of feedback is stronger among students whose combined LSAT score and undergraduate GPA fall below the median at the University of Minnesota Law School.

What’s particularly interesting is how this study came about. Minnesota’s use of “double sections” created a natural control group to compare students who previously had formative assessment with those who did not.

The results should come as no surprise. Intuitively, students who practice a new skill and receive feedback on it should outperform students who do not. This study advances the literature by providing empirical evidence for this point in a law school context. The study is also significant because it shows that individualized, formative assessment in one class can benefit students in their other classes.

This study has policy implications. Should associate deans distribute professors who practice formative assessment evenly across 1L sections so that all students benefit? Should all classes be required to include individualized, formative assessments? What resources are needed to promote greater use of formative assessments—smaller sections and teaching assistants, for example?

What Would a Small, Assessment-Rich Core Course Look Like?

I just finished slogging through 85 final exams in my Evidence course, and it got me thinking about how I would teach the course if it were offered in a small format of, say, 20 students. Evidence at our school is a “core” course, one of five classes from which students must take at least four (the others are Administrative Law, Business Organizations, Tax, and Trusts and Estates). Naturally, therefore, it draws a big enrollment. I love teaching big classes because the discussions are much richer, but the format hampers my ability to give formative assessments. This semester, I experimented with giving out-of-class, multiple-choice quizzes after each unit. They served several purposes: they gave students practice with the material, and they allowed me to see students’ strengths and weaknesses. I was also able to backtrack and review concepts that students had particular difficulty mastering.

But having read 255 individual essays (85 exams times three essays each), I’m convinced that students would benefit from additional feedback on essay writing. In lieu of a final exam, I’d love to give students a series of writing assignments throughout the semester. They could even take the form of practice litigation documents, like motions. But to be effective, this change requires a small class. So that got me thinking: how would I change my teaching if my Evidence course had 20 students instead of 85?

Collecting Ultimate Bar Passage Data: Weighing the Costs and Benefits

The bar exam is an important outcome measure of whether our graduates are learning the basic competencies expected of new lawyers. As the ABA Managing Director reminded us in his memo of June 2015, however, it can no longer be the principal measure of student learning. We are therefore directed to look for other evidence of learning in our programs of legal education; hence the new focus on programmatic assessment.

Nevertheless, the ABA has wisely retained a minimal bar passage requirement in Standard 316, described in greater detail here. It is an important metric for prospective students. It is also an indicator of the quality of a school’s admission standards and, indirectly, its academic program. Indeed, it has been the subject of much debate recently. A proposal would have simplified the rule by requiring law schools to demonstrate that 75% of their graduates had passed a bar exam within two years of graduation. For a variety of reasons, the Council of the Section of Legal Education and Admissions to the Bar recently decided to postpone this change and leave Standard 316 as written.

With that background in mind: on Friday afternoon, the ABA Associate Deans’ listserv received a message from William Adams, Deputy Managing Director of the ABA.  In it, he described a new process for collecting data on bar passage. A copy of the memo is on the ABA website. The change was authorized at the June 2017 meeting of the Council.  Readers may remember that the June meeting was the one that led to a major dust-up in legal education, when it was later revealed that the Council had voted to make substantial (and, some would say, detrimental) changes to the Employment Questionnaire. When this came to light through the work of Jerry Organ and others, the ABA wisely backed off that proposed change and indicated it would study the issue further.

The change that the ABA approved in June and announced in greater detail on Friday is equally problematic.

Thoughts on Assessing Communication Competencies

The ABA Standards set forth the minimal learning outcomes that every law school must adopt. They include “written and oral communication in the legal context.”

“Written communication” as a learning outcome is “low-hanging fruit” for law school assessment committees. For a few reasons, this is an easy area to begin assessing students’ learning on a program level:

  1. Per the ABA Standards, there must be a writing experience both in the 1L year and in at least one upper-level semester. (Some schools, such as ours, have several writing requirements.) This provides many opportunities to look at student growth over time by assessing the same students’ work as 1Ls and again as 2Ls or 3Ls.  In theory, there should be improvement over time!
  2. Writing naturally generates “artifacts” to assess.  Unlike other competencies, which may require the generation of special, artificial exams or other assessments, legal writing courses are already producing several documents per student to examine.
  3. Legal writing faculty are a naturally collaborative group, if I do say so myself!  Even in schools without a formal structure (so-called “directorless” programs), my experience is that legal writing faculty work together on common problems, assignments, syllabi, and rubrics.  This allows for assessment across sections.  I also find that legal writing faculty, given the nature of their courses, give a lot of thought to assessment generally.

Oral communication is another matter: it is a more difficult outcome to assess. Apart from a first-year moot court exercise, most schools don’t have required courses in oral skills, although that may be changing with the ABA’s new experiential learning requirement.  Still, I think there are some good places in the curriculum to look for evidence of student learning of this outcome.  Trial and appellate advocacy courses, for example, require significant demonstration of the skill, although in some schools only a few students may take advantage of these opportunities.  Clinics are a goldmine, as are externships.  For these courses, surveying faculty about students’ oral communication skills is one way to gather evidence of student learning, but it is an indirect measure.  A better way to assess this outcome is to use common rubrics for particular assignments or experiences.  For example, after students appear in court on a clinic case, the professor could rate them using a commonly applied rubric.  Those rubrics could be used both to grade individual students and to assess student learning more generally.

Note Taking Advice to My Evidence Students

I recently sent out an e-mail to students in my Evidence class, sharing my views on classroom laptop bans and note taking.  In the past, I’ve banned laptops, but I’ve gone back to allowing them. As with most things in the law, the question is not the rule but who gets to decide the rule. Here, with a group of adult learners, I prefer a deferential approach. I’m also cognizant that there may be generational issues at play and that what would work for me as a student might not work for the current generation of law students. So, I’ve taken to offering advice on note taking, a skill that must be honed like any other.

Dear Class:

As you begin your study of the law of Evidence, I wanted to offer my perspective on note taking.  Specifically, I’d like to weigh in on the debate about whether students should take notes by hand or using a laptop.

As you will soon find out, Evidence is a heavy, 4-credit course. Our class time—4 hours per week—will be spent working through difficult rules, cases, and problems.  Classes will build on your out-of-class preparation and will not be a mere review of what you’ve read in advance.  Thus, it is important that the way you take notes helps, not hurts, your learning.

The research overwhelmingly shows that students retain information better when they take notes by hand rather than on a computer. This article has a nice summary of the literature: http://www.npr.org/2016/04/17/474525392/attention-students-put-your-laptops-away?utm_campaign=storyshare&utm_source=twitter.com&utm_medium=social.  The reason handwriting is better is pretty simple: when you handwrite, you are forced to process and synthesize the material, since it’s impossible to take down every word said in class. In contrast, when you type, you tend to function more like a court reporter, trying to capture every word. Additionally, laptops present a host of distractions: e-mail, chat, the web, and games are all there to tempt you away from the task at hand, which is to stay engaged with the discussion. I’ve lost count of the number of times that I’ve called on a student engrossed in a laptop, only to get “Can you repeat the question?” as the response.

Of course, it’s possible to be distracted without a computer, too.  Crossword puzzles, the buzzing of a cell phone, daydreaming, or the off-topic computer usage of the person in front of you can all present distractions. And handwriting has its own drawback: it’s more difficult to integrate handwritten notes with your outline and other materials.

If I were you, I would handwrite my notes.  But I’m not you.  You are adults and presumably know best how you study and retain information.  For this reason, I don’t ban laptops.  But if you choose to use a laptop or similar device to take notes, I have some additional suggestions.  Look into distraction-avoidance software, such as FocusMe, Cold Turkey, or Freedom.  These programs block out distracting apps like web browsers and text messaging.  Turn off your wireless connection.  Turn down the brightness of your screen so that your on-screen activity is not distracting to those behind you.  Of course, turn off the sound.  Learn to type quietly so you’re not annoying your neighbors with the clickety-clack of your keys.

Finally, and most importantly, think strategically about what you’re typing.  You don’t need a written record of everything said in class.  Indeed, one of the reasons I record all of my classes and make the recordings available to registered students is so you don’t have to worry about missing something.  You can always go back and rewatch a portion of class that wasn’t clear to you. It’s not necessary to type background material, such as the facts of a case or the basic rule.  Most of this should already be in the notes you took when reading the materials for class (you are taking notes as you read, right?).  Instead of keeping a separate set of notes for class, think about integrating them with what you’ve already written as you prepared for class.  That is, synthesize and integrate what is said in class about the cases and problems with what you’ve written out about them in advance.  Try your best to take fewer notes, not more.  Focus on listening and thinking along with the class discussion.  Sometimes less is more.

Above all else, do what works best for you.  If you’ve done well on law school exams while taking verbose notes, by all means continue doing so.  But, if you’ve found yourself not doing as well as you’d like, now is the time to try a new means of studying and note-taking.  You may be pleasantly surprised by experimenting with handwritten notes or, if you do use a laptop, adapting your note-taking style as suggested above.

Of course, please let me know if you’d like further advice.  I’m here to help you learn.


Prof. Cunningham

What Law Schools Can Learn about Assessment from Other Disciplines

I have spent the last few days at the ExamSoft Assessment Conference. I gave a presentation on assessment developments in legal education, and it was great to see colleagues from other law schools there. I spent a lot of time attending presentations about how other disciplines are using assessment. I was particularly impressed by what the sciences are doing, especially nursing, pharmacy, physical therapy, podiatry, and medicine. I came away from the conference with the following takeaways about how these other disciplines are using assessment:

  • They use assessment data to improve student learning, at both the individual and macro levels.  They are less focused on using assessments to “sort” students along a curve for grading purposes. Driven in part by their accreditors, the sciences use assessment data to help individual students recognize their weaknesses and, by graduation, get up to the level expected for eventual licensure, sometimes through remediation. They also use assessment data to drive curricular and teaching reform.
  • They focus on the validity and reliability of their summative assessments.  This is probably not surprising, since scientists are trained in the scientific method. They are also, by nature, accepting of data and statistics. They utilize item analysis reports (see the next bullet) and rubrics (for essays) to ensure that their assessments are effective and that their grading is reliable. Assessments are reused and improved over time. Thus, a lot of effort is put into exam security.
  • They utilize item analysis reports to improve their assessments over time. These reports show statistics like the KR-20 score, a measure of an exam’s overall reliability, and point-biserial coefficients, which indicate how well each question discriminates between stronger and weaker test-takers. They can be generated by most scoring systems, such as Scantron and ExamSoft.
  • They utilize multiple, formative assessments in courses. 
  • They collect a lot of data on students.
  • They cooperate and share assessments across sections and professors.  It is not uncommon for there to be a single, departmentally-approved exam for a particular course. Professors teaching multiple sections of a course collaborate on writing the exam against a common set of learning outcomes.
  • They categorize and tag questions to track student progress and to assist with programmatic assessment. (In law, this could work as follows. Questions could be tagged against programmatic learning outcomes [such as knowledge of the law] and to content outlines [e.g., in Torts, a question could be tagged as referring to Battery].)  This allows them to generate reports that show how students perform over time in a particular outcome or topic.
  • They debrief assessments with students, using the results to help students learn how to improve, even when the course is over.  Here, categorization of questions is important.  (I started doing this in my Evidence course. I tagged multiple choice questions as testing hearsay, relevance, privilege, etc.  This allowed me to generate reports out of Scantron ParScore that showed (1) how the class, as a whole, did on each category; and (2) how individual students did on each category. In turn, I’ll be able to use the data to improve my teaching next year.)
  • They utilize technology, such as ExamSoft, to make all of this data analysis and reporting possible.
  • They have trained assessment professionals to assist with the entire process.  Many schools have assessment departments or offices that can set up assessments and reports. Should we rethink the role of faculty support staff? Should we have faculty assistants move away from traditional secretarial functions and toward assisting faculty with assessments? What training would be required?
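To make the item-analysis and tagging ideas above concrete, here is a minimal sketch in Python, with invented answer data and hypothetical category tags, of the kinds of statistics scoring systems like ExamSoft or ParScore report: the KR-20 reliability of an exam, the point-biserial coefficient of each question, and class-wide performance by tagged category.

```python
# A toy item-analysis run: 5 students, 5 multiple-choice items.
# All data and tags below are invented for illustration.
from statistics import pstdev, mean

# Rows = students, columns = items; 1 = correct, 0 = incorrect.
responses = [
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1],
    [1, 1, 0, 1, 0],
]
# Each item tagged with a content category (e.g., for an Evidence exam).
tags = ["hearsay", "hearsay", "privilege", "relevance", "relevance"]

k = len(tags)                              # number of items
totals = [sum(row) for row in responses]   # each student's raw score

# KR-20 reliability: (k / (k-1)) * (1 - sum(p*q) / variance of total scores),
# where p is each item's difficulty (proportion correct) and q = 1 - p.
p = [mean(row[i] for row in responses) for i in range(k)]
pq = sum(pi * (1 - pi) for pi in p)
var_total = pstdev(totals) ** 2
kr20 = (k / (k - 1)) * (1 - pq / var_total)

# Point-biserial for item i: correlation between the 0/1 item score and the
# total score. Low or negative values flag items worth revising.
def point_biserial(i):
    right = [t for row, t in zip(responses, totals) if row[i] == 1]
    wrong = [t for row, t in zip(responses, totals) if row[i] == 0]
    if not right or not wrong:
        return 0.0  # item answered uniformly; no discrimination to measure
    pi = len(right) / len(totals)
    return (mean(right) - mean(wrong)) / pstdev(totals) * (pi * (1 - pi)) ** 0.5

# Per-category report: class-wide percent correct for each tag.
by_category = {}
for i, tag in enumerate(tags):
    by_category.setdefault(tag, []).append(p[i])
report = {tag: mean(ps) for tag, ps in by_category.items()}
```

A question whose point-biserial is near zero or negative is one that strong students miss about as often as weak ones, which usually means the item (or its key) needs a second look; the per-category averages are the same numbers a post-exam debrief with students would draw on.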

Incidentally, I highly recommend the ExamSoft Assessment Conference, whether or not one is at an “ExamSoft law school.” (Full disclosure: I, like all speakers, received a very modest honorarium for my talk.) The conference was full of useful, practical information about teaching, learning, and assessment.  ExamSoft schools can also benefit from learning about new features of the software.

Off topic: WaPo op-ed on access-to-justice

Not directly assessment-related, but I thought I would share that Jennifer Bard (Cincinnati) and I have an op-ed in the Washington Post about access to justice. Drawing on an analogy to medicine, we argue:

Professionals must first acknowledge that not every legal task must be performed by a licensed lawyer. Instead, we need to adopt a tiered system of legal-services delivery that allows for lower barriers to entry. Just as a pharmacist can administer vaccines and a nurse practitioner can be on the front line of diagnosing and treating ailments, we should have legal practitioners who can also exercise independent judgment within the scope of their training. Such a change would expand the preparation and independence of the existing network of paralegals, secretaries and investigators already assisting lawyers.

This creates greater, not fewer, opportunities for law schools, which should provide a range of educational opportunities, from short programs for limited license holders to Ph.D.’s for those interested in academic research.

Enjoy the article!