NLJ: Feedback on Feedback

Karen Sloan of the National Law Journal reports on a symposium issue of the University of Detroit Mercy Law Review about formative assessment.  She compares two studies that seem to reach different conclusions on the subject.

First up is an article by a group of law professors at Ohio State, led by Ruth Colker, who conducted a study offering a voluntary practice test to students in Constitutional Law. Those who opted for the voluntary test and mock grade did better on the final exam. Those students also did better in their other subjects than non-participants.

The second article was by David Siegel of New England. He examined whether individualized outreach to low-performing students would improve their end-of-semester grades. In his study, he sent e-mails to students in his course who scored low on quizzes and then held follow-up meetings with them. His control group was students who scored slightly higher on the quizzes but didn’t receive any individualized feedback or have one-on-one meetings. He found no statistically significant difference between the final grades of the two groups.

From this, Ms. Sloan concludes:

There’s enough research out there on the benefits of formative assessments to put stock in the conclusion the Ohio State professors reached, that more feedback on tests and performance helps. But I think Siegel’s study tells us that the manner and context of how that feedback is delivered makes a difference. It’s one thing to have a general conversation with low performing students. But issuing a grade on a practice exam—even if it doesn’t count toward their final grade—I suspect is a real wake-up call to students that they may need to step up and make some changes.

I agree 100% with Ms. Sloan’s takeaway. One additional point: the two studies are really measuring two different things. Professor Colker’s was about formative assessment, while Professor Siegel’s was about the efficacy of early alerts. After all, all students in his class took the quiz and got the results. I also note that Professor Siegel’s “control group” wasn’t truly a control group, since its members scored higher on the first quiz, albeit only slightly. It may be that this group benefitted simply from taking the quiz and seeing their scores. An interesting way to re-run the study would be to do as Professor Colker and her colleagues did at Ohio State: invite students from all grade ranges to participate in the extra feedback. Of course, there’s still the problem of cause-and-effect versus correlation. It may be that the students in Professor Colker’s study were simply more motivated, and it is this fact—motivation—that is the true driver of the improvement in grades. Nevertheless, these are two important studies and additions to the conversation about assessment in legal education. (LC)


Assessing Legal Research

Legal research is a competency mandated by the ABA standards. It’s also a natural area where law schools would want to know whether their students are performing competently. This outcome is low-hanging fruit for assessment, since there are numerous places in the curriculum where faculty examine students’ research (1L Legal Writing, clinics, externships, and seminars all come to mind).

Laura Ray, Outreach and Instructional Services Librarian at Cleveland-Marshall College of Law, is gathering information on how law schools are planning to assess legal research outcomes. She invites comments at l.ray@csuohio.edu.

Minnesota Study: Formative Assessment in One First-Year Class Leads to Higher Grades in Other Classes

Over at TaxProf, Dean Caron reports on a University of Minnesota study finding that students who were randomly assigned to 1L sections with a class featuring individualized, formative assessments performed better in their other courses than students whose sections had no such feedback. Daniel Schwarcz and Dion Farganis authored the study, which appears in the Journal of Legal Education.

From the overview section of the study:

The natural experiment arises from the assignment of first-year law students to one of several “sections,” each of which is taught by a common slate of professors. A random subset of these professors provides students with individualized feedback other than their final grades. Meanwhile, students in two different sections are occasionally grouped together in a “double-section” first-year class. We find that in these double-section classes, students in sections that have previously or concurrently had a professor who provides individualized feedback consistently outperform students in sections that have not received any such feedback. The effect is both statistically significant and hardly trivial in magnitude, approaching about one-third of a grade increment after controlling for students’ LSAT scores, undergraduate GPA, gender, race, and country of birth. This effect corresponds to a 3.7-point increase in students’ LSAT scores in our model. Additionally, the positive impact of feedback is stronger among students whose combined LSAT score and undergraduate GPA fall below the median at the University of Minnesota Law School.

What’s particularly interesting is how this study came about. Minnesota’s use of “double sections” created a natural control group to compare students who previously had formative assessment with those who did not.

The results should come as no surprise. Intuitively, students who practice and get feedback on a new skill should outperform students who do not. This study advances the literature by providing empirical evidence for this point in the law school context. The study is also significant because it shows that individualized, formative assessment in one class can benefit students in their other classes.

There are policy implications from this study. Should associate deans assign professors who practice formative assessment evenly across 1L sections so that all students benefit? Should all classes be required to have individualized, formative assessments? What resources are needed to promote greater use of formative assessments—smaller sections and teaching assistants, for example?

What Would a Small, Assessment-Rich Core Course Look Like?

I just finished slogging through 85 final exams in my Evidence course, and it got me thinking about how I would teach the course if it were offered in a small format of, say, 20 students. Evidence at our school is a “core” course, one of five classes from which students must take at least four (the others are Administrative Law, Business Organizations, Tax, and Trusts and Estates). Naturally, therefore, it draws a big enrollment. I love teaching big classes because the discussions are much richer, but the format hampers my ability to give formative assessments. This semester, I experimented with giving out-of-class, multiple-choice quizzes after each unit. They served several purposes: they gave students practice with the material, and they allowed me to see students’ strengths and weaknesses. I was able to backtrack and go over concepts that students had particular difficulty mastering.

But having read 255 individual essays (85 exams times three essays each), I’m left convinced that students would benefit from additional feedback on essay writing. In lieu of a final exam, I’d love to give students a series of writing assignments throughout the semester. They could even take the form of drafting exercises, such as practice motions. But to be effective, this change requires a small class. So that got me thinking: how would I change my teaching if my Evidence course had 20 students instead of 85?

Collecting Ultimate Bar Passage Data: Weighing the Costs and Benefits

The bar exam is an important outcome measure of whether our graduates are learning the basic competencies expected of new lawyers. As the ABA Managing Director reminded us in his memo of June 2015, however, it can no longer be the principal measure of student learning. We are therefore directed to look for other evidence of learning in our programs of legal education; hence the new focus on programmatic assessment.

Nevertheless, the ABA has wisely retained a minimal bar passage requirement in Standard 316, described in greater detail here. It is an important metric for prospective students. It is also an indicator of the quality of a school’s admission standards and, indirectly, its academic program. Indeed, it has been the subject of much debate recently. A proposal would have simplified the rule by requiring law schools to demonstrate that 75% of their graduates had passed a bar exam within two years of graduation. For a variety of reasons, the Council of the Section of Legal Education and Admissions to the Bar recently decided to postpone moving forward with this change and leave Standard 316 as written.

With that background, on Friday afternoon the ABA Associate Deans’ listserv received a message from William Adams, Deputy Managing Director of the ABA. In it, he described a new process for collecting data on bar passage. A copy of the memo is on the ABA website. This change was authorized at the June 2017 meeting of the Council. Readers may remember that the June meeting was the one that led to a major dust-up in legal education, when it was later revealed that the Council had voted to make substantial (and, some would say, detrimental) changes to the Employment Questionnaire. When this came to light through the work of Jerry Organ and others, the ABA wisely backed off the proposed change and indicated it would study the issue further.

The change that the ABA approved in June and announced in greater detail on Friday is equally problematic.

Thoughts on Assessing Communication Competencies

The ABA Standards set forth the minimal learning outcomes that every law school must adopt. They include “written and oral communication in the legal context.”

“Written communication” as a learning outcome is “low-hanging fruit” for law school assessment committees. For a few reasons, it is an easy area in which to begin assessing student learning at the program level:

  1. Per the ABA standards, there must be a writing experience in both the 1L year and at least one upper-level semester. (Some schools, such as ours, have several writing requirements.) This provides many opportunities to look at student growth over time by assessing the same students’ work as 1Ls and again as 2Ls or 3Ls. In theory, there should be improvement over time!
  2. Writing naturally generates “artifacts” to assess. Unlike other competencies, which may require specially created exams or other assessments, legal writing courses already produce several documents per student to examine.
  3. Legal writing faculty are a naturally collaborative group, if I do say so myself! Even in schools without a formal structure (so-called “directorless” programs), my experience is that legal writing faculty work together on common problems/assignments, syllabi, and rubrics. This allows for assessment across sections. I also find that legal writing faculty, given the nature of their courses, give a lot of thought to assessment generally.

Oral communication is another matter; it is a more difficult outcome to assess. Apart from a first-year moot court exercise, most schools don’t have required courses in oral skills, although that may be changing with the ABA’s new experiential learning requirement. Still, I think there are some good places in the curriculum to look for evidence of student learning on this outcome. Trial and appellate advocacy courses, for example, require significant demonstration of the skill, although in some schools only a few students may take advantage of these opportunities. Clinics are a goldmine, as are externships. For these courses, surveying faculty about students’ oral communication skills is one way to gather evidence of student learning, but it is an indirect measure. A better way to assess this outcome is to use common rubrics for particular assignments or experiences. For example, after students appear in court on a clinic case, the professor could rate them using a commonly applied rubric. Those rubrics could be used both to grade the individual students and to assess student learning more generally.

Note-Taking Advice to My Evidence Students

I recently sent out an e-mail to students in my Evidence class, sharing my views on classroom laptop bans and note taking.  In the past, I’ve banned laptops, but I’ve gone back to allowing them. As with most things in the law, the question is not the rule but who gets to decide the rule. Here, with a group of adult learners, I prefer a deferential approach. I’m also cognizant that there may be generational issues at play and that what would work for me as a student might not work for the current generation of law students. So, I’ve taken to offering advice on note taking, a skill that must be honed like any other.


Dear Class:

As you begin your study of the law of Evidence, I wanted to offer my perspective on note taking.  Specifically, I’d like to weigh in on the debate about whether students should take notes by hand or using a laptop.

As you will soon find out, Evidence is a heavy, 4-credit course. Our class time—4 hours per week—will be spent working through difficult rules, cases, and problems.  Classes will build on your out-of-class preparation and will not be a mere review of what you’ve read in advance.  Thus, it is important that the way you take notes helps, not hurts, your learning.

The research overwhelmingly shows that students retain information better when they take notes by hand rather than on a computer. This article has a nice summary of the literature: http://www.npr.org/2016/04/17/474525392/attention-students-put-your-laptops-away?utm_campaign=storyshare&utm_source=twitter.com&utm_medium=social. The reason handwriting is better is pretty simple: when you handwrite, you are forced to process and synthesize the material, since it’s impossible to take down every word said in class. In contrast, when you type, you tend to function more like a court reporter, trying to take down every word that is said. Additionally, laptops present a host of distractions: e-mail, chat, the web, and games are all there to tempt you away from the task at hand, which is to stay engaged with the discussion. I’ve lost count of the number of times I’ve called on a student engrossed in his or her laptop, only to get “Can you repeat the question?” as the response.

Of course, it’s possible to be distracted without a computer, too.  Crossword puzzles, the buzzing of a cell phone, daydreaming, or the off-topic computer usage of the person in front of you can all present distractions. And it’s more difficult to integrate handwritten notes with your outline and other materials.

If I were you, I would handwrite my notes. But I’m not you. You are adults and presumably know best how you study and retain information. For this reason, I don’t ban laptops. But if you choose to use a laptop or similar device to take notes, I have some additional suggestions. Look into distraction-avoidance software, such as FocusMe, Cold Turkey, or Freedom. These programs block out distracting apps like web browsers and text messaging. Turn off your wireless connection. Turn down the brightness of your screen so that your on-screen activity is not distracting to those behind you. Of course, turn off the sound. Learn to type quietly so you’re not annoying your neighbors with the clickety-clack of your keys.

Finally, and most importantly, think strategically about what you’re typing. You don’t need a written record of everything said in class. Indeed, one of the reasons I record all of my classes and make the recordings available to registered students is so you don’t have to worry about missing something. You can always go back and rewatch a portion of class that wasn’t clear to you. It’s not necessary to type background material, such as the facts of a case or the basic rule. Most of this should already be in the notes you took when reading the materials for class (you are taking notes as you read, right?). Instead of keeping a separate set of notes for class, think about integrating what you hear in class with what you’ve already written in preparation. That is, synthesize what is said in class about the cases and problems with what you’ve written about them in advance. Try your best to take fewer notes, not more. Focus on listening and thinking along with the class discussion. Sometimes less is more.

Above all else, do what works best for you.  If you’ve done well on law school exams while taking verbose notes, by all means continue doing so.  But, if you’ve found yourself not doing as well as you’d like, now is the time to try a new means of studying and note-taking.  You may be pleasantly surprised by experimenting with handwritten notes or, if you do use a laptop, adapting your note-taking style as suggested above.

Of course, please let me know if you’d like further advice.  I’m here to help you learn.

Regards,

Prof. Cunningham