Vollweiler: Don’t Panic! The Hitchhiker’s Guide to Learning Outcomes: Eight Ways to Make Them More Than (Mostly) Harmless

Professor and Associate Dean Debra Moss Vollweiler (Nova) has an interesting article on SSRN entitled, “Don’t Panic! The Hitchhiker’s Guide to Learning Outcomes: Eight Ways to Make Them More Than (Mostly) Harmless.”  Here’s an excerpt of the abstract:

Legal education, professors and administrators at law schools nationwide have finally been thrust fully into the world of educational and curriculum planning. Ever since ABA Standards started requiring law schools to “establish and publish learning outcomes” designed to achieve their objectives, and requiring how to assess them debuted, legal education has turned itself upside down in efforts to comply. However, in the initial stages of these requirements, many law schools viewed these requirements as “boxes to check” to meet the standard, rather than wholeheartedly embracing these reliable educational tools that have been around for decades. However, given that most faculty teaching in law schools have Juris Doctorate and not education degrees, the task of bringing thousands of law professors up to speed on the design, use and measurement of learning outcomes to improve education is a daunting one. Unfortunately, as the motivation to adopt them for many schools was merely meeting the standards, many law schools have opted for technical compliance — naming a committee to manage learning outcomes and assessment planning to ensure the school gets through their accreditation process, rather than for the purpose of truly enhancing the educational experience for students. … While schools should not be panicking at implementing and measuring learning outcomes, neither should they consign the tool to being a “mostly harmless” — one that misses out on the opportunity to improve their program of legal education through proper leveraging. Understanding that outcomes design and appropriate assessment design is itself a scholarly, intellectual function that requires judgment, knowledge and skill by faculty can dictate a path of adoption that is thoughtful and productive. This article serves as a guide to law schools implementing learning outcomes and their assessments as to ways these can be devised, used, and measured to gain real improvement in the program of legal education.

The article offers a number of recommendations for implementing assessment in a meaningful way:

  1. Ease into Reverse Planning with Central Planning and Modified Forward Planning
  2. Curriculum Mapping to Ensure Programmatic Learning Outcomes Met
  3. Cooperation Among Sections of Same Course and Vertically Through Curriculum
  4. Tying Course Evaluations to Learning Outcomes to Measure Gains
  5. Expanding the Idea of What Outcomes Can be for Legal Education
  6. Better use of Formative Assessments to Measure
  7. Use of the Bar Exam Appropriately to Measure Learning Outcomes
  8. Properly Leverage Data on Assessments Through Collection and Analysis

I was particularly interested in Professor Vollweiler’s point in her third recommendation. Law school courses and professors are notoriously siloed. Professors teaching the same course will use different texts, have varying learning outcomes, and assess their students in distinct ways. This makes it difficult to examine student learning at a more macro level. Professor Vollweiler effectively dismantles arguments against common learning outcomes. The article should definitely be on summer reading lists!

New Article: Building a Culture of Assessment in Legal Education

On SSRN, I have a draft article posted entitled, “Building a Culture of Assessment in Law Schools.” It is available at https://ssrn.com/abstract=3216804.

Here’s the abstract:

A new era of legal education is upon us: Law schools are now required to assess learning outcomes across their degrees and programs, not just in individual courses. Programmatic assessment is new to legal education, but it has existed in higher education for decades. To be successful, assessment requires cooperation and buy-in from faculty. Yet establishing a culture of assessment in other disciplines has not been easy, and there is no reason to believe that it will be any different in legal education. A survey of provosts identified faculty buy-in as the single biggest challenge towards implementing assessment efforts. This article surveys the literature on culture of assessment, including conceptual papers and quantitative and qualitative studies. It then draws ten themes from the literature about how to build a culture of assessment: (1) the purpose of assessment, which is a form of scholarship, is improving student learning, not just for satisfying accreditors; (2) assessment must be faculty-driven; (3) messaging and communication around assessment is critical, from the reasons for assessment through celebrating successes; (4) faculty should be provided professional development, including in their own graduate studies; (5) resources are important; (6) successes should be rewarded and recognized; (7) priority should be given to utilizing faculty’s existing assessment devices rather than employing externally developed tests; (8) the unique needs of contingent faculty and other populations should be considered; (9) to accomplish change, stakeholders should draw on theories of leadership, business, motivation, and the social process of innovation; and (10) student affairs should be integrated with faculty and academic assessment activities. These themes, if implemented by law schools, will help programmatic assessment to become an effective addition to legal education and not just something viewed as a regulatory burden.

What is unique about this paper is that it draws almost exclusively from literature outside of legal education. Since assessment is new to many law schools, we can learn a great deal from those in other fields who have gone before us. The “scholarship of assessment” articles are particularly fascinating, since they employ rigorous empirical methods to ascertain the best practices for building a culture of assessment.

I welcome thoughts and reactions at Larry.Cunningham@stjohns.edu!

A Simple, Low-Cost Assessment Process?

Professor Andrea Curcio (Georgia State) has published A Simple Low-Cost Institutional Learning-Outcomes Assessment Process, 67 J. Legal Educ. 489 (2018). It’s an informative article, arguing that, in light of budgetary pressures, faculty should use AAC&U-style rubrics to assess competencies across a range of courses. The results can then be pooled and analyzed. In her abstract on SSRN, Professor Curcio states:

The essay explains a five-step institutional outcomes assessment process: 1. Develop rubrics for institutional learning outcomes that can be assessed in law school courses; 2. Identify courses that will use the rubrics; 3. Ask faculty in designated courses to assess and grade as they usually do, adding only one more step – completion of a short rubric for each student; 4. Enter the rubric data; and 5. Analyze and use the data to improve student learning. The essay appendix provides sample rubrics for a wide range of law school institutional learning outcomes. This outcomes assessment method provides an option for collecting data on institutional learning outcomes assessment in a cost-effective manner, allowing faculties to gather data that provides an overview of student learning across a wide range of learning outcomes. How faculties use that data depends upon the results as well as individual schools’ commitment to using the outcomes assessment process to help ensure their graduates have the knowledge, skills and values necessary to practice law.

This is an ideal way to conduct assessment because it measures students’ actual performance in their classes, rather than performance on a simulated exercise that is unconnected to a course and in which, therefore, students may not give full effort. This article is particularly valuable to the field because it includes sample rubrics for a range of learning outcomes that law schools are likely to measure. It’s definitely worth a read!

My only concern is with getting faculty buy-in. Professor Curcio states, “In courses designated for outcomes measurement, professors add one more step to their grading process. After grading, faculty in designated courses complete an institutional faculty-designed rubric that delineates, along a continuum, students’ development of core competencies encompassed by a given learning outcome. The rubric may be applied to every student’s work or to that of a random student sample.”
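For readers curious about what steps 4 and 5 of Professor Curcio’s process (entering and then analyzing the rubric data) might look like in practice, here is a minimal sketch in Python. The courses, outcomes, rubric levels, and “competent” threshold below are hypothetical illustrations of my own, not taken from the article or its sample rubrics.

# Hypothetical illustration of steps 4-5 of the process described above:
# pooling faculty-completed rubric scores and summarizing them by outcome.
# Courses, outcomes, levels, and the threshold are invented for illustration.
import pandas as pd

# Step 4: enter the rubric data (one row per student per outcome).
rubric_data = pd.DataFrame([
    {"course": "LegalWriting", "outcome": "Written Analysis",   "level": 3},
    {"course": "LegalWriting", "outcome": "Written Analysis",   "level": 2},
    {"course": "Evidence",     "outcome": "Factual Analysis",   "level": 4},
    {"course": "Evidence",     "outcome": "Factual Analysis",   "level": 1},
    {"course": "Clinic",       "outcome": "Client Interaction", "level": 3},
])

# Step 5: analyze the pooled data for an overview of learning across outcomes.
summary = (
    rubric_data
    .groupby("outcome")["level"]
    .agg(students="count", mean_level="mean")
)
# Share of students at or above a (hypothetical) "competent" threshold of 3.
summary["pct_competent"] = (
    rubric_data.assign(competent=rubric_data["level"] >= 3)
    .groupby("outcome")["competent"].mean() * 100
)
print(summary)

Even a summary this simple gives a faculty an overview of where students are clustering on each outcome, which is the kind of pooled data the abstract envisions using to improve student learning.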

NLJ: Feedback on Feedback

Karen Sloan of the National Law Journal reports on a symposium issue of the University of Detroit Mercy Law Review about formative assessment.  She compares two studies that seem to reach different conclusions on the subject.

First up is an article by a group of law professors at Ohio State, led by Ruth Colker, who conducted a study offering a voluntary practice test to students in Constitutional Law. Those who opted to take the voluntary test and receive a mock grade did better on the final exam. Those students also did better in their other subjects than non-participants.

The second article was by David Siegel of New England. He examined whether individualized outreach to low-performing students would improve their end-of-semester grades. In his study, he sent e-mails to students in his course who scored low on quizzes. He also had follow-up meetings with them. His control group was students who scored slightly higher on the quizzes but did not receive any individualized feedback or have one-on-one meetings. He found no statistically significant difference between the final grades of the two groups.

From this, Ms. Sloan concludes:

There’s enough research out there on the benefits of formative assessments to put stock in the conclusion the Ohio State professors reached, that more feedback on tests and performance helps. But I think Siegel’s study tells us that the manner and context of how that feedback is delivered makes a difference. It’s one thing to have a general conversation with low performing students. But issuing a grade on a practice exam—even if it doesn’t count toward their final grade—I suspect is a real wake-up call to students that they may need to step up and make some changes.

I agree 100% with Ms. Sloan’s takeaway. One additional point: the two studies are really measuring two different things. Professor Colker’s was about formative assessment, while Professor Siegel’s was about the efficacy of early alerts. After all, all students in his class took the quiz and got the results. I also note that Professor Siegel’s “control group” wasn’t really one, since its members received higher grades on the first quiz, albeit only slightly higher. It may be that this group benefited just from taking the quiz and seeing their scores. An interesting way to re-run the study would be to do as Professor Colker and her colleagues did at Ohio State: invite students from all grade ranges to participate in the extra feedback. Of course, there is still the problem of correlation versus causation. It may be that the students in Professor Colker’s study were simply more motivated, and that motivation, not the feedback, was the true driver of the improvement in grades. Nevertheless, these are two important studies and welcome additions to the conversation about assessment in legal education. (LC)


Minnesota Study: Formative Assessment in One First-Year Class Leads to Higher Grades in Other Classes

Over at TaxProf, Dean Caron reports on a University of Minnesota study finding that students who were randomly assigned to 1L sections in which one class provided individualized, formative assessment performed better in their other courses than students who were not. Daniel Schwarcz and Dion Farganis authored the study, which appears in the Journal of Legal Education.

From the overview section of the study:

The natural experiment arises from the assignment of first-year law students to one of several “sections,” each of which is taught by a common slate of professors. A random subset of these professors provides students with individualized feedback other than their final grades. Meanwhile, students in two different sections are occasionally grouped together in a “double-section” first-year class. We find that in these double-section classes, students in sections that have previously or concurrently had a professor who provides individualized feedback consistently outperform students in sections that have not received any such feedback. The effect is both statistically significant and hardly trivial in magnitude, approaching about one-third of a grade increment after controlling for students’ LSAT scores, undergraduate GPA, gender, race, and country of birth. This effect corresponds to a 3.7-point increase in students’ LSAT scores in our model. Additionally, the positive impact of feedback is stronger among students whose combined LSAT score and undergraduate GPA fall below the median at the University of Minnesota Law School.
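To make concrete what “controlling for” those variables involves, here is a minimal sketch in Python (statsmodels) of the general kind of comparison the overview describes: regressing a double-section course grade on an indicator for having had a feedback-providing professor, plus entering credentials. The variable names, data, and specification are hypothetical illustrations; the study’s actual model may differ.

# Rough sketch of the kind of comparison described above: regress grades on a
# prior/concurrent-feedback indicator while controlling for entering
# credentials. All variable names and data are invented for illustration.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "grade":        [3.3, 2.7, 3.7, 3.0, 2.3, 3.3, 3.0, 2.7],
    "had_feedback": [1, 0, 1, 1, 0, 0, 1, 0],   # section had a feedback-providing professor
    "lsat":         [162, 158, 165, 160, 155, 159, 163, 157],
    "ugpa":         [3.6, 3.2, 3.8, 3.4, 3.0, 3.3, 3.7, 3.1],
    "female":       [1, 0, 0, 1, 1, 0, 1, 0],
})

model = smf.ols("grade ~ had_feedback + lsat + ugpa + female", data=df).fit()
print(model.params["had_feedback"])  # estimated feedback effect, holding controls fixed

The coefficient on the feedback indicator is the estimated grade difference between the two groups of students, holding the listed controls constant.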

What’s particularly interesting is how this study came about. Minnesota’s use of “double sections” created a natural control group to compare students who previously had formative assessment with those who did not.

The results should come as no surprise. Intuitively, students who practice a new skill and get feedback on it should outperform students who do not. This study advances the literature by providing empirical evidence for this point in a law school context. The study is also significant because it shows that individualized, formative assessment in one class can benefit those students in their other classes.

There are policy implications from this study. Should associate deans assign professors who practice formative assessment evenly across 1L sections so that all students benefit? Should all classes be required to include individualized, formative assessments? What resources, such as smaller sections and teaching assistants, are needed to promote greater use of formative assessments?

New Article on Lessons Learned from Medical Education about Assessing Professional Formation Outcomes

Neil Hamilton (St. Thomas, MN) has a new article on SSRN, Professional-Identity/Professional-Formation/Professionalism Learning Outcomes: What Can We Learn About Assessment From Medical Education? 

Here’s an excerpt from the abstract:

The accreditation changes requiring competency-based education are an exceptional opportunity for each law school to differentiate its education so that its students better meet the needs of clients, legal employers, and the legal system. While ultimately competency-based education will lead to a change in the model of how law faculty and staff, students, and legal employers understand legal education, this process of change is going to take a number of years. However, the law schools that most effectively lead this change are going to experience substantial differentiating gains in terms of both meaningful employment for graduates and legal employer and client appreciation for graduates’ competencies in meeting employer/client needs. This will be particularly true for those law schools that emphasize the foundational principle of competency-based learning that each student must grow toward later stages of self-directed learning – taking full responsibility as the active agent for the student’s experiences and assessment activities to achieve the faculty’s learning outcomes and the student’s ultimate goal of bar passage and meaningful employment.

Medical education has had fifteen more years of experience with competency-based education from which legal educators can learn. This article has focused on medical education’s “lessons learned” applicable to legal education regarding effective assessment of professional-identity learning outcomes.

Legal education can look to many other disciplines, including medicine, for examples of implementing outcome-based assessment. Professor Hamilton’s article nicely draws on lessons learned by medical schools in assessing professional formation, an outcome that some law schools have decided to adopt.

In looking at professional identity formation in particular, progression is important. The curriculum and assessments must build on each other in order to show whether students are improving in this area. The hidden curriculum is a valuable space in which to teach and assess a competency like professional identity formation. But this requires coordination across various silos:

Law schools historically have been structured in silos with strongly guarded turf in and around each silo. Each of the major silos (including doctrinal classroom faculty, clinical faculty, lawyering skills faculty, externship directors, career services and professional development staff, and counseling staff) wants control over and autonomy regarding its turf. Coordination among these silos is going to take time and effort and involve some loss of autonomy but in return a substantial increase in student development and employment outcomes. For staff in particular, there should be much greater recognition that they are co-educators along with faculty to help students achieve the learning outcomes.

Full-time faculty members were not trained in a competency-based education model, and many have limited experience with some of the competencies, for example teamwork, that many law schools are including in their learning outcomes. In my experience, many full-time faculty members also have enormous investments in doctrinal knowledge and legal and policy analysis concerning their doctrinal field. They believe that the student’s law school years are about learning doctrinal knowledge, strong legal and policy analysis, and research and writing skills. These faculty members emphasize that they have to stay focused on “coverage” with the limited time in their courses even though this model of coverage of doctrinal knowledge and the above skills overemphasizes these competencies in comparison with the full range of competencies that legal employers and clients indicate they want.

In my view, this is the greatest challenge in implementing a competency-based model of education in law schools. (Prof. Hamilton’s article has a nice summary of time-based versus competency-based education models.) Most law school curricula are silo-based. At most schools, a required first-year curriculum is followed by a largely unconnected series of electives in the second and third years. There are few opportunities for longitudinal study of outcomes in such an environment. In medical schools, by contrast, there are clear milestones at which to assess knowledge, skills, and values for progression and growth.

Do exams measure speed or performance?

A new study out of BYU attempts to answer the question.  It’s summarized at TaxProf and the full article is here. From the abstract on SSRN:

What, if any, is the relationship between speed and grades on first year law school examinations? Are time-pressured law school examinations typing speed tests? Employing both simple linear regression and mixed effects linear regression, we present an empirical hypothesis test on the relationship between first year law school grades and speed, with speed represented by two variables: word count and student typing speed. Our empirical findings of a strong statistically significant positive correlation between total words written on first year law school examinations and grades suggest that speed matters. On average, the more a student types, the better her grade. In the end, however, typing speed was not a statistically significant variable explaining first year law students’ grades. At the same time, factors other than speed are relevant to student performance.

In addition to our empirical analysis, we discuss the importance of speed in law school examinations as a theoretical question and indicator of future performance as a lawyer, contextualizing the question in relation to the debate in the relevant psychometric literature regarding speed and ability or intelligence. Given that empirically, speed matters, we encourage law professors to consider more explicitly whether their exams over-reward length, and thus speed, or whether length and assumptions about speed are actually a useful proxy for future professional performance and success as lawyers.
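As a rough illustration of the two model types the abstract names, here is a minimal sketch in Python (statsmodels): an ordinary least squares regression of exam grade on total words written and typing speed, and a mixed-effects version with a random intercept for each student, since each student sits several exams. All names and numbers are invented for illustration and do not reflect the study’s data or specification.

# Hypothetical illustration of the two model types named in the abstract.
import pandas as pd
import statsmodels.formula.api as smf

exams = pd.DataFrame({
    "student":      ["a", "a", "b", "b", "c", "c", "d", "d", "e", "e"],
    "grade":        [3.3, 3.0, 2.7, 3.0, 3.7, 3.3, 2.3, 2.7, 3.0, 3.3],
    "word_count":   [2800, 2500, 2100, 2300, 3200, 3000, 1800, 2000, 2400, 2600],
    "typing_speed": [70, 70, 55, 55, 85, 85, 45, 45, 60, 60],  # words per minute
})

# Linear regression: do total words written and typing speed predict grade?
ols = smf.ols("grade ~ word_count + typing_speed", data=exams).fit()

# Mixed-effects linear regression: same predictors, with a random intercept
# for each student to account for repeated exams by the same person.
mixed = smf.mixedlm("grade ~ word_count + typing_speed", data=exams,
                    groups=exams["student"]).fit()

print(ols.params["word_count"], mixed.params["word_count"])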

The study raises important questions about how we structure exams. I know of colleagues who impose word-count limits (enforceable thanks to exam software), and I think I may be joining their ranks. More broadly, are our high-stakes final exams truly measuring what we want them to measure?

Cultural Competency as a Learning Outcome in Legal Writing

Eunice Park (Western State) has a short piece on SSRN, featured in the SSRN Legal Writing eJournal and published in the AALS Teaching Methods Newsletter, about assessing cultural competency in a legal writing appellate advocacy exercise. Cultural competency is listed in Interpretation 302-1 as an example of a “professional skill” that would satisfy Standard 302’s requirement that a school’s learning outcomes include “[o]ther professional skills needed for competent and ethical participation as a member of the legal profession.”

Professor Park writes:

Legal writing courses provide an ideal setting for raising awareness of the importance of sensitivity to diverse cultural mores. One way is by creating an assignment that demonstrates how viewing determinative facts from a strictly Western lens might lead to an unfair outcome.

In writing a recent appellate brief problem, I introduced cultural competence as a learning outcome by integrating culturally-sensitive legally significant facts into the assignment.

She goes on to describe the appellate brief problem and how it helped meet the goal of enhancing students’ cultural competency.