Vollweiler: Don’t Panic! The Hitchhiker’s Guide to Learning Outcomes: Eight Ways to Make Them More Than (Mostly) Harmless

Professor and Associate Dean Debra Moss Vollweiler (Nova) has an interesting article on SSRN entitled, “Don’t Panic! The Hitchhiker’s Guide to Learning Outcomes: Eight Ways to Make Them More Than (Mostly) Harmless.”  Here’s an excerpt of the abstract:

Legal education, professors and administrators at law schools nationwide have finally been thrust fully into the world of educational and curriculum planning. Ever since ABA Standards started requiring law schools to “establish and publish learning outcomes” designed to achieve their objectives, and requirements for how to assess them debuted, legal education has turned itself upside down in efforts to comply. However, in the initial stages of these requirements, many law schools viewed these requirements as “boxes to check” to meet the standard, rather than wholeheartedly embracing these reliable educational tools that have been around for decades. However, given that most faculty teaching in law schools have Juris Doctorate and not education degrees, the task of bringing thousands of law professors up to speed on the design, use and measurement of learning outcomes to improve education is a daunting one. Unfortunately, as the motivation to adopt them for many schools was merely meeting the standards, many law schools have opted for technical compliance — naming a committee to manage learning outcomes and assessment planning to ensure the school gets through their accreditation process, rather than for the purpose of truly enhancing the educational experience for students. … While schools should not be panicking at implementing and measuring learning outcomes, neither should they consign the tool to being a “mostly harmless” — one that misses out on the opportunity to improve their program of legal education through proper leveraging. Understanding that outcomes design and appropriate assessment design is itself a scholarly, intellectual function that requires judgment, knowledge and skill by faculty can dictate a path of adoption that is thoughtful and productive. This article serves as a guide to law schools implementing learning outcomes and their assessments as to ways these can be devised, used, and measured to gain real improvement in the program of legal education.

The article offers a number of recommendations for implementing assessment in a meaningful way:

  1. Ease into Reverse Planning with Central Planning and Modified Forward Planning
  2. Curriculum Mapping to Ensure Programmatic Learning Outcomes Met
  3. Cooperation Among Sections of Same Course and Vertically Through Curriculum
  4. Tying Course Evaluations to Learning Outcomes to Measure Gains
  5. Expanding the Idea of What Outcomes Can be for Legal Education
  6. Better use of Formative Assessments to Measure
  7. Use of the Bar Exam Appropriately to Measure Learning Outcomes
  8. Properly Leverage Data on Assessments Through Collection and Analysis

I was particularly interested in Professor Vollweiler’s point in her third recommendation.  Law school courses and professors are notoriously siloed.  Professors teaching the same course will use different texts, have varying learning outcomes, and assess their students in distinct ways.  This presents challenges in looking at student learning at a more macro level.  Professor Vollweiler effectively dismantles arguments against common learning outcomes.  The article should definitely be on summer reading lists!

The Value of Sampling in Assessment

I just returned from the biennial ABA Associate Deans’ Conference, which is a fun and rewarding gathering of associate deans for academic affairs, student affairs, research, administration, and other similar roles.  (Interestingly, more and more associate deans seem to have assessment in their titles.)

I spoke on a plenary panel about assessment, and I discussed the value of sampling in conducting programmatic assessment.  I wanted to elaborate on some of my thoughts on the subject.

Let’s say a school wants to assess the extent to which students are meeting the learning outcome of writing.  One way to do so would be to conduct what is called a “census,” in which every student’s writing in a course or sequence is evaluated by an assessment committee.  In a small LL.M. or Juris Master’s program of 10 or 20 students, this might be feasible.  But in a school of, say, 900 J.D. students, it is not workable.

A more feasible approach is to use a “sample” — a subset of the larger group.  So instead of reviewing 900 papers, the committee might look at 50 or 100.  If the sample is properly constructed, it is permissible to extrapolate the results and draw conclusions about the larger population.
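
How big the sample needs to be depends on how precise you want that extrapolation to be.  Below is a minimal sketch using the standard proportion-based sample-size formula with a finite-population correction; the 95% confidence level, 10% margin of error, and 900-student population are purely illustrative, not a recommendation.

```python
import math

def sample_size(population, margin_of_error=0.10, z=1.96, p=0.5):
    """Required sample size for estimating a proportion, with a
    finite-population correction (standard Cochran-style formula)."""
    n0 = (z ** 2) * p * (1 - p) / (margin_of_error ** 2)  # infinite-population estimate
    return math.ceil(n0 / (1 + (n0 - 1) / population))    # shrink for a finite population

print(sample_size(900))        # about 87 papers at a +/-10% margin of error
print(sample_size(900, 0.05))  # about 270 papers at +/-5% (tighter margins cost more reviewing)
```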

Sometimes using a census is workable, even for a large group.  For example, if faculty who teach a subject all agree to embed the same 10 multiple-choice questions in their final exams, those results could be analyzed to see how the students performed on the material being tested.
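
As a rough illustration of what that analysis might look like, here is a minimal sketch that tallies per-question performance across sections; the section names and score data are entirely hypothetical.

```python
# Hypothetical per-section results for ten common embedded questions:
# each list holds the fraction of students answering that question correctly.
results = {
    "Contracts (Sec. A)": [0.91, 0.72, 0.85, 0.64, 0.78, 0.88, 0.55, 0.81, 0.69, 0.93],
    "Contracts (Sec. B)": [0.87, 0.68, 0.80, 0.59, 0.74, 0.90, 0.61, 0.77, 0.72, 0.89],
}

# Average performance on each question across all sections.
num_questions = 10
for q in range(num_questions):
    avg = sum(scores[q] for scores in results.values()) / len(results)
    print(f"Question {q + 1}: {avg:.0%} correct")
```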

Frequently, though, we are assessing something, like writing, that does not lend itself easily to embedded multiple-choice questions or other easy-to-administer forms of assessment.  That’s where sampling comes in.  The key is to construct a representative sample of the larger population.  Here are some tips for doing so:

  • Consider, first, what you will be assessing.  Are you reviewing two-page papers?  Ten-page memos?  Thirty-page appellate briefs?  Fifteen-minute oral arguments in a moot court exercise?  Each of these will call for a different time commitment on the part of your reviewers.  Next, take into account how many reviewers you will have.  The more reviewers, the more documents you’ll be able to assess.  Consider, also, that you’ll likely need multiple reviewers per item being assessed, and time should be allotted for the reviewers to “calibrate” expectations.  All of this will give you an idea of how much time each reviewer will need per document or other item under review.
  • In general, the larger the sample size, the better.  Statistically, this has to do with the sample’s “margin of error” and “confidence interval.”  For more on picking a sample size, check out this very helpful article from Washington State University.  But, in general, a quick rule of thumb is a minimum of 10 students or 10% of the population, whichever is greater.
  • It is preferable for those doing the assessment not to be involved with picking the sample itself.  Here’s where having an assessment or data coordinator can be helpful.  Most times, a sample can be collected at random.  Online random number generators can be of help here.  There are suggestions for simplifying this process in the document I linked to above.
  • Once you have selected your sample size and drawn the sample, make sure it is representative of the population.  For example, if your population is composed of 60% women and 40% men, the sample should probably approximate this breakdown as well.  I like to look, too, at the average LSAT and UGPA of the two groups, as well as Law School GPA, to make sure we’ll be assessing a sample that is academically representative of the larger population.  (A minimal sketch of drawing and sanity-checking a sample this way appears just after this list.)
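
For schools that want to script the selection step, here is a minimal sketch of what drawing and sanity-checking a sample could look like.  The 10-students-or-10% rule of thumb comes from the tips above; the roster fields, the randomly generated placeholder data, the two-reviewers-per-paper assumption, and the 20 minutes per review are all hypothetical and would be replaced with your own data and estimates.

```python
import random
import statistics

# Hypothetical roster of 900 J.D. students.  In practice this would come from
# the registrar or an assessment/data coordinator, not from the reviewers.
roster = [
    {"id": i,
     "gender": random.choice(["F", "M"]),
     "lsat": random.randint(148, 172),
     "law_gpa": round(random.uniform(2.3, 4.0), 2)}
    for i in range(1, 901)
]

# Rule of thumb from above: at least 10 students or 10% of the population.
n = max(10, round(0.10 * len(roster)))
sample = random.sample(roster, n)

# Quick representativeness check: compare the sample to the full population.
for label, group in (("population", roster), ("sample", sample)):
    pct_women = sum(s["gender"] == "F" for s in group) / len(group)
    print(f"{label}: {len(group)} students, {pct_women:.0%} women, "
          f"mean LSAT {statistics.mean(s['lsat'] for s in group):.1f}, "
          f"mean law GPA {statistics.mean(s['law_gpa'] for s in group):.2f}")

# Reviewer workload: two reviewers per paper at roughly 20 minutes each.
reviewer_hours = n * 2 * 20 / 60
print(f"Estimated workload: about {reviewer_hours:.0f} reviewer-hours")
```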

In the assessment projects I have worked on, I have found sampling to be an effective way to make assessment easier for faculty who have a lot of competing demands on their time.

Keeping Track of Assessment Follow-Up

Once a school implements its assessment plan, it will begin collecting a lot of data, distilling it into results, and hopefully identifying recommendations for improving student learning based on those results.  That is a lot of information to manage, and it’s easy for the work of a school’s assessment committees to end up sitting on a shelf, forgotten with the passage of time.  Assessment is not about producing reports; it’s about converting student data into meaningful action.

I developed a template for schools to use in keeping track of their assessment methods, findings, and recommendations.  You don’t need fancy software, like Weave, to keep track of a single set of learning outcomes (university-level metrics are another matter).  A simple Excel spreadsheet will do; a minimal sketch of one possible layout appears after the list below.  For each learning outcome, list the following:

  • The year the outcome was assessed.
  • Who led the assessment team or committee.
  • The methods used to complete the assessment.
  • The committee’s key findings.
  • Recommendations based on the report.
  • For each recommendation:
    • Which administrator or committee is responsible for follow-up.
    • The status of that recommendation: whether it was implemented and when.
    • Color code based on status (green = implemented; yellow = in progress; red = no action to date).
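
If it helps to see the layout spelled out, here is a minimal sketch that writes the tracking sheet as a CSV file using Python’s standard library; the column names mirror the bullets above, and the single example row is entirely hypothetical.

```python
import csv

columns = [
    "Learning Outcome", "Year Assessed", "Assessment Lead", "Methods",
    "Key Findings", "Recommendation", "Responsible Party", "Status",
]

# One hypothetical row; the "Status" column drives the color coding
# (green = implemented, yellow = in progress, red = no action to date).
example_row = [
    "Written communication", "2023-24", "Assessment Committee",
    "Sampled memos scored with a shared rubric",
    "Students met expectations on organization; citation form lagged",
    "Add a citation workshop to the first-year legal writing sequence",
    "Director of Legal Writing", "Yellow - in progress",
]

with open("assessment_tracking.csv", "w", newline="") as f:
    writer = csv.writer(f)
    writer.writerow(columns)
    writer.writerow(example_row)
```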

This simple format allows the dean and faculty to ensure that tangible results come out of the assessment process.  In the template, I included examples of methods, findings, and recommendations for one of seven learning outcomes.  (These are made-up findings and recommendations that I created as an example.  They don’t necessarily reflect those of St. John’s.)  Feel free to use and adapt it at your school.  (LC)