The Value of Sampling in Assessment

I just returned from the biennial ABA Associate Deans’ Conference, which is a fun and rewarding gathering of associate deans for academic affairs, student affairs, research, administration, and similar roles.  (Interestingly, more and more associate deans seem to have assessment in their titles.)

I spoke on a plenary panel about assessment, and I discussed the value of sampling in conducting programmatic assessment.  I wanted to elaborate on some of my thoughts on the subject.

Let’s say a school wants to assess the extent to which students are meeting the learning outcome of writing.  One way to do so would be to conduct what is called a “census,” in which every student’s writing in a course or sequence is evaluated by an assessment committee.  In a small LL.M. or Juris Master’s program of 10 or 20 students, this might be feasible.  But in a school with, say, 900 J.D. students, it is not workable.

A more feasible approach is to use a “sample” — a subset of the larger group.  So instead of reviewing 900 papers, perhaps the committee might look at 50 or 100.  If the sample is properly constructed, it is permissible to extrapolate the results and draw conclusions about the larger population.

Sometimes using a census is workable, even for a large group.  For example, if faculty who teach a subject all agree to embed 10 of the same multiple-choice questions in their final exams, those results could be analyzed to see how the students performed on the material being tested.
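If the item-level results are pooled in a spreadsheet, the tallying itself is easy to automate.  Here is a minimal sketch, assuming a hypothetical CSV (embedded_questions.csv) with one row per student answer; the column names are illustrative, not a prescribed format:

```python
import csv
from collections import defaultdict

# question -> [number correct, number attempted]
totals = defaultdict(lambda: [0, 0])

with open("embedded_questions.csv", newline="") as f:
    for row in csv.DictReader(f):  # assumed columns: section, student_id, question, correct
        q = row["question"]
        totals[q][0] += int(row["correct"])  # correct is recorded as 1 or 0
        totals[q][1] += 1

# Report the percentage correct on each embedded question, pooled across sections.
for q in sorted(totals, key=int):
    correct, attempted = totals[q]
    print(f"Q{q}: {correct / attempted:.0%} correct ({attempted} answers)")
```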

Frequently, though, we are assessing something, like writing, that does not lend itself easily to embedded multiple-choice questions or other easy-to-administer forms of assessment.  That’s where sampling comes in.  The key is to construct a representative sample of the larger population.  Here are some tips for doing so:

  • Consider, first, what you will be assessing.  Are you reviewing two-page papers?  Ten-page memos?  Thirty-page appellate briefs?  15-minute oral arguments in a moot court exercise?  Each of these calls for a different time commitment on the part of your reviewers.  Next, take into account how many reviewers you will have.  The more reviewers, the more documents you’ll be able to assess.  Consider, also, that you’ll likely need multiple reviewers per item being assessed, and time should be allotted for the reviewers to “calibrate” their expectations.  All of this will give you an idea of how much time it will take per reviewer per document or performance under review.
  • In general, the larger the sample size, the better.  Statistically, this has to do with the “margin of error” and “confidence interval.”  For more on picking a sample size, check out this very helpful article from Washington State University.  As a quick rule of thumb, though, use a minimum of 10 students or 10% of the population, whichever is greater.
  • It is preferable for those doing the assessment not to be involved with picking the sample itself.  Here’s where having an assessment or data coordinator can be helpful.  Most times, a sample can be collected at random.  Online random number generators can be of help here.  There are suggestions for simplifying this process in the document I linked to above.
  • Once you have selected your sample size and identified those who will be in the sample, make sure the sample is representative.  For example, if your population is composed of 60% women and 40% men, the sample should probably approximate this breakdown as well.  I like to look, too, at the average LSAT and UGPA of the groups, as well as Law School GPA, to make sure we’ll be assessing a sample that is academically representative of the larger population.  (A short sketch of one way to run this check appears just after this list.)
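To make the random selection and the representativeness check concrete, here is a minimal sketch in Python.  It assumes a hypothetical students.csv with one row per student and columns for gender, LSAT, UGPA, and law school GPA; your registrar’s export will look different, so treat the column names as placeholders:

```python
import csv
import random
import statistics

# Load the population; columns are placeholders (id, gender, lsat, ugpa, lgpa).
with open("students.csv", newline="") as f:
    population = list(csv.DictReader(f))

# Rule of thumb from this post: at least 10 students or 10% of the population.
sample_size = max(10, round(0.10 * len(population)))
sample = random.sample(population, sample_size)

def profile(group):
    """Summarize a group so the sample can be compared to the population."""
    return {
        "n": len(group),
        "pct_women": sum(s["gender"] == "F" for s in group) / len(group),
        "avg_lsat": statistics.mean(float(s["lsat"]) for s in group),
        "avg_ugpa": statistics.mean(float(s["ugpa"]) for s in group),
        "avg_law_gpa": statistics.mean(float(s["lgpa"]) for s in group),
    }

print("population:", profile(population))
print("sample:    ", profile(sample))
# If the sample's averages drift noticeably from the population's, redraw the
# sample or stratify (e.g., sample proportionally within gender groups).
```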

In the assessment projects I have worked on, I have found sampling to be an effective way to make assessment easier for faculty who have a lot of competing demands on their time.

A Simple, Low-Cost Assessment Process?

Professor Andrea Curcio (Georgia State) has published A Simple Low-Cost Institutional Learning-Outcomes Assessment Process, 67 J. Legal Educ. 489 (2018). It’s an informative article, arguing that, in light of budgetary pressures, faculty should use AAC&U-style rubrics to assess competencies across a range of courses. The results can then be pooled and analyzed.  In her abstract on SSRN, Professor Curcio states:

The essay explains a five-step institutional outcomes assessment process: 1. Develop rubrics for institutional learning outcomes that can be assessed in law school courses; 2. Identify courses that will use the rubrics; 3. Ask faculty in designated courses to assess and grade as they usually do, adding only one more step – completion of a short rubric for each student; 4. Enter the rubric data; and 5. Analyze and use the data to improve student learning. The essay appendix provides sample rubrics for a wide range of law school institutional learning outcomes. This outcomes assessment method provides an option for collecting data on institutional learning outcomes assessment in a cost-effective manner, allowing faculties to gather data that provides an overview of student learning across a wide range of learning outcomes. How faculties use that data depends upon the results as well as individual schools’ commitment to using the outcomes assessment process to help ensure their graduates have the knowledge, skills and values necessary to practice law.

This is an ideal way to conduct assessment, because it involves measuring students’ actual performance in their classes, rather than on a simulated exercise that is unconnected to a course and in which, therefore, they may not give full effort. This article is particularly valuable to the field because it includes sample rubrics for a range of learning outcomes that law schools are likely to measure. It’s definitely worth a read!
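For a concrete sense of what the pooling and analysis steps (steps 4 and 5) might look like, here is a minimal sketch.  It assumes, purely for illustration, that rubric ratings are recorded on a 1-4 continuum and exported to a CSV with outcome, course, student, and score columns; Professor Curcio’s article does not prescribe this particular format:

```python
import csv
from collections import Counter, defaultdict

scores = defaultdict(Counter)  # learning outcome -> Counter of rubric levels

with open("rubric_scores.csv", newline="") as f:
    for row in csv.DictReader(f):  # assumed columns: outcome, course, student_id, score
        scores[row["outcome"]][int(row["score"])] += 1

# For each outcome, report how many students were assessed and what share
# reached level 3 ("competent") or above on the assumed 1-4 continuum.
for outcome, counts in scores.items():
    total = sum(counts.values())
    competent_or_better = sum(n for level, n in counts.items() if level >= 3)
    print(f"{outcome}: {total} students assessed, "
          f"{competent_or_better / total:.0%} at level 3 or above")
```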

My only concern is with getting faculty buy-in.  Professor Curcio states, “In courses designated for outcomes measurement, professors add one more step to their grading process. After grading, faculty in designated courses complete an institutional faculty-designed rubric that delineates, along a continuum, students’ development of core competencies encompassed by a given learning outcome. The rubric may be applied to every student’s work or to that of a random student sample.”

Assessing legal research

Legal research is a competency mandated by the ABA standards. It’s also a natural area where law schools would want to know whether their students are performing competently. This outcome is low-hanging fruit for assessment, too, since there are numerous places in the curriculum where you can examine students’ research (1L Legal Writing, clinics, externships, and seminars all come to mind).

Laura Ray, Outreach and Instructional Services Librarian at Cleveland-Marshall College of Law, is gathering information on how law schools are planning to assess legal research outcomes. She invites comments at l.ray@csuohio.edu.

Suskie: How to Assess Anything Without Killing Yourself … Really!

Linda Suskie (former VP, Middle States Commission on Higher Education) has posted a great list of common-sense tips about assessment on her blog. They’re based on a book by Douglas Hubbard, How to Measure Anything: Finding the Value of “Intangibles” in Business. My favorites are:

1. We are (or should be) assessing because we want to make better decisions than what we would make without assessment results. If assessment results don’t help us make better decisions, they’re a waste of time and money.

4. Don’t try to assess everything. Focus on goals that you really need to assess and on assessments that may lead you to change what you’re doing. In other words, assessments that only confirm the status quo should go on a back burner. (I suggest assessing them every three years or so, just to make sure results aren’t slipping.)

5. Before starting a new assessment, ask how much you already know, how confident you are in what you know, and why you’re confident or not confident. Information you already have on hand, however imperfect, may be good enough. How much do you really need this new assessment?

8. If you know almost nothing, almost anything will tell you something. Don’t let anxiety about what could go wrong with assessment keep you from just starting to do some organized assessment.

9. Assessment results have both cost (in time as well as dollars) and value. Compare the two and make sure they’re in appropriate balance.

10. Aim for just enough results. You probably need less data than you think, and an adequate amount of new data is probably more accessible than you first thought. Compare the expected value of perfect assessment results (which are unattainable anyway), imperfect assessment results, and sample assessment results. Is the value of sample results good enough to give you confidence in making decisions?

14. Assessment value is perishable. How quickly it perishes depends on how quickly our students, our curricula, and the needs of our students, employers, and region are changing.

15. Something we don’t ask often enough is whether a learning experience was worth the time students, faculty, and staff invested in it. Do students learn enough from a particular assignment or co-curricular experience to make it worth the time they spent on it? Do students learn enough from writing papers that take us 20 hours to grade to make our grading time worthwhile?


Assessment and Strategic Planning

Over at PrawfsBlawg, my friend Jennifer Bard, dean of Cincinnati Law School, has a post on “Learning Outcomes as the New Strategic Planning.” She points readers to Professors Shaw and VanZandt’s book, Student Learning Outcomes and Law School Assessment. The book is an excellent resource, although parts of it may be too advanced for schools that are just getting started with assessment.  Still, it’s a great book, one that sits on the corner of my desk and is consulted often.  (Dean Bard also gave a nice shoutout to my blog as a resource.)

Citing an article by Hanover Research, Dean Bard draws a key distinction between strategic planning activities of yesteryear and what’s required under the new ABA standards.

Traditionally, law school strategic plans focused on outcomes other than whether students were learning what schools had determined they should be learning. These plans often addressed things like faculty scholarly production, diversity, student career placement, fundraising, and admissions inputs. Former ABA Standard 203 required a strategic planning process (albeit not a strategic plan per se) aimed at improvement across all of a school’s goals:

In addition to the self study described in Standard 202, a law school shall demonstrate that it regularly identifies specific goals for improving the law school’s program, identifies means to achieve the established goals, assesses its success in realizing the established goals and periodically re-examines and appropriately revises its established goals.

The old standard used the term “assessment” in a broad sense, not just as to student learning. In contrast, new Standard 315 focuses on assessment of learning outcomes to improve the curriculum:

The dean and the faculty of a law school shall conduct ongoing evaluation of the law school’s program of legal education, learning outcomes, and assessment methods; and shall use the results of this evaluation to determine the degree of student attainment of competency in the learning outcomes and to make appropriate changes to improve the curriculum.

This is the “closing the loop” of the assessment process: using the results of programmatic outcomes assessment to improve student learning.

So, what to do with the “old” way of strategic planning? Certainly, a school should still engage in a strategic planning process that focuses on all of the important outcomes and goals of the school, of which assessment of student learning is just one piece. Paraphrasing a common expression, if you don’t measure it, it doesn’t get done. Indeed, one can interpret Standards 201 and 202 as still requiring a planning process of some kind, particularly to guide resource allocation.

Still, much of the way that some schools engage in strategic planning is wasteful and ineffective. Often, the planning cycle takes years and results in a beautiful, glossy brochure (complete with photos of happy students and faculty) that sits on the shelf. I’m much more a fan of quick-and-dirty strategic planning that involves efficiently setting goals and action items that can be accomplished over a relatively short time horizon. What matters is not the product (the glossy brochure) but having a process that is nimble, is updated often, guides the allocation of resources, and serves as a self-accountability tool. (Here, I have to confess, my views have evolved since serving on the Strategic Priorities Review Team of our University. I now see much more value in the type of efficient planning I have described.)

In this respect, strategic planning and learning outcomes assessment should share an emphasis on process, not product. Some of the assessment reports generated by schools as a result of regional accreditation are truly works of art, but what is being done with the information? That, to me, is the ultimate question of the value of both processes.

What is the point of curriculum mapping?

Curriculum mapping is the process of identifying where in a school’s curriculum each of its learning outcomes is being taught and assessed. We recently posted our curriculum maps on our assessment webpage, including the survey instrument we used to collect data from faculty.

Curriculum mapping was a big discussion item at an assessment conference in Boston last spring, and understandably so. But, to be clear, curriculum mapping is not itself assessment. It is, rather, a tool to assist with the programmatic assessment process.  It also furthers curricular reform.

Mapping is not assessment in the programmatic sense because even the best of curriculum maps will not show whether, in fact, students are learning what we want them to learn. Curriculum mapping helps with assessment because it enables an assessment committee to identify where in the curriculum to look for particular evidence (“artifacts” in the lingo) of student learning.
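As a concrete illustration, here is a minimal sketch of a curriculum map as a simple data structure, with made-up course titles and outcome labels.  It shows the two uses described in this post: locating the courses where artifacts for a given outcome can be collected, and counting coverage to spot the gaps discussed below:

```python
# A curriculum map as a dictionary of course -> set of outcomes taught.
# Course titles and outcome labels are made up for illustration.
curriculum_map = {
    "Legal Writing I": {"written communication", "legal research"},
    "Civil Procedure": {"doctrinal knowledge", "legal analysis"},
    "Trial Advocacy": {"oral communication", "legal analysis"},
    "Negotiation": {"interpersonal skills"},
}

def courses_teaching(outcome):
    """Where an assessment committee might look for artifacts of this outcome."""
    return [course for course, outcomes in curriculum_map.items() if outcome in outcomes]

def coverage_counts():
    """How many courses address each outcome; thin coverage suggests a gap."""
    counts = {}
    for outcomes in curriculum_map.values():
        for outcome in outcomes:
            counts[outcome] = counts.get(outcome, 0) + 1
    return counts

print(courses_teaching("legal analysis"))  # ['Civil Procedure', 'Trial Advocacy']
print(coverage_counts())                   # e.g., {'written communication': 1, ...}
```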

It also helps with curricular reform in two ways:

  • by enabling a faculty to plug holes in the curriculum.  If an outcome has been identified as desirable but it is not being taught to all or most students, a new degree requirement could be created. Our school did this with negotiation. We had identified it as a valuable skill but realized, through a curriculum mapping exercise done several years ago, that it was not being taught to a sufficient number of students. We then created a 1L course specifically on negotiation and other interpersonal skills.
  • by restructuring degree requirements so that smarter sequencing occurs. In theory, advanced instruction should build on introductory instruction.  A curriculum map helps show the building blocks for particular outcomes: from introduction to competence to advanced work.

Overall, I hope that schools put serious thought into curriculum mapping, while also recognizing that it is not the end of assessment … but instead the beginning.

Checklist for Getting Started with Assessment

I’m at a conference, Responding to the New ABA Standards: Best Practices in Outcomes Assessment, being put on by Boston University and the Institute for Law Teaching and Learning.  The conference is terrific, and I’ll have a number of posts based on what I’ve learned today.

It strikes me that law schools are at varying stages of assessment.  Some schools—particularly those that have been dealing directly with regional accreditors—are fairly well along.

But other schools are just getting started.  For those schools, I recommend keeping it simple and taking this step-by-step approach:

  1. Ask the dean to appoint an assessment committee, composed of faculty who have a particular interest in teaching and learning.
  2. Start keeping detailed records and notes of what follows.  Consider a shared collaboration space like OneDrive or Dropbox.  
  3. As a committee, develop a set of 5-10 proposed learning outcomes for the JD degree, using those in Standard 302 as a starting point.  (Alternatively, if you wish to start getting broader buy-in, ask another committee, such as a curriculum committee, to undertake this task.)  If your school has a particular mission or focus, make sure it is incorporated into one or more of the outcomes.
  4. Bring the learning outcomes to the full faculty for a vote.
  5. Map the curriculum.  Send a survey to faculty, asking them to identify which of the institutional outcomes are taught in their courses.  If you want to go further, survey faculty on the depth of teaching and learning (introduction, practice, mastery).  Compile a chart with the classes on the Y axis and the learning outcomes on the X axis, and check off the appropriate boxes to indicate in which courses each outcome is being taught.  (Remember, the map only shows where outcomes are taught; assessment will later determine whether students are actually learning them.)  A short sketch of compiling such a chart appears after this list.
  6. Identify one of the outcomes to assess and how you’ll do so: who will measure it, which assessment tools they’ll use, and what will be done with the results.
  7. Put your learning outcomes on your school’s website.
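Here is the sketch promised in step 5: a minimal example of turning faculty survey responses into the classes-by-outcomes chart.  The course names, outcomes, and depth labels are placeholders for whatever your survey actually collects:

```python
# Turn faculty survey responses into a classes-by-outcomes chart.
# The course names, outcomes, and depth labels below are placeholders.
responses = [
    {"course": "Contracts", "outcome": "Legal analysis", "depth": "Introduction"},
    {"course": "Legal Writing I", "outcome": "Written communication", "depth": "Practice"},
    {"course": "Appellate Advocacy", "outcome": "Written communication", "depth": "Mastery"},
]

courses = sorted({r["course"] for r in responses})
outcomes = sorted({r["outcome"] for r in responses})
cells = {(r["course"], r["outcome"]): r["depth"] for r in responses}

# Print a simple grid: classes down the side (Y axis), outcomes across the top (X axis).
col_width = max(len(o) for o in outcomes) + 2
print(" " * 20 + "".join(o.ljust(col_width) for o in outcomes))
for course in courses:
    row = "".join(cells.get((course, o), "-").ljust(col_width) for o in outcomes)
    print(course.ljust(20) + row)
```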

All of this can probably be done in 1-2 years.  It essentially completes the “design phase” of the assessment process.  Separately, I’ll post about some ideas of what not to do in the early stages …