About Larry Cunningham

Law professor and Vice Dean at St. John's Law School. Former prosecutor and defense attorney.

What Law Schools Can Learn about Assessment from Other Disciplines

I have spent the last few days at the ExamSoft Assessment Conference. I gave a presentation on assessment developments in legal education, and it was great to see colleagues from other law schools there. I spent a lot of time attending presentations about how other disciplines are using assessment. I was particularly impressed by what the sciences are doing, especially nursing, pharmacy, physical therapy, podiatry, and medicine. I came away from the conference with the following takeaways about how these other disciplines are using assessment:

  • They use assessment data to improve student learning, at both the individual and macro levels. They are less focused on using assessments to “sort” students along a curve for grading purposes. Driven in part by their accreditors, the sciences use assessment data to help individual students recognize their weaknesses and, by graduation, reach the level expected for eventual licensure, sometimes through remediation. They also use assessment data to drive curricular and teaching reform.
  • They focus on the validity and reliability of their summative assessments. This is probably not surprising, since scientists are trained in the scientific method. They are also, by nature, accepting of data and statistics. They utilize item analysis reports (see the next bullet) and rubrics (for essays) to ensure that their assessments are effective and that their grading is reliable. Assessments are reused and improved over time; thus, a lot of effort is put into exam security.
  • They utilize item analysis reports to improve their assessments over time. Item analysis reports show statistics such as the KR-20 coefficient (a measure of an exam’s internal-consistency reliability) and point-biserial correlations (a measure of how well each item discriminates between stronger and weaker students). They can be generated by most scoring systems, such as Scantron and ExamSoft. (See the first sketch after this list for how these statistics are computed.)
  • They utilize multiple, formative assessments in courses. 
  • They collect a lot of data on students.
  • They cooperate and share assessments across sections and professors.  It is not uncommon for there to be a single, departmentally approved exam for a particular course. Professors teaching multiple sections of a course collaborate on writing the exam against a common set of learning outcomes.
  • They categorize and tag questions to track student progress and to assist with programmatic assessment. (In law, this could work as follows. Questions could be tagged against programmatic learning outcomes [such as knowledge of the law] and to content outlines [e.g., in Torts, a question could be tagged as referring to Battery].)  This allows them to generate reports that show how students perform over time in a particular outcome or topic.
  • They debrief assessments with students, using the results to help students learn how to improve, even after the course is over.  Here, categorization of questions is important.  (I started doing this in my Evidence course. I tagged multiple choice questions as testing hearsay, relevance, privilege, etc. This allowed me to generate reports out of Scantron ParScore that showed (1) how the class, as a whole, did on each category; and (2) how individual students did on each category; the second sketch after this list illustrates the idea. In turn, I’ll be able to use the data to improve my teaching next year.)
  • They utilize technology, such as ExamSoft, to make all of this data analysis and reporting possible.
  • They have trained assessment professionals to assist with the entire process.  Many schools have assessment departments or offices that can set up assessments and reports. Should we rethink the role of faculty support staff? Should we have faculty assistants move away from traditional secretarial functions and toward assisting faculty with assessments? What training would be required?
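
To make the item analysis statistics mentioned above concrete, here is a minimal sketch in Python (using numpy and an invented 0/1 response matrix, not any vendor’s actual report format) of how a KR-20 reliability coefficient and per-item point-biserial correlations can be computed:

```python
import numpy as np

# Hypothetical data: rows = students, columns = items; 1 = correct, 0 = incorrect
responses = np.array([
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 0, 1],
])

k = responses.shape[1]              # number of items
totals = responses.sum(axis=1)      # each student's raw score
p = responses.mean(axis=0)          # proportion of students answering each item correctly
q = 1 - p

# KR-20: internal-consistency reliability of the exam as a whole (0 to 1; higher is better)
kr20 = (k / (k - 1)) * (1 - np.sum(p * q) / totals.var())

# Point-biserial correlation for each item: how strongly getting the item right
# tracks overall performance (a rough measure of item discrimination)
point_biserials = [np.corrcoef(responses[:, i], totals)[0, 1] for i in range(k)]

print(f"KR-20: {kr20:.2f}")
for i, r in enumerate(point_biserials, start=1):
    print(f"Item {i}: point-biserial = {r:.2f}")
```

(Many item analysis reports also show a corrected point-biserial that excludes the item from the total score, but the idea is the same.)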
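
And here is an equally hypothetical sketch of the category reporting described in the Evidence example above: each question is tagged with a topic, and the same answer data are rolled up by category for the class as a whole and for each individual student. The column names and data below are invented; this is not ParScore or ExamSoft output.

```python
import pandas as pd

# One row per student per question: 1 = correct, 0 = incorrect
answers = pd.DataFrame({
    "student":  ["A", "A", "A", "B", "B", "B"],
    "question": ["Q1", "Q2", "Q3", "Q1", "Q2", "Q3"],
    "category": ["hearsay", "relevance", "privilege", "hearsay", "relevance", "privilege"],
    "correct":  [1, 0, 1, 0, 1, 1],
})

# (1) How the class as a whole did on each tagged category
class_report = answers.groupby("category")["correct"].mean()

# (2) How each individual student did on each category
student_report = answers.pivot_table(index="student", columns="category",
                                     values="correct", aggfunc="mean")

print(class_report)
print(student_report)
```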

Incidentally, I highly recommend the ExamSoft Assessment Conference, regardless of whether one is at an “ExamSoft law school” or not. (Full disclosure: I, like all speakers, received a very modest honorarium for my talk.) The conference was full of useful, practical information about teaching, learning, and assessment.  ExamSoft schools can also benefit from learning about new features of the software.

Off topic: WaPo op-ed on access-to-justice

Not directly assessment-related, but I thought I would share that Jennifer Bard (Cincinnati) and I have an op-ed in the Washington Post about access to justice. Drawing on an analogy to medicine, we argue:

Professionals must first acknowledge that not every legal task must be performed by a licensed lawyer. Instead, we need to adopt a tiered system of legal-services delivery that allows for lower barriers to entry. Just as a pharmacist can administer vaccines and a nurse practitioner can be on the front line of diagnosing and treating ailments, we should have legal practitioners who can also exercise independent judgment within the scope of their training. Such a change would expand the preparation and independence of the existing network of paralegals, secretaries and investigators already assisting lawyers.

This creates greater, not fewer, opportunities for law schools, which should provide a range of educational opportunities, from short programs for limited license holders to Ph.D.’s for those interested in academic research.

Enjoy the article!

Suskie: How to Assess Anything Without Killing Yourself … Really!

Linda Suskie (former VP, Middle States Commission on Higher Education) has posted a great list of common-sense tips about assessment on her blog. They’re based on a book by Douglas Hubbard, How to Measure Anything: Finding the Value of “Intangibles” in Business. My favorites are:

1. We are (or should be) assessing because we want to make better decisions than what we would make without assessment results. If assessment results don’t help us make better decisions, they’re a waste of time and money.

4. Don’t try to assess everything. Focus on goals that you really need to assess and on assessments that may lead you to change what you’re doing. In other words, assessments that only confirm the status quo should go on a back burner. (I suggest assessing them every three years or so, just to make sure results aren’t slipping.)

5. Before starting a new assessment, ask how much you already know, how confident you are in what you know, and why you’re confident or not confident. Information you already have on hand, however imperfect, may be good enough. How much do you really need this new assessment?

8. If you know almost nothing, almost anything will tell you something. Don’t let anxiety about what could go wrong with assessment keep you from just starting to do some organized assessment.

9. Assessment results have both cost (in time as well as dollars) and value. Compare the two and make sure they’re in appropriate balance.

10. Aim for just enough results. You probably need less data than you think, and an adequate amount of new data is probably more accessible than you first thought. Compare the expected value of perfect assessment results (which are unattainable anyway), imperfect assessment results, and sample assessment results. Is the value of sample results good enough to give you confidence in making decisions?

14. Assessment value is perishable. How quickly it perishes depends on how quickly our students, our curricula, and the needs of our students, employers, and region are changing.

15. Something we don’t ask often enough is whether a learning experience was worth the time students, faculty, and staff invested in it. Do students learn enough from a particular assignment or co-curricular experience to make it worth the time they spent on it? Do students learn enough from writing papers that take us 20 hours to grade to make our grading time worthwhile?

 

New Article on Lessons Learned from Medical Education about Assessing Professional Formation Outcomes

Neil Hamilton (St. Thomas, MN) has a new article on SSRN, Professional-Identity/Professional-Formation/Professionalism Learning Outcomes: What Can We Learn About Assessment From Medical Education? 

Here’s an excerpt from the abstract:

The accreditation changes requiring competency-based education are an exceptional opportunity for each law school to differentiate its education so that its students better meet the needs of clients, legal employers, and the legal system. While ultimately competency-based education will lead to a change in the model of how law faculty and staff, students, and legal employers understand legal education, this process of change is going to take a number of years. However, the law schools that most effectively lead this change are going to experience substantial differentiating gains in terms of both meaningful employment for graduates and legal employer and client appreciation for graduates’ competencies in meeting employer/client needs. This will be particularly true for those law schools that emphasize the foundational principle of competency-based learning that each student must grow toward later stages of self-directed learning – taking full responsibility as the active agent for the student’s experiences and assessment activities to achieve the faculty’s learning outcomes and the student’s ultimate goal of bar passage and meaningful employment.

Medical education has had fifteen more years of experience with competency-based education from which legal educators can learn. This article has focused on medical education’s “lessons learned” applicable to legal education regarding effective assessment of professional-identity learning outcomes.

Legal education can look to many other disciplines, including medicine, for examples of how to implement outcome-based assessment. Professor Hamilton’s article nicely draws upon lessons learned by medical schools in assessing professional formation, an outcome that some law schools have decided to adopt.

For professional identity formation in particular, progression is important: the curriculum and assessments must build on one another so that we can see whether students are improving in this area. The hidden curriculum is a valuable venue for teaching and assessing a competency like professional identity formation. But this requires coordination among various silos:

Law schools historically have been structured in silos with strongly guarded turf in and around each silo. Each of the major silos (including doctrinal classroom faculty, clinical faculty, lawyering skills faculty, externship directors, career services and professional development staff, and counseling staff) wants control over and autonomy regarding its turf. Coordination among these silos is going to take time and effort and involve some loss of autonomy but in return a substantial increase in student development and employment outcomes. For staff in particular, there should be much greater recognition that they are co-educators along with faculty to help students achieve the learning outcomes.

Full-time faculty members were not trained in a competency-based education model, and many have limited experience with some of the competencies, for example teamwork, that many law schools are including in their learning outcomes. In my experience, many full-time faculty members also have enormous investments in doctrinal knowledge and legal and policy analysis concerning their doctrinal field. They believe that the student’s law school years are about learning doctrinal knowledge, strong legal and policy analysis, and research and writing skills. These faculty members emphasize that they have to stay focused on “coverage” with the limited time in their courses even though this model of coverage of doctrinal knowledge and the above skills overemphasizes these competencies in comparison with the full range of competencies that legal employers and clients indicate they want.

In my view, this is the greatest challenge with implementing a competency-based model of education in law schools. (Prof. Hamilton’s article has a nice summary of time-based versus competency-based education models.) Most law school curricula are silo-based. At most schools, a required first-year curriculum is followed by a largely unconnected series of electives in the second and third years. There are few opportunities for longitudinal study of outcomes in such an environment. In medical schools, however, there are clear milestones at which to assess knowledge, skills, and values for progression and growth.

Database on Law Schools’ Learning Outcomes

The Holloran Center at St. Thomas Law School (MN)—run by Jerry Organ and Neil Hamilton—has created a database of law schools’ efforts to adopt learning outcomes.  The center plans to update the database quarterly.  

One of the very helpful aspects of the database is that it has coding so that a user can filter by school and by learning outcomes that go above and beyond the ABA minimum.  This will be a terrific resource as schools roll out the new ABA standards on learning outcomes.  In addition, for those of us interested in assessment as an area of scholarship, it is a treasure trove of data.

Frustratingly, it looks like many schools have decided not to go beyond the minimum competencies set forth in ABA Standard 302, what the Holloran Center has categorized as a “basic” set of outcomes. The ABA’s list is far from exhaustive. Schools that have essentially copied and pasted from Standard 302 have missed an opportunity to make their learning outcomes uniquely their own by incorporating aspects of their mission that make them distinctive from other schools. Worse, it may be a sign that some schools are being dragged into the world of assessment kicking and screaming. On the other hand, it may indicate a lack of training or a belief that the ABA’s minimums fully encapsulate the core learning outcomes that every student should attain. Only time will tell. As schools actually begin to assess their learning outcomes, we’ll have a better idea of how seriously assessment is being taken by law schools.

Assessing the Hidden Curriculum

I was honored to have been asked to attend St. Thomas (MN) Law School’s recent conference on professional formation, hosted by St. Thomas’ Holloran Center for Professional Formation, which is co-directed by Neil Hamilton and Jerry Organ. The conference was fascinating and exceptionally well-run. (I was particularly impressed by how Neil and Jerry integrated students from the Law Journal into the conference as full participants.) The two-day conference included a workshop to discuss ways to begin assessing professional formation in legal education. Speakers came from other professional disciplines, including medicine and the military.

One of the most important themes was the idea of the “hidden curriculum” in law schools, a phrase used by Professor (and Dean Emeritus) Louis Bilionis of the University of Cincinnati College of Law. The idea is that learning occurs in many settings, not just in a classroom where professors instill concepts through traditional teaching methods. Students interact with a range of individuals during their legal education, many of whom are actively contributing to their education, particularly as to professional formation. Consider:

  • The Career Development Office counselor who advises a student on how to deal with multiple, competing offers from law firms in a professional manner.
  • The Externship supervisor who helps a student reflect on an ethical issue that arose in his or her placement.
  • The secretary of a law school clinic who speaks with a student who has submitted a number of typo-ridden motions.
  • A non-faculty Assistant Dean who works with the Public Interest Law Student Association to put on a successful fundraising event for student fellowships, which involves setting deadlines, creating professional communications to donors, and leading a large staff of volunteer students.
  • The Law School receptionist who pulls a student aside before an interview to help the student regain composure.
  • A fellow student who suggests that a classmate could have handled an interaction with a professor in a more professional manner.

These are all opportunities for teaching professional formation, which for many schools is (at least nominally) a learning outcome. But how do we assess such out-of-classroom learning experiences? If professional formation is a learning outcome, I suggest that schools will need to develop methods of measuring the extent to which it is actually being achieved. Here are some suggestions:

  • Many schools with robust career services programs already assess student satisfaction in this area through student surveys.  They should consider adding questions to determine the extent to which students perceive that their career counselors helped them to become professionals.
  • Embed professional identity questions in final exams in Professional Responsibility and similar courses.
  • Survey alumni.
  • If professional identity is introduced in the first year, assess whether students in the 2L and 3L Externship Program have embodied lessons that were learned in the 1L curriculum.  Site supervisors could be asked, for example, to what extent students displayed a range of professional behaviors.
  • Ask the state bar for data on disciplinary violations for graduates of your school compared to others.

I recognize that a lot of these are indirect measures. However, if a school has a robust professional identity curriculum (as some do), direct measures can be collected and analyzed. In doing so, schools should not ignore the “hidden curriculum” when looking for evidence of student learning.

2017 Assessment Institute

It’s been a while since my last blog post, for which I’m very sorry! The Fall was a busy semester. I was in Cambodia on a Fulbright and a bug I picked up there knocked me off my feet for a while.  But, I’m better and back to blogging.

Max Huffman at Indiana University McKinney School of Law alerted me that the annual Assessment Institute will be held this year from October 22-24 in Indianapolis.  Law school assessment will be featured on the Graduate Track, which Professor Huffman will be co-directing.  There is a request for proposals. Looks like it will be a great event.

Upcoming ILT Conference on Formative Assessment

Although the ABA standards concern themselves primarily with programmatic assessment—that is, whether a school has a process to determine if students are achieving the learning goals we set for them and then uses the results to improve the curriculum—they also speak to course-level assessment. While the ABA standards do not require formative assessment in every class (see Interpretation 314-2), the curriculum must contain sufficient assessments to ensure that students receive “meaningful feedback.”

Thus, I was delighted to learn from the ASP listserv that the Institute for Law Teaching and Emory Law School will be hosting a conference on course-level formative assessment in large classes on March 25, 2017, in Atlanta, Georgia. More information at the link above.

Do exams measure speed or performance?

A new study out of BYU attempts to answer the question.  It’s summarized at TaxProf and the full article is here. From the abstract on SSRN:

What, if any, is the relationship between speed and grades on first year law school examinations? Are time-pressured law school examinations typing speed tests? Employing both simple linear regression and mixed effects linear regression, we present an empirical hypothesis test on the relationship between first year law school grades and speed, with speed represented by two variables: word count and student typing speed. Our empirical findings of a strong statistically significant positive correlation between total words written on first year law school examinations and grades suggest that speed matters. On average, the more a student types, the better her grade. In the end, however, typing speed was not a statistically significant variable explaining first year law students’ grades. At the same time, factors other than speed are relevant to student performance.

In addition to our empirical analysis, we discuss the importance of speed in law school examinations as a theoretical question and indicator of future performance as a lawyer, contextualizing the question in relation to the debate in the relevant psychometric literature regarding speed and ability or intelligence. Given that empirically, speed matters, we encourage law professors to consider more explicitly whether their exams over-reward length, and thus speed, or whether length and assumptions about speed are actually a useful proxy for future professional performance and success as lawyers.
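
To illustrate the kind of analysis the abstract describes (this is not the authors’ model, data, or code; the variable names and the synthetic data below are invented), a simple and a mixed-effects regression of grades on word count and typing speed might look like this in Python with statsmodels:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Fabricated data purely for illustration: 200 exams spread across 4 course sections
rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "word_count":   rng.normal(3000, 600, n),   # total words written on the exam
    "typing_speed": rng.normal(55, 10, n),      # words per minute
    "section":      rng.choice(["A", "B", "C", "D"], n),
})
df["grade"] = 70 + 0.005 * df["word_count"] + rng.normal(0, 5, n)

# Simple linear regression: grade as a function of word count and typing speed
ols_fit = smf.ols("grade ~ word_count + typing_speed", data=df).fit()

# Mixed-effects regression: same predictors, with a random intercept for each section
mixed_fit = smf.mixedlm("grade ~ word_count + typing_speed", data=df,
                        groups=df["section"]).fit()

print(ols_fit.summary())
print(mixed_fit.summary())
```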

The study raises important questions of how we structure exams. I know of colleagues who impose word count limits (enforceable thanks to exam software), and I think I may be joining the ranks. More broadly, are our high-stakes final exams truly measuring what we want them to?

Assessment and Strategic Planning

Over at PrawfsBlawg, my friend Jennifer Bard, dean of Cincinnati Law School, has a post on “Learning Outcomes as the New Strategic Planning.” She points readers to Professors Shaw and VanZandt’s book, Student Learning Outcomes and Law School Assessment. The book is, to be sure, an excellent resource, although parts of it may be too advanced for schools that are just getting started with assessment.  Still, it’s a great book, one that sits on the corner of my desk and is consulted often.  (Dean Bard also gave a nice shoutout to my blog as a resource.)

Citing an article by Hanover Research, Dean Bard draws a key distinction between strategic planning activities of yesteryear and what’s required under the new ABA standards.

Traditionally, law school strategic plans focused on outcomes other than whether students were learning what schools had determined their students should be learning. These often included things like faculty scholarly production, diversity, student career placement, fundraising, and admissions inputs. Former ABA Standard 203 required a strategic planning process (albeit not a strategic plan per se) to improve all of the goals of a school:

In addition to the self study described in Standard 202, a law school shall demonstrate that it regularly identifies specific goals for improving the law school’s program, identifies means to achieve the established goals, assesses its success in realizing the established goals and periodically re-examines and appropriately revises its established goals.

The old standard used the term “assessment” in a broad sense, not just as to student learning. In contrast, new Standard 315 focuses on assessment of learning outcomes to improve the curriculum:

The dean and the faculty of a law school shall conduct ongoing evaluation of the law school’s program of legal education, learning outcomes, and assessment methods; and shall use the results of this evaluation to determine the degree of student attainment of competency in the learning outcomes and to make appropriate changes to improve the curriculum.

This is the “closing the loop” of the assessment process: using the results of programmatic outcomes assessment to improve student learning.

So, what to do with the “old” way of strategic planning? Certainly, a school should still engage in a  strategic planning process that focuses on all of the important outcomes and goals of the school, of which assessment of student learning is just one piece. Paraphrasing a common expression, if you don’t measure it, it doesn’t get done. Indeed, one can interpret Standards 201 and 202 as still requiring a planning process of some kind, particularly to guide resource allocation.

Still, much of the way that some schools engage in strategic planning is wasteful and ineffective. Often, the planning cycle takes years and results in a beautiful, glossy brochure (complete with photos of happy students and faculty) that sits on the shelf. I’m much more a fan of quick-and-dirty strategic planning that involves efficiently setting goals and action items that can be accomplished over a relatively short time horizon. What matters is not the product (the glossy brochure) but the process: one that is nimble, updated often, used to guide the allocation of resources, and treated as a self-accountability tool. (Here, I have to confess, my views on this have evolved since serving on the Strategic Priorities Review Team of our University. I now see much more value in the type of efficient planning I have described.)

In this respect, strategic planning and learning outcomes assessment share an emphasis on process, not product. Some of the assessment reports generated by schools as a result of regional accreditation are truly works of art, but what is being done with the information? That, to me, is the ultimate question about the value of both processes.