What Law Schools Can Learn about Assessment from Other Disciplines

I have spent the last few days at the ExamSoft Assessment Conference. I gave a presentation on assessment developments in legal education, and it was great to see colleagues from other law schools there. I spent a lot of time attending presentations about how other disciplines are using assessment. I was particularly impressed by what the sciences are doing, especially nursing, pharmacy, physical therapy, podiatry, and medicine. I came away from the conference with the following takeaways about how these other disciplines are using assessment:

  • They use assessment data to improve student learning, at both the individual and the macro level. They are less focused on using assessments to “sort” students along a curve for grading purposes. Driven in part by their accreditors, the sciences use assessment data to help individual students recognize their weaknesses and, by graduation, reach the level expected for eventual licensure, sometimes through remediation. They also use assessment data to drive curricular and teaching reform.
  • They focus on the validity and reliability of their summative assessments.  This is probably not surprising since scientists are trained in the scientific method. They are also, by nature, accepting of data and statistics. They utilize item analysis reports (see bullet #3) and rubrics (for essays) to ensure that their assessments are effective and that their grading is reliable. Assessments are reused and improved over time. Thus, a lot of effort is put into exam security.
  • They utilize item analysis reports to improve their assessments over time. Item analysis reports show statistics such as a KR-20 score and point-biserial coefficients, which help assess the quality of individual test items and of the exam as a whole. They can be generated by most scoring systems, such as Scantron and ExamSoft. (A short sketch of how these two statistics are computed appears after this list.)
  • They utilize multiple formative assessments in courses.
  • They collect a lot of data on students.
  • They cooperate and share assessments across sections and professors. It is not uncommon for there to be a single, departmentally approved exam for a particular course. Professors teaching multiple sections of a course collaborate on writing the exam against a common set of learning outcomes.
  • They categorize and tag questions to track student progress and to assist with programmatic assessment. (In law, this could work as follows: questions could be tagged against programmatic learning outcomes [such as knowledge of the law] and against content outlines [e.g., in Torts, a question could be tagged as referring to Battery].) This allows them to generate reports that show how students perform over time on a particular outcome or topic.
  • They debrief assessments with students, using the results to help students learn how to improve, even after the course is over. Here, categorization of questions is important. (I started doing this in my Evidence course. I tagged multiple-choice questions as testing hearsay, relevance, privilege, etc. This allowed me to generate reports out of Scantron ParScore that showed (1) how the class, as a whole, did on each category; and (2) how individual students did on each category. In turn, I’ll be able to use the data to improve my teaching next year.) A simple sketch of this kind of category report also appears after this list.
  • They utilize technology, such as ExamSoft, to make all of this data analysis and reporting possible.
  • They have trained assessment professionals to assist with the entire process. Many schools have assessment departments or offices that can set up assessments and reports. Should we rethink the role of faculty support staff? Should we have faculty assistants move away from traditional secretarial functions and toward assisting faculty with assessments? What training would be required?
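
Here is the sketch promised in the item analysis bullet above: a minimal, purely illustrative Python example of how a KR-20 reliability score and point-biserial coefficients are computed from a small right/wrong score matrix. The scores are invented, and nothing here is tied to ExamSoft or Scantron; in practice these numbers come straight out of the scoring software.

```python
# Hypothetical illustration of the item analysis statistics mentioned above
# (KR-20 and point-biserial). The score matrix is invented; real numbers
# would come from a scoring system such as ExamSoft or ParScore.
import numpy as np

# Rows = students, columns = exam items; 1 = correct, 0 = incorrect.
scores = np.array([
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 0, 0, 1],
])

totals = scores.sum(axis=1)   # each student's total score
k = scores.shape[1]           # number of items on the exam
p = scores.mean(axis=0)       # proportion of students answering each item correctly
q = 1 - p                     # proportion answering each item incorrectly

# KR-20: internal-consistency reliability for an exam scored right/wrong.
kr20 = (k / (k - 1)) * (1 - (p * q).sum() / totals.var())

# Point-biserial: correlation between performance on a single item and the
# total score. Items with low or negative values do not discriminate between
# stronger and weaker students and are candidates for revision.
point_biserial = [np.corrcoef(scores[:, i], totals)[0, 1] for i in range(k)]

print(f"KR-20 reliability: {kr20:.2f}")
for i, r in enumerate(point_biserial, start=1):
    print(f"Item {i}: point-biserial = {r:.2f}")
```

A common rule of thumb treats a KR-20 of roughly 0.7 or higher as acceptable reliability for a classroom exam, and an item whose point-biserial is near zero or negative as one worth rewriting.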
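And here is the category-report sketch mentioned in the debriefing bullet. The topic tags, student names, and answers below are made up; the point is simply that once every question carries a tag, per-category reports for the class and for individual students are a few lines of bookkeeping (and, again, tools like ExamSoft and ParScore produce equivalent reports without any hand-written code).

```python
# Hypothetical illustration of tagging questions by topic and rolling the
# results up into category reports for individual students and for the class.
from collections import defaultdict

# Topic tag for each of the five questions on a short Evidence quiz (invented).
tags = ["hearsay", "relevance", "hearsay", "privilege", "relevance"]

# Each student's answers to the five questions (1 = correct, 0 = incorrect).
responses = {
    "Student A": [1, 1, 0, 1, 1],
    "Student B": [1, 0, 0, 1, 0],
    "Student C": [0, 1, 1, 1, 1],
}

def category_report(answers):
    """Percent correct per tag for one row of answers (0/1 scores or class averages)."""
    correct, attempted = defaultdict(float), defaultdict(int)
    for tag, answer in zip(tags, answers):
        attempted[tag] += 1
        correct[tag] += answer
    return {tag: round(100 * correct[tag] / attempted[tag]) for tag in attempted}

# Individual reports: where does each student need to focus?
for student, answers in responses.items():
    print(student, category_report(answers))

# Class-wide report: which topics deserve more class time next year?
class_averages = [sum(item) / len(responses) for item in zip(*responses.values())]
print("Class", category_report(class_averages))
```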

Incidentally, I highly recommend the ExamSoft Assessment Conference, regardless of whether one is at an “ExamSoft law school” or not. (Full disclosure: I, like all speakers, received a very modest honorarium for my talk.) The conference was full of useful, practical information about teaching, learning, and assessment.  ExamSoft schools can also benefit from learning about new features of the software.

Suskie: How to Assess Anything Without Killing Yourself … Really!

Linda Suskie (former VP, Middle States Commission on Higher Education) has posted a great list of common-sense tips about assessment on her blog. They’re based on a book by Douglas Hubbard, How to Measure Anything: Finding the Value of “Intangibles” in Business. My favorites are:

1. We are (or should be) assessing because we want to make better decisions than we would make without assessment results. If assessment results don’t help us make better decisions, they’re a waste of time and money.

4. Don’t try to assess everything. Focus on goals that you really need to assess and on assessments that may lead you to change what you’re doing. In other words, assessments that only confirm the status quo should go on a back burner. (I suggest assessing them every three years or so, just to make sure results aren’t slipping.)

5. Before starting a new assessment, ask how much you already know, how confident you are in what you know, and why you’re confident or not confident. Information you already have on hand, however imperfect, may be good enough. How much do you really need this new assessment?

8. If you know almost nothing, almost anything will tell you something. Don’t let anxiety about what could go wrong with assessment keep you from just starting to do some organized assessment.

9. Assessment results have both cost (in time as well as dollars) and value. Compare the two and make sure they’re in appropriate balance.

10. Aim for just enough results. You probably need less data than you think, and an adequate amount of new data is probably more accessible than you first thought. Compare the expected value of perfect assessment results (which are unattainable anyway), imperfect assessment results, and sample assessment results. Is the value of sample results good enough to give you confidence in making decisions?

14. Assessment value is perishable. How quickly it perishes depends on how quickly our students, our curricula, and the needs of our students, employers, and region are changing.

15. Something we don’t ask often enough is whether a learning experience was worth the time students, faculty, and staff invested in it. Do students learn enough from a particular assignment or co-curricular experience to make it worth the time they spent on it? Do students learn enough from writing papers that take us 20 hours to grade to make our grading time worthwhile?


New Article on Lessons Learned from Medical Education about Assessing Professional Formation Outcomes

Neil Hamilton (St. Thomas, MN) has a new article on SSRN, Professional-Identity/Professional-Formation/Professionalism Learning Outcomes: What Can We Learn About Assessment From Medical Education? 

Here’s an excerpt from the abstract:

The accreditation changes requiring competency-based education are an exceptional opportunity for each law school to differentiate its education so that its students better meet the needs of clients, legal employers, and the legal system. While ultimately competency-based education will lead to a change in the model of how law faculty and staff, students, and legal employers understand legal education, this process of change is going to take a number of years. However, the law schools that most effectively lead this change are going to experience substantial differentiating gains in terms of both meaningful employment for graduates and legal employer and client appreciation for graduates’ competencies in meeting employer/client needs. This will be particularly true for those law schools that emphasize the foundational principle of competency-based learning that each student must grow toward later stages of self-directed learning – taking full responsibility as the active agent for the student’s experiences and assessment activities to achieve the faculty’s learning outcomes and the student’s ultimate goal of bar passage and meaningful employment.

Medical education has had fifteen more years of experience with competency-based education from which legal educators can learn. This article has focused on medical education’s “lessons learned” applicable to legal education regarding effective assessment of professional-identity learning outcomes.

Legal education has many other disciplines, including medicine, to look to for examples of implementing outcome-based assessment.  Professor Hamilton’s article nicely draws upon lessons learned by medical schools in assessing professional formation, an outcome that some law schools have decided to implement.

In looking at professional identity formation in particular, progression is important. The curriculum and assessments must build on each other so that we can see whether students are improving in this area. The hidden curriculum is a valuable place in which to teach and assess a competency like professional identity formation. But this requires coordination among various silos:

Law schools historically have been structured in silos with strongly guarded turf in and around each silo. Each of the major silos (including doctrinal classroom faculty, clinical faculty, lawyering skills faculty, externship directors, career services and professional development staff, and counseling staff) wants control over and autonomy regarding its turf. Coordination among these silos is going to take time and effort and involve some loss of autonomy but in return a substantial increase in student development and employment outcomes. For staff in particular, there should be much greater recognition that they are co-educators along with faculty to help students achieve the learning outcomes.

Full-time faculty members were not trained in a competency-based education model, and many have limited experience with some of the competencies, for example teamwork, that many law schools are including in their learning outcomes. In my experience, many full-time faculty members also have enormous investments in doctrinal knowledge and legal and policy analysis concerning their doctrinal field. They believe that the student’s law school years are about learning doctrinal knowledge, strong legal and policy analysis, and research and writing skills. These faculty members emphasize that they have to stay focused on “coverage” with the limited time in their courses even though this model of coverage of doctrinal knowledge and the above skills overemphasizes these competencies in comparison with the full range of competencies that legal employers and clients indicate they want.

In my view, this is the greatest challenge with implementing a competency-based model of education in law schools. (Prof. Hamilton’s article has a nice summary of time-based versus competency-based education models.) Most law school curricula are silo-based. At most schools, a required first-year curriculum is followed by a largely unconnected series of electives in the second and third years. There are few opportunities for longitudinal study of outcomes in such an environment. In medical schools, however, there are clear milestones at which to assess knowledge, skills, and values for progression and growth.

Assessment is Up, Standardized Tests are Down

A new study from the Association of American Colleges and Universities (AAC&U) found that 87% of colleges and universities are assessing student learning across the curriculum, and another 11% plan to do so. The remaining 2%, well, may be in hot water with their accreditors. In addition, 85% reported having a common set of learning outcomes across all undergraduate programs, up from 78% in 2008.

An AAC&U official, Debra Humphreys, gave credit to the accreditors for this increase: “If they had not been pushing, these numbers would not be like this.”

On the other hand, fewer institutions are using standardized testing to assess learning in general education (down to 38% from 49% in 2008).  Instead, they are using rubrics to a greater extent (up from 77% to 91%), a recognition that faculty prefer to use assessments that they develop themselves.

More about the AAC&U study is available in this story on Inside Higher Ed.

Why a Blog on Assessment in Legal Education?

When the American Bar Association first began discussing revision of its accreditation standards for the J.D. degree to include a full-blown assessment requirement, I was skeptical. I saw “assessment” as more higher ed-speak with no benefit to students. “We’re already assessing students – we give final exams, writing assignments, and projects, and we track bar passage and career outcomes, right?” Later, as I learned more about assessment—including the differences between course-level and programmatic assessment—I came to the conclusion that, stripped of its at-times burdensome lingo, it was a simple process with a worthy goal: improving student learning through data-driven analysis. The process, I learned, was rooted in a scholarly approach to learning: define outcomes, measure and analyze direct and indirect evidence of student learning, and then use the information learned to improve teaching and learning.

Legal education is one of the last disciplines to adopt an assessment philosophy. Looking at assessment reports from programs, such as pharmacy, that have used assessment for years can be daunting; those programs have come a long way in a relatively short period of time. There is a dearth of information about assessment in legal education, and hence this blog was born. My goal is to bring together resources on law school assessment in one place while also offering my observations and practical insights to help keep assessment from drowning in lingo and endless report writing. I hope readers find it valuable.