New Article on Lessons Learned from Medical Education about Assessing Professional Formation Outcomes

Neil Hamilton (St. Thomas, MN) has a new article on SSRN, Professional-Identity/Professional-Formation/Professionalism Learning Outcomes: What Can We Learn About Assessment From Medical Education? 

Here’s an excerpt from the abstract:

The accreditation changes requiring competency-based education are an exceptional opportunity for each law school to differentiate its education so that its students better meet the needs of clients, legal employers, and the legal system. While ultimately competency-based education will lead to a change in the model of how law faculty and staff, students, and legal employers understand legal education, this process of change is going to take a number of years. However, the law schools that most effectively lead this change are going to experience substantial differentiating gains in terms of both meaningful employment for graduates and legal employer and client appreciation for graduates’ competencies in meeting employer/client needs. This will be particularly true for those law schools that emphasize the foundational principle of competency-based learning that each student must grow toward later stages of self-directed learning – taking full responsibility as the active agent for the student’s experiences and assessment activities to achieve the faculty’s learning outcomes and the student’s ultimate goal of bar passage and meaningful employment.

Medical education has had fifteen more years of experience with competency-based education from which legal educators can learn. This article has focused on medical education’s “lessons learned” applicable to legal education regarding effective assessment of professional-identity learning outcomes.

Legal education has many other disciplines, including medicine, to look to for examples of implementing outcome-based assessment.  Professor Hamilton’s article nicely draws upon lessons learned by medical schools in assessing professional formation, an outcome that some law schools have decided to implement.

In looking at professional identity formation in particular, progression is important. The curriculum and assessments must build on each other so that we can see whether students are improving in this area. The hidden curriculum is a valuable venue for teaching and assessing a competency like professional identity formation. But this requires coordination among various silos:

Law schools historically have been structured in silos with strongly guarded turf in and around each silo. Each of the major silos (including doctrinal classroom faculty, clinical faculty, lawyering skills faculty, externship directors, career services and professional development staff, and counseling staff) wants control over and autonomy regarding its turf. Coordination among these silos is going to take time and effort and involve some loss of autonomy but in return a substantial increase in student development and employment outcomes. For staff in particular, there should be much greater recognition that they are co-educators along with faculty to help students achieve the learning outcomes.

Full-time faculty members were not trained in a competency-based education model, and many have limited experience with some of the competencies, for example teamwork, that many law schools are including in their learning outcomes. In my experience, many full-time faculty members also have enormous investments in doctrinal knowledge and legal and policy analysis concerning their doctrinal field. They believe that the student’s law school years are about learning doctrinal knowledge, strong legal and policy analysis, and research and writing skills. These faculty members emphasize that they have to stay focused on “coverage” with the limited time in their courses even though this model of coverage of doctrinal knowledge and the above skills overemphasizes these competencies in comparison with the full range of competencies that legal employers and clients indicate they want.

In my view, this is the greatest challenge with implementing a competency-based model of education in law schools. (Prof. Hamilton’s article has a nice summary of time-based versus competency-based education models.) Most law school curricula are silo-based. At most schools, a required first-year curriculum is followed by a largely unconnected series of electives in the second and third years. There are few opportunities for longitudinal study of outcomes in such an environment. In medical schools, however, there are clear milestones at which to assess knowledge, skills, and values for progression and growth.

Database on Law Schools’ Learning Outcomes

The Holloran Center at St. Thomas Law School (MN)—run by Jerry Organ and Neil Hamilton—has created a database of law schools’ efforts to adopt learning outcomes.  The center plans to update the database quarterly.  

One of the most helpful aspects of the database is that it is coded so that a user can filter by school and by learning outcomes that go above and beyond the ABA minimum.  This will be a terrific resource as schools roll out the new ABA standards on learning outcomes.  In addition, for those of us interested in assessment as an area of scholarship, it is a treasure trove of data.

Frustratingly, it looks like many schools have decided not to go beyond the minimum competencies set forth in ABA Standard 302, which the Holloran Center has categorized as a “basic” set of outcomes. The ABA’s list is far from exhaustive.  Schools that have essentially copied and pasted from Standard 302 have missed an opportunity to make their learning outcomes uniquely their own by incorporating the aspects of their mission that distinguish them from other schools.  Worse, it may be a sign that some schools are being dragged into the world of assessment kicking and screaming. On the other hand, it may indicate a lack of training or a belief that the ABA’s minimums fully encapsulate the core learning outcomes that every student should attain. Only time will tell.  As schools actually begin to assess their learning outcomes, we’ll have a better idea of how seriously law schools are taking assessment.

Assessing the Hidden Curriculum

I was honored to be asked to attend St. Thomas (MN) Law School’s recent conference on professional formation, hosted by St. Thomas’ Holloran Center for Professional Formation, which is co-directed by Neil Hamilton and Jerry Organ.  The conference was fascinating and exceptionally well-run (I was particularly impressed by how nicely Neil and Jerry integrated students from the Law Journal into the conference as full participants).  The two-day conference included a workshop to discuss ways to begin assessing professional formation in legal education.  Speakers included those from other professional disciplines, including medicine and the military.

One of the most important themes was the idea of the “hidden curriculum” in law schools, a phrase used by Professor (and Dean Emeritus) Louis Bilionis of the University of Cincinnati College of Law. The idea is that learning occurs in many forms, not just by professors in a classroom instilling concepts through traditional teaching methods.  Students interact with a range of individuals during their legal education, many of whom are actively contributing to their education, particularly as to professional formation.  Consider:

  • The Career Development Office counselor who advises a student on how to deal with multiple, competing offers from law firms in a professional manner.
  • The Externship supervisor who helps a student reflect on an ethical issue that arose in his or her placement.
  • The secretary of a law school clinic who speaks with a student who has submitted a number of typo-ridden motions.
  • A non-faculty Assistant Dean who works with the Public Interest Law Student Association to put on a successful fundraising event for student fellowships, which involves setting deadlines, creating professional communications to donors, and leading a large staff of volunteer students.
  • The Law School receptionist who pulls a student aside before an interview to help the student get composed.
  • A fellow student who suggests that a classmate could have handled an interaction with a professor in a more professional manner.

These are all opportunities for teaching professional formation, which for many schools is (at least nominally) a learning outcome.  But how do we assess such out-of-classroom learning experiences?  If professional formation is a learning outcome, I suggest that schools will need to develop methods of measuring the extent to which this value is being learned.  Here are some suggestions:

  • Many schools with robust career services programs already assess student satisfaction in this area through student surveys.  They should consider adding questions to determine the extent to which students perceive that their career counselors helped them to become professionals.
  • Embed professional identity questions in final exams in Professional Responsibility and similar courses.
  • Survey alumni.
  • If professional identity is introduced in the first year, assess whether students in the 2L and 3L Externship Program have embodied lessons that were learned in the 1L curriculum.  Site supervisors could be asked, for example, to what extent students displayed a range of professional behaviors.
  • Ask the state bar for data on disciplinary violations for graduates of your school compared to others.

I recognize that many of these are indirect measures.  However, if a school has a robust professional identity curriculum (as some do), direct measures can be collected and analyzed.  In doing so, schools should not ignore the “hidden curriculum” when looking for evidence of student learning.

2017 Assessment Institute

It’s been a while since my last blog post, for which I’m very sorry! The Fall was a busy semester. I was in Cambodia on a Fulbright and a bug I picked up there knocked me off my feet for a while.  But, I’m better and back to blogging.

Max Huffman at Indiana University McKinney School of Law alerted me that the annual Assessment Institute will be held this year from October 22-24 in Indianapolis.  Law school assessment will be featured on the Graduate Track, which Professor Huffman will be co-directing.  There is a request for proposals. Looks like it will be a great event.

Upcoming ILT Conference on Formative Assessment

Although the ABA standards concern themselves primarily with programmatic assessment—that is, whether a school has a process to determine if students are achieving the learning goals we set for them and then uses the results to improve the curriculum—they also speak to course-level assessment. While the ABA standards do not require formative assessment in every class (see Interpretation 314-2), the curriculum must contain sufficient assessments to ensure that students receive “meaningful feedback.”

Thus, I was delighted to learn from the ASP listserv that the Institute for Law Teaching and Emory Law School will be hosting a conference on course-level formative assessment in large classes on March 25, 2017, in Atlanta, Georgia. More information at the link above.

Do exams measure speed or performance?

A new study out of BYU attempts to answer the question.  It’s summarized at TaxProf and the full article is here. From the abstract on SSRN:

What, if any, is the relationship between speed and grades on first year law school examinations? Are time-pressured law school examinations typing speed tests? Employing both simple linear regression and mixed effects linear regression, we present an empirical hypothesis test on the relationship between first year law school grades and speed, with speed represented by two variables: word count and student typing speed. Our empirical findings of a strong statistically significant positive correlation between total words written on first year law school examinations and grades suggest that speed matters. On average, the more a student types, the better her grade. In the end, however, typing speed was not a statistically significant variable explaining first year law students’ grades. At the same time, factors other than speed are relevant to student performance.

In addition to our empirical analysis, we discuss the importance of speed in law school examinations as a theoretical question and indicator of future performance as a lawyer, contextualizing the question in relation to the debate in the relevant psychometric literature regarding speed and ability or intelligence. Given that empirically, speed matters, we encourage law professors to consider more explicitly whether their exams over-reward length, and thus speed, or whether length and assumptions about speed are actually a useful proxy for future professional performance and success as lawyers.

The study raises important questions about how we structure exams. I know colleagues who impose word-count limits (enforceable thanks to exam software), and I think I may join their ranks. More broadly, are our high-stakes final exams truly measuring what we want them to measure?
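For readers curious about the study’s method, here is a minimal sketch of the simple-linear-regression piece of the analysis: regressing exam grade on total words written. All numbers below are invented for illustration; they are not drawn from the BYU data.

```python
# Toy illustration of regressing grades on exam word count (OLS).
# The data points are hypothetical, not from the BYU study.

def ols_slope_intercept(xs, ys):
    """Ordinary least squares fit of y = intercept + slope * x."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    return mean_y - slope * mean_x, slope

# Invented data: (total words written, grade on a 4.0 scale)
words = [1800, 2200, 2600, 3000, 3400, 3800]
grades = [2.7, 2.9, 3.1, 3.2, 3.5, 3.6]

intercept, slope = ols_slope_intercept(words, grades)
print(f"estimated grade gain per 1,000 extra words: {slope * 1000:.2f}")
```

A positive slope in this kind of fit is what the authors mean by “the more a student types, the better her grade”; the study’s mixed-effects models additionally control for grader and course effects, which this sketch does not attempt.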

Assessment and Strategic Planning

Over at PrawfsBlawg, my friend Jennifer Bard, dean of Cincinnati Law School, has a post on “Learning Outcomes as the New Strategic Planning.” She points readers to Professors Shaw and VanZandt’s book, Student Learning Outcomes and Law School Assessment. The book is, to be sure, an excellent resource, although parts of it may be too advanced for schools that are just getting started with assessment.  Still, it’s a great book, one that sits on the corner of my desk and is consulted often.  (Dean Bard also gave a nice shoutout to my blog as a resource.)

Citing an article by Hanover Research, Dean Bard draws a key distinction between strategic planning activities of yesteryear and what’s required under the new ABA standards.

Traditionally, law school strategic plans focused on outcomes other than whether students were learning what schools had determined their students should be learning. These often included things like faculty scholarly production, diversity, student career placement, fundraising, and admissions inputs. Former ABA Standard 203 required a strategic planning process (albeit not a strategic plan per se) to improve all of the goals of a school:

In addition to the self study described in Standard 202, a law school shall demonstrate that it regularly identifies specific goals for improving the law school’s program, identifies means to achieve the established goals, assesses its success in realizing the established goals and periodically re-examines and appropriately revises its established goals.

The old standard used the term “assessment” in a broad sense, not just as to student learning. In contrast, new Standard 315 focuses on assessment of learning outcomes to improve the curriculum:

The dean and the faculty of a law school shall conduct ongoing evaluation of the law school’s program of legal education, learning outcomes, and assessment methods; and shall use the results of this evaluation to determine the degree of student attainment of competency in the learning outcomes and to make appropriate changes to improve the curriculum.

This is the “closing the loop” of the assessment process: using the results of programmatic outcomes assessment to improve student learning.

So, what to do with the “old” way of strategic planning? Certainly, a school should still engage in a strategic planning process that focuses on all of the important outcomes and goals of the school, of which assessment of student learning is just one piece. Paraphrasing a common expression, if you don’t measure it, it doesn’t get done. Indeed, one can interpret Standards 201 and 202 as still requiring a planning process of some kind, particularly to guide resource allocation.

Still, much of the way that some schools engage in strategic planning is wasteful and ineffective. Often, the planning cycle takes years and results in a beautiful, glossy brochure (complete with photos of happy students and faculty) that sits on the shelf. I’m much more a fan of quick-and-dirty strategic planning that involves efficiently setting goals and action items that can be accomplished over a relatively short time-horizon. The importance is not the product (the glossy brochure) but having a process that is nimble, updated often, used to guide allocation of resources, and serves as a self-accountability tool. (Here, I have to confess, my views on this have evolved since serving on the Strategic Priorities Review Team of our University. I now see much more value in the type of efficient planning I have described.)

In this respect, strategic planning and learning outcomes assessment should both have in common an emphasis on process, not product. Some of the assessment reports generated by schools as a result of regional accreditation are truly works of art, but what is being done with the information? That, to me, is the ultimate question of the value of both processes.

What is the point of curriculum mapping?

Curriculum mapping is the process of identifying where in a school’s curriculum each of its learning outcomes is being taught and assessed. We recently posted our curriculum maps on our assessment webpage, including the survey instrument we used to collect data from faculty.

Curriculum mapping was a big discussion item at an assessment conference in Boston last Spring and understandably so. But, to be clear, curriculum mapping is, itself, not assessment. It is, rather, a tool to assist with the programmatic assessment process.  It also furthers curricular reform.

Mapping is not assessment in the programmatic sense because even the best of curriculum maps will not show whether, in fact, students are learning what we want them to learn. Curriculum mapping helps with assessment because it enables an assessment committee to identify where in the curriculum to look for particular evidence (“artifacts” in the lingo) of student learning.

It also helps with curricular reform in two ways:

  • by enabling a faculty to plug holes in the curriculum.  If an outcome has been identified as desirable but it is not being taught to all or most students, a new degree requirement could be created. Our school did this with negotiation. We had identified it as a valuable skill but realized, through a curriculum mapping exercise done several years ago, that it was not being taught to a sufficient number of students. We then created a 1L course specifically on negotiation and other interpersonal skills.
  • by restructuring degree requirements so that smarter sequencing occurs. In theory, advanced instruction should build on introductory coursework.  A curriculum map helps show the building blocks for each outcome: from introduction, to competence, to advanced.
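To make the idea concrete, a curriculum map is easy to picture as a simple data structure. The sketch below is hypothetical (the course names, outcomes, and level codes are invented, not our school’s actual map), but it shows how a map supports both uses above: finding holes and checking sequencing.

```python
# Hypothetical curriculum map: outcome -> list of (course, level) pairs,
# where level is "I" (introduced), "C" (competence), or "A" (advanced).
# All course and outcome names are invented for illustration.
curriculum_map = {
    "Legal analysis": [("Contracts", "I"), ("Evidence", "C"), ("Trial Advocacy", "A")],
    "Professional identity": [("Professional Responsibility", "I")],
    "Negotiation": [],  # identified as desirable but not yet taught anywhere
}

def coverage_gaps(cmap):
    """Flag outcomes that are taught nowhere, or introduced but never advanced."""
    gaps = {}
    for outcome, offerings in cmap.items():
        levels = {level for _, level in offerings}
        if not offerings:
            gaps[outcome] = "not taught"
        elif "A" not in levels:
            gaps[outcome] = "no advanced instruction"
    return gaps

print(coverage_gaps(curriculum_map))
# prints {'Professional identity': 'no advanced instruction', 'Negotiation': 'not taught'}
```

Once the map exists as data rather than a static document, gap-finding and sequencing questions become mechanical checks an assessment committee can rerun with each update.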

Overall, I hope that schools put serious thought into curriculum mapping, while also recognizing that it is not the end of assessment … but instead the beginning.

Cultural Competency as a Learning Outcome in Legal Writing

Eunice Park (Western State) has a short piece on SSRN, featured in the SSRN Legal Writing eJournal and published in the AALS Teaching Methods Newsletter, about assessing cultural competency in a legal writing appellate advocacy exercise. Cultural competency is listed in Interpretation 302-1 as an example of a “professional skill” that would satisfy Standard 302’s requirement that a school’s learning outcomes include “[o]ther professional skills needed for competent and ethical participation as a member of the legal profession.”

Professor Park writes:

Legal writing courses provide an ideal setting for raising awareness of the importance of sensitivity to diverse cultural mores. One way is by creating an assignment that demonstrates how viewing determinative facts from a strictly Western lens might lead to an unfair outcome.

In writing a recent appellate brief problem, I introduced cultural competence as a learning outcome by integrating culturally-sensitive legally significant facts into the assignment.

She goes on to describe the appellate brief problem and how it helped meet the goal of enhancing students’ cultural competency.