About Larry Cunningham

Law professor and Vice Dean at St. John's Law School. Former prosecutor and defense attorney.

Collecting Ultimate Bar Passage Data: Weighing the Costs and Benefits

The bar exam is an important outcome measure of whether our graduates are learning the basic competencies expected of new lawyers. As the ABA Managing Director reminded us in his memo of June 2015, however, it can no longer be the principal measure of student learning. We are therefore directed to look for other evidence of learning in our programs of legal education, hence the new focus on programmatic assessment.

Nevertheless, the ABA has wisely retained a minimum bar passage requirement in Standard 316, described in greater detail here. It is an important metric for prospective students. It is also an indicator of the quality of a school’s admission standards and, indirectly, its academic program. Indeed, it has been the subject of much debate recently. A proposal would have simplified the rule by requiring law schools to demonstrate that 75% of their graduates had passed a bar exam within two years of graduation. For a variety of reasons, the Council of the Section of Legal Education and Admissions to the Bar recently decided to postpone moving forward with this change and leave Standard 316 as written.

With that background, Friday afternoon the ABA Associate Deans’ listserv received a message from William Adams, Deputy Managing Director of the ABA.  In it, he described a new process for collecting data on bar passage. A copy of the memo is on the ABA website. This change was authorized at the June 2017 meeting of the Council.  Readers may remember that the June meeting was the one that led to a major dust-up in legal education, when it was later revealed that the Council had voted to make substantial (and some would say, detrimental) changes to the Employment Questionnaire. When this came to light through the work of Jerry Organ and others, the ABA wisely backed off this proposed change and indicated it would further study the issue.

The change that the ABA approved in June and announced in greater detail on Friday is equally problematic.  In the past, schools would report bar passage as part of the Annual Questionnaire process. The bar passage section of the questionnaire asked schools to report first-time bar passage information. If a school was going through a site visit, it would also report this information on the Site Evaluation Questionnaire. If a school could not demonstrate compliance with Standard 316 using first-time bar passage data, it was asked to show compliance using ultimate bar passage in the narrative section of the SEQ, specifically question 66, or as part of an interim monitoring or report-back process, described here (page 6).

Now, per the ABA, all schools—even those that can show that their graduates meet the minimums of Standard 316 through first-time passage data—must track, collect, and report ultimate bar passage information going back two years. (There is a phase-in process as outlined in the memo.) Suppose, hypothetically, that a school always has a first-time pass rate of 80% (for the sake of argument, with 100% of graduates reporting) in a state with a consistent average of 75%. The school is in compliance with Standard 316, but it must nevertheless track the 20% of graduates who did not pass on the first attempt to see if they passed on subsequent attempts.

I have several problems with this change. As with the Employment Questionnaire issue, this change to the collection of bar passage data was made without notice and comment. While notice and comment is not required under the ABA rules for changes to questionnaires, a more open dialogue with schools would likely have highlighted the issues I raise below. Not all of us have the time to scour the ABA’s website for the agendas and minutes of the various entities involved with the accreditation process (Council, Accreditation Committee, Standards Review Committee). A change this significant should have been made with input from those of us—deans and vice/associate deans—who are on the front lines of ABA compliance.

From a substantive perspective, the new change in data collection adds significant burdens without much benefit to the accreditation process.  Tracking graduates two years out will not be easy, particularly for schools in states that do not release bar passage data to schools or the public. This is on top of the employment data that is collected every year, which is a significant undertaking if done correctly. Compliance with ABA Standards, state and federal Department of Education rules, state court rules (e.g., New York’s, which include a number of quasi-accreditation requirements such as the Skills Requirement and the Pro Bono Requirement), and regional accreditors’ requirements is increasingly taking up much of the work of associate deans of law schools. Time spent on compliance and accreditation is time that could otherwise be spent managing our institutions, helping students, or teaching.

That said, if reporting is a means to achieve an important end, I’m all for it.  The disclosures related to graduate employment, for instance, are important to prospective students. Collecting such data takes time but serves the valuable purpose of transparency. Much of the Standard 509 report is valuable to applicants when comparing schools, and I fully support the transparency it promotes.

Here, though, requiring all schools to track ultimate bar passage serves little purpose. Most schools can satisfy the minimums of Standard 316 with first-time bar passage data. To comply with Standard 316 using first-time data, a fully approved school must demonstrate that, for three of the last five calendar years, its bar passage rate was not more than 15 points below the first-time bar passage rate for graduates of ABA-approved law schools taking the bar exam in the same jurisdictions in the relevant years. This is a ridiculously easy standard for nearly all schools to meet. If I’m reading the ABA summary data on bar passage correctly, only roughly 15-20 schools each year fall more than 15 points below the composite first-time passage rate and thus must use the ultimate bar passage calculations to demonstrate compliance. All others comply under the first-time standard. Why, then, require all schools to go through the process of tracking down every graduate to see if he or she passed a bar exam within two years of graduation? How would that data serve the purposes of Standard 316? A better approach would be to leave the status quo in place and require the more onerous ultimate bar passage data collection only from schools that cannot demonstrate compliance with the first-time standard.
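
For readers who like to see the arithmetic spelled out, here is a minimal sketch of that first-time calculation, using invented numbers along the lines of the 80%/75% hypothetical above. The year-by-year comparison is my own simplification of the Standard's actual methodology, which builds a weighted composite across the jurisdictions where a school's graduates sit for the bar.

    # Invented first-time passage rates for one school and the composite rate
    # for ABA-approved schools in the same jurisdictions (a simplification).
    school_rate = {2013: 80.0, 2014: 79.0, 2015: 81.0, 2016: 78.0, 2017: 80.0}
    composite_rate = {2013: 75.0, 2014: 76.0, 2015: 74.0, 2016: 75.0, 2017: 75.0}

    # Standard 316 (first-time route): the school's rate must be no more than
    # 15 points below the composite in at least three of the last five years.
    years_within_15 = sum(
        1 for year in school_rate
        if school_rate[year] >= composite_rate[year] - 15
    )
    compliant = years_within_15 >= 3

    print(f"Years within 15 points of the composite: {years_within_15} of {len(school_rate)}")
    print(f"Compliant via first-time passage: {compliant}")

On these invented numbers the school clears the first-time test in every year, which is precisely the situation in which requiring it also to track two-year ultimate passage adds work without changing the compliance picture.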

On the other hand, there may be external benefits to collecting and reporting ultimate bar passage two years out. For schools that struggle with first-time passage, I suppose being able to report on ultimate passage will be helpful from a marketing perspective, but nothing in the existing system prevents them from doing so now. I worry about schools misrepresenting ultimate bar passage results ("Look how great we are! 99.5% of our grads pass the bar exam!"), with a tiny footnote explaining that this "great" result is based on reporting two years out. If I were a prospective student, first-time passage would be much more important in determining which school to attend. With the bar exam offered only twice a year, having to retake it two, three, or four times can be disastrous to one's career.

There may also be value in collecting this data from a research perspective. With the proposed reforms to Standard 316 on hold for now, collecting ultimate bar passage data from all schools may help the ABA determine what the threshold should be. On the other hand, the ABA should develop standards based on what, in the Council’s professional judgment, schools should achieve, not on what they are achieving right now. Moreover, if the goal is to gather research for future amendments to the standards, the Council should be upfront that this is its goal, and it should consider collecting a voluntary sample instead. Again, notice and comment would be helpful in this regard.

There is one aspect of the new Bar Passage Questionnaire that is a positive change. Because bar passage has been de-coupled from the Annual Questionnaire, prospective students will have more timely information on this important outcome. Currently, the Annual Questionnaire, which is completed in October each year, asks for bar passage data from the previous calendar year. For example, had the process been left unchanged, this AQ season we would have been reporting bar passage from calendar year 2016, even though most schools now have full 2017 data available.

I have great respect for the staff of the Managing Director’s Office. In these types of matters, they are the proverbial messenger, so I don’t fault them. I have three requests of the Council, however:

  1. First, the Council should give greater thought to the costs of data collection, particularly where it is unclear whether or how such data will be used to assess compliance with the existing standards. The Council has done a terrific job of streamlining the data collected for site visits, but more work can be done on the AQ.
  2. Second, until these issues can be more fully aired, the Council should withdraw its proposed implementation of the section of the new Bar Passage Questionnaire that asks all schools to report ultimate passage.
  3. Finally, if significant changes are proposed to data questionnaires in the future, the Council should engage in a more open and collaborative process with the law schools and the broader legal education community to get feedback.

Thoughts on Assessing Communication Competencies

The ABA Standards set forth the minimal learning outcomes that every law school must adopt. They include “written and oral communication in the legal context.”

“Written communication” as a learning outcome is “low-hanging fruit” for law school assessment committees. For a few reasons, this is an easy area to begin assessing students’ learning on a program level:

  1. Per the ABA standards, there must be a writing experience both in the 1L year and in at least one upper-level semester. (Some schools, such as ours, have several writing requirements.) This provides a lot of opportunities to look at student growth over time by assessing the same students’ work as 1Ls and again as 2Ls or 3Ls.  In theory, there should be improvement over time!
  2. Writing naturally generates “artifacts” to assess.  Unlike other competencies, which may require the generation of special, artificial exams or other assessments, legal writing courses are already producing several documents per student to examine.
  3. Legal writing faculty are a naturally collaborative group, if I do say so myself!  Even in schools without a formal structure (so-called “directorless” programs), my experience is that legal writing faculty work together on common problems/assignments, syllabi, and rubrics.  This allows for assessment across sections.  I also find that legal writing faculty, given the nature of their courses, think a great deal about assessment generally.

Oral communication is another matter. This is a more difficult outcome to assess. Apart from a first-year moot court exercise, most schools don’t have required courses in verbal skills, although that may be changing with the ABA’s new experiential learning requirement.  Still, I think there are some good places in the curriculum to look for evidence of student learning of this outcome.  Trial and appellate advocacy courses, for example, require significant demonstration of that skill, although in some schools only a few students may take advantage of these opportunities.  Clinics are a goldmine, as are externships.  For these courses, surveying faculty about students’ oral communication skills is one way to gather evidence of student learning. However, this is an indirect measure.  A better way to assess this outcome is to utilize common rubrics for particular assignments or experiences.  For example, after students appear in court on a clinic case, the professor could rate them using a commonly applied rubric.  Those rubrics could be used both to grade the individual students and to assess student learning more generally.

Note Taking Advice to My Evidence Students

I recently sent out an e-mail to students in my Evidence class, sharing my views on classroom laptop bans and note taking.  In the past, I’ve banned laptops, but I’ve gone back to allowing them. As with most things in the law, the question is not the rule but who gets to decide the rule. Here, with a group of adult learners, I prefer a deferential approach. I’m also cognizant that there may be generational issues at play and that what would work for me as a student might not work for the current generation of law students. So, I’ve taken to offering advice on note taking, a skill that must be honed like any other.


Dear Class:

As you begin your study of the law of Evidence, I wanted to offer my perspective on note taking.  Specifically, I’d like to weigh in on the debate about whether students should take notes by hand or using a laptop.

As you will soon find out, Evidence is a heavy, 4-credit course. Our class time—4 hours per week—will be spent working through difficult rules, cases, and problems.  Classes will build on your out-of-class preparation and will not be a mere review of what you’ve read in advance.  Thus, it is important that the way you take notes helps, not hurts, your learning.

The research overwhelmingly shows that students retain information better when they take notes by hand rather than on a computer. This article has a nice summary of the literature: http://www.npr.org/2016/04/17/474525392/attention-students-put-your-laptops-away?utm_campaign=storyshare&utm_source=twitter.com&utm_medium=social.  The reason handwriting is better is pretty simple: when you handwrite, you are forced to process and synthesize the material, since it’s impossible to take down every word said in class. In contrast, when you type, you tend to function more like a court reporter, trying to take down every word that is said. Additionally, laptops present a host of distractions: e-mail, chat, the web, and games are all there to tempt you away from the task at hand, which is to stay engaged with the discussion. I’ve lost count of the number of times I’ve called on a student engrossed in his or her laptop, only to get “Can you repeat the question?” as the response.

Of course, it’s possible to be distracted without a computer, too.  Crossword puzzles, the buzzing of a cell phone, daydreaming, or the off-topic computer usage of the person in front of you can all present distractions. And it’s more difficult to integrate handwritten notes with your outline and other materials.

If I were you, I would handwrite my notes.  But I’m not you.  You are adults and presumably know best how you study and retain information.  For this reason, I don’t ban laptops.  But if you choose to use a laptop or similar device to take notes, I have some additional suggestions.  Look into distraction-avoidance software, such as FocusMe, Cold Turkey, or Freedom.  These programs block out distracting apps like web browsers and text messaging.  Turn off your wireless connection.  Turn down the brightness of your screen so that your on-screen activity is not distracting to those behind you.  Of course, turn off the sound.  Learn to type quietly so you’re not annoying your neighbors with the clickety-clack of your keys.

Finally, and most importantly, think strategically about what you’re typing.  You don’t need a written record of everything said in class.  Indeed, one of the reasons I record all of my classes and make the recordings available to registered students is so you don’t have to worry about missing something.  You can always go back and rewatch a portion of class that wasn’t clear to you. It’s not necessary to type background material, such as the facts of a case or the basic rule.  Most of this should already be in the notes you took when reading the materials for class (you are taking notes as you read, right?).  Instead of keeping a separate set of class notes, think about integrating them with what you’ve already written as you prepared for class.  That is, synthesize and integrate what is said in class about the cases and problems with what you’ve written out about them in advance.  Try your best to take fewer notes, not more.  Focus on listening and thinking along with the class discussion.  Sometimes less is more.

Above all else, do what works best for you.  If you’ve done well on law school exams while taking verbose notes, by all means continue doing so.  But, if you’ve found yourself not doing as well as you’d like, now is the time to try a new means of studying and note-taking.  You may be pleasantly surprised by experimenting with handwritten notes or, if you do use a laptop, adapting your note-taking style as suggested above.

Of course, please let me know if you’d like further advice.  I’m here to help you learn.

Regards,

Prof. Cunningham

What Law Schools Can Learn about Assessment from Other Disciplines

I have spent the last few days at the ExamSoft Assessment Conference. I gave a presentation on assessment developments in legal education, and it was great to see colleagues from other law schools there. I spent a lot of time attending presentations about how other disciplines are using assessment. I was particularly impressed by what the health sciences are doing: nursing, pharmacy, physical therapy, podiatry, and medicine. I came away from the conference with the following takeaways about how these disciplines use assessment:

  • They use assessment data to improve student learning, at both the individual and the macro level.  They are less focused on using assessments to “sort” students along a curve for grading purposes. Driven in part by their accreditors, these disciplines use assessment data to help individual students recognize their weaknesses and, by graduation, get up to the level expected for eventual licensure, sometimes through remediation. They also use assessment data to drive curricular and teaching reform.
  • They focus on the validity and reliability of their summative assessments.  This is probably not surprising, since scientists are trained in the scientific method. They are also, by nature, comfortable with data and statistics. They utilize item analysis reports (see the next bullet) and rubrics (for essays) to ensure that their assessments are effective and that their grading is reliable. Assessments are reused and improved over time, so a lot of effort is put into exam security.
  • They utilize item analysis reports to improve their assessments over time. Item analysis reports show things like a KR-20 score and point-biserial coefficients, statistical measures that help assess the quality of individual test items and of the exam as a whole. They can be generated by most scoring systems, such as Scantron and ExamSoft. (A rough sketch of how these two statistics can be computed appears after this list.)
  • They utilize multiple, formative assessments in courses. 
  • They collect a lot of data on students.
  • They cooperate and share assessments across sections and professors.  It is not uncommon for there to be a single, departmentally-approved exam for a particular course. Professors teaching multiple sections of a course collaborate on writing the exam against a common set of learning outcomes.
  • They categorize and tag questions to track student progress and to assist with programmatic assessment. (In law, this could work as follows. Questions could be tagged to programmatic learning outcomes [such as knowledge of the law] and to content outlines [e.g., in Torts, a question could be tagged as referring to Battery].)  This allows them to generate reports that show how students perform over time in a particular outcome or topic.
  • They debrief assessments with students, using the results to help students learn how to improve, even after the course is over.  Here, categorization of questions is important.  (I started doing this in my Evidence course. I tagged multiple-choice questions as testing hearsay, relevance, privilege, etc.  This allowed me to generate reports out of Scantron ParScore that showed (1) how the class, as a whole, did on each category; and (2) how individual students did on each category. In turn, I’ll be able to use the data to improve my teaching next year. A rough sketch of this kind of per-category report also appears after this list.)
  • They utilize technology, such as ExamSoft, to make all of this data analysis and reporting possible.
  • They have trained assessment professionals to assist with the entire process.  Many schools have assessment departments or offices that can set up assessments and reports. Should we rethink the role of faculty support staff? Should we have faculty assistants move away from traditional secretarial functions and toward assisting faculty with assessments? What training would be required?
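
To make the item-analysis bullet above a bit more concrete, here is a minimal sketch, in Python with invented data, of how a KR-20 reliability coefficient and a point-biserial item-total correlation can be computed from a matrix of scored multiple-choice answers. This is a generic textbook calculation, not ExamSoft's or ParScore's actual report logic; the function names and the use of the sample variance are my own choices.

    import numpy as np

    def kr20(responses):
        """Kuder-Richardson Formula 20: an internal-consistency reliability
        estimate for a test of dichotomously scored (0/1) items."""
        n_students, n_items = responses.shape
        p = responses.mean(axis=0)              # proportion correct on each item
        q = 1 - p
        totals = responses.sum(axis=1)          # each student's raw score
        total_variance = totals.var(ddof=1)     # sample variance of raw scores
        return (n_items / (n_items - 1)) * (1 - (p * q).sum() / total_variance)

    def point_biserial(responses, item):
        """Corrected item-total correlation: the Pearson correlation between
        one item and the score on the rest of the test."""
        item_scores = responses[:, item]
        rest_scores = responses.sum(axis=1) - item_scores
        return np.corrcoef(item_scores, rest_scores)[0, 1]

    # Fake data: 40 students by 25 items, scored 1 (correct) or 0 (incorrect).
    # With random answers both statistics will hover near zero; real exams
    # aim for a KR-20 well above that and positive point-biserials.
    rng = np.random.default_rng(0)
    answers = (rng.random((40, 25)) < 0.7).astype(int)

    print(f"KR-20 for the exam: {kr20(answers):.2f}")
    print(f"Point-biserial for item 1: {point_biserial(answers, 0):.2f}")

An item with a point-biserial near zero (or negative) is one that stronger students are no more likely to answer correctly, which usually means the question needs revision or should be dropped.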
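
In the same spirit, here is a rough sketch of the tag-and-report idea from the last two bullets: each question carries a topic tag, and correct-answer percentages are rolled up by tag for each student and for the class. The tags, student names, and scores below are invented; commercial systems such as ParScore and ExamSoft produce these reports through their own interfaces.

    from collections import defaultdict

    # Each student's scored answers as (question_tag, answered_correctly) pairs.
    results = {
        "Student A": [("hearsay", True), ("hearsay", False), ("relevance", True), ("privilege", True)],
        "Student B": [("hearsay", True), ("hearsay", True), ("relevance", False), ("privilege", True)],
        "Student C": [("hearsay", False), ("hearsay", False), ("relevance", True), ("privilege", False)],
    }

    def percent(correct, attempted):
        return 100.0 * correct / attempted if attempted else 0.0

    # Per-student breakdown by tag.
    for student, answers in results.items():
        by_tag = defaultdict(lambda: [0, 0])        # tag -> [correct, attempted]
        for tag, correct in answers:
            by_tag[tag][0] += int(correct)
            by_tag[tag][1] += 1
        line = ", ".join(f"{tag} {percent(c, a):.0f}%" for tag, (c, a) in sorted(by_tag.items()))
        print(f"{student}: {line}")

    # Class-wide breakdown by tag.
    class_totals = defaultdict(lambda: [0, 0])
    for answers in results.values():
        for tag, correct in answers:
            class_totals[tag][0] += int(correct)
            class_totals[tag][1] += 1
    for tag, (c, a) in sorted(class_totals.items()):
        print(f"Class average, {tag}: {percent(c, a):.0f}%")

Keeping the same tag vocabulary from year to year is what makes the kind of longitudinal, outcome-level reporting described above possible.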

Incidentally, I highly recommend the ExamSoft Assessment Conference, regardless of whether one is at an “ExamSoft law school.” (Full disclosure: I, like all speakers, received a very modest honorarium for my talk.) The conference was full of useful, practical information about teaching, learning, and assessment.  ExamSoft schools can also benefit from learning about new features of the software.

Off topic: WaPo op-ed on access to justice

Not directly assessment-related, but I thought I would share that Jennifer Bard (Cincinnati) and I have an op-ed in the Washington Post about access to justice. Drawing on an analogy to medicine, we argue:

Professionals must first acknowledge that not every legal task must be performed by a licensed lawyer. Instead, we need to adopt a tiered system of legal-services delivery that allows for lower barriers to entry. Just as a pharmacist can administer vaccines and a nurse practitioner can be on the front line of diagnosing and treating ailments, we should have legal practitioners who can also exercise independent judgment within the scope of their training. Such a change would expand the preparation and independence of the existing network of paralegals, secretaries and investigators already assisting lawyers.

This creates greater, not fewer, opportunities for law schools, which should provide a range of educational opportunities, from short programs for limited license holders to Ph.D.’s for those interested in academic research.

Enjoy the article!

Suskie: How to Assess Anything Without Killing Yourself … Really!

Linda Suskie (former VP, Middle States Commission on Higher Education) has posted a great list of common-sense tips about assessment on her blog. They’re based on a book by Douglas Hubbard, How to Measure Anything: Finding the Value of “Intangibles” in Business. My favorites are:

1. We are (or should be) assessing because we want to make better decisions than what we would make without assessment results. If assessment results don’t help us make better decisions, they’re a waste of time and money.

4. Don’t try to assess everything. Focus on goals that you really need to assess and on assessments that may lead you to change what you’re doing. In other words, assessments that only confirm the status quo should go on a back burner. (I suggest assessing them every three years or so, just to make sure results aren’t slipping.)

5. Before starting a new assessment, ask how much you already know, how confident you are in what you know, and why you’re confident or not confident. Information you already have on hand, however imperfect, may be good enough. How much do you really need this new assessment?

8. If you know almost nothing, almost anything will tell you something. Don’t let anxiety about what could go wrong with assessment keep you from just starting to do some organized assessment.

9. Assessment results have both cost (in time as well as dollars) and value. Compare the two and make sure they’re in appropriate balance.

10. Aim for just enough results. You probably need less data than you think, and an adequate amount of new data is probably more accessible than you first thought. Compare the expected value of perfect assessment results (which are unattainable anyway), imperfect assessment results, and sample assessment results. Is the value of sample results good enough to give you confidence in making decisions?

14. Assessment value is perishable. How quickly it perishes depends on how quickly our students, our curricula, and the needs of our students, employers, and region are changing.

15. Something we don’t ask often enough is whether a learning experience was worth the time students, faculty, and staff invested in it. Do students learn enough from a particular assignment or co-curricular experience to make it worth the time they spent on it? Do students learn enough from writing papers that take us 20 hours to grade to make our grading time worthwhile?


New Article on Lessons Learned from Medical Education about Assessing Professional Formation Outcomes

Neil Hamilton (St. Thomas, MN) has a new article on SSRN, Professional-Identity/Professional-Formation/Professionalism Learning Outcomes: What Can We Learn About Assessment From Medical Education? 

Here’s an excerpt from the abstract:

The accreditation changes requiring competency-based education are an exceptional opportunity for each law school to differentiate its education so that its students better meet the needs of clients, legal employers, and the legal system. While ultimately competency-based education will lead to a change in the model of how law faculty and staff, students, and legal employers understand legal education, this process of change is going to take a number of years. However, the law schools that most effectively lead this change are going to experience substantial differentiating gains in terms of both meaningful employment for graduates and legal employer and client appreciation for graduates’ competencies in meeting employer/client needs. This will be particularly true for those law schools that emphasize the foundational principle of competency-based learning that each student must grow toward later stages of self-directed learning – taking full responsibility as the active agent for the student’s experiences and assessment activities to achieve the faculty’s learning outcomes and the student’s ultimate goal of bar passage and meaningful employment.

Medical education has had fifteen more years of experience with competency-based education from which legal educators can learn. This article has focused on medical education’s “lessons learned” applicable to legal education regarding effective assessment of professional-identity learning outcomes.

Legal education has many other disciplines, including medicine, to look to for examples of implementing outcome-based assessment.  Professor Hamilton’s article nicely draws upon lessons learned by medical schools in assessing professional formation, an outcome that some law schools have decided to implement.

In looking at professional identity formation in particular, progression is important. The curriculum and assessments must build on each other in order to see whether students are improving in this area. The hidden curriculum is a valuable place in which to teach and assess a competency like professional identity formation. But this requires coordination among various silos:

Law schools historically have been structured in silos with strongly guarded turf in and around each silo. Each of the major silos (including doctrinal classroom faculty, clinical faculty, lawyering skills faculty, externship directors, career services and professional development staff, and counseling staff) wants control over and autonomy regarding its turf. Coordination among these silos is going to take time and effort and involve some loss of autonomy but in return a substantial increase in student development and employment outcomes. For staff in particular, there should be much greater recognition that they are co-educators along with faculty to help students achieve the learning outcomes.

Full-time faculty members were not trained in a competency-based education model, and many have limited experience with some of the competencies, for example teamwork, that many law schools are including in their learning outcomes. In my experience, many full-time faculty members also have enormous investments in doctrinal knowledge and legal and policy analysis concerning their doctrinal field. They believe that the student’s law school years are about learning doctrinal knowledge, strong legal and policy analysis, and research and writing skills. These faculty members emphasize that they have to stay focused on “coverage” with the limited time in their courses even though this model of coverage of doctrinal knowledge and the above skills overemphasizes these competencies in comparison with the full range of competencies that legal employers and clients indicate they want.

In my view, this is the greatest challenge with implementing a competency-based model of education in law schools. (Prof. Hamilton’s article has a nice summary of time-based versus competency-based education models.) Most law school curricula are silo-based. At most schools, a required first-year curriculum is followed by a largely unconnected series of electives in the second and third years. There are few opportunities for longitudinal study of outcomes in such an environment. In medical schools, however, there are clear milestones at which to assess knowledge, skills, and values for progression and growth.

Database on Law Schools’ Learning Outcomes

The Holloran Center at St. Thomas Law School (MN)—run by Jerry Organ and Neil Hamilton—has created a database of law schools’ efforts to adopt learning outcomes.  The center plans to update the database quarterly.  

One of the very helpful aspects of the database is that it has coding so that a user can filter by school and by learning outcomes that go above and beyond the ABA minimum.  This will be a terrific resource as schools roll out the new ABA standards on learning outcomes.  In addition, for those of us interested in assessment as an area of scholarship, it is a treasure trove of data.

Frustratingly, it looks like many schools have decided not to go beyond the minimum competencies set forth in ABA Standard 302, what the Holloran Center has categorized as a “basic” set of outcomes. The ABA’s list is far from exhaustive.  Schools that have essentially copied and pasted from Standard 302 have missed an opportunity to make their learning outcomes uniquely their own by incorporating aspects of their mission that distinguish them from other schools.  Worse, it may be a sign that some schools are being dragged into the world of assessment kicking and screaming. On the other hand, it may indicate a lack of training or a belief that the ABA’s minimums fully encapsulate the core learning outcomes that every student should attain. Only time will tell.  As schools actually begin to assess their learning outcomes, we’ll have a better idea of how seriously law schools are taking assessment.

Assessing the Hidden Curriculum

I was honored to have been asked to attend St. Thomas (MN) Law School’s recent conference on professional formation, hosted by St. Thomas’ Holloran Center for Professional Formation, which is co-directed by Neil Hamilton and Jerry Organ.  The conference was fascinating and exceptionally well run (I was particularly impressed by how Neil and Jerry integrated students from the Law Journal into the conference as full participants).  The two-day conference included a workshop to discuss ways to begin assessing professional formation in legal education.  Speakers came from other professional disciplines, including medicine and the military.

One of the most important themes was the idea of the “hidden curriculum” in law schools, a phrase used by Professor (and Dean Emeritus) Louis Bilionis of the University of Cincinnati College of Law. The idea is that learning occurs in many forms, not just through professors instilling concepts in the classroom via traditional teaching methods.  Students interact with a range of individuals during their legal education, many of whom are actively contributing to that education, particularly with respect to professional formation.  Consider:

  • The Career Development Office counselor who advises a student on how to deal with multiple, competing offers from law firms in a professional manner.
  • The Externship supervisor who helps a student reflect on an ethical issue that arose in his or her placement.
  • The secretary of a law school clinic who speaks with a student who has submitted a number of typo-ridden motions.
  • A non-faculty Assistant Dean who works with the Public Interest Law Student Association to put on a successful fundraising event for student fellowships, which involves setting deadlines, creating professional communications to donors, and leading a large staff of volunteer students.
  • The Law School receptionist who pulls a student aside before an interview to help the student get composed.
  • A fellow student who suggests that a classmate could have handled an interaction with a professor in a more professional manner.

These are all opportunities for teaching professional formation, which for many schools is (at least nominally) a learning outcome.  But how do we assess such out-of-classroom learning experiences?  If professional formation is a learning outcome, I suggest that schools will need to develop methods of measuring the extent to which it is actually being learned.  Here are some suggestions:

  • Many schools with robust career services programs already assess student satisfaction in this area through student surveys.  They should consider adding questions to determine the extent to which students perceive that their career counselors helped them to become professionals.
  • Embed professional identity questions in final exams in Professional Responsibility and similar courses.
  • Survey alumni.
  • If professional identity is introduced in the first year, assess whether students in the 2L and 3L Externship Program have embodied lessons that were learned in the 1L curriculum.  Site supervisors could be asked, for example, to what extent students displayed a range of professional behaviors.
  • Ask the state bar for data on disciplinary violations for graduates of your school compared to others.

I recognize that a lot of these are indirect measures.  However, if a school has a robust professional identity curriculum (as some do), direct measures can be collected and analyzed.  In doing so, schools should not ignore the “hidden curriculum” as a place to look for evidence of student learning.