About Larry Cunningham

Law professor, associate dean, and director of the Center for Trial and Appellate Advocacy at St. John's Law School. Former prosecutor and defense attorney.

What This Professor Learned by Becoming a Student Again

For the past year, I have been a student again.  Once I finish a final paper (hopefully tomorrow), I will be receiving a Graduate Certificate in Assessment and Institutional Research from Sam Houston State University.

I enrolled in the program at “Sam” (as students call it) because I wanted to receive formal instruction in assessment, institutional research, data management, statistics, and, more generally, higher education.  These were areas where I was mainly self-taught, and I thought the online program at Sam Houston would give me beneficial skills and knowledge.  The program has certainly not disappointed.  The courses were excellent, the professors knowledgeable, and the technology flawless.  I paid for the program out-of-pocket, and it was worth every penny.  It has made me better at programmatic assessment and institutional research.  (I also turned one of my research papers into an article, which just came out this week.)

But the program had another benefit: It has made me a better teacher.

New Article on the UBE

Professor and Director of Academic Support and Bar Passage Marsha Griggs (Washburn) has a new article on SSRN, Building a Better Bar Exam.  It is a well-written critique of the Uniform Bar Exam.  From the SSRN summary:

In the wake of declining bar passage rates and limited placement options for law grads, a new bar exam has emerged: the UBE. Drawn to an allusive promise of portability, 36 U.S. jurisdictions have adopted the UBE. I predict that in a few years the UBE will be administered in all states and U.S. territories. The UBE has snowballed from an idea into the primary gateway for entry into the practice of law. But the UBE is not a panacea that will solve the bar passage problems that U.S. law schools face. Whether or not to adopt a uniform exam is no longer the question. Now that the UBE has firmly taken root, the question to be answered is what can be done to make sure that the UBE does less harm than good?

This paper will, in four parts, examine the meteoric rise and spread of the UBE and the potential costs of its quick adoption. Part one will survey the gradual move away from state law exams to the jurisdictionally neutral UBE. Part two will identify correlations between recent changes to the multistate exams and a stark national decline in bar passage rates. Part three will address the limitations of the UBE, including the misleading promise of score portability and the consequences of forum shopping. Part four will propose additional measures that can coexist with the UBE to counterbalance its limitations to make a better bar exam for our students and the clients they will serve.

The UBE, while well-intentioned, has had unintended consequences.  In the Empire State, the New York State Bar Association—a voluntary membership organization, not a licensing or regulatory entity—is studying the impact of our state’s move to the UBE a few years ago.  As Patricia Salkin (Provost, Graduate and Professional Divisions, Touro) and I wrote about in the New York Law Journal, there was a precipitous decline in New York Practice enrollment statewide after New York’s “unique” civil procedure code, the Civil Practice Law and Rules, was no longer tested on the bar exam.  Students voted with their feet and flocked to other courses.  The NYSBA Task Force will attempt to assess whether there has been a decrease in lawyer competency following the adoption of the UBE.

In the meantime, Professor Griggs’ article makes a nice addition to the conversation about the UBE.


The Point of Curriculum Maps

Over at her blog, Linda Suskie asks the question, “Why are we doing curriculum maps?”  She argues that curriculum maps—charts that show where learning goals are achieved in program requirements—can answer several questions:

Is the curriculum designed to ensure that every student has enough opportunity to achieve each of its key learning goals? A program curriculum map will let you know if a program learning goal is addressed only in elective courses or only in one course.

Is the curriculum appropriately coherent? Is it designed so students strengthen their achievement of program learning goals as they progress through the program? Or is attention to program learning goals scattershot and disconnected?

Does the curriculum give students ample and diverse opportunities to achieve its learning goals? Many learning goals are best achieved when students experience them in diverse settings, such as courses with a variety of foci.

Does the curriculum have appropriate, progressive rigor? Do higher-numbered courses address program learning goals on a more advanced level than introductory courses? While excessive prerequisites may be a barrier to completion, do upper-level courses have appropriate prerequisites to ensure that students in them tackle program learning goals at an appropriately advanced level?

Does the curriculum conclude with a capstone experience? Not only is this an excellent opportunity for students to integrate and synthesize their learning, but it’s an opportunity for students to demonstrate their achievement of program learning goals as they approach graduation. A program curriculum map will tell you if you have a true capstone in which students synthesize their achievement of multiple program learning goals.

Is the curriculum sufficiently focused and simple? You should be able to view the curriculum map on one piece of paper or computer screen. If you can’t do this, your curriculum is probably too complicated and therefore might be a barrier to student success.

Is the curriculum responsive to the needs of students, employers, and society? Look at how many program learning goals are addressed in the program’s internship, field experience, or service learning requirement. If a number of learning goals aren’t addressed there, the learning goals may not be focusing sufficiently on what students most need to learn for post-graduation success.

She doesn’t view the primary purpose of curriculum maps as identifying where in a curriculum to find assessments of particular learning goals.  I’ve previously argued the contrary, that this is indeed their primary purpose, but I think I’m coming around to Ms. Suskie’s view.  The point I would emphasize, however, is that curriculum mapping—while valuable—is not in and of itself programmatic assessment.  It does not demonstrate whether students are achieving the learning outcomes we have set out for them, only where evidence of such learning may be found.

As tools for assessing the curriculum (as opposed to student learning), maps can be helpful.  Ms. Suskie offers several suggestions in this regard:

Elective courses have no place in a curriculum map. Remember one of the purposes is to ensure that the curriculum is designed to ensure that every student has enough opportunity to achieve every learning goal. Electives don’t help with this analysis.

My take: I agree and disagree.  Electives are not helpful if you are trying to determine what every student will have learned.  But a map that includes elective courses can reveal a mismatch between degree requirements and learning outcomes.  For example, at our school, a curriculum map showed that although we identified negotiation as a critical skill for our students, it was only being taught in a handful of electives that only a small number of students were taking.  (This led us to develop an innovative, required Lawyering skills course for all students.)  A minimal sketch of this kind of check appears below, after Ms. Suskie’s remaining suggestions.

List program requirements, not program courses. If students can choose from any of four courses to fulfill a particular requirement, for example, group those four courses together and mark only the program learning outcomes that all four courses address.

My take: I agree.  In theory, the courses in the cluster should all revolve around a common goal.

Codes can help identify if the curriculum has appropriate, progressive rigor. Some assessment management systems require codes indicating whether a learning goal is introduced, developed further, or demonstrated in each course, rather than simply whether it’s addressed in the course.

My take: I agree.  Note that faculty will need clear definitions of the various levels of rigor, and one should be on the lookout for “puffing”—a course where a professor claims that all of the learning outcomes are being addressed at an “advanced” level.

Check off a course only if students are graded on their progress toward achieving the learning goal. Cast a suspicious eye at courses for which every program learning goal is checked off. How can those courses meaningfully address all those goals?

My take: 100% agree.
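
To make the elective-mismatch example concrete: a curriculum map is, at bottom, a table of courses against program learning outcomes, and once it exists in that form the check is mechanical.  Below is a minimal sketch in Python; the courses, outcomes, and required/elective flags are hypothetical placeholders, not our actual map.

    # Hypothetical curriculum map: each course lists whether it is required
    # and which program learning outcomes (PLOs) it addresses.
    curriculum_map = {
        "Lawyering":           {"required": True,  "outcomes": {"writing", "research"}},
        "Evidence":            {"required": True,  "outcomes": {"legal analysis"}},
        "Negotiation Seminar": {"required": False, "outcomes": {"negotiation"}},
        "ADR Workshop":        {"required": False, "outcomes": {"negotiation"}},
    }

    program_outcomes = {"writing", "research", "legal analysis", "negotiation"}

    # Outcomes addressed by at least one required course.
    covered_by_required = set()
    for course in curriculum_map.values():
        if course["required"]:
            covered_by_required |= course["outcomes"]

    # Flag outcomes that only electives (or nothing at all) address.
    for outcome in sorted(program_outcomes - covered_by_required):
        print(f"'{outcome}' is not addressed in any required course")

Run on the toy map above, this reports that negotiation is not addressed in any required course, the same kind of mismatch our own map surfaced.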


Law school curricula are notoriously “flat.”  After the first year, there is not necessarily a progression of courses.  Students are left to choose from various electives.  Courses are not stacked on top of one another, as they are in other disciplines and at the undergraduate level.  There are exceptions: schools that prescribe requirements or clusters of courses in the 2L and 3L years that build sequentially on learning outcomes.  And some schools have capstone courses, a form of stacking.

So much attention in law school curricular reform is paid to which courses are worthy of being required in the first year.  But we have three or four years with students.  In my view, assessment gives us a chance to talk meaningfully about the upper-level curriculum.  And, as Ms. Suskie points out, mapping can help with this endeavor.

Guest Post (Ezra Goldschlager): Don’t Call it Assessment – Focusing on “Assessment” is Alienating and Limiting

I’m delighted to welcome Ezra Goldschlager (LaVerne) for a guest post on the language of assessment:

***

When the ABA drafted and approved Standard 314, requiring law schools to “utilize … assessment methods” to “measure and improve student learning,” it missed an opportunity to focus law schools’ attention and energy in the right place: continuous quality improvement. The ABA’s choice of language in 314 (requiring schools to engage in “assessment”) and the similarly framed 315 (requiring law schools to engage in “ongoing evaluation” of programs of legal education) will guide law schools down the wrong path for a few reasons.

Calling it “assessment” gives it an air of “otherness”; it conjures a now decades-old dialogue about a shift from teaching to learning, and brings with it all of the associated preconceptions (and baggage). “Assessment” is a loaded term of art, imbued with politics and paradigm shifts.

Because it is rooted in this history, “assessment” can overwhelm. While asking faculty to improve their teaching may not sound trivial, it does not impose the same burden that asking faculty to “engage in assessment” can. I can try to help my students learn better without thinking about accounting for a substantial body of literature, but ask me to “do assessment” and I may stumble at the starting line, at least vaguely aware of this body of literature, brimming with best practices and commandments requiring me to engage in serious study.

Calling it “assessment,” that is, labeling it as something stand-alone (apart from our general desires to get better at our professions), makes it easy (and natural) for faculty to consider it something additional and outside of their general responsibilities. Administrators can do “assessment,” we may conclude; I will “teach.”

When administrators lead faculties in “assessment,” the focus on measurement is daunting. Faculty buy-in is a chronic problem for assessment efforts, and that’s in part because of suspicion about what’s motivating the measurement or what might be done with the results. This suspicion is only natural — we expect, in general, that any measurement is done for a reason, and when administrators tell faculty to get on board with assessment, they must wonder: to what end?

Finally, and probably most important, calling it “assessment” makes the improvement part of the process seem separate and perhaps optional. We don’t usually go to doctors just for a diagnosis; we want to get better. Asking one’s doctor to “diagnose,” however, does leave the question hanging: does my patient want me to help her improve, too? Calling it “assessment” puts the focus on only part of a cycle that must be completed if we are to make any of the assessing worthwhile. “Don’t just assess, close the loop,” faculty are admonished. That admonishment might not be as necessary if the instruction, in the first place, were not just to “assess.”

Assessment Institute – coming up in October!

I’ve written before about the importance of learning from other disciplines’ experiences with assessment.  A great place to do so is the annual Assessment Institute in Indianapolis, put on by IUPUI.  This year’s conference is October 21-23, 2018.  It’s a big conference (about 1000 attendees are expected) with a lot of interesting programs and panels.  It’s a terrific place to get ideas and see what other disciplines are up to.

This year’s Institute has two programs related to legal education and many more concerning graduate and professional education. The law school presentations are:

  • Building a Bridge Between Experiential Skills Development and Skills Assessment on Professional Licensing Exams – In this session, we will explore the relationship between student participation in experiential skills programs and scores earned on the skills assessment component of a professional licensing exam. Results from a longitudinal research study of University of Cincinnati Law students’ participation in clinics, externships, and clerkships and corresponding scores on the bar exam performance test will be presented. Participants will be encouraged to share the clinical and skills assessment contained in licensing exams for their associated fields, as well as approaches to better align clinical skills development and testing of clinical skills as a requirement for professional licensure.  The presenters are with the University of Cincinnati.
  • Isn’t the Bar Exam the Ultimate Assessment?: Learning Outcomes, ABA Standard 302, and Law Schools – The American Bar Association, the accrediting entity for American law schools, has recently adopted Standard 302, which requires law schools to “establish” learning outcomes related to knowledge of substantive and procedural law, legal analysis and reasoning, legal research, oral and written communication, professional responsibility (ethics), and other professional skills. The overwhelming majority of law school graduates also take a bar examination, the licensing exam required for admission to practice as a lawyer. How do these relate to each other? And how do both of them relate to law school exams and grades? Come find out!  The presenters are Diane J. Klein, University of La Verne College of Law; and Linda Jellum, Mercer School of Law.

The early bird registration deadline is Friday, September 14, 2018.

Keeping Track of Assessment Follow-Up

Once a school implements its assessment plan, it will begin collecting a lot of data, distilling it into results, and hopefully identifying recommendations for improving student learning based on those results.  That is a lot of information to manage, and it’s easy for the work of a school’s assessment committees to end up sitting on a shelf, forgotten with the passage of time.  Assessment is not about producing reports; it’s about converting student data into meaningful action.

I developed a template for schools to use in keeping track of their assessment methods, findings, and recommendations.  You don’t need fancy software, like Weave, to keep track of a single set of learning outcomes (university-level metrics are another matter). A simple Excel spreadsheet will do; a scripted sketch of one possible layout appears after the list below.  For each learning outcome, list the following:

  • The year the outcome was assessed.
  • Who led the assessment team or committee.
  • The methods used to complete the assessment.
  • The committee’s key findings.
  • Recommendations based on the report.
  • For each recommendation:
    • Which administrator or committee is responsible for follow-up.
    • The status of that recommendation: whether it was implemented and when.
    • Color code based on status (green = implemented; yellow = in progress; red = no action to date).
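
For those who would rather generate the tracker than build it by hand, here is a minimal sketch using Python and the openpyxl library.  It assumes the column layout listed above; the example row, committee lead, and file name are hypothetical placeholders rather than actual St. John’s data.

    from openpyxl import Workbook
    from openpyxl.styles import PatternFill

    # Color code based on follow-up status, as described above.
    STATUS_FILLS = {
        "implemented": PatternFill(fill_type="solid", start_color="C6EFCE", end_color="C6EFCE"),  # green
        "in progress": PatternFill(fill_type="solid", start_color="FFEB9C", end_color="FFEB9C"),  # yellow
        "no action":   PatternFill(fill_type="solid", start_color="FFC7CE", end_color="FFC7CE"),  # red
    }

    HEADERS = ["Learning outcome", "Year assessed", "Committee lead", "Methods",
               "Key findings", "Recommendation", "Responsible party", "Status"]

    # One hypothetical row for illustration only.
    rows = [
        ["Written communication", "2017-18", "Committee chair (hypothetical)",
         "Rubric review of seminar papers", "Weakness in organization",
         "Add an outlining workshop to the writing sequence",
         "Curriculum Committee", "in progress"],
    ]

    wb = Workbook()
    ws = wb.active
    ws.title = "Assessment follow-up"
    ws.append(HEADERS)

    for row in rows:
        ws.append(row)
        status_cell = ws.cell(row=ws.max_row, column=len(HEADERS))
        fill = STATUS_FILLS.get(str(status_cell.value).lower())
        if fill:
            status_cell.fill = fill  # shade the Status cell by its value

    wb.save("assessment_followup.xlsx")  # hypothetical file name

The same layout can of course be typed directly into Excel; the script simply spares the retyping when the tracker is refreshed each year and keeps the color coding consistent.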

This easy format allows the dean and faculty to ensure that tangible results are achieved with the assessment process.  In the template, I included examples of methods, findings, and recommendations for one of seven learning outcomes.  (These are made-up findings and recommendations that I created as an example.  They don’t necessarily reflect those of St. John’s.)  Feel free to use and adapt it at your school.  (LC)

Contaminating Student Assessment with Class Participation and Attendance “Bumps”

Professor Jay Silver (St. Thomas Law [FL]) has an interesting essay on Inside Higher Ed, The Contamination of Student Assessment.  In it, he argues that we undermine our efforts to utilize reliable summative assessment mechanisms when we provide “bumps” for class participation, attendance, and going to outside events. Here’s an excerpt:

In the era of outcomes assessment, testing serves to measure, more than ever, whether students have assimilated particular knowledge and developed certain skills. A student’s mere exposure to information and instruction in skills does not, in today’s assessment regime, reflect a successful outcome. The assessments crowd wants proof that it sank in, and grades are the unit of measurement.

Accordingly, extra credit for attendance at, say, even the most erudite and inspiring guest lecture outside class corrupts grades as a pure measurement of performance.

Sure, the lecture can be of value, whether demonstrable or not, in the intellectual development of the student, and giving credit for going to it is an effective incentive to attend. Nonetheless, that type of extra credit contaminates grades as a measure of performance, as it can allow the grades of students who attend extra-credit events to leapfrog over the grades of those who outperformed them on the exam but did not attend.

The next contaminant of grades as measurements of performance is the upgrade for stellar attendance. There is, of course, no guarantee that the ubiquitous attendee wasn’t an accomplished daydreamer or a back-row socialite. And if they were present and genuinely plugged in to each class, their diligence should show up in their exam performance, so extra credit merely gilds the lily.

Using the stick as well as the carrot, some professors do the opposite: they downgrade students for disruptive behavior or chronically poor preparation or attendance. Like a doctor using a hammer to anesthetize a patient, downgrades aimed at controlling behavior produce collateral damage. Colleges have better tools — like meetings with the dean of students — to address conduct-related problems.

Finally, we come to what may well be the most common nonperformance variable incorporated into grades: the class participation upgrade that so many of us rely on to break the deafening silence we’d otherwise encounter in casting pearls of wisdom upon the class. Class participation upgrades that recognize and reward the volume, rather than quality, of a student’s classroom contributions pollute performance-based assessment.

Participation upgrades for remarks that consistently advance the class discussion is a more complex issue. …

I definitely encourage readers to check out the full article.  It raises some interesting points. The article has caused me to rethink whether to give “bumps” for class participation. (I’ve never given bumps for attending outside events, and class attendance for our students is compulsory.)  A broader point: although those of us in the “assessment crowd” think and write a lot about formative assessment, we should also recognize the importance of utilizing reliable and effective methods for summative assessment—a point that Professor Silver notes throughout the article.