Cross Cultural Competency as a Learning Outcome

The ABA received a flood of comments on a series of proposed changes to the Standards. The proposals involve professional identity formation and, more broadly, diversity and inclusion. I added my two cents, suggesting where the proposals got it right and where they could use improvement. I also noted an overarching concern: the ABA Standards should not be used to advance particular views of legal education; whether to adopt such views is a decision better left to individual schools. As I wrote, I fully understand why schools may wish to specialize or distinguish themselves through professional identity formation or, for that matter, law and economics, public interest law, or international perspectives. That does not mean those views should be imposed, as a matter of accreditation, on the other roughly 200 schools governed by the Standards.

In any event, one of the proposals would require all law schools to provide training in “bias, cross-cultural competency, and racism” at two points in a student’s education: once at the beginning and at least once more before graduation. If a student takes a clinic or externship, the second training must occur before (or concurrently with) enrollment in the clinic or externship. The proposal would add this requirement to Standard 303, the same Standard that requires (a) three broad course sequences (professional responsibility, legal writing, and the six-credit experiential requirement) and (b) substantial opportunities for clinics, externships, and pro bono work.

In my comment, I suggested a different approach: adding cross-cultural competency to Standard 302.


Student Services and Assessment

The word “assessment” is most often used in the context of students’ academics: measuring the learning that takes place in formal coursework. Let me suggest, though, that administrators and staff who work in student services can both engage in their own assessment and help the academic side of the house with learning outcomes assessment.

(I’m speaking on this topic at the AALS Annual Meeting on Saturday, January 4, at 2:45 pm in Washington 1, if you’re interested in hearing more. My slides are attached here.)

Law schools employ many types of professionals besides faculty. A given law school may also have …

Suskie: Course vs. Program Learning Goals

Linda Suskie has a new blog post up about the difference between course and program learning goals.  She begins by cutting through the jargon to summarize learning goals:

Learning goals (or whatever you want to call them) describe what students will be able to do as a result of successful completion of a learning experience, be it a course, program or some other learning experience. So course learning goals describe what students will be able to do upon passing the course, and program learning goals describe what students will be able to do upon successfully completing the (degree or certificate) program.

I encourage readers to check out the full post from Ms. Suskie.

Vollweiler: Don’t Panic! The Hitchhiker’s Guide to Learning Outcomes: Eight Ways to Make Them More Than (Mostly) Harmless

Professor and Associate Dean Debra Moss Vollweiler (Nova) has an interesting article on SSRN entitled, “Don’t Panic! The Hitchhiker’s Guide to Learning Outcomes: Eight Ways to Make Them More Than (Mostly) Harmless.”  Here’s an excerpt of the abstract:

Legal education, professors and administrators at law schools nationwide have finally been thrust fully into the world of educational and curriculum planning. Ever since ABA Standards started requiring law schools to “establish and publish learning outcomes” designed to achieve their objectives, and requiring how to assess them debuted, legal education has turned itself upside down in efforts to comply. However, in the initial stages of these requirements, many law schools viewed these requirements as “boxes to check” to meet the standard, rather than wholeheartedly embracing these reliable educational tools that have been around for decades. However, given that most faculty teaching in law schools have Juris Doctorate and not education degrees, the task of bringing thousands of law professors up to speed on the design, use and measurement of learning outcomes to improve education is a daunting one. Unfortunately, as the motivation to adopt them for many schools was merely meeting the standards, many law schools have opted for technical compliance — naming a committee to manage learning outcomes and assessment planning to ensure the school gets through their accreditation process, rather than for the purpose of truly enhancing the educational experience for students. … While schools should not be panicking at implementing and measuring learning outcomes, neither should they consign the tool to being a “mostly harmless” — one that misses out on the opportunity to improve their program of legal education through proper leveraging. Understanding that outcomes design and appropriate assessment design is itself a scholarly, intellectual function that requires judgment, knowledge and skill by faculty can dictate a path of adoption that is thoughtful and productive. This article serves as a guide to law schools implementing learning outcomes and their assessments as to ways these can be devised, used, and measured to gain real improvement in the program of legal education.

The article offers a number of recommendations for implementing assessment in a meaningful way:

  1. Ease into Reverse Planning with Central Planning and Modified Forward Planning
  2. Curriculum Mapping to Ensure Programmatic Learning Outcomes Met
  3. Cooperation Among Sections of Same Course and Vertically Through Curriculum
  4. Tying Course Evaluations to Learning Outcomes to Measure Gains
  5. Expanding the Idea of What Outcomes Can be for Legal Education
  6. Better use of Formative Assessments to Measure
  7. Use of the Bar Exam Appropriately to Measure Learning Outcomes
  8. Properly Leverage Data on Assessments Through Collection and Analysis

I was particularly interested in Professor Vollweiler’s third recommendation.  Law school courses and professors are notoriously siloed: professors teaching the same course will use different texts, adopt varying learning outcomes, and assess their students in distinct ways.  This makes it difficult to examine student learning at a more macro level.  Professor Vollweiler effectively dismantles the arguments against common learning outcomes.  The article should definitely be on summer reading lists!

Publishing Learning Objectives in Course Syllabi

With the fall semester about a month away (eek!), many faculty are turning their attention to refreshing their courses and preparing their syllabi. This is an opportune time to repost my thoughts on course-level student learning outcomes, which the ABA requires us to publish to our students. Much ink has been spilled over which verbs are proper to use in learning outcomes; as I noted in August 2016, I hope that we in legal education can take a more holistic view.

Law School Assessment

The new ABA standards are largely focused on programmatic assessment: measuring whether students, in fact, have learned the knowledge, skills, and values that we want them to achieve by the end of the J.D. degree. This requires a faculty to gather and analyze aggregated data across the curriculum. Nevertheless, the ABA standards also implicate individual courses and the faculty who teach them.

According to the ABA Managing Director’s guidance memo on learning outcomes assessment, “Learning outcomes for individual courses must be published in the course syllabi.” 


A Simple, Low-Cost Assessment Process?

Professor Andrea Curcio (Georgia State) has published A Simple Low-Cost Institutional Learning-Outcomes Assessment Process, 67 J. Legal Educ. 489 (2018). It’s an informative article, arguing that, in light of budgetary pressures, faculty should use AAC&U-style rubrics to assess competencies across a range of courses; the results can then be pooled and analyzed.  In her abstract on SSRN, Professor Curcio states:

The essay explains a five-step institutional outcomes assessment process: 1. Develop rubrics for institutional learning outcomes that can be assessed in law school courses; 2. Identify courses that will use the rubrics; 3. Ask faculty in designated courses to assess and grade as they usually do, adding only one more step – completion of a short rubric for each student; 4. Enter the rubric data; and 5. Analyze and use the data to improve student learning. The essay appendix provides sample rubrics for a wide range of law school institutional learning outcomes. This outcomes assessment method provides an option for collecting data on institutional learning outcomes assessment in a cost-effective manner, allowing faculties to gather data that provides an overview of student learning across a wide range of learning outcomes. How faculties use that data depends upon the results as well as individual schools’ commitment to using the outcomes assessment process to help ensure their graduates have the knowledge, skills and values necessary to practice law.

This is an ideal way to conduct assessment because it measures students’ actual performance in their courses, rather than performance on a simulated exercise that is unconnected to any course and on which, therefore, students may not give full effort. The article is particularly valuable to the field because it includes sample rubrics for a range of learning outcomes that law schools are likely to measure. It’s definitely worth a read!
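For the quantitatively inclined, steps 4 and 5 of the process (entering and analyzing the rubric data) require only modest tooling. Here is a minimal sketch in Python; the column names and the 1–4 rating scale are hypothetical, not taken from the article:

```python
# A minimal sketch of entering rubric scores and summarizing them by
# learning outcome. Column names and the 1-4 scale are hypothetical.
import pandas as pd

# Each row: one student's rubric rating for one outcome in one course.
scores = pd.DataFrame({
    "course":  ["Evidence", "Evidence", "Clinic", "Clinic"],
    "outcome": ["Written communication"] * 2 + ["Oral communication"] * 2,
    "rating":  [3, 2, 4, 3],  # 1 = novice ... 4 = proficient
})

# Percentage of students rated "competent" (3 or higher) on each outcome.
summary = (
    scores.assign(competent=scores["rating"] >= 3)
          .groupby("outcome")["competent"]
          .mean()
          .mul(100)
)
print(summary)
```

With the pooled data in a frame like this, an assessment committee can slice results by course, section, or cohort without asking faculty for anything beyond the one extra rubric step Professor Curcio describes.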

My only concern is with getting faculty buy-in.  Professor Curcio states, “In courses designated for outcomes measurement, professors add one more step to their grading process. After grading, faculty in designated courses complete an institutional faculty-designed rubric that delineates, along a continuum, students’ development of core competencies encompassed by a given learning outcome. The rubric may be applied to every student’s work or to that of a random student sample.”

Assessing legal research

Legal research is a competency mandated by the ABA standards. It’s also a natural area where law schools would want to know whether their students are performing competently. And this outcome is low-hanging fruit for assessment: there are numerous places in the curriculum where faculty already examine students’ research (1L legal writing, clinics, externships, and seminars all come to mind).

Laura Ray, Outreach and Instructional Services Librarian at Cleveland-Marshall College of Law, is gathering information on how law schools are planning to assess legal research outcomes. She invites comments at l.ray@csuohio.edu.

Thoughts on Assessing Communication Competencies

The ABA Standards set forth the minimal learning outcomes that every law school must adopt. They include “written and oral communication in the legal context.”

“Written communication” as a learning outcome is “low-hanging fruit” for law school assessment committees. For a few reasons, this is an easy area to begin assessing students’ learning on a program level:

  1. Per the ABA standards, there must be a writing experience in both the 1L year and in at least one upper-level semester. (Some schools, such as ours, have several writing requirements.) This provides many opportunities to look at student growth over time by assessing the same students’ work as 1Ls and again as 2Ls or 3Ls.  In theory, there should be improvement over time! (A minimal sketch of such a comparison appears after this list.)
  2. Writing naturally generates “artifacts” to assess.  Unlike other competencies, which may require the generation of special, artificial exams or other assessments, legal writing courses are already producing several documents per student to examine.
  3. Legal writing faculty are a naturally collaborative group, if I do say so myself!  Even in schools without a formal structure (so-called “directorless” programs), my experience is that legal writing faculty work together on common problems and assignments, syllabi, and rubrics, which allows for assessment across sections.  I also find that legal writing faculty, given the nature of their courses, think carefully about assessment generally.
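On the growth-over-time point from item 1, here is a hedged sketch of what the longitudinal comparison might look like. The students, ratings, and 1–4 scale are invented for illustration; a real analysis would pool scores from many sections:

```python
# Compare the same students' writing rubric ratings as 1Ls and as 3Ls.
# All numbers are invented; a 1-4 (novice to proficient) scale is assumed.
from statistics import mean

# Paired ratings for the same five students, in the same order.
first_year = [2, 3, 2, 2, 3]   # 1L memo, scored on a common rubric
third_year = [3, 3, 3, 4, 3]   # upper-level paper, same rubric

growth = [later - earlier for earlier, later in zip(first_year, third_year)]
print(f"Mean 1L rating:   {mean(first_year):.2f}")
print(f"Mean 3L rating:   {mean(third_year):.2f}")
print(f"Mean improvement: {mean(growth):+.2f}")
```

The key design choice is pairing: because the same rubric is applied to the same students at two points in time, any difference is more plausibly attributable to growth than to differences between cohorts.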

Oral communication is another matter: it is a more difficult outcome to assess. Apart from a first-year moot court exercise, most schools don’t have required courses in oral skills, although that may be changing with the ABA’s new experiential learning requirement.  Still, there are some good places in the curriculum to look for evidence of student learning on this outcome.  Trial and appellate advocacy courses, for example, require significant demonstration of the skill, although at some schools only a few students may take advantage of these opportunities.  Clinics are a goldmine, as are externships.  For these courses, surveying faculty about students’ oral communication skills is one way to gather evidence of student learning, but it is an indirect measure.  A better way to assess this outcome is to use common rubrics for particular assignments or experiences.  For example, after students appear in court on a clinic case, the professor could rate them using a commonly applied rubric.  Those rubrics could then be used both to grade the individual students and to assess student learning more generally.
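To make that dual use concrete, here is a minimal sketch of how a single shared rubric rating could feed both an individual grade component and a pooled program-level tally. The criteria, the 1–4 scale, and the weighting are all hypothetical:

```python
# One shared rubric rating serves two purposes: an individual grade
# component and a program-level summary. Criteria and scale are hypothetical.
from collections import Counter

RUBRIC_CRITERIA = ["organization", "legal analysis", "delivery"]

def rubric_to_grade_points(ratings: dict[str, int], max_rating: int = 4) -> float:
    """Convert one student's 1-4 rubric ratings into a 0-100 grade component."""
    return 100 * sum(ratings.values()) / (max_rating * len(ratings))

# Ratings from a clinic court appearance, one dict per student.
all_ratings = [
    {"organization": 3, "legal analysis": 4, "delivery": 3},
    {"organization": 2, "legal analysis": 3, "delivery": 4},
]

# Individual use: each student's grade component.
for r in all_ratings:
    print(f"Grade component: {rubric_to_grade_points(r):.1f}")

# Program-level use: the same data, pooled as rating counts per criterion.
pooled = {c: Counter(r[c] for r in all_ratings) for c in RUBRIC_CRITERIA}
print(pooled)
```

Because the program-level summary is computed from the very ratings used for grading, the assessment step adds no work for the rater beyond completing the rubric once.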

Database on Law Schools’ Learning Outcomes

The Holloran Center at St. Thomas Law School (MN)—run by Jerry Organ and Neil Hamilton—has created a database of law schools’ efforts to adopt learning outcomes.  The center plans to update the database quarterly.  

One of the very helpful aspects of the database is that it is coded so that a user can filter by school and by learning outcomes that go beyond the ABA minimum.  This will be a terrific resource as schools roll out the new ABA standards on learning outcomes.  In addition, for those of us interested in assessment as an area of scholarship, it is a treasure trove of data.

Frustratingly, it looks like many schools have decided not to go beyond the minimum competencies set forth in ABA Standard 302, what the Holloran Center categorizes as a “basic” set of outcomes. The ABA’s list is far from exhaustive.  Schools that have essentially copied and pasted from Standard 302 have missed an opportunity to make their learning outcomes their own by incorporating the aspects of their missions that distinguish them from other schools.  Worse, it may be a sign that some schools are being dragged into the world of assessment kicking and screaming. On the other hand, it may simply indicate a lack of training, or a belief that the ABA’s minimums fully capture the core learning outcomes every student should attain. Only time will tell.  As schools actually begin to assess their learning outcomes, we’ll have a better idea of how seriously they are taking assessment.

Assessment and Strategic Planning

Over at PrawfsBlawg, my friend Jennifer Bard, dean of Cincinnati Law School, has a post on “Learning Outcomes as the New Strategic Planning.” She points readers to Professors Shaw and VanZandt’s book, Student Learning Outcomes and Law School Assessment. The book is an excellent resource, although parts of it may be too advanced for schools just getting started with assessment; it sits on the corner of my desk and is consulted often.  (Dean Bard also gave a nice shoutout to my blog as a resource.)

Citing an article by Hanover Research, Dean Bard draws a key distinction between strategic planning activities of yesteryear and what’s required under the new ABA standards.

Traditionally, law school strategic plans focused on outcomes other than whether students were learning what the school had determined they should learn. These plans often addressed things like faculty scholarly production, diversity, student career placement, fundraising, and admissions inputs. Former ABA Standard 203 required a strategic planning process (albeit not a strategic plan per se) to improve all of a school’s goals:

In addition to the self study described in Standard 202, a law school shall demonstrate that it regularly identifies specific goals for improving the law school’s program, identifies means to achieve the established goals, assesses its success in realizing the established goals and periodically re-examines and appropriately revises its established goals.

The old standard used the term “assessment” in a broad sense, not just as to student learning. In contrast, new Standard 315 focuses on assessment of learning outcomes to improve the curriculum:

The dean and the faculty of a law school shall conduct ongoing evaluation of the law school’s program of legal education, learning outcomes, and assessment methods; and shall use the results of this evaluation to determine the degree of student attainment of competency in the learning outcomes and to make appropriate changes to improve the curriculum.

This is the “closing the loop” of the assessment process: using the results of programmatic outcomes assessment to improve student learning.

So, what to do with the “old” way of strategic planning? Certainly, a school should still engage in a strategic planning process that addresses all of the school’s important outcomes and goals, of which assessment of student learning is just one piece. Paraphrasing a common expression, if you don’t measure it, it doesn’t get done. Indeed, one can interpret Standards 201 and 202 as still requiring a planning process of some kind, particularly to guide resource allocation.

Still, much of the way some schools engage in strategic planning is wasteful and ineffective. Often, the planning cycle takes years and results in a beautiful, glossy brochure (complete with photos of happy students and faculty) that sits on a shelf. I’m much more a fan of quick-and-dirty strategic planning: efficiently setting goals and action items that can be accomplished over a relatively short time horizon. What matters is not the product (the glossy brochure) but a process that is nimble, updated often, used to guide the allocation of resources, and treated as a self-accountability tool. (Here, I have to confess, my views have evolved since serving on the Strategic Priorities Review Team of our University. I now see much more value in the type of efficient planning I have described.)

In this respect, strategic planning and learning outcomes assessment should share an emphasis on process, not product. Some of the assessment reports schools generate for regional accreditation are truly works of art, but what is being done with the information? That, to me, is the ultimate question of the value of both processes.