Suskie: Course vs. Program Learning Goals

Linda Suskie has a new blog post up about the difference between course and program learning goals.  She begins by cutting through some of the jargon and vocabulary to summarize learning goals as:

Learning goals (or whatever you want to call them) describe what students will be able to do as a result of successful completion of a learning experience, be it a course, program or some other learning experience. So course learning goals describe what students will be able to do upon passing the course, and program learning goals describe what students will be able to do upon successfully completing the (degree or certificate) program.

I encourage readers to check out the full post from Ms. Suskie.

Vollweiler: Don’t Panic! The Hitchhiker’s Guide to Learning Outcomes: Eight Ways to Make Them More Than (Mostly) Harmless

Professor and Associate Dean Debra Moss Vollweiler (Nova) has an interesting article on SSRN entitled, “Don’t Panic! The Hitchhiker’s Guide to Learning Outcomes: Eight Ways to Make Them More Than (Mostly) Harmless.”  Here’s an excerpt of the abstract:

Legal education, professors and administrators at law schools nationwide have finally been thrust fully into the world of educational and curriculum planning. Ever since ABA Standards started requiring law schools to “establish and publish learning outcomes” designed to achieve their objectives, and requiring how to assess them debuted, legal education has turned itself upside down in efforts to comply. However, in the initial stages of these requirements, many law schools viewed these requirements as “boxes to check” to meet the standard, rather than wholeheartedly embracing these reliable educational tools that have been around for decades. However, given that most faculty teaching in law schools have Juris Doctorate and not education degrees, the task of bringing thousands of law professors up to speed on the design, use and measurement of learning outcomes to improve education is a daunting one. Unfortunately, as the motivation to adopt them for many schools was merely meeting the standards, many law schools have opted for technical compliance — naming a committee to manage learning outcomes and assessment planning to ensure the school gets through their accreditation process, rather than for the purpose of truly enhancing the educational experience for students. … While schools should not be panicking at implementing and measuring learning outcomes, neither should they consign the tool to being a “mostly harmless” — one that misses out on the opportunity to improve their program of legal education through proper leveraging. Understanding that outcomes design and appropriate assessment design is itself a scholarly, intellectual function that requires judgment, knowledge and skill by faculty can dictate a path of adoption that is thoughtful and productive. This article serves as a guide to law schools implementing learning outcomes and their assessments as to ways these can be devised, used, and measured to gain real improvement in the program of legal education.

The article offers a number of recommendations for implementing assessment in a meaningful way:

  1. Ease into Reverse Planning with Central Planning and Modified Forward Planning
  2. Curriculum Mapping to Ensure Programmatic Learning Outcomes Met
  3. Cooperation Among Sections of Same Course and Vertically Through Curriculum
  4. Tying Course Evaluations to Learning Outcomes to Measure Gains
  5. Expanding the Idea of What Outcomes Can be for Legal Education
  6. Better use of Formative Assessments to Measure
  7. Use of the Bar Exam Appropriately to Measure Learning Outcomes
  8. Properly Leverage Data on Assessments Through Collection and Analysis

I was particularly interested in Professor Vollweiler’s third recommendation.  Law school courses and professors are notoriously siloed.  Professors teaching the same course will use different texts, adopt varying learning outcomes, and assess their students in distinct ways.  This siloing makes it difficult to look at student learning at a more macro level.  Professor Vollweiler effectively dismantles the arguments against common learning outcomes.  The article should definitely be on summer reading lists!

The Value of Sampling in Assessment

I just returned from the biennial ABA Associate Deans’ Conference, a fun and rewarding gathering of associate deans for academics, student affairs, research, administration, and similar portfolios.  (Interestingly, more and more associate deans seem to have assessment in their titles.)

I spoke on a plenary panel about assessment, and I discussed the value of sampling in conducting programmatic assessment.  I wanted to elaborate on some of my thoughts on the subject.

Let’s say a school wants to assess the extent to which students are meeting a learning outcome on writing.  One way to do so would be to conduct what is called a “census,” in which every student’s writing in a course or sequence is evaluated by an assessment committee.  In a small LL.M. or Juris Master’s program of 10 or 20 students, this might be feasible.  But in a school with, say, 900 J.D. students, it is not workable.

A more feasible approach is to use a “sample” — a subset of the larger group.  So instead of reviewing 900 papers, perhaps the committee might look at 50 or 100.  If the sample is properly constructed, it is permissible to extrapolate the results and draw conclusions about the larger population.

Sometimes using a census is workable, even for a large group.  For example, if all the faculty who teach a subject agree to embed 10 of the same multiple-choice questions in their final exams, those results can be analyzed to see how students performed on the material being tested.
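As a quick illustration of the kind of item-level analysis this allows, here is a minimal sketch in Python.  The results matrix is entirely made up (three students, ten embedded questions); in practice, the answer data from every section would be combined first.

```python
# A minimal sketch, assuming a hypothetical results matrix: one row per student,
# one 0/1 entry per embedded question (1 = answered correctly).
results = [
    [1, 1, 0, 1, 1, 0, 1, 1, 1, 0],   # student 1
    [1, 0, 0, 1, 1, 1, 1, 0, 1, 1],   # student 2
    [0, 1, 1, 1, 0, 0, 1, 1, 1, 1],   # student 3
]

num_students = len(results)
for q in range(len(results[0])):
    pct_correct = sum(row[q] for row in results) / num_students
    print(f"Question {q + 1}: {pct_correct:.0%} correct")
```

A report like this shows, question by question, where students struggled with the material being tested.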

Frequently, though, we are assessing something, like writing, that does not lend itself easily to embedded multiple-choice questions or other easy-to-administer forms of assessment.  That’s where sampling comes in.  The key is to construct a representative sample of the larger population.  Here are some tips for doing so:

  • Consider, first, what you will be assessing.  Are you reviewing two-page papers?  Ten-page memos?  Thirty-page appellate briefs?  Fifteen-minute oral arguments in a moot court exercise?  Each of these will call for a different time commitment from your reviewers.  Next, take into account how many reviewers you will have.  The more reviewers, the more documents you’ll be able to assess.  Consider, also, that you’ll likely need multiple reviewers per item being assessed, and time should be allotted for the reviewers to “calibrate” their expectations.  All of this will give you an idea of how much time it will take per reviewer per document or performance.
  • In general, the larger the sample size, the better.  Statistically, this has to do with the “margin of error” and “confidence interval.”  For more on picking a sample size, check out this very helpful article from Washington State University.  But, in general, a quick rule of thumb is a minimum of 10 students or 10% of the population, whichever is greater.
  • It is preferable for those doing the assessment not to be involved in picking the sample itself.  Here’s where having an assessment or data coordinator can be helpful.  Most of the time, a sample can be selected at random, and online random number generators can help here.  There are suggestions for simplifying this process in the document I linked to above.
  • Once you have selected your sample size and identified who will be in the sample, make sure the sample is representative.  For example, if your population is composed of 60% women and 40% men, the sample should probably approximate this breakdown as well.  I like to look, too, at the average LSAT and UGPA of the groups, as well as law school GPA, to make sure we’ll be assessing a sample that is academically representative of the larger population.  (The sketch after this list shows one way to draw and check such a sample.)
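To make the mechanics concrete, here is a minimal sketch in Python.  Everything in it is hypothetical: the simulated student records, the characteristics being compared, and the decision to use the 10% rule.  The proportion_sample_size function is simply the standard formula for estimating a proportion at a given confidence level, with a finite-population correction; I offer it as one way to formalize the rule of thumb, not as the method from the Washington State article.

```python
import math
import random
import statistics

def rule_of_thumb_size(population_size: int) -> int:
    """The quick rule of thumb above: at least 10 students or 10% of the
    population, whichever is greater."""
    return max(10, math.ceil(0.10 * population_size))

def proportion_sample_size(population_size: int, margin_of_error: float = 0.10,
                           z: float = 1.96, p: float = 0.5) -> int:
    """Standard sample-size formula for estimating a proportion (z = 1.96 is a
    95% confidence level), with a finite-population correction."""
    n0 = (z ** 2) * p * (1 - p) / margin_of_error ** 2
    return math.ceil(n0 / (1 + (n0 - 1) / population_size))

# Hypothetical student records; in practice these would come from the registrar
# or an assessment/data coordinator rather than the reviewers themselves.
population = [
    {"id": i,
     "gender": random.choice(["woman", "man"]),
     "lsat": random.gauss(155, 5),
     "law_gpa": random.gauss(3.2, 0.4)}
    for i in range(900)
]

n = rule_of_thumb_size(len(population))   # 90 for a class of 900
sample = random.sample(population, n)     # simple random sample

def summarize(group, label):
    """Compare a group on the characteristics mentioned above:
    gender mix, average LSAT, and average law school GPA."""
    pct_women = sum(s["gender"] == "woman" for s in group) / len(group)
    print(f"{label}: n={len(group)}, {pct_women:.0%} women, "
          f"mean LSAT {statistics.mean(s['lsat'] for s in group):.1f}, "
          f"mean GPA {statistics.mean(s['law_gpa'] for s in group):.2f}")

summarize(population, "Population")
summarize(sample, "Sample")
```

If the sample’s gender mix or average credentials look very different from the class as a whole, draw a new sample (or stratify) before handing anything to reviewers.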

In the assessment projects I have worked on, I have found sampling to be an effective way to make assessment easier for faculty who have a lot of competing demands on their time.

Dean Vikram Amar on Constructing Exams

Dean Vikram Amar (Illinois) has an excellent post on Above the Law about exam writing.  He offers four thoughts based on his experience as a professor, associate dean, and dean.  First, Dean Amar talks about the benefits of interim assessments:

Regardless of how much weight I attach to midterm performance in the final course grade, and even if I use question types that are faster to grade than traditional issue spotting/analyzing questions — e.g., short answer, multiple-choice questions, modified true/false questions in which I tell students that particular statements are false but ask them to explain in a few sentences precisely how so — the feedback I get, and the feedback the students get, is invaluable.

Second, Dean Amar articulates an argument in favor of closed-book exams.

What This Professor Learned by Becoming a Student Again

For the past year, I have been a student again.  Once I finish a final paper (hopefully tomorrow), I will be receiving a Graduate Certificate in Assessment and Institutional Research from Sam Houston State University.

I enrolled in the program at “Sam” (as students call it) because I wanted to receive formal instruction in assessment, institutional research, data management, statistics, and, more generally, higher education.  These were areas where I was mainly self-taught, and I thought the online program at Sam Houston would give me beneficial skills and knowledge.  The program has certainly not disappointed.  The courses were excellent, the professors knowledgeable, and the technology flawless.  I paid for the program out-of-pocket, and it was worth every penny.  It has made me better at programmatic assessment and institutional research.  (I also turned one of my research papers into an article, which just came out this week.)

But the program had another benefit: It has made me a better teacher.

New Article on the UBE

Professor and Director of Academic Support and Bar Passage Marsha Griggs (Washburn) has a new article on SSRN, “Building a Better Bar Exam.”  It is a well-written critique of the Uniform Bar Exam (UBE).  From the SSRN summary:

In the wake of declining bar passage rates and limited placement options for law grads, a new bar exam has emerged: the UBE. Drawn to an allusive promise of portability, 36 U.S. jurisdictions have adopted the UBE. I predict that in a few years the UBE will be administered in all states and U.S. territories. The UBE has snowballed from an idea into the primary gateway for entry into the practice of law. But the UBE is not a panacea that will solve the bar passage problems that U.S. law schools face. Whether or not to adopt a uniform exam is no longer the question. Now that the UBE has firmly taken root, the question to be answered is what can be done to make sure that the UBE does less harm than good?

This paper will, in four parts, examine the meteoric rise and spread of the UBE and the potential costs of its quick adoption. Part one will survey the gradual move away from state law exams to the jurisdictionally neutral UBE. Part two will identify correlations between recent changes to the multistate exams and a stark national decline in bar passage rates. Part three will address the limitations of the UBE, including the misleading promise of score portability and the consequences of forum shopping. Part four will propose additional measures that can coexist with the UBE to counterbalance its limitations to make a better bar exam for our students and the clients they will serve.

The UBE, while well-intentioned, has had unintended consequences.  In the Empire State, the New York State Bar Association—a voluntary membership organization, not a licensing or regulatory entity—is studying the impact of our state’s move to the UBE a few years ago.  As Patricia Salkin (Provost, Graduate and Professional Divisions, Touro) and I wrote about in the New York Law Journal, there was a precipitous decline in New York Practice enrollment statewide after New York’s “unique” civil procedure code, the Civil Practice Law and Rules, was no longer tested on the bar exam.  Students voted with their feet and flocked to other courses.  The NYSBA Task Force will attempt to assess whether there has been a decrease in lawyer competency following the adoption of the UBE.

In the meantime, Professor Griggs’ article makes a nice addition to the conversation around various aspects of the UBE.

 

The Point of Curriculum Maps

Over at her blog, Linda Suskie asks the question, “Why are we doing curriculum maps?”  She argues that curriculum maps—charts that show where learning goals are achieved in program requirements—can answer several questions:

Is the curriculum designed to ensure that every student has enough opportunity to achieve each of its key learning goals? A program curriculum map will let you know if a program learning goal is addressed only in elective courses or only in one course.

Is the curriculum appropriately coherent? Is it designed so students strengthen their achievement of program learning goals as they progress through the program? Or is attention to program learning goals scattershot and disconnected?

Does the curriculum give students ample and diverse opportunities to achieve its learning goals? Many learning goals are best achieved when students experience them in diverse settings, such as courses with a variety of foci.

Does the curriculum have appropriate, progressive rigor? Do higher-numbered courses address program learning goals on a more advanced level than introductory courses? While excessive prerequisites may be a barrier to completion, do upper-level courses have appropriate prerequisites to ensure that students in them tackle program learning goals at an appropriately advanced level?

Does the curriculum conclude with a capstone experience? Not only is this an excellent opportunity for students to integrate and synthesize their learning, but it’s an opportunity for students to demonstrate their achievement of program learning goals as they approach graduation. A program curriculum map will tell you if you have a true capstone in which students synthesize their achievement of multiple program learning goals.

Is the curriculum sufficiently focused and simple? You should be able to view the curriculum map on one piece of paper or computer screen. If you can’t do this, your curriculum is probably too complicated and therefore might be a barrier to student success.

Is the curriculum responsive to the needs of students, employers, and society? Look at how many program learning goals are addressed in the program’s internship, field experience, or service learning requirement. If a number of learning goals aren’t addressed there, the learning goals may not be focusing sufficiently on what students most need to learn for post-graduation success.

She doesn’t view the primary purpose of curriculum maps as identifying where in a curriculum to find assessments of particular learning goals.  I’ve previously argued the contrary, that this is their primary purpose, but I think I’m coming around to Ms. Suskie’s view.  The point I would emphasize, however, is that curriculum mapping—while valuable—is not in and of itself programmatic assessment.  It does not demonstrate whether students are achieving the learning outcomes we have set out for them, only where evidence of such learning may be found.

As a tool for assessing the curriculum (as opposed to student learning), maps can be quite helpful.  Ms. Suskie offers several suggestions in this regard:

Elective courses have no place in a curriculum map. Remember one of the purposes is to ensure that the curriculum is designed to ensure that every student has enough opportunity to achieve every learning goal. Electives don’t help with this analysis.

My take: I agree and disagree.  Electives are not helpful if you are trying to determine what every student will have learned.  But a map that includes elective courses can reveal a mismatch between degree requirements and learning outcomes.  For example, at our school, a curriculum map showed that although we had identified negotiation as a critical skill for our students, it was being taught only in a handful of electives that relatively few students were taking.  (This led us to develop an innovative required Lawyering Skills course for all students.)

List program requirements, not program courses. If students can choose from any of four courses to fulfill a particular requirement, for example, group those four courses together and mark only the program learning outcomes that all four courses address.

My take: I agree.  In theory, the courses in the cluster should all revolve around a common goal.

Codes can help identify if the curriculum has appropriate, progressive rigor. Some assessment management systems require codes indicating whether a learning goal is introduced, developed further, or demonstrated in each course, rather than simply whether it’s addressed in the course.

My take: I agree.  Note that faculty will need definitions for the various levels of rigor, and one should be on the lookout for “puffing”—a course for which a professor claims that all of the learning outcomes are being addressed at an “advanced” level.
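To illustrate what those codes buy you, here is a minimal sketch in Python, with entirely hypothetical course names, outcomes, and code assignments (“I” = introduced, “D” = developed, “A” = advanced):

```python
# A minimal sketch of a coded curriculum map, using hypothetical courses and
# outcomes.  "I" = introduced, "D" = developed, "A" = advanced.
curriculum_map = {
    "Legal Writing I (required)":  {"Written Communication": "I", "Legal Research": "I"},
    "Legal Writing II (required)": {"Written Communication": "D", "Legal Research": "D"},
    "Negotiation (required)":      {"Negotiation": "I"},
    "Capstone Clinic (required)":  {"Written Communication": "A", "Negotiation": "A"},
}

program_outcomes = ["Written Communication", "Legal Research", "Negotiation"]
LEVEL = {"I": 1, "D": 2, "A": 3}

# Two questions the map can answer: is every outcome addressed somewhere in the
# required curriculum, and does each outcome progress to an advanced level?
for outcome in program_outcomes:
    levels = [LEVEL[codes[outcome]]
              for codes in curriculum_map.values() if outcome in codes]
    if not levels:
        print(f"{outcome}: not addressed in any required course")
    elif max(levels) < LEVEL["A"]:
        print(f"{outcome}: addressed, but never at an advanced level")
    else:
        print(f"{outcome}: reaches the advanced level")
```

Run against a real map, a check like this flags the problems discussed above: an outcome that no required course addresses, and an outcome that never progresses beyond the introductory or developing level.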

Check off a course only if students are graded on their progress toward achieving the learning goal. Cast a suspicious eye at courses for which every program learning goal is checked off. How can those courses meaningfully address all those goals?

My take: 100% agree.


Law school curricula are notoriously “flat.”  After the first year, there is not necessarily a progression of courses.  Students are left to choose from various electives.  Courses are not stacked on top of one another, as they are in other disciplines and at the undergraduate level.  There are exceptions: some schools prescribe requirements or clusters of courses in the 2L and 3L years that build sequentially on learning outcomes.  And some schools have capstone courses, a form of stacking.

So much attention in law school curricular reform is paid to which courses are worthy of being required in the first year.  But we have three or four years with students.  In my view, assessment gives us a chance to talk meaningfully about the upper-level curriculum.  And, as Ms. Suskie points out, mapping can help with this endeavor.

Guest Post (Ezra Goldschlager): Don’t Call It Assessment – Focusing on “Assessment” Is Alienating and Limiting

I’m delighted to welcome Ezra Goldschlager (LaVerne) for a guest post on the language of assessment:

***

When the ABA drafted and approved Standard 314, requiring law schools to “utilize … assessment methods” to “measure and improve student learning,” it missed an opportunity to focus law schools’ attention and energy on the right place: continuous quality improvement. The ABA’s choice of language in 314 (requiring schools to engage in “assessment”) and in the similarly framed 315 (requiring law schools to engage in “ongoing evaluation” of programs of legal education) will guide law schools down the wrong path for a few reasons.

Calling it “assessment” gives it an air of “otherness”; it conjures a now decades-old dialogue about a shift from teaching to learning, and brings with it all of the associated preconceptions (and baggage). “Assessment” is a loaded term of art, imbued with politics and paradigm shifts.

Because it is rooted in this history, “assessment” can overwhelm. While asking faculty to improve their teaching may not sound trivial, it does not impose the same burden that asking faculty to “engage in assessment” can. I can try to help my students learn better without thinking about accounting for a substantial body of literature, but ask me to “do assessment” and I may stumble at the starting line, at least vaguely aware of this body of literature, brimming with best practices and commandments requiring me to engage in serious study.

Calling it “assessment,” that is, labeling it as something stand-alone (apart from our general desires to get better at our professions), makes it easy (and natural) for faculty to consider it something additional and outside of their general responsibilities. Administrators can do “assessment,” we may conclude; I will “teach.”

When administrators lead faculties in “assessment,” the focus on measurement is daunting. Faculty buy-in is a chronic problem for assessment efforts, and that’s in part because of suspicion about what’s motivating the measurement or what might be done with the results. This suspicion is only natural — we expect, in general, that any measurement is done for a reason, and when administrators tell faculty to get on board with assessment, they must wonder: to what end?

Finally, and probably most important, calling it “assessment” makes the improvement part of the process seem separate and perhaps optional. We don’t usually go to doctors just for a diagnosis; we want to get better. Asking one’s doctor to “diagnose,” however, does leave the question hanging: does my patient want me to help her improve, too? Calling it “assessment” puts the focus on only part of a cycle that must be completed if we are to make any of the assessing worthwhile. “Don’t just assess, close the loop,” faculty are admonished. That admonishment might not be as necessary if the instruction, in the first place, were not just to “assess.”