What is the point of curriculum mapping?

Curriculum mapping is the process of identifying where in a school’s curriculum each of its learning outcomes is being taught and assessed. We recently posted our curriculum maps on our assessment webpage, including the survey instrument we used to collect data from faculty.

Curriculum mapping was a big discussion item at an assessment conference in Boston last spring, and understandably so. But, to be clear, curriculum mapping is not itself assessment. It is, rather, a tool to assist with the programmatic assessment process. It also furthers curricular reform.

Mapping is not assessment in the programmatic sense because even the best of curriculum maps will not show whether, in fact, students are learning what we want them to learn. Curriculum mapping helps with assessment because it enables an assessment committee to identify where in the curriculum to look for particular evidence (“artifacts” in the lingo) of student learning.

It also helps with curricular reform in two ways:

  • by enabling a faculty to plug holes in the curriculum. If an outcome has been identified as desirable but is not being taught to all or most students, a new degree requirement can be created. Our school did this with negotiation: we had identified it as a valuable skill but realized, through a curriculum mapping exercise done several years ago, that it was not being taught to a sufficient number of students. We then created a 1L course specifically on negotiation and other interpersonal skills.
  • by restructuring degree requirements to allow smarter sequencing. In theory, advanced instruction should build on introductory instruction. A curriculum map helps show the building blocks within each outcome: from introduction, to competence, to advanced work.

Overall, I hope that schools put serious thought into curriculum mapping, while also recognizing that it is not the end of assessment … but instead the beginning.

Cultural Competency as a Learning Outcome in Legal Writing

Eunice Park (Western State) has a short piece on SSRN, featured in the SSRN Legal Writing eJournal and published in the AALS Teaching Methods Newsletter, about assessing cultural competency in a legal writing appellate advocacy exercise. Cultural competency is listed in Interpretation 302-1 as an example of a “professional skill” that would satisfy Standard 302’s requirement that a school’s learning outcomes include “[o]ther professional skills needed for competent and ethical participation as a member of the legal profession.”

Professor Park writes:

Legal writing courses provide an ideal setting for raising awareness of the importance of sensitivity to diverse cultural mores. One way is by creating an assignment that demonstrates how viewing determinative facts from a strictly Western lens might lead to an unfair outcome.

In writing a recent appellate brief problem, I introduced cultural competence as a learning outcome by integrating culturally-sensitive legally significant facts into the assignment.

She goes on to describe the appellate brief problem and how it helped meet the goal of enhancing students’ cultural competency.

Publishing Learning Objectives in Course Syllabi

The new ABA standards are largely focused on programmatic assessment: measuring whether students, in fact, have learned the knowledge, skills, and values that we want them to achieve by the end of the J.D. degree. This requires a faculty to gather and analyze aggregated data across the curriculum. Nevertheless, the ABA standards also implicate individual courses and the faculty who teach them.

According to the ABA Managing Director’s guidance memo on learning outcomes assessment, “Learning outcomes for individual courses must be published in the course syllabi.”

Checklist for Getting Started with Assessment

I’m at a conference, Responding to the New ABA Standards: Best Practices in Outcomes Assessment, being put on by Boston University and the Institute for Law Teaching and Learning.  The conference is terrific, and I’ll have a number of posts based on what I’ve learned today.

It strikes me that law schools are at varying stages of assessment.  Some schools, particularly those that have been dealing directly with regional accreditors, are fairly well along.

But other schools are just getting started.  For those schools, I recommend keeping it simple and taking this step-by-step approach:

  1. Ask the dean to appoint an assessment committee, composed of faculty who have a particular interest in teaching and learning.
  2. Start keeping detailed records and notes of what follows.  Consider a shared collaboration space like OneDrive or Dropbox.  
  3. As a committee, develop a set of 5-10 proposed learning outcomes for the J.D. degree, using those in Standard 302 as a starting point.  (Alternatively, if you wish to start building broader buy-in, ask another committee, such as a curriculum committee, to undertake this task.)  If your school has a particular mission or focus, make sure it is incorporated in one or more of the outcomes.
  4. Bring the learning outcomes to the full faculty for a vote.
  5. Map the curriculum.  Send a survey to faculty, asking them to identify which of the institutional outcomes are taught in their courses.  If you want to go further, survey faculty on the depth of teaching/learning (introduction, practice, mastery).  Compile a chart with the courses on the Y axis and the learning outcomes on the X axis, and check off the appropriate boxes to indicate the courses in which each outcome is taught (see the sketch after this list).  Remember that the map shows only where outcomes are taught; the point of assessment is to determine whether students are actually learning them.
  6. Identify one of the outcomes to assess and decide how you’ll do so: who will measure it, which assessment tools they’ll use, and what will be done with the results.
  7. Put your learning outcomes on your school’s website.
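
To make step 5 concrete: a curriculum map is ultimately just a courses-by-outcomes grid. What follows is a minimal sketch, in Python, of how a committee might compile faculty survey responses into such a chart. The course names, outcomes, and responses are invented for illustration, and a spreadsheet would serve just as well.

    # A minimal, hypothetical sketch: compiling faculty survey responses
    # into a curriculum map (courses on the Y axis, outcomes on the X axis).
    # All course names, outcomes, and responses are invented examples.

    survey_responses = {
        "Contracts":       {"Legal Analysis", "Written Communication"},
        "Civil Procedure": {"Legal Analysis"},
        "Legal Writing I": {"Written Communication", "Legal Research"},
        "Negotiation":     {"Other Professional Skills"},
    }

    outcomes = ["Legal Analysis", "Written Communication",
                "Legal Research", "Other Professional Skills"]

    # An "X" marks a course in which faculty report the outcome is taught;
    # the map says nothing about whether students actually learn it.
    col = max(len(o) for o in outcomes) + 2
    print(" " * 18 + "".join(o.ljust(col) for o in outcomes))
    for course, taught in survey_responses.items():
        print(course.ljust(18) + "".join(
            ("X" if o in taught else "-").ljust(col) for o in outcomes))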

All of this can probably be done in 1-2 years.  It essentially completes the “design phase” of the assessment process.  Separately, I’ll post about some ideas about what not to do in the early stages …

Standardized Tests in Universities?

An interesting paper by Fredrik deBoer, a lecturer at Purdue, writing for the think tank New America, examines the rise of assessment in K-12 and higher education.  He notes that K-12 education is populated with a plethora of standardized tests, while universities and colleges tend to operate as independent silos.  He argues that higher education’s use of standardized tests, developed by outside testing firms, should be approached with caution.  At the very least, the tests should be subjected to external validation.  He writes, “Researchers must vet these instruments to determine how well they work, and what the potential unforeseen consequences are of these types of assessments, for the good of all involved.”  More from InsideHigherEd.