Do exams measure speed or performance?

A new study out of BYU attempts to answer the question.  It’s summarized at TaxProf and the full article is here. From the abstract on SSRN:

What, if any, is the relationship between speed and grades on first year law school examinations? Are time-pressured law school examinations typing speed tests? Employing both simple linear regression and mixed effects linear regression, we present an empirical hypothesis test on the relationship between first year law school grades and speed, with speed represented by two variables: word count and student typing speed. Our empirical findings of a strong statistically significant positive correlation between total words written on first year law school examinations and grades suggest that speed matters. On average, the more a student types, the better her grade. In the end, however, typing speed was not a statistically significant variable explaining first year law students’ grades. At the same time, factors other than speed are relevant to student performance.

In addition to our empirical analysis, we discuss the importance of speed in law school examinations as a theoretical question and indicator of future performance as a lawyer, contextualizing the question in relation to the debate in the relevant psychometric literature regarding speed and ability or intelligence. Given that empirically, speed matters, we encourage law professors to consider more explicitly whether their exams over-reward length, and thus speed, or whether length and assumptions about speed are actually a useful proxy for future professional performance and success as lawyers.
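
For readers curious what the study's statistical approach looks like mechanically, here is a minimal sketch in Python using simulated data. To be clear, this is not the authors' code: the variable names (grade, word_count, typing_speed, student_id), the simulated numbers, and the random-intercept structure are my own assumptions, meant only to illustrate the difference between a simple linear regression and a mixed-effects model that accounts for each student contributing several exam grades.

```python
# Hypothetical sketch of the kind of analysis the abstract describes -- not the
# authors' code. Variable names and data are invented for illustration only.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n_students, exams_per_student = 100, 4

student_id = np.repeat(np.arange(n_students), exams_per_student)
typing_speed = np.repeat(rng.normal(60, 10, n_students), exams_per_student)  # words per minute
word_count = rng.normal(3000, 500, n_students * exams_per_student)
# Simulate grades that track word count but not typing speed per se.
grade = 2.5 + 0.0003 * word_count + rng.normal(0, 0.3, n_students * exams_per_student)

df = pd.DataFrame({"student_id": student_id, "typing_speed": typing_speed,
                   "word_count": word_count, "grade": grade})

# Simple linear regression: grade on word count and typing speed.
ols_fit = smf.ols("grade ~ word_count + typing_speed", data=df).fit()
print(ols_fit.summary().tables[1])

# Mixed-effects model with a random intercept per student, since each student
# contributes multiple exam grades.
mixed_fit = smf.mixedlm("grade ~ word_count + typing_speed", data=df,
                        groups=df["student_id"]).fit()
print(mixed_fit.summary())
```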

The study raises important questions about how we structure exams. I know of colleagues who impose word-count limits (enforceable thanks to exam software), and I think I may be joining their ranks. More broadly, are our high-stakes final exams truly measuring what we want them to?

Assessment and Strategic Planning

Over at PrawfsBlawg, my friend Jennifer Bard, dean of Cincinnati Law School, has a post on “Learning Outcomes as the New Strategic Planning.” She points readers to Professors Shaw and VanZandt’s book, Student Learning Outcomes and Law School Assessment. The book is, to be sure, an excellent resource, although parts of it may be too advanced for schools just getting started with assessment. Still, it is a book that sits on the corner of my desk and is consulted often. (Dean Bard also gave a nice shoutout to my blog as a resource.)

Citing an article by Hanover Research, Dean Bard draws a key distinction between strategic planning activities of yesteryear and what’s required under the new ABA standards.

Traditionally, law school strategic plans focused on outcomes other than whether students were learning what schools had determined they should be learning. These often included things like faculty scholarly production, diversity, student career placement, fundraising, and admissions inputs. Former ABA Standard 203 required a strategic planning process (albeit not a strategic plan per se) aimed at improving the school across all of its goals:

In addition to the self study described in Standard 202, a law school shall demonstrate that it regularly identifies specific goals for improving the law school’s program, identifies means to achieve the established goals, assesses its success in realizing the established goals and periodically re-examines and appropriately revises its established goals.

The old standard used the term “assessment” in a broad sense, not just as to student learning. In contrast, new Standard 315 focuses on assessment of learning outcomes to improve the curriculum:

The dean and the faculty of a law school shall conduct ongoing evaluation of the law school’s program of legal education, learning outcomes, and assessment methods; and shall use the results of this evaluation to determine the degree of student attainment of competency in the learning outcomes and to make appropriate changes to improve the curriculum.

This is the “closing the loop” step of the assessment process: using the results of programmatic outcomes assessment to improve student learning.

So, what to do with the “old” way of strategic planning? Certainly, a school should still engage in a strategic planning process that focuses on all of the important outcomes and goals of the school, of which assessment of student learning is just one piece. To paraphrase a common expression, if you don’t measure it, it doesn’t get done. Indeed, one can interpret Standards 201 and 202 as still requiring a planning process of some kind, particularly to guide resource allocation.

Still, much of the way that some schools engage in strategic planning is wasteful and ineffective. Often, the planning cycle takes years and results in a beautiful, glossy brochure (complete with photos of happy students and faculty) that sits on the shelf. I’m much more a fan of quick-and-dirty strategic planning: efficiently setting goals and action items that can be accomplished over a relatively short time horizon. What matters is not the product (the glossy brochure) but the process, one that is nimble, updated often, used to guide the allocation of resources, and relied upon as a self-accountability tool. (Here, I have to confess, my views have evolved since serving on the Strategic Priorities Review Team of our University. I now see much more value in the type of efficient planning I have described.)

In this respect, strategic planning and learning outcomes assessment should share an emphasis on process, not product. Some of the assessment reports generated by schools for regional accreditation are truly works of art, but what is being done with the information? That, to me, is the ultimate test of the value of both processes.

What is the point of curriculum mapping?

Curriculum mapping is the process of identifying where in a school’s curriculum each of its learning outcomes is being taught and assessed. We recently posted our curriculum maps on our assessment webpage, including the survey instrument we used to collect data from faculty.

Curriculum mapping was a big discussion item at an assessment conference in Boston last spring, and understandably so. But, to be clear, curriculum mapping is not itself assessment. Rather, it is a tool that assists with the programmatic assessment process and that furthers curricular reform.

Mapping is not assessment in the programmatic sense because even the best of curriculum maps will not show whether, in fact, students are learning what we want them to learn. Curriculum mapping helps with assessment because it enables an assessment committee to identify where in the curriculum to look for particular evidence (“artifacts” in the lingo) of student learning.

It also helps with curricular reform in two ways:

  • by enabling a faculty to plug holes in the curriculum.  If an outcome has been identified as desirable but it is not being taught to all or most students, a new degree requirement could be created. Our school did this with negotiation. We had identified it as a valuable skill but realized, through a curriculum mapping exercise done several years ago, that it was not being taught to a sufficient number of students. We then created a 1L course specifically on negotiation and other interpersonal skills.
  • by enabling smarter sequencing of degree requirements. In theory, advanced instruction should build upon introductory instruction. A curriculum map helps show the progression within each outcome, from introduction to competence to advanced work (see the sketch following this list).
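
To make both uses concrete, here is a toy sketch of how a curriculum map, once reduced to data, can surface each kind of problem. The outcomes, courses, and coverage levels below are hypothetical placeholders, not our school's actual map.

```python
# A toy illustration of curriculum-map gap analysis and sequencing checks.
# The outcomes, courses, and levels are hypothetical, not an actual map.
curriculum_map = {
    # outcome: {course: level}, where level is "introduced", "competent", or "advanced"
    "Legal analysis and reasoning": {"Contracts": "introduced",
                                     "Evidence": "competent",
                                     "Advanced Legal Writing": "advanced"},
    "Negotiation": {},  # identified as an outcome, but not yet taught anywhere
    "Client counseling": {"Clinic": "advanced"},
}

# Gap analysis: outcomes that no course currently teaches.
gaps = [outcome for outcome, courses in curriculum_map.items() if not courses]
print("Outcomes with no coverage:", gaps)

# Sequencing check: outcomes taught only at the advanced level, with no
# introductory coursework for that work to build on.
for outcome, courses in curriculum_map.items():
    levels_present = set(courses.values())
    if "advanced" in levels_present and "introduced" not in levels_present:
        print(f"'{outcome}' jumps to advanced work without an introduction")
```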

Overall, I hope that schools put serious thought into curriculum mapping, while also recognizing that it is not the end of assessment … but instead the beginning.