Assessment is Up, Standardized Tests are Down

A new study from the Association of American Colleges and Universities (AAC&U) found that 87% of colleges and universities are assessing student learning across the curriculum, and another 11% plan to do so. The remaining 2%, well, may be in hot water with their accreditors. In addition, 85% reported having a common set of learning outcomes across all undergraduate programs, up from 78% in 2008.

An AAC&U official, Debra Humphreys, gave credit to the accreditors for this increase.

"If they had not been pushing, these numbers would not be like this."

On the other hand, fewer institutions are using standardized testing to assess learning in general education (down to 38% from 49% in 2008).  Instead, they are using rubrics to a greater extent (up from 77% to 91%), a recognition that faculty prefer to use assessments that they develop themselves.

Read more about the AAC&U study in this story on Inside Higher Ed.

Links to ABA resources added

Under the Resources tab, I added a page with links to ABA resources, including the full text of the new ABA standards on outcomes and assessment, the Managing Director’s memo on the subject, and the “legislative history” behind Standards 301, 302, 314, and 315.

The Maddening Lingo of Assessment

“Best practices.” “Stakeholders.” Enrollment management “levers.” Higher education has a lingo all its own. (For many legal educators, “assessment” may well be added to the list of “higher ed speak,” terms crafted by bureaucrats to ensure job security. As I’ve explained elsewhere, I see assessment as valuable and connected to our role as scholars.)

One of the best pieces of advice I heard about developing an assessment culture was to avoid getting caught up in terminology. And the land of assessment certainly has plenty of terminology to throw around. Is something a “goal,” “objective,” or “outcome”? In our curriculum map, do we ask whether an intermediate level of learning is measured by “competence” or “reinforcement”? Is a particular tool a “direct assessment” or an “indirect assessment”?


Why a Blog on Assessment in Legal Education?

When the American Bar Association first began discussing revision of its accreditation standards for the J.D. degree to include a full-blown assessment requirement, I was skeptical. I saw “assessment” as more higher ed-speak with no benefit to students. “We’re already assessing students – we give final exams, writing assignments, and projects, and we track bar passage and career outcomes, right?” Later, as I learned more about assessment—including the differences between course-level and programmatic assessment—I came to the conclusion that, stripped of its at-times burdensome lingo, it was a simple process with a worthy goal: improving student learning through data-driven analysis. The process, I learned, was rooted in a scholarly approach to learning: define outcomes, measure and analyze direct and indirect evidence of student learning, and then use the information learned to improve teaching and learning.

Legal education is one of the last disciplines to adopt an assessment philosophy. Looking at assessment reports from programs, such as pharmacy, that have used assessment for years can be daunting; those programs have come a long way in a relatively short period of time. There is a dearth of information about assessment in legal education, and hence this blog was born. My goal is to bring together resources on law school assessment in one place while also offering my observations and practical insights to help keep assessment from drowning in lingo and endless report writing. I hope readers find it valuable.