Thoughts on Assessing Communication Competencies

The ABA Standards set forth the minimum learning outcomes that every law school must adopt. These include “written and oral communication in the legal context.”

“Written communication” as a learning outcome is “low-hanging fruit” for law school assessment committees. For several reasons, it is an easy place to begin assessing student learning at the program level:

  1. Per the ABA standards, there must be a writing experience in both the 1L year and at least one upper-level semester. (Some schools, such as ours, have several writing requirements.) This provides many opportunities to track student growth by assessing the same students’ work as 1Ls and again as 2Ls or 3Ls. In theory, there should be improvement over time!
  2. Writing naturally generates “artifacts” to assess. Unlike other competencies, which may require creating special, artificial exams or other assessments, legal writing courses already produce several documents per student to examine.
  3. Legal writing faculty are a naturally collaborative group, if I do say so myself! Even in schools without a formal structure (so-called “directorless” programs), my experience is that legal writing faculty work together on common problems/assignments, syllabi, and rubrics. This allows for assessment across sections. Because of the nature of their courses, legal writing faculty also tend to give a lot of thought to assessment generally.

Oral communication is another matter; it is a more difficult outcome to assess. Apart from a first-year moot court exercise, most schools don’t have required courses in oral skills, although that may be changing with the ABA’s new experiential learning requirement. Still, I think there are some good places in the curriculum to look for evidence of student learning on this outcome. Trial and appellate advocacy courses, for example, require students to demonstrate the skill extensively, although at some schools only a few students take advantage of those opportunities. Clinics are a goldmine, as are externships. For these courses, surveying faculty about students’ oral communication skills is one way to gather evidence of student learning, but it is an indirect measure. A better approach is to use common rubrics for particular assignments or experiences. For example, after students appear in court on a clinic case, the professor could rate them with a commonly applied rubric. Those rubrics could be used both to grade the individual students and to assess student learning more generally.

Note-Taking Advice to My Evidence Students

I recently sent out an e-mail to students in my Evidence class, sharing my views on classroom laptop bans and note taking.  In the past, I’ve banned laptops, but I’ve gone back to allowing them. As with most things in the law, the question is not the rule but who gets to decide the rule. Here, with a group of adult learners, I prefer a deferential approach. I’m also cognizant that there may be generational issues at play and that what would work for me as a student might not work for the current generation of law students. So, I’ve taken to offering advice on note taking, a skill that must be honed like any other.

Dear Class:

As you begin your study of the law of Evidence, I wanted to offer my perspective on note taking.  Specifically, I’d like to weigh in on the debate about whether students should take notes by hand or using a laptop.

As you will soon find out, Evidence is a heavy, 4-credit course. Our class time—4 hours per week—will be spent working through difficult rules, cases, and problems.  Classes will build on your out-of-class preparation and will not be a mere review of what you’ve read in advance.  Thus, it is important that the way you take notes helps, not hurts, your learning.

The research overwhelmingly shows that students retain information better when they take notes by hand rather than by computer. This article has a nice summary of the literature: http://www.npr.org/2016/04/17/474525392/attention-students-put-your-laptops-away. The reason handwriting is better is pretty simple: because it’s impossible to write down every word said in class, you are forced to process and synthesize the material. When you type, in contrast, you tend to function like a court reporter, trying to take down every word. Laptops also present a host of distractions: e-mail, chat, the web, and games are all there to tempt you away from the task at hand, which is to stay engaged with the discussion. I’ve lost count of the number of times I’ve called on a student engrossed in his or her laptop, only to get “Can you repeat the question?” in response.

Of course, it’s possible to be distracted without a computer, too. Crossword puzzles, a buzzing cell phone, daydreaming, or the off-topic computer use of the person in front of you can all pull your attention away. And handwriting has its own drawback: handwritten notes are harder to integrate with your outline and other materials.

If I were you, I would handwrite my notes. But I’m not you. You are adults and presumably know best how you study and retain information. For this reason, I don’t ban laptops. But if you choose to use a laptop or similar device to take notes, I have some additional suggestions. Look into distraction-avoidance software, such as FocusMe, Cold Turkey, or Freedom; these programs block distracting apps like web browsers and text messaging. Turn off your wireless connection. Turn down your screen brightness so that your on-screen activity doesn’t distract those behind you. Of course, turn off the sound. And learn to type quietly so you’re not annoying your neighbors with the clickety-clack of your keys.

Finally, and most importantly, think strategically about what you type. You don’t need a written record of everything said in class. Indeed, one of the reasons I record all of my classes and make the recordings available to registered students is so you don’t have to worry about missing something; you can always go back and rewatch a portion of class that wasn’t clear to you. It’s not necessary to type background material, such as the facts of a case or the basic rule. Most of that should already be in the notes you took while reading the materials for class (you are taking notes as you read, right?). Instead of keeping a separate set of class notes, integrate your class notes with what you wrote as you prepared: synthesize what is said in class about the cases and problems with what you’ve already written about them. Try your best to take fewer notes, not more. Focus on listening and thinking along with the class discussion. Sometimes less is more.

Above all else, do what works best for you.  If you’ve done well on law school exams while taking verbose notes, by all means continue doing so.  But, if you’ve found yourself not doing as well as you’d like, now is the time to try a new means of studying and note-taking.  You may be pleasantly surprised by experimenting with handwritten notes or, if you do use a laptop, adapting your note-taking style as suggested above.

Of course, please let me know if you’d like further advice.  I’m here to help you learn.

Regards,

Prof. Cunningham

What Law Schools Can Learn about Assessment from Other Disciplines

I spent the last few days at the ExamSoft Assessment Conference, where I gave a presentation on assessment developments in legal education. It was great to see colleagues from other law schools there. I spent much of my time attending presentations about how other disciplines use assessment, and I was particularly impressed by what the sciences are doing, especially nursing, pharmacy, physical therapy, podiatry, and medicine. I came away from the conference with the following takeaways about how these other disciplines use assessment:

  • They use assessment data to improve student learning, at both the individual and the macro level. They are less focused on using assessments to “sort” students along a curve for grading purposes. Driven in part by their accreditors, the sciences use assessment data to help individual students recognize their weaknesses and, by graduation, reach the level expected for eventual licensure, sometimes through remediation. They also use assessment data to drive curricular and teaching reform.
  • They focus on the validity and reliability of their summative assessments. This is probably not surprising, since scientists are trained in the scientific method and are, by nature, comfortable with data and statistics. They use item-analysis reports (see the next bullet) and rubrics (for essays) to ensure that their assessments are effective and that their grading is reliable. Assessments are reused and improved over time, so a lot of effort goes into exam security.
  • They utilize item-analysis reports to improve their assessments over time. These reports show statistics such as the KR-20 score and point-biserial coefficients, which help assess the quality of individual test items and of the exam as a whole. Most scoring systems, such as Scantron and ExamSoft, can generate them. (For the curious, the first sketch after this list shows how both statistics are computed.)
  • They utilize multiple formative assessments within courses.
  • They collect a lot of data on students.
  • They cooperate and share assessments across sections and professors. It is not uncommon for there to be a single, departmentally approved exam for a particular course. Professors teaching multiple sections of a course collaborate on writing the exam against a common set of learning outcomes.
  • They categorize and tag questions to track student progress and to assist with programmatic assessment. (In law, this could work as follows: questions could be tagged to programmatic learning outcomes [such as knowledge of the law] and to content outlines [e.g., in Torts, a question could be tagged as testing Battery].) This allows them to generate reports that show how students perform over time on a particular outcome or topic; the second sketch after this list illustrates the idea.
  • They debrief assessments with students, using the results to help students learn how to improve, even after the course is over. Here, categorization of questions is important. (I started doing this in my Evidence course. I tagged multiple-choice questions as testing hearsay, relevance, privilege, etc. This allowed me to generate reports out of Scantron ParScore that showed (1) how the class, as a whole, did on each category and (2) how individual students did on each category. In turn, I’ll be able to use the data to improve my teaching next year.)
  • They utilize technology, such as ExamSoft, to make all of this data analysis and reporting possible.
  • They have trained assessment professionals to assist with the entire process. Many schools have assessment departments or offices that can set up assessments and reports. Should we rethink the role of faculty support staff? Should faculty assistants move away from traditional secretarial functions and toward assisting faculty with assessments? What training would be required?
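
Because statistics like these are easier to grasp in concrete form, here is a minimal sketch, in Python with NumPy, of the two item-analysis statistics mentioned above. It assumes dichotomously scored (right/wrong) questions; the response matrix is hypothetical rather than an export from any real scoring system, and systems differ in small conventions, such as the variance denominator used for KR-20.

    import numpy as np

    # Hypothetical response matrix: rows = students, columns = exam questions;
    # 1 = correct, 0 = incorrect. Real data would come from your scoring system.
    responses = np.array([
        [1, 1, 0, 1, 1],
        [1, 0, 0, 1, 1],
        [0, 1, 1, 1, 0],
        [1, 1, 1, 1, 1],
        [0, 0, 0, 1, 0],
        [1, 1, 0, 0, 1],
    ])

    def kr20(resp):
        """KR-20: internal-consistency reliability for dichotomously scored items."""
        k = resp.shape[1]                         # number of items
        p = resp.mean(axis=0)                     # proportion correct on each item
        q = 1.0 - p
        total_var = resp.sum(axis=1).var(ddof=1)  # variance of students' total scores
        return (k / (k - 1)) * (1.0 - (p * q).sum() / total_var)

    def point_biserial(resp):
        """Corrected item-total correlation: each item (0/1) correlated with the
        total score on the remaining items, so an item's own point does not
        inflate its discrimination."""
        totals = resp.sum(axis=1)
        return [np.corrcoef(resp[:, i], totals - resp[:, i])[0, 1]
                for i in range(resp.shape[1])]

    print(f"KR-20: {kr20(responses):.2f}")
    for i, r in enumerate(point_biserial(responses), start=1):
        print(f"Question {i}: point-biserial = {r:.2f}")

By commonly cited rules of thumb, a KR-20 of roughly 0.7 or higher is acceptable for a classroom exam, and an item whose point-biserial is near zero or negative is a candidate for review or rewriting.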
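
Here is a second sketch, this time of the tagging idea: once each question carries a topic tag, the same scored answers can be rolled up by category for the class as a whole and for each student. The tags, students, and scores below are hypothetical; systems like ParScore and ExamSoft generate these reports for you, but the underlying computation is no more than this.

    from collections import defaultdict

    # Hypothetical tags: question number -> topic on the Evidence content outline.
    tags = {1: "hearsay", 2: "relevance", 3: "hearsay", 4: "privilege", 5: "relevance"}

    # Hypothetical scored answers: student -> {question: 1 (correct) or 0}.
    results = {
        "Student A": {1: 1, 2: 1, 3: 0, 4: 1, 5: 1},
        "Student B": {1: 0, 2: 1, 3: 0, 4: 1, 5: 0},
    }

    def category_report(results, tags):
        """Return percent correct per tag, class-wide and per student."""
        class_correct = defaultdict(int)
        class_total = defaultdict(int)
        per_student = {}
        for student, answers in results.items():
            correct, total = defaultdict(int), defaultdict(int)
            for q, got_it in answers.items():
                cat = tags[q]
                correct[cat] += got_it
                total[cat] += 1
                class_correct[cat] += got_it
                class_total[cat] += 1
            per_student[student] = {c: correct[c] / total[c] for c in total}
        class_wide = {c: class_correct[c] / class_total[c] for c in class_total}
        return class_wide, per_student

    class_wide, per_student = category_report(results, tags)
    print("Class-wide:", class_wide)
    for student, scores in per_student.items():
        print(student, scores)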

Incidentally, I highly recommend the ExamSoft Assessment Conference, whether or not one is at an “ExamSoft law school.” (Full disclosure: I, like all speakers, received a very modest honorarium for my talk.) The conference was full of useful, practical information about teaching, learning, and assessment. ExamSoft schools can also benefit from learning about new features of the software.

Off topic: WaPo op-ed on access to justice

Not directly assessment-related, but I thought I would share that Jennifer Bard (Cincinnati) and I have an op-ed in the Washington Post about access to justice. Drawing on an analogy to medicine, we argue:

Professionals must first acknowledge that not every legal task must be performed by a licensed lawyer. Instead, we need to adopt a tiered system of legal-services delivery that allows for lower barriers to entry. Just as a pharmacist can administer vaccines and a nurse practitioner can be on the front line of diagnosing and treating ailments, we should have legal practitioners who can also exercise independent judgment within the scope of their training. Such a change would expand the preparation and independence of the existing network of paralegals, secretaries and investigators already assisting lawyers.

This creates greater, not fewer, opportunities for law schools, which should provide a range of educational opportunities, from short programs for limited license holders to Ph.D.’s for those interested in academic research.

Enjoy the article!

Suskie: How to Assess Anything Without Killing Yourself … Really!

Linda Suskie (former VP, Middle States Commission on Higher Education) has posted a great list of common-sense tips about assessment on her blog. They’re based on a book by Douglas Hubbard, How to Measure Anything: Finding the Value of “Intangibles” in Business. My favorites are:

1. We are (or should be) assessing because we want to make better decisions than what we would make without assessment results. If assessment results don’t help us make better decisions, they’re a waste of time and money.

4. Don’t try to assess everything. Focus on goals that you really need to assess and on assessments that may lead you to change what you’re doing. In other words, assessments that only confirm the status quo should go on a back burner. (I suggest assessing them every three years or so, just to make sure results aren’t slipping.)

5. Before starting a new assessment, ask how much you already know, how confident you are in what you know, and why you’re confident or not confident. Information you already have on hand, however imperfect, may be good enough. How much do you really need this new assessment?

8. If you know almost nothing, almost anything will tell you something. Don’t let anxiety about what could go wrong with assessment keep you from just starting to do some organized assessment.

9. Assessment results have both cost (in time as well as dollars) and value. Compare the two and make sure they’re in appropriate balance.

10. Aim for just enough results. You probably need less data than you think, and an adequate amount of new data is probably more accessible than you first thought. Compare the expected value of perfect assessment results (which are unattainable anyway), imperfect assessment results, and sample assessment results. Is the value of sample results good enough to give you confidence in making decisions?

14. Assessment value is perishable. How quickly it perishes depends on how quickly our students, our curricula, and the needs of our students, employers, and region are changing.

15. Something we don’t ask often enough is whether a learning experience was worth the time students, faculty, and staff invested in it. Do students learn enough from a particular assignment or co-curricular experience to make it worth the time they spent on it? Do students learn enough from writing papers that take us 20 hours to grade to make our grading time worthwhile?

New Article on Lessons Learned from Medical Education about Assessing Professional Formation Outcomes

Neil Hamilton (St. Thomas, MN) has a new article on SSRN, Professional-Identity/Professional-Formation/Professionalism Learning Outcomes: What Can We Learn About Assessment From Medical Education? 

Here’s an excerpt from the abstract:

The accreditation changes requiring competency-based education are an exceptional opportunity for each law school to differentiate its education so that its students better meet the needs of clients, legal employers, and the legal system. While ultimately competency-based education will lead to a change in the model of how law faculty and staff, students, and legal employers understand legal education, this process of change is going to take a number of years. However, the law schools that most effectively lead this change are going to experience substantial differentiating gains in terms of both meaningful employment for graduates and legal employer and client appreciation for graduates’ competencies in meeting employer/client needs. This will be particularly true for those law schools that emphasize the foundational principle of competency-based learning that each student must grow toward later stages of self-directed learning – taking full responsibility as the active agent for the student’s experiences and assessment activities to achieve the faculty’s learning outcomes and the student’s ultimate goal of bar passage and meaningful employment.

Medical education has had fifteen more years of experience with competency-based education from which legal educators can learn. This article has focused on medical education’s “lessons learned” applicable to legal education regarding effective assessment of professional-identity learning outcomes.

Legal education has many other disciplines, including medicine, to look to for examples of implementing outcome-based assessment. Professor Hamilton’s article nicely draws on lessons learned by medical schools in assessing professional formation, an outcome that some law schools have decided to adopt.

For professional identity formation in particular, progression is important: the curriculum and assessments must build on each other so that we can see whether students are improving in this area. The hidden curriculum is a valuable place to teach and assess a competency like professional identity formation. But that requires coordination among various silos:

Law schools historically have been structured in silos with strongly guarded turf in and around each silo. Each of the major silos (including doctrinal classroom faculty, clinical faculty, lawyering skills faculty, externship directors, career services and professional development staff, and counseling staff) wants control over and autonomy regarding its turf. Coordination among these silos is going to take time and effort and involve some loss of autonomy but in return a substantial increase in student development and employment outcomes. For staff in particular, there should be much greater recognition that they are co-educators along with faculty to help students achieve the learning outcomes.

Full-time faculty members were not trained in a competency-based education model, and many have limited experience with some of the competencies, for example teamwork, that many law schools are including in their learning outcomes. In my experience, many full-time faculty members also have enormous investments in doctrinal knowledge and legal and policy analysis concerning their doctrinal field. They believe that the student’s law school years are about learning doctrinal knowledge, strong legal and policy analysis, and research and writing skills. These faculty members emphasize that they have to stay focused on “coverage” with the limited time in their courses even though this model of coverage of doctrinal knowledge and the above skills overemphasizes these competencies in comparison with the full range of competencies that legal employers and clients indicate they want.

In my view, this is the greatest challenge in implementing a competency-based model of education in law schools. (Prof. Hamilton’s article has a nice summary of time-based versus competency-based education models.) Most law school curricula are silo-based: at most schools, a required first-year curriculum is followed by a largely unconnected series of electives in the second and third years. Such an environment offers few opportunities for longitudinal study of outcomes. Medical schools, by contrast, have clear milestones at which to assess knowledge, skills, and values for progression and growth.

Database on Law Schools’ Learning Outcomes

The Holloran Center at St. Thomas Law School (MN)—run by Jerry Organ and Neil Hamilton—has created a database of law schools’ efforts to adopt learning outcomes.  The center plans to update the database quarterly.  

One of the most helpful aspects of the database is its coding, which lets a user filter by school and by learning outcomes that go above and beyond the ABA minimum. This will be a terrific resource as schools roll out the new ABA standards on learning outcomes. In addition, for those of us interested in assessment as an area of scholarship, it is a treasure trove of data.

Frustratingly, it looks like many schools have decided not to go beyond the minimum competencies set forth in ABA Standard 302, what the Holloran Center categorizes as a “basic” set of outcomes. The ABA’s list is far from exhaustive. Schools that have essentially copied and pasted from Standard 302 have missed an opportunity to make their learning outcomes their own by incorporating the aspects of their missions that distinguish them from other schools. Worse, it may be a sign that some schools are being dragged into the world of assessment kicking and screaming. On the other hand, it may simply indicate a lack of training, or a belief that the ABA’s minimums fully capture the core learning outcomes every student should attain. Only time will tell. As schools actually begin to assess their learning outcomes, we’ll have a better idea of how seriously law schools are taking assessment.

Assessing the Hidden Curriculum

I was honored to be asked to attend St. Thomas (MN) Law School’s recent conference on professional formation, hosted by St. Thomas’s Holloran Center for Professional Formation, which is co-directed by Neil Hamilton and Jerry Organ. The conference was fascinating and exceptionally well run (I was particularly impressed by how Neil and Jerry integrated students from the Law Journal into the conference as full participants). The two-day conference included a workshop on ways to begin assessing professional formation in legal education. Speakers came from other professional disciplines as well, including medicine and the military.

One of the most important themes was the idea of the “hidden curriculum” in law schools, a phrase used by Professor (and Dean Emeritus) Louis Bilionis of the University of Cincinnati College of Law. The idea is that learning occurs in many forms, not just through professors instilling concepts in a classroom via traditional teaching methods. Students interact with a range of individuals during their legal education, many of whom actively contribute to their education, particularly as to professional formation. Consider:

  • The Career Development Office counselor who advises a student on how to deal with multiple, competing offers from law firms in a professional manner.
  • The Externship supervisor who helps a student reflect on an ethical issue that arose in his or her placement.
  • The secretary of a law school clinic who speaks with a student who has submitted a number of typo-ridden motions.
  • A non-faculty Assistant Dean who works with the Public Interest Law Student Association to put on a successful fundraising event for student fellowships, which involves setting deadlines, creating professional communications to donors, and leading a large staff of volunteer students.
  • The Law School receptionist who pulls a student aside before an interview to help the student get composed.
  • A fellow student who suggests that a classmate could have handled an interaction with a professor in a more professional manner.

These are all opportunities for teaching professional formation, which for many schools is (at least nominally) a learning outcome. But how do we assess such out-of-classroom learning experiences? If professional formation is a learning outcome, schools will need to develop methods of measuring the extent to which it is being learned. Here are some suggestions:

  • Many schools with robust career services programs already assess student satisfaction in this area through student surveys.  They should consider adding questions to determine the extent to which students perceive that their career counselors helped them to become professionals.
  • Embed professional identity questions in final exams in Professional Responsibility and similar courses.
  • Survey alumni.
  • If professional identity is introduced in the first year, assess whether students in the 2L and 3L Externship Program have embodied lessons that were learned in the 1L curriculum.  Site supervisors could be asked, for example, to what extent students displayed a range of professional behaviors.
  • Ask the state bar for data on disciplinary violations for graduates of your school compared to others.

I recognize that many of these are indirect measures. However, if a school has a robust professional identity curriculum (as some do), direct measures can be collected and analyzed. In doing so, schools should not ignore the “hidden curriculum” as a source of evidence of student learning.

2017 Assessment Institute

It’s been a while since my last blog post, for which I’m very sorry! The fall was a busy semester: I was in Cambodia on a Fulbright, and a bug I picked up there knocked me off my feet for a while. But I’m better and back to blogging.

Max Huffman at Indiana University McKinney School of Law alerted me that the annual Assessment Institute will be held this year from October 22-24 in Indianapolis. Law school assessment will be featured on the Graduate Track, which Professor Huffman will co-direct. There is a request for proposals, and it looks like it will be a great event.