Moving to Best Practices in Online Learning

By now, all law schools are—or are about to be—fully online. My sense is that professors are mostly teaching synchronously (i.e., live) using Zoom, Webex, or some other platform. In this respect, faculty are trying their best to replicate the in-person law school classroom; in some cases, this even includes cold calling on students to recite cases. Class sizes remain as they were before the Coronavirus hit. That means that some professors are teaching in this format to 60, 80, or even 100 students.

To experts in online teaching and learning, this is not how any of this is supposed to work. Successfully moving a class online requires time, effort, and training, as this helpful post describes. And it typically requires smaller class sizes than we’re used to in legal education, although experts acknowledge that there is no magic number.

A common misconception about moving a course to an online format is that the professor merely delivers the same content in the same manner as a face-to-face classroom … just on video. Not so. Professors new to online learning often view their teaching—incorrectly—through the “lens” of face-to-face classes. In actuality, adapting a course for online learning requires starting anew with course objectives and working backwards from those outcomes the professor hopes to achieve. The steps in students’ development become “modules” (typically on a week-by-week schedule) that are thoughtfully designed to achieve smaller outcomes, like guideposts on a trail. The course is not a march through a casebook.

Each module uses the learning tools that are best designed to achieve that module’s objectives. A module may include a synchronous discussion (as Nina Kohn [Syracuse] persuasively notes) or not. Asynchronous activities, which can be completed at different times in the week, can also be effective. Pre-recorded video lectures (of no more than 15-20 minutes, since attention spans wander after that point), quizzes, discussion posts, writing assignments, readings, and creative projects (such as students creating their own narrated PowerPoints on a subject) are all arrows in the professor’s asynchronous instructional quiver. Many of these activities are themselves formative or summative assessments of students’ progress in meeting the outcomes of the modules and, in turn, the course. The key is to make deliberate, strategic choices about what mode of delivery (synchronous or asynchronous) works best for a particular module or course. Often the best courses will include a blend of the two approaches.

So let us be clear: what we’re doing right now is not best practices; it is a band-aid. It is an emergency response that allows us to continue instruction in some format so that our students do not lose the entire semester.

We have a responsibility, however, to do better in the weeks ahead, not just to satisfy accreditors but as part of our obligation to provide a quality education to our students. We should embrace a culture of continuous quality improvement. This is especially true if the current crisis continues for a significant period and requires law schools to remain fully online into the Fall semester.

The Short-Term

There are things we can do in the short-term to improve teaching and learning this semester. In the coming weeks, we should:

  • Encourage faculty who are just recording and posting lectures to move to synchronous delivery so that students can ask real-time questions and otherwise participate actively.
  • Encourage faculty who are already meeting synchronously to adopt active, rather than passive, learning activities in their virtual classrooms. Most videoconference platforms have polling or quizzing features, for instance, which can engage all students, not just those who are participating in a discussion. Get out of the mindset of looking at your online course through a face-to-face lens.
  • Encourage small group interactions to spur discussion and participation. Both Zoom and Webex have “breakout” rooms, and I have become a big fan of using them to spur think-pair-share type exercises. Like polling, breakout sessions turn students into active, rather than passive, participants.
  • Encourage faculty to supplement synchronous sessions with optional, asynchronous exercises, such as quizzes or discussion boards. I recommend making them optional at this point, since we do not want to overwhelm students, who are also adapting to new circumstances.
  • Make ourselves reasonably available for “office hours” through drop-in Zoom or Webex sessions. Not only does this foster personal connection during this time of isolation, but these meetings also give students an opportunity to clarify points they may have missed during pre-recorded lectures or live videoconferences.
  • Help each other.  Having all been thrown into the deep end of the pool at the same time, we are all learning how to swim, some better than others. We can help each other stay afloat. Faculty should share tips, suggestions, and solutions to common problems with one another. As an example, several of us at my school have been struggling to play videos through our videoconference platform because of “lag” issues. Last night, an adjunct professor came up with an ingenious, but low-tech, solution. He pointed his laptop’s webcam at a secondary monitor, moved the laptop close to that monitor, and played the video on the secondary monitor at full volume. It wasn’t perfect, but it allowed students to hear and see the video with decent quality.

I recognize that this is a difficult time for many law professors. Some may themselves be dealing with illness or adjusting to disruption at home. Some of my colleagues now have two full-time jobs: teaching and taking care of school-aged children. Therefore, in my list above, I use words like “try,” “encourage,” and “reasonably available,” not “must” or “require.” The suggestions I have offered must be adapted to each individual faculty member’s circumstances and capabilities at this time.

Nevertheless, to the extent we are able, we should all be thinking about how to get better at teaching in this new format over the next few weeks. For some, that may mean exploring the many asynchronous tools at our disposal. For others, it may mean trying to facilitate more discussion and less lecture. Some of the suggestions I list above require no additional time on the professor’s part. If we take these steps, the teaching taking place at the end of April should be better than that being offered now.

The Long-Term

While we are all hopeful that things will get back to normal soon, law schools must nevertheless plan for the possibility that online teaching will need to continue into the Fall (depending, of course, on how the virus progresses). What we can get away with doing now on an emergency basis is not going to be acceptable to either our accreditors or our students by the Fall. They can, and should, expect a higher quality of instruction from us given the lead time we have before the next semester. Here, time is on our side, but only if we begin planning now. The key, in my view, is training. Faculty need training on best practices for online teaching and learning. The Quality Matters Standards and Rubric are a great place to start in terms of where we need to be (eventually) with online learning. The summer months provide an opportunity for such training and development.

Of course, someone has to develop and deliver such training. Law schools that are attached to universities can take advantage of courses that likely already exist through their universities’ centers for teaching and learning. Typically those courses cover Blackboard/Canvas, online pedagogy, and similar topics. Standalone law schools should consider working together to develop such courses. Training law faculty, full-time and adjunct, on a massive scale to deliver “best practices” online teaching will require a great deal of planning to prioritize and roll out development opportunities in a smart way. That planning should take place now, even if the medium-term future is uncertain. I’ll likely have more to say, in a future blog post, on what that planning should look like.

Something else to consider for the Fall is section size. In the academic research, 20 is the recommended outer limit for class size, and that number is for undergraduate courses. Graduate courses typically have smaller enrollments. Class size affects a professor’s ability to provide interactive experiences and feedback.  The more interactive a course is, the better the learning experience for students. Faculty can successfully provide an interactive experience to only a limited number of students. This may require reconsideration of the class schedule, such as cutting back on electives so that additional, smaller sections of required or core courses can be offered while a law school is fully online.

Needless to say, these are difficult times for everyone. Adopting a continuous improvement mindset, though, will benefit all of our stakeholders, especially our students.

Assessment in a Time of Coronavirus and Closed Campuses

Today is March 15, 2020, and, by now, most law schools have either announced a transition to fully online teaching or set a date when they will begin doing so.  Although many schools have said that this situation is temporary and will last for no more than a few weeks, my personal prediction is that most schools will not resume face-to-face teaching this semester.  This post invites faculty and administrators to think now about the consequences for assessment during this challenging time.

Although my usual interest is in programmatic assessment, here I am writing specifically about course-based assessment.  On the one hand, the next six weeks or so may be an opportunity for faculty to provide more formative assessments to students, such as low-stakes quizzes, essays, and discussion posts. Such activities are a way to keep students engaged with the material.

However, there is a looming assessment issue that will require some attention sooner rather than later: how to engage in the typical end-of-semester summative assessments, such as final exams and, for skills classes, final activities. The questions that a law school must answer are several and complex: Continue reading

Student Services and Assessment

The word “assessment” is often used in the context of students’ academics: measuring the learning that takes place in formal coursework. Let me suggest, though, that there are ways that administrators and staff who work in student services can both engage in their own assessment and help the academic side of the house with learning outcomes assessment.

(I’m speaking on this topic at the AALS Annual Meeting on Saturday, January 4, at 2:45 pm in Washington 1, if you’re interested in hearing more. My slides are attached here.)

Law schools employ many types of professionals besides faculty. A given law school may also have Continue reading

Exam Wrappers and Self-Assessment

As reported today on TaxProf, Professor Sarah Schendel (Suffolk) has a new article on SSRN, “What You Don’t Know (Can Hurt You): Using Exam Wrappers to Foster Self-Assessment Skills in Law Students.”  She describes exam wrappers as a “one page post-exam exercise” that has students self-assess their “exam preparation and exam taking skills” and prompts them “to consider changes to their techniques.”  Exam wrappers have been used in a number of disciplines, including physics, chemistry, and second language acquisition; however, they are not widespread in legal education.

Suskie: Course vs. Program Learning Goals

Linda Suskie has a new blog post up about the difference between course and program learning goals.  She begins by cutting through some of the jargon and vocabulary to summarize learning goals as:

Learning goals (or whatever you want to call them) describe what students will be able to do as a result of successful completion of a learning experience, be it a course, program or some other learning experience. So course learning goals describe what students will be able to do upon passing the course, and program learning goals describe what students will be able to do upon successfully completing the (degree or certificate) program.

I encourage readers to check out the full post from Ms. Suskie.

Vollweiler: Don’t Panic! The Hitchhiker’s Guide to Learning Outcomes: Eight Ways to Make Them More Than (Mostly) Harmless

Professor and Associate Dean Debra Moss Vollweiler (Nova) has an interesting article on SSRN entitled, “Don’t Panic! The Hitchhiker’s Guide to Learning Outcomes: Eight Ways to Make Them More Than (Mostly) Harmless.”  Here’s an excerpt of the abstract:

Legal education, professors and administrators at law schools nationwide have finally been thrust fully into the world of educational and curriculum planning. Ever since ABA Standards started requiring law schools to “establish and publish learning outcomes” designed to achieve their objectives, and requiring how to assess them debuted, legal education has turned itself upside down in efforts to comply. However, in the initial stages of these requirements, many law schools viewed these requirements as “boxes to check” to meet the standard, rather than wholeheartedly embracing these reliable educational tools that have been around for decades. However, given that most faculty teaching in law schools have Juris Doctorate and not education degrees, the task of bringing thousands of law professors up to speed on the design, use and measurement of learning outcomes to improve education is a daunting one. Unfortunately, as the motivation to adopt them for many schools was merely meeting the standards, many law schools have opted for technical compliance — naming a committee to manage learning outcomes and assessment planning to ensure the school gets through their accreditation process, rather than for the purpose of truly enhancing the educational experience for students. … While schools should not be panicking at implementing and measuring learning outcomes, neither should they consign the tool to being a “mostly harmless” — one that misses out on the opportunity to improve their program of legal education through proper leveraging. Understanding that outcomes design and appropriate assessment design is itself a scholarly, intellectual function that requires judgment, knowledge and skill by faculty can dictate a path of adoption that is thoughtful and productive. This article serves as a guide to law schools implementing learning outcomes and their assessments as to ways these can be devised, used, and measured to gain real improvement in the program of legal education.

The article offers a number of recommendations for implementing assessment in a meaningful way:

  1. Ease into Reverse Planning with Central Planning and Modified Forward Planning
  2. Curriculum Mapping to Ensure Programmatic Learning Outcomes Met
  3. Cooperation Among Sections of Same Course and Vertically Through Curriculum
  4. Tying Course Evaluations to Learning Outcomes to Measure Gains
  5. Expanding the Idea of What Outcomes Can be for Legal Education
  6. Better use of Formative Assessments to Measure
  7. Use of the Bar Exam Appropriately to Measure Learning Outcomes
  8. Properly Leverage Data on Assessments Through Collection and Analysis

I was particularly interested in Professor Vollweiler’s point in her third recommendation.  Law school courses and professors are notoriously siloed.  Professors teaching the same course will use different texts, have varying learning outcomes, and assess their students in distinct ways.  This presents challenges in looking at student learning at a more macro level.  Professor Vollweiler effectively dismantles arguments against common learning outcomes.  The article should definitely be on Summer reading lists!

The Value of Sampling in Assessment

I just returned from the biennial ABA Associate Deans’ Conference, which is a fun and rewarding gathering of associate deans of academics, student affairs, research, administration, and other similar roles.  (Interestingly, more and more associate deans seem to have assessment in their titles.)

I spoke on a plenary panel about assessment, and I discussed the value of sampling in conducting programmatic assessment.  I wanted to elaborate on some of my thoughts on the subject.

Let’s say a school wants to assess the extent to which students are meeting the learning outcome of writing.  One way to do so would be to conduct what is called a “census,” in which every student’s writing in a course or sequence is evaluated by an assessment committee.  In a small LL.M. or Juris Master’s program of 10 or 20 students, this might be feasible.  But in a school of, say, 900 J.D. students, it is not workable.

A more feasible approach is to use a “sample” — a subset of the larger group.  So instead of reviewing 900 papers, perhaps the committee might look at 50 or 100.  If the sample is properly constructed, it is permissible to extrapolate the results and draw conclusions about the larger population.
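
To make that concrete, here is a minimal sketch in Python of the worst-case margin of error at 95% confidence, using the standard formula for a proportion with a finite population correction. The numbers are the hypothetical ones from above: a population of 900 and samples of 50 or 100.

    import math

    def margin_of_error(sample_n, population_n, z=1.96, p=0.5):
        # Worst-case (p = 0.5) margin of error for a proportion,
        # with a finite population correction for small populations.
        standard_error = math.sqrt(p * (1 - p) / sample_n)
        fpc = math.sqrt((population_n - sample_n) / (population_n - 1))
        return z * standard_error * fpc

    for n in (50, 100):
        print(f"n={n}: about +/-{margin_of_error(n, 900):.1%}")
    # Prints roughly +/-13.5% for n=50 and +/-9.2% for n=100.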

Sometimes using a census is workable, even for a large group.  For example, if faculty who teach a subject all agree to embed 10 of the same multiple choice questions in their final exam, those results could be analyzed to see how the students performed on the material being tested.
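
As a minimal sketch of that analysis (the tallies below are purely hypothetical, and the 70% flagging threshold is just for illustration):

    # Hypothetical tallies: for each embedded question, how many of the
    # 900 students across all sections answered correctly.
    correct_counts = {"Q1": 810, "Q2": 540, "Q3": 693}  # illustrative only
    total_students = 900

    for question, correct in correct_counts.items():
        pct = correct / total_students
        flag = "  <-- revisit coverage of this topic" if pct < 0.70 else ""
        print(f"{question}: {pct:.0%} correct{flag}")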

Frequently, though, we are assessing something, like writing, that does not lend itself easily to embedded multiple choice questions or other easy-to-administer forms of assessment.  That’s where sampling comes in.  The key is to construct a representative sample of the larger population.  Here are some tips for doing so:

  • Consider, first, what you will be assessing.  Are you reviewing two-page papers?  Ten-page memos?  Thirty-page appellate briefs?  Fifteen-minute oral arguments in a moot court exercise?  Each of these will call for different time commitments on the part of your reviewers.  Next, take into account how many reviewers you will have.  The more reviewers, the more documents you’ll be able to assess.  Consider, also, that you’ll likely need multiple reviewers per item being assessed, and time should be allotted for the reviewers to “calibrate” expectations.  All of this will give you an idea of how much time each reviewer will need per document or other item under review.
  • In general, the larger the sample size, the better.  Statistically, this is a matter of “margin of error” and “confidence interval,” as the sketch above illustrates.  For more on picking a sample size, check out this very helpful article from Washington State University.  A quick rule of thumb is a minimum of 10 students or 10% of the population, whichever is greater.
  • It is preferable for those doing the assessment not to be involved with picking the sample itself.  Here’s where having an assessment or data coordinator can be helpful.  Most times, a sample can be collected at random.  Online random number generators can be of help here.  There are suggestions for simplifying this process in the document I linked to above.
  • Once you have selected your sample size and identified those who will be in the sample, make sure the sample is representative.  For example, if your population is composed of 60% women and 40% men, the sample should approximate this breakdown as well.  I like to look, too, at the average LSAT and UGPA of the groups, as well as law school GPA, to make sure we’ll be assessing a sample that is academically representative of the larger population.  (The sketch after this list puts the sampling steps together.)
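
Putting these tips together, here is a minimal sketch assuming a hypothetical roster in which each student record carries the fields used for the representativeness check; the roster values below are randomly generated placeholders, not real data.

    import math
    import random
    import statistics

    def pick_sample(roster, seed=None):
        # Rule of thumb from above: at least 10 students or 10% of the
        # population, whichever is greater, drawn at random.
        n = max(10, math.ceil(len(roster) * 0.10))
        return random.Random(seed).sample(roster, n)

    def compare_averages(sample, roster, field):
        # Rough representativeness check: sample mean vs. population mean.
        sample_avg = statistics.mean(s[field] for s in sample)
        population_avg = statistics.mean(s[field] for s in roster)
        print(f"{field}: sample {sample_avg:.2f} vs. population {population_avg:.2f}")

    # Hypothetical roster of 900 J.D. students.
    roster = [
        {"id": i,
         "lsat": random.randint(148, 170),
         "ugpa": round(random.uniform(2.8, 4.0), 2)}
        for i in range(900)
    ]

    sample = pick_sample(roster, seed=2020)  # seed makes the draw reproducible
    print(f"Sample size: {len(sample)} of {len(roster)}")
    for field in ("lsat", "ugpa"):
        compare_averages(sample, roster, field)

The same comparison works for categorical breakdowns, such as the gender split mentioned above, by comparing sample and population proportions.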

In the assessment projects I have worked on, I have found sampling to be an effective way to make assessment easier for faculty who have a lot of competing demands on their time.  Some additional resources for sampling are:

Dean Vikram Amar on Constructing Exams

Dean Vikram Amar (Illinois) has an excellent post on Above the Law about exam writing.  He offers four thoughts based on his experience as a professor, associate dean, and dean.  First, Dean Amar talks about the benefits of interim assessments:

Regardless of how much weight I attach to midterm performance in the final course grade, and even if I use question types that are faster to grade than traditional issue spotting/analyzing questions — e.g., short answer, multiple-choice questions, modified true/false questions in which I tell students that particular statements are false but ask them to explain in a few sentences precisely how so — the feedback I get, and the feedback the students get, is invaluable.

Second, Dean Amar articulates an argument in favor of closed book exams: Continue reading

What This Professor Learned by Becoming a Student Again

For the past year, I have been a student again.  Once I finish a final paper (hopefully tomorrow), I will be receiving a Graduate Certificate in Assessment and Institutional Research from Sam Houston State University.

I enrolled in the program at “Sam” (as students call it) because I wanted to receive formal instruction in assessment, institutional research, data management, statistics, and, more generally, higher education.  These were areas where I was mainly self-taught, and I thought the online program at Sam Houston would give me beneficial skills and knowledge.  The program has certainly not disappointed.  The courses were excellent, the professors knowledgeable, and the technology flawless.  I paid for the program out of pocket, and it was worth every penny.  It has made me better at programmatic assessment and institutional research.  (I also turned one of my research papers into an article, which just came out this week.)

But the program had another benefit: It has made me a better teacher. Continue reading