I recently had a conversation with a dedicated and innovative colleague who lamented that her students had confirmed “without reservation” the prevailing if cynical conventional wisdom that they put forth little effort to complete assignments without the “stick” of a grade or points assigned to them. She noted her eye-opening disappointment at their “zero points = zero effort” approach to learning. She recalls, prior to polling her students, naively thinking “that might be true for your students but not for mine!” Disheartened “to learn that it is all about the grade for these students,” she has since committed herself to a “new mission – to help them see that the things we do as good teachers can help them” accomplish both durable learning and the grades they hope to achieve.
In many ways, this conversation underscores the continuing debate about the role of assessment in higher education and, by extension, corporate and workplace training. Though there seems to be a consensus among researchers about the merits of frequent, low-stakes formative assessments, many professors and corporate trainers continue to structure their instruction almost exclusively around assignments that prepare their students for success on periodic high-stakes summative assessments like term papers, research projects, and examinations or certifications. Consequently, many students understandably direct their efforts to “peak” a few times each semester, striving only to earn the very best grade possible on the few summative assignments in each of their classes. In this fashion, the very design of such courses, not to mention teaching with these assessments in mind, emphasizes the extrinsic motivation of “grade grubbing” and “getting it over with” over durable learning. Rather than privileging effective critical feedback to increase their intrinsic motivation and to shift student focus from “peaking” when asked to do so, we, as educators and trainers, sometimes succumb to the temptation to fall back upon the dominant practice we experienced throughout our own undergraduate experience. We assign readings and assignments. We teach. We train. We test. It worked for us; it should work for our students and employees, right? The truth is, sometimes it does, but we should nevertheless ask ourselves if this principal reliance upon high-stakes summative assessment results in substantive engagement with and mastery of course learning outcomes and lesson objectives. If not, how might we motivate our students and employees to embrace curiosity and learning as independent learners invested in mastery rather than what Mark Barnes calls the “empty promise of a ‘good’ grade or the threat of a ‘bad’ one”?
Ultimately, incorporating both formative and summative assessment best practices into our course design and instruction enables us to use assessment not only of learning but, more importantly, for learning and as learning. This is not, of course, to dismiss summative assessment as ineffective instruction. As Henry Roediger III and Mark McDaniel note in Make It Stick, “[i]n virtually all areas of learning, you build better mastery when you use testing as a tool to identify and bring up your areas of weakness” (5). Indeed, such dynamic testing “helps us [as learners] to discover our weaknesses and [to] correct them” (151). The key, then, is to transform a traditionally static or passive summative assessment like a “standard” multiple-choice examination into a dynamic test that reinforces durable learning and mastery.
Here, I am reminded of another colleague who has a truly innovative approach to summative assessment. His method transforms a “simple” multiple-choice test into a dynamic instrument that uses assessment as learning. He tends to construct questions with one “correct” answer and others that he knows students might think are correct. The differences between the answers seem slight but are not, as they reinforce critical course concepts and student-learning outcomes. Once he has finished grading, he returns the examinations and gives his students time to prepare rebuttal arguments for their “incorrect” answers. Then, he walks the class through each of the exam questions, not only identifying the “right” answer but also emphasizing critical learning outcomes and allowing students to argue for their answers and at least partial additional credit. This extra step, which makes his students’ learning more effortful and deliberate, makes all the difference in the world: it transforms the test into active learning as it reinforces critical learning outcomes.
Another professor has an equally interesting approach to assessing student learning, reinforcing the value of recurring formative assessment. Last semester, he noted that his students “crammed” for examinations rather than focusing upon learning course material. To eliminate this tendency for “binge and purge learning” in which “a lot goes in but most of it comes right back out in short order,” he developed a daily low-stakes quiz with five questions that students should, at least theoretically, have no problem answering (63). The first day, as he expected, his students incorrectly answered a majority of the questions. He duly covered the questions, emphasizing the concepts behind the “correct” answers. The next lesson, in addition to five new questions, he included in that day’s quiz the questions his students had missed in the previous class. He continued this practice for the remainder of the semester. Over time, his students stopped cramming and started learning; indeed, once the students internalized the requisite intrinsic motivation to succeed, he rarely had to carry forward a question from the previous lesson. The same practice applies, of course, to corporate training, consistently reinforcing learning outcomes throughout the instruction rather than simply on a certifying examination.
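The carry-forward mechanic behind this daily quiz can be sketched in a few lines of code. What follows is a hypothetical illustration of the logic described above, not the professor's actual system; the function names, the "majority missed" threshold, and the data structures are my own assumptions.

```python
# Sketch of a carry-forward daily quiz: missed questions from the previous
# lesson are prepended to the next lesson's five new questions.

def build_quiz(new_questions, missed_last_time):
    """Combine today's new questions with any questions the class
    missed in the previous lesson."""
    return list(missed_last_time) + list(new_questions)

def grade_quiz(quiz, class_answers, answer_key):
    """Return the questions a majority of students answered incorrectly;
    these are carried forward into the next lesson's quiz."""
    carried = []
    for q in quiz:
        responses = class_answers[q]
        wrong = sum(1 for a in responses if a != answer_key[q])
        if wrong * 2 > len(responses):  # more than half the class missed it
            carried.append(q)
    return carried
```

Over a semester, the `carried` list shrinks as students stop cramming and start retaining material, mirroring the professor's observation that he rarely had to carry a question forward once students internalized the habit.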
For his part, Richard J. Stiggins, the former president of the Assessment Training Institute in Portland, Oregon, insists that effective formative assessment is a “process,” not an “event.” Citing the research of Royce Sadler, an assessment expert and researcher in Australia, Stiggins contends that “we use formative assessment productively when we use it in the instructional context to do three things. One is, keep students understanding the achievement target they’re aspiring to. The second is, use the assessment process to help them understand where they are now in relation to that expectation.” He concludes by advising educators and trainers to “use the assessment process to help students understand how to close the gap between the two.” Tellingly, he identifies “the locus of control” as residing “with the student.”
So, what might such an integrated assessment approach look like? The remainder of this article illustrates this process in an academic environment; a subsequent publication will apply similar precepts to corporate training. First, embracing the mantra of assessment as, of, and for learning, begin the course development and mapping process by identifying and tying specific student learning outcomes directly to specific lesson objectives. (See Figure 1.) If, for example, there are eight primary student-learning outcomes, ensure that you adequately address and assess each at some point (or perhaps at multiple points) throughout the course. In turn, these eight learning outcomes might manifest in two or three times as many lesson objectives specified prior to constructing the syllabus. Having finalized this course learning outcome-to-lesson objective crosswalk, build the syllabus with three specific elements in mind: daily assignments, a series of low-stakes formative assessments, and the traditional higher-stakes summative assessments upon which students tend to focus their attention and effort. Then, create a spreadsheet with five rows (one each for course learning outcomes, lesson objectives, daily assignments, formative assessments, and summative assessments) and columns equal to the number of lessons in the course.
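For readers who prefer to prototype the crosswalk programmatically rather than in a spreadsheet, the five-row structure can be sketched as a simple table in code. The lesson count, outcome labels, and sample entries below are illustrative assumptions, not prescribed values.

```python
# Sketch of the five-row crosswalk matrix: one row per planning element,
# one column per lesson. Labels and lesson count are invented for illustration.

NUM_LESSONS = 30  # assumption: a 30-lesson semester

ROWS = [
    "course_outcomes",
    "lesson_objectives",
    "daily_assignments",
    "formative_assessments",
    "summative_assessments",
]

# Each cell holds the items aligned to that lesson (initially empty).
matrix = {row: [[] for _ in range(NUM_LESSONS)] for row in ROWS}

# Example: align a course outcome, a lesson objective, and a formative
# assessment to lesson 1 (column index 0).
matrix["course_outcomes"][0].append("CLO-1")
matrix["lesson_objectives"][0].append("Distinguish formative from summative assessment")
matrix["formative_assessments"][0].append("exit ticket")
```

Scanning a row then shows at a glance where each outcome is addressed and assessed across the semester, which makes gaps in coverage easy to spot before the syllabus is finalized.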
Next, fill in the matrix, aligning specific course outcomes and lesson objectives to individual daily assignments. After verifying this crosswalk, ensuring that the daily assignments demonstrably speak to the specified course and lesson objectives, turn your attention to identifying the most effective formative assessments to help you quickly and efficiently evaluate whether (or which of) your students have grasped the most salient points of a particular lesson. Such exercises might include having students respond to an entry or exit ticket question, write a minute paper in response to a question posed in class, take a quiz, or complete a parking lot exercise or a poll. Other strategies include having students capture an “a-ha moment” or “muddiest point” on a 3 x 5 card, respond to a discussion board question, or develop a word cloud in response to a quiz or poll question using an online application to identify whether a majority of students have grasped a particular concept. Of course, you need not include such an activity in every lesson, as doing so might lessen their novelty and effectiveness. Instead, you might use the matrix to identify preliminarily an assessment activity for each lesson but deploy it only if necessary. If, for example, your students seem clearly to have grasped one of a lesson’s primary learning objectives, you do not need to stop and verify this fact; you can simply proceed with the lesson. When, however, your intuition, instinct, or evidence suggests that students are lost or struggling with a particular concept, you can pull the formative assessment out of your tool bag, identify what the problem seems to be, and adjust the rest of the lesson accordingly. In this respect, experience teaching a particular class and one’s subject-matter expertise help target particularly difficult lessons with a specific assessment tool.
Thankfully, we teach in exciting times when it comes to educational technology. Dozens of compelling formative assessment applications are available online – many of them free. Some quizzing and polling applications, like Kahoot!, Yacapaca!, PingPong, PollDaddy, and Quizlet, offer immediate feedback about student comprehension and can also be used as an effective entry or exit ticket exercise. Others, like the web-based “parking lot” site, Lino, and word-cloud applications like Wordables, Wordle, Mentimeter, and AnswerGarden, offer innovative ways to collect and assess formative assessment data. (See Figure 2.) Of course, those who prefer less technologically advanced instruction can rely upon the pre-digital versions of each of these applications. Sticky notes and a chalkboard work just as well as Lino for an “analog” classroom parking lot exercise. Traditional quizzes – though perhaps not as interactive or “gamified” as Kahoot!, Socrative, ProProfs, or Mentimeter – do not require technology, either. For those interested in more information about formative assessment exercises like these, refer to the Pocket Guide for Evidence-Based Instruction, a useful resource published by the International Teaching Learning Cooperative Network. (See Figure 3.)
Continuing with the process, you subsequently use the feedback from these formative assessments either to validate or to revise lesson objectives as you progress towards the course’s first summative assessment. Along the way, you can also validate or modify, as necessary, the specific components and learning objectives to be measured by the looming high-stakes examination, project, or essay. Thus, in keeping with Richard Stiggins’s assertion that formative assessment is a “process” rather than an “event,” the syllabus becomes less a “contract” between your students and you and more a course GPS with which to adjust your azimuth as you proceed collectively through the course. In fact, the process continues throughout the course, consistently evaluating student learning and modifying, as required, daily assignments and assessment instruments to ensure durable learning and, as much as practicable, student mastery of course content.
Having established a viable crosswalk between course learning outcomes and lesson objectives, daily assignments, and a series of formative assessments, you are ready to design the tentative framework of the course’s summative assessments. Like many professors, in addition to examinations and projects, I assign essays to assess student learning. Generally, I have my students use what they have learned in a particular lesson or block of lessons to respond to a prompt, thereby validating not only their understanding of critical course concepts but also their ability to apply what they have learned in a precise, concise, consistent, sustained, and substantive argumentative essay. One innovative way to accomplish such a summative assessment is to adopt the scholarly conference paper model, requiring students to write a proposal, an annotated bibliography, and a conference-length essay. By providing critical feedback throughout the process, professors transform this summative assessment into a series of smaller formative assessments, engaging with the quality of their students’ ideas as much as the quality of their writing as the students move towards “publishing” their scholarship. In other words, students experience exactly the same process we, as scholars and educators, rely upon to submit our research for publication. Moreover, many students subsequently feel that they have earned a seat at the “grown-up table,” transforming what might otherwise be a static response to an “assigned question” into their first work of true scholarship.
Of course, the conference paper model likely exceeds the abilities of some students. This is not, however, an insurmountable problem; instead, we might view it as simply a desirable difficulty. As Roediger and McDaniel argue, learning is most durable when it is effortful – even if we ask students to “solve a problem” before they truly know how to do so. This said, for less experienced students, one could still transform the summative assessment of an argumentative essay into a series of dynamic formative assessments that evaluate their growing confidence in and competence with the formal writing process (critical reading and research, brainstorming, outlining, drafting, revising, and editing). Either way, teacher-student engagement and consistent critical feedback are essential to achieving the desired learning outcome.
Critical to this entire process is consistent evaluation of whether or not students demonstrate competence with (if not mastery of) course learning outcomes and lesson objectives. By deliberately aligning course and lesson objectives with specific daily assignments and formative and summative assessments prior to the beginning of the semester, we ensure a viable plan of attack. Of course, as any retired military officer will tell you, even the very best battle plan rarely survives first contact with the enemy. The strength of the assessment as, of, and for learning model, however, is that, once instruction begins, the just-in-time feedback loop from the integrated daily assignments and formative assessments allows us to modify (or validate) not only what and how we teach but also the content and focus of our periodic high-stakes summative assessments. This model keeps students and educators alike actively engaged in durable student-centered learning: privileging competence with and, for some students, mastery of content rather than “cramming” or “peaking” a few times throughout the course.
Of course, the same principles apply to a corporate training program. Having employees “cram” or “peak” simply to pass a certification or training requirement does little to meet a company’s training, operational, safety, and risk management needs. A more targeted approach predicated upon andragogical best practices and aligning required learning outcomes with the design, execution, and assessment of training – as well as in the instructional design of the program and associated “lessons” or sessions – will result not only in a better qualified workforce but also in reduced costs and significantly improved efficiency.