Monday, June 1, 2015

Looking at Student Work: Should Exposure Be Mandatory?

A few years ago, I participated in a Deeper Learning MOOC. The topic for Week 2 of the course was "looking at student work." Specifically, we were asked:
  • What can teachers and students learn from samples of student work? 
  • What methods have teachers used to evaluate project-based learning (PBL) or expeditionary (Outward Bound-style) learning? 
  • Can we establish objective criteria for assessing deeper learning? 
  • What happens when teachers encourage students to look at each other's work? Are they able to facilitate effective peer reviews? 
I wondered how these questions might apply to me as a home educator because I didn't think of myself as an evaluator. I was more like a coach or a mirror, reflecting back what I noticed, calling attention to what had been accomplished, pointing out to my kids (and other students) where they had grown strong and where they might still have room to grow. My goal was for them to learn how to evaluate their own progress, set their own high standards, and then do their best to live up to them. I didn't believe in testing or grading my kids (as I explain in The Testing Effect).

Over time, however, I began to understand why assessments based on objective criteria were important—certainly for schools, and perhaps even for home educators. Sure, my casual one-on-one assessments through conversations with my kids enabled me to get a sense of how they were doing overall, but there were times when I also needed to evaluate their progress based on specific goals. Were her writing skills strong enough to permit enrollment in a college-level literature course? Had he mastered algebra well enough to go on to calculus? Formal assessments, or evaluations based on predetermined criteria, served a key purpose: they let me (and my kids) know exactly what had been mastered and what hadn't. They gave us feedback that said, "Hey, you might want to pay more attention to this!" or "These skills are solid. You can move on to something else now." Without that sort of feedback, it would have been hard for us to evaluate the effectiveness of our approach to learning a particular subject.

It's human nature to become complacent over time. If we think what we are doing is working pretty well, we tend not to change our practice. In a classroom, a teacher might be muddling along with one lesson plan after another without realizing that a few tweaks here and there could dramatically enhance student interest and retention. In a home, a parent might be going through each day's routine as always, unaware that her child hasn't really mastered geography as well as he (and she) thinks he has.

How can teachers and students get the feedback they need to do a better job? How can they determine to what extent their actions are helping or hindering progress?

The old-school method for evaluating progress was standardized testing. Granted, if the goal is to assess retention of facts (1+1=2), testing might be the answer. But testing is limited: it fails to measure "deeper learning," those qualities of education that make it meaningful, relevant, and lasting. That's why another form of assessment is necessary.

Which leads me back to the topic of this post—peer reviews and "looking at student work."