Drill and rote-learning are derogatory terms in many education settings. They have the musty taint of “old-fashioned” ways of teaching. They evoke images of wooden classrooms and tight-lipped spinsters dressed in grey looming over trembling pupils as they recite their times-tables. Drill and rote-learning imply mindless repetition, devoid of understanding.

Much more attractive educational terms are “discovery”, “exploration” and “engagement”. Constructivism requires that learners engage with their materials and construct learning by building on existing knowledge and experiences.

But (and I’m sure you could see this coming) I think there is a place for something not far from drill or rote-learning when teaching statistics and operations research. However, I prefer to call it “**well-designed repetitive practice**”, rather than drill or rote-learning. With another name it smells a little sweeter.

Students need repeated exposure to, and exploration of, spreadsheet linear programming models in order to generalize and construct their own understanding correctly. Students benefit from repeated exposure to hypothesis testing in different contexts in order to discern the general from the specific. But this is not “mindless repetition” of similar examples, from which wrong generalizations can (and will) be constructed. The set of examples should be carefully designed to make effective use of students’ time and to avoid reinforcing incorrect concepts.

# Reason for well-designed repetitive practice

A single instance of a phenomenon does not provide enough information to transfer to another instance. It is only by being exposed to multiple instances that learners can decide which aspects are in common or general, and which are specific to that particular example. Exploring one instance of a linear program (LP) in a standard format gives an initial understanding, but in order to generalize, there must be multiple examples.
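To make the idea of an LP in a “standard format” concrete, here is a minimal sketch. It is a made-up two-variable product-mix model (all numbers and names are illustrative, not taken from any course material), and it exploits the fact that an LP optimum lies at a vertex of the feasible region, so it simply enumerates vertices rather than calling a solver:

```python
from itertools import combinations

# Hypothetical product-mix LP (illustrative numbers only):
#   maximise profit = 3x + 5y
#   subject to: x <= 4,  2y <= 12,  3x + 2y <= 18,  x >= 0,  y >= 0
# Each constraint is stored as (a, b, c), meaning a*x + b*y <= c.
constraints = [
    (1, 0, 4),
    (0, 2, 12),
    (3, 2, 18),
    (-1, 0, 0),   # x >= 0 rewritten as -x <= 0
    (0, -1, 0),   # y >= 0 rewritten as -y <= 0
]

def vertices(cons):
    """Intersect every pair of constraint boundaries; keep feasible points."""
    pts = []
    for (a1, b1, c1), (a2, b2, c2) in combinations(cons, 2):
        det = a1 * b2 - a2 * b1
        if abs(det) < 1e-12:      # parallel boundaries: no intersection
            continue
        x = (c1 * b2 - c2 * b1) / det
        y = (a1 * c2 - a2 * c1) / det
        if all(a * x + b * y <= c + 1e-9 for a, b, c in cons):
            pts.append((x, y))
    return pts

# The optimum of an LP lies at a vertex of the feasible region,
# so picking the best vertex solves this small model exactly.
best = max(vertices(constraints), key=lambda p: 3 * p[0] + 5 * p[1])
print(best, 3 * best[0] + 5 * best[1])
```

Seeing several such models side by side — same constraint structure, different coefficients and units — is exactly the kind of repeated exposure that lets students separate the general form from the particulars of any one example.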

Learners, in general, endeavor to make sense of the material by making generalizations across the different examples they are given. If the common elements they perceive are not relevant, they make incorrect generalizations. If the first three examples of an LP spreadsheet all have decision variables in the same units, students can reasonably assume that LPs require decision variables to use the same units. To avoid this, the set of examples must be carefully constructed. Similarly, if all the hypothesis testing examples result in rejecting the null hypothesis, students form the incorrect generalization that rejection is the usual result.
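A well-designed example set therefore needs cases where the null hypothesis survives. The following sketch (made-up data, standard one-sample t-test; the critical value for df = 9 at α = 0.05 is taken from a t-table) shows such a “fail to reject” example:

```python
import math
from statistics import mean, stdev

# Hypothetical one-sample t-test where H0 is NOT rejected -- the kind
# of case a well-designed example set must include. Data are made up.
# H0: mu = 50  vs  H1: mu != 50, at alpha = 0.05.
sample = [48.5, 51.2, 49.8, 50.5, 47.9, 52.1, 50.3, 49.1, 51.0, 50.6]
mu0 = 50

n = len(sample)
t = (mean(sample) - mu0) / (stdev(sample) / math.sqrt(n))

# Two-sided critical value for df = n - 1 = 9 at alpha = 0.05.
t_crit = 2.262
if abs(t) > t_crit:
    print(f"t = {t:.3f}: reject H0")
else:
    print(f"t = {t:.3f}: fail to reject H0")
```

Here the sample mean sits close to the hypothesized value, the test statistic is small, and the correct conclusion is to retain the null hypothesis — a result students must see often enough that rejection stops looking like the “normal” outcome.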

It is popular practice in entry-level statistics courses to require students to collect their own data, then analyse and report on it. This is a wonderful way for students to learn about and engage with the process of statistical analysis. My concern is that it provides only one example from which the student can construct their understanding of the process. Ideally students would be exposed to many different examples before embarking on their own project.

Here a learning management system is invaluable. We have a bank of very carefully constructed examples that students work through, to help them gradually develop understanding. The data is real – from questionnaires they or earlier classes completed. There is immediate feedback on submission of their answers, again to reinforce correct concepts. We explain to students that they should not wait until they understand the process completely before they begin; rather, the understanding comes with doing. There are many parallels for this kind of learning: chess, sports, driving and speaking a language all develop through practice. Understanding follows practice.

What’s more, this method seems to work. Students are motivated to work through multiple examples so that they internalize the process and improve their understanding. And they gain a sense of accomplishment and confidence at correctly completing the examples.