# Teaching Experimental Design – a cross-curricular opportunity

The elements that make up a statistics, operations research or quantitative methods course span three dimensions (and more). There are:

**techniques** we wish students to master, **concepts** we wish students to internalise, and **attitudes** and emotions we wish the students to adopt.

Techniques, concepts and attitudes interact in how a student learns and perceives the subject. Sadly it is possible (and not uncommon) for students to master techniques, while staying oblivious to many of the concepts, and with an attitude of resignation or even antipathy towards the discipline.

# Techniques

Often, and less than ideally, course design begins with techniques. The backbone is a list of tests, graphs and procedures that students need to master in order to pass the course. The course outline includes statements like:

- Students will be able to calculate a confidence interval for a mean.
- Students will be able to formulate a linear programming model from data.
- Students will use Excel to make correct histograms. (Good luck with this one!)

Textbooks are organised around techniques, which usually appear in a given sequence, relying on the authors’ perception of how difficult each technique is. Textbooks within a given field are remarkably similar in the techniques they cover in an introductory course.

# Concepts

Concepts are more difficult to articulate. In a first course in statistics we wish students to gain an appreciation of the effects of variation. They need to understand how data from a sample differs from population data. In all of the mathematical decision sciences students struggle to understand the nature of a model. The concept of a mathematical model is far from intuitive, but essential.

# Attitudes

You can’t explicitly teach attitudes. “Today class, you are going to learn to love statistics!” These are absorbed and formed and reformed as part of the learning process, as a result of prior experiences and attitudes. I have written a post on Anxiety, fear and antipathy for maths, stats and OR, which describes the importance of perseverance, relevance, borrowed self-efficacy and love in the teaching of these subjects. Content and problem context choices can go a long way towards improving attitudes. The instructor should know whether his or her class is more interested in the trajectories of gummy bears, or the more serious topics of cancer screening and crime prevention. Classes in business schools will use different examples than classes in psychology or forestry. Whatever the context, the data should be real, so that students can really engage with it.

I was both amused and a little saddened at this quote from a very good book, “Succeed – how we can reach our goals”. The author (Heidi Grant Halvorson) has described the outcomes of some interesting experiments regarding motivation. She then says, “At this point, you may be wondering if social psychologists get a particular pleasure out of asking people to do really odd things, like eating Cheerios with chopsticks, or eating raw radishes, or not laughing at Robin Williams. The short answer is yes, we do. It makes up for all those hours spent learning statistics.” Hmmm

# Experimental Design

So what does this have to do with experimental design?

I have a little confession. I’ve never taught experimental design. I wish I had. I didn’t know as much then as I do now about teaching statistics, and I also taught business students. That’s my excuse, but I regret it. My reasoning was that businesses usually use observational data, not experimental data. And it’s true, except perhaps in marketing research, and process control and possibly several other areas. Oh.

George Cobb, whom I have quoted in several previous posts, proposed that experimental design is a mechanism by which students may learn important concepts. The technique is experimental design, but taught well, it is a way to convey important concepts in statistics and decision science. The pivotal concept is that of variation. If there were no variation, there would be no need for statistics or experimentation. It would be a sad, boring, deterministic world. But variation exists: some of it is explainable, some is natural, some is due to sampling, and some is due to poor sampling or experimental practice. I have a YouTube video that explains these four sources of variation. Because variation exists, experiments need to be designed so that we can uncover, as best we can, the explainable variation without confounding it with the other types of variation.

The new New Zealand curriculum for Mathematics and Statistics includes experimental design at levels 2 and 3 of the National Certificate of Educational Achievement (the last two years of secondary school). The assessments are internal, and teachers help students set up, execute and analyse small experiments. At level two (implemented this year) the experiments generally involve two groups which are given two treatments, or a treatment and a control. The analysis involves boxplots and informal inference. Some schools used paired samples, but found the type of analysis to be limited as a result. At level three (to be implemented in 2013) this is taken a step further, but I haven’t been able to work out what this step is from the curriculum documents. I was hoping it might be things like randomised block design, or even Taguchi methods, but I don’t think so.
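The two-group comparison described above can be sketched in a few lines of code. This is a minimal sketch with invented reaction-time data; it assumes the informal “median ± 1.5 × IQR / √n” interval rule often used for making the call at this level (my assumption about the exact form of the informal inference used in class):

```python
import math
import statistics

def informal_interval(data):
    """Informal interval: median +/- 1.5 * IQR / sqrt(n)."""
    med = statistics.median(data)
    q1, _, q3 = statistics.quantiles(data, n=4)
    half = 1.5 * (q3 - q1) / math.sqrt(len(data))
    return med - half, med + half

# Hypothetical reaction times (seconds) for a control group and a
# treatment group -- the numbers are made up for illustration
control = [0.42, 0.45, 0.39, 0.48, 0.41, 0.44, 0.46, 0.40, 0.43, 0.47]
treatment = [0.35, 0.38, 0.33, 0.37, 0.36, 0.34, 0.39, 0.32, 0.36, 0.35]

lo_c, hi_c = informal_interval(control)
lo_t, hi_t = informal_interval(treatment)

# "Make the call" that the groups differ only if the intervals
# do not overlap
difference_called = hi_t < lo_c or hi_c < lo_t
print(difference_called)
```

With these made-up data the two intervals are well separated, so a student would call a difference between the treatments; with overlapping intervals the honest answer is “we can’t tell from this sample”.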

# Subjects for Experimentation

Bearing in mind the number of students, many of whom will wish to use other members of the class as subjects, there can be issues of time and fatigue. Here are some possibilities. It would be great if other suggestions could be added as comments to this post.

# Behavioural

Some teachers are reluctant to use psychological experiments, as it can be a bit worrying to use our students as guinea pigs. However, this is probably the easiest option and, provided informed and parental consent is obtained, it should be acceptable. All sorts of experiments have been suggested, such as the effects of various distractions (and legal stimulants) on task completion. There are possible experiments in Physical Education (evaluate the effectiveness of a performance-enhancing programme). Or in Music – how do people respond to different music?

I’d love to see some experiments done on the time taken to solve Rogo puzzles, and on the effects of route length, number choice, size or age!

# Biology

Anything that involves growing things takes a while and can be fraught. (My own recollection of High School biology is that all my plants died.) But things like water uptake could be possible. Use sticks of celery of different lengths and see how much water they take up in a given time. Germination times or strike rates under different circumstances using cress or mustard? Talk to the Biology teacher. There are assessment standards in NZ NCEA at levels 2 and 3 which mesh well with the statistics standards.

# Technology

Baking. There are various ingredients that could have two or three levels of inclusion – making muffins with and without egg – does it affect the height? Pretty tricky to control, but fun – maybe use uniform amounts of mixture. Talk to the Food tech teacher.

Barbie bungee jumping. How does Barbie’s weight affect how far she falls? By having Barbie with and without a backpack, you get the two treatments. The bungee cords can be made out of rubber bands or elastic.

Things flying through the air from catapults. This has been shown to work as a teaching example. There are a number of variables to alter, such as the weight of the object, the slope of the launchpad, and the person firing.

# Inject statistical ideas in application areas

John Maindonald from ANU made the following comment on a previous post: “I am increasingly attracted to the idea that the place to start injecting statistical ideas is in application areas of the curriculum. This will however work only if the teaching and learning model changes, in ways that are arguably anyway necessary in order to make effective use of those teachers who have really good and effective mathematics and statistics and computing skills.”

How exciting is that? Teachers from different discipline areas work together! There may well be logistical issues and even problems of “turf”. But wouldn’t it be great for mathematics teachers to help students with experiments and analysis in other areas of the curriculum? The students will gain from the removal of “compartments” in their learning, which will help them to integrate their knowledge. The worth of what they are doing would be obvious.

(Note for teachers in NZ. A quick look through the “assessment matrices” for other subjects uncovered a multitude of possibilities for curricular integration if the logistics and NZQA allow. )

Roger Mead’s article “The non-orthogonal design of experiments”, (J. Roy. Statist. Soc. 1990, vol 153, pp 151-201), has what seems to me to be an elementary, pretty accessible and common-sense view of design. It does go on a bit, though! (With discussion papers, I tend to read the comments first. Doesn’t everyone?)

I read your article with great interest because I regularly encounter poorly designed experiments. I manage a team of engineers with many years of experience in their fields. Their academic training is at the Masters and PhD levels. However, they have very little training in DoE. What little I know came from a few chapters of a book I read a number of years ago (Statistics for Experimenters by Box). I would appreciate any suggestions that you might have for DoE training in a corporate setting.

Hi. Thanks for your question. I have been thinking about it and will continue to do so. There are some good sites with basic material. I think Stat Trek does a very nice job of explaining the basics:

http://stattrek.com/experiments/what-is-an-experiment.aspx

I’m planning on making a YouTube video about this, so I’d be interested to know what mistakes are most common. It isn’t enough to just watch a video or read material though. They need low-jeopardy exercises to practice their understanding on. Maybe I need to add it as an element of our app.

We frequently encounter problems that involve the interaction of multiple factors (e.g. temperature and voltage variation). Engineers tend to think about changing one thing at a time, but then we miss the effect of interactions. Some basic training on multi-factor experiment design would help.
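The point about interactions can be made with a tiny worked example. This is a sketch with invented mean responses from a 2 × 2 experiment: one-factor-at-a-time runs starting from the (low, low) corner would report two small separate effects and completely miss the large joint effect that the interaction term captures:

```python
# Mean responses from a hypothetical 2x2 experiment:
# factor A = temperature (low/high), factor B = voltage (low/high)
y = {
    ("low", "low"): 10.0,
    ("low", "high"): 12.0,
    ("high", "low"): 11.0,
    ("high", "high"): 19.0,
}

# Main effect of A: the change when A goes low -> high,
# averaged over both levels of B
effect_a = ((y[("high", "low")] - y[("low", "low")])
            + (y[("high", "high")] - y[("low", "high")])) / 2

# Interaction: half the difference between A's effect at
# high B and A's effect at low B
interaction = ((y[("high", "high")] - y[("low", "high")])
               - (y[("high", "low")] - y[("low", "low")])) / 2

print(effect_a)     # 4.0
print(interaction)  # 3.0
```

With these made-up numbers, raising the temperature gains +1 at low voltage but +7 at high voltage; that difference is exactly what a one-factor-at-a-time plan never measures.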

Many of our statistical efforts struggle with deciding on the correct information to gather and then putting together a representative random sample. There always seems to be a problem with sampling bias. I bet I use the term “sampling bias” once a week.

I like the idea of a YouTube video series and I also view exercises as a must.
