About Dr Nic

I love to teach just about anything. My specialties are statistics and operations research. I have insider knowledge on Autism through my family. I have a lovely husband, two grown-up sons, a fabulous daughter-in-law and a new adorable grandson. I have four blogs - Learn and Teach Statistics, Never Ordinary Life, Chch Relief Society and StatsLC News.

Summarising with Box and Whisker plots

In the Northern Hemisphere, it is the start of the school year, and thousands of eager students are beginning their study of statistics. I know this because this is the time of year when lots of people watch my video, Types of Data. On 23rd August the hits on the video bounced up out of their holiday slumber, just as they do every year. They gradually dwindle away until the end of January when they have a second jump in popularity, I suspect at the start of the second semester.

One of the first topics in many statistics courses is summary statistics. The greatest hits of summary statistics tend to be the mean and the standard deviation. I’ve written previously about what a difficult concept a mean is, and then another post about why the median is often preferable to the mean. In that one I promised a video. Over two years ago – oops. But we have now put these ideas into a video on summary statistics. Enjoy! In 5 minutes you can get a conceptual explanation of summary measures of position (also known as location or central tendency).
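To see in numbers why the median can be preferable to the mean, here is a minimal Python sketch using only the standard library. The shoe counts are invented for illustration:

```python
import statistics

# Invented counts of pairs of shoes owned; one person owns a lot of shoes.
shoes = [2, 3, 3, 4, 4, 5, 6, 8, 40]

print(statistics.mean(shoes))    # about 8.3 - dragged upward by the 40
print(statistics.median(shoes))  # 4 - the middle value, unmoved by the extreme
```

The mean is larger than all but two of the values, while the median sits where most of the data actually are.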


I was going to follow up with a video on spread and started to think about range, interquartile range, mean absolute deviation, variance and standard deviation. So I decided instead to make a video on the wonderful boxplot, again comparing the shoe-owning habits of male and female students at a university in New Zealand.

Boxplots are great. When you combine them with dotplots, as done in iNZight and various other packages, they provide a wonderful way to get an overview of the distribution of a sample. More importantly, they provide a wonderful way to compare two samples or two groups within a sample. A distribution on its own has little meaning.

John Tukey was the first to make a box and whisker plot out of the 5-number summary, way back in 1969. This was not long before I went to high school, so I never really heard about them until many years later. Drawing them by hand is less tedious than drawing a dotplot by hand, but still time-consuming. We are SO lucky to have computers to make it possible to create graphs at the click of a mouse.

Sample distributions and summaries are not enormously interesting on their own, so I would suggest introducing boxplots as a way to compare two samples. Their worth then is apparent.

A colleague recently pointed out an interesting confusion and distinction. The interquartile range is the distance between the upper quartile and the lower quartile. The box in the box plot contains the middle 50% of the values in the sample. It is tempting to focus on this and miss the point that the interquartile range is a good resistant measure of spread for the WHOLE sample. (Resistant means that it is not unduly affected by extreme values.) The range is a poor summary statistic as it is so easily affected by extreme values.
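The resistance of the interquartile range can be demonstrated in a few lines of Python. The data values are invented, and note that `statistics.quantiles` uses the "exclusive" method by default, so other software may report slightly different quartiles:

```python
import statistics

# Invented shoe counts for a small sample; then add one extreme value.
data = [2, 3, 3, 4, 4, 5, 6, 8]
with_outlier = data + [40]

def iqr(values):
    """Interquartile range via statistics.quantiles (default 'exclusive' method)."""
    q1, _, q3 = statistics.quantiles(values, n=4)
    return q3 - q1

print(max(data) - min(data))                  # range: 6
print(max(with_outlier) - min(with_outlier))  # range leaps to 38
print(iqr(data))                              # 2.75
print(iqr(with_outlier))                      # 4.0
```

One extreme value multiplies the range more than sixfold, while the interquartile range barely moves.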

And now we come to our latest video, about the boxplot. This one is four and a half minutes long, and also uses the shoe sample as an example. I hope you and your students find it helpful. We have produced over 40 statistics videos, some of which are available for free on YouTube. If you are interested in using our videos in your teaching, do let us know and we will arrange access to the remainder of them.

20 ways to improve as a teacher of statistics (Part 1)

It embarrasses me to look back on how I taught statistics ten years ago. Were I still teaching in a university, I would not be teaching the same things the same way I did then. I did the best I could, and the course was better than many, but I know so much more now about what is important, and how it should be taught. And I hope that ten years from now, I will have learned even more, and would make more improvements. I propose that if you aren’t a little embarrassed at how you were teaching ten years ago, then you probably should be. And if you have not changed anything in your courses, you might like to think again. The fields of statistics and statistics education are progressing and changing, and we should not be teaching a twenty-first century subject using twentieth century technology and pedagogy.

Web lists are a popular way to get ideas across, and they involve numbers, which I like. So here is my list of 20 ways to improve as a teacher of statistics. The ideas are a mix of conceptual, practical and attitudinal, in no particular order.

1. Feel the fear and do it anyway (Susan Jeffers)

This pretty much sums up my philosophy on life. If we only do things we feel comfortable about, we are unlikely to discover the possibilities at the edge of our competence. I wrote some time ago about the knife edge of competence. We don’t want to live on it, but we do need to spend some time there. I believe that if we never have a “great idea” that turns out not to work, then we aren’t being imaginative enough. Throughout my career as a university academic, I had some fairly disastrous lectures or lessons at times, but they were well and truly outweighed by the great ideas that really did work. Experiment – if you never have a failure you aren’t trying hard enough.

2. Incremental change

Each year or semester we can take a look at a certain concept or technique that did not really work, and see if we can tweak it. We can change the way we assess one piece of work, or use a different data set. Continuous improvement is important. I recently gave a daylong seminar for 80 Statistics Scholarship students in the Waikato. It was a blast – though exhausting. It is tempting to just put the notes away for next year, but I have jotted down in my timing sheet which activities did not work as well as I would have liked, and ideas to get the students writing some more. Next time I present it, there won’t be big changes, but I plan to improve it a little at a time.

3. Catastrophic change

People in Christchurch understand catastrophic change. Our earthquakes gave me the opportunity to do away with face-to-face lectures in my course. We don’t need to wait for a natural disaster, though. Sometimes we have fiddled around the edges of a course for long enough, and the underlying premises are getting stretched. It is time to draw a line at the bottom and start again. I was happy to be able to help the Statistics Department at the University of Canterbury reshape their introductory statistics offerings, beginning with the philosophy and learning objectives. Sometimes things are so broken, we need to start again, and sometimes it is invigorating to be able to use a scorched earth approach to course development.

4. Enrol in a MOOC

When we are looking for inspiration on how to improve our statistics teaching, we can’t do better than the MOOC put on by Hollylynne Lee and her team. Here is what she says:

Here at the Friday Institute at NC State, I am offering a Massive Open Online Course for Educators (MOOC-ED) that is focused on “Teaching Statistics Through Data Investigations”. The course is designed to target pedagogy and content for teachers (preservice, practicing, college-level teaching assistants, and teacher educators) in middle school, high school, and AP/intro college levels. There will be many choices and options in the course for teachers to focus their learning around content that they teach. You can see a more detailed description of the course here: http://go.ncsu.edu/tsdi

I enrolled in this course in its previous offering and found it extremely helpful and inspiring. It is based in the U.S. and uses their terminology, but as the NZ curriculum is based on the GAISE document, there is plenty of common ground. What I found most useful was reading the comments of the other participants, and finding which experiences are universal. I wrote about this here.

5. Join in or create a professional learning community

I love the ideas I get from Twitter. Ideas expressed in 140 characters or less (plus pictures) can be the start of other ideas. You get to make friends with people you have never met (as opposed to Facebook where you get to “unfriend” people you have known all your life.) There is such a diversity of talent in the world, and by building up an international pool of colleagues in statistics education we can be inspired and encouraged. Taking part in the MOOC mentioned in idea 4 will help you build up your community and linkages.

6. Tie in teaching and course development with research so that you get credit for it. (Academics)

It is the sad truth in many tertiary establishments that spending too much time reworking a course and improving your teaching can come at the expense of your research programme. I am not well placed to advise in this area, as I never did build a very good research programme, and I took redundancy to avoid being punished for my choices with reduced resources for research. However, I do know that some academics manage to do research in the pedagogy of their subject. For example:

7. Take the opportunities to participate in research programmes

Nathan Tintle recently sent out an invitation to participate in an introductory statistics assessment project, as follows:

Dear Statistics instructor,

We recently received NSF funding to facilitate assessment of (algebra-based) introductory statistics courses, with a focus on gaining a better understanding of potential differences in student learning between “traditional” and simulation/randomization-based introductory statistics courses. As such, we are asking you to consider having your students participate in the assessment project regardless of how much (if any) simulation- and randomization-based inference methods you use in your course. If you are interested in participating, please fill out this short survey, as soon as possible, but early enough to allow time to set up individualized links for your class before your term starts: https://www.surveymonkey.com/s/9SYS8H3

If I had a class that could participate in this, I would definitely do it. Nathan has assured me that instructors from other countries are also welcome to take part. Here is an opportunity to see how much difference you make in the course. Do your students actually learn things? And answering questions about how a course is taught and assessed is a great way to start thinking about improvements. AND you can build up your professional learning community.

8. Change the technology

When I was teaching introductory Management Science, I would dread the regular Excel upgrades. They were enough to make me redo my notes and screenshots, but they NEVER addressed the appalling Analysis ToolPak. I love Excel more than is probably moral, but I am very alive to its faults and weaknesses. As computers get more and more powerful, and different techniques are developed and become possible, the potential uses of technology change. I believe that AP statistics still uses handheld calculators, but I also believe that this is a mistake, possibly encouraged by the manufacturers. AP statistics should be examined using computer output. No one should be calculating statistics of any kind by hand. Ever! See my post on this here. Changing technology forces us to rethink what we are trying to do and why.

9. Change the textbook

Or cease to use a textbook. Or write one of your own. The first thing I ever read of George Cobb’s was an analysis of textbooks, back in the later years of last century. I strongly agree with his analysis that the questions are the most important part. This is even more applicable in these days of free online information of varying value. Depending on how confident the instructor is, a textbook can be a great help, but often they are expensive doorstop/lucky charm combinations.

10. Go for a run

This one is obvious. It’s my source of all good ideas.

That will do for Part 1. I have at least another 10 points for the second part of this series.

Engaging students in learning statistics using The Islands

Three Problems and a Solution

Modern teaching methods for statistics have gone beyond the mathematical calculation of trivial problems. Computers can enable large size studies, bringing reality to the subject, but this is not without its own problems.

Problem 1: Giving students experience of the whole statistical process

There are many reasons for students to learn statistics through running their own projects, following the complete statistical enquiry process: posing a problem, planning the data collection, collecting and cleaning the data, analysing the data and drawing conclusions that relate back to the original problem. Individual projects can be both time-consuming and risky, as the quality of the report, and the resultant grade, can depend on the quality of the data collected, which may be beyond the control of the student.

The Statistical Enquiry Cycle, which underpins the NZ statistics curriculum.


Problem 2: Giving students experience of different types of sampling

If students are given an existing database and then asked to sample from it, this can be confusing for students and sends the misleading message that we would not want to use all the data available. But physically drawing a sample based on a sampling frame can be prohibitively time-consuming.
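The mechanics of sampling from a frame can at least be demonstrated quickly in code. Here is a short Python sketch; the frame, seed and sample sizes are all invented for illustration:

```python
import random

# A hypothetical sampling frame: ID numbers for a population of 500 islanders.
frame = list(range(1, 501))

random.seed(1)  # fixed seed so the example repeats exactly

# Simple random sample: every unit has the same chance, and no repeats.
simple = random.sample(frame, k=10)

# Systematic sample: every 50th unit, starting from a random point in the
# first interval. With 500 units and an interval of 50, this also gives 10.
start = random.randrange(50)
systematic = frame[start::50]

print(sorted(simple))
print(systematic)
```

Of course, running this takes seconds, which is exactly the problem the text describes: the code hides the real work of defining the frame and contacting each unit.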

Problem 3: Giving students experience conducting human experiments

The problem here is obvious. It is not ethical to perform experiments on humans simply to learn about performing experiments.

An innovative solution: The Islands virtual world.

I recently ran an exciting workshop for teachers on using The Islands. My main difficulty was getting the participants to stop doing the assigned tasks long enough to discuss how we might implement this in their own classrooms. They were too busy clicking around different villages and people, finding subjects of the right age and getting them to run down a 15-degree slope – all without leaving the classroom.

The Island was developed by Dr Michael Bulmer from the University of Queensland and is a synthetic learning environment. The Islands, the second version, is a free, online, virtual human population created for simulating data collection.

The synthetic learning environment overcomes practical and ethical issues with applied human research, and is used for teaching students at many different levels. For a login, email james.baglin @ rmit.edu.au (without the spaces in the email address).

There are now approximately 34,000 inhabitants of the Islands, who are born, have families (or not) and die in a speeded up time frame where 1 Island year is equivalent to about 28 earth days. They each carry a genetic code that affects their health etc. The database is dynamic, so every student will get different results from it.

Some of the Islanders

Two magnificent features

To me, one of the two best features is the difficulty of acquiring data on individuals. It takes time for students to collect samples, as each subject must be asked individually, and the results recorded in a database. There is no easy access to the population. This is still much quicker than asking people in real life (or “irl” as it is known on social media). It is obvious that you need to sample and to have a good sampling plan, and you need to work out how to record and deal with your data.

The other outstanding feature is the ability to run experiments. You can get a group of subjects and split them randomly into treatment and control groups. Then you can perform interventions, such as making them sit quietly or run about, or drink something, and then evaluate their performance on some other task. This is without requiring real-life ethical approval and informed consent. However, in a touch of reality the people of the Islands sometimes lie, and they don’t always give consent.

There are over 200 tasks that you can assign to your people, covering a wide range of topics. They include blood tests, urine tests, physiology, food and drinks, injections, tablets, mental tasks, coordination, exercise, music, environment etc. The tasks occur in real (reduced) time, so you are not inclined to include more tasks than are necessary. There is also the opportunity to survey your Islanders, with more than fifty possible questions. These also take time to answer, which encourages judicious choice of questions.


In the workshop we used the Islands to learn about sampling distributions. First each teacher took a sample of one male and one female and timed them running down a hill. We made (fairly awful) dotplots on the whiteboard using sticky notes with the individual times on them. Then each teacher took a sample and found the median time. We used very small samples of 7 each as we were constrained by time, but larger samples would be preferable. We then looked at the distributions of the medians and compared that with the distribution of our first sample. The lesson was far from polished, but the message was clear, and it gave a really good feel for what a sampling distribution is.
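The same lesson can be simulated in a few lines of Python. The population below is synthetic, standing in for the Islands' running times, and the sample size of 7 matches the workshop:

```python
import random
import statistics

random.seed(2025)  # seed and population are invented, for reproducibility only

# A made-up population of 1000 running times in seconds, mildly right-skewed.
population = [round(random.gauss(10, 2) + random.expovariate(0.5), 1)
              for _ in range(1000)]

# Each "teacher" takes a sample of 7 and records the median, 500 times over.
medians = [statistics.median(random.sample(population, 7))
           for _ in range(500)]

# The medians cluster much more tightly than the raw times do.
print(round(statistics.stdev(population), 2))
print(round(statistics.stdev(medians), 2))
```

Comparing the two standard deviations makes the point of the sticky-note exercise: the distribution of sample medians is much narrower than the distribution of individual times.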

Within the New Zealand curriculum, we could also use The Islands to learn about bivariate relationships, sampling methods and randomised experiments.

In my workshop I had educators from across the age groups, and a primary teacher assured me that Year 4 students would be able to make use of this. Fortunately there is a maturity filter so that you can remove options relating to drugs and sexual activity.

James Baglin from RMIT University has successfully trialled the Island with high school students and psychology research methods students. The owners of the Island generously allow free access to it. Thanks to James Baglin, who helped me prepare this post.

Here are links to some interesting papers that have been written about the use of The Islands in teaching. We are excited about the potential of this teaching tool.

Huynh, Baglin, & Bedford (2014). Improving the attitudes of high school students towards statistics: An Island-based approach. ICOTS9.

Baglin, Reece, Bulmer, & Di Benedetto (2013). Simulating the data investigative cycle in less than two hours: Using a virtual human population, cloud collaboration and a statistical package to engage students in a quantitative research methods course.

Bulmer, M. (2010). Technologies for enhancing project assessment in large classes. In C. Reading (Ed.), Proceedings of the Eighth International Conference on Teaching Statistics, July 2010. Ljubljana, Slovenia. Retrieved from http://www.stat.auckland.ac.nz/~iase/publications/icots8/ICOTS8_5D3_BULMER.pdf

Bulmer, M., & Haladyn, J. K. (2011). Life on an Island: A simulated population to support student projects in statistics. Technology Innovations in Statistics Education, 5. Retrieved from http://escholarship.org/uc/item/2q0740hv

Baglin, J., Bedford, A., & Bulmer, M. (2013). Students’ experiences and perceptions of using a virtual environment for project-based assessment in an online introductory statistics course. Technology Innovations in Statistics Education, 7(2), 1–15. Retrieved from http://www.escholarship.org/uc/item/137120mt

Framework for statistical report-writing

I’ve been pondering what needs to happen for a student to be able to produce a good statistical report. This has been prompted by an informal survey I conducted among teachers of high school statistics in New Zealand. Because of the new curriculum and assessments, many maths teachers are feeling out of their depth, and wondering how to help their students. I asked teachers what they found most challenging in teaching statistics. By far the most common response was related to literacy or report-writing.

Here is a sample of teacher responses when asked what they find most challenging:

  • Teaching students how to write.
  • Helping students present their thoughts and ideas in a written report.
  • Writing the reports for assessment – making this interesting.
  • Helping students use the statistical language required in assessments.
  • Getting students to adequately analyse and write up a report.
  • Trying to think more like an English teacher than a Mathematics teacher.

These comments tend to focus on the written aspect of the report, but I do wonder if the inability to write a coherent report is also an indicator of some other limitations.

The following diagram outlines the necessary skills and knowledge to complete a good statistical report. In addition the student needs the character traits of critical thinking, courage and persistence in order to take the report through to completion.

A framework for analysing what needs to happen in the production of a good statistical report.

Basic Literacy

Though not sufficient on their own, literacy skills are certainly necessary. It is rather obvious that being able to write is a prerequisite to writing a report. In particular we need to be able to write in formal language. One common problem is the tendency to omit verbs, thus leaving sentences incomplete.

Understand concepts

Students must understand correctly the statistical concepts underlying the report. For example, if they are not clear what the median, mean and quartiles express, it is difficult to write convincingly about them, or indeed to report them using correct language. When students are unable to write about a concept, it may indicate that their understanding is weak.

Be familiar with graphs and output

These days students do not need to draw their own graphs or calculate statistics by hand, but do need to know what graphs and analysis are appropriate for their particular data and research question. And they need to know how to read and interpret the graphs.

Know what to look for in graphs and output

This differs from the previous aspect in that it requires a deeper familiarity with the medium. For example, in a regression, students need to know to look for heteroscedasticity, or outliers with undue influence. In time series, students need to know to look for unusual spikes that occur outside the regular pattern. In comparing boxplots, students look at overlap. This familiarity can only come through practice.

Understand the importance of context

What is an important feature in one context may not be so in a different context. This can be difficult for students and instructors who are at home with the purity of mathematics, in which the context can often be ignored or assumed away. Unless students understand the importance of context, often contained within the statistical enquiry process, they are unlikely to invest time in understanding the context and looking at the relationship between the model and the real-world problem.

Understand the context

Sometimes the context is easily understood by students, related to their daily life or interests such as sport, music or movies. However there are times when students need to become more conversant with an unfamiliar context. This is entirely authentic to the life of a statistician, particularly a consulting statistician. We are often faced with unfamiliar contexts. Over the years I have become more knowledgeable about areas as diverse as hand injuries, scientific expeditions to Antarctica, bank branch performance, prostate cancer screening and chicken slaughtering methods. Even though we may work with an expert in the field of the investigation, we must develop a working knowledge of the field and the terminology ourselves.

Be familiar with terminology

Part of statistical literacy is to be able to use the language of statistics. There are words that have particular meaning in a statistical context, such as random, significant, error and population. It is not acceptable to use statistical terms incorrectly in a statistical report. Statistics is a peculiar mixture of hand-waving and precision, and we need to know when each is needed. There is also a fair degree of equivocation, and students should be familiar with expressions such as “it appears…”, “there is evidence that”, and “a possible implication might be…”

These other aspects lead into the three main ideas:

Know what to include and exclude

This is where checklists can come in handy for students to make sure they have all the relevant details, and that they do not include unnecessary details. My experience is that there is a tendency for students to write a narrative of how they analysed the data, step by painful step. (I call it “what I did in the holidays.”) Students can also gain from seeing good exemplars that provide the results, without unnecessary detail about the process.

Express correct ideas in appropriate written language

This is probably the most obvious requirement for a good report. This comes from basic literacy, knowing what to look for, familiarity with the terminology and understanding of the concepts.

Relate the findings to the context

Our report must answer the investigative question or research questions. Each of the statistical findings must be related to the context from which the data have been taken. This must be done with the right amount of caution, not with bold assertions about results that the data only hint at.

If these three are happening well, then a good written report is on its way!

Developing skills

So how do we make sure students have all the requisite skills and knowledge to create a good statistical report? To start with, we can use the framework provided here to diagnose where there may be gaps in the students’ knowledge or skills. Students themselves can use this as a way to find out where their weaknesses may be.

Then students must read, talk and write, over and over. Read exemplars, talk about graphs and output and write complete sentences in the classroom. All data must be real, so that students get practice at drawing conclusions about real people and things.

This framework is a work in progress and I would be pleased to have suggestions for improvement.

Learning to teach statistics, in a MOOC

I am participating in a MOOC, Teaching statistics through data investigations. A MOOC is a fancy name for an online, free, correspondence course.  The letters stand for Massive Open Online Course. I decided to enrol for several reasons. First I am always keen to learn new things. Second, I wanted to experience what it is like to be a student in a MOOC. And third I wanted to see what materials we could produce that might help teachers or learners of statistics in the US. We are doing well in the NZ market, but it isn’t really big enough to earn us enough money to do some of the really cool things we want to do in teaching statistics to the masses.

I am now up to Unit 4, and here is what I have learned so far:

Motivation and persistence

It is really difficult to stay motivated even in the best possible MOOC. Life gets in the way and there is always something more pressing than reading the materials, taking part in discussions and watching the videos. I looked up the rate of completion for MOOCs, and this article from IEEE gives the completion rate as 5%. Obviously it will differ between MOOCs, depending on the content, the style and the reward. I have found it best to schedule time each week to work on the MOOC, or it just doesn’t happen.

I know more than I thought I did

It is reassuring to find out that I really do have some expertise. (This may be a bit of a worry to those of you who regularly read my blog and think I am an expert in teaching statistics.) My efforts to read and ponder, to discuss and to experiment have meant that I do know more than teachers who are just beginning to teach statistics. Phew!

The investigative process matters

I finally get the importance of the Statistical Enquiry Cycle (PPDAC in New Zealand) or Statistical Investigation Cycle (Pose, Collect, Analyse, Interpret in the US). I sort of got it before, but now it is falling into place. In the old-fashioned approach to teaching statistics, almost all the emphasis was on the calculations. There would be questions asking students to find the mean of a set of numbers, with no context. This is not statistics, but an arithmetic exercise. Unless a question is embedded in the statistical process, it is not statistics. There needs to be a reason, a question to answer, real data and a conclusion to draw. Every time we develop a teaching exercise for students, we need to think about where it sits in the process, and provide the context.

Brilliant questions

I was happy to participate in the LOCUS quiz to evaluate my own statistical understanding. I was relieved to get 100%. But I was SO impressed with the questions, which reflect the work and thinking that went into producing them. I understand how difficult it is to write questions to teach and assess statistical understanding, as I have written hundreds of them myself. The LOCUS questions are great questions. I will be writing some of my own following their style. I loved the ones that asked what would be the best way to improve an experimental design. Inspired!

It’s easier to teach the number stuff

I’m sure I knew this, but to see so many teachers say it, cemented it in. Teacher after teacher commented that teaching procedure is so much easier than teaching concepts. Testing knowledge of procedure is so much easier than assessing conceptual understanding. Maths teachers are really good at procedure. That fluffy, hand-waving meaning stuff is just…difficult. And it all depends. Every answer depends! The implication of this is that we need to help teachers become more confident in helping students to learn the concepts of statistics. We need to develop materials that focus on the concepts. I’m pretty happy that most of my videos do just that – my “Understanding Confidence Intervals” is possibly the only video on confidence intervals that does not include a calculation or procedure.

You learn from other participants

I’ve never been keen on group work. I suspect this is true of most over-achievers. We don’t like to work with other people on assignments as they might freeload, or worse – drag our grade down. Over the years I’ve forced students to do group assignments, as they learn so much more in the process. And I hate to admit that I have also learned more when forced to do group assignments. It isn’t just about reducing the marking load. In this MOOC we are encouraged to engage with other participants through the discussion forums. This is an important part of on-line learning, particularly in a solely on-line platform (as opposed to blended learning). I just love reading what other people say. I get ideas, and I understand better where other people are coming from.

I have something to offer

It was pretty exciting to see my own video used as a resource in the course, and to hear from the instructor how she loves our Statistics Learning Centre videos.

What now?

I still have a few weeks to run on the MOOC and I will report back on what else I learn. And then in late May I am going to USCOTS (US Conference on Teaching Statistics). It’s going to cost me a bit to get there, living as I do in the middle of nowhere in Middle Earth. But I am thrilled to be able to meet with the movers and shakers in US teaching of statistics. I’ll keep you posted!

Divide and destroy in statistics teaching

A reductionist approach to teaching statistics destroys its very essence

I’ve been thinking a bit about systems thinking and reductionist thinking, especially with regard to statistics teaching and mathematics teaching. I used to teach a course on systems thinking, with regard to operations research. Systems thinking is concerned with the whole. The parts of the system interact and cannot be isolated without losing the essence of the system. Modern health providers and social workers realise that a child is a part of a family, which may be a part of a larger community, all of which have to be treated if the child is to be helped. My sister, a physio, always finds out about the home background of her patient, so that any treatment or exercise regime will fit in with their life. Reductionist thinking, by contrast, reduces things to their parts, and isolates them from their context.

Reductionist thinking in teaching mathematics

Mathematics teaching lends itself to reductionist thinking. You strip away the context, then break a problem down into smaller parts, solve the parts, and then put it all back together again. Students practise solving straightforward problems over and over to make sure they can do it right. They feel that a column of little red ticks is evidence that they have learned something correctly. As a school pupil, I loved the columns of red ticks. I have written about the need for drill in some aspects of statistics teaching and learning, and can see the value of automaticity – or the ability to answer something without having to think too hard. That can be a little like learning a language – you need to be automatic on the vocabulary and basic verb structures. I used to spend my swimming training laps conjugating Latin verbs – amo, amas, amat (breathe), amamus, amatis, amant (breathe). I never did meet any ancient Romans to converse with, to see if my recitation had helped any, but five years of Latin vocab is invaluable in pub quizzes. But learning statistics has little in common with learning a language.

There is more to teaching than having students learn how to get stuff correct. Learning involves the mind, heart and hands. The best learning occurs when students actually want to know the answer. This doesn’t happen when context has been removed.

I was struck by Jo Boaler’s “The Elephant in the Classroom”, which opened my eyes to how monumentally dull many mathematics lessons can be to so many people. These people are generally the ones who are not satisfied by columns of red ticks, and either want to know more and ask questions, or want to be somewhere else. Holistic lessons, that involve group work, experiential learning, multiple solution methods and even multiple solutions, have been shown to improve mathematics learning and results, and have lifelong benefits to the students. The book challenged many of my ingrained feelings about how to teach and learn mathematics.

Teach statistics holistically, joyfully

Teaching statistics is inherently suited for a holistic approach. The problem must drive the model, not the other way around. Teachers of mathematics need to think more like teachers of social sciences if they are to capture the joy of teaching and learning statistics.

At one time I was quite taken with an approach suggested for students who are struggling: working step-by-step through a number of examples in parallel, doing one step in each before moving on to the next. The examples I saw are great, and use real data, and the sentences are correct. I can see how that might appeal to students who are finding the language aspects difficult, and are interested in writing an assignment that will get them a passing grade. However I now have concerns about the approach, and it has made me think again about some of the resources we provide at Statistics Learning Centre. I don’t think a reductionist approach is suitable for the study of statistics.

Context, context, context

Context is everything in statistical analysis. Every time we produce a graph or a numerical result we should be thinking about the meaning in context. If there is a difference between the medians showing up in the graph, and reinforced by confidence intervals that do not overlap, we need to be thinking about what that means about the heart-rate in swimmers and non-swimmers, or whatever the context is. For this reason every data set needs to be real. We cannot expect students to want to find real meaning in manufactured data. And students need to spend long enough in each context in order to be able to think about the relationship between the model and the real-life situation. This is offset by the need to provide enough examples from different contexts so that students can learn what is general to all such models, and what is specific to each. It is a question of balance.

Keep asking questions

In my effort to help improve teaching of statistics, we are now developing teaching guides and suggestions to accompany our resources. I attend workshops, talk to teachers and students, read books, and think very hard about what helps all students to learn statistics in a holistic way. I do not begin to think I have the answers, but I think I have some pretty good questions. The teaching of statistics is such a new field, and so important. I hope we all keep asking questions about what we are teaching, and how and why.

Don’t teach significance testing – Guest post

The following is a guest post by Tony Hak of Rotterdam School of Management. I know Tony would love some discussion about it in the comments. I remain undecided either way, so would like to hear arguments.


It is now well understood that p-values are not informative and are not replicable. Soon null hypothesis significance testing (NHST) will be obsolete and will be replaced by the so-called “new” statistics (estimation and meta-analysis). This requires that undergraduate courses in statistics must already be teaching estimation and meta-analysis as the preferred way to present and analyze empirical results. If not, the statistical skills of the graduates from these courses will be outdated on the day they leave school. But it is less evident whether or not NHST (though not preferred as an analytic tool) should still be taught. Because estimation is already routinely taught as a preparation for the teaching of NHST, the necessary reform in teaching will not require the addition of new elements to current programs but rather the removal of the current emphasis on NHST, or the complete removal of the teaching of NHST from the curriculum. The current trend is to continue the teaching of NHST. In my view, however, the teaching of NHST should be discontinued immediately because it is (1) ineffective, (2) dangerous, and (3) it serves no aim.

1. Ineffective: NHST is difficult to understand and it is very hard to teach it successfully

We know that even good researchers often do not appreciate the fact that NHST outcomes are subject to sampling variation and believe that a “significant” result obtained in one study almost guarantees a significant result in a replication, even one with a smaller sample size. Is it then surprising that our students also do not understand what NHST outcomes do tell us and what they do not tell us? In fact, statistics teachers know that the principles and procedures of NHST are not well understood by undergraduate students who have successfully passed their courses on NHST. Courses on NHST fail to achieve their self-stated objectives, assuming that these objectives include achieving a correct understanding of the aims, assumptions, and procedures of NHST as well as a proper interpretation of its outcomes. It is very hard indeed to find a comment on NHST in any student paper (an essay, a thesis) that is close to a correct characterization of NHST or its outcomes. There are many reasons for this failure, but obviously the most important one is that NHST is a very complicated and counterintuitive procedure. It requires students and researchers to understand that a p-value is attached to an outcome (an estimate) based on its location in (or relative to) an imaginary distribution of sample outcomes around the null. Another reason, connected to their failure to understand what NHST is and does, is that students believe that NHST “corrects for chance” and hence they cannot cognitively accept that p-values themselves are subject to sampling variation (i.e. chance).
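The point that p-values are themselves subject to sampling variation is easy to demonstrate with a short simulation. The sketch below is mine, not part of the original paper: the effect size, sample sizes and the normal approximation to the two-sample test are all illustrative choices. The same experiment, replicated, produces wildly different p-values.

```python
import math
import random

def two_sample_p(a, b):
    """Approximate two-sided p-value for a difference in means (z-test)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    return math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail area

random.seed(1)
p_values = []
for replication in range(20):
    # A modest true effect: means 0.5 vs 0.0, sd 1, n = 30 per group
    a = [random.gauss(0.5, 1) for _ in range(30)]
    b = [random.gauss(0.0, 1) for _ in range(30)]
    p_values.append(two_sample_p(a, b))

# Some replications look "significant", others do not,
# even though the underlying effect never changed.
print(f"smallest p = {min(p_values):.4f}, largest p = {max(p_values):.4f}")
```

Running this a few times with different seeds makes the point vividly: "significance" comes and goes from replication to replication.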

2. Dangerous: NHST thinking is addictive

One might argue that there is no harm in adding a p-value to an estimate in a research report and, hence, that there is no harm in teaching NHST in addition to teaching estimation. However, the mixed experience with statistics reform in clinical and epidemiological research suggests that a more radical change is needed. Reports of clinical trials and of studies in clinical epidemiology now usually report estimates and confidence intervals, in addition to p-values. However, as Fidler et al. (2004) have shown, and contrary to what one would expect, authors continue to discuss their results in terms of significance. Fidler et al. therefore concluded that “editors can lead researchers to confidence intervals, but can’t make them think”. This suggests that a successful statistics reform requires a cognitive change that should be reflected in how results are interpreted in the Discussion sections of published reports.

The stickiness of dichotomous thinking can also be illustrated with the results of a more recent study of Coulson et al. (2010). They presented estimates and confidence intervals obtained in two studies to a group of researchers in psychology and medicine, and asked them to compare the results of the two studies and to interpret the difference between them. It appeared that a considerable proportion of these researchers, first, used the information about the confidence intervals to make a decision about the significance of the results (in one study) or the non-significance of the results (of the other study) and, then, drew the incorrect conclusion that the results of the two studies were in conflict. Note that no NHST information was provided and that participants were not asked in any way to “test” or to use dichotomous thinking. The results of this study suggest that NHST thinking can (and often will) be used by those who are familiar with it.

The fact that it appears to be very difficult for researchers to break the habit of thinking in terms of “testing” is, as with every addiction, a good reason to prevent future researchers from coming into contact with it in the first place and, if contact cannot be avoided, to provide them with robust resistance mechanisms. The implication for statistics teaching is that students should first learn estimation as the preferred way of presenting and analyzing research information, and should be introduced to NHST, if at all, only after estimation has become their routine statistical practice.

3. It serves no aim: Relevant information can be found in research reports anyway

Our experience that the teaching of NHST consistently fails its own aims (because NHST is too difficult to understand), and the fact that NHST appears to be dangerous and addictive, are two good reasons to stop teaching NHST immediately. But there is a seemingly strong argument for continuing to introduce students to NHST, namely that a new generation of graduates will not be able to read the (past and current) academic literature in which authors themselves routinely focus on the statistical significance of their results. It is suggested that someone who does not know NHST cannot correctly interpret outcomes of NHST practices. This argument has no value for the simple reason that it assumes that NHST outcomes are relevant and should be interpreted. But the reason we are having the current discussion about teaching is that NHST outcomes are at best uninformative (beyond the information already provided by estimation) and at worst misleading or plain wrong. The point all along is that nothing is lost by simply ignoring the NHST-related information in a research report and focusing only on the information provided about the observed effect size and its confidence interval.


Coulson, M., Healy, M., Fidler, F., & Cumming, G. (2010). Confidence Intervals Permit, But Do Not Guarantee, Better Inference than Statistical Significance Testing. Frontiers in Quantitative Psychology and Measurement, 20(1), 37-46.

Fidler, F., Thomason, N., Finch, S., & Leeman, J. (2004). Editors Can Lead Researchers to Confidence Intervals, But Can’t Make Them Think: Statistical Reform Lessons from Medicine. Psychological Science, 15(2), 119-126.

This text is a condensed version of the paper “After Statistics Reform: Should We Still Teach Significance Testing?” published in the Proceedings of ICOTS9.


A Sensitive Approach to Risk and Screening

Risk is an important topic

In order to make informed decisions about screening and medical interventions, people need to have a good understanding of risk and probability. The communication and understanding of risk was a very popular topic at the ICOTS 9 Conference. I have written previously about risk, but in this post I wish to introduce our new video about risk and screening, and to talk more about the communication of risk.

Human elements of risk

When we teach about screening for disease, we need to be aware that many of the things we are screening for, particularly forms of cancer, can have emotional connections for our students. We may not know that one of our students has a family member who is dying of cancer. It is important that we teach about screening for cancer, but it can be rather depressing, or even trigger emotional upset. Teachers should know their students’ circumstances, deal compassionately with this, and always speak sensitively of disease and incidence.

Ear Pox

Camilla's ear pox was not treated in time.


When we made our video we were aware that people in all different circumstances would view it. In order to be able to use our usual light-hearted approach in our video, we decided to avoid talking specifically about real-life diseases and their consequences. Rather we invented a disease called ear-pox, which has very convenient values for prevalence, sensitivity and specificity. The outcome of untreated ear-pox is that the ear falls off. (Which is easy to show in an animation.) The outcome of a false positive was that a person’s ear was tattooed unnecessarily. We hope that this provides a semi-realistic example, that is not upsetting to people.


An icon array can make the proportions easier to visualise.


In the video we use two different ways of representing the information – an icon diagram and a natural frequency table. Both formats have their strengths in helping people to understand the implications of the figures. Because we wish the layperson to have an understanding and “feel” for the implications of the figures, it is very important to find ways to represent the situation that resonate. There is a body of research suggesting that, thanks to our evolutionary history, people are better able to understand frequencies – things we can count – than proportions. The picture of six green dots and four red dots is more intuitive than being told that the proportion of success is 60%. Icon diagrams use dots of different colours and outlines to indicate the different states in question. This is becoming a popular way to express these concepts. You can see an animation here of the risk associated with eating bacon. Being able to see what one person in 100 looks like is powerful in terms of defusing anxiety regarding incidence. Here is a link to an icon array used to illustrate the effects of breast cancer screening.
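As a toy illustration of the idea, an icon array can be sketched in a few lines. The 1-in-100 incidence below is made up for illustration, not taken from any cited study:

```python
# Text version of an icon array: 100 people, the affected one marked 'X'.
# The 1-in-100 incidence is purely illustrative.
affected_per_100 = 1
icons = ["X"] * affected_per_100 + ["o"] * (100 - affected_per_100)
for row in range(10):
    print(" ".join(icons[row * 10:(row + 1) * 10]))
```

Even this crude version shows why the format works: the eye sees one marked person in a crowd of 100, rather than having to interpret "1%".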

The table of natural frequencies is another representation that aids understanding. This turns probabilities into numbers of people who are in each of the four categories – correctly identified as affected, correctly identified as not affected, false positives, and false negatives. Where incidence is low, which it often is, the number of false positives outweighs the number of correct positives by a large margin, and this is shown well on the table. Tables can be used more easily than diagrams to calculate the probability, for example, that a positive result is false.
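As a sketch of how such a table is built, the four counts follow directly from prevalence, sensitivity and specificity. The numbers below are illustrative only – they are not the values used in the ear-pox video:

```python
# Build a natural frequency table for an imaginary population of 10,000.
population = 10_000
prevalence = 0.01    # 1% have the disease (illustrative)
sensitivity = 0.95   # P(test positive | disease)
specificity = 0.90   # P(test negative | no disease)

affected = round(population * prevalence)        # 100 people
unaffected = population - affected               # 9900 people
true_positives = round(affected * sensitivity)   # 95
false_negatives = affected - true_positives      # 5
true_negatives = round(unaffected * specificity) # 8910
false_positives = unaffected - true_negatives    # 990

print(f"{'':>14}{'Positive':>10}{'Negative':>10}")
print(f"{'Affected':>14}{true_positives:>10}{false_negatives:>10}")
print(f"{'Not affected':>14}{false_positives:>10}{true_negatives:>10}")

# With low incidence the false positives (990) swamp the true positives (95):
p_false = false_positives / (true_positives + false_positives)
print(f"P(no disease | positive test) = {p_false:.3f}")
```

Laid out this way, it is immediately visible why a positive screening result for a rare condition is more often false than true.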

Teach about false positives

When we teach about probability and risk, it is important to make clear the negative impacts of a false positive diagnosis. These can have lasting effects on people’s health and well-being. In my work I spend quite a bit of time on a plane, and when I am not reading Amish romances I get to talk to all sorts of people. One very interesting conversation was with a genetic counsellor. As my son has a severe disability as a result of a pair of autosomal recessive genes, my husband and I had once visited such a counsellor, and I knew of their purpose. In this single-serving plane relationship, we got to talking about people’s perceptions of risk with regard to genetics, which I found fascinating. The genetic counsellor said she had talked to people who were horrified at a one in one thousand risk of some adverse outcome. In contrast other clients were relieved that the probability of an outcome (like the one for my son) was only one in four. The perceived impact of the probability is of course tempered by the severity of the result, and the worldview of the people concerned. It is also affected by their perception of independence in probabilistic outcomes. Unfortunately there are still people who think that, having had one child with a one-in-four outcome, the chances are increased that the next three children will be fine.

It is important

Teaching people about risk, independence and probability is a holy work. We can help people to make informed choice about their own health and that of their children. The Harding Center for Risk Literacy and Sir David Spiegelhalter and his colleagues are doing great work. I would love to hear of other websites that we can link to – please add them in the comments. I hope that our new video can likewise contribute.

Nominal, Ordinal, Interval, Schmordinal

Everyone wants to learn about ordinal data!

I have a video channel with about 40 videos about statistics, and I love watching to see which videos are getting the most viewing each day. As the Fall term has recently started in the northern hemisphere, the most popular video over the last month is “Types of Data: Nominal, Ordinal, Interval/Ratio.” Similarly one of the most consistently viewed posts in this blog is one I wrote over a year ago, entitled, “Oh Ordinal Data, what do we do with you?”. Understanding about the different levels of data, and what we do with them, is obviously an important introductory topic in many statistical courses. In this post I’m going to look at why this is, as it may prove useful to learner and teacher alike.

And I’m happy to announce the launch of our new Snack-size course: Types of Data. For $2.50US, anyone can sign up and get access to video, notes, quizzes and activities that will help them, in about an hour, gain a thorough understanding of types of data.

Costing no more than a box of popcorn, our snack-size course will help you learn all you need to know about types of data.


The Big Deal

Data is essential to statistical analysis. Without data there is no investigative process. Data can be generated through experiments, through observational studies, or dug out from historic sources. I get quite excited at the thought of the wonderful insights that good statistical analysis can produce, and the stories it can tell. A new database to play with is like Christmas morning!

But not all data is the same. We need to categorise the data to decide what to do with it for analysis, and what graphs are most appropriate. There are many good and not-so-good statistical tools available, thanks to the wonders of computer power, but they need to be driven by someone with some idea of what is sensible or meaningful.

A video that becomes popular later in the semester is entitled, “Choosing the test”. This video gives a procedure for deciding which of seven common statistical tests is most appropriate for a given analysis. It lists three things to think about – the level of data, the number of samples, and the purpose of the analysis. We developed this procedure over several years with introductory quantitative methods students. A more sophisticated approach may be necessary at higher levels, but for a terminal course in statistics, this helped students to put their new learning into a structure. Being able to discern what level of data is involved is pivotal to deciding on the appropriate test.

Categorical Data

In many textbooks and courses, the types of data are split into two – categorical and measurement. Most state that nominal and ordinal data are categorical. With categorical data we can only count the responses to a category, rather than collect up values that are measurements or counts themselves. Examples of categorical data are colour of car, ethnicity, choice of vegetable, or type of chocolate.

With Nominal data, we report frequencies or percentages, and display our data with a bar chart, or occasionally a pie chart. We can’t find a mean of nominal data. However if the different responses are coded as numbers for ease of use in a database, it is technically possible to calculate the mean and standard deviation of those numbers. A novice analyst may do so and produce nonsense output.
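A tiny illustration of that trap (the colour codes and survey responses below are invented):

```python
from collections import Counter

# Nominal data: colour of car, coded as numbers in a database.
colour_codes = {1: "red", 2: "blue", 3: "silver", 4: "white"}
cars = [1, 1, 3, 4, 4, 2, 3, 3]  # invented survey responses

# Technically possible, but meaningless -- there is no "2.625th colour".
nonsense_mean = sum(cars) / len(cars)
print(nonsense_mean)  # 2.625

# What we should report instead: frequencies per category.
counts = Counter(colour_codes[code] for code in cars)
print(counts.most_common())
```

The software will happily compute the "mean colour"; it is the analyst's job to know that only the frequency table makes sense.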

The very first data most children will deal with is nominal data. They collect counts of objects and draw pictograms or bar charts of them. They ask questions such as “How many children have a cat at home?” or “Do more boys than girls like Lego as their favourite toy?” In each of these cases the data is nominal, probably collected by a survey asking questions like “What pets do you have?” and “What is your favourite toy?”

Ordinal data

Another category of data is ordinal, and this is the one that causes the most problems in understanding. My earlier post, “Oh Ordinal Data, what do we do with you?”, discusses this. Ordinal data has order, and numbers assigned to responses are meaningful, in that each level is “more” than the previous level. We are frequently exposed to ordinal data in opinion polls, asking whether we strongly disagree, disagree, agree or strongly agree with something. It would be acceptable to put the responses in the opposite order, but it would have been confusing to list them in alphabetical order: agree, disagree, strongly agree, strongly disagree. What stops ordinal data from being measurement data is that we can’t be sure how far apart the different levels on the scale are. Sometimes it is obvious that we can’t tell how far apart they are. An example of this might be the scale assigned by a movie reviewer. It is clear that a 4 star movie is better than a 3 star movie, but we can’t say how much better. Other times, when a scale is well defined and the circumstances are right, ordinal data is appropriately, if cautiously, treated as interval data.

Measurement Data

The most versatile data is measurement data, which can be split into interval or ratio, depending on whether ratios of numbers have meaning. For example, temperature is interval data, as it makes no sense to say that 70 degrees is twice as hot as 35 degrees. Weight, on the other hand, is ratio data, as it is true to say that 70 kg is twice as heavy as 35 kg.

A more useful way to split up measurement data, for statistical analysis purposes, is into discrete or continuous data. I had always explained that discrete data was counts, and recorded as whole numbers, and that continuous data was measurements, and could take any values within a range. This definition works to a certain degree, but I recently found a better way of looking at it in the textbook published by Wiley, Chance Encounters, by Wild and Seber.

“In analyzing data, the main criterion for deciding whether to treat a variable as discrete or continuous is whether the data on that variable contains a large number of different values that are seldom repeated or a relatively small number of distinct values that keep reappearing. Variables with few repeated values are treated as continuous. Variables with many repeated values are treated as discrete.”

An example of this is the price of apps in the App store. There are only about twenty prices that can be charged – 0.99, 1.99, 2.99 etc. These are neither whole numbers, nor counts, but as you cannot have a price in between the given numbers, and there is only a small number of possibilities, this is best treated as discrete data. Conversely, the number of people attending a rock concert is a count, and you cannot get fractions of people. However, as there is a wide range of possible values, and it is unlikely that you will get exactly the same number of people at more than one concert, this data is actually continuous.
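Wild and Seber's criterion can be turned into a rough rule of thumb. The sketch below is my own interpretation: the 50% threshold and the data are invented for illustration, not taken from their textbook.

```python
def treat_as(values, threshold=0.5):
    """Treat a variable as continuous when its values are seldom repeated."""
    distinct_fraction = len(set(values)) / len(values)
    return "continuous" if distinct_fraction > threshold else "discrete"

# App prices: only a handful of possible values, endlessly repeated.
app_prices = [0.99, 1.99, 0.99, 2.99, 4.99, 0.99, 1.99, 2.99, 0.99, 1.99]
# Concert attendance: counts, but values almost never repeat.
attendance = [12480, 9865, 15002, 11230, 9877, 14321, 13050, 12479]

print(treat_as(app_prices))  # discrete
print(treat_as(attendance))  # continuous
```

Notice that the rule classifies by repetition, not by whether values are whole numbers, which is exactly the point of the quoted criterion.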

Maybe I need to redo my video now, in light of this!

And please take a look at our new course. If you are an instructor, you might like to recommend it for your students.

A Statistics-centric curriculum

Calculus is the wrong summit of the pyramid.

“The mathematics curriculum that we have is based on a foundation of arithmetic and algebra. And everything we learn after that is building up towards one subject. And at top of that pyramid, it’s calculus. And I’m here to say that I think that that is the wrong summit of the pyramid … that the correct summit — that all of our students, every high school graduate should know — should be statistics: probability and statistics.”

TED talk by Arthur Benjamin in February 2009. Watch it – it’s only 3 minutes long.

He’s right, you know.

And New Zealand would be the place to start. In New Zealand, the subject of statistics is the second most popular subject in our final year of schooling, with a cohort of 12,606. By comparison, the cohort for English is 16,445, and calculus has a final-year cohort of 8,392, similar in size to Biology (9,038), Chemistry (8,183) and Physics (7,533).

Some might argue that statistics is already the summit of our curriculum pyramid, but I would see it more as an overly large branch that threatens to unbalance the mathematics tree. I suspect many maths teachers would see it more as a parasite that threatens to suck the life out of their beloved calculus tree. The pyramid needs some reconstruction if we are really to have a statistics-centric curriculum. (Or the tree needs pruning and reshaping – I think I have too many metaphors!)

Statistics-centric curriculum

So, to use a popular phrase, what would a statistics-centric curriculum look like? And what would be the advantages and disadvantages of such a curriculum? I will deal with implementation issues later.

To start with, the base of the pyramid would look little different from the calculus-pinnacled pyramid. In the early years of schooling the emphasis would be on number skills (arithmetic), measurement and other practical and concrete aspects. There would also be a small but increased emphasis on data collection and uncertainty. This is in fact present in the NZ curriculum. Algebra would be introduced, but as a part of the curriculum, rather than the central idea. There would be much more data collection, and probability-based experimentation. Uncertainty would be embraced, rather than ignored.

In the early years of high school, probability and statistics would take a more central place in the curriculum, so that students develop important skills ready for their pinnacle course in the final two years. They would know about the statistical enquiry cycle, how to plan and collect data and write questionnaires. They would perform their own experiments, preferably in tandem with other curriculum areas such as biology, food-tech or economics. They would understand randomness and modelling. They would be able to make critical comments about reports in the media. They would use computers to create graphs and perform analyses.

As they approach the summit, most students would focus on statistics, while those who were planning to pursue a career in engineering would also take calculus. In the final two years students would be ready to build their own probabilistic models to simulate real-world situations and solve problems. They would analyse real data and write coherent reports. They would truly understand the concept of inference, and why confidence intervals are needed, rather than calculating them by hand or deriving formulas.

There is always a trade-off. Here is my take on the skills developed in each of the curricula.

Calculus-centric curriculum                  Statistics-centric curriculum
Logical thinking                             Communication
Abstract thinking                            Dealing with uncertainty and ambiguity
Problem-solving                              Probabilistic models
Modelling (mainly deterministic)             Argumentation, deduction
Proof, induction                             Critical thinking
Plotting deterministic graphs from formulas  Reading and creating tables and graphs from data
I actually think you also learn many of the calc-centric skills in the stats-centric curriculum, but I wanted to look even-handed.

Implementation issues

Benjamin suggests, with charming optimism, that the new focus would be “easy to implement and inexpensive.”  I have been a very interested observer in the implementation of the new statistics curriculum in New Zealand. It has not happened easily, being inexpensive has been costly, and there has been fallout. Teachers from other countries (of which there are many in mathematics teaching in NZ) have expressed amazement at how much the NZ teachers accept with only murmurs of complaint. We are a nation with a “can do” attitude, who, by virtue of small population and a one-tier government, can be very flexible. So long as we refrain from following the follies of our big siblings, the UK, US and Australia, NZ has managed to have a world-class education system. And when a new curriculum is implemented, though there is unrest and stress, there is seldom outright rebellion.

In my business, I get the joy of visiting many schools and talking with teachers of mathematics and statistics. I am fascinated by the difference between schools, which is very much a function of the head of mathematics and principal. Some have embraced the changes in focus, and are proactively developing pathways to help all students and teachers to succeed. Others are struggling to accept that statistics has a place in the mathematics curriculum, and put the teachers of statistics into a ghetto where they are punished with excessive marking demands.

The problem is that the curriculum change has been done “on the cheap”. As well as being small and nimble, NZ is not exactly rich. The curriculum change needed more advisors, more release time for teachers to develop and more computer power. These all cost. And then you have the problem of “me too” from other subjects who have had what they feel are similar changes.

And this is not really embracing a full stats-centric curriculum. Primary school teachers need training in probability and statistics if we are really to implement Benjamin’s idea fully. The cost here is much greater as there are so many more primary school teachers. It may well take a generation of students to go through the curriculum and return as teachers with an improved understanding.

Computers make it possible

Without computers the only statistical analysis that was possible in the classroom was trivial. Statistics was reduced to mechanistic and boring hand calculation of light-weight statistics and time-filling graph construction. With computers, graphs and analysis can be performed at the click of a mouse, making graphs a tool, rather than an endpoint. With computing power available real data can be used, and real problems can be addressed. High level thinking is needed to make sense and judgements and to avoid wrong conclusions.

Conversely, the computer has made much of calculus superfluous. With programs that can bash their way happily through millions of iterations of a heuristic algorithm, the need for analytic methods is seriously reduced. When even simple apps on an iPad can solve an algebraic equation, and Excel can use “What if” to find solutions, the need for algebra is also questionable.

Efficient citizens

In H.G. Wells’ popular but misquoted words, efficient citizenry calls for the ability to make sense of data. As the science-fiction writer that he was, he foresaw the masses of data that would be collected and available to the great unwashed. The levelling nature of the web has made everyone a potential statistician.

According to the engaging new site from the ASA, “This is statistics”, statisticians make a difference, have fun, satisfy curiosity and make money. And these days they don’t all need to be good at calculus.

Let’s start redesigning our pyramid.