Data for teaching – real, fake, fictional

There is a push for teachers and students to use real data in learning statistics. In this post I am going to address the benefits and drawbacks of different sources of real data, and make a case for the use of good fictional data as part of a statistical programme.

Here is a video introducing our fictional data set of 180 or 240 dragons, so you know what I am referring to.

Real collected, real database, trivial, fictional

There are two main types of real data. There is the real data that students themselves collect and there is real data in a dataset, collected by someone else, and available in its entirety. There are also two main types of unreal data. The first is trivial and lacking in context and useful only for teaching mathematical manipulation. The second is what I call fictional data, which is usually based on real-life data, but with some extra advantages, so long as it is skilfully generated. Poorly generated fictional data, as often found in case studies, is very bad for teaching.

Focus

When deciding what data to use for teaching statistics, it matters what you are trying to teach. If you are simply teaching how to add up 8 numbers and divide the result by 8, then you are not actually doing statistics, and trivial fake data will suffice. Statistics only exists when there is a context. If you want to teach about the statistical enquiry process, then having the students genuinely involved at each stage of the process is a good idea. If you particularly want to teach about fitting a regression line, you generally want multiple examples for students to work through, and it would be helpful for at least one of them to show a linear relationship.

I read a very interesting article in “Teaching Children Mathematics” entitled “Practical Problems: Using Literature to Teach Statistics”. The authors, Hourigan and Leavy, used a children’s book to generate data on the number of times different characters appeared. But what I liked most was that they addressed the need for a “driving question”. In this case the question was provided by a pre-school teacher who could only afford to buy one puppet for the book, and wanted to know which character appears the most in the story. The children practised collecting data as the story was read aloud, so they had their own data to analyse.

Let’s have a look at the different pros and cons of student-collected data, provided real data, and high-quality fictional data.

Collecting data

When we want students to experience the process of collecting real data, they need to collect real data. However, collecting real data is time consuming, and probably not necessary every year. Student data collection can be simulated by a program such as The Islands, which I wrote about previously. Data students collect themselves is much more likely to have errors in it, or be “dirty” (which is a good thing). When students are only given clean datasets, such as those usually provided with textbooks, they do not learn the skills of deciding what to do with an errant data point. Fictional databases can also have dirty data generated into them. The fictional inhabitants of The Islands sometimes lie, and often refuse to give consent for data collection.

Motivation

One of the species of dragons included in our database

I have heard that after a few years of school, graphs about cereal preference, number of siblings and type of pet get a little old. These topics, relating to the students, are motivating at first, but often there is no purpose to the investigation other than to get data for a graph. Students need to move beyond their own experience and are keen to try something new. Data provided in a database can be motivating, if carefully chosen. There are opportunities to use databases that encourage awareness of social justice, the environment and politics. Fictional data must be motivating or there is no point! We chose dragons as a topic for our first set of fictional data, as dragons are interesting to boys and girls of most ages.

A meaningful question

Here I refer again to that excellent article that talks about a driving question. There needs to be a reason for analysing the data. Maybe there is concern about the food provided at the tuck shop, and whether healthy alternatives are needed. Or can the question be tied into another area of the curriculum, such as which type of bean plant grows faster? Or can we increase the germination rate of seeds? The Census@school data has the potential for driving questions, but they probably need to be helped along. For existing datasets the driving question used by students might not be the same as the one (if any) driving the original collection of data. Sometimes that is because the original purpose is not ‘motivating’ for the students or not at an appropriate level. If you can’t find or make up a motivating, meaningful question, the database is not appropriate. For our fictional dragon data, we have developed two scenarios – vaccinating for Pacific Draconian flu, and building shelters to make up for the deforestation of the island. With the vaccination scenario, we need to know about behaviour and size. For the shelter scenario we need to make decisions based on size, strength, behaviour and breath type. There is potential for a number of other scenarios that will also create driving questions.

Getting enough data

It can be difficult to get enough data for effects to show up. When students are limited to their class or family, this limits the number of observations. Only some databases have enough observations in them. There is no such problem with fictional databases, as you can just generate as much data as you need! There are special issues with regard to teaching about sampling, where you would want a large database with constrained access, like the Islands data, or the use of cards.

Variables

A problem with the data students collect is that it tends to be categorical, which limits the types of analysis that can be used. In databases, it can also be difficult to find measurement level data. In our fictional dragon database, we have height, strength and age, which all take numerical values. There are also four categorical variables. The Islands database has a large number of variables, both categorical and numerical.

Interesting Effects

Though it is good for students to understand that quite often there is no interesting effect, we would like students to have the satisfaction of finding interesting effects in the data, especially at the start. Interesting effects can be particularly exciting if the data is real, and they can apply their findings to the real world context. Student-collected data is risky in terms of finding any noticeable relationships. It can be disappointing to do a long and involved study and find no effects. Databases from known studies can provide good effects, but unfortunately the variables with no effect tend to be left out of the databases, giving a false sense that there will always be effects. When we generate our fictional data, we make sure the relationships we want are present, with enough interaction and noise to keep things realistic. This is a highly skilled process, honed by decades of making up data for student assessment at university. (Guilty admission.)
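To show what I mean by building relationships and noise into fictional data, here is a minimal sketch in Python of the general idea. The variable names echo the dragon example, but the species, numbers and effect sizes are entirely made up for illustration – they are not the values used in our actual dataset.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(42)
n = 240  # number of fictional dragons

# A categorical variable with a deliberate group effect (illustrative values only)
species = rng.choice(["mountain", "swamp", "coastal"], size=n)
species_effect = {"mountain": 15, "swamp": 0, "coastal": 5}

# Numerical variables: strength depends on height plus a species effect plus noise,
# so the relationship is there but not embarrassingly obvious
height = rng.normal(loc=120, scale=20, size=n)     # cm
noise = rng.normal(loc=0, scale=12, size=n)
strength = 0.6 * height + np.array([species_effect[s] for s in species]) + noise

dragons = pd.DataFrame({"species": species,
                        "height": height.round(1),
                        "strength": strength.round(1)})
print(dragons.head())
```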

Ethics

There are ethical issues to be addressed in the collection of real data from people the students know. Informed consent should be obtained, and questions need thorough vetting. Young students (and not so young) can be damagingly direct in their questions. You may need to explain that it can be upsetting for people to be asked if they have been beaten or bullied. When using fictional data that may appear real, such as the Islands data, it is important for students to be aware that the data is not real, even though it is based on real effects. This was one of the reasons we chose to build our first database on dragons, as we hope that will remove any concerns about whether the data is real or not!

The following table summarises the post.

| | Real data collected by the students | Real existing database | Fictional data (The Islands, Kiwi Kapers, Dragons, Desserts) |
| --- | --- | --- | --- |
| Data collection | Real experience | Nil | Sometimes |
| Dirty data | Always | Seldom | Can be controlled |
| Motivating | Can be | Can be | Must be! |
| Enough data | Time consuming, difficult | Hard to find | Always |
| Meaningful question | Sometimes; can be trivial | Can be difficult | Part of the fictional scenario |
| Variables | Tend towards nominal | Often too few variables | Generate as needed |
| Ethical issues | Often | Usually fine | Need to manage reality |
| Effects | Unpredictable | Can be obvious or trivial, or difficult | Can be managed |

What does it mean to understand statistics?

It is possible to get a passing grade in a statistics paper by putting numbers into formulas and words into memorised phrases. In fact I suspect that this is a popular way for students to make their way through a required and often unwanted subject.

Most teachers of statistics would say that they would like students to understand what they are doing. This was a common sentiment expressed by participants in the excellent MOOC, Teaching statistics through data investigations (which is running again from January to May 2016).

Understanding

This makes me wonder what it means for students to understand statistics. There are many levels to understanding things. The concept of understanding has many nuances. If a person understands English, it means that they can use English with proficiency. If they are native speakers they may have little understanding of how grammar works, but they can still speak with correct grammar. We talk about understanding how a car works. I have no idea how a car works, apart from some idea that it requires petrol and the pistons go really, really fast. I can name parts of a car engine, such as distributor and drive shaft. But that doesn’t stop me from driving a car.

Understanding statistics

I propose that when we talk about teaching students to understand statistics, we want our students to know why they are doing something, and have an idea of how it works. Students also need to be fluent in the language of statistics. I would not expect any student of an introductory or high school statistics class to be able to explain how least squares regression works in terms of matrix algebra, but I would expect them to have an idea that the fitted line in a bivariate plot is a model that minimises the squared error terms. I’m not sure anyone needs to know why “degrees of freedom” are called that – or even really what degrees of freedom do. These days computer packages look after degrees of freedom for us. We DO need to understand what a p-value is, and what it is telling us. For many people it is not necessary to know how a p-value is calculated.
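For anyone who wants to see the “minimises the squared error terms” idea in action, here is a small illustrative sketch in Python with made-up data. It fits a least squares line and checks that a nearby line has a larger residual sum of squares; the numbers are invented purely for demonstration.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, size=30)
y = 2.0 + 1.5 * x + rng.normal(0, 2, size=30)   # invented bivariate data

# Least squares fit: slope and intercept of the fitted line
slope, intercept = np.polyfit(x, y, deg=1)

def rss(a, b):
    """Residual sum of squares for the line y = a + b*x."""
    return np.sum((y - (a + b * x)) ** 2)

print("fitted line: y = %.2f + %.2f x" % (intercept, slope))
print("RSS of fitted line:   %.1f" % rss(intercept, slope))
print("RSS of a nearby line: %.1f" % rss(intercept + 0.5, slope))  # always at least as large
```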

Ways to teach statistics

There are several approaches to teaching statistics. The approach needs to be tailored to the students and the context of the course. I prefer a hands-on, conceptual approach rather than a mathematical one. In current literature and practice there is a push for learning through investigations, often based around the statistical inquiry cycle. The problem with one long project is that students don’t get opportunities to apply principles in different situations, in a way that helps transfer of learning to new contexts. There are some people who still teach statistics through the mathematical formulas, but I fear they are missing the opportunity to help students really enjoy statistics.

I do not claim to have all the answers, but we did discover one way to help students learn, alongside other methods. This approach is to use a short video, followed by a ten-question true/false quiz. The quiz serves to reinforce and elaborate on concepts taught in the video, challenge students’ misconceptions, and help students become more familiar with the vocabulary and terminology of statistics. The quizzes we develop draw on a pool of questions that are randomised, so students can try multiple times, which seems to help understanding.

This short and entertaining video gives an illustration of how you can use videos and quizzes to help students learn difficult concepts.

And here is a link to a listing of all our videos and how you can get access to them. Statistics Learning Centre Videos

We have just started a newsletter letting people know of new products and hints for teaching. You can sign up here. Sign up for newsletter

The normal distribution – three tricky bits

There are several tricky things about teaching and understanding the normal distribution, and in this post I’m going to talk about three of them. They are the idea of a model, the limitations of the normal distribution, and the idea of the probability being the area under the graph.

It’s a model!

When people hear the term distribution, they tend to think of the normal distribution. It is an appealing idea, and remarkably versatile. The normal distribution is an appropriate model for the outcome of many natural, manufacturing and human endeavours. However, it is only a model, not a rule. But sometimes the way we talk about things as “being normally distributed” can encourage incorrect thinking.

This problem can be seen in exam questions about the application of the normal distribution. They imply that the normal distribution controls the universe.

Here are examples of question starters taken from a textbook:

  1. “The time it takes Steve to walk to school follows a normal distribution with mean 30 minutes…”.
  2. Or “The time to failure for a new component is normally distributed with a mean of…”

This terminology is too prescriptive. There is no rule that says that Steve has to time his walks to school to fit a certain distribution. Nor does a machine create components that purposefully follow a normal distribution with regard to failure time. I remember, as a student, being intrigued by this idea, not really understanding the concept of a model.

When we are teaching, and at other times, it is preferable to say that things are appropriately modelled by a normal distribution. This reminds students that the normal distribution is a model. The above examples could be rewritten as

  1. “The time it takes Steve to walk to school is appropriately modelled using a normal distribution with mean 30 minutes…”.
  2. And “The time to failure for a new component is found to have a distribution well modelled by the normal, with a mean of…”

They may seem a little clumsy, but they send the important message that the normal distribution is an approximation of a random process, not the other way around.

Not everything is normal

It is also important that students do not get the idea that all distributions, or even all continuous distributions, are normal. The uniform and negative exponential distributions are both useful in different circumstances, and look nothing like the normal distribution. And distributions of real quantities can often have many zero values, which make a distribution far from normal-looking.

The normal distribution is great for measurements that cluster around a central value, with fewer and fewer observations as you get further from the mean in either direction. I suspect most people can understand that in many areas of life you get lots of “average” people or things, and some really good and some really bad. (Except at Lake Wobegon, “where all the women are strong, all the men are good looking, and all the children are above average.”)

However the normal distribution is not useful for modelling distributions that are heavily skewed. For instance, house prices tend to have a very long tail to the right, as there are some outrageously expensive houses, even several times the value of the median. At the same time there is a clear lower bound at zero, or somewhere above it.

Inter-arrival times are not well modelled by the normal distribution, but are well modelled by a negative exponential distribution. If we want to model how long it is likely to be before the next customer arrives, we would not expect there to be as many long times as there are short times, but fewer and fewer arrivals will occur with longer gaps.

Daily rainfall is not well modelled by the normal distribution, as there will be many days of zero rainfall. Amounts claimed in medical insurance, or any kind of insurance, are not going to be well modelled by the normal distribution, as there are zero claims and also the effect of excesses. Guest stay lengths at a hotel would not be well modelled by the normal distribution either: most guests stay one or two days, and the longer the stay, the fewer guests stay that long.
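A quick simulation can make the contrast concrete. This illustrative sketch (with invented numbers, not real rainfall or arrival data) generates a roughly symmetric variable, exponential inter-arrival times and zero-inflated rainfall, and shows how far the mean drifts from the median once the distribution is skewed.

```python
import numpy as np

rng = np.random.default_rng(7)

normal_heights = rng.normal(loc=170, scale=8, size=10_000)   # roughly symmetric
interarrival = rng.exponential(scale=5.0, size=10_000)       # skewed right: many short gaps, few long ones
rainfall = rng.exponential(scale=6.0, size=10_000)
rainfall[rng.random(10_000) < 0.6] = 0.0                     # many days with no rain at all

for name, data in [("heights", normal_heights),
                   ("inter-arrival times", interarrival),
                   ("daily rainfall", rainfall)]:
    # For symmetric data the mean and median agree; for skewed data they pull apart
    print(f"{name:20s} mean = {data.mean():6.2f}   median = {np.median(data):6.2f}")
```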

Area under the graph – idea of sand

The idea of the area under the graph being the probability of an outcome occurring in that range is conceptually challenging. I was recently introduced to the sand metaphor by Holly-Lynne and Todd Lee. If you think of each outcome as a grain of sand (or a pixel in a picture), then you can judge how likely it is to occur by the size of the area that encloses it. I found the metaphor very appealing, and you can read the whole paper here:

Visual representations of empirical probability distributions when using the granular density metaphor
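To make the area-as-probability idea concrete, here is a small sketch of my own (not from the paper) that finds the probability of an outcome landing in a range three ways: from the cumulative distribution function, by integrating the density, and by counting simulated “grains of sand”. The walking-time numbers are invented for illustration.

```python
import numpy as np
from scipy import stats
from scipy.integrate import quad

mu, sigma = 30, 5          # e.g. walking times modelled by a normal distribution
a, b = 25, 35              # what is P(25 < X < 35)?

# Area under the density curve between a and b, from the CDF
exact = stats.norm.cdf(b, mu, sigma) - stats.norm.cdf(a, mu, sigma)

# The same area found by numerical integration of the density
area, _ = quad(lambda x: stats.norm.pdf(x, mu, sigma), a, b)

# The "grains of sand" view: the proportion of simulated outcomes landing in the range
rng = np.random.default_rng(0)
grains = rng.normal(mu, sigma, size=100_000)
proportion = np.mean((grains > a) & (grains < b))

print(exact, area, proportion)   # all approximately 0.68
```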

There are other aspects of the normal distribution that can be challenging. Here is our latest video to help you to teach and learn and understand the normal distribution.

Understanding Statistical Inference

Inference is THE big idea of statistics. This is where people come unstuck. Most people can accept the use of summary descriptive statistics and graphs. They can understand why data is needed. They can see that the way a sample is taken may affect how things turn out. They often understand the need for control groups. Most statistical concepts or ideas are readily explainable. But inference is a tricky, tricky idea. Well actually – it doesn’t need to be tricky, but the way it is generally taught makes it tricky.

Procedural competence with zero understanding

I cast my mind back to my first encounter with confidence intervals and hypothesis tests. I learned how to calculate them (by hand – yes I am that old) but had not a clue what their point was. Not a single clue. I got an A in that course. This is a common occurrence. It is possible to remain blissfully unaware of what inference is all about, while answering procedural questions in exams correctly.

But, thanks to the research and thinking of a lot of really smart and dedicated statistics teachers, we are able to put a stop to that. And we must. Help us make great resources

We need to explicitly teach what statistical inference is. Students do not learn to understand inference by doing calculations. We need to revisit the ideas behind inference frequently. The process of hypothesis testing is counter-intuitive, and so confusing that it spills its confusion over into the concept of inference. Confidence intervals are less confusing, so they are a better intermediate point for understanding statistical inference. But we need to start with the concept of inference itself.

What is statistical inference?

The idea of inference is actually not that tricky if you unbundle the concept from the application or process.

The concept of statistical inference is this –

We want to know stuff about a large group of people or things (a population). We can’t ask or test them all so we take a sample. We use what we find out from the sample to draw conclusions about the population.

That is it. Now was that so hard?

Developing understanding of statistical inference in children

I have found the paper by Makar and Rubin, presenting a “framework for thinking about informal statistical inference”, particularly helpful. In this paper they summarise studies done with children learning about inference. They suggest that “three key principles … appeared to be essential to informal statistical inference: (1) generalization, including predictions, parameter estimates, and conclusions, that extend beyond describing the given data; (2) the use of data as evidence for those generalizations; and (3) employment of probabilistic language in describing the generalization, including informal reference to levels of certainty about the conclusions drawn.” This can be summed up as Generalisation, Data as evidence, and Probabilistic Language.

We can lead into informal inference early on in the school curriculum. The key ideas in the NZ curriculum suggest that “teachers should be encouraging students to read beyond the data. E.g. ‘If a new student joined our class, how many children do you think would be in their family?’” In other words, though we don’t specifically use the terms population and sample, we can conversationally draw attention to what we learn from this set of data, and how that might relate to other sets of data.

Explaining directly to adults

When teaching adults we may use a more direct approach, explaining explicitly alongside experiential learning to develop understanding of inference. We have just completed a video: Understanding Inference. In the video we present three basic ideas condensed from the Five Big Ideas in the very helpful book published by NCTM, “Developing Essential Understanding of Statistics, Grades 9–12” by Peck, Gould, Miller and Zbiek.

Ideas underlying inference

  • A sample is likely to be a good representation of the population.
  • There is an element of uncertainty as to how well the sample represents the population.
  • The way the sample is taken matters.

These ideas help to provide a rationale for thinking about inference, and allow students to justify what has often been assumed or taught mathematically. In addition several memorable examples involving apples, chocolate bars and opinion polls are provided. This is available for free use on YouTube. If you wish to have access to more of our videos than are available there, do email me at n.petty@statslc.com.
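To give a feel for these three ideas, here is a small simulation sketch of my own (not part of the video, and with an invented population) in which random samples usually land close to the true value but vary from sample to sample, while a non-random convenience sample misses badly.

```python
import numpy as np

rng = np.random.default_rng(3)

# A made-up population of 10,000 people; 40% prefer chocolate bars to apples,
# and (for illustration) the chocolate lovers happen to be listed first.
population = np.array([True] * 4000 + [False] * 6000)
print("True population proportion:", population.mean())     # 0.4

# Ideas 1 and 2: random samples are usually close to the truth, but each differs a little
for i in range(5):
    sample = rng.choice(population, size=200, replace=False)
    print(f"Random sample {i + 1}: estimate = {sample.mean():.2f}")

# Idea 3: the way the sample is taken matters.
# Taking the first 200 people on the list (a convenience sample) gives a badly biased estimate.
print("Convenience sample estimate:", population[:200].mean())   # 1.0, nowhere near 0.4
```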

Please help us develop more great resources

We are currently developing exciting, innovative materials to help students at all levels of the curriculum to understand and enjoy statistical analysis. We would REALLY appreciate it if any readers here today would help us out by answering this survey about fast food and dessert. It will take 10 minutes at most. We don’t mind what country you are from, and will do the currency conversions. In a few months I will let you know how we got on. We would love you to forward it to your friends and students to fill out as well – the more the merrier! It is an example of a well-designed questionnaire, with a meaningful purpose.


Summarising with Box and Whisker plots

In the Northern Hemisphere, it is the start of the school year, and thousands of eager students are beginning their study of statistics. I know this because this is the time of year when lots of people watch my video, Types of Data. On 23rd August the hits on the video bounced up out of their holiday slumber, just as they do every year. They gradually dwindle away until the end of January when they have a second jump in popularity, I suspect at the start of the second semester.

One of the first topics in many statistics courses is summary statistics. The greatest hits of summary statistics tend to be the mean and the standard deviation. I’ve written previously about what a difficult concept a mean is, and then another post about why the median is often preferable to the mean. In that one I promised a video. Over two years ago – oops. But we have now put these ideas into a video on summary statistics. Enjoy! In 5 minutes you can get a conceptual explanation of summary measures of position (also known as location or central tendency).


I was going to follow up with a video on spread, and started to think about range, interquartile range, mean absolute deviation, variance and standard deviation. So I decided instead to make a video on the wonderful boxplot, again comparing the shoe-owning habits of male and female students at a university in New Zealand.

Boxplots are great. When you combine them with dotplots, as done in iNZight and various other packages, they provide a wonderful way to get an overview of the distribution of a sample. More importantly, they make it easy to compare two samples or two groups within a sample. A distribution on its own has little meaning.

John Tukey was the first to make a box and whisker plot out of the 5-number summary way back in 1969. This was not long before I went to high school, so I never really heard about them until many years later. Drawing them by hand is less tedious than drawing a dotplot by hand, but still time consuming. We are SO lucky to have computers that make it possible to create graphs at the click of a mouse.

Sample distributions and summaries are not enormously interesting on their own, so I would suggest introducing boxplots as a way to compare two samples. Their worth then is apparent.

A colleague recently pointed out an interesting confusion and distinction. The interquartile range is the distance between the upper quartile and the lower quartile. The box in the box plot contains the middle 50% of the values in the sample. It is tempting for people to point this out and miss the point that the interquartile range is a good resistant measure of spread for the WHOLE sample. (Resistant means that it is not unduly affected by extreme values.) The range is a poor summary statistic as it is so easily affected by extreme values.
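A tiny made-up example shows this resistance: adding one extreme shoe-owner changes the range dramatically but barely moves the interquartile range. (The numbers below are invented for illustration, not the real shoe data.)

```python
import numpy as np

shoes = np.array([2, 3, 4, 4, 5, 6, 6, 7, 8, 10])      # made-up numbers of pairs of shoes
shoes_with_outlier = np.append(shoes, 115)              # one student owns 115 pairs

def spread(data):
    q1, q3 = np.percentile(data, [25, 75])
    return {"range": data.max() - data.min(), "IQR": q3 - q1}

print("Without outlier:", spread(shoes))
print("With outlier:   ", spread(shoes_with_outlier))   # range balloons, IQR barely changes
```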

And now we come to our latest video, about the boxplot. This one is four and a half minutes long, and also uses the shoe sample as an example. I hope you and your students find it helpful. We have produced over 40 statistics videos, some of which are available for free on YouTube. If you are interested in using our videos in your teaching, do let us know and we will arrange access to the remainder of them.

Engaging students in learning statistics using The Islands

Three Problems and a Solution

Modern teaching methods for statistics have gone beyond the mathematical calculation of trivial problems. Computers enable larger-scale studies, bringing reality to the subject, but this is not without its own problems.

Problem 1: Giving students experience of the whole statistical process

There are many reasons for students to learn statistics by running their own projects, following the complete statistical enquiry process: posing a problem, planning the data collection, collecting and cleaning the data, analysing the data and drawing conclusions that relate back to the original problem. However, individual projects can be both time consuming and risky, as the quality of the report, and the resultant grade, can depend on the quality of the data collected, which may be beyond the control of the student.

The Statistical Enquiry Cycle, which underpins the NZ statistics curriculum.

Problem 2: Giving students experience of different types of sampling

If students are given an existing database and then asked to sample from it, this can be confusing for students and sends the misleading message that we would not want to use all the data available. But physically taking a sample, based on a sampling frame, can be prohibitively time consuming.

Problem 3: Giving students experience conducting human experiments

The problem here is obvious. It is not ethical to perform experiments on humans simply to learn about performing experiments.

An innovative solution: The Islands virtual world

I recently ran an exciting workshop for teachers on using The Islands. My main difficulty was getting the participants to stop doing the assigned tasks long enough to discuss how we might implement this in their own classrooms. They were too busy clicking around different villages and people, finding subjects of the right age and getting them to run down a 15-degree slope – all without leaving the classroom.

The Island was developed by Dr Michael Bulmer from the University of Queensland and is a synthetic learning environment. The Islands, the second version, is a free, online, virtual human population created for simulating data collection.

The synthetic learning environment overcomes practical and ethical issues with applied human research, and is used for teaching students at many different levels. For a login, email james.baglin @ rmit.edu.au (without the spaces in the email address).

There are now approximately 34,000 inhabitants of the Islands, who are born, have families (or not) and die in a speeded up time frame where 1 Island year is equivalent to about 28 earth days. They each carry a genetic code that affects their health etc. The database is dynamic, so every student will get different results from it.

Some of the Islanders

Two magnificent features

To me, one of the two best features is the difficulty of acquiring data on individuals. It takes time for students to collect samples, as each subject must be asked individually, and the results recorded in a database. There is no easy access to the population. This is still much quicker than asking people in real life (or “irl” as it is known on social media). It becomes obvious that you need to sample and to have a good sampling plan, and you need to work out how to record and deal with your data.

The other outstanding feature is the ability to run experiments. You can get a group of subjects and split them randomly into treatment and control groups. Then you can perform interventions, such as making them sit quietly or run about, or drink something, and then evaluate their performance on some other task. This is without requiring real-life ethical approval and informed consent. However, in a touch of reality the people of the Islands sometimes lie, and they don’t always give consent.

There are over 200 tasks that you can assign to your people, covering a wide range of topics. They include blood tests, urine tests, physiology, food and drinks, injections, tablets, mental tasks, coordination, exercise, music, environment etc. The tasks occur in real (reduced) time, so you are not inclined to include more tasks than are necessary. There is also the opportunity to survey your Islanders, with more than fifty possible questions. These also take time to answer, which encourages judicious choice of questions.

Uses

In the workshop we used the Islands to learn about sampling distributions. First each teacher took a sample of one male and one female and timed them running down a hill. We made (fairly awful) dotplots on the whiteboard using sticky notes with the individual times on them. Then each teacher took a sample and found the median time. We used very small samples of 7 each as we were constrained by time, but larger samples would be preferable. We then looked at the distributions of the medians and compared that with the distribution of our first sample. The lesson was far from polished, but the message was clear, and it gave a really good feel for what a sampling distribution is.
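For anyone wanting to recreate something like this exercise without access to The Islands, here is a rough sketch with invented running times: take many samples of size 7, record each sample median, and compare the spread of the medians with the spread of the individual times.

```python
import numpy as np

rng = np.random.default_rng(11)

# Made-up population of times (seconds) to run down the hill
population = rng.normal(loc=12.0, scale=2.5, size=5_000)

# Each "teacher" takes a sample of 7 and records the sample median
medians = np.array([np.median(rng.choice(population, size=7, replace=False))
                    for _ in range(200)])

print("SD of individual times:", round(population.std(), 2))
print("SD of sample medians:  ", round(medians.std(), 2))   # noticeably smaller
```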

Within the New Zealand curriculum, we could also use The Islands to learn about bivariate relationships, sampling methods and randomised experiments.

In my workshop I had educators from across the age groups, and a primary teacher assured me that Year 4 students would be able to make use of this. Fortunately there is a maturity filter so that you can remove options relating to drugs and sexual activity.

James Baglin from RMIT University has successfully trialled the Island with high school students and psychology research methods students. The owners of the Island generously allow free access to it. Thanks to James Baglin, who helped me prepare this post.

Here are links to some interesting papers that have been written about the use of The Islands in teaching. We are excited about the potential of this teaching tool.

Huynh, Baglin, & Bedford (2014). Improving the attitudes of high school students towards statistics: An Island-based approach. In Proceedings of ICOTS9.

Baglin, Reece, Bulmer, & Di Benedetto (2013). Simulating the data investigative cycle in less than two hours: Using a virtual human population, cloud collaboration and a statistical package to engage students in a quantitative research methods course.

Bulmer, M. (2010). Technologies for enhancing project assessment in large classes. In C. Reading (Ed.), Proceedings of the Eighth International Conference on Teaching Statistics, July 2010, Ljubljana, Slovenia. Retrieved from http://www.stat.auckland.ac.nz/~iase/publications/icots8/ICOTS8_5D3_BULMER.pdf

Bulmer, M., & Haladyn, J. K. (2011). Life on an Island: A simulated population to support student projects in statistics. Technology Innovations in Statistics Education, 5(1). Retrieved from http://escholarship.org/uc/item/2q0740hv

Baglin, J., Bedford, A., & Bulmer, M. (2013). Students’ experiences and perceptions of using a virtual environment for project-based assessment in an online introductory statistics course. Technology Innovations in Statistics Education, 7(2), 1–15. Retrieved from http://www.escholarship.org/uc/item/137120mt

Framework for statistical report-writing

I’ve been pondering what needs to happen for a student to be able to produce a good statistical report. This has been prompted by an informal survey I conducted among teachers of high school statistics in New Zealand. Because of the new curriculum and assessments, many maths teachers are feeling out of their depth, and wondering how to help their students. I asked teachers what they found most challenging in teaching statistics. By far the most common response was related to literacy or report-writing.

Here is a sample of teacher responses when asked what they find most challenging:

  • Teaching students how to write.
  • Helping students present their thoughts and ideas in a written report.
  • Writing the reports for assessment – making this interesting.
  • Helping students use the statistical language required in assessments.
  • Getting students to adequately analyse and write up a report.
  • Trying to think more like an English teacher than a Mathematics teacher.

These comments tend to focus on the written aspect of the report, but I do wonder if the inability to write a coherent report is also an indicator of some other limitations.

The following diagram outlines the necessary skills and knowledge to complete a good statistical report. In addition the student needs the character traits of critical thinking, courage and persistence in order to take the report through to completion.

A framework for analysing what needs to happen in the production of a good statistical report.

Basic Literacy

Though not sufficient on their own, literacy skills are certainly necessary. It is rather obvious that being able to write is a prerequisite to writing a report. In particular we need to be able to write in formal language. One common problem is the tendency to omit verbs, thus leaving sentences incomplete.

Understand concepts

Students must understand correctly the statistical concepts underlying the report. For example, if they are not clear what the median, mean and quartiles express, it is difficult to write convincingly about them, or indeed to report them using correct language. When students are unable to write about a concept, it may indicate that their understanding is weak.

Be familiar with graphs and output

These days students do not need to draw their own graphs or calculate statistics by hand, but do need to know what graphs and analysis are appropriate for their particular data and research question. And they need to know how to read and interpret the graphs.

Know what to look for in graphs and output

This differs from the previous aspect in that it requires a deeper acquaintance with the output. For example, in a regression, students need to know to look for heteroscedasticity, or outliers with undue influence. In time series, students need to know to look for unusual spikes that occur outside the regular pattern. In comparing boxplots, students look at overlap. This familiarity can only come through practice.

Understand the importance of context

What is an important feature in one context may not be so in a different context. This can be difficult for students and instructors who are at home with the purity of mathematics, in which the context can often be ignored or assumed away. Unless students appreciate the importance of context, often embedded within the statistical enquiry process, they are unlikely to invest time in understanding the context and looking at the relationship between the model and the real-world problem.

Understand the context

Sometimes the context is easily understood by students, related to their daily life or interests such as sport, music or movies. However there are times when students need to become more conversant with an unfamiliar context. This is entirely authentic to the life of a statistician, particularly a consulting statistician. We are often faced with unfamiliar contexts. Over the years I have become more knowledgeable about areas as diverse as hand injuries, scientific expeditions to Antarctica, bank branch performance, prostate cancer screening and chicken slaughtering methods. Even though we may work with an expert in the field of the investigation, we must develop a working knowledge of the field and the terminology ourselves.

Be familiar with terminology

Part of statistical literacy is to be able to use the language of statistics. There are words that have particular meaning in a statistical context, such as random, significant, error and population. It is not acceptable to use statistical terms incorrectly in a statistical report. Statistics is a peculiar mixture of hand-waving and precision, and we need to know when each is needed. There is also a fair degree of equivocation, and students should be familiar with expressions such as “it appears…”, “there is evidence that”, and “a possible implication might be…”

These other aspects lead into the three main ideas:

Know what to include and exclude

This is where checklists can come in handy for students to make sure they have all the relevant details, and that they do not include unnecessary details. My experience is that there is a tendency for students to write a narrative of how they analysed the data, step by painful step. (I call it “what I did in the holidays.”) Students can also gain from seeing good exemplars that provide the results, without unnecessary detail about the process.

Express correct ideas in appropriate written language

This is probably the most obvious requirement for a good report. This comes from basic literacy, knowing what to look for, familiarity with the terminology and understanding of the concepts.

Relate the findings to the context

Our report must answer the investigative question or research questions. Each of the statistical findings must be related to the context from which the data has been taken. This must be done with the right amount of caution, not with bold assertions about results that the data only hints at.

If these three are happening well, then a good written report is on its way!

Developing skills

So how do we make sure students have all the requisite skills and knowledge to create a good statistical report? To start with, we can use the framework provided here to diagnose where there may be gaps in the students’ knowledge or skills. Students themselves can use this as a way to find out where their weaknesses may be.

Then students must read, talk and write, over and over. Read exemplars, talk about graphs and output and write complete sentences in the classroom. All data must be real, so that students get practice at drawing conclusions about real people and things.

This framework is a work in progress and I would be pleased to have suggestions for improvement.

Learning to teach statistics, in a MOOC

I am participating in a MOOC, Teaching statistics through data investigations. A MOOC is a fancy name for an online, free, correspondence course.  The letters stand for Massive Open Online Course. I decided to enrol for several reasons. First I am always keen to learn new things. Second, I wanted to experience what it is like to be a student in a MOOC. And third I wanted to see what materials we could produce that might help teachers or learners of statistics in the US. We are doing well in the NZ market, but it isn’t really big enough to earn us enough money to do some of the really cool things we want to do in teaching statistics to the masses.

I am now up to Unit 4, and here is what I have learned so far:

Motivation and persistence

It is really difficult to stay motivated even in the best possible MOOC. Life gets in the way and there is always something more pressing than reading the materials, taking part in discussions and watching the videos. I looked up the rate of completion for MOOCs, and this article from IEEE puts the completion rate at about 5%. Obviously it will differ between MOOCs, depending on the content, the style and the reward. I have found it best to schedule time for the MOOC each week, or it just doesn’t happen.

I know more than I thought I did

It is reassuring to find out that I really do have some expertise. (This may be a bit of a worry to those of you who regularly read my blog and think I am an expert in teaching statistics.) My efforts to read and ponder, to discuss and to experiment have meant that I do know more than teachers who are just beginning to teach statistics. Phew!

The investigative process matters

I finally get the importance of the Statistical Enquiry Cycle (PPDAC in New Zealand) or Statistical Investigation Cycle (Pose, Collect, Analyse, Interpret in the US). I sort of got it before, but now it is falling into place. In the old-fashioned approach to teaching statistics, almost all the emphasis was on the calculations. There would be questions asking students to find the mean of a set of numbers, with no context. This is not statistics, but an arithmetic exercise. Unless a question is embedded in the statistical process, it is not statistics. There needs to be a reason, a question to answer, real data and a conclusion to draw. Every time we develop a teaching exercise for students, we need to think about where it sits in the process, and provide the context.

Brilliant questions

I was happy to participate in the LOCUS quiz to evaluate my own statistical understanding. I was relieved to get 100%. But I was SO impressed with the questions, which reflected the work and thinking that went into producing them. I understand how difficult it is to write questions to teach and assess statistical understanding, as I have written hundreds of them myself. The LOCUS questions are great questions. I will be writing some of my own following their style. I loved the ones that asked what would be the best way to improve an experimental design. Inspired!

It’s easier to teach the number stuff

I’m sure I knew this, but seeing so many teachers say it cemented it in. Teacher after teacher commented that teaching procedure is so much easier than teaching concepts. Testing knowledge of procedure is so much easier than assessing conceptual understanding. Maths teachers are really good at procedure. That fluffy, hand-waving meaning stuff is just…difficult. And it all depends. Every answer depends! The implication of this is that we need to help teachers become more confident in helping students to learn the concepts of statistics. We need to develop materials that focus on the concepts. I’m pretty happy that most of my videos do just that – my “Understanding Confidence Intervals” is possibly the only video on confidence intervals that does not include a calculation or procedure.

You learn from other participants

I’ve never been keen on group work. I suspect this is true of most over-achievers. We don’t like to work with other people on assignments as they might freeload, or worse – drag our grade down. Over the years I’ve forced students to do group assignments, as they learn so much more in the process. And I hate to admit that I have also learned more when forced to do group assignments. It isn’t just about reducing the marking load. In this MOOC we are encouraged to engage with other participants through the discussion forums. This is an important part of on-line learning, particularly in a solely on-line platform (as opposed to blended learning). I just love reading what other people say. I get ideas, and I understand better where other people are coming from.

I have something to offer

It was pretty exciting to see my own video used as a resource in the course, and to hear from the instructor how she loves our Statistics Learning Centre videos.

What now?

I still have a few weeks to run on the MOOC and I will report back on what else I learn. And then in late May I am going to USCOTS (US Conference on Teaching Statistics). It’s going to cost me a bit to get there, living as I do in the middle of nowhere in Middle Earth. But I am thrilled to be able to meet with the movers and shakers in US teaching of statistics. I’ll keep you posted!

Divide and destroy in statistics teaching

A reductionist approach to teaching statistics destroys its very essence

I’ve been thinking a bit about systems thinking and reductionist thinking, especially with regard to statistics teaching and mathematics teaching. I used to teach a course on systems thinking, with regard to operations research. Systems thinking is concerned with the whole. The parts of the system interact and cannot be isolated without losing the essence of the system. Modern health providers and social workers realise that a child is a part of a family, which may be a part of a larger community, all of which have to be treated if the child is to be helped. My sister, a physio, always finds out about the home background of her patient, so that any treatment or exercise regime will fit in with their life. Reductionist thinking, by contrast, reduces things to their parts, and isolates them from their context.

Reductionist thinking in teaching mathematics

Mathematics teaching lends itself to reductionist thinking. You strip away the context, then break a problem down into smaller parts, solve the parts, and then put it all back together again. Students practise solving straight-forward problems over and over to make sure they can do it right. They feel that a column of little red ticks is evidence that they have learned something correctly. As a school pupil, I loved the columns of red ticks. I have written about the need for drill in some aspects of statistics teaching and learning, and can see the value of automaticity – or the ability to answer something without having to think too hard. That can be a little like learning a language – you need to be automatic on the vocabulary and basic verb structures. I used to spend my swimming training laps conjugating Latin verbs – amo, amas, amat (breathe), amamus, amatis, amant (breathe). I never did meet any ancient Romans to converse with, to see if my recitation had helped any, but five years of Latin vocab is invaluable in pub quizzes. But learning statistics has little in common with learning a language.

There is more to teaching than having students learn how to get stuff correct. Learning involves the mind, heart and hands. The best learning occurs when students actually want to know the answer. This doesn’t happen when context has been removed.

I was struck by Jo Boaler’s “The Elephant in the Classroom”, which opened my eyes to how monumentally dull many mathematics lessons can be for so many people. These people are generally the ones who are not satisfied by columns of red ticks, and either want to know more and ask questions, or want to be somewhere else. Holistic lessons, which involve group work, experiential learning, multiple solution methods and even multiple solutions, have been shown to improve mathematics learning and results, and have lifelong benefits for the students. The book challenged many of my ingrained feelings about how to teach and learn mathematics.

Teach statistics holistically, joyfully

Teaching statistics is inherently suited for a holistic approach. The problem must drive the model, not the other way around. Teachers of mathematics need to think more like teachers of social sciences if they are to capture the joy of teaching and learning statistics.

At one time I was quite taken with an approach suggested for students who are struggling, which is to work through a number of examples in parallel, doing one step on each before moving on to the next step. The examples I saw are great, use real data, and the sentences are correct. I can see how that might appeal to students who find the language aspects difficult, and who are interested in writing an assignment that will get them a passing grade. However, I now have concerns about the approach, and it has made me think again about some of the resources we provide at Statistics Learning Centre. I don’t think a reductionist approach is suitable for the study of statistics.

Context, context, context

Context is everything in statistical analysis. Every time we produce a graph or a numerical result we should be thinking about the meaning in context. If there is a difference between the medians showing up in the graph, and reinforced by confidence intervals that do not overlap, we need to be thinking about what that means about the heart-rate in swimmers and non-swimmers, or whatever the context is. For this reason every data set needs to be real. We cannot expect students to want to find real meaning in manufactured data. And students need to spend long enough in each context in order to be able to think about the relationship between the model and the real-life situation. This is offset by the need to provide enough examples from different contexts so that students can learn what is general to all such models, and what is specific to each. It is a question of balance.
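As a sketch of what such a comparison might look like in practice, here is an illustrative example with simulated heart rates rather than real data, and a simple percentile bootstrap standing in for whatever interval method a class might use. Whatever the numbers say, the output only means something once it is interpreted in terms of swimmers and non-swimmers.

```python
import numpy as np

rng = np.random.default_rng(5)

# Simulated resting heart rates (beats per minute) for two groups
swimmers = rng.normal(loc=62, scale=7, size=40)
non_swimmers = rng.normal(loc=70, scale=8, size=40)

def bootstrap_median_ci(data, reps=5_000, level=0.95):
    """Percentile bootstrap confidence interval for the median."""
    medians = [np.median(rng.choice(data, size=len(data), replace=True)) for _ in range(reps)]
    lower = (1 - level) / 2 * 100
    return np.percentile(medians, [lower, 100 - lower])

print("Swimmers median:     %.1f, 95%% CI %s" % (np.median(swimmers), bootstrap_median_ci(swimmers)))
print("Non-swimmers median: %.1f, 95%% CI %s" % (np.median(non_swimmers), bootstrap_median_ci(non_swimmers)))
```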

Keep asking questions

In my effort to help improve teaching of statistics, we are now developing teaching guides and suggestions to accompany our resources. I attend workshops, talk to teachers and students, read books, and think very hard about what helps all students to learn statistics in a holistic way. I do not begin to think I have the answers, but I think I have some pretty good questions. The teaching of statistics is such a new field, and so important. I hope we all keep asking questions about what we are teaching, and how and why.

Don’t teach significance testing – Guest post

The following is a guest post by Tony Hak of Rotterdam School of Management. I know Tony would love some discussion about it in the comments. I remain undecided either way, so would like to hear arguments.

GOOD REASONS FOR NOT TEACHING SIGNIFICANCE TESTING

It is now well understood that p-values are not informative and are not replicable. Soon null hypothesis significance testing (NHST) will be obsolete and will be replaced by the so-called “new” statistics (estimation and meta-analysis). This requires that undergraduate courses in statistics already teach estimation and meta-analysis as the preferred way to present and analyze empirical results. If not, then the statistical skills of the graduates from these courses will be outdated on the day they leave school. But it is less evident whether or not NHST (though not preferred as an analytic tool) should still be taught. Because estimation is already routinely taught as a preparation for the teaching of NHST, the necessary reform in teaching will not require the addition of new elements to current programs, but rather the removal of the current emphasis on NHST, or the complete removal of the teaching of NHST from the curriculum. The current trend is to continue the teaching of NHST. In my view, however, the teaching of NHST should be discontinued immediately because it is (1) ineffective, (2) dangerous, and (3) it serves no aim.

1. Ineffective: NHST is difficult to understand and it is very hard to teach it successfully

We know that even good researchers often do not appreciate the fact that NHST outcomes are subject to sampling variation and believe that a “significant” result obtained in one study almost guarantees a significant result in a replication, even one with a smaller sample size. Is it then surprising that our students also do not understand what NHST outcomes do tell us and what they do not tell us? In fact, statistics teachers know that the principles and procedures of NHST are not well understood by undergraduate students who have successfully passed their courses on NHST. Courses on NHST fail to achieve their self-stated objectives, assuming that these objectives include achieving a correct understanding of the aims, assumptions, and procedures of NHST as well as a proper interpretation of its outcomes. It is very hard indeed to find a comment on NHST in any student paper (an essay, a thesis) that is close to a correct characterization of NHST or its outcomes. There are many reasons for this failure, but obviously the most important one is that NHST is a very complicated and counterintuitive procedure. It requires students and researchers to understand that a p-value is attached to an outcome (an estimate) based on its location in (or relative to) an imaginary distribution of sample outcomes around the null. Another reason, connected to their failure to understand what NHST is and does, is that students believe that NHST “corrects for chance” and hence they cannot cognitively accept that p-values themselves are subject to sampling variation (i.e. chance).
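A simple simulation illustrates this sampling variation. The sketch below (an illustration with invented numbers, not taken from the studies cited) repeats the same two-group experiment many times with a genuine underlying effect and shows how widely the p-values vary across replications.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2016)

pvalues = []
for _ in range(20):
    # Two groups of 30, with a genuine difference in population means
    control = rng.normal(loc=100, scale=15, size=30)
    treatment = rng.normal(loc=110, scale=15, size=30)
    pvalues.append(stats.ttest_ind(treatment, control).pvalue)

# Replications of exactly the same experiment give very different p-values
print([round(p, 3) for p in sorted(pvalues)])
```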

2. Dangerous: NHST thinking is addictive

One might argue that there is no harm in adding a p-value to an estimate in a research report and, hence, that there is no harm in teaching NHST, additionally to teaching estimation. However, the mixed experience with statistics reform in clinical and epidemiological research suggests that a more radical change is needed. Reports of clinical trials and of studies in clinical epidemiology now usually report estimates and confidence intervals, in addition to p-values. However, as Fidler et al. (2004) have shown, and contrary to what one would expect, authors continue to discuss their results in terms of significance. Fidler et al. therefore concluded that “editors can lead researchers to confidence intervals, but can’t make them think”. This suggests that a successful statistics reform requires a cognitive change that should be reflected in how results are interpreted in the Discussion sections of published reports.

The stickiness of dichotomous thinking can also be illustrated with the results of a more recent study by Coulson et al. (2010). They presented estimates and confidence intervals obtained in two studies to a group of researchers in psychology and medicine, and asked them to compare the results of the two studies and to interpret the difference between them. It appeared that a considerable proportion of these researchers first used the information about the confidence intervals to make a decision about the significance of the results (in one study) or the non-significance of the results (of the other study) and then drew the incorrect conclusion that the results of the two studies were in conflict. Note that no NHST information was provided and that participants were not asked in any way to “test” or to use dichotomous thinking. The results of this study suggest that NHST thinking can (and often will) be used by those who are familiar with it.

The fact that it appears to be very difficult for researchers to break the habit of thinking in terms of “testing” is, as with every addiction, a good reason for preventing future researchers from coming into contact with it in the first place and, if contact cannot be avoided, for providing them with robust resistance mechanisms. The implication for statistics teaching is that students should first learn estimation as the preferred way of presenting and analyzing research information, and should be introduced to NHST, if at all, only after estimation has become their routine statistical practice.

3. It serves no aim: Relevant information can be found in research reports anyway

Our experience that teaching of NHST consistently fails its own aims (because NHST is too difficult to understand) and the fact that NHST appears to be dangerous and addictive are two good reasons to immediately stop teaching NHST. But there is a seemingly strong argument for continuing to introduce students to NHST, namely that a new generation of graduates will not be able to read the (past and current) academic literature in which authors themselves routinely focus on the statistical significance of their results. It is suggested that someone who does not know NHST cannot correctly interpret outcomes of NHST practices. This argument has no value for the simple reason that it assumes that NHST outcomes are relevant and should be interpreted. But the very reason we are having the current discussion about teaching is that NHST outcomes are at best uninformative (beyond the information already provided by estimation) and at worst misleading or plain wrong. The point all along is that nothing is lost by simply ignoring the information related to NHST in a research report and focusing only on the information provided about the observed effect size and its confidence interval.

Bibliography

Coulson, M., Healy, M., Fidler, F., & Cumming, G. (2010). Confidence Intervals Permit, But Do Not Guarantee, Better Inference than Statistical Significance Testing. Frontiers in Quantitative Psychology and Measurement, 20(1), 37-46.

Fidler, F., Thomason, N., Finch, S., & Leeman, J. (2004). Editors Can Lead Researchers to Confidence Intervals, But Can’t Make Them Think. Statistical Reform Lessons from Medicine. Psychological Science, 15(2): 119-126.

This text is a condensed version of the paper “After Statistics Reform: Should We Still Teach Significance Testing?” published in the Proceedings of ICOTS9.