Spreadsheets, statistics, mathematics and computational thinking

We need to teach all our students how to design, create, test, debug and use spreadsheets. We need to teach this integrated with mathematics, statistics and computational thinking. Spreadsheets can be a valuable tool in many other subject areas including biology, physics, history and geography, thus facilitating integrated learning experiences.

Spreadsheets are versatile and ubiquitous – and most have errors. A web search on “How many spreadsheets have errors?” gives alarming results. The commonly quoted figure is 88%. These spreadsheets with errors are not just little home spreadsheets for cataloguing your Lego collection or planning your next vacation. These spreadsheets with errors involve millions of dollars, and life-affecting medical and scientific research.

Using spreadsheets to teach statistics

Use a spreadsheet to draw graphs

One of the great contributions computers make to statistical analysis is the ability to display graphs of non-trivial sets of data without onerous drawing by hand. In the early 1980s I had a summer job as a research assistant to a history professor. One of my tasks was to create a series of graphs of the imports and exports for New Zealand over several decades, illustrating the effect of the UK joining the Common Market (now the EU). It required fastidious drawing, correcting fluid and considerable time. These same graphs can now be created almost instantaneously, and the requirement has shifted to interpreting these graphs.

Similarly, in the classroom we should not be requiring students of any age to draw statistical graphs by hand. Drawing statistical graphs by hand is a waste of time. Students may enjoy creating the graphs by hand – I understand that – it is rewarding and not cognitively taxing. So is colouring in. The important skill that students need is to be able to read the graph – to find out what it is telling them and what it is not telling them. Their time would be far better spent looking at multiple graphs of different types, and learning how to report and critique them. They also need to be able to decide what graph will best show what they are looking for or communicating. (There will be teachers saying students need to draw graphs by hand to understand them. I’d like to know the evidence for this claim. People have said for years that students need to calculate standard deviation by hand to understand it, and I reject that also.)

At primary school level, the most useful graph is almost always the bar or column chart. These are easily created physically using data cards, or by entering category totals and using a spreadsheet. Here is a video showing just how easy it is.

Use a spreadsheet for statistical calculations

Spreadsheets are also very capable of calculating summary statistics and creating hypothesis tests and confidence intervals. Dedicated statistical packages are better, but spreadsheets are generally good enough. I would also teach pivot-tables as soon as possible, but that is a topic for another day.

Using spreadsheets to teach mathematics

Spreadsheets are so versatile! Spreadsheets help students to understand the concept of a variable. When you write a formula in a cell, you are creating an algebraic formula. Spreadsheets illustrate the need for sensible rounding and numeric display. Use of order of operations and brackets is essential. They can be used for exploring patterns and developing number sense. I have taught algebraic graphing, compared with line fitting using spreadsheets. Spreadsheets can solve algebraic problems. Spreadsheets make clear the concept of mathematics as a model. Combinatorics and Graph Theory are also enabled through spreadsheets. For users using a screenreader, the linear nature of formulas in spreadsheets makes it easier to read.

Using spreadsheets to teach computational thinking

In New Zealand we are rolling out a new curriculum for information technology, including computational thinking. At primary school level, computational thinking includes “[students] develop and debug simple programs that use inputs, outputs, sequence and iteration” (Progress outcome 3, which is signposted to be reached at about Year 7). Later the curriculum includes branching.

In most cases the materials include unplugged activities, and coding using languages and environments such as Scratch or JavaScript. Robots such as Sphero and Lego make it all rather exciting.

All of these ideas can also be taught using a spreadsheet. Good spreadsheet design has clear inputs and outputs. The operations need to be performed in sequence, and iteration occurs when we have multiple rows in a spreadsheet. Spreadsheets need to be correct, robust and easy to use and modify. These are all important principles in coding. Unfortunately too many people have never had the background in coding and program design and thus their spreadsheets are messy, fragile, oblique and error-prone.

When we teach spreadsheets well to our students we are giving them a gift that will be useful for their life.

Experience teaching spreadsheets

I designed and taught a course in quantitative methods for business, heavily centred on spreadsheets. The students were required to use spreadsheets for mathematical and statistical tasks. Many students have since expressed their gratitude that they are capable of creating and using spreadsheets, a skill that has proved useful in employment.

 

Statistical software for worried students

Let’s be honest. Most students of statistics are taking statistics because they have to. I asked my class of 100 business students who would choose to take the quantitative methods course if they did not have to. Two hands went up.

Face it – statistics is necessary but not often embraced.

But actually it is worse than that. For many people statistics is the most dreaded course they are required to take. It can be the barrier to achieving their career goals as a psychologist, marketer or physician. (And it should be required for many other careers, such as journalism, law and sports commentator.)

Choice of software

Consequently, we have worried students in our statistics courses. We want them to succeed, and to do that we need to reduce their worry. One decision that will affect their engagement and success is the choice of computer package. This decision rightly causes consternation to instructors. It is telling that one of the most frequently and consistently accessed posts on this blog is Excel, SPSS, Minitab or R. It has been viewed 55,000 times in the last five years.

The problem of which package to use is no easier to solve than it was five years ago when I wrote the post. I am helping a tertiary institution to re-develop their on-line course in statistics. This is really fun – applying all the great advice and ideas from the “Guidelines for Assessment and Instruction in Statistics Education” (GAISE). They asked for advice on what statistics package to use. And I am torn.

Requirements

Here is what I want from a statistical teaching package:

  • Easy to use
  • Attractive to look at (See “Appearances Matter” below)
  • Helpful output
  • Good instructional materials with videos etc (as this is an online course)
  • Supports good pedagogy

If I’m honest I also want it to have the following characteristics:

  • Guidance for students as to what is sensible
  • Only the tests and options I want them to use in my course – not too many choices
  • An interpretation of the output
  • Data handling capabilities, including missing values
  • A pop up saying “Are you sure you want to make a three dimensional pie-chart?”

Is this too much to ask?

Possibly.

Overlapping objectives

Here is the thing. There are two objectives for introductory statistics courses that partly overlap and partly conflict. We want students to

  • Learn what statistics is all about
  • Learn how to do statistics.

They probably should not conflict, but they require different things from your software. If all we want the students to do is perform the statistical tests, then something like Excel is not a bad choice, as they get to learn Excel as well, which could be handy for CV expansion and job-getting. If we are more concerned about learning what statistics is all about, then an exploratory package like TinkerPlots or iNZight could be useful.

Ideally I would like students to learn both what statistics is all about and how to do it. But most of all, I want them to feel happy about doing statistical analysis.

Appearances matter

Eye-appeal is important for overcoming fear. I am confident in mathematics, but a journal article with a page of Greek letters and mathematical symbols makes me anxious. The LaTeX font makes me nervous. And an ugly logo puts me off a package. I know it is shallow. But it is a thing, and I suspect I am far from alone. Marketing people know that the choice of colour, word, placement – all sorts of superficial things – affect whether a product sells. We need to sell our product, statistics, and to do that, it needs to be attractive. It may well be that the people who design software are less affected by appearance, but they are not the consumers.

Terminal or continuing?

This is important: Most of our students will never do another statistical analysis.

Think about it:

Most of our students will never do another statistical analysis.

Here are the implications: It is important for the students to learn what statistics is about, where it is needed, potential problems and good communication and critique of statistical results. It is not important for students to learn how to program or use a complex package.

Students need to experience statistical analysis, to understand the process. They may also discover the excitement of a new set of data to explore, and the anticipation of an interesting result. These students may decide to study more statistics, at which time they will need to learn to operate a more comprehensive package. They will also be motivated to do so because they have chosen to continue to learn statistics.

Excel

In my previous post I talked about Excel, SPSS, Minitab and R. I used to teach with Excel, and I know many of my past students have been grateful they learned it. But now I know better, and cannot, hand on heart, recommend Excel as the main software. Students need to be able to play with the data, to look at various graphs, and get a feel for variation and structure. Excel’s graphing and data-handling capabilities, particularly with regard to missing values, are not helpful. The histograms are disastrous. Excel is useful for teaching students how to do statistics, but not what statistics is all about.

SPSS and Minitab

SPSS was a personal favourite, but it has been a while since I used it. It is fairly expensive, and chances are the students will never use it again. I’m not sure how well it does data exploration. Minitab is another nice little package. Both of these are probably overkill for an introductory statistics course.

R and R Commander

R is a useful and versatile statistical language for higher level statistical analysis and learning but it is not suitable for worried students. It is unattractive.

R Commander is a graphical user interface for R. It is free, and potentially friendlier than R. It comes with a book. I am told it is a helpful introduction to R. R Commander is also unattractive. The book was formatted in LaTeX. The installation guide looks daunting. That is enough to make me reluctant – and I like statistics!

The screenshot displayed on the front page of R Commander

iNZight and iNZight Lite

I have used iNZight a lot. It was developed at the University of Auckland for use in their statistics course and in New Zealand schools. The full version is free and can be installed on PC and Mac computers, though there may be issues with running it on a Mac. iNZight Lite, the web-based version, is fine. It is free and works on any platform. I really like how easy it is to generate various plots to explore the data. You put in the data, and the graphs appear almost instantly. iNZight encourages engagement with the data, rather than doing things to data.

For a face-to-face course I would choose iNZight Lite. For an online course I would be a little concerned about the level of support material available. The newer versions of iNZight and iNZight Lite have benefited from some graphic design input. I like the colours and the new logo.

Genstat

I’ve heard about Genstat for some time, as an alternative to iNZight for New Zealand schools, particularly as it does bootstrapping. So I requested an inspection copy. It has a friendly vibe. I like the dialog box suggesting the graph you might like to try. It lacks the immediacy of iNZight Lite. It has the multiple window thing going on, which can be tricky to navigate. I was pleased at the number of sample data sets.

NZGrapher

NZGrapher is popular in New Zealand schools. It was created by a high school teacher in his spare time, and is attractive and lean. It is free, funded by donations and advertisements. You enter a data set, and it creates a wide range of graphs. It does not have the traditional tests that you would want in an introductory statistics course, as it is aimed at the NZ school curriculum requirements.

StatCrunch

StatCrunch is a more attractive, polished package, with a wide range of supporting materials. I think this would give confidence to the students. It is specifically designed for teaching and learning and is almost conversational in approach. I have not had the opportunity to try out StatCrunch. It looks inviting, and was created by Webster West, a respected statistics educator. It is now distributed by Pearson.

JASP

I recently had my attention drawn to this new package. It is free, well supported and has a clean, attractive interface. It has a vibe similar to SPSS. I like the immediate response as you begin your analysis. I was able to download JASP easily. It is not as graphical as iNZight, but is more traditional in its approach. For a course emphasising doing statistics, I like the look of this.

Data, controls and output from JASP

Conclusion

So there you have it. I have mentioned only a few packages, but I hope my musings have got you thinking about what to look for in a package. If I were teaching an introductory statistics course, I would use iNZight Lite, JASP, and possibly Excel. I would use iNZight Lite for data exploration. I might use JASP for hypothesis tests, confidence intervals and model fitting. And if possible I would teach pivot tables in Excel, and use it for any probability calculations.

Your thoughts

This is a very important topic and I would appreciate input. Have I missed an important contender? What do you look for in a statistical package for an introductory statistics course? As a student, how important is it to you for the software to be attractive?

The Central Limit Theorem – with Dragons

To quote Willy Wonka, “A little magic now and then is relished by the best of men [and women].” Any frequent reader of this blog will know that I am of a pragmatic nature when it comes to using statistics. For most people the Central Limit Theorem can remain in the realms of magic. I have never taught it, though at times I have waved my hands past it.

Sometimes you don’t need to know.

Students who want that sort of thing can read about it in their textbooks or look it up online. The New Zealand school curriculum does not include it, as I explained in 2012.

But – there are many curricula and introductory statistics courses that include the Central Limit Theorem, so I have chosen to blog about it, in preparation for making a video. In this post I will cover what the Central Limit Theorem does. Maybe my approach will give ideas to teachers on how they might teach it.

Sampling distribution of a mean

First let me explain what a sampling distribution is. (And let me add the term to Dr Nic’s long list of statistics terms that cause unnecessary confusion.) A sampling distribution of a mean is the distribution of the means of samples of the same size taken from the same population. The distribution of the means will be different from the distribution of values in the original population. The Central Limit Theorem tells us useful things about the sampling distribution and its relationship to the distribution of the values in the population.

Example using dragons

We have a population of 720 dragons, and each dragon has a strength value from 1 to 8. The distribution of the strengths goes from 1 to 8 and has a population mean somewhere around 4.5. We take a sample of four dragons from the population. (Dragons are difficult to catch and measure, so it will just be 4.)

We find the mean. Then we think about what other values we might have got for samples that size. In real life, that is all we can do. But to understand what is happening, we will take multiple samples using cards, and then a spreadsheet, to explore what happens.

Important aspects of the Central Limit Theorem

Aspect 1: The sampling distribution will be less spread than the population from which it is drawn.

Dragon example

What do you think is the largest value the mean strength of the four dragons will take? Theoretically you could have a sample of four dragons, each with strength of 8, giving us a sample mean of 8. But it isn’t very likely. The chances that all four values are greater than the mean are pretty small (about a 6% chance). If there are equal numbers of dragons with each strength value, then the probability of getting all four dragons with strength 8 is 0.0002.
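These chances are easy to check directly. Here is a minimal sketch in Python (assuming, as above, that the eight strength values are equally likely and the four dragons are chosen independently):

```python
from fractions import Fraction

# Probability a single dragon's strength exceeds the mean of 4.5:
# strengths 5, 6, 7 and 8, out of 8 equally likely values.
p_above_mean = Fraction(4, 8)

# Probability all four dragons in a sample exceed the mean.
p_all_above = p_above_mean ** 4
print(float(p_all_above))            # 0.0625 - the "about 6%" quoted above

# Probability all four dragons have strength exactly 8.
p_all_eight = Fraction(1, 8) ** 4
print(round(float(p_all_eight), 4))  # 0.0002
```

Using exact fractions rather than floats keeps the arithmetic honest: 1/4096 really does round to 0.0002.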

So already we have worked out that the distribution of the sample means is going to be less spread than the distribution of the original population.

Aspect 2: The sampling distribution will be well-modelled by a normal distribution.

Now isn’t that amazing – and really useful! And even more amazing, it doesn’t even matter what the underlying population distribution is, the sampling distribution will still (in most cases) look like a normal distribution.

If you think about it, it does make sense. I like to see practical examples – so here is one!

Dragon example

We worked out that it was really unlikely to get a sample of four dragons with a mean strength of 8. Similarly it is really unlikely to get a sample of four dragons with a mean strength of 1.
Say we assume that the strength of dragons is uniform – there are equal numbers of dragons with each of the strengths. Then we find all the possible combinations of strengths from samples of 4 dragons. Bearing in mind there are eight different strengths, that gives us 8 to the power of 4, or 4096, possible combinations. We can use a spreadsheet to enumerate all these equally likely combinations. Then we find the mean strength of each, and we get this distribution.
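The same enumeration can be done in a few lines of Python instead of a spreadsheet – a sketch, assuming equally likely strengths from 1 to 8:

```python
from itertools import product
from collections import Counter
from statistics import mean

# All equally likely ordered samples of four strengths from 1..8.
samples = list(product(range(1, 9), repeat=4))
print(len(samples))        # 4096 combinations, i.e. 8 to the power of 4

# The sample mean for each combination, and how often each mean occurs.
means = [sum(s) / 4 for s in samples]
distribution = Counter(means)

print(mean(means))                         # 4.5, the population mean
print(distribution.most_common(1)[0][0])   # the most common sample mean is 4.5
```

By symmetry the distribution of means is centred, and peaks, at 4.5.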

Or we could take some samples of four dragons and see what happens. We can do this with our cards, or with a handy spreadsheet, and here is what we get.

Four samples of four dragons each

The sample mean values are 4.25, 5.25, 4.75 and 6. Even with really small samples we can see that the values of the means are clustering around some central point.

Here is what the means of 1000 samples of size 4 look like:

And hey presto – it resembles a normal distribution! By that I mean that the distribution is symmetric, with a bulge in the middle and tails in either direction. A normal distribution is useful for modelling just about anything that is the result of a large number of small chance effects.

The bigger the sample size and the more samples we take, the more the distribution of the means (the sampling distribution) looks like a normal distribution. The Central Limit Theorem gives a mathematical explanation for this. I put this in the “magic” category unless you are planning to become a theoretical statistician.
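If a spreadsheet is not handy, the 1000-sample simulation can be sketched in Python. The population of 720 dragons with 90 at each strength, and the random seed, are assumptions made here for reproducibility:

```python
import random
from statistics import mean

random.seed(1)  # fixed seed so the simulation is repeatable

# 720 dragons: 90 at each strength from 1 to 8.
population = [s for s in range(1, 9) for _ in range(90)]

# Take 1000 samples of four dragons and record each sample mean.
sample_means = [mean(random.sample(population, 4)) for _ in range(1000)]

# The average of the sample means lands close to the population mean of 4.5.
print(round(mean(sample_means), 2))

# A crude text histogram shows the symmetric bulge emerging.
for value in sorted(set(sample_means)):
    count = sample_means.count(value)
    if count >= 20:
        print(f"{value:5.2f} {'#' * (count // 5)}")
```

Re-running with a different seed changes the details but not the shape: a symmetric cluster around 4.5.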

Aspect 3: The spread of the sampling distribution is related to the spread of the population.

If you think about it, this also makes sense. If there is very little variation in the population, then the sample means will all be about the same. On the other hand, if the population is really spread out, then the sample means will be more spread out too.

Dragon example

Say the strengths of the dragons occur equally from 1 to 5 instead of from 1 to 8. The means of teams of four dragons will also range from 1 to 5, though most of the values will be near the middle.

Aspect 4: Bigger samples lead to a smaller spread in the sampling distribution.

As we increase the size of the sample, the means become less varied. We reduce the effect of one extreme value. Similarly the chance of getting all high values in our sample or all low values gets smaller and smaller. Consequently the spread of the sample means will decrease. However, the reduction is not linear. By that I mean that the effect achieved by adding one more to the sample decreases, depending on how big the sample is in the first place. Say you have a sample of size n = 4, and you increase it to n = 5, that is a 25% increase in information. If you have a sample n = 100 and increase it to size n=101, that is only a 1% increase in information.

Now here is the coolest thing! The spread of the sampling distribution is the standard deviation of the population, divided by the square root of the sample size. As we do not know the standard deviation of the population (σ), we use the standard deviation of the sample (s) to approximate it. The spread of the sampling distribution is usually called the standard error, or s.e.
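Both ideas can be sketched in a few lines of Python. The population here is the uniform dragon strengths from 1 to 8, and the four sample values at the end are made-up illustrative numbers:

```python
from math import sqrt
from statistics import stdev

# Population standard deviation of a discrete uniform distribution on 1..8.
sigma = sqrt((8 ** 2 - 1) / 12)          # about 2.291

# The spread of the sampling distribution shrinks with the square root of n:
# quadrupling the sample size halves the spread.
for n in [4, 16, 64]:
    print(n, round(sigma / sqrt(n), 3))  # 1.146, 0.573, 0.286

# In practice sigma is unknown, so we estimate the standard error from a
# single sample. These four strengths are hypothetical illustrative values.
sample = [3, 5, 6, 4]
s = stdev(sample)                        # sample standard deviation
standard_error = s / sqrt(len(sample))
print(round(standard_error, 3))          # 0.645
```

Note the non-linear reduction: going from n = 4 to n = 16 cuts the spread in half, and it takes n = 64 to halve it again.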

 

Implications of the Central Limit Theorem

The properties listed above underpin most traditional statistical inference. When we find a confidence interval of a mean, we use the standard error in the formula. If we used the sample standard deviation we would be finding the values between which most of the values in the sample lie. By using the standard error, we are finding the values between which most of the sample means lie.
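As an illustration of the standard error doing its job in a confidence interval, here is a sketch using the large-sample normal-approximation multiplier 1.96; the twelve sample values are made up for the example:

```python
from math import sqrt
from statistics import mean, stdev

# Hypothetical sample of dragon strengths (illustrative values only).
sample = [4, 6, 3, 5, 7, 4, 5, 6, 2, 8, 5, 4]
n = len(sample)

x_bar = mean(sample)
se = stdev(sample) / sqrt(n)   # the standard error, not the sample sd

# Approximate 95% confidence interval for the population mean.
lower = x_bar - 1.96 * se
upper = x_bar + 1.96 * se
print(round(lower, 2), round(upper, 2))
```

Had we used the sample standard deviation in place of the standard error, the interval would have been about twice as wide for this sample size of 12 (a factor of the square root of n), describing individual values rather than the mean.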

Sample size

The Central Limit Theorem applies best with large samples. A rule of thumb is that the sample should be 30 or more. For smaller samples we need to use the t-distribution rather than the normal distribution in our testing or confidence intervals. If the sample is very small, such as less than 15, then we can still use the t-distribution if the underlying population has a normal shape. If the underlying population is not normal, and the sample is small, then other methods, such as resampling, should be used, as the Central Limit Theorem does not hold.

Reminder!

We do not take multiple samples from the same population in real life. This simulation is just that – a pretend example to show how the Central Limit Theorem plays out. When we perform inferential statistics we have one sample, and from that we use what we know about it to make inferences about the population from which it is drawn.

Teaching suggestion

Data cards are extremely useful tools to help understand sampling and other aspects of inference. I would suggest getting the class to take multiple small samples (n = 4), using cards, and finding the means. Plot the means. Then take larger samples (n = 9) and similarly plot the means. Compare the shape and spread of the distributions of the means.

The Dragonistics data cards used in this post can be purchased at The StatsLC shop.

10 hints to make the most of teaching and academic conferences

Hints for conference benefit maximisation

I am writing this post in a spartan bedroom in Glenn Hall at La Trobe University in Bundoora (Melbourne, Australia.) Some outrageously loud crows are doing what crows do best outside my window, and I am pondering on how to get the most out of conferences. In my previous life as a University academic, I attended a variety of conferences, and discovered some basic hints for enjoying them and feeling that my time was productively used. In the interests of helping conference newcomers I share them here. They are in no particular order.

1. Lower your expectations

Sad, but true, many conference presentations are obvious, obscure or dull. And some are annoying. If you happen to hit an interesting and entertaining presentation – make the most of it. I have talked to several newbies this afternoon whose experience of the MAV conference could be described as underwhelming. This is not the fault of the conference, but rather a characteristic of conferences as a whole. My rule of thumb is that if you get one inspiring or useful presentation per day you are winning. (Added later) You can generally find something positive in any presentation, and it is good to tweet that. (Thanks David Butler for reminding me!)

2. Pace yourself

When I first went to conferences I would make sure that I attended every session, feeling I needed to fulfil my obligations to the University that was kindly funding (or in those days, part-funding) my trip and attendance. Fortunately I was saved from exhaustion by my mentor, who pointed out that there are diminishing returns, if not negative returns, on continued attendance beyond a certain point. Consequently I have learned to take a break and not attend every single presentation I can. Some down-time is also good for contemplating what you have heard. Conferences are also a chance to step back from the daily grind, and think about your own teaching practice or research.

3. Go to something out of your usual area of interest.

When I used to teach operations research, many of the research talks went whizzing over my head. But every now and then I would find a gem, which for me would be a wonderful story I could tell in lectures of how operations research had saved money, lives or the world from annihilation. You never know what you might find.

4. Remember “Names” are just people too.

It may be my colonial cringe, but I tend to be a little in awe of the “big names” in any field. These are the people who have been paid to attend the conference, who give keynote addresses, and whom you have actually heard of before. Next year at the NZAMT conference in October, Dan Meyer is going to be a keynote speaker. I have to say I am a little in awe of him, but at the same time know that that is silly. Dick de Veaux is one of my favourite keynote speakers and you could not ask for a nicer or more generous person. The point is that speakers are people too, and are playing a certain role at a conference, which means that they should give the punters some of their time. So this is my advice to paid keynote speakers: be nice to people. It can’t hurt, and it can make a real difference in their lives. Because of my YouTube videos I have a small level of celebrity among some teachers and learners of statistics in New Zealand. (I said it was small.) I LOVE it when people talk to me, and hope no one would feel reluctant. If it is in your power to do good, do it.

5. Talk to people

This can be daunting and tiring, but is essential to make the most of a conference opportunity. The point of conferences is to bring people together, so if you do not talk to anyone other than the people you came with, you could have stayed home and watched presentations on YouTube. I am learning that some conversation topics are easy starters: “Where are you from?”, “What do you teach/research?”, “Have you been to any good sessions?” and “What did you think of the keynote?” are all reasonably safe. To my surprise, criticising the US President-elect was not universally well received, so I have learned to avoid that one. Being positive is a good idea, and one I need to remember at all times. When I do not agree with what a speaker is saying I have a tendency to growl in a Marge Simpsonesque way. This can be disturbing to the people around me and I am attempting to stop it.

At the 2016 MAV conference I had yellow hair, and immediately found kinship with a delightful and insightful young teacher with magenta hair. Now if we could just have found an attendee with cyan hair we could have impersonated a printer cartridge! I went to Sharon’s presentation and she to mine, and I believe we were both the better for it.

We have Yellow and Magenta – but where is Cyan?

6. Be brave and give a presentation

The biennial NZ Association of Maths teachers conference is being held in Christchurch on 3rd to 6th October 2017. I strongly believe we need more input from primary teachers, and more collaboration across primary, secondary and tertiary. It would be SOO wonderful to have many primary teachers giving workshops or presentations of work they are doing in their maths classrooms.

The abstracts are due by the end of May and if any primary teachers would like some help putting one together, I would be really happy to help.

7. Visit the trade displays

The companies that have trade displays pay a considerable amount for the right to do so. I believe that teachers need producers of educational resources, and when you visit producers and give them the opportunity to talk about their product, it makes it worthwhile for them to sponsor, thus keeping the price down. And you never know – you might find something really useful!

8. Split up to maximise benefit.

If two or more of you come from the same school or organisation, it is a good idea to plan your programme together. When there are 40 – or even 10 presentations to choose from in any one slot, it is more sensible to attend different ones.

9. Plan ahead

It is really helpful to know when conferences are approaching, so I have added links below to the maths teaching conferences I know about, in the hope that many of you may think about attending. Do let me know any you know about that I haven’t listed.

10. Wear sensible shoes

This particularly applies to the MAV conference at La Trobe University. It is held on a massive campus, which is particularly confusing to get around, so one tends to cover far more ground than intended. I was pleased I sacrificed style for comfort in this particular instance, after a bad attack of blisters last year.

11. Add your own hints

Any other conference attendees here – what other suggestions could you make?

Mathematics and statistics teaching conferences in New Zealand and Australia

Primary Mathematics Association 25 March 2017, Auckland

AAMT 11 – 13 July 2017, Canberra, Australia

2017 MANSW Annual Conference 15-17 September 2017.

NZAMT 3 – 7 October 2017 Christchurch New Zealand

MAV Early Dec 2017 Melbourne, Australia

 

 

Teachers and resource providers – uneasy bedfellows

Trade stands and cautious teachers

It is interesting to provide a trade stand at a teachers’ conference. Some teachers are keen to find out about new things, and come to see how we can help them. Others studiously avoid eye-contact in the fear that we might try to sell them something. Trade stand holders regularly put sweets and chocolate out as “bait” so that teachers will approach close enough to engage. Maybe it gives the teachers an excuse to come closer? Either way it is representative of the uneasy relationship that “trade” has with salaried educators.

Money and education

Money and education have an uneasy relationship. For schools to function, they need considerable funding – always more than what they get. In New Zealand, and in many countries, education is predominantly funded by the state. Schools are built and equipped, teachers are paid and resources are purchased with money provided by the taxpayer. Extras are raised through donations from parents and fund-raising efforts. However, because it is not apparent that money is changing hands, schools are perceived as virtuous establishments, existing only because of the goodness of the teachers. This contrasts with the attitude to resource providers, who are sometimes treated as parasitic with their motives being all about the money. It is possible that some resource providers are in it just for the money, but it seems to me that there are richer seams to mine in health, sport, retail etc.

Statistics Learning Centre is a social enterprise

Statistics Learning Centre is a social enterprise. We fit in the fuzzy area between “not-for-profit” and commercial enterprise. We measure our success by the impact we are having in empowering teachers to teach statistics and all people to understand statistics. We need money in order to continue to make an impact. Statistics Learning Centre has made considerable contributions to the teaching and learning of statistics in New Zealand and beyond for several years. This post lists just some of the impact we have had.  We believe in what we are doing, and work hard so that our social enterprise is on a solid financial footing.

StatsLC empowers teachers

Soon after the change to the NCEA Statistics standards, there was a shortage of good quality practice external exams. Even the ones provided as official exemplars did not really fit the curriculum. Teachers approached us, requesting that we create practice exams that they could trust were correct and aligned to the curriculum. We did so in 2015 and 2016, at considerable personal effort and only marginal financial recompense. We see that as helping statistics to be better understood in schools and the wider community.

We, at Statistics Learning Centre, grasp opportunities to teach teachers how to teach statistics better, to empower all teachers to teach statistics. Our workshops are well received, and we have regular attendees who know they will get value for their time. We use an inclusive, engaging approach, and participants have a good time. I believe in our resources – the videos, the quizzes, the data cards, the activities, the professional development. I believe that they are among the best you can get. So when I give workshops, I do talk about the resources. It would seem counter-productive for all concerned, not to mention contrived, to do otherwise. They are part of a full professional development session. Many mathematical associations have no trouble with this, and I love to go to conferences and contribute.

I am aware that there are some commercial enterprises who wish to give commercial presentations at conferences. If their materials are not of a high standard, this can put the organisers in a difficult position. Consequently some organisations have a blanket ban on any presentations that reference any paid product. I feel this is a little unfortunate, as teachers miss out on worthwhile contributions. But I understand the problem.

The Open Market model – supply and demand

I believe that there is value in a market model for resources.  People have suggested that we should get the Government to fund access to Statistics Learning Centre resources for all schools. That would be delightful, and give us the freedom and time to create even better resources. But that would make it almost impossible for any other new provider, who may have an even better product, to get a look in. When such a monopoly occurs, it reduces the incentives for providers to keep improving.

Saving work for the teachers, and building on a product

Teachers want the best for their students, and have limited budgets. They may spend considerable amounts of time printing, cutting and laminating in order to provide teaching resources at a low cost. This was one of the drivers for producing our Dragonistics data cards – to provide, at a reasonable cost, ready-made, robust resources, so that teachers did not have to make their own. As it turned out, we were also able to provide interesting data with clear relationships and engaging graphics, so the cards offer something more than just data turned into data cards.

Free resources

There are free resources available on the internet. Other resources are provided by teachers who are sharing what they have done while teaching their own students. Resources provided for free can be of a high pedagogical standard. Having a high production standard, however, can be prohibitively expensive for individual producers who are working in their spare time.  It can also be tricky for another teacher to know what is suitable, and a lot of time can be spent trying to find high quality, reliable resources.

Teachers and resource providers – a symbiotic relationship

Teachers need good resource providers. It makes sense for experts to create high quality resources, drawing on current thinking with regard to content specific pedagogy. These can support teachers, particularly in areas in which they are less confident, such as statistics. And they do need to be paid for their work.

It helps when people recognise that our materials are sound and innovative, when they give us opportunities to contribute and when they include us at the decision-making table. Let us know how we can help you, and in partnership we can become better bedfellows.

What do you think?


(Note that this post is also published on our blog, Building a Statistics Learning Community, as I felt it was important.)


Data for teaching – real, fake, fictional

There is a push for teachers and students to use real data in learning statistics. In this post I am going to address the benefits and drawbacks of different sources of real data, and make a case for the use of good fictional data as part of a statistical programme.

Here is a video introducing our fictional data set of 180 or 240 dragons, so you know what I am referring to.

Real collected, real database, trivial, fictional

There are two main types of real data. There is the real data that students themselves collect and there is real data in a dataset, collected by someone else, and available in its entirety. There are also two main types of unreal data. The first is trivial and lacking in context and useful only for teaching mathematical manipulation. The second is what I call fictional data, which is usually based on real-life data, but with some extra advantages, so long as it is skilfully generated. Poorly generated fictional data, as often found in case studies, is very bad for teaching.

Focus

When deciding what data to use for teaching statistics, it matters what it is that you are trying to teach. If you are simply teaching how to add up 8 numbers and divide the result by 8, then you are not actually doing statistics, and trivial fake data will suffice. Statistics only exists when there is a context. If you want to teach about the statistical enquiry process, then having the students genuinely involved at each stage of the process is a good idea. If you particularly want to teach about fitting a regression line, you generally want multiple examples for students to use. And it would be helpful for there to be at least one linear relationship.

I read a very interesting article in “Teaching Children Mathematics” entitled “Practical Problems: Using Literature to Teach Statistics”. The authors, Hourigan and Leavy, used a children’s book to generate data on the number of times different characters appeared. But what I liked most was that they addressed the need for a “driving question”. In this case the question was provided by a pre-school teacher who could only afford to buy one puppet for the book, and wanted to know which character appears most often in the story. The children practised collecting data as the story was read aloud. They collected their own data to analyse.

Let’s have a look at the different pros and cons of student-collected data, provided real data, and high-quality fictional data.

Collecting data

When we want students to experience the process of collecting real data, they need to collect real data. However, real-time data collection is time-consuming, and probably not necessary every year. Student data collection can be simulated by a program such as The Islands, which I wrote about previously. Data students collect themselves is much more likely to have errors in it, or be “dirty” (which is a good thing). When students are only given clean datasets, such as those usually provided with textbooks, they do not learn the skills of deciding what to do with an errant data point. Fictional databases can also have dirty data generated into them. The fictional inhabitants of The Islands sometimes lie, and often refuse to give consent for data collection on them.

Motivation

One of the species of dragons included in our database


I have heard that after a few years of school, graphs about cereal preference, number of siblings and type of pet get a little old. These topics, relating to the students, are motivating at first, but often there is no purpose to the investigation other than to get data for a graph.  Students need to move beyond their own experience and are keen to try something new. Data provided in a database can be motivating, if carefully chosen. There are opportunities to use databases that encourage awareness of social justice, the environment and politics. Fictional data must be motivating or there is no point! We chose dragons as a topic for our first set of fictional data, as dragons are interesting to boys and girls of most ages.

A meaningful question

Here I refer again to that excellent article that talks about a driving question. There needs to be a reason for analysing the data. Maybe there is concern about the food provided at the tuck shop, and whether healthy alternatives would be welcome. Or the question can be tied into another area of the curriculum: which type of bean plant grows faster? Can we increase the germination rate of seeds? The Census@school data has the potential for driving questions, but they probably need to be helped along. For existing datasets the driving question used by students might not be the same as the one (if any) driving the original collection of the data. Sometimes that is because the original purpose is not ‘motivating’ for the students, or not at an appropriate level. If you can’t find or make up a motivating, meaningful question, the database is not appropriate. For our fictional dragon data, we have developed two scenarios – vaccinating for Pacific Draconian flu, and building shelters to make up for the deforestation of the island. For the vaccination scenario, we need to know about behaviour and size. For the shelter scenario we need to make decisions based on size, strength, behaviour and breath type. There is potential for a number of other scenarios that will also create driving questions.

Getting enough data

It can be difficult to get enough data for effects to show up. When students are limited to their class or family, this limits the number of observations. Only some databases have enough observations in them. There is no such problem with fictional databases, as you can just generate as much data as you need! There are special issues with regard to teaching about sampling, where you would want a large database with constrained access, like the Islands data, or the use of cards.

Variables

A problem with the data students collect is that it tends to be categorical, which limits the types of analysis that can be used. In databases, it can also be difficult to find measurement level data. In our fictional dragon database, we have height, strength and age, which all take numerical values. There are also four categorical variables. The Islands database has a large number of variables, both categorical and numerical.

Interesting Effects

Though it is good for students to understand that quite often there is no interesting effect, we would like students to have the satisfaction of finding interesting effects in the data, especially at the start. Interesting effects can be particularly exciting if the data is real, and students can apply their findings to the real-world context. Student-collected data is risky in terms of finding any noticeable relationships. It can be disappointing to do a long and involved study and find no effects. Databases from known studies can provide good effects, but unfortunately the variables with no effect tend to be left out of the databases, giving a false sense that there will always be effects. When we generate our fictional data, we make sure that the relationships we would like are there, with enough interaction and noise. This is a highly skilled process, honed by decades of making up data for student assessment at university. (Guilty admission.)
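The general approach can be sketched in a few lines of code. This is only an illustration of the technique, not our actual generation script – the variable names and numbers are invented:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 240

# Invented variables, loosely inspired by the dragon data cards.
height = rng.normal(loc=150, scale=25, size=n)            # cm
behaviour = rng.choice(["docile", "aggressive"], size=n)

# Build in a relationship: strength rises with height, with an
# extra effect for aggressive dragons, plus random noise so the
# pattern is realistic rather than exact.
strength = (0.4 * height
            + np.where(behaviour == "aggressive", 15, 0)
            + rng.normal(0, 10, size=n))

# Deliberately "dirty" a few records so students must decide
# what to do with errant data points.
dirty = rng.choice(n, size=3, replace=False)
strength[dirty] = -99
```

Generating data this way lets you dial the relationship, the interaction and the noise up or down, and salt in exactly as many errant values as you want students to find.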

Ethics

There are ethical issues to be addressed in the collection of real data from people the students know. Informed consent should be granted, and there needs to be thorough vetting. Young students (and not so young) can be damagingly direct in their questions. You may need to explain that it can be upsetting for people to be asked if they have been beaten or bullied. When using fictional data, that may appear real, such as the Islands data, it is important for students to be aware that the data is not real, even though it is based on real effects. This was one of the reasons we chose to build our first database on dragons, as we hope that will remove any concerns about whether the data is real or not!

The following table summarises the post.

|  | Real data collected by the students | Real existing database | Fictional data (The Islands, Kiwi Kapers, Dragons, Desserts) |
| --- | --- | --- | --- |
| Data collection | Real experience | Nil | Sometimes |
| Dirty data | Always | Seldom | Can be controlled |
| Motivating | Can be | Can be | Must be! |
| Enough data | Time-consuming, difficult | Hard to find | Always |
| Meaningful question | Sometimes; can be trivial | Can be difficult | Part of the fictional scenario |
| Variables | Tend towards nominal | Often too few variables | Generate as needed |
| Ethical issues | Often | Usually fine | Need to manage reality |
| Effects | Unpredictable | Can be obvious or trivial, or difficult | Can be managed |

What does it mean to understand statistics?

It is possible to get a passing grade in a statistics paper by putting numbers into formulas and words into memorised phrases. In fact I suspect that this is a popular way for students to make their way through a required and often unwanted subject.

Most teachers of statistics would say that they would like students to understand what they are doing. This was a common sentiment expressed by participants in the excellent MOOC, Teaching statistics through data investigations (which is running again from January to May 2016).

Understanding

This makes me wonder what it means for students to understand statistics. There are many levels and nuances to understanding things. If a person understands English, it means that they can use English with proficiency. If they are native speakers they may have little understanding of how grammar works, but they can still speak with correct grammar. We talk about understanding how a car works. I have no idea how a car works, apart from some idea that it requires petrol and the pistons go really, really fast. I can name parts of a car engine, such as distributor and drive shaft. But that doesn’t stop me from driving a car.

Understanding statistics

I propose that when we talk about teaching students to understand statistics, we want our students to know why they are doing something, and have an idea of how it works. Students also need to be fluent in the language of statistics. I would not expect any student of an introductory or high school statistics class to be able to explain how least squares regression works in terms of matrix algebra, but I would expect them to have an idea that the fitted line in a bivariate plot is a model that minimises the squared error terms. I’m not sure anyone needs to know why “degrees of freedom” are called that – or even really what degrees of freedom do. These days computer packages look after degrees of freedom for us. We DO need to understand what a p-value is, and what it is telling us. For many people it is not necessary to know how a p-value is calculated.
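That idea – the fitted line is the one that minimises the squared error terms – can itself be demonstrated without any matrix algebra. The following sketch (an illustration with made-up numbers, not from any course material) checks that the least-squares line has a smaller sum of squared errors than nearby alternative lines:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.uniform(0, 10, 50)
y = 2.0 * x + 5.0 + rng.normal(0, 2, 50)   # hidden "true" line plus noise

def sse(slope, intercept):
    """Sum of squared errors for a candidate line."""
    return np.sum((y - (slope * x + intercept)) ** 2)

# The least-squares fit...
slope, intercept = np.polyfit(x, y, deg=1)

# ...beats any slightly different line on squared error.
assert sse(slope, intercept) <= sse(slope + 0.1, intercept)
assert sse(slope, intercept) <= sse(slope, intercept + 0.5)
```

Students can nudge the slope and intercept themselves and watch the squared error grow, which gives the “idea of how it works” without the matrix algebra.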

Ways to teach statistics

There are several approaches to teaching statistics. The approach needs to be tailored to the students and the context of the course. I prefer a hands-on, conceptual approach rather than a mathematical one. In current literature and practice there is a push for learning through investigations, often based around the statistical enquiry cycle. The problem with one long project is that students don’t get opportunities to apply principles in a range of situations, in a way that helps transfer the learning to new situations. There are some people who still teach statistics through the mathematical formulas, but I fear they are missing out on the opportunity to help students really enjoy statistics.

I do not propose to have all the answers, but we did discover one way to help students learn, alongside other methods. This approach is to use a short video, followed by a ten-question true/false quiz. The quiz serves to reinforce and elaborate on concepts taught in the video, challenge students’ misconceptions, and help students become more familiar with the vocabulary and terminology of statistics. The quizzes we develop have multiple questions that randomise, giving students the opportunity to try several times, which seems to help understanding.

This short and entertaining video gives an illustration of how you can use videos and quizzes to help students learn difficult concepts.

And here is a link to a listing of all our videos and how you can get access to them. Statistics Learning Centre Videos

We have just started a newsletter letting people know of new products and hints for teaching. You can sign up here. Sign up for newsletter

Engaging students in learning statistics using The Islands.

Three Problems and a Solution

Modern teaching methods for statistics have gone beyond the mathematical calculation of trivial problems. Computers can enable large-scale studies, bringing reality to the subject, but this is not without its own problems.

Problem 1: Giving students experience of the whole statistical process

There are many reasons for students to learn statistics by running their own projects, following the complete statistical enquiry process: posing a problem, planning the data collection, collecting and cleaning the data, analysing the data, and drawing conclusions that relate back to the original problem. But individual projects can be both time-consuming and risky, as the quality of the report, and the resultant grade, can depend on the quality of the data collected, which may be beyond the control of the student.

The Statistical Enquiry Cycle, which underpins the NZ statistics curriculum.


Problem 2: Giving students experience of different types of sampling

If students are given an existing database and then asked to sample from it, this can be confusing for students, and sends the misleading message that we would not want to use all the data available. But physically performing a sample, based on a sampling frame, can be prohibitively time-consuming.

Problem 3: Giving students experience conducting human experiments

The problem here is obvious. It is not ethical to perform experiments on humans simply to learn about performing experiments.

An innovative solution: The Islands virtual world.

I recently ran an exciting workshop for teachers on using The Islands. My main difficulty was getting the participants to stop doing the assigned tasks long enough to discuss how we might implement this in their own classrooms. They were too busy clicking around different villages and people, finding subjects of the right age and getting them to run down a 15-degree slope – all without leaving the classroom.

The Island was developed by Dr Michael Bulmer from the University of Queensland and is a synthetic learning environment. The Islands, the second version, is a free, online, virtual human population created for simulating data collection.

The synthetic learning environment overcomes practical and ethical issues with applied human research, and is used for teaching students at many different levels. For a login, email james.baglin @ rmit.edu.au (without the spaces in the email address).

There are now approximately 34,000 inhabitants of the Islands, who are born, have families (or not) and die in a speeded-up time frame where one Island year is equivalent to about 28 earth days. They each carry a genetic code that affects their health and other characteristics. The database is dynamic, so every student will get different results from it.

The Islanders


Two magnificent features

To me, one of the two best features is the difficulty of acquiring data on individuals. It takes time for students to collect samples, as each subject must be asked individually, and the results recorded in a database. There is no easy access to the population. This is still much quicker than asking people in real life (or “irl”, as it is known on social media). It makes it obvious that you need to sample and to have a good sampling plan, and you need to work out how to record and deal with your data.

The other outstanding feature is the ability to run experiments. You can get a group of subjects and split them randomly into treatment and control groups. Then you can perform interventions, such as making them sit quietly or run about, or drink something, and then evaluate their performance on some other task. This is without requiring real-life ethical approval and informed consent. However, in a touch of reality the people of the Islands sometimes lie, and they don’t always give consent.

There are over 200 tasks that you can assign to your people, covering a wide range of topics. They include blood tests, urine tests, physiology, food and drinks, injections, tablets, mental tasks, coordination, exercise, music, environment etc. The tasks occur in real (reduced) time, so you are not inclined to include more tasks than are necessary. There is also the opportunity to survey your Islanders, with more than fifty possible questions. These also take time to answer, which encourages judicious choice of questions.

Uses

In the workshop we used the Islands to learn about sampling distributions. First each teacher took a sample of one male and one female and timed them running down a hill. We made (fairly awful) dotplots on the whiteboard using sticky notes with the individual times on them. Then each teacher took a sample and found the median time. We used very small samples of 7 each as we were constrained by time, but larger samples would be preferable. We then looked at the distributions of the medians and compared that with the distribution of our first sample. The lesson was far from polished, but the message was clear, and it gave a really good feel for what a sampling distribution is.
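You don’t need The Islands to rehearse this lesson in advance. A quick simulation (with a made-up population of running times, not actual Islands data) shows the same pattern we built up on the whiteboard: the medians of repeated samples of 7 cluster far more tightly than the individual times do.

```python
import numpy as np

rng = np.random.default_rng(7)

# A made-up "population" of hill-running times, in seconds.
# The Islands must be sampled person by person, but a simulated
# population shows the shape of the idea instantly.
population = rng.normal(loc=12.0, scale=2.5, size=10_000)

# Each "teacher" takes a sample of 7 and records the median,
# as in the workshop.
medians = [np.median(rng.choice(population, size=7, replace=False))
           for _ in range(200)]

# The medians spread much less than the individual times do.
print("spread of medians:", np.std(medians))
print("spread of individual times:", np.std(population))
```

Larger samples than 7 tighten the distribution of medians further, which is exactly the discussion the workshop was reaching for.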

Within the New Zealand curriculum, we could also use The Islands to learn about bivariate relationships, sampling methods and randomised experiments.

In my workshop I had educators from across the age groups, and a primary teacher assured me that Year 4 students would be able to make use of this. Fortunately there is a maturity filter so that you can remove options relating to drugs and sexual activity.

James Baglin from RMIT University has successfully trialled the Island with high school students and psychology research methods students. The owners of the Island generously allow free access to it. Thanks to James Baglin, who helped me prepare this post.

Here are links to some interesting papers that have been written about the use of The Islands in teaching. We are excited about the potential of this teaching tool.


Huynh, Baglin, Bedford (2014) Improving the attitudes of high school students towards statistics: An Island-based approach. ICOTS9

Baglin, Reece, Bulmer and Di Benedetto, (2013) Simulating the data investigative cycle in less than two hours: using a virtual human population, cloud collaboration and a statistical package to engage students in a quantitative research methods course.

Bulmer, M. (2010). Technologies for enhancing project assessment in large classes. In C. Reading (Ed.), Proceedings of the Eighth International Conference on Teaching Statistics, July 2010. Ljubljana, Slovenia. Retrieved from http://www.stat.auckland.ac.nz/~iase/publications/icots8/ICOTS8_5D3_BULMER.pdf

Bulmer, M., & Haladyn, J. K. (2011). Life on an Island: A simulated population to support student projects in statistics. Technology Innovations in Statistics Education, 5. Retrieved from http://escholarship.org/uc/item/2q0740hv

Baglin, J., Bedford, A., & Bulmer, M. (2013). Students’ experiences and perceptions of using a virtual environment for project-based assessment in an online introductory statistics course. Technology Innovations in Statistics Education, 7(2), 1–15. Retrieved from http://www.escholarship.org/uc/item/137120mt

Don’t teach significance testing – Guest post

The following is a guest post by Tony Hak of Rotterdam School of Management. I know Tony would love some discussion about it in the comments. I remain undecided either way, so would like to hear arguments.

GOOD REASONS FOR NOT TEACHING SIGNIFICANCE TESTING

It is now well understood that p-values are not informative and are not replicable. Soon null hypothesis significance testing (NHST) will be obsolete, replaced by the so-called “new” statistics (estimation and meta-analysis). This requires that undergraduate courses in statistics must already be teaching estimation and meta-analysis as the preferred way to present and analyze empirical results. If not, the statistical skills of graduates from these courses will be outdated on the day they leave school. But it is less evident whether NHST (though no longer preferred as an analytic tool) should still be taught. Because estimation is already routinely taught as preparation for the teaching of NHST, the necessary reform in teaching will not require the addition of new elements to current programs, but rather the removal of the current emphasis on NHST, or the complete removal of the teaching of NHST from the curriculum. The current trend is to continue the teaching of NHST. In my view, however, teaching of NHST should be discontinued immediately because it is (1) ineffective and (2) dangerous, and (3) it serves no aim.

1. Ineffective: NHST is difficult to understand and it is very hard to teach it successfully

We know that even good researchers often do not appreciate the fact that NHST outcomes are subject to sampling variation, and believe that a “significant” result obtained in one study almost guarantees a significant result in a replication, even one with a smaller sample size. Is it then surprising that our students also do not understand what NHST outcomes do tell us and what they do not tell us? In fact, statistics teachers know that the principles and procedures of NHST are not well understood by undergraduate students who have successfully passed their courses on NHST. Courses on NHST fail to achieve their self-stated objectives, assuming that these objectives include achieving a correct understanding of the aims, assumptions, and procedures of NHST as well as a proper interpretation of its outcomes. It is very hard indeed to find a comment on NHST in any student paper (an essay, a thesis) that is close to a correct characterization of NHST or its outcomes. There are many reasons for this failure, but the most important one is that NHST is a very complicated and counterintuitive procedure. It requires students and researchers to understand that a p-value is attached to an outcome (an estimate) based on its location in (or relative to) an imaginary distribution of sample outcomes around the null. Another reason, connected to their failure to understand what NHST is and does, is that students believe that NHST “corrects for chance” and hence they cannot cognitively accept that p-values themselves are subject to sampling variation (i.e. chance).
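The sampling variation of p-values is easy to demonstrate by simulation. The sketch below is an editorial illustration, not from the original paper; the effect size and sample sizes are invented. It replicates exactly the same modest experiment a thousand times and shows how widely the p-value varies:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Replicate the *same* experiment many times: two groups of 20,
# true mean difference of half a standard deviation.
pvals = []
for _ in range(1000):
    a = rng.normal(0.0, 1.0, 20)
    b = rng.normal(0.5, 1.0, 20)
    pvals.append(stats.ttest_ind(a, b).pvalue)
pvals = np.array(pvals)

# The p-values range from tiny to large, even though every
# replication drew from exactly the same two populations.
print("range of p-values:", pvals.min(), "to", pvals.max())
print("proportion 'significant' at 0.05:", (pvals < 0.05).mean())
```

This is essentially the “dance of the p-values” demonstration associated with Geoff Cumming’s work on the new statistics: a “significant” result in one study is far from a guarantee of significance in a replication.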

2. Dangerous: NHST thinking is addictive

One might argue that there is no harm in adding a p-value to an estimate in a research report and, hence, that there is no harm in teaching NHST in addition to teaching estimation. However, the mixed experience with statistics reform in clinical and epidemiological research suggests that a more radical change is needed. Reports of clinical trials and of studies in clinical epidemiology now usually report estimates and confidence intervals, in addition to p-values. However, as Fidler et al. (2004) have shown, and contrary to what one would expect, authors continue to discuss their results in terms of significance. Fidler et al. therefore concluded that “editors can lead researchers to confidence intervals, but can’t make them think”. This suggests that a successful statistics reform requires a cognitive change that should be reflected in how results are interpreted in the Discussion sections of published reports.

The stickiness of dichotomous thinking can also be illustrated with the results of a more recent study by Coulson et al. (2010). They presented estimates and confidence intervals obtained in two studies to a group of researchers in psychology and medicine, and asked them to compare the results of the two studies and to interpret the difference between them. It appeared that a considerable proportion of these researchers first used the information about the confidence intervals to make a decision about the significance of the results (in one study) or the non-significance of the results (of the other study), and then drew the incorrect conclusion that the results of the two studies were in conflict. Note that no NHST information was provided and that participants were not asked in any way to “test” or to use dichotomous thinking. The results of this study suggest that NHST thinking can (and often will) be used by those who are familiar with it.

The fact that it appears to be very difficult for researchers to break the habit of thinking in terms of “testing” is, as with every addiction, a good reason to prevent future researchers from coming into contact with it in the first place and, if contact cannot be avoided, to provide them with robust resistance mechanisms. The implication for statistics teaching is that students should first learn estimation as the preferred way of presenting and analyzing research information, and be introduced to NHST, if at all, only after estimation has become their routine statistical practice.

3. It serves no aim: Relevant information can be found in research reports anyway

Our experience that the teaching of NHST consistently fails its own aims (because NHST is too difficult to understand), and the fact that NHST appears to be dangerous and addictive, are two good reasons to stop teaching NHST immediately. But there is a seemingly strong argument for continuing to introduce students to NHST, namely that a new generation of graduates will not be able to read the (past and current) academic literature in which authors themselves routinely focus on the statistical significance of their results. It is suggested that someone who does not know NHST cannot correctly interpret outcomes of NHST practices. This argument has no value for the simple reason that it assumes that NHST outcomes are relevant and should be interpreted. But the reason that we have the current discussion about teaching is the fact that NHST outcomes are at best uninformative (beyond the information already provided by estimation) and at worst misleading or plain wrong. The point all along is that nothing is lost by simply ignoring the NHST-related information in a research report and focusing only on the information provided about the observed effect size and its confidence interval.

Bibliography

Coulson, M., Healy, M., Fidler, F., & Cumming, G. (2010). Confidence Intervals Permit, But Do Not Guarantee, Better Inference than Statistical Significance Testing. Frontiers in Quantitative Psychology and Measurement, 20(1), 37-46.

Fidler, F., Thomason, N., Finch, S., & Leeman, J. (2004). Editors Can Lead Researchers to Confidence Intervals, But Can’t Make Them Think: Statistical Reform Lessons from Medicine. Psychological Science, 15(2), 119-126.

This text is a condensed version of the paper “After Statistics Reform: Should We Still Teach Significance Testing?” published in the Proceedings of ICOTS9.


The Myth of Random Sampling

I feel a slight quiver of trepidation as I begin this post – a little like the boy who pointed out that the emperor has no clothes.

Random sampling is a myth. Practical researchers know this and deal with it. Theoretical statisticians live in a theoretical world where random sampling is possible and ubiquitous – which is just as well really. But teachers of statistics live in a strange half-real-half-theoretical world, where no one likes to point out that real-life samples are seldom random.

The problem in general

In order for most inferential statistical conclusions to be valid, the sample we are using must obey certain rules. In particular, each member of the population must have an equal probability of being chosen. In this way we reduce the opportunity for systematic error, or bias. When a truly random sample is taken, it is almost miraculous how well we can draw conclusions about the source population, even with a modest sample of a thousand. On a side note, if the general population understood this, and the opportunity for bias and corruption were eliminated, general elections and referenda could be run at much lower cost by taking a good random sample.
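That near-miraculous accuracy is easy to demonstrate. Here is a quick sketch in Python with an invented population of a million people, 52% of whom hold some opinion; a simple random sample of just a thousand recovers that proportion to within about three percentage points.

```python
import random

random.seed(42)

# A hypothetical population of one million people, 52% of whom
# support some proposal (the figures are invented for illustration).
population = [1] * 520_000 + [0] * 480_000

# A simple random sample of a modest 1,000 people.
sample = random.sample(population, 1_000)
estimate = sum(sample) / len(sample)

# The approximate 95% margin of error for a proportion from n people
# is about 1 / sqrt(n); for n = 1,000 that is roughly 0.03, i.e. about
# three percentage points either way.
margin = 1 / len(sample) ** 0.5

print(f"estimate: {estimate:.3f}, margin: +/- {margin:.3f}")
```

Run it a few times with different seeds and the estimate keeps landing near 0.52 – the size of the population barely matters, only the size (and randomness) of the sample.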

However! It is actually quite difficult to take a random sample of people. Random sampling is doable in biology, I suspect, where seeds or plots of land can be chosen at random. It is also fairly possible in manufacturing processes. Medical research relies on random samples, though they are seldom drawn from the total population; really it is more about randomisation, which can be used to support causal claims.

But the area of most interest to most people is people. We actually want to know about how people function, what they think, their economic activity, sport and many other areas. People find people interesting. To get a really good sample of people takes a lot of time and money, and is outside the reach of many researchers. In my own PhD research I approximated a random sample by taking a stratified, cluster semi-random almost convenience sample. I chose representative schools of different types throughout three diverse regions in New Zealand. At each school I asked all the students in a class at each of three year levels. The classes were meant to be randomly selected, but in fact were sometimes just the class that happened to have a teacher away, as my questionnaire was seen as a good way to keep them quiet. Was my data of any worth? I believe so, of course. Was it random? Nope.

Problems people have in getting a good sample include cost, time and also response rate. Much of the data that is cited in papers is far from random.

The problem in teaching

The wonderful thing about teaching statistics is that we can actually collect real data and do analysis on it, and get a feel for the detective nature of the discipline. The problem with sampling is that we seldom have access to truly random data. And by random I do not mean just simple random sampling – which, despite its name, is the least simple method to carry out! Even cluster, systematic and stratified sampling can be a challenge in a classroom setting. And sometimes if we think too hard we realise that what we have is actually a population, and not a sample at all.

It is a great experience for students to collect their own data. They can write a questionnaire and find out all sorts of interesting things, through their own trial and error. But mostly students do not have access to enough subjects to take a random sample. Even if we go to secondary sources, the data is seldom random, and the students do not get the opportunity to take the sample. It would be a pity not to use some interesting data, just because the collection method was dubious (or even realistic). At the same time we do not want students to think that seriously dodgy data has the same value as a carefully collected random sample.

Possible solutions

These are more suggestions than solutions, but the essence is to do the best you can and make sure the students learn to be critical of their own methods.

Teach the best way, pretend and look for potential problems.

Teach the ideal and also teach the reality. Teach about the different ways of taking random samples. Use my video if you like!
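For concreteness, the main ways of taking a random sample can be sketched in a few lines of Python. The class roll and the strata here are invented stand-ins (the two halves might represent, say, two year levels):

```python
import random

random.seed(1)

# A hypothetical class roll of 30 students; the names are made up.
roll = [f"student_{i:02d}" for i in range(30)]

# Simple random sample: every student has an equal chance of selection.
simple = random.sample(roll, 6)

# Systematic sample: a random starting point, then every k-th student
# down the roll.
k = len(roll) // 6
start = random.randrange(k)
systematic = roll[start::k]

# Stratified sample: divide the roll into strata and take a simple
# random sample from each, in proportion to its size.
strata = [roll[:15], roll[15:]]
stratified = [name for stratum in strata
              for name in random.sample(stratum, 3)]

print(simple)
print(systematic)
print(stratified)
```

Cluster sampling follows the same pattern, except that whole groups (such as classes) are chosen at random and everyone in the chosen groups is surveyed – which is exactly the compromise most classroom data collection ends up making.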

Get students to think about the pros and cons of each method, and where problems could arise. Also get them to think about the kinds of data they are using in their exercises, and what biases they may have.

We also need to teach that, used judiciously, a convenience sample can still be of value. For example I have collected data from students in my class about how far they live from university, and whether or not they have a car. This data is not a random sample of any population. However, it is still reasonable to suggest that it may represent all the students at the university – or maybe just the first year students. It possibly represents students in the years preceding and following my sample, unless something has happened to change the landscape. It has worth in terms of inference. Realistically, I am never going to take a truly random sample of all university students, so this may be the most suitable data I ever get. I have no doubt that it is better than no information.

Not all questions are of equal worth. Knowing whether students who own cars live further from university is, in general, interesting but not of great importance. Were I researching topics of great importance, such as safety features in roads or medicine, I would have a greater need for rigorous sampling.

So generally, I see no harm in pretending. I use the data collected from my class, and I say that we will pretend that it comes from a representative random sample. We talk about why it isn’t, but then we move on. It is still interesting data, it is real and it is there. When we write up analysis we include critical comments with provisos on how the sample may have possible bias.

What is important is for students to experience the excitement of discovering real effects (or lack thereof) in real data. What is important is for students to be critical of these discoveries, through understanding the limitations of the data collection process. Consequently I see no harm in using non-random, realistically sampled real data, with a healthy dose of scepticism.