Mathematics teaching Rockstar – Jo Boaler

Moving around the education sector

My life in education has included being a high school maths teacher, then teaching at university for 20 years. I then made resources and gave professional development workshops for secondary school teachers. It was exciting to see the new statistics curriculum being implemented in New Zealand schools. And now we are making resources for, and participating in, the primary school sector. It is wonderful to learn from each level of teaching. We would all benefit from more discussion across the levels.

Educational theory and idea-promoters

My father used to say (and the sexism has not escaped me) “Never run after a woman, a bus or an educational theory, as there will be another one along soon.” Education theories have lifespans, and some theories are more useful than others. I am not a fan of “learning styles” and fear they have served many students ill. However, there are some current ideas and idea-promoters in the teaching of mathematics that I find very attractive. I will begin with Jo Boaler, and intend to introduce you over the next few weeks to Dan Meyer, Carol Dweck and the authors of “Make It Stick.”

Jo Boaler

My first contact with Jo Boaler was reading “The Elephant in the Classroom.” In this Jo points out how society is complicit in the idea of a “maths brain”. Somehow it is socially acceptable to admit or be almost defensively proud of being “no good at maths”. A major problem with this is that her research suggests that later success in life is connected to attainment in mathematics. In order to address this, Jo explores a less procedural approach to teaching mathematics, including greater communication and collaboration.

Mathematical Mindsets

It is interesting to see the effect Jo Boaler’s recent book, “Mathematical Mindsets”, is having on colleagues in the teaching profession. The maths advisors based in Canterbury, NZ, are strong proponents of her idea of “rich tasks”. Here are some tweets about the book:

“I am loving Mathematical Mindsets by @joboaler – seriously – everyone needs to read this”

“Even if you don’t teach maths this book will change how you teach for ever.”

“Hands down the most important thing I have ever read in my life”

What I get from Jo Boaler’s work is that we need to rethink how we teach mathematics. The methods that worked for mathematics teachers are not the methods we need to be using for everyone. The defence “The old ways worked for me” is not defensible in terms of inclusion and equity. I will not even try to boil down her approach in this post, but rather suggest readers visit her website and read the book!

At Statistics Learning Centre we are committed to producing materials that fit with sound pedagogical methods. Our Dragonistics data cards are perfect for use in a number of rich tasks. We are constantly thinking of ways to embed mathematics and statistics tasks into the curriculum of other subjects.

Challenges of implementation

I am aware that many of you readers are not primary or secondary teachers. There are so many barriers to getting mathematics taught in a more exciting, integrated and effective way. Primary teachers are not mathematics specialists, and may well feel less confident in their maths ability. Secondary mathematics teachers may feel constrained by the curriculum and the constant assessment in the last three years of schooling in New Zealand. And tertiary teachers have little incentive to improve their teaching, as it takes time from the more valued work of research.

Though it would be exciting if Jo Boaler’s ideas and methods were espoused in their entirety at all levels of mathematics teaching, I am aware that this is unlikely – as in a probability of zero. However, I believe that all teachers at all levels can improve, even a little at a time. We at Statistics Learning Centre are committed to this vision. Through our blog, our resources, our games, our videos, our lessons and our professional development we aim to empower all teachers to teach statistics – better! We espouse the theories and teachings explained in Mathematical Mindsets, and hope that you also will learn about them and endeavour to put them into place, whatever level you teach at.

Do tell us if Jo Boaler’s work has had an impact on what you do. How can her ideas apply at all levels of teaching? Do teachers need to have a growth mindset about their own ability to improve their teaching?

Here are some quotes to leave you with:

Mathematical Mindsets Quotes

“Many parents have asked me: What is the point of my child explaining their work if they can get the answer right? My answer is always the same: Explaining your work is what, in mathematics, we call reasoning, and reasoning is central to the discipline of mathematics.”
“Numerous research studies (Silver, 1994) have shown that when students are given opportunities to pose mathematics problems, to consider a situation and think of a mathematics question to ask of it—which is the essence of real mathematics—they become more deeply engaged and perform at higher levels.”
“The researchers found that when students were given problems to solve, and they did not know methods to solve them, but they were given opportunity to explore the problems, they became curious, and their brains were primed to learn new methods, so that when teachers taught the methods, students paid greater attention to them and were more motivated to learn them. The researchers published their results with the title “A Time for Telling,” and they argued that the question is not “Should we tell or explain methods?” but “When is the best time to do this?”
“five suggestions that can work to open mathematics tasks and increase their potential for learning: Open up the task so that there are multiple methods, pathways, and representations. Include inquiry opportunities. Ask the problem before teaching the method. Add a visual component and ask students how they see the mathematics. Extend the task to make it lower floor and higher ceiling. Ask students to convince and reason; be skeptical.”

All quotes from

Jo Boaler, Mathematical Mindsets: Unleashing Students’ Potential through Creative Math, Inspiring Messages and Innovative Teaching

Teachers and resource providers – uneasy bedfellows

Trade stands and cautious teachers

It is interesting to provide a trade stand at a teachers’ conference. Some teachers are keen to find out about new things, and come to see how we can help them. Others studiously avoid eye-contact in the fear that we might try to sell them something. Trade stand holders regularly put sweets and chocolate out as “bait” so that teachers will approach close enough to engage. Maybe it gives the teachers an excuse to come closer? Either way it is representative of the uneasy relationship that “trade” has with salaried educators.

Money and education

Money and education have an uneasy relationship. For schools to function, they need considerable funding – always more than they get. In New Zealand, as in many countries, education is predominantly funded by the state. Schools are built and equipped, teachers are paid and resources are purchased with money provided by the taxpayer. Extras are raised through donations from parents and fund-raising efforts. However, because it is not apparent that money is changing hands, schools are perceived as virtuous establishments, existing only because of the goodness of the teachers. This contrasts with the attitude to resource providers, who are sometimes treated as parasites, their motives assumed to be all about the money. It is possible that some resource providers are in it just for the money, but it seems to me that there are richer seams to mine in health, sport, retail and the like.

Statistics Learning Centre is a social enterprise

Statistics Learning Centre is a social enterprise. We fit in the fuzzy area between “not-for-profit” and commercial enterprise. We measure our success by the impact we are having in empowering teachers to teach statistics and all people to understand statistics. We need money in order to continue to make an impact. Statistics Learning Centre has made considerable contributions to the teaching and learning of statistics in New Zealand and beyond for several years. This post lists just some of the impact we have had.  We believe in what we are doing, and work hard so that our social enterprise is on a solid financial footing.

StatsLC empowers teachers

Soon after the change to the NCEA Statistics standards, there was a shortage of good quality practice external exams. Even the ones provided as official exemplars did not really fit the curriculum. Teachers approached us, requesting that we create practice exams that they could trust were correct and aligned to the curriculum. We did so in 2015 and 2016, at considerable personal effort and only marginal financial recompense. We see that as helping statistics to be better understood in schools and the wider community.

We, at Statistics Learning Centre, grasp at opportunities to teach teachers how to teach statistics better, to empower all teachers to teach statistics. Our workshops are well received, and we have regular attenders who know they will get value for their time. We use an inclusive, engaging approach, and participants have a good time. I believe in our resources – the videos, the quizzes, the data cards, the activities, the professional development. I believe that they are among the best you can get. So when I give workshops, I do talk about the resources. It would seem counter-productive for all concerned, not to mention contrived, to do otherwise. They are part of a full professional development session. Many mathematical associations have no trouble with this, and I love to go to conferences, and contribute.

I am aware that there are some commercial enterprises who wish to give commercial presentations at conferences. If their materials are not of a high standard, this can put the organisers in a difficult position. Consequently some organisations have a blanket ban on any presentations that reference any paid product. I feel this is a little unfortunate, as teachers miss out on worthwhile contributions. But I understand the problem.

The Open Market model – supply and demand

I believe that there is value in a market model for resources.  People have suggested that we should get the Government to fund access to Statistics Learning Centre resources for all schools. That would be delightful, and give us the freedom and time to create even better resources. But that would make it almost impossible for any other new provider, who may have an even better product, to get a look in. When such a monopoly occurs, it reduces the incentives for providers to keep improving.

Saving work for the teachers, and building on a product

Teachers want the best for their students, and have limited budgets. They may spend considerable amounts of time printing, cutting and laminating in order to provide teaching resources at a low cost. This was one of the drivers for producing our Dragonistics data cards – to provide, at a reasonable cost, ready-made, robust resources so that teachers did not have to make their own. As it turned out, we were able to provide interesting data with clear relationships and engaging graphics, giving teachers something more than just data turned into data cards.

Free resources

There are free resources available on the internet. Other resources are provided by teachers who are sharing what they have done while teaching their own students. Resources provided for free can be of a high pedagogical standard. Having a high production standard, however, can be prohibitively expensive for individual producers who are working in their spare time.  It can also be tricky for another teacher to know what is suitable, and a lot of time can be spent trying to find high quality, reliable resources.

Teachers and resource providers – a symbiotic relationship

Teachers need good resource providers. It makes sense for experts to create high quality resources, drawing on current thinking with regard to content specific pedagogy. These can support teachers, particularly in areas in which they are less confident, such as statistics. And they do need to be paid for their work.

It helps when people recognise that our materials are sound and innovative, when they give us opportunities to contribute and when they include us at the decision-making table. Let us know how we can help you, and in partnership we can become better bed-fellows.

What do you think?

 

(Note that this post is also being published on our blog, Building a Statistics Learning Community, as I felt it was important.)

 

Data for teaching – real, fake, fictional

There is a push for teachers and students to use real data in learning statistics. In this post I am going to address the benefits and drawbacks of different sources of real data, and make a case for the use of good fictional data as part of a statistical programme.

Here is a video introducing our fictional data set of 180 or 240 dragons, so you know what I am referring to.

Real collected, real database, trivial, fictional

There are two main types of real data. There is the real data that students themselves collect and there is real data in a dataset, collected by someone else, and available in its entirety. There are also two main types of unreal data. The first is trivial and lacking in context and useful only for teaching mathematical manipulation. The second is what I call fictional data, which is usually based on real-life data, but with some extra advantages, so long as it is skilfully generated. Poorly generated fictional data, as often found in case studies, is very bad for teaching.

Focus

When deciding what data to use for teaching statistics, it matters what it is that you are trying to teach. If you are simply teaching how to add up 8 numbers and divide the result by 8, then you are not actually doing statistics, and trivial fake data will suffice. Statistics only exists when there is a context. If you want to teach about the statistical enquiry process, then having the students genuinely involved at each stage of the process is a good idea. If you are particularly wanting to teach about fitting a regression line, you generally want to have multiple examples for students to use. And it would be helpful for there to be at least one linear relationship.

I read a very interesting article in “Teaching Children Mathematics” entitled “Practical Problems: Using Literature to Teach Statistics”. The authors, Hourigan and Leavy, used a children’s book to generate data on the number of times different characters appeared. But what I liked most was that they addressed the need for a “driving question”. In this case the question was provided by a pre-school teacher who could only afford to buy one puppet for the book, and wanted to know which character appears the most in the story. The children practised collecting data as the story was read aloud – they collected their own data to analyse.

Let’s have a look at the different pros and cons of student-collected data, provided real data, and high-quality fictional data.

Collecting data

When we want students to experience the process of collecting real data, they need to collect real data. However, collecting real data is time-consuming, and probably not necessary every year. Student data collection can be simulated by a program such as The Islands, which I wrote about previously. Data students collect themselves is much more likely to have errors in it, or be “dirty” (which is a good thing). When students are only given clean datasets, such as those usually provided with textbooks, they do not learn the skills of deciding what to do with an errant data point. Fictional databases can also have dirty data generated into them. The fictional inhabitants of The Islands sometimes lie, and often refuse to give consent for data collection on them.
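As an illustration, deliberately "dirtying" generated data is straightforward. The following Python sketch is invented for this post – the record structure, field names and corruption types are not how The Islands or our own resources actually work:

```python
import random

random.seed(1)

# A tidy fictional dataset: each record is (height_cm, behaviour).
# The fields and values are made up for illustration.
clean = [(round(random.gauss(150.0, 20.0), 1),
          random.choice(["friendly", "reserved"]))
         for _ in range(50)]

def make_dirty(records):
    """Return a copy of records with every 10th entry deliberately
    corrupted: a missing value, an impossible value, or an
    inconsistently coded label, cycling through the three kinds."""
    dirty = list(records)
    for i in range(0, len(dirty), 10):
        height, behaviour = dirty[i]
        kind = (i // 10) % 3
        if kind == 0:
            dirty[i] = (None, behaviour)            # missing measurement
        elif kind == 1:
            dirty[i] = (-height, behaviour)         # impossible negative height
        else:
            dirty[i] = (height, behaviour.upper())  # inconsistent coding
    return dirty

dirty = make_dirty(clean)
problems = [r for r in dirty
            if r[0] is None or r[0] < 0 or r[1] != r[1].lower()]
print(f"{len(problems)} of {len(dirty)} records need a decision")
```

Students then have to decide, record by record, whether to drop, repair or keep each flagged value – exactly the skill that too-clean textbook datasets never exercise.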

Motivation

One of the species of dragons included in our database

I have heard that after a few years of school, graphs about cereal preference, number of siblings and type of pet get a little old. These topics, relating to the students, are motivating at first, but often there is no purpose to the investigation other than to get data for a graph.  Students need to move beyond their own experience and are keen to try something new. Data provided in a database can be motivating, if carefully chosen. There are opportunities to use databases that encourage awareness of social justice, the environment and politics. Fictional data must be motivating or there is no point! We chose dragons as a topic for our first set of fictional data, as dragons are interesting to boys and girls of most ages.

A meaningful question

Here I refer again to that excellent article that talks about a driving question. There needs to be a reason for analysing the data. Maybe there is concern about the food provided at the tuck shop, and whether there are healthy alternatives. Or the question can be tied into another area of the curriculum: which type of bean plant grows faster? Can we increase the germination rate of seeds? The Census@school data has the potential for driving questions, but they probably need to be helped along. For existing datasets, the driving question used by students might not be the same as the one (if any) driving the original collection of the data. Sometimes that is because the original purpose is not “motivating” for the students, or not at an appropriate level. If you can’t find or make up a motivating, meaningful question, the database is not appropriate. For our fictional dragon data, we have developed two scenarios – vaccinating for Pacific Draconian flu, and building shelters to make up for the deforestation of the island. For the vaccination scenario, we need to know about behaviour and size. For the shelter scenario, we need to make decisions based on size, strength, behaviour and breath type. There is potential for a number of other scenarios that will also create driving questions.

Getting enough data

It can be difficult to get enough data for effects to show up. When students are limited to their class or family, this limits the number of observations. Only some databases have enough observations in them. There is no such problem with fictional databases, as you can just generate as much data as you need! There are special issues with regard to teaching about sampling, where you would want a large database with constrained access, like the Islands data, or the use of cards.

Variables

A problem with the data students collect is that it tends to be categorical, which limits the types of analysis that can be used. In databases, it can also be difficult to find measurement level data. In our fictional dragon database, we have height, strength and age, which all take numerical values. There are also four categorical variables. The Islands database has a large number of variables, both categorical and numerical.

Interesting Effects

Though it is good for students to understand that quite often there is no interesting effect, we would like students to have the satisfaction of finding interesting effects in the data, especially at the start. Interesting effects are particularly exciting when the data is real, as students can apply their findings to the real-world context. Student-collected data is risky in terms of finding any noticeable relationships, and it can be disappointing to do a long and involved study and find no effects. Databases from known studies can provide good effects, but unfortunately the variables with no effect tend to be left out of the databases, giving a false sense that there will always be effects. When we generate our fictional data, we make sure that the relationships we would like are there, with enough interaction and noise. This is a highly skilled process, honed by decades of making up data for student assessment at university. (Guilty admission.)
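A toy sketch of what "making sure the relationships are there" can look like, in Python. The variable names, coefficients and noise level are invented for this illustration – this is not the actual Dragonistics generator:

```python
import random

random.seed(42)

def generate_dragons(n):
    """Generate fictional dragon records with one built-in effect
    (older dragons tend to be taller, plus noise) and one variable
    (breath type) with deliberately no effect at all."""
    dragons = []
    for _ in range(n):
        age = random.randint(1, 100)
        height = 50 + 1.2 * age + random.gauss(0, 15)  # signal + noise
        breath = random.choice(["fire", "ice"])        # pure noise variable
        dragons.append({"age": age, "height": round(height, 1),
                        "breath": breath})
    return dragons

dragons = generate_dragons(240)

# The planted effect should show up in a simple comparison:
young = [d["height"] for d in dragons if d["age"] <= 50]
old = [d["height"] for d in dragons if d["age"] > 50]
gap = sum(old) / len(old) - sum(young) / len(young)
print(f"older dragons are about {gap:.0f} cm taller on average")
```

The balance of signal to noise controls how easily the effect shows up, and leaving breath type with no effect lets students experience finding nothing, too.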

Ethics

There are ethical issues to be addressed in the collection of real data from people the students know. Informed consent should be granted, and there needs to be thorough vetting. Young students (and not so young) can be damagingly direct in their questions. You may need to explain that it can be upsetting for people to be asked if they have been beaten or bullied. When using fictional data, that may appear real, such as the Islands data, it is important for students to be aware that the data is not real, even though it is based on real effects. This was one of the reasons we chose to build our first database on dragons, as we hope that will remove any concerns about whether the data is real or not!

The following table summarises the post.

| | Real data collected by the students | Real existing database | Fictional data (The Islands, Kiwi Kapers, Dragons, Desserts) |
| --- | --- | --- | --- |
| Data collection | Real experience | Nil | Sometimes |
| Dirty data | Always | Seldom | Can be controlled |
| Motivating | Can be | Can be | Must be! |
| Enough data | Time-consuming, difficult | Hard to find | Always |
| Meaningful question | Sometimes; can be trivial | Can be difficult | Part of the fictional scenario |
| Variables | Tend towards nominal | Often too few variables | Generate as needed |
| Ethical issues | Often | Usually fine | Need to manage reality |
| Effects | Unpredictable | Can be obvious or trivial, or difficult | Can be managed |

Divide and destroy in statistics teaching

A reductionist approach to teaching statistics destroys its very essence

I’ve been thinking a bit about systems thinking and reductionist thinking, especially with regard to statistics teaching and mathematics teaching. I used to teach a course on systems thinking, with regard to operations research. Systems thinking is concerned with the whole. The parts of the system interact and cannot be isolated without losing the essence of the system. Modern health providers and social workers realise that a child is a part of a family, which may be a part of a larger community, all of which have to be treated if the child is to be helped. My sister, a physio, always finds out about the home background of her patient, so that any treatment or exercise regime will fit in with their life. Reductionist thinking, by contrast, reduces things to their parts, and isolates them from their context.

Reductionist thinking in teaching mathematics

Mathematics teaching lends itself to reductionist thinking. You strip away the context, then break a problem down into smaller parts, solve the parts, and then put it all back together again. Students practise solving straightforward problems over and over to make sure they can do it right. They feel that a column of little red ticks is evidence that they have learned something correctly. As a school pupil, I loved the columns of red ticks. I have written about the need for drill in some aspects of statistics teaching and learning, and can see the value of automaticity – the ability to answer something without having to think too hard. That can be a little like learning a language – you need to be automatic with the vocabulary and basic verb structures. I used to spend my swimming training laps conjugating Latin verbs – amo, amas, amat (breathe), amamus, amatis, amant (breathe). I never did meet any ancient Romans to converse with, to see if my recitation had helped any, but five years of Latin vocab is invaluable in pub quizzes. But learning statistics has little in common with learning a language.

There is more to teaching than having students learn how to get stuff correct. Learning involves the mind, heart and hands. The best learning occurs when students actually want to know the answer. This doesn’t happen when context has been removed.

I was struck by Jo Boaler’s, “The Elephant in the Classroom”, which opened my eyes to how monumentally dull many mathematics lessons can be to so many people. These people are generally the ones who do not get satisfied by columns of red ticks, and either want to know more and ask questions, or want to be somewhere else. Holistic lessons, that involve group work, experiential learning, multiple solution methods and even multiple solutions, have been shown to improve mathematics learning and results, and have lifelong benefits to the students. The book challenged many of my ingrained feelings about how to teach and learn mathematics.

Teach statistics holistically, joyfully

Teaching statistics is inherently suited for a holistic approach. The problem must drive the model, not the other way around. Teachers of mathematics need to think more like teachers of social sciences if they are to capture the joy of teaching and learning statistics.

At one time I was quite taken with an approach suggested for students who are struggling: working through a number of examples in parallel, doing one step on each before moving on to the next step. The examples I saw are great: they use real data, and the sentences are correct. I can see how that might appeal to students who are finding the language aspects difficult, and are interested in writing an assignment that will get them a passing grade. However, I now have concerns about the approach, and it has made me think again about some of the resources we provide at Statistics Learning Centre. I don’t think a reductionist approach is suitable for the study of statistics.

Context, context, context

Context is everything in statistical analysis. Every time we produce a graph or a numerical result we should be thinking about the meaning in context. If there is a difference between the medians showing up in the graph, and reinforced by confidence intervals that do not overlap, we need to be thinking about what that means about the heart-rate in swimmers and non-swimmers, or whatever the context is. For this reason every data set needs to be real. We cannot expect students to want to find real meaning in manufactured data. And students need to spend long enough in each context in order to be able to think about the relationship between the model and the real-life situation. This is offset by the need to provide enough examples from different contexts so that students can learn what is general to all such models, and what is specific to each. It is a question of balance.

Keep asking questions

In our efforts to help improve the teaching of statistics, we are now developing teaching guides and suggestions to accompany our resources. I attend workshops, talk to teachers and students, read books, and think very hard about what helps all students to learn statistics in a holistic way. I do not begin to think I have the answers, but I think I have some pretty good questions. The teaching of statistics is such a new field, and so important. I hope we all keep asking questions about what we are teaching, and how, and why.

Don’t teach significance testing – Guest post

The following is a guest post by Tony Hak of Rotterdam School of Management. I know Tony would love some discussion about it in the comments. I remain undecided either way, so would like to hear arguments.

GOOD REASONS FOR NOT TEACHING SIGNIFICANCE TESTING

It is now well understood that p-values are not informative and are not replicable. Soon null hypothesis significance testing (NHST) will be obsolete and will be replaced by the so-called “new” statistics (estimation and meta-analysis). This means that undergraduate courses in statistics must already be teaching estimation and meta-analysis as the preferred way to present and analyze empirical results. If not, the statistical skills of their graduates will be outdated on the day they leave school. But it is less evident whether or not NHST (though not preferred as an analytic tool) should still be taught. Because estimation is already routinely taught as a preparation for the teaching of NHST, the necessary reform in teaching will not require the addition of new elements to current programs but rather the removal of the current emphasis on NHST, or the complete removal of the teaching of NHST from the curriculum. The current trend is to continue the teaching of NHST. In my view, however, the teaching of NHST should be discontinued immediately because it is (1) ineffective and (2) dangerous, and (3) it serves no aim.

1. Ineffective: NHST is difficult to understand and it is very hard to teach it successfully

We know that even good researchers often do not appreciate the fact that NHST outcomes are subject to sampling variation, and believe that a “significant” result obtained in one study almost guarantees a significant result in a replication, even one with a smaller sample size. Is it then surprising that our students also do not understand what NHST outcomes do tell us and what they do not tell us? In fact, statistics teachers know that the principles and procedures of NHST are not well understood by undergraduate students who have successfully passed their courses on NHST. Courses on NHST fail to achieve their self-stated objectives, assuming that these objectives include achieving a correct understanding of the aims, assumptions, and procedures of NHST as well as a proper interpretation of its outcomes. It is very hard indeed to find a comment on NHST in any student paper (an essay, a thesis) that is close to a correct characterization of NHST or its outcomes. There are many reasons for this failure, but obviously the most important one is that NHST is a very complicated and counterintuitive procedure. It requires students and researchers to understand that a p-value is attached to an outcome (an estimate) based on its location in (or relative to) an imaginary distribution of sample outcomes around the null. Another reason, connected to their failure to understand what NHST is and does, is that students believe that NHST “corrects for chance” and hence cannot cognitively accept that p-values themselves are subject to sampling variation (i.e. chance).
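The sampling variation of p-values is easy to demonstrate by simulation. Here is a sketch in Python using a simple known-sigma z-test – chosen only to keep the code self-contained, not because it is the test anyone would use in practice:

```python
import math
import random

random.seed(3)

def p_value(sample, mu0=0.0, sigma=1.0):
    """Two-sided z-test p-value for H0: mean = mu0, with sigma
    treated as known (a simplification for illustration)."""
    n = len(sample)
    z = (sum(sample) / n - mu0) / (sigma / math.sqrt(n))
    return math.erfc(abs(z) / math.sqrt(2))

# Identical studies: same true effect (mean 0.4), same n = 25.
pvals = sorted(p_value([random.gauss(0.4, 1.0) for _ in range(25)])
               for _ in range(1000))

# The spread across replications is enormous:
print(f"5th percentile  p = {pvals[50]:.4f}")
print(f"median          p = {pvals[500]:.4f}")
print(f"95th percentile p = {pvals[950]:.4f}")
```

The percentiles span several orders of magnitude: two studies of exactly the same effect routinely land on opposite sides of 0.05, which is the replication problem in miniature.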

2. Dangerous: NHST thinking is addictive

One might argue that there is no harm in adding a p-value to an estimate in a research report and, hence, that there is no harm in teaching NHST in addition to teaching estimation. However, the mixed experience with statistics reform in clinical and epidemiological research suggests that a more radical change is needed. Reports of clinical trials and of studies in clinical epidemiology now usually report estimates and confidence intervals, in addition to p-values. However, as Fidler et al. (2004) have shown, and contrary to what one would expect, authors continue to discuss their results in terms of significance. Fidler et al. therefore concluded that “editors can lead researchers to confidence intervals, but can’t make them think”. This suggests that a successful statistics reform requires a cognitive change that should be reflected in how results are interpreted in the Discussion sections of published reports.

The stickiness of dichotomous thinking can also be illustrated with the results of a more recent study of Coulson et al. (2010). They presented estimates and confidence intervals obtained in two studies to a group of researchers in psychology and medicine, and asked them to compare the results of the two studies and to interpret the difference between them. It appeared that a considerable proportion of these researchers, first, used the information about the confidence intervals to make a decision about the significance of the results (in one study) or the non-significance of the results (of the other study) and, then, drew the incorrect conclusion that the results of the two studies were in conflict. Note that no NHST information was provided and that participants were not asked in any way to “test” or to use dichotomous thinking. The results of this study suggest that NHST thinking can (and often will) be used by those who are familiar with it.
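The misreading in the Coulson et al. study is easy to reproduce on paper: two overlapping 95% confidence intervals do not imply that the underlying difference is non-significant, let alone that the studies conflict. A small Python check with invented numbers (known-sigma z intervals, for simplicity):

```python
import math

def ci_95(mean, sigma, n):
    """95% z-interval for a mean with sigma treated as known."""
    se = sigma / math.sqrt(n)
    return (mean - 1.96 * se, mean + 1.96 * se), se

# Invented study results: group means 10.0 and 12.5, sigma 5, n = 50 each.
(lo_a, hi_a), se_a = ci_95(10.0, 5.0, 50)
(lo_b, hi_b), se_b = ci_95(12.5, 5.0, 50)

overlap = hi_a > lo_b                      # the intervals overlap...
z = (12.5 - 10.0) / math.sqrt(se_a**2 + se_b**2)
p = math.erfc(z / math.sqrt(2))            # ...yet the difference is significant
print(f"CIs overlap: {overlap}, two-sided p = {p:.3f}")
```

Cumming’s rough “rule of eye” is that two 95% intervals can overlap by up to about half the average margin of error while the difference remains significant at roughly the 5% level – dichotomous eyeballing of overlap gets this wrong.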

The fact that it appears to be very difficult for researchers to break the habit of thinking in terms of “testing” is, as with every addiction, a good reason to prevent future researchers from coming into contact with it in the first place and, if contact cannot be avoided, to provide them with robust resistance mechanisms. The implication for statistics teaching is that students should first learn estimation as the preferred way of presenting and analyzing research information, and should be introduced to NHST, if at all, only after estimation has become their routine statistical practice.

3. It serves no aim: Relevant information can be found in research reports anyway

Our experience that the teaching of NHST consistently fails its own aims (because NHST is too difficult to understand), and the fact that NHST appears to be dangerous and addictive, are two good reasons to stop teaching it immediately. But there is a seemingly strong argument for continuing to introduce students to NHST, namely that a new generation of graduates would otherwise be unable to read the (past and current) academic literature in which authors routinely focus on the statistical significance of their results. The suggestion is that someone who does not know NHST cannot correctly interpret the outcomes of NHST practices. This argument has no value, for the simple reason that it assumes NHST outcomes are relevant and should be interpreted. But the reason we are having the current discussion about teaching is precisely that NHST outcomes are at best uninformative (beyond the information already provided by estimation) and at worst misleading or plain wrong. The point all along is that nothing is lost by simply ignoring the NHST-related information in a research report and focusing only on the information provided about the observed effect size and its confidence interval.
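As a minimal sketch of the estimation-first habit, with invented data: a report would lead with the observed effect and its confidence interval rather than a verdict.

```python
# Estimation-first reporting (invented example data: scores under two teaching
# methods). The output is an effect size with an interval, not a yes/no verdict.
import math
import statistics

group_a = [72, 68, 75, 80, 66, 74, 71, 69, 77, 73]
group_b = [78, 74, 81, 85, 72, 79, 76, 75, 83, 80]

diff = statistics.mean(group_b) - statistics.mean(group_a)
se = math.sqrt(statistics.variance(group_a) / len(group_a)
               + statistics.variance(group_b) / len(group_b))
lo, hi = diff - 1.96 * se, diff + 1.96 * se
print(f"Observed difference: {diff:.1f} points, 95% CI ({lo:.1f}, {hi:.1f})")
```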

Bibliography

Coulson, M., Healy, M., Fidler, F., & Cumming, G. (2010). Confidence Intervals Permit, But Do Not Guarantee, Better Inference than Statistical Significance Testing. Frontiers in Quantitative Psychology and Measurement, 20(1), 37-46.

Fidler, F., Thomason, N., Finch, S., & Leeman, J. (2004). Editors Can Lead Researchers to Confidence Intervals, But Can’t Make Them Think: Statistical Reform Lessons from Medicine. Psychological Science, 15(2), 119-126.

This text is a condensed version of the paper “After Statistics Reform: Should We Still Teach Significance Testing?” published in the Proceedings of ICOTS9.

 

The Myth of Random Sampling

I feel a slight quiver of trepidation as I begin this post – a little like the boy who pointed out that the emperor has no clothes.

Random sampling is a myth. Practical researchers know this and deal with it. Theoretical statisticians live in a theoretical world where random sampling is possible and ubiquitous – which is just as well really. But teachers of statistics live in a strange half-real-half-theoretical world, where no one likes to point out that real-life samples are seldom random.

The problem in general

In order for most inferential statistical conclusions to be valid, the sample we are using must obey certain rules. In particular, each member of the population must have an equal probability of being chosen. In this way we reduce the opportunity for systematic error, or bias. When a truly random sample is taken, it is almost miraculous how well we can draw conclusions about the source population, even with a modest sample of a thousand. On a side note, if the general population understood this, and the opportunity for bias and corruption were eliminated, general elections and referenda could be run at much less cost by taking a good random sample.
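That near-miraculous claim is easy to demonstrate with a simulation (a sketch of mine; the population value is invented):

```python
# A simple random sample of 1000 from a large population pins down a population
# proportion remarkably well. The "true" proportion here is invented.
import random

random.seed(2)
population_proportion = 0.52      # assumed true support for option A

estimates = []
for _ in range(500):              # 500 independent random samples of 1000
    sample = [random.random() < population_proportion for _ in range(1000)]
    estimates.append(sum(sample) / 1000)

within_3pc = sum(abs(e - population_proportion) <= 0.03 for e in estimates)
print(f"{within_3pc} of 500 samples landed within 3 points of the truth")
```

The margin of error depends on the sample size, not the population size, which is why a thousand people can stand in for a whole country – provided the sample really is random.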

However! It is actually quite difficult to take a random sample of people. Random sampling is doable in biology, I suspect, where seeds or plots of land can be chosen at random. It is also fairly possible in manufacturing processes. Medical research relies on the use of a random sample, though it is seldom of the total population. Really it is more about randomisation, which can be used to support causal claims.

But the area of most interest to most people is people. We actually want to know about how people function, what they think, their economic activity, sport and many other areas. People find people interesting. To get a really good sample of people takes a lot of time and money, and is outside the reach of many researchers. In my own PhD research I approximated a random sample by taking a stratified, cluster semi-random almost convenience sample. I chose representative schools of different types throughout three diverse regions in New Zealand. At each school I asked all the students in a class at each of three year levels. The classes were meant to be randomly selected, but in fact were sometimes just the class that happened to have a teacher away, as my questionnaire was seen as a good way to keep them quiet. Was my data of any worth? I believe so, of course. Was it random? Nope.

Problems people have in getting a good sample include cost, time and also response rate. Much of the data that is cited in papers is far from random.

The problem in teaching

The wonderful thing about teaching statistics is that we can actually collect real data, do analysis on it, and get a feel for the detective nature of the discipline. The problem with sampling is that we seldom have access to truly random data. By random I do not mean just simple random sampling – which is, despite its name, the least simple method to carry out! Even cluster, systematic and stratified sampling can be a challenge in a classroom setting. And sometimes if we think too hard we realise that what we have is actually a population, and not a sample at all.
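For what it is worth, the mechanics of the methods just mentioned can be sketched on a toy class roll (names and strata invented for illustration):

```python
# Sketches of three sampling methods on an invented roll of 30 students:
# simple random, systematic, and stratified.
import random

random.seed(3)
roll = [f"student_{i:02d}" for i in range(30)]

# Simple random sample: every subset of size 6 equally likely
srs = random.sample(roll, 6)

# Systematic sample: every k-th person after a random start
k = 5
start = random.randrange(k)
systematic = roll[start::k]

# Stratified sample: split the roll into strata (here, arbitrary thirds)
# and take a simple random sample from each
strata = [roll[0:10], roll[10:20], roll[20:30]]
stratified = [name for stratum in strata for name in random.sample(stratum, 2)]

print(srs, systematic, stratified, sep="\n")
```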

It is a great experience for students to collect their own data. They can write a questionnaire and find out all sorts of interesting things, through their own trial and error. But mostly students do not have access to enough subjects to take a random sample. Even if we go to secondary sources, the data is seldom random, and the students do not get the opportunity to take the sample. It would be a pity not to use some interesting data, just because the collection method was dubious (or even realistic). At the same time we do not want students to think that seriously dodgy data has the same value as a carefully collected random sample.

Possible solutions

These are more suggestions than solutions, but the essence is to do the best you can and make sure the students learn to be critical of their own methods.

Teach the best way, pretend and look for potential problems.

Teach the ideal and also teach the reality. Teach about the different ways of taking random samples. Use my video if you like!

Get students to think about the pros and cons of each method, and where problems could arise. Also get them to think about the kinds of data they are using in their exercises, and what biases they may have.

We also need to teach that, used judiciously, a convenience sample can still be of value. For example I have collected data from students in my class about how far they live from university, and whether or not they have a car. This data is not a random sample of any population. However, it is still reasonable to suggest that it may represent all the students at the university – or maybe just the first year students. It possibly represents students in the years preceding and following my sample, unless something has happened to change the landscape. It has worth in terms of inference. Realistically, I am never going to take a truly random sample of all university students, so this may be the most suitable data I ever get. I have no doubt that it is better than no information.
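One honest way to use such a convenience sample (a sketch with invented distances, in km) is to report an interval estimate alongside an explicit caveat about whom the sample can plausibly represent:

```python
# Bootstrap the mean of a convenience sample and report an interval with a
# caveat. The distances below are invented stand-ins for my class data.
import random
import statistics

random.seed(4)
distances = [1.2, 0.8, 5.5, 3.0, 12.0, 2.5, 0.5, 7.8, 4.2, 6.1,
             2.0, 9.5, 1.1, 3.7, 15.0]

boot_means = []
for _ in range(2000):
    resample = random.choices(distances, k=len(distances))  # sample with replacement
    boot_means.append(statistics.mean(resample))

boot_means.sort()
lo, hi = boot_means[50], boot_means[1949]   # approximate 95% interval
print(f"Mean distance {statistics.mean(distances):.1f} km, "
      f"bootstrap 95% CI ({lo:.1f}, {hi:.1f}) km "
      "- for students like these, not necessarily all students")
```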

Not all questions are of equal worth. Knowing whether students who own cars live further from university, in general, is interesting but not of great importance. Were I to be researching topics of great importance, such as safety features in roads or medicine, I would have a greater need for rigorous sampling.

So generally, I see no harm in pretending. I use the data collected from my class, and I say that we will pretend that it comes from a representative random sample. We talk about why it isn’t, but then we move on. It is still interesting data, it is real and it is there. When we write up analysis we include critical comments with provisos on how the sample may have possible bias.

What is important is for students to experience the excitement of discovering real effects (or lack thereof) in real data. What is important is for students to be critical of these discoveries, through understanding the limitations of the data collection process. Consequently I see no harm in using non-random, realistically sampled real data, with a healthy dose of scepticism.

Open Letter to Khan Academy about Basic Probability

Khan academy probability videos and exercises aren’t good either

Dear Mr Khan

You have created an amazing resource that thousands of people all over the world get a lot of help from. Well done. Some of your materials are not very good, though, so I am writing this open letter in the hope that it might make some difference. Like many others, I believe that something as popular as Khan Academy will benefit from constructive criticism.

I fear that the reason that so many people like your mathematics videos so much is not because the videos are good, but because their experience in the classroom is so bad, and the curriculum is poorly thought out and encourages mechanistic thinking. This opinion is borne out by comments I have read from parents and other bloggers. The parents love you because you help their children pass tests.  (And these tests are clearly testing the type of material you are helping them to pass!) The bloggers are not so happy, because you perpetuate a type of mathematical instruction that should have disappeared by now. I can’t even imagine what the history teachers say about your content-driven delivery, but I will stick to what I know. (You can read one critique here)

Just over a year ago I wrote a balanced review of some of the Khan Academy videos about statistics. I know that statistics is difficult to explain – in fact one of the hardest subjects to teach. You can read my review here. I’ve also reviewed a selection of videos about confidence intervals, one of which was from Khan Academy. You can read the review here.

Consequently I am aware that blogging about the Khan Academy in anything other than glowing terms is an invitation for vitriol from your followers.

However, I thought it was about time I looked at the exercises that are available on KA, wondering if I should recommend them to high school teachers for their students to use for review. I decided to focus on one section, introduction to probability. I put myself in the place of a person who was struggling to understand probability at school.

Here is the verdict.

First of all the site is very nice. It shows that it has a good sized budget to use on graphics and site mechanics. It is friendly to get into. I was a bit confused that the first section in the Probability and Statistics Section is called “Independent and dependent events”. It was the first section though. The first section of this first section is called Basic Probability, so I felt I was in the right place. But then under the heading, Basic probability, it says, “Can I pick a red frog out of a bag that only contains marbles?” Now I have no trouble with humour per se, and some people find my videos pretty funny. But I am very careful to avoid confusing people with the humour. For an anxious student who is looking for help, that is a bit confusing.

I was excited to see that this section had five videos, and two sets of exercises. I was pleased about that, as I’ve wanted to try out some exercises for some time, particularly after reading the review from Fawn Nguyen on her experience with exercises on Khan Academy. (I suggest you read this – it’s pretty funny.)

So I watched the first video about probability and it was like any other KA video I’ve viewed, with primitive graphics and a stumbling repetitive narration. It was correct enough, but did not take into account any of the more recent work on understanding probability. It used coins and dice. Big yawn. It wastes a lot of time. It was ok. I do like that you have the interactive transcript so you can find your way around.

It dawned on me that nowhere do you actually talk about what probability is. You seem to assume that the students already know that. In the very start of the first video it says,

“What I want to do in this video is give you at least a basic overview of probability. Probability, a word that you’ve probably heard a lot of and you are probably just a little bit familiar with it. Hopefully this will get you a little deeper understanding.”

Later in the video there is a section on the idea of large numbers of repetitions, which is one way of understanding probability. But it really is a bit skimpy on why anyone would want to find or estimate a probability, and what the values actually mean. But it was ok.

The first video was about single instances – one toss of a coin or one roll of a die. Then the second video showed you how to answer the questions in the exercises, which involved two dice. This seemed ok, if rather a sudden jump from the first video. Sadly both of these examples perpetuate the common misconception that if there are, say, 6 alternative outcomes, they will necessarily be equally likely.
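Enumerating the 36 equally likely (die, die) pairs shows why the misconception matters: the eleven possible totals are far from equally likely.

```python
# Count how many of the 36 equally likely (die, die) pairs give each total.
from collections import Counter

totals = Counter(a + b for a in range(1, 7) for b in range(1, 7))
for total in range(2, 13):
    print(f"P(total = {total:2d}) = {totals[total]}/36")
# A total of 7 has probability 6/36, while a total of 2 has only 1/36.
```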

Exercises

Then we get to some exercises called “Probability Space”, which is not an enormously helpful heading. But my main quest was to have a go at the exercises, so that is what I did. And that was not a good thing. The exercises were not stepped, but started right away with an example involving two dice and the phrase “at least one of”. There was meant to be a graphic to help me, but instead I had the message “scratchpad not available”. I will summarise my concerns about the exercises at the end of my letter. I clicked on a link to a video that wasn’t listed on the left, called Probability Space, and got a different kind of video.
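For reference, the “at least one of” style of question can be worked two ways – by brute-force enumeration, or by the complement rule P(at least one) = 1 − P(none). A sketch for two dice:

```python
# P(at least one die shows a six) when rolling two fair dice, computed by
# enumeration and by the complement rule. Fractions keep the arithmetic exact.
from fractions import Fraction

outcomes = [(a, b) for a in range(1, 7) for b in range(1, 7)]
by_counting = Fraction(sum(1 for a, b in outcomes if 6 in (a, b)), len(outcomes))

by_complement = 1 - Fraction(5, 6) ** 2   # 1 - P(no sixes)

print(by_counting, by_complement)   # both are 11/36
```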

This video was better in that it had moving pictures and a script. But I have problems with gambling in videos like this. There are some cultures in which gambling is not acceptable. The other problem I have is with the term “exact probability”, which was used several times. What do we mean by “exact probability”? How does he know it is exact? I think this sends the wrong message.

Then on to the next videos which were worked examples, entitled “Example: marbles from a bag, Example: Picking a non-blue marble, Example: Picking a yellow marble.” Now I understand that you don’t want to scare students with terminology too early, but I would have thought it helpful to call the second one, “complementary events, picking a non-blue marble”. That way if a student were having problems with complementary events in exercises from school, they could find their way here. But then I’m not sure who your audience is. Are you sure who your audience is?

The first marble video was ok, though the terminology was sloppy.

The second marble video, called “Example: picking a non-blue marble”, is glacially slow. There is a point, I guess, in showing students how to draw a bag and marbles, but… Then the next example is of picking numbers at random. Why would we ever want to do this? Then we come to an example of circular targets. This involves some problem-solving regarding areas of circles, and cancelling out fractions including pi. What is this about? We are trying to teach about probability, so why have you brought in a complication involving the area of a circle?

The third marble video attempts to introduce the idea of events, but doesn’t really. By trying not to confuse with technical terms, the explanation is more confusing.

Now onto some more exercises. The Khan model is that you have to get 5 correct in a row in order to complete an exercise. I hope there is some sensible explanation for this, because it sure would drive me crazy to have to do that. (As I heard expressed on Twitter)

What are circular targets doing in with basic probability?

The first example is a circular target one. I SO could not be bothered working out the area stuff, so I used the hints to find the answer so I could move on to a more interesting example. The next example was finding the probability of rolling a 4 from a fair six-sided die. This is trivial, but would not have been a bad example to start with. The next question involved three colours of marbles, and finding the probability of not green. Then another dartboard one. Sigh. Then another dartboard one. I’m never going to find out what happens if I get five right in a row if I don’t start doing these properly. Oh no – it gave me circumference. SO can’t be bothered.

And that was the end of Basic probability. I never did find out what happens if I get five correct in a row.

Venn diagrams

The next topic is called “Venn diagrams and adding probabilities”. I couldn’t resist seeing what you would do with a Venn diagram. This one nearly reduced me to tears.

As you know by now, I have an issue with gambling, so it will come as no surprise that I object to the use of playing cards in this example. It makes the assumption that students know about playing cards. You do take one and a half minutes to explain the contents of a standard pack of cards.  Maybe this is part of the curriculum, and if so, fair enough. The examples are standard – the probability of getting a Jack of Hearts etc. But then at 5:30 you start using Venn diagrams. I like Venn diagrams, but they are NOT good for what you are teaching at this level, and you actually did it wrong. I’ve put a comment in the feedback section, but don’t have great hopes that anything will change. Someone else pointed this out in the feedback two years ago, so no – it isn’t going to change.

Khan Venn diagram

This diagram is misleading, as is shown by the confusion expressed in the questions from viewers. There should be a green 3, a red 12, and a yellow 1.

Now Venn diagrams seem like a good approach in this instance, but decades of experience in teaching and communicating complex probabilities have shown that in most instances a two-way table is more helpful. The table for the Jack of Hearts problem would look like this:

            Jacks   Not Jacks   Total
Hearts        1        12         13
Not Hearts    3        36         39
Total         4        48         52

(Any teachers reading this letter – try it! Tables are SO much easier for problem solving than Venn diagrams)
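The same table also makes the calculation mechanical (a sketch): P(Jack or Heart) falls straight out of the margins via inclusion-exclusion.

```python
# P(Jack or Heart) from the two-way table's margins, using
# inclusion-exclusion: P(A or B) = P(A) + P(B) - P(A and B).
from fractions import Fraction

total = 52
jacks, hearts, jack_of_hearts = 4, 13, 1   # the margins and the overlap cell

p_jack_or_heart = Fraction(jacks + hearts - jack_of_hearts, total)
print(p_jack_or_heart)   # 4/13
```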

But let’s get down to principles.

The principles of instruction that KA have not followed in the examples:

  • Start easy and work up
  • Be interesting in your examples – who gives a flying fig about two dice or random numbers?
  • Make sure the hardest part of the question is the thing you are testing. This is particularly violated with the questions involving areas of circles.
  • Don’t make me so bored that I can’t face trying to get five in a row and not succeed.

My point

Yes, I do have one. Mr Khan you clearly can’t be stopped, so can you please get some real teachers with pedagogical content knowledge to go over your materials systematically and make them correct. You have some money now, and you owe it to your benefactors to GET IT RIGHT. Being flippant and amateurish is fine for amateurs but you are now a professional, and you need to be providing material that is professionally produced. I don’t care about the production values – keep the stammers and “lellows” in there if you insist. I’m very happy you don’t have background music as I can’t stand it myself. BUT… PLEASE… get some help and make your videos and exercises correct and pedagogically sound.

Dr Nic

PS – anyone else reading this letter, take a look at the following videos for mathematics.

And of course I think my own Statistics Learning Centre videos are pretty darn good as well.

Other posts about concerns about Khan:

Another Open Letter to Sal (I particularly like the comment by Michael Paul Goldenberg)

Breaking the cycle (a comprehensive summary of the responses to criticism of Khan)

Teaching with School League tables

NCEA League tables in the newspaper

My husband ran for cover this morning when he saw high school NCEA (National Certificates of Educational Achievement) league tables in the Press. However, rather than rave at him yet again, I will grasp the opportunity to expound to a larger audience. Much as I loathe and despise league tables, they are a great opportunity to teach students to explore data-rich reports with a critical and educated eye. There are many lessons to learn from league tables. With good teaching we can help dispel some of the myths that league tables promulgate.

When a report is made short and easy to understand, there is a good chance that much of the ‘truth’ has been lost along with the complexity. The table in front of me lists 55 secondary and area schools from the Canterbury region. These schools include large “ordinary” schools and small specialist schools such as Van Asch Deaf Education Centre and Southern Regional Health School. They include single-sex and co-ed, private, state-funded and integrated. They include area schools in small rural communities, which cover ages 5 to 21. The “decile” of each school is the only contextual information given, apart from the name of the school. (I explain the decile, along with misconceptions, at the end of the post.) For each school, the percentages of students passing at the three levels are given. It is not clear whether the percentages in the newspaper are based on the participation rate or the school roll.

This is highly motivating information for students as it is about them and their school. I had an argument recently with a student from a school which scores highly in NCEA. She was insistent that her friend should change schools from one that has lower scores. What she did not understand was that the friend had some extra learning difficulties, and that the other school was probably more appropriate for her. I tried to teach the concept of added-value, but that wasn’t going in either. However I was impressed with her loyalty to her school and I think these tables would provide an interesting forum for discussion.

Great context discussion

You could start with talking about what the students think will help a school to have high pass rates. This could include a school culture of achievement, good teaching, well-prepared students and good resources. This can also include selection and exclusion of students to suit the desired results, selection of “easy” standards or subjects, and even less rigorous marking of internal assessment. Other factors to explore might be single-sex vs co-ed schools, the ethnic and cultural backgrounds of the students, and private vs state-funded schools. All of these are potential explanatory variables. Then you can point out how little of this information is actually taken into account in the table. This is a very common occurrence, as limited space precludes the inclusion of raw data. I suspect at least one school appears less successful because some of the students sit different exams, either Cambridge or International Baccalaureate. These may be the students who would have performed well in NCEA.

Small populations

It would be good to look at the impact of small populations, and populations of very different sizes in the data. Students should think about what impact their behaviour will have on the results of the school, compared with a larger or smaller cohort. The raw data provided by the Ministry of Education does give a warning for small cohorts. For a small school, particularly in a rural area, there may be only a handful of students in year 13, so that one student’s success or failure has a large impact on the outcome. At the other end of the scale, there are schools of over 2000, which will have about 400 students in year 13. This effect is important to understand in all statistical reporting. One bad event in a small hospital, for instance, will have a larger percentage effect than in a large hospital.
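A simulation (with invented numbers) shows the effect starkly: give every school the same true pass rate, and the small schools still swing wildly in their reported rates.

```python
# Small-cohort effect: schools with identical true pass rates report very
# different observed rates when the cohort is small. All numbers invented.
import random

random.seed(5)
true_rate = 0.8

def observed_rates(cohort, n_schools=1000):
    """Simulate the reported pass rate for n_schools schools of a given size."""
    rates = []
    for _ in range(n_schools):
        passes = sum(random.random() < true_rate for _ in range(cohort))
        rates.append(passes / cohort)
    return rates

small = observed_rates(8)     # e.g. a small rural year-13 cohort
large = observed_rates(400)   # e.g. a large urban year-13 cohort
print(f"cohorts of   8: observed pass rates span {max(small) - min(small):.0%}")
print(f"cohorts of 400: observed pass rates span {max(large) - min(large):.0%}")
```

Every school here is “the same”, yet a league table built from the small cohorts would show dramatic winners and losers.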

Different rules

We hear a lot about comparing apples and oranges. School league tables include a whole fruit basket of different criteria. Schools use different criteria for allowing students into the school, into different courses, and whether they are permitted to sit external standards. Attitudes to students with special educational needs vary greatly. Some schools encourage students to sit levels outside their year level.

Extrapolating from a small picture

What one of the accompanying stories points out is that NCEA is only a part of what schools do. Sometimes the things that are measurable get more attention because it is easier to report in bulk. A further discussion with students could be provoked using statements such as the following, which the students can vote on, and then discuss. You could also discuss what evidence you would need to be able to refute or support them.

  • A school that does well in NCEA level 3 is a good school.
  • Girls’ schools do better than boys’ schools at NCEA because girls are smarter than boys.
  • Country schools don’t do very well because the clever students go to boarding school in the city.
  • Boys are more satisfied with doing just enough to get achieved.

Further extension

If students are really interested you can download the full results from the Ministry of Education website and set up a pivot table on Excel to explore questions.

I can foresee some engaging and even heated discussions ensuing. I’d love to hear how they go.

Short explanation of Decile – see also official website.

The decile rating of the school is an index developed in New Zealand and is a measure of social deprivation. The decile rating is calculated from a combination of five values taken from census data for the meshblocks in which the students reside. A school with a low decile rating of 1 or 2 will have a large percentage of students from homes that are crowded, or whose parents are not in work or have no educational qualifications. A school with a decile rating of 10 will have the fewest students from homes like that. The system was set up to help with targeted funding for educational achievement. It recognises that students from disadvantaged homes will need additional resources in order to give them equal opportunity to learn. However, the term has entered the New Zealand vernacular as a measure of socio-economic status, and often even of worth. A decile 10 school is often seen as a rich school or a “top” school. The reality is that this is not the case.  Another common misconception is that one tenth of the population of school age students is in each of the ten bands. How it really works is that one tenth of schools is in each of the bands. The lower decile schools are generally smaller than other schools, and mostly primary schools. In 2002 there were nearly 40,000 secondary students in decile 10 schools, with fewer than 10,000 in decile 1 schools.

Conceptualising Probability

The problem with probability is that it doesn’t really exist. Certainly it never exists in the past.

Probability is an invention we use to communicate our thoughts about how likely something is to happen. We have collectively agreed that 1 is a certain event and 0 is impossible. 0.5 means that there is just as much chance of something happening as not. We have some shared perception that 0.9 means that something is much more likely to happen than to not happen. Probability is also useful for when we want to do some calculations about something that isn’t certain. Often it is too hard to incorporate all uncertainty, so we assume certainty and put in some allowance for error.

Sometimes probability is used for things that happen over and over again, and in that case we feel we can check to see if our prediction about how likely something was to happen was correct. The problem here is that we actually need things to happen a really big lot of times under the same circumstances in order to assess whether we were correct. But when we are talking about the probability of a single event, that either will or won’t happen, we can’t test out whether we were right afterwards, because by that time it either did or didn’t happen. The probability no longer exists.

Thus to say that there is a “true” probability somewhere in existence is rather contrived. The truth is that it either will happen or it won’t. The only way to know a true probability would be if this one event were to happen over and over and over, in the wonderful fiction of parallel universes. We could then count how many times it would turn out one way rather than another. At which point the universes would diverge!

However, in the interests of teaching about probability, we can work with the construct that there exists a “true probability” that something will happen.

Why think about probability?

What prompted these musings about probability was exploring the new NZ curriculum and companion documents, the Senior Secondary Guide and nzmaths.co.nz.

In Level 8 (last year of secondary school) of the senior secondary guide it says, “Selects and uses an appropriate distribution to solve a problem, demonstrating understanding of the relationship between true probability (unknown and unique to the situation), model estimates (theoretical probability) and experimental estimates.”

And at NZC level 3 (years 5 and 6 at Primary school!) in the Key ideas in Probability it talks about “Good Model, No Model and Poor Model”. This statement is referred to at all levels above level 3 as well.

I decided I needed to make sense of these two conceptual frameworks: true-model-experimental and good-poor-no, and tie it to my previous conceptual framework of classical-frequency-subjective.

Here goes!

Delicious Mandarins

Let’s make this a little more concrete with an example. We need a one-off event. What is the probability that the next mandarin I eat will be delicious? It is currently mandarin season in New Zealand, and there is nothing better than a good mandarin, with the desired combination of sweet and sour, and with plenty of juice and a good texture. But, being a natural product, there is a high level of variability in the quality of mandarins, especially when they may have parted company with the tree some time ago.

There are two possible outcomes for my future event. The mandarin will be delicious or it will not. I will decide when I eat it. Some may say that there is actually a continuum of deliciousness, but for now this is not the case. I have an internal idea of deliciousness and I will know. I think back to my previous experience with mandarins. I think about a quarter are horrible, a half are nice enough and about a quarter are delicious (using the Dr Nic scale of mandarin grading). If the mandarin I eat next belongs to the same population as the ones in my memory, then I can predict that there is a 25% probability that the mandarin will be delicious.

The NZ curriculum talks about “true” probability which implies that any value I give to the probability is only a model. It may be a model based on empirical or experimental evidence. It can be based on theoretical probabilities from vast amounts of evidence, which has given us the normal distribution. The value may be only a number dredged up from my soul, which expresses the inner feeling of how likely it is that the mandarin will be delicious, based on several decades of experience in mandarin consumption.
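The relationship between a model estimate and experimental estimates can be sketched in a few lines (my illustration, with the model value invented): the experimental estimates wander towards the model value as the number of mandarins grows, while the “true” probability is never observed directly.

```python
# Model estimate vs experimental estimates of deliciousness. The model value
# of 0.25 is the "about a quarter are delicious" judgement from the post.
import random

random.seed(6)
model_p = 0.25   # model estimate of P(delicious)

for n in (10, 100, 10_000):
    eaten = sum(random.random() < model_p for _ in range(n))
    print(f"after {n:6d} mandarins: experimental estimate = {eaten / n:.3f}")
# Neither the model nor the experiment reveals the "true" probability;
# both are only estimates of it.
```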

More examples

Let us look at some more examples:

What is the probability that:

  • I will hear a bird on the way to work?
  • the flight home will be safe?
  • it will be raining when I get to Christchurch?
  • I will get a raisin in my first spoonful of muesli?
  • I will get at least one raisin in half of my spoonfuls of muesli?
  • the shower in my hotel room will be enjoyable?
  • I will get a rare Lego® minifigure next time I buy one?

All of these events are probabilistic and have varying degrees of certainty and varying degrees of ease of modelling.
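
The raisin questions are among the easier ones to model. As a sketch (the per-spoonful probability of 0.6 and the ten spoonfuls per bowl are made-up numbers, not measurements), treat each spoonful as an independent yes/no trial and use the binomial distribution:

```python
from math import comb

def prob_at_least(k, n, p):
    """P(at least k successes in n independent trials, each with probability p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

p = 0.6   # assumed chance that any one spoonful holds a raisin
n = 10    # assumed number of spoonfuls in a bowl

print(f"P(raisin in first spoonful)        = {p:.2f}")
print(f"P(raisin in at least half of them) = {prob_at_least(n // 2, n, p):.3f}")
```

The independence assumption is itself part of the model, and a questionable one: raisins sink and clump, so real spoonfuls are probably not independent.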

           Easy to model                 Hard to model
Unlikely   Get a rare Lego® minifigure   Raining in Christchurch
No idea    Raisin in half my spoonfuls   Enjoyable shower
Likely     Raisin in first spoonful      Bird, safe flight home

And as I construct this table I realise also that there are varying degrees of importance. Except for the flight home, none of those examples matter. I am hoping that a safe flight home has a probability extremely close to 1. I realise that there is a possibility of an incident. And it is difficult to model. But people have modelled air safety and the universal conclusion is that it is safer than driving. So I will take the probability and fly.

Conceptual Frameworks

How do we explain the different ways that probability has been described? I will now examine the three conceptual frameworks I introduced earlier, starting with the easiest.

Traditional categorisation

This is found in some form in many elementary college statistics textbooks. The traditional framework has three categories – classical or “a priori”, frequency or historical, and subjective.

Classical or “a priori” – I had thought of this as being “true” probability. To me, if there are three red and three white Lego® blocks in a bag and I take one out without looking, there is a 50% chance that I will get a red one. End of story. How could it be wrong? This definition is the mathematically interesting aspect of probability. It is elegant and has cool formulas and you can make up all sorts of fun examples using it. And it is the basis of gambling.
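
The Lego example can be checked by brute-force enumeration, since classical probability is just favourable outcomes over equally likely outcomes. A small sketch (the two-draw extension is my own addition, not from the text above):

```python
from fractions import Fraction
from itertools import permutations

bag = ["red"] * 3 + ["white"] * 3   # three red and three white Lego blocks

# Classical probability of one red draw: favourable over equally likely outcomes
p_red = Fraction(bag.count("red"), len(bag))
print("P(red on one draw) =", p_red)   # 1/2

# Extension: draw two blocks without replacement and count ordered outcomes
draws = list(permutations(range(len(bag)), 2))
both_red = sum(1 for a, b in draws if bag[a] == "red" and bag[b] == "red")
p_both_red = Fraction(both_red, len(draws))
print("P(both draws red)  =", p_both_red)   # 6/30 = 1/5
```

Using exact fractions keeps the elegance of the classical definition: no approximation is involved, because every outcome is equally likely by construction.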

Frequency or historical – we draw on long term results of similar trials to gain information. For example we look at the rate of germination of a certain kind of seed by experiment, and that becomes a good approximation of the likelihood that any one future seed will germinate. And it also gives us a good estimate of what proportion of seeds in the future will germinate.
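
The germination example can be sketched as a simulation. The "true" rate of 0.7 below is an assumption made purely so that there is something to simulate; in practice it is exactly the unknown quantity the experiment is trying to estimate.

```python
import random

random.seed(42)
TRUE_P = 0.7   # unknown in practice; assumed here only so we can simulate trials

# Experiment: plant 1,000 seeds and record which ones germinate
germinated = [random.random() < TRUE_P for _ in range(1000)]
estimate = sum(germinated) / len(germinated)

print(f"Germination rate over {len(germinated)} seeds: {estimate:.3f}")
```

The observed rate then serves both as an approximation of the chance any one future seed germinates and as an estimate of the proportion of future seeds that will germinate, exactly as the paragraph above describes.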

Subjective – We guess! We draw on our experience of previous similar events and we take a stab at it. This is not seen as a particularly good way to come up with a probability, but when we are talking about one-off events, it is impossible to assess in retrospect how good the subjective probability estimate was. There is considerable research in the field of psychology about the human ability, or lack thereof, to attribute subjective probabilities to events.

In teaching the three-part categorisation of sources of probability I had problems with the probability of rain. Where does that fit in the three categories? It uses previous experimental data to build a model, and current data to put into the model, and then a probability is produced. I decided that there is a fourth category, which I called “modelled”. But really that isn’t correct, as they are all models.

NZ curriculum terminology

So where does this all fit in the New Zealand curriculum pronouncements about probability? There are two conceptual frameworks that are used in the document, each with three categories as follows:

True, modelled, experimental

In this framework we start with the supposition that there exists somewhere in the universe a true probability distribution. We cannot know this. Our expressions of probability are only guesses at what this might be. There are two approaches we can take to estimate this “truth”. These two approaches are not independent of each other, but often intertwined.

One is a model estimate, based on theory, such as that the probability of a single outcome is the number of equally likely ways that it can occur over the number of possible outcomes. This accounts for the probability of a red brick as opposed to a white brick, drawn at random. Another example of a modelled estimate is the use of distributions such as the binomial or normal.

In addition there is the category of experimental estimate, in which we use data to draw conclusions about what is likely to happen. This is equivalent to the frequency or historical category above. Often modelled distributions use data from an experiment as well, and experimental probability relies on models too. The main idea is that neither the modelled nor the experimental estimate of the “true” probability distribution is the true distribution, but rather a model of some sort.

Good model, poor model, no model

The other conceptual framework stated in the NZ curriculum is that of good model, poor model and no model, which relates to fitness for purpose. When it is important to have a “correct” estimate of a probability, such as for building safety, gambling machines and life insurance, we would put effort into getting as good a model as possible. Conversely, sometimes little effort is required. Classical models are very good models, often of trivial examples such as dice games and coin tossing. Frequency models, also known as experimental models, may or may not be good models, depending on how many observations are included and how much the future resembles the past. For example, a model of sales of slide rules developed before the invention of the pocket calculator will be a poor model for current sales. The ground rules have changed. And a model built on data from five observations is unlikely to be a good model. A poor model is not fit for purpose and requires development, unless the stakes are so low that we don’t care, or the cost of a better fit is greater than the reward.
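
The point about five observations can be made concrete with a quick simulation. This sketch assumes an underlying probability of 0.3 (an arbitrary choice) and compares how widely estimates scatter when built from 5 observations versus 500:

```python
import random
import statistics

random.seed(0)
TRUE_P = 0.3   # assumed underlying probability we are trying to model

def estimates(n, reps=1000):
    """Estimate TRUE_P from n observations, repeated reps times."""
    return [sum(random.random() < TRUE_P for _ in range(n)) / n
            for _ in range(reps)]

spread_small = statistics.stdev(estimates(5))
spread_large = statistics.stdev(estimates(500))
print(f"Std dev of estimates from   5 obs: {spread_small:.3f}")
print(f"Std dev of estimates from 500 obs: {spread_large:.3f}")
```

With five observations the estimates swing wildly (often landing on 0.0 or 0.4 exactly), while five hundred observations pin the value down to within a couple of percentage points – a frequency model only becomes a good model once enough data has gone into it.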

I have problems with the concept of “no model”. I presume that is the starting point, from which we develop a model or do not develop a model if it really doesn’t matter. In my examples above I include the probability that I will hear a bird on the way to work. This is not important, but rather an idle musing. I suspect I probably will hear a bird, so long as I walk and listen. But if it rains, I may not. As I am writing this in a hotel in an unfamiliar area I have no experience on which to draw. I think this comes pretty close to “no model”. I will take a guess and say the probability is 0.8. I’m pretty sure that I will hear a bird. Of course, now that I have said this, I will listen carefully, as I would feel vindicated if I hear a bird. But if I do not hear a bird, was my estimate of the probability wrong? No – I could assume that I just happened to be in the 0.2 area of my prediction. But coming back to the “no model” concept – there is now a model. I have allocated the probability of 0.8 to the likelihood of hearing a bird. This is a model. I don’t even know if it is a good model or a poor model. I will not be walking to work this way again, so I cannot even test it out for the future, and besides, my model was only for this one day, not for all days of walking to work.

So there you have it – my totally unscholarly musings on the different categorisations of probability.

What are the implications for teaching?

We need to try not to perpetuate the idea that probability is the truth. But at the same time we do not wish to make students think that probability is without merit. Probability is a very useful, and at times highly precise, way of modelling and understanding the vagaries of the universe. The more teachers can use language that implies modelling rather than rules, the better. It is common, but not strictly correct, to say, “This process follows a normal distribution”. As Einstein famously and enigmatically said, “God does not play dice”. Nor does God or nature use normal distribution values to determine the outcomes of natural processes. It is better to say, “this process is usefully modelled by the normal distribution.”

We can have learning experiences that help students to appreciate certainty and uncertainty, and the modelling of probabilities that are not equi-probable. Thanks to the overuse of dice and coins, it is too common for people to assume that outcomes have equal probabilities. And students need to use experiments. First, they need to appreciate that it can take a large number of observations before we can be happy that we have a “good” model. Second, they need to use experiments to attempt to model an otherwise unknown probability distribution. What fun can be had in such a class!

But, oh mathematical ones, do not despair – the rules are still the same, it’s just the vigour with which we state them that has changed.

Comment away!

Post Script

In case anyone is interested, here are the outcomes which now have a probability of 1, as they have already occurred.

  • I will hear a bird on the way to work? Almost the minute I walked out the door!
  • the flight home will be safe? Inasmuch as I am in one piece, it was safe.
  • it will be raining when I get to Christchurch? No it wasn’t
  • I will get a raisin in my first spoonful of muesli? I did
  • I will get at least one raisin in half of my spoonfuls of muesli? I couldn’t be bothered counting.
  • the shower in my hotel room will be enjoyable? It was okay.
  • I will get a rare Lego minifigure next time I buy one? Still in the future!

Oh Ordinal data, what do we do with you?

What can you do with ordinal data? Or more to the point, what shouldn’t you do with ordinal data?

First of all, let’s look at what ordinal data is.

It is usual in statistics and other sciences to classify types of data in a number of ways. In 1946, Stanley Smith Stevens suggested a theory of levels of measurement, in which all measurements are classified into four categories, Nominal, Ordinal, Interval and Ratio. This categorisation is used extensively, and I have a popular video explaining them. (Though I group Interval and Ratio together as there is not much difference in their behaviour for most statistical analysis.)

Costing no more than a box of popcorn, our snack-size course will help you learn all you need to know about types of data, and appropriate statistics and graphs.

Nominal is pretty straightforward. This category includes any data that is put into groups, in which there is no inherent order. Examples of nominal data are country of origin, sex, type of cake, or sport. Similarly it is pretty easy to explain interval/ratio data. It is something that is measured, by length, weight, time (duration), cost and similar. These two categorisations can also be given as qualitative and quantitative, or non-parametric and parametric.

Ordinal data

But then we come to the ordinal level of measurement. This is used to describe data that has a sense of order, but for which we cannot be sure that the distances between the consecutive values are equal. For example, level of qualification has a sense of order:

  • A postgraduate degree is higher than
  • a Bachelor’s degree, which is higher than
  • a high-school qualification, which is higher than
  • no qualification.

There are four steps on the scale, and it is clear that there is a logical sense of order. However, we cannot sensibly say that the difference between no qualification and a high-school qualification is equivalent to the difference between the high-school qualification and a bachelor’s degree, even though both of those are represented by one step up the scale.
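
One way to see why the median is safer than the mean here is to code the levels as ranks. The 0-to-3 coding below is a hypothetical choice made purely for illustration, as is the small sample of respondents:

```python
import statistics

# Hypothetical ordinal coding for level of qualification
levels = ["none", "high school", "bachelor's", "postgraduate"]
rank = {name: i for i, name in enumerate(levels)}   # 0, 1, 2, 3

sample = ["high school", "none", "bachelor's", "high school", "postgraduate"]
codes = sorted(rank[q] for q in sample)

# The median respects order without assuming equal gaps between levels
print("Median level:", levels[int(statistics.median(codes))])

# A mean of these codes (here 1.4) has no clear interpretation: the scale
# gives no licence to treat one step up as a fixed quantity of qualification
print("Mean of codes:", statistics.mean(codes))
```

The median lands on an actual level of the scale and only uses the ordering, whereas the mean silently assumes the steps are equal – exactly the assumption ordinal data does not support.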

Another example of ordinal level of measurement is used extensively in psychological, educational and marketing research, known as a Likert scale. (Though I believe the correct term is actually Likert item – and according to Wikipedia, the pronunciation should be Lick it, not Like it, as I have used for some decades!). A statement is given, and the response is given as a value, often from 1 to 5, showing agreement to the statement. Often the words “Strongly agree, agree, neutral, disagree, strongly disagree” are used. There is clearly an order in the five possible responses. Sometimes a seven point scale is used, and sometimes the “neutral” response is eliminated in an attempt to force the respondent to commit one way or the other.

The question at the start of this post has an ordinal response, which could be perceived as indicating how quantitative the respondent believes ordinal data to be.

What prompted this post was a question from Nancy under the YouTube video above, asking:

“Dr Nic could you please clarify which kinds of statistical techniques can be applied to ordinal data (e.g. Likert-scale). Is it true that only non-parametric statistics are possible to apply?”

Well!

As shown in the video, there are the purists, who are adamant that ordinal data is qualitative. There is no way that a mean should ever be calculated for ordinal data, and the most mathematical thing you can do with it is find the median. At the other pole are the practical types, who happily calculate means for any ordinal data, without any concern for the meaning (no pun intended).

There are differing views on finding the mean for ordinal data.


So the answer to Nancy would depend on what school of thought you belong to.

Here’s what I think:

Not all ordinal data is the same. There is a continuum of “ordinality”, if you like.

There are some instances of ordinal data which are pretty much nominal, with a little bit of order thrown in. These should be distinguished from nominal data only in that they should always be graphed as a bar chart (rather than a pie chart)*, because there is inherent order. The mode is probably the only sensible summary value other than frequencies. In the examples above, I would say that “level of qualification” is only barely ordinal. I would not support calculating a mean for the level of qualification. It is clear that the gaps are not equal, and additionally any non-integer result would have doubtful interpretation.

Then there are other instances of ordinal data for which it is reasonable to treat it as interval data and calculate the mean and median. It might even be supportable to use it in a correlation or regression. This should always be done with caution, and an awareness that the intervals are not equal.

Here is an example for which I believe it is acceptable to use the mean of an ordinal scale. At the beginning and the end of a university statistics course, the class of 200 students is asked the following question: How useful do you think a knowledge of statistics will be to you in your future career? Very useful, useful, not useful.

Now this is not even a very good Likert question, as the positive and negative elements are not balanced. There are only three choices. There is no evidence that the gaps between the elements are equal. However, if we score the elements as 3, 2 and 1 respectively, and find that the mean for the 200 students is 1.5 before the course and 2.5 after the course, I would say that there is meaning in what we are reporting. There are specific tests to use for this – and we could also look at how many students changed their minds positively or negatively. But even without the specific test, we are treating this ordinal data as something more than qualitative. What also strengthens the evidence for doing this is that the test is performed on the same students, who will probably perceive the scale in the same way each time, making the comparison more valid.
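
The arithmetic behind the 1.5-to-2.5 shift can be sketched directly. The response counts below are hypothetical, chosen only so that they produce those two means for 200 students:

```python
# Scoring scheme from the example: 3, 2 and 1 respectively
scores = {"very useful": 3, "useful": 2, "not useful": 1}

# Hypothetical response counts for 200 students, before and after the course
before = {"very useful": 20, "useful": 60, "not useful": 120}
after  = {"very useful": 110, "useful": 80, "not useful": 10}

def mean_score(counts):
    """Weighted mean of the ordinal scores, treating them as interval data."""
    total = sum(scores[k] * n for k, n in counts.items())
    return total / sum(counts.values())

print(f"Mean before course: {mean_score(before):.2f}")  # 1.50
print(f"Mean after course:  {mean_score(after):.2f}")   # 2.50
```

The calculation itself is trivial; the judgment call is whether scoring the three responses as equally spaced numbers is defensible, which is exactly the point under discussion.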

So what I’m saying is that it is wrong to make a blanket statement that ordinal data can or can’t be treated like interval data. It depends on meaning and number of elements in the scale.

What do we teach?

And again the answer is that it depends! For my classes in business statistics I told them that it depends. If you are teaching a mathematical statistics class, then a more hard-line approach is justified. However, at the same time as saying, “you should never calculate the mean of ordinal data”, it would be worthwhile to point out that it is done all the time! Similarly, if you teach that it is okay to find the mean of some ordinal data, I would also point out that there are issues with regard to interpretation and mathematical correctness.

Please comment!

Footnote on pie charts

*Yes, I too eschew pie charts, but for two or three categories of nominal data, where there are marked differences in frequency, if you really insist, I guess you could possibly use one, so long as it is not 3D and definitely not exploding. But even then, a bar chart is better. Perhaps a post for another day, though so many have already written it.