# Why decimals are difficult

Recently a couple of primary teachers admitted to me, a little furtively, that they “never got decimals”. It got me wondering what makes decimals difficult. For people who “get” decimals, a decimal is just another number, with the decimal point showing. Clearly this is not the case for everyone.

So in true 21st century style I Googled it: “Why are decimals difficult”

I got some wonderfully interesting results, one of which is a review paper by Hugues Lortie-Forgues, Jing Tian and Robert S. Siegler, entitled “Why is learning fraction and decimal arithmetic so difficult?”, which I draw on in this post.

# You need to know

For teachers of statistics, this is important. In particular, students learning about statistics sometimes have difficulty identifying if a p-value of 0.035 is smaller or larger than the alpha value of 0.05. In this post I talk about why that may be. I will also give links to a couple of videos that might be helpful for them. For teachers of mathematics it might give some useful insights.

# Whole numbers and rational numbers

Whole numbers are the numbers we start with when we begin to learn maths – 1, 2, 3, 4,… and 0. Zero has an interesting role of having no magnitude in itself, but acting as a place-filler to make sure we can tell the meaning of a number. Without zero, 2001 and 201 and 21 would all look the same! From early on we recognise that longer numbers represent larger quantities. We know that a salary with lots of zeroes is better than one with only a few. \$1000000 is more than \$200 even though 2 is greater than 1.

Rational numbers are the ones that come in between, but also include whole numbers. All of the following are considered rational numbers: ½, 0.3, 4/5, 34.87, 3¾, 2000

When we talk about whole numbers, we can say what number comes before and after a given number. 35 comes before 36. 37 comes after 36. But with rational numbers, we cannot do this: between any two rational numbers there are infinitely many more. Between 0 and 1 alone there are infinitely many rational numbers.

Rational numbers are usually expressed as fractions (½, 3¾) or decimals (0.3, 34.87).

There are several things that make rational numbers (fractions and decimals) tricky. In this post I focus on decimals.

# Decimal notation and size of number

As I explained before, when we learn about whole numbers, we learn a useful rule-of-thumb that longer strings of digits correspond to larger numbers. However, the length of the decimal is unrelated to its magnitude. For example, 10045 is greater than 230. The longer number corresponds to greater magnitude. But 0.10045 is less than 0.230. We look at the first digit after the point to find out which number is bigger. The way that you judge which is bigger out of two decimals is quite different from how you do it with whole numbers. The second of my videos illustrates this.
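The digit-padding idea can be sketched in Python, using the numbers from the text. Once the digits after the point are padded to equal length, the comparison works just like comparing whole numbers:

```python
# Comparing whole numbers: with no leading zeros, more digits means bigger.
assert 10045 > 230

# Comparing decimals: the length of the decimal tells you nothing about magnitude.
assert 0.10045 < 0.230

# One way to see why: pad the digits after the point to equal length,
# then compare place by place, as with whole numbers.
a, b = "10045", "230"                              # digits after the point
width = max(len(a), len(b))
assert a.ljust(width, "0") < b.ljust(width, "0")   # "10045" < "23000"
```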

# Effect of multiplying by numbers between 0 and 1

The results of multiplying by decimals between 0 and 1 are different from what we are used to.

When we learn about multiplication of whole numbers, we find that (once we get past 0 and 1) the answer is always bigger than both of the numbers we are multiplying.
3 × 4 = 12. 12 is greater than either 3 or 4.
However, if we multiply 0.3 × 0.4 we get 0.12, which is smaller than both 0.3 and 0.4. Or if we multiply 6 by 0.4, we get 2.4, which is less than 6 but greater than 0.4. This can be quite confusing.
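The three cases above can be checked directly, for instance in Python (the small tolerance allows for floating-point rounding):

```python
# Whole numbers greater than 1: the product exceeds both factors.
assert 3 * 4 > 3 and 3 * 4 > 4

# Two decimals between 0 and 1: the product is smaller than both factors.
p = 0.3 * 0.4
assert abs(p - 0.12) < 1e-9
assert p < 0.3 and p < 0.4

# A mix: 6 * 0.4 lands between the two factors.
q = 6 * 0.4
assert 0.4 < q < 6          # q is 2.4 (up to floating-point rounding)
```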

## Aside for statistics teachers

In statistics we often quote the R squared value from regression. To get it we square r, the correlation coefficient, and a quite respectable value like 0.6 is reduced to a mere 0.36.

# Effect of dividing by decimals between 0 and 1

Similarly, when we divide a whole number by a whole number greater than 1, the answer will be less than the number we are dividing. 100 / 5 = 20. Twenty is less than 100, though in this case greater than 5. But when we divide by a decimal between 0 and 1 it all goes crazy and things get bigger! 100 / 0.5 = 200. People who are at home with all this madness don’t notice it, but I can see how it can alarm the novice.
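A way to make the “things get bigger” case intuitive: dividing by 0.5 asks how many halves fit into the number, and there are two per unit. Checking the numbers from the text:

```python
# Dividing by a whole number greater than 1 makes the answer smaller.
assert 100 / 5 == 20

# Dividing by a decimal between 0 and 1 makes the answer bigger:
# 100 / 0.5 asks "how many halves fit into 100?" -- two per unit.
assert 100 / 0.5 == 200
assert 100 / 0.5 > 100
```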

# Decimal arithmetic doesn’t behave like regular arithmetic

When we add or subtract two numbers, we need to line up the decimal points, so that we know we are adding digits with corresponding place values. This looks different from the standard algorithm, where we line up the right-hand side. In fact it is the same, but because the decimal point is invisible in whole numbers, it doesn’t seem the same.
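To see that it really is the same algorithm, pad both numbers to a common number of decimal places; then everything lines up on the right again. A quick check (the numbers 34.87 and 0.3 are the examples from earlier in the post):

```python
from decimal import Decimal

# Lining up the decimal points is the same as padding both numbers to a
# common number of decimal places and adding right-to-left as usual:
# 34.87 + 0.3  ->  34.87 + 0.30  ->  3487 hundredths + 30 hundredths.
assert 3487 + 30 == 3517
assert Decimal("34.87") + Decimal("0.3") == Decimal("35.17")
```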

## Method for multiplication of decimals

When you multiply numbers containing decimals, you multiply as if they were whole numbers, then count the digits to the right of the decimal point in each of the factors, add the counts together, and place that many digits to the right of the decimal point in the answer. I have a confession here. I know how to do this, and have taught how to do this, but I don’t recall ever working out why we do it, or getting students to work it out.
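The “why” is that a number with d digits after the point is a whole number divided by 10 to the power d, and when you multiply, the powers of ten add. Here is a toy sketch of the rule (`multiply_decimals` is a made-up illustrative helper, assuming positive decimal strings, not production code):

```python
def multiply_decimals(x: str, y: str) -> str:
    """Multiply two decimal strings the school way: multiply as whole
    numbers, then place the point. Works because a number with d digits
    after the point is (whole number) / 10**d, and
    (a / 10**m) * (b / 10**n) == (a * b) / 10**(m + n)."""
    dx = len(x.split(".")[1]) if "." in x else 0   # digits after the point
    dy = len(y.split(".")[1]) if "." in y else 0
    whole = int(x.replace(".", "")) * int(y.replace(".", ""))
    shift = dx + dy
    s = str(whole).rjust(shift + 1, "0")           # pad so the point fits
    if shift == 0:
        return s
    return s[:-shift] + "." + s[-shift:]

print(multiply_decimals("0.3", "0.4"))   # 0.12
print(multiply_decimals("2.5", "1.2"))   # 3.00  (25 * 12 = 300, two places)
```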

## Method for division of decimals

Is this even a thing? My immediate response is to use a calculator. I seem to remember moving the decimal point around in a somewhat cavalier manner so that it disappears from the number we are dividing by. But who ever does long division by hand?
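The point-shifting is actually legitimate: multiplying the divisor and the dividend by the same power of ten leaves the quotient unchanged. A small sketch using Python’s decimal module (`shift_point` is a made-up illustrative helper):

```python
from decimal import Decimal

def shift_point(divisor: str, dividend: str):
    """Scale both numbers by 10 until the divisor is a whole number,
    exactly as the hand method does."""
    d, n = Decimal(divisor), Decimal(dividend)
    while d != d.to_integral_value():
        d, n = d * 10, n * 10
    return int(d), n

d, n = shift_point("0.5", "100")
assert (d, n) == (5, 1000)       # 100 / 0.5 becomes 1000 / 5
assert n / d == 200              # same answer as 100 / 0.5
```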

Okay teacher friends – I now see why you find decimals difficult.

The paper talks about approaches that help. The main one is that students need to spend time on understanding magnitude.

My suggestion is to do plenty of work using money. Somehow we can get our heads around that.

And use a calculator, along with judicious estimation.

Here are two videos I have made, to help people get their heads around decimals.

# Don’t teach significance testing – Guest post

The following is a guest post by Tony Hak of Rotterdam School of Management. I know Tony would love some discussion about it in the comments. I remain undecided either way, so would like to hear arguments.

# GOOD REASONS FOR NOT TEACHING SIGNIFICANCE TESTING

It is now well understood that p-values are not informative and are not replicable. Soon null hypothesis significance testing (NHST) will be obsolete and will be replaced by the so-called “new” statistics (estimation and meta-analysis). This requires that undergraduate courses in statistics must already be teaching estimation and meta-analysis as the preferred way to present and analyze empirical results. If not, the statistical skills of graduates from these courses will be outdated on the day they leave school. But it is less evident whether NHST (though not preferred as an analytic tool) should still be taught. Because estimation is already routinely taught as a preparation for the teaching of NHST, the necessary reform in teaching will not require the addition of new elements to current programs, but rather the removal of the current emphasis on NHST, or the complete removal of the teaching of NHST from the curriculum. The current trend is to continue the teaching of NHST. In my view, however, teaching of NHST should be discontinued immediately because it is (1) ineffective and (2) dangerous, and (3) it serves no aim.

1. Ineffective: NHST is difficult to understand and it is very hard to teach it successfully

We know that even good researchers often do not appreciate the fact that NHST outcomes are subject to sampling variation and believe that a “significant” result obtained in one study almost guarantees a significant result in a replication, even one with a smaller sample size. Is it then surprising that our students, too, do not understand what NHST outcomes do tell us and what they do not tell us? In fact, statistics teachers know that the principles and procedures of NHST are not well understood by undergraduate students who have successfully passed their courses on NHST. Courses on NHST fail to achieve their self-stated objectives, assuming that these objectives include achieving a correct understanding of the aims, assumptions, and procedures of NHST as well as a proper interpretation of its outcomes. It is very hard indeed to find a comment on NHST in any student paper (an essay, a thesis) that is close to a correct characterization of NHST or its outcomes. There are many reasons for this failure, but obviously the most important one is that NHST is a very complicated and counterintuitive procedure. It requires students and researchers to understand that a p-value is attached to an outcome (an estimate) based on its location in (or relative to) an imaginary distribution of sample outcomes around the null. Another reason, connected to their failure to understand what NHST is and does, is that students believe that NHST “corrects for chance” and hence they cannot cognitively accept that p-values themselves are subject to sampling variation (i.e. chance).

2. Dangerous: NHST thinking is addictive

One might argue that there is no harm in adding a p-value to an estimate in a research report and, hence, that there is no harm in teaching NHST in addition to teaching estimation. However, the mixed experience with statistics reform in clinical and epidemiological research suggests that a more radical change is needed. Reports of clinical trials and of studies in clinical epidemiology now usually report estimates and confidence intervals, in addition to p-values. However, as Fidler et al. (2004) have shown, and contrary to what one would expect, authors continue to discuss their results in terms of significance. Fidler et al. therefore concluded that “editors can lead researchers to confidence intervals, but can’t make them think”. This suggests that a successful statistics reform requires a cognitive change that should be reflected in how results are interpreted in the Discussion sections of published reports.

The stickiness of dichotomous thinking can also be illustrated with the results of a more recent study by Coulson et al. (2010). They presented estimates and confidence intervals obtained in two studies to a group of researchers in psychology and medicine, and asked them to compare the results of the two studies and to interpret the difference between them. It appeared that a considerable proportion of these researchers first used the information about the confidence intervals to make a decision about the significance of the results of one study and the non-significance of the results of the other, and then drew the incorrect conclusion that the results of the two studies were in conflict. Note that no NHST information was provided and that participants were not asked in any way to “test” or to use dichotomous thinking. The results of this study suggest that NHST thinking can (and often will) be used by those who are familiar with it.

The fact that it appears to be very difficult for researchers to break the habit of thinking in terms of “testing” is, as with every addiction, a good reason to prevent future researchers from coming into contact with it in the first place and, if contact cannot be avoided, to provide them with robust resistance mechanisms. The implication for statistics teaching is that students should first learn estimation as the preferred way of presenting and analyzing research information, and should be introduced to NHST, if at all, only after estimation has become their routine statistical practice.

3. It serves no aim: Relevant information can be found in research reports anyway

Our experience that teaching of NHST consistently fails its own aims (because NHST is too difficult to understand) and the fact that NHST appears to be dangerous and addictive are two good reasons to stop teaching NHST immediately. But there is a seemingly strong argument for continuing to introduce students to NHST, namely that a new generation of graduates will not be able to read the (past and current) academic literature in which authors themselves routinely focus on the statistical significance of their results. It is suggested that someone who does not know NHST cannot correctly interpret outcomes of NHST practices. This argument has no value, for the simple reason that it assumes that NHST outcomes are relevant and should be interpreted. But the reason we are having the current discussion about teaching is that NHST outcomes are at best uninformative (beyond the information already provided by estimation) and at worst misleading or plain wrong. The point all along is that nothing is lost by simply ignoring the information related to NHST in a research report and focusing only on the information provided about the observed effect size and its confidence interval.

## Bibliography

Coulson, M., Healy, M., Fidler, F., & Cumming, G. (2010). Confidence Intervals Permit, But Do Not Guarantee, Better Inference than Statistical Significance Testing. Frontiers in Quantitative Psychology and Measurement, 20(1), 37-46.

Fidler, F., Thomason, N., Finch, S., & Leeman, J. (2004). Editors Can Lead Researchers to Confidence Intervals, But Can’t Make Them Think: Statistical Reform Lessons from Medicine. Psychological Science, 15(2), 119-126.

This text is a condensed version of the paper “After Statistics Reform: Should We Still Teach Significance Testing?” published in the Proceedings of ICOTS9.

# Khan Academy Statistics videos are not good

I don’t like the Khan Academy videos about statistics. But I can see why some people do. Some are okay, though some are very bad. I’m rather sorry they exist though, as they perpetuate the idea of statistics as mathematics.

# Khan Academy, critics and supporters

Just in case you have been living under a rock, with respect to mathematics education, I will explain what Khan Academy is.

Sal Khan made little YouTube videos to teach a family member maths. Other people watched them and found them useful. Bill Gates discovered them and threw money at them. Now there are heaps of videos, with some back up exercises, and some people think this is the best thing to happen to maths (and other) education. Other people think that the videos lack pedagogical content knowledge. Sal agrees – he says he just makes them up as he goes along.

Diane Ravitch linked into the Khan Academy debate, beginning with this post, which is what got me looking into this. Two mathematics teachers made videos after the style of Mystery Science Theater 3000 starring two of Khan’s poorer mathematical contributions. The one on multiplying negative numbers was particularly poor and has since been replaced. Critiques of Khan seem to meet with two kinds of comments. One group is people who know about teaching, who are pleased that someone is pointing out that the emperor, though not naked, is poorly clad. The other lot are generally telling the mean teachers to leave Khan alone, that he is the saviour of mathematics teaching, and they would never have understood mathematics without him. The supporters also either suggest vested interest (for people who make educational materials) or that the writers should try to do better (for those people who don’t make educational materials). To be fair, the first group are also calling for other people to make better videos and put them out there.

For a good summary of the pros and cons of KA, here is a recent article in the Washington Post: “How well does Khan Academy teach?”

So I took a look at Khan Academy statistics videos. I know something about the teaching of statistics. I have many years of experience of successful teaching, I have done research and I have read some of the literature. I have pedagogical content knowledge (I understand what makes it hard for people to learn statistics.) And I have made my own statistics teaching videos, which have been well received. I wrote some time ago about the educational principles based on research into multimedia, which have been used in developing these videos. Unlike Khan I have thought hard and long about how to present these tricky concepts. I have written and rewritten the scripts. I have edited my audio to remove errors and hesitations, I have…anyway – back to Khan Academy.

To be fair, statistics is one of the most difficult subjects to teach, so I didn’t have high hopes.

To start with, the list of topics under the statistics heading showed a strong mathematics influence. This may reflect the state of the curriculum in the United States, but it in no way reflects current understanding of how statistics is best understood. I couldn’t find anything about variation, levels of measurement or sampling methods, which are all foundation concepts of statistics. I think it would be more correct to call the collection of videos “the mathematics of statistics”. It starts with “Mean, Median and Mode”. Not exactly a great way to enter the exciting world of statistics. And he mispronounces the adjectival use of “arithmetic”, which is a bit embarrassing. (Note in 2017 – it has now been corrected. Yay!)

I summoned up the courage to view the video on reading pie graphs. It was not good. The example was percentages of ticket sales for Mediterranean cruises over a year. That data should never have been put into a pie graph. For two reasons! First, there are too many slices of pie – a pie chart should never be used for that many categories. But worse than that, the categories are ordinal – they are months. The best choice of graph is a bar or column chart, with the months in order, as you would then be able to see any trend. (I have to stop myself here or I could rave on much longer.) My point is that Khan has used a very BAD graph as his example. This is one of the worst things a teacher can do, as it entrenches in the students’ minds that this is acceptable. The only thing good about the graph was that it was not three-dimensional and did not explode. It didn’t even have a title. Bad, bad, bad. (Sorry, I was meant to be stopping.)

# Confidence Intervals

I am tempted to say Khan is arrogant to think he can produce something after a few minutes thought. Actually I was tempted to say something rather stronger than that. I have to admit I haven’t watched many of the videos, but I really don’t want to spend too much of my life doing that. I chose one on Confidence Intervals, which nearly had me throwing things at the computer. It never explains what a confidence interval is. The bumbling around was so painful I couldn’t watch the video in its entirety. I’m pretty sure he got it wrong. He was so confused by the end that I can’t say for sure. My own confidence intervals video is one of my earlier ones, so it is a little rough, but I’d wager most people understand better what a confidence interval is after watching it. (UPDATE: Since writing this post I have made a better video about confidence intervals. It explains what confidence intervals ARE!) You can watch it here:

So then I decided I should look at the video entitled p-value and hypothesis tests. This is something I know many people struggle with. It is crucial to understanding inferential statistics. I have spent many hours working out ways in which to teach this that will help people to understand.

# The p-value and hypothesis testing

Well I watched most of the p-value video, and was pleasantly surprised. The explanation of how we get the p-value is sound, and once he gets into his flow, the hesitations get less irritating. There is a small error – talking about 100 samples, rather than a sample of 100 observations. Also it is a bad idea to have a sample size of 100 in an example as this can get confused with the 100 in the expression of the confidence interval as a percentage. But it does give a good mathematical explanation of how the p-value is calculated. I’m not sure how well it helps students to understand what a p-value is. For a mathematically capable student, this would probably be enlightening. I have my doubts about most of the business students I have taught over the last two decades.

My main criticism is that the video is dull. It doesn’t provide anything more than the mathematics. But apart from alienating non-mathematical students it isn’t harmful. In fact if I had a student who wanted to know the mathematics behind the statistics, I would be happy to send them there. People have commented that my videos don’t tell you how the p-value is calculated. This is true. That is not the aim. Maybe I’ll do one about that one day, but I figured it was more important to know what to do with one.

# Khan Academy videos on statistics aren’t good

My point is, surely we can do better than that! Bill Gates has thrown money at the Khan Academy. Wouldn’t it be wonderful if it were the purveyor of really good practice rather than mediocrity? One blogger suggests that if Khan Academy could use really good videos, it really could be useful.

I have gone on long enough.

I realise now, that asking a busy person to watch my videos is a bit of a cheek. They aren’t that long though. They are funny and clever. They are NOT like Khan Academy. I think they are worth the six to ten minutes each.

Here are links to my three most popular ones. Enjoy.