Statistical software for worried students: Appearances matter

Let’s be honest. Most students of statistics are taking statistics because they have to. I asked my class of 100 business students whether they would choose to take the quantitative methods course if they did not have to. Two hands went up.

Face it – statistics is necessary but not often embraced.

But actually it is worse than that. For many people statistics is the most dreaded course they are required to take. It can be the barrier to achieving their career goals as a psychologist, marketer or physician. (And it should be required for many other careers, such as journalism, law and sports commentary.)

Choice of software

Consequently, we have worried students in our statistics courses. We want them to succeed, and to do that we need to reduce their worry. One decision that will affect their engagement and success is the choice of computer package. This decision rightly causes consternation to instructors. It is telling that one of the most frequently and consistently accessed posts on this blog is “Excel, SPSS, Minitab or R?”. It has been viewed 55,000 times in the last five years.

The problem of which package to use is no easier to solve than it was five years ago when I wrote the post. I am helping a tertiary institution to re-develop their on-line course in statistics. This is really fun – applying all the great advice and ideas from the “Guidelines for Assessment and Instruction in Statistics Education” (GAISE). They asked for advice on what statistics package to use. And I am torn.


Here is what I want from a statistical teaching package:

  • Easy to use
  • Attractive to look at (See “Appearances Matter” below)
  • Helpful output
  • Good instructional materials with videos etc (as this is an online course)
  • Supports good pedagogy

If I’m honest I also want it to have the following characteristics:

  • Guidance for students as to what is sensible
  • Only the tests and options I want them to use in my course – not too many choices
  • An interpretation of the output
  • Data handling capabilities, including missing values
  • A pop up saying “Are you sure you want to make a three-dimensional pie chart?”

Is this too much to ask?


Overlapping objectives

Here is the thing. There are two objectives for introductory statistics courses that partly overlap and partly conflict. We want students to

  • Learn what statistics is all about
  • Learn how to do statistics.

They probably should not conflict, but they require different things from your software. If all we want the students to do is perform the statistical tests, then something like Excel is not a bad choice, as they get to learn Excel as well, which could be handy for c.v. expansion and job-getting. If we are more concerned about learning what statistics is all about, then an exploratory package like Tinkerplots or iNZight could be useful.

Ideally I would like students to learn both what statistics is all about and how to do it. But most of all, I want them to feel happy about doing statistical analysis.

Appearances matter

Eye-appeal is important for overcoming fear. I am confident in mathematics, but a journal article with a page of Greek letters and mathematical symbols makes me anxious. The LaTeX font makes me nervous. And an ugly logo puts me off a package. I know it is shallow. But it is a thing, and I suspect I am far from alone. Marketing people know that the choice of colour, word, placement – all sorts of superficial things – affect whether a product sells. We need to sell our product, statistics, and to do that, it needs to be attractive. It may well be that the people who design software are less affected by appearance, but they are not the consumers.

Terminal or continuing?

This is important: Most of our students will never do another statistical analysis.

Think about it:

Most of our students will never do another statistical analysis.

Here are the implications: It is important for the students to learn what statistics is about, where it is needed, potential problems and good communication and critique of statistical results. It is not important for students to learn how to program or use a complex package.

Students need to experience statistical analysis, to understand the process. They may also discover the excitement of a new set of data to explore, and the anticipation of an interesting result. These students may decide to study more statistics, at which time they will need to learn to operate a more comprehensive package. They will also be motivated to do so because they have chosen to continue to learn statistics.


In my previous post I talked about Excel, SPSS, Minitab and R. I used to teach with Excel, and I know many of my past students have been grateful they learned it. But now I know better, and cannot, hand on heart, recommend Excel as the main software. Students need to be able to play with the data, to look at various graphs, and get a feel for variation and structure. Excel’s graphing and data-handling capabilities, particularly with regard to missing values, are not helpful. The histograms are disastrous. Excel is useful for teaching students how to do statistics, but not what statistics is all about.

SPSS and Minitab

SPSS was a personal favourite, but it has been a while since I used it. It is fairly expensive, and chances are the students will never use it again. I’m not sure how well it does data exploration. Minitab is another nice little package. Both of these are probably overkill for an introductory statistics course.

R and R Commander

R is a useful and versatile statistical language for higher level statistical analysis and learning but it is not suitable for worried students. It is unattractive.

R Commander is a graphical user interface for R. It is free, and potentially friendlier than R. It comes with a book, and I am told it is a helpful introduction to R. R Commander is also unattractive. The book was formatted in LaTeX. The installation guide looks daunting. That is enough to make me reluctant – and I like statistics!

The screenshot displayed on the front page of R Commander

iNZight and iNZight Lite

I have used iNZight a lot. It was developed at the University of Auckland for use in their statistics course and in New Zealand schools. The full version is free and can be installed on PC and Mac computers, though there may be issues with running it on a Mac. iNZight Lite, the web-based version, is fine. It is free and works on any platform. I really like how easy it is to generate various plots to explore the data. You put in the data, and the graphs appear almost instantly. iNZight encourages engagement with the data, rather than doing things to data.

For a face-to-face course I would choose iNZight Lite. For an online course I would be a little concerned about the level of support material available. The newer versions of iNZight and iNZight Lite have benefitted from some graphic design input. I like the colours and the new logo.


Genstat

I’ve heard about Genstat for some time, as an alternative to iNZight for New Zealand schools, particularly as it does bootstrapping. So I requested an inspection copy. It has a friendly vibe. I like the dialog box suggesting the graph you might like to try. It lacks the immediacy of iNZight Lite. It has the multiple window thing going on, which can be tricky to navigate. I was pleased at the number of sample data sets.


NZGrapher

NZGrapher is popular in New Zealand schools. It was created by a high school teacher in his spare time, and is attractive and lean. It is free, funded by donations and advertisements. You enter a data set, and it creates a wide range of graphs. It does not have the traditional tests that you would want in an introductory statistics course, as it is aimed at the NZ school curriculum requirements.


Statcrunch

Statcrunch is a more attractive, polished package, with a wide range of supporting materials. I think this would give confidence to the students. It is specifically designed for teaching and learning and is almost conversational in approach. I have not had the opportunity to try out Statcrunch. It looks inviting, and was created by Webster West, a respected statistics educator. It is now distributed by Pearson.


Jasp

I recently had my attention drawn to a new package, Jasp. It is free, well-supported and has a clean, attractive interface, with a vibe similar to SPSS. I like the immediate response as you begin your analysis. I was able to download it easily. It is not as graphical as iNZight, but is more traditional in its approach. For a course emphasising doing statistics, I like the look of this.

Data, controls and output from Jasp


So there you have it. I have mentioned only a few packages, but I hope my musings have got you thinking about what to look for in a package. If I were teaching an introductory statistics course, I would use iNZight Lite, Jasp, and possibly Excel. I would use iNZight Lite for data exploration. I might use Jasp for hypothesis tests, confidence intervals and model fitting. And if possible I would teach Pivot Tables in Excel, and use it for any probability calculations.

Your thoughts

This is a very important topic and I would appreciate input. Have I missed an important contender? What do you look for in a statistical package for an introductory statistics course? As a student, how important is it to you for the software to be attractive?


There’s more to reading graphs than meets the eye


For those of us who know how to read a graph, it can be difficult to imagine what another person could find difficult. But then when I am presented with an unusual style of graph, or one where the data has been presented badly, I suddenly feel empathy for those who are less graph-literate.

Graphs are more common now as we have Excel to make them for us – for better or worse. An important skill for the citizens of tomorrow and today is to be able to read a graph or table and to be critical of how well it accomplishes its goals.

Here are some stages of reading a graph, much of which also applies to reading a table.

Reading about the graph

When one is familiar with graphs, and the graph is well made, we can become oblivious to the conventions. Just as readers know that English is written from left to right, graph readers understand that the height of a bar chart corresponds to the quantity of something. When people familiar with graphs look at a graph, they take in information unconsciously. This would include what type of graph it is – bar chart, line graph, scatterplot…and what it is about – the title, axis labels and legend tell us this. And they are also able to ignore unimportant aspects. For example if someone has made a 3-D bar chart, experienced graph-readers know that the thickness of the bar does not express information. Colours are generally used to distinguish different elements, but the choice of which colour is used is seldom part of the message. Other aspects about graphs, which may or may not be apparent, include the purpose of the graph and the source of the data.

Beginner graph readers need to learn how to use the various conventions to read ABOUT the data or graph. Any exploration of a graph needs to start with the question, “What is this graph about?”

Identifying one piece of data

When children start making and reading graphs, it is good for them to start with data about themselves, often represented in a picture graph, where each individual observation is shown. A picture graph is concrete. Each child may point out their particular piece of data – the one that says that they like Wheaties, or prefer mushrooms on their pizza. This is an early stage in the process of abstraction, which leads eventually to understanding less intuitive graphs such as the box and whisker or a time series chart. It is also important for all graph readers to be aware of what each piece of data, or observation, represents and how it is represented.

Identifying one piece of data may help avoid the confusion of graphs which show raw data rather than summary data. For example, a class may have data about the number of people in households. If this data is entered raw into a spreadsheet, and a graph created, we can end up with something like the graph immediately below (Graph 1).

Graph 1: This is not a good graph, but is what a naive user may well get out of Excel

In this we can identify that each member is represented by a bar, and the height gives the number of people in their family. I usually call this a value graph, as it shows only the individual values, with no aggregation.

A more useful representation of this same data is a summary bar chart, as shown below (Graph 2). There are two dimensions operating. Horizontally we have the number of people in a household, and vertically we have the number of class members that have the corresponding number of people in their household. Note that it is less intuitive to see where each class member is. Dividing the bar up into individual blocks can help with that.

Household size

Graph 2: A summary of the size of household for a group of people
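The move from Graph 1 to Graph 2 is nothing more than aggregation – counting how many class members report each household size. A minimal sketch in Python, using made-up household sizes:

```python
from collections import Counter

# Made-up raw data: household size reported by each class member,
# in the order entered (this is what Graph 1 plots directly).
household_sizes = [4, 3, 5, 4, 2, 6, 4, 3, 5, 4, 2, 3, 4, 5, 6, 4]

# Aggregate: how many class members report each household size
# (this is what Graph 2 plots).
counts = Counter(household_sizes)

for size in sorted(counts):
    print(f"{size} people: {counts[size]} households")
```

Each bar in Graph 1 corresponds to one element of the list; each bar in Graph 2 corresponds to one entry of the aggregated counts.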

Reading off the graph

In order to make sense of a graph, we often need to look at two dimensions simultaneously. If we wish to know how many people in the class come from a household of 5, we need to select the value 5 along the horizontal axis. Then we follow the bar up to the top and take our eye back to the vertical axis to see how high this value is. A ruler can help with this process. When we read off a graph, our statements tend to be summaries of a single attribute, such as “There are 2 people who come from households of 6,” or “There are 17 dragons that breathe fire.”

Reading within the graph (comparisons, relationships)

Reading within the graph is a more complex task, even with simple graphs. When we read within a graph we are interested in comparisons and relationships. For example we may wish to see which breath type is most common among our herd of dragons. In order to answer this using the graph below, we first need to find the highest bar, by drawing our eye along the top, or drawing a ruler down the page. Then we look down that bar, and read off the name of the breath type. There are many more complex relationships, such as whether green dragons tend to be taller or shorter than red dragons, and which are more likely to be friendly. By introducing another attribute, we are in fact adding a dimension to our analysis.

This is a column chart (or bar chart) summarising the breath types.
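The “find the tallest bar” task maps directly onto taking a maximum over the summary counts. A small sketch, with invented breath-type counts:

```python
# Invented breath-type counts for a herd of dragons, as a summary
# table would hold them (category -> count).
breath_counts = {"fire": 17, "ice": 9, "steam": 12, "none": 4}

# "Which breath type is most common?" is a maximum over the counts --
# the programmatic equivalent of finding the tallest bar.
most_common = max(breath_counts, key=breath_counts.get)
print(most_common)  # fire
```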

Reading beyond the graph, beyond the data

This idea of reading beyond the data has been suggested as a step towards informal and then formal inference. We can perceive that our data does not represent all existing instances, and can make predictions or suppositions about what might happen in the other instances. For example, for our sample of dragons, we have seen that the green dragons tend to be more likely to be friendly than the red dragons. We could surmise that this holds over the other dragons as well. We can introduce this idea by asking the students, “I wish to have a new dragon join the herd and would prefer it to be friendly. Would I be better to get a green dragon or a red dragon?”
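The green-versus-red question comes down to comparing two conditional proportions. A sketch with an invented herd:

```python
# Invented herd data: (colour, friendly?) for each dragon.
herd = [
    ("green", True), ("green", True), ("green", False), ("green", True),
    ("red", False), ("red", True), ("red", False), ("red", False),
]

def friendly_rate(colour):
    """Proportion of dragons of this colour that are friendly."""
    group = [friendly for c, friendly in herd if c == colour]
    return sum(group) / len(group)

# Comparing the two conditional proportions is the numerical version
# of reading within the graph; extending the conclusion to dragons
# outside the herd is reading *beyond* the data.
print(friendly_rate("green"), friendly_rate("red"))  # 0.75 0.25
```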

Judging the graph

The advantage of programs like Excel is that many people can make graphs without too much trouble. This is also a problem, as often the graph Excel produces is not really suitable for the task, and can have all sorts of visual clutter which obscures the information displayed. Learners need to think about the graph, either their own, or one they are reading and ask whether it is successful in communicating correctly the information that needs to be communicated. Does the graph serve the purpose it was created for?

I suggest that the steps listed here are a worthwhile structure to use in reading graphs, particularly for beginners. This then leads into another process, summarised as OSEM. You can read about this in the post “A helpful structure for analysing graphs”.

Teaching a service course in statistics


Most students who enrol in an initial course in statistics at university level do so because they have to. I did some research on attitudes to statistics in my entry level quantitative methods course, and fewer than 1% of the students had chosen to be in that course. This is a little demoralising, if you happen to think that statistics is worthwhile and interesting.

Teaching a service course in statistics is one of the great challenges of teaching. A “Service Course” is a course in statistics for students who are majoring in some other subject, such as Marketing or Medicine or Education. For some students it is a terminating course – they will never have to look at a p-value again (they hope). For some students it is the precursor to further applied statistics such as marketing research or biological research. Having said that, statistics for citizens is important and interesting and engaging if taught that way. And we might encourage some students to carry on.

Yet the teachers and textbook writers seem to do their best to remove the joy. Statistics is a difficult subject to understand. Often the way the instructor thinks is at odds with the way the students think and learn. The mathematical nature of the subject is invested with all sorts of emotional baggage.

Here are some of the challenges of teaching a statistics service course.

Limited mathematical ability

It is important to appreciate how limited the mathematical understanding is of some of the students in service courses. In my first year quantitative methods course, I made sure my students knew basic algebra, including rearranging and solving equations. This was all done within a business context. Even elementary algebra was quite a stumbling block to some students, for whom algebra had been a bridge too far at school. There were students in a postgrad course I taught who were not sure which was larger, out of 0.05 and 0.1, and talked about crocodiles with regard to greater than and less than signs. And these were schoolteachers! Another senior maths teacher in that group had been teaching the calculation of confidence intervals, without actually understanding what they were.

The students are not like statisticians. Methods that worked to teach statisticians and mathematicians are unlikely to work for them. I wrote about this in my post about the Golden Rule, and how it applies at a higher level for teaching.

I realised a few years ago that I am not a mathematician. I do not have the ability to think in the abstract that is part of a true mathematician. Operations Research was my thing, because I was good at mathematics, but my understanding was concrete. This has been a surprising gift for me as a teacher, as it has meant that I can understand better what the students find difficult. Formulas do not tell them anything. Calculating by hand does not lead to understanding. It is from this philosophy that I approach the production of my videos. I am particularly pleased with my recent video about confidence intervals, which explains the ideas, with nary a formula in sight, but plenty of memorable images.


Choice of software

One of my most consistently accessed posts is “Excel, SPSS, Minitab or R?”. This consistent interest indicates that the choice of software is a universal problem. People are very quick to say how evil Excel is, and I am under no illusions as to many of its shortcomings. The main point of my post was, however, that it depends on the class you are teaching.

As I have taught mainly business students, I still hold that for them, Excel is ideal. Not so much for the statistical aspects, but because they learn to use Excel. Last Saturday the ideas for today’s posts were just forming in my mind when the phone rang, and despite my realising it was probably a telemarketer (we have caller ID on our phone) I answered it. It was a nice young woman asking me to take part in a short survey about employment opportunities for women in the Christchurch Rebuild. After I’d answered the questions, explaining that I was redundant from the university because of the earthquakes and that I had taught statistics, she realised that I had taught her. (This is a pretty common occurrence for me in our small town-city – even when I buy sushi I am served by ex-students). So I asked her about her experience in my course, and she related how she would never have taken the course, but enjoyed it and passed. I asked about Excel, and she told me that she had never realised what you could do with Excel before, and now still used it. This is not an isolated incident. When students are taught Excel as a tool, they use it as a tool, and continue to do so after the course has ended.

When business students learn using Excel, it has the appearance of relevance. They are aware that spreadsheets are used in business. It doesn’t seem like time wasted. So I stand by my choice to use Excel. However if I were still teaching at University, I would also be using iNZight. And if I taught higher levels I would continue to use SPSS, and learn more about R.


Textbooks

As I said in a previous post, Statistics Textbooks suck out all the fun. Very few textbooks do no harm. I wonder if this site could provide a database of statistics texts and reviews. I would be happy to review textbooks and include them here. My favourite elementary textbook is, sadly, out of print. It is called “Taking the Fear out of Data Analysis”, by the fabulously named Adamantios Diamantopoulos and Bodo Schlegelmilch. It takes a practical approach, and has a warm, nurturing style. It lacks exercises. I have used extracts from it over the years. The choice of textbook, like the choice of software, is “horses for courses”, but I think there are some horses that should not be put anywhere near a course. I do wonder how many students use textbooks as anything other than a combination lucky charm and paper weight.

In comparison with the plethora of college texts of varying value, at high-school level the pickings for textbooks are thin. This probably reflects the newness of the teaching of statistics at high-school level.  A major problem with textbooks is that they are so quickly out of date, and at school level it is not practical to replace class sets too often.

Perhaps the answer is online resources, which can be updated as needed, and are flexible and give immediate feedback.  😉

Emotional baggage

I was less than gentle with a new acquaintance in the weekend. When asked about my business, I told him that I make on-line materials to help people teach and learn statistics. He proceeded to relate a story of a misplaced use of a percentage as a reason why he never takes any notice of statistics. I have tired of the “Lies, damned lies, and statistics” jibe and decided not to take it lying down. I explained that the world is a better place because of statistical analysis. Much research, including medical research, would not be possible in the absence of methods for statistical analysis. An understanding of the concepts of statistics is a vital part of intelligent citizenship, especially in these days of big and ubiquitous data.

I stopped at that point, but have pondered since. What is it that makes people so quick to denigrate the worth of statistics? I suspect it is ignorance and fear. They make themselves feel better about their inadequacies by devaluing the things they lack. Just a thought.

This is not an isolated instance. In fact I was so surprised when a lighthouse keeper said that statistics sounded interesting and wanted to know more, that I didn’t really know what to say next! You can read about that in a previous post. Statistics is an interesting subject – really!

But the students in a service course in statistics may well be in the rather large subset of humanity who have yet to appreciate the worth of the subject. They may even have fear and antipathy towards the subject, as I wrote about previously in “Anxiety, fear and antipathy for maths, stats and OR”.

People are less likely to learn if they have negative attitudes towards the subject. And when they do learn it may well be “learning to pass” rather than actual learning which is internalised.

So what?

Keep the faith! Statistics is an important subject. Keep trying new things. If you never have a bad moment in your teaching, you are not trying enough new things. And when you hear from someone whose life was changed because of your teaching, there is nothing like it!

Excel, SPSS, Minitab or R?

I often hear this question: Should I use Excel to teach my class? Or should I use R? Which package is the best?

Update in April 2018: I have written a further post, covering other aspects and other packages.

It depends on the class

The short answer is: it depends on your class. You have to ask yourself what attitudes, skills and knowledge you wish the students to gain from the course. What is it that you want them to feel and do and understand?

If the students are never likely to do any more statistics, what matters most is that they understand the elementary ideas, feel happy about what they have done, and recognise the power of statistical analysis, so they can later employ a statistician.

If the students are strong in programming, such as engineering or computer science students, then they are less likely to find the programming a barrier, and will want to explore the versatility of the package.

If they are research students and need to take the course as part of a research methods paper, then they should be taught on the package they are most likely to use in their research.

Over the years I have taught statistics using Excel, Minitab and SPSS. These days I am preparing materials for courses using iNZight, which is a specifically designed user interface with an R engine. I have dabbled in R, but have never had students who were suitable to be taught using R.

Here are my pros and cons for each of these, and when are they most suitable.


Excel

I have already written somewhat about the good and bad aspects of Excel, and the evils of Excel histograms. There are many problems with statistical analysis in Excel. I am told there are parts of the Analysis ToolPak which are wrong, though I’ve never found them myself. There is no straightforward way to do a hypothesis test for a mean. The data-handling capabilities of the spreadsheet are fantastic, but the ToolPak cannot even deal well with missing values. The output is idiosyncratic, and not at all intuitive. There are programming quirks which should have been eliminated many years ago. For example, when you click on a radio button to say where you wish the output to go, the entry box for the data is activated, rather than the one for the output. It would require only elementary Visual Basic to correct this, but it has never happened. Each time Excel upgrades I look for this small fix, and have repeatedly been disappointed.
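For what it is worth, the missing one-sample test for a mean involves very little computation. A sketch of the t-statistic using only the Python standard library, with made-up data:

```python
import math
import statistics

def one_sample_t(data, mu0):
    """t-statistic for H0: population mean == mu0."""
    n = len(data)
    mean = statistics.mean(data)
    sd = statistics.stdev(data)  # sample standard deviation (n - 1 divisor)
    se = sd / math.sqrt(n)       # standard error of the mean
    return (mean - mu0) / se

# Made-up sample: weekly study hours for ten students, testing mu0 = 10
t = one_sample_t([12, 9, 11, 14, 8, 10, 13, 9, 12, 11], mu0=10)
print(round(t, 2))  # 1.49
```

The p-value then comes from the t-distribution with n − 1 degrees of freedom, which is exactly the step a teaching package should present alongside the statistic.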

So, given these shortcomings, why would you use Excel? Because it is there, because you are helping students gain other skills in spreadsheeting at the same time, because it is less daunting to use a familiar interface. These reasons may not apply to all students. Excel is the best package for first year business students for so many reasons.

PivotTables in Excel are nasty to get your head around, but once you do, they are fantastic. I resisted teaching PivotTables for some years, but I was wrong. They may well be one of the most useful things I have ever taught at university. I made my students create comparative bar charts in Excel, using PivotTables. One day Helen and I will make a video about PivotTables.
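Under the hood, a PivotTable counting survey responses by group is just a two-way table of counts. A sketch in plain Python, with made-up survey rows:

```python
from collections import Counter

# Made-up survey rows: (department, response)
rows = [
    ("Marketing", "Yes"), ("Marketing", "No"),  ("Marketing", "Yes"),
    ("Finance",   "No"),  ("Finance",   "No"),  ("Finance",   "Yes"),
    ("Marketing", "Yes"),
]

# A PivotTable counting responses by department is a Counter
# keyed on the (row category, column category) pair.
table = Counter(rows)

for dept in ("Marketing", "Finance"):
    print(dept, {resp: table[(dept, resp)] for resp in ("Yes", "No")})
```

The comparative bar chart then plots one cluster of bars per department, one bar per response – which is why the PivotTable has to come first.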


Minitab

Minitab is a lovely little package, and has very nice output. Its roots as a teaching package are obvious from the user-friendly presentation of results. It has been some years since I taught with Minitab. The main reason for this is that the students are unlikely ever to have access to Minitab again, and there is a lot of extra learning required in order to make it run.


SPSS

Most of my teaching at second year undergraduate and MBA and Masters of Education level has been with SPSS. Much of the analysis for my PhD research was done on SPSS. It’s a useful package, with its own peculiarities. I really like the data-handling in terms of excluding data, transforming variables and dealing with missing values. It has a much larger suite of analysis tools, including factor analysis, discriminant analysis, clustering and multi-dimensional scaling, which I taught to second year business students and research students. SPSS shows its origins as a suite of barely related packages, in the way it does things differently between different areas. But it’s pretty good really.


R

R is what you expect from a command-line open-source program. It is extremely versatile, and pretty daunting for an arts or business major. I can see that R is brilliant for second-level and up in statistics, preferably for students who have already mastered similar packages/languages like MATLAB or Maple. It is probably also a good introduction to high-level programming for Operations Research students.


iNZight

This brings us to iNZight, which is a suite of routines using R, set in a semi-friendly user interface. It was specifically written to support the innovative New Zealand school curriculum in statistics, and has a strong emphasis on visual representation of data and results. It includes alternatives that use bootstrapping as well as traditional hypothesis testing. The time series package allows only one kind of seasonal model. I like iNZight. If I were teaching at university still, I would think very hard about using it. I certainly would use it for Time Series analysis at first year level. For high school teachers in New Zealand, there is nothing to beat it.

It has some issues. The interface is clunky and takes a long time to unzip if you have a dodgy computer (as I do). The graphics are unattractive. Sorry guys, I HATE the eyeball, and the colours don’t do it for me either. I think they need to employ a professional designer. SOON! The data has to be just right before the interface will accept it. It is a little bit buggy in a non-disastrous sort of way. It can have dimensionality/rounding issues. (I got a zero slope coefficient for a linear regression with an r of 0.07 the other day.)

But – iNZight does exactly what you want it to do, with lots of great graphics and routines to help with understanding. It is FREE. It isn’t crowded with all the extras that you don’t really need. It covers all of the New Zealand statistics curriculum, so the students need only to learn one interface.
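The percentile bootstrap that iNZight offers as an alternative to traditional tests is simple enough to sketch in a few lines of Python (sample data invented):

```python
import random
import statistics

def bootstrap_mean_ci(data, reps=10_000, level=0.95, seed=1):
    """Percentile bootstrap confidence interval for the mean."""
    rng = random.Random(seed)
    # Resample with replacement, same size as the original sample,
    # and record the mean of each resample.
    means = sorted(
        statistics.mean(rng.choices(data, k=len(data)))
        for _ in range(reps)
    )
    lo = means[int((1 - level) / 2 * reps)]   # e.g. 2.5th percentile
    hi = means[int((1 + level) / 2 * reps)]   # e.g. 97.5th percentile
    return lo, hi

# Made-up sample of 15 reaction times (seconds)
sample = [1.2, 0.9, 1.5, 1.1, 1.3, 0.8, 1.4, 1.0, 1.6, 1.2,
          0.9, 1.3, 1.1, 1.4, 1.0]
low, high = bootstrap_mean_ci(sample)
print(round(low, 2), round(high, 2))
```

No distributional formula appears anywhere, which is precisely why the bootstrap suits a visual, exploration-first curriculum.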

There are other packages such as Genstat, Fathom and TinkerPlots, aimed at different purposes. My university did not have any of these, so I didn’t learn them. They may well be fantastic, but I haven’t the time to do a critique just now. Feel free to add one as a comment below!

Protectionism vs empowerment in the teaching of statistics

Where are you on the Fastidiousness Scale?

Sometimes statisticians just have to let go, and accept that some statistical analysis will be done in less than ideal conditions, with fairly dodgy data and more than a few violated assumptions.  Sometimes the wrong graph will be used. Sometimes people will claim causation from association. Just as sometimes people put apostrophes where they should not and misuse the word “comprise”.

When we are teaching, particularly non-majors, we need to think hard about where we sit on the fastidiousness scale. (In my experience just about all statistics teaching is to non-majors, which may say something about the attitudes of people to statistics.)

The fastidiousness scale is best described by its two extremes. At one extreme statistical analysis is performed only by mathematical statisticians, using tools like SAS and R, but only if they know exactly how each formula works (and have preferably proved them as well) and have done small examples by hand. All data is perfectly random, unbiased and representative. We could call this end protectionism.

At the other end of the fastidiousness scale just about anyone can do statistical analysis, using Excel. They accept that the formulas do what the instructor tells them they do. It is a black box approach. The data goes into the black box, and the results come out. Any graph is better than no graph. Any data is better than no data. This end is probably best labelled “cavalier”.

Some instructors teach as if the mathematical extreme were the ideal, and they reluctantly allow people to do really basic summary statistics so long as the data is random, with a large sample size. They fill their teaching with warnings, and include the finite population correction in their early lectures. This protectionism could be construed as professional snobbery. It is evident in attitudes to the use of Excel for statistical analysis. I accept that the data analysis toolpak in Excel leaves a lot to be desired. (See my posts about Excel and about Excel histograms.) But at the same time, lots of people have access to Excel and are at home using it. When Excel is used to introduce statistical concepts it builds on current skills, and empowers people.

Two positions on the scale are protectionism and empowerment.

Protectionism has the advantage that no bad statistical analysis is ever done. Any results that are published are properly explained, and are totally sound with regard to sample size and sampling method, choice of variable, choice of analysis, interpretation and data display. One concern is that the mathematical focus may mean that the practical aspects are neglected.

I do not recommend the cavalier end of the fastidiousness scale either. But somewhere in that direction lies empowerment. The advantages of empowerment are legion! Even if people do bad statistical analysis it is better than none at all. Taking a sample and drawing conclusions from it is better than not taking a sample. As people are empowered to do and understand statistics, they may better understand statistical ideas when they are presented to them in other contexts.

Teaching Statistics to Physios

Some years ago my sister asked me to be a keynote speaker at a hand-therapy conference. At the time I had mainly taught Operations Research and some regression analysis. But it included a free trip to Queenstown away from my children, so how could I resist? I was to do a one-hour plenary session on statistics and an elective workshop on quantitative research methods. It was scheduled first thing in the morning after the “dinner” the night before. Attendance at my session was compulsory if they were to get credit towards their professional accreditation. I did wonder if my sister actually liked me! My audience was over a hundred physiotherapists and occupational therapists who specialise in the treatment of hands, from Australia and New Zealand. They are all clever people, who generally had little knowledge of statistics. I assumed, correctly, that most of them were nervous of statistics. They had been taught by protectionists, and felt afraid, like over-protected children.

I decided to take an approach of empowerment – that all statistics boiled down to a few main ideas and that if they could understand those, they would be able to read academic reports on statistical analysis critically and, with help, do their own research. I taught about levels of data, the concept of sampling, and the meaning of the p-value. I used examples about hands. And I took an enabling, encouraging approach without being patronising.

It worked. The attendees felt empowered, and a large number came to my follow-up workshop.  I don’t know if any of them went on to apply much of what I taught them but I do know that a lot of them changed their attitude to statistical analysis.

Attitudes outlast skills and knowledge

Sometimes we forget that we are teaching attitudes, skills and knowledge – in that order of importance. If our students finish our course feeling that statistics is interesting, possible and relevant, then we have accomplished a great thing. People will forget skills and knowledge, but attitudes stick. If the students know that at one point they knew how to perform a comparison of two means, and that it wasn’t that difficult, then if the time comes again, they are more likely to work out how to do it again. They have been empowered!

Imagine if only people who can spell well and write with correct grammar were allowed to write, if only the best chefs could cook and the rest of us would just watch in awe, if only professional musicians were allowed to play instruments, if only professional sportspeople were allowed to participate. Just as amateur writers, musicians, sportspeople and chefs have a better appreciation of the true nature of the endeavour, empowered amateur statisticians are in a better position to appreciate the worth and importance of rigorous, fastidious statistical analysis.

Let us cast off the shackles of protectionism and start empowering. Or at least move a little way down the fastidiousness scale when teaching non-majors.

Beware of Excel Histograms

Excel histograms are a disgrace. Microsoft should be embarrassed to have them associated with their ubiquitous and generally wonderful spreadsheet, Excel. I have previously posted on how useful and versatile Excel is for enabling people to bypass the number crunching, and get to the ideas behind statistics. This is mostly true. But the histogram add-in should come with a health and safety warning.

To start with, the default look for the histogram is outrageously poor. I have some data on times a person takes to solve a Rogo puzzle. (Collected as part of our research on what factors affect solution time.) I put the data in the spreadsheet, and use the data analysis toolpak to create a histogram using the default settings. Voila!

Histogram produced using default settings in Excel

I’ll stretch it out a bit so you can see it in all its glory:

Histogram using Excel defaults, stretched out.

And here is what can be produced from this with a fair degree of manipulation:

Excel Histogram whipped into shape

This is not just a question of cosmetics. The way the horizontal axis is labelled makes an Excel-produced histogram very difficult to read correctly unless the adaptation shown above is used. And sometimes, an Excel-produced histogram is just plain incorrect.

Bad histogram in NCEA sample exam

What prompted this tirade is a question from the sample external examination question in NCEA level 2 “Apply probability methods in solving problems”. This is an exam that over 15,000 students are likely to take. Fortunately this is only an exemplar and not the real thing. It includes a badly labelled histogram, which I am almost certain was made in Excel.

Histogram taken from an exam exemplar

The introduction says: “Ali has a farm in Southland. She records the weights of 32 lambs born on her farm. The results are shown on the histogram above.”

The first question asks: “What proportion of the lambs weighs less than 1.25 kg?”

Go on – work it out.

I can’t answer this for certain. The labelling of the horizontal axis renders this question unanswerable. I suspect the desired answer is one out of 32. To get this answer I assume that the lambs are weighed to a precision of one decimal place, and the numbers under the graph are the inclusive upper bounds for the area above. I make this assumption because I know that is what Excel does. That is two too many assumptions for students in an exam. This is too many assumptions for any graph. Graphs exist in order to communicate, not confuse.

A histogram always has “bins” which cover a range of values. If you went to school last century and learnt to draw them by hand, you would put the boundary number on the tick mark that marks the boundary between the bins. Intuitive!

A Google search on the word histogram shows most histograms with the tick marks at the boundaries, and quite a few using the Excel work-around shown above. That is because the only way to get the number and the tick mark to line up is to move the label to the centre of the bar. An Excel column chart is designed to be a value graph for nominal data, and it is being pressed into service in an unnatural way.

Another example

This is a simple mockup to illustrate

Example A

The question is, how many people scored 3 or less in the test?

It isn’t clear. Did one person score between 0 and 3, and then three between 3 and 6? The data is actually: 0, 0.5, 1, 2, 5, 5.5, 7, 8, 9, 10, and the answer is that four people scored three or less. The following histogram shows this.

Example B (same data as Example A)

All it takes is some relabelling and the meaning is clear.
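The ambiguity comes down to bin conventions. A minimal sketch in Python, using the same ten scores, shows how Excel-style bins (which include their upper edge) and textbook-style bins (which include their lower edge) put a boundary value such as 9 in different bars:

```python
# The ten scores from the mockup above, with bins 0-3, 3-6, 6-9, 9-12
data = [0, 0.5, 1, 2, 5, 5.5, 7, 8, 9, 10]
edges = [0, 3, 6, 9, 12]

# Excel-style: each bin is (lo, hi], and the first bin also keeps its lower edge
excel_style = [sum(1 for v in data if (lo < v <= hi) or (v == lo == edges[0]))
               for lo, hi in zip(edges, edges[1:])]

# Textbook-style: each bin is [lo, hi), and the last bin also keeps its upper edge
textbook_style = [sum(1 for v in data if (lo <= v < hi) or (v == hi == edges[-1]))
                  for lo, hi in zip(edges, edges[1:])]

print(excel_style)     # [4, 2, 3, 1] - the score of 9 lands in the 6-9 bar
print(textbook_style)  # [4, 2, 2, 2] - the score of 9 lands in the 9-12 bar
```

Both conventions are defensible; what is not defensible is a label that leaves the reader guessing which one was used.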

Teaching implications

We thought long and hard about the teaching of histograms within a Business Statistics course. We concluded that any student who is likely to need to produce a histogram in the future is likely to (ought to?) have a better statistics package than Excel to use. Teaching them this bizarre work-around in Excel is a waste of student time. (We decided this after making students do it in a course.) It is more important for students to be able to interpret histograms correctly, and be aware of the pitfalls of badly labelled histograms. Consequently we taught students to interpret and then critique histograms, rather than construct them themselves, and assessed the same way.

If you ever use a histogram yourself, make sure you do not fall into the pit shown above!

And for those of you who persist in teaching histograms in Excel (or need help yourself in knowing how to do it – hence avoiding said pit), here is a pdf handout.


Good luck.

The best outcome would be that Microsoft get their very poor data analysis add-in fixed up, and the world would be a better place. Any chance of that?

Excel in Statistics and Operations Research

I love spreadsheets

The spreadsheet package is a wondrous thing. It has revolutionized a great many processes in the office, home and scientific research. It has affected the way we think and teach. It has enabled many more people to program and to build models, without even knowing it (and sometimes very badly). And, for better or worse, Excel has become the default spreadsheet package.

I have used Excel to create crosswords

I love spreadsheets and I love Excel.  I first became acquainted with Multiplan and Lotus-123 in 1984 as part of my graduate degree. It was amazing to see how versatile these spreadsheets were. Since then I have also used and taught Visicalc, a couple of shareware spreadsheets and finally Excel. At home I have used Excel to plan holidays, make sense of time zones, organize my decorative egg collection, convert recipes, budget, run the children’s organization at my church, make crosswords and other learning activities, plan a layout for my little sun-room, keep track of my time when juggling home, work and study commitments, cater for functions and schedule work. It is a standing joke at work that if there is a problem I will use a spreadsheet to solve it. And if that fails try duct tape.

Videos about spreadsheets

I have even made YouTube videos about Excel spreadsheets. Our first one, Absolute and Relative References, was made for the class to help with teaching a concept we seemed to have to explain over and over. It became popular worldwide and was the springboard for other videos about Excel. The video on Linear Programming using Excel Solver uses Lego pieces to illustrate a product mix LP. At an INFORMS conference in San Diego the FrontLine representative voiced his approval of the video, and it has received consistent viewing internationally for many years. It is due for renewal, thanks to upgrades in Solver, but is still helpful for teaching the fundamentals of LP.
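The product mix idea behind that video is small enough to sketch in a few lines of code. Here is a toy version in Python, with my own made-up numbers (not the ones in the video): each table uses 2 large and 2 small bricks and earns 16, each chair uses 1 large and 2 small bricks and earns 10, and we have 6 large and 8 small bricks. Brute-force enumeration stands in for Solver:

```python
# Toy Lego product-mix LP: choose how many tables and chairs to build.
LARGE, SMALL = 6, 8  # bricks available

best = max(
    (16 * t + 10 * c, t, c)          # (profit, tables, chairs)
    for t in range(LARGE // 2 + 1)   # at most 3 tables from the large bricks
    for c in range(SMALL // 2 + 1)   # at most 4 chairs from the small bricks
    if 2 * t + c <= LARGE and 2 * t + 2 * c <= SMALL
)
profit, tables, chairs = best
print(tables, chairs, profit)  # 2 2 52
```

The point of the spreadsheet (or the snippet) is the same: once the model is written down, you can change an input and instantly see the effect on the optimal mix.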

Excel and Statistics

There are different schools of thought about using Excel in the teaching of statistics and operations research. My view is that it is preferable to teach Excel in conjunction with another discipline than in a specific computer skills course. Excel plays a big part as a tool in the entry-level Quantitative Methods and Management Science courses taught here at the University of Canterbury.

Statistics is not mathematics, but when it is taught by mathematicians, they are often captured by the mathematical aspects of statistics. Because they understand formulas by looking at them, they assume their students will too. This is fine for a course in mathematical statistics. My experience is that most students of statistics are not in the class from choice, and many find the mathematics daunting. It becomes a barrier to understanding statistics. Using a spreadsheet can help to remove that barrier.

I am aware that the Statistics ToolPak in Excel is far from ideal. It doesn’t include a test for a mean, and the data handling capabilities are poor at best. It can’t deal with missing values, the dialog boxes are unintuitive, and it isn’t even easy to add in when you want to use it. There are no box plots, and the histograms are a joke at best and usually incorrect. I would much rather use a dedicated statistics package such as Minitab or SPSS. But there are two overriding reasons why I continue to have the students learn statistics using Excel. The first reason is that they may never see Minitab or SPSS again in their lives, but Excel or something like it will be in any business they are in. The other reason is that they are learning Excel skills as well – what my colleague charmingly calls CV (resume) expansion. Time and again I have received feedback from students on how the Excel skills have come in handy. Because they use Excel throughout the course, for all calculations, it becomes a tool, rather than a hurdle.
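The missing test for a mean is a good example of how little machinery is actually involved. Here is a minimal sketch in Python, with made-up solution times, of the one-sample t statistic the ToolPak omits (in Excel you would build the same formula from AVERAGE, STDEV.S and SQRT):

```python
import math
import statistics

# Hypothetical data: is the mean solution time different from a claimed 30 seconds?
times = [28.1, 33.4, 29.7, 35.2, 31.0, 27.8, 34.5, 30.9]
mu0 = 30.0

n = len(times)
xbar = statistics.mean(times)
s = statistics.stdev(times)            # sample standard deviation
t = (xbar - mu0) / (s / math.sqrt(n))  # compare with a t table on n - 1 = 7 df

print(round(t, 2))
```

The formula is one line; the hard part for students is the interpretation, which is exactly where teaching time should go.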

Graphing in Excel

Excel Graphs are ubiquitous. I wonder if the minion who decided on the default colors of maroon and beige in Excel 2003 smiled when he/she saw the myriad graphs produced in that color scheme. Certainly it was a flag to show whence they came and that the producer of the graph did not know enough to change from the default. Sadly some of the Excel graphs are awful. Tufte would not approve of all the trimming and empty space. Exploding three-dimensional pie-charts should be permanently exploded and abandoned. I realize it is not Excel’s fault that people like really bad graphs. I just wish it wasn’t so good at enabling them. A pie chart is another indicator that the analyst doesn’t really know what they are doing.

Excel makes possible the production of truly awful pie charts.

Despite all that it is fantastic that graphs are easy to produce in Excel. Graphing should be the first step in any data analysis, but manual production of graphs is tedious in the extreme. Excel makes it possible to have a good look at your data, quite quickly, and get an idea of what is happening, in terms of relationships and errors, before blundering on with unnecessary analysis on dirty data. There is now no excuse not to graph. And Pivot Charts! These are SO cool. Once I got over the tricky interface and worked out how it all worked, I became a convert. My students are required to produce two-way bar charts and tables using Pivot-Tables and Pivot-Charts. I’m sure they will thank me for it one day. Some of them even do at the time!

Operations Research and Excel

In teaching Operations Research, Excel is a boon. We can now escape from trivial linear programs limited to two decision variables so that they can be plotted on the Cartesian plane. In our Management Science course we use Excel for teaching the idea of a model, for linear programming, multicriteria decision making, discrete event simulation and for financial models using discounting and annuities. There is a transformative opportunity to explore the models and see instantly the effect of a change in input value. Excel is a great introduction to Operations Research, and in an entry-level or MBA paper such as business methods, is sufficient. However any second-level paper in OR should be using more appropriate software.
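The discounting models are a nice case of a formula the students can check against first principles. A minimal sketch, with illustrative figures of my own: the present value of an annuity of 1,000 per year for 5 years at 8% is PV = PMT × (1 − (1 + r)⁻ⁿ) / r, and summing the discounted payments one by one gives the same answer:

```python
# Present value of an ordinary annuity, two ways (illustrative figures).
pmt, r, n = 1000, 0.08, 5

pv_formula = pmt * (1 - (1 + r) ** -n) / r               # closed-form annuity factor
pv_by_hand = sum(pmt / (1 + r) ** t for t in range(1, n + 1))  # discount each payment

print(round(pv_formula, 2))  # 3992.71
print(round(pv_by_hand, 2))  # 3992.71 - both routes agree
```

In a spreadsheet the same check takes one PV function and one column of discounted cash flows, and seeing the two agree is what turns the formula from magic into a model.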

Teachers of decision sciences such as Statistics and Operations Research have the opportunity to teach good spreadsheeting practice, along with their own discipline. We need to be careful that we model best practice with regard to formatting, use of input cells, and avoiding numbers in formulas. This last video is a little silly, because I gave my editor a free hand. However, the four style rules are still worth promulgating. Enjoy!