# Teaching how to write statistical reports

It is difficult to write statistical reports and it is difficult to teach how to write statistical reports.

When statistics is taught in the traditional way, with emphasis on the underlying mathematics, the process of statistics is truncated at both ends. When we concentrate on the sterile analysis, the messy “writing stuff” is avoided. Students do not devise their own investigative questions, and they do not write up the results.

Here’s the thing though – in reality, the analysis step of a statistical investigation is a very small part of the whole, and performed at the click of a button or two.

Ultimately the embedding of the analysis back into an investigation should not be a problem. The really interesting part of statistics happens all around the analysis. Understanding the context enriches the learning, transforming the discipline from mathematics to statistics. We can help students embrace the excitement of a true statistical investigation. But in this time of transition, the report-writing aspects are a problem. They are a problem for the learner and for the teacher.

The new New Zealand curriculum for statistics requires report-writing as an essential component of the majority of assessment, particularly at the final year of high school. This is causing understandable concern among teachers, who come predominantly from a mathematical background. I can imagine myself a few years ago saying, “I became a maths teacher so I wouldn’t have to teach and mark essays!” In addition, the results from the students are less than stellar, even from capable students. Teachers do not like their students to perform poorly.

All statistics courses should have a component of report-writing, unless they are courses in the mathematics of statistics. The problem here is that, like the secondary school teachers in New Zealand, many statistics instructors are dealing with the mathematics more than the application of statistics, and are not confident in their own report-writing ability. Normal human behaviour is to avoid it. Having taught service statistics courses in a business school for two decades, I have gradually made the transition to more emphasis on report-writing and am convinced that statistical report-writing needs to be taught explicitly, and taught well.

# Report-writing is a fundamental and useful skill

For teachers who are uncomfortable with teaching and marking reports, it would be nice to dismiss the process of report-writing as “not important”. Much of statistics teaching is in a service course, as discussed in my previous blog. It is unlikely that any of these students will ever have to write a report on a statistical analysis, other than as part of the assessment for the course. So why do we put them and ourselves through this?

## You don’t realise whether you understand or not until you try to write it down.

The written word requires a higher level of precision than a thought or a spoken explanation. Your sentences look at you from the page and mock you with their vagueness and ambiguity. I find this out time and again as I blog. What seems like a well-thought-out argument in my head as I do my morning run falls to shreds on paper, before being mustered into some semblance of order. It is in writing that we identify the flaws in our understanding. As we try to write our findings we become more aware of fuzzy thinking and gaps in reasoning. As we write we are required to organise our thoughts.

## Better critics of other reports

A student who has been required to produce a report of a good standard will be exposed to examples of good and bad reports and will be better able to identify incorrect thinking in reports they read themselves. This is perhaps the most important purpose of a terminal course in statistics. Having said that, it is both heart-warming and alarming to hear from past-students the wonderful things they are doing with the statistics they learned in my one-semester course.

## Useful skill for employment

Students need to be able to read and write as part of empowered citizenship. The skill of writing a coherent report in good English is highly sought after by employers, and of great use at university in just about every discipline. It is a transferable skill to many endeavours.

## Reports are needed for assessment

On a practical level, if the teacher is going to evaluate understanding, they need evidence to work from. A written report provides one form of evidence of understanding.

# Report-writing is difficult to teach

Some maths teachers may feel inadequate in teaching “English”, as they see report-writing. They do not have the pedagogical content knowledge for teaching writing that they do for teaching algebra or percentages, for instance. Pedagogical content knowledge is more than the intersection of knowing a subject and being able to teach in a general sort of way. It is the knowledge of how to teach a certain discipline, what is difficult for learners, and how to help them learn.

# Some basic ideas for teaching report-writing

To write a good report you need to understand what is going on, have the appropriate vocabulary, and use a clear structure. Good teaching will emphasise understanding. Getting students to write sentences about output and share them with their peers is a great way to identify misunderstandings. As these sentences are shared, the teacher can model the use of correct technical language. They can say, for instance, “You have the essence correct here, but there are some more precise terms you could use, such as …” Teachers can either give students outlines for reports, or they can give them several good reports and get the students to identify the underlying structure. I am a firm believer in the generous use of headings within a report. They provide signposts for writer and reader alike.

You can see this in my video, Writing up a Time Series Report.

Report-writing requires practice. The assessment report should not be the first report of that type that a student writes. In the world of motivated students with no other demands on their time, it would be great to have them write up one assignment for the practice and then learn from that to produce a better one. I am aware that students tend not to do the work unless there is a grade attached to it, so it can be difficult to get a student to do a “practice report” ahead of the “real assessment.”  There are other alternatives that approximate this, however, which require less input from the teacher. One of these, the use of templates, is explained in an earlier post, Templates for statistical reports – spoon-feeding?

There is nothing wrong with using templates and “sensible sentences” (not to be confused with “sensible sentencing”, which seems devoid of sense). There are only so many ways to say that “the median number of pairs of shoes owned by women is ten.” It is also a difficult sentence to make sound elegant. Good reports will look similar. This is not creative writing – it is report-writing. Sure, the marking may be boring when all the reports seem very similar, but it is a small price to pay when you avoid banging your head against the desk at the bizarre and disorganised offerings.

This is but a musing on the teaching of report-writing. Glenda Francis, in  “An approach to report writing in statistics courses” identifies similar issues, and provides a fuller background to the problem. She also indicates that there is much to be done in developing this area of teaching and research. I will be providing professional development in this area over the next month to at least three groups of teachers, and I look forward to learning a great deal from them, as we explore these issues together.

# The role of context in statistical analysis

The wonderful advantage of teaching statistics is the real-life context within which any application must exist. This can also be one of the difficulties. Statistics without context is merely the mathematics of statistics, and is sterile and theoretical. The teaching of statistics requires real data. And real data often comes with a fairly solid back-story.

One of the interesting aspects for practising statisticians is that they can find out about a wide range of applications by working in partnership with specialists. In my statistical and operations research advising I have learned about a range of subjects, including the treatment of hand injuries, children’s developmental understanding of probability, bed occupancy in public hospitals, the educational needs of blind students, growth rates of vegetables, texted comments on service at supermarkets, killing methods of chickens, rogaine route choice, co-ordinating scientific expeditions to Antarctica, and the cost of care for neonates in intensive care. I found most of these really interesting and was keen to work with the experts on these projects. Statisticians tend to work in teams with specialists in related disciplines.

# Learning a context can take time

When one is part of a long-term project, time spent learning the intricacies of the context is well spent. Without that, the meaning from the data can be lost. However, it is difficult to replicate this in the teaching of statistics, particularly in a general high school or service course. The amount of time required to become familiar with the context takes away from the time spent learning statistics. Too much time spent on one specific project or area of interest can mean that the students are unable to generalise. You need several different examples in order to know what is specific to the context and what is general to all or most contexts.

One approach is to try to have contexts with which students are already familiar. This can be enabled by collecting the data from the students themselves. The Census at School project provides international data for students to use in just this way. This is ideal, in that the context is familiar, and yet the data is “dirty” enough to provide challenges and judgment calls.

Some teachers find that this is too low-level and would prefer to use biological data, or dietary or sports data from other sources. I have some reservations about this. In New Zealand the new statistics curriculum is in its final year of introduction, and understandably there are some bedding-in issues. One I perceive is the relative importance of the context in the students’ reports. As these reports have high-stakes grades attached to them, this is an issue. I will use as an example the time series “standard”. The assessment specification states, among other things, “Using the statistical enquiry cycle to investigate time series data involves: using existing data sets, selecting a variable to investigate, selecting and using appropriate display(s), identifying features in the data and relating this to the context, finding an appropriate model, using the model to make a forecast, communicating findings in a conclusion.”

The full “standard” is given here: Investigate Time Series Data. This would involve about five weeks of teaching and assessment, in parallel with four other subjects. (The final three years of schooling in NZ are assessed through the National Certificate of Educational Achievement (NCEA). Each year students usually take five subject areas, each of which consists of about six “achievement standards” worth between 3 and 6 credits. There is a mixture of internally and externally assessed standards.)

In this specification I see that there is a requirement for the model to be related to the context. This is a great opportunity for teachers to show how models are useful, and their limitations. I would be happy with a few sentences indicating that the student could identify a seasonal pattern and make some suggestions as to why this might relate to the context, followed by a similar analysis of the shape of the trend. However, some teachers are requiring students to do independent literature exploration into the area, and requiring references, while forbidding the referencing of Wikipedia.

# This concerns me, and I call for robust discussion.

Statistics is not research methods any more than statistics is mathematics. Research methods and standards of evidence vary between disciplines. Clearly the evidence required in medical research will differ from that of marketing research. I do not think it is the place of the statistics teacher to be covering this. Mathematics teachers are already being stretched to teach the unfamiliar material of statistics, and I think asking them and the students to become expert in research methods is going too far.

It is also taking out all the fun.

# Keep the fun

Statistics should be fun for the teacher and the students. The context needs to be accessible or you are just putting in another opportunity for antipathy and confusion. If you aren’t having fun, you aren’t doing it right. Or, more to the point, if your students aren’t having fun, you aren’t doing it right.

# Some suggestions about the role of context in teaching statistics and operations research

• Use real data.
• If the context is difficult to understand, you are losing the point.
• The results should not be obvious. It is not interesting that year 12 boys weigh more than year 9 boys.
• Null results are still results. (We aren’t trying for academic publications!)
• It is okay to clean up data so you don’t confuse students before they are ready for it.
• Sometimes you should use dirty data – a bit of confusion is beneficial.
• Various contexts are better than one long project.
• Avoid the plodding parts of research methods.
• Avoid boring data. Who gives a flying fish about the relative sizes of dolphin jaws?
• Wikipedia is a great place to find out the context for most high school statistics analysis. That is where I look. It’s a great starting place for anyone.

# Excel, SPSS, Minitab or R?

I often hear this question: Should I use Excel to teach my class? Or should I use R? Which package is the best?

# It depends on the class

The short answer is: It depends on your class. You have to ask yourself what attitudes, skills and knowledge you wish the students to gain from the course. What is it that you want them to feel and do and understand?

If the students are never likely to do any more statistics, what matters most is that they understand the elementary ideas, feel happy about what they have done, and recognise the power of statistical analysis, so they can later employ a statistician.

If the students are strong in programming, such as engineering or computer science students, then they are less likely to find the programming a barrier, and will want to explore the versatility of the package.

If they are research students and need to take the course as part of a research methods paper, then they should be taught on the package they are most likely to use in their research.

Over the years I have taught statistics using Excel, Minitab and SPSS. These days I am preparing materials for courses using iNZight, which is a specifically designed user interface with an R engine. I have dabbled in R, but have never had students for whom teaching with R would have been suitable.

Here are my pros and cons for each of these, and when each is most suitable.

# Excel

I have already written somewhat about the good and bad aspects of Excel, and the evils of Excel histograms. There are many problems with statistical analysis in Excel. I am told there are parts of the Analysis ToolPak which are wrong, though I’ve never found them myself. There is no straightforward way to do a hypothesis test for a mean. The data-handling capabilities of the spreadsheet are fantastic, but the ToolPak cannot even deal well with missing values. The output is idiosyncratic, and not at all intuitive. There are programming quirks which should have been eliminated many years ago. For example, when you click on a radio button to say where you wish the output to go, the entry box for the data is activated, rather than the one for the output. It would take only elementary Visual Basic to correct this, but it has never happened. Each time Excel upgrades I look for this small fix, and have repeatedly been disappointed.

So, given these shortcomings, why would you use Excel? Because it is there, because you are helping students gain other skills in spreadsheeting at the same time, because it is less daunting to use a familiar interface. These reasons may not apply to all students. Excel is the best package for first year business students for so many reasons.

PivotTables in Excel are nasty to get your head around, but once you do, they are fantastic. I resisted teaching PivotTables for some years, but I was wrong. They may well be one of the most useful things I have ever taught at university. I made my students create comparative bar charts in Excel, using PivotTables. One day Helen and I will make a video about PivotTables.

# Minitab

Minitab is a lovely little package, and has very nice output. Its roots as a teaching package are obvious from the user-friendly presentation of results. It has been some years since I taught with Minitab. The main reason for this is that the students are unlikely ever to have access to Minitab again, and there is a lot of extra learning required in order to make it run.

# SPSS

Most of my teaching at second year undergraduate and MBA and Masters of Education level has been with SPSS. Much of the analysis for my PhD research was done on SPSS. It’s a useful package, with its own peculiarities. I really like the data-handling in terms of excluding data, transforming variables and dealing with missing values. It has a much larger suite of analysis tools, including factor analysis, discriminant analysis, clustering and multi-dimensional scaling, which I taught to second year business students and research students.  SPSS shows its origins as a suite of barely related packages, in the way it does things differently between different areas. But it’s pretty good really.

# R

R is what you expect from a command-line open-source program. It is extremely versatile, and pretty daunting for an arts or business major. I can see that R is brilliant for second-level and up in statistics, preferably for students who have already mastered similar packages/languages like MATLAB or Maple. It is probably also a good introduction to high-level programming for Operations Research students.

# iNZight

This brings us to iNZight, which is a suite of routines using R, set in a semi-friendly user interface. It was specifically written to support the innovative New Zealand school curriculum in statistics, and has a strong emphasis on visual representation of data and results. It includes alternatives that use bootstrapping as well as traditional hypothesis testing. The time series package allows only one kind of seasonal model. I like iNZight. If I were teaching at university still, I would think very hard about using it. I certainly would use it for Time Series analysis at first year level. For high school teachers in New Zealand, there is nothing to beat it.

It has some issues. The interface is clunky and takes a long time to unzip if you have a dodgy computer (as I do). The graphics are unattractive. Sorry guys, I HATE the eyeball, and the colours don’t do it for me either. I think they need to employ a professional designer. SOON! The data has to be just right before the interface will accept it. It is a little bit buggy in a non-disastrous sort of way. It can have dimensionality/rounding issues. (I got a zero slope coefficient for a linear regression with an r of 0.07 the other day.)

But – iNZight does exactly what you want it to do, with lots of great graphics and routines to help with understanding. It is FREE. It isn’t crowded with all the extras that you don’t really need. It covers all of the New Zealand statistics curriculum, so the students need only to learn one interface.

There are other packages such as Genstat, Fathom and TinkerPlots, aimed at different purposes. My university did not have any of these, so I didn’t learn them. They may well be fantastic, but I haven’t the time to do a critique just now. Feel free to add one as a comment below!

# Where are you on the Fastidiousness Scale?

Sometimes statisticians just have to let go, and accept that some statistical analysis will be done in less than ideal conditions, with fairly dodgy data and more than a few violated assumptions.  Sometimes the wrong graph will be used. Sometimes people will claim causation from association. Just as sometimes people put apostrophes where they should not and misuse the word “comprise”.

When we are teaching, particularly non-majors, we need to think hard about where we sit on the fastidiousness scale. (In my experience just about all statistics teaching is to non-majors, which may say something about the attitudes of people to statistics.)

The fastidiousness scale is best described by its two extremes. At one extreme statistical analysis is performed only by mathematical statisticians, using tools like SAS and R, but only if they know exactly how each formula works (and have preferably proved them as well) and have done small examples by hand. All data is perfectly random, unbiased and representative. We could call this end protectionism.

At the other end of the fastidiousness scale just about anyone can do statistical analysis, using Excel. They accept that the formulas do what the instructor tells them they do. It is a black box approach. The data goes into the black box, and the results come out. Any graph is better than no graph. Any data is better than no data. This end is probably best labelled “cavalier”.

Some instructors teach as if the mathematical extreme were the ideal, and they reluctantly allow people to do really basic summary statistics so long as the data is random, with a large sample size. They fill their teaching with warnings, and include the correction for small populations in their early lectures. This protectionism could be construed as professional snobbery. It is evident in attitudes to the use of Excel for statistical analysis. I accept that the Analysis ToolPak in Excel leaves a lot to be desired (see my posts about Excel and about Excel histograms). But at the same time, lots of people have access to Excel and are at home using it. When Excel is used to introduce statistical concepts it builds on current skills, and empowers people.

# Two positions on the scale are protectionism and empowerment.

Protectionism has the advantage that no bad statistical analysis is ever done. Any results that are published are properly explained, and are totally sound with regard to sample size and sampling method, choice of variable, choice of analysis, interpretation and data display. One concern is that the mathematical focus may mean that the practical aspects are neglected.

I do not recommend the cavalier end of the fastidiousness scale either. But somewhere in that direction lies empowerment. The advantages of empowerment are legion! Even if people do bad statistical analysis it is better than none at all. Taking a sample and drawing conclusions from it is better than not taking a sample. As people are empowered to do and understand statistics, they may better understand statistical ideas when they are presented to them in other contexts.

# Teaching Statistics to Physios

Some years ago my sister asked me to be a keynote speaker at a hand-therapy conference. At the time I had mainly taught Operations Research and some regression analysis. But it included a free trip to Queenstown away from my children, so how could I resist? I was to do a one-hour plenary session on statistics and an elective workshop on quantitative research methods. It was scheduled first thing in the morning after the “dinner” the night before. Attendance at my session was compulsory if they were to get credit towards their professional accreditation. I did wonder if my sister actually liked me! My audience was over a hundred physiotherapists and occupational therapists who specialise in the treatment of hands, from Australia and New Zealand. They are all clever people, who generally had little knowledge of statistics. I assumed, correctly, that most of them were nervous of statistics. They had been taught by protectionists, and felt afraid, like over-protected children.

I decided to take an approach of empowerment – that all statistics boiled down to a few main ideas and that if they could understand those, they would be able to read academic reports on statistical analysis critically and, with help, do their own research. I taught about levels of data, the concept of sampling, and the meaning of the p-value. I used examples about hands. And I took an enabling, encouraging approach without being patronising.

It worked. The attendees felt empowered, and a large number came to my follow-up workshop.  I don’t know if any of them went on to apply much of what I taught them but I do know that a lot of them changed their attitude to statistical analysis.

# Attitudes outlast skills and knowledge

Sometimes we forget that we are teaching attitudes, skills and knowledge – in that order of importance. If our students finish our course feeling that statistics is interesting, possible and relevant, then we have accomplished a great thing. People will forget skills and knowledge, but attitudes stick. If the students know that at one point they knew how to perform a comparison of two means, and that it wasn’t that difficult, if the time comes again, they are more likely to work out how to do it again. They have been empowered!

Imagine if only people who can spell well and write with correct grammar were allowed to write, if only the best chefs could cook and the rest of us would just watch in awe, if only professional musicians were allowed to play instruments, if only professional sportspeople were allowed to participate. Just as amateur writers, musicians, sportspeople and chefs have a better appreciation of the true nature of the endeavour, empowered amateur statisticians are in a better position to appreciate the worth and importance of rigorous, fastidious statistical analysis.

Let us cast off the shackles of protectionism and start empowering. Or at least move a little way down the fastidiousness scale when teaching non-majors.

# Don’t bury students in tools

In our statistics courses and textbooks there is a tendency to hand our students tool after tool, wanting to teach them all they need to know. However students can feel buried under these tools and unable to decide which to use for which task. This is also true in beginning Operations Research or Management Science courses. To the instructors, it is obvious whether to use the test for paired or independent samples or whether to use multicriteria decision making or a decision tree.  But it is just another source of confusion for the student, who wants to be told what to do.

## Tools for statistics and operations research

A common approach to teaching hypothesis testing in business statistics courses, if textbooks are anything to go by, is to teach several different forms of hypothesis testing: starting with the test for a mean and the test for a proportion, then the difference of two means (independent and paired), then the difference of two proportions. Then we have tests for regression and correlation, and the chi-squared test for independence. These are the seven basic statistical tests that people are likely to use or see. I would probably add ANOVA, if there is enough time. Even listed, this seems a bit confusing.
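There is very little calculation at the heart of any of these tests. As a sketch, here is the paired difference of two means reduced to its test statistic in Python’s standard library – the numbers are invented, and a real analysis would of course go on to a p-value, which the software handles:

```python
import statistics

def paired_t_statistic(before, after):
    """t statistic for a paired difference of two means: d_bar / (s_d / sqrt(n))."""
    diffs = [b - a for b, a in zip(before, after)]
    d_bar = statistics.mean(diffs)          # mean of the paired differences
    s_d = statistics.stdev(diffs)           # sample standard deviation of the differences
    n = len(diffs)
    return d_bar / (s_d / n ** 0.5)

# Invented paired measurements for five subjects
before = [27, 31, 30, 25, 28]
after = [24, 30, 28, 24, 26]
print(round(paired_t_statistic(before, after), 3))  # → 4.811
```

The confusion students face is rarely in this arithmetic; it is in deciding which of the seven recipes applies.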

An introductory operations research course might include any number of topics: linear programming, integer programming, inventory control, queueing, simulation, decision analysis, critical path, the assignment problem, dynamic programming, systems analysis, financial modelling… and I would hope some overall teaching about models and the OR process.

## Issues with the pile of tools

Of course we need to teach the essential tools of our discipline, but there are two issues arising from this approach.

The obvious one is that students are left bewildered as to which test they should use when. Because of the way textbooks and courses are organised, students don’t usually have to decide which tool to use in a given situation. If the preceding chapter is about linear programming, then the questions will be about linear programming.

The second issue is that unless students are helped, they fail to see the connections between the techniques and are left with a fragmented view of the discipline. It is not just a question of which tool to use for which task, it is about seeing the linkages and the similarities. We want to help them have integrated knowledge.

## Providing activities to help with organisation

In both my introductory courses I attempted to address this, with varying degrees of success.

In our management science course we ended the year with a case describing a situation with multiple needs, and the students were to identify which technique would be useful in each instance. Then the final exam had a similar question, with specific questions about over-arching concepts such as deterministic and stochastic inputs, and the purpose of the model – to optimise or to inform. This was also an opportunity to address issues of ethics and worldview.

In the final section of the business statistics course we had a large bank of questions for students to work through, to give them practice in deciding which test to use. I was careful to make sure that there was more than one question related to each scenario, so that students would not learn unhelpful shortcuts, such as: if the question is about weight loss, the answer must be a paired difference of two means. I also analysed the mistakes in the multichoice answers, to see where confusion was arising, sometimes due to poor wording. From this I refined the questions.

## Examples of the questions for test choice in hypothesis testing

Management thinks there is a difference in productivity between the two days of the week in a certain work area. The production output of a random sample of 15 factory workers is recorded on both a Tuesday and a Friday of the same week. For each worker, the number of completed garments is counted on both days.

A restaurant manager is thinking of doing a special “girls’ night out” promotion. She suspects that groups of women together are more likely to stay for dessert than mixed adult or family groups. For the next two weeks she gets the staff to write down for each table whether they stay for dessert, and what type of group they are. She asks you to see if her suspicion is correct.

A human resources department has data on 200 past employees, including how long, in months, they stayed at the company, and the mark out of 100 they got in their recruitment assessment. They ask you to work out whether you can predict how long a person will stay, based on their test mark.

A researcher wanted to investigate whether a new diet was effective in helping with weight loss. She got 40 volunteers and got 20 to use the diet and the other 20 to eat normally. After 6 weeks the weights (in kg) before and after were recorded for each volunteer, and the difference calculated. She then looked at how the weight losses differed between the two groups.

## Comment on the questions

You might notice that all the examples are in a business context. This is because this is a course for business students, and they need to know that what they are learning is relevant to their future. Questions about dolphins and pine trees are not suitable for these students. (Unless we are making money out of them!)

## The master diagram

The students worked through these multiple-choice questions on-line, and we offered help and coached them through questions with which they had difficulty. By taking my turn with the teaching assistants in the computer labs, I was able to understand better how the students perceived the tests, and ways to help them. The result is a diagram, or set of diagrams, which shows the relationships between the tests, and a procedure to help them make the decision. I am a great believer in diagrams, but they need to be well thought out. Many textbooks have branching diagrams, showing a decision process for which test to use. I felt there was a more holistic way to approach it, and thought long and hard, and tried out my diagrams on students before I came up with our different approach. You can see the diagrams by clicking on the link to the pdf, which you can download: Choosing the test diagrams

The three questions which help the students to identify the most appropriate test are:

1. What level of measurement is the data – Nominal or interval/ratio?
2. How many samples do we have?
3. What is the purpose of our analysis?
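To make the procedure concrete, the three questions can be thought of as a simple decision function. The sketch below (in Python) is only a toy, hypothetical version of the branching; the real diagrams cover more cases and nuances:

```python
def choose_test(level, n_samples, purpose):
    """Toy decision procedure based on the three questions.

    level: "nominal" or "interval/ratio"
    n_samples: how many samples we have
    purpose: "difference" or "relationship"
    (A simplified sketch -- the actual diagrams cover more cases.)
    """
    if level == "nominal":
        return "chi-squared test"          # counts in categories
    if purpose == "relationship":
        return "regression/correlation"    # interval data, looking for a link
    if n_samples == 1:
        return "one-sample t-test"         # compare a mean with a fixed value
    if n_samples == 2:
        return "two-sample t-test"         # compare the means of two groups
    return "ANOVA"                         # three or more groups
```

For example, the restaurant dessert question involves nominal data (stayed for dessert or not, by group type), so the sketch sends it to a chi-squared test, while the recruitment-mark question is interval/ratio data with a relationship purpose, so it heads to regression.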

I made an on-line lesson which takes the students through the steps over and over, and created the diagrams to help them. Time and again the students said how much it helped them to fit it all together. Eventually I made the following video, which is on YouTube. I suspect it must be coming up to summary time in courses in the US, as this video has recently attracted a lot of views, and positive comments.

The video is also part of our app, AtMyPace: Statistics, along with two sets of questions to help students learn about the different types of tests and how to tell them apart. You can access the same resources on-line through AtMyPace: Statistics at statsLC.com.

It is important to see the subject as a whole, and not as a jumbled mass of techniques and ideas. This approach has really helped my students, and many others through the video and app.

## Best wishes for the holiday season

It is Christmas time and here in Christchurch the sun is shining and barbecues and beaches are calling. I am taking a break from the blog for the great New Zealand shut-down and will be back in the New Year.

Thank you for all the followers and especially your comments, Likes and ReTweets.

# Teaching time series with limited computer access

Last century this wasn’t really an issue, at least not in high schools, as statistics has been a peripheral part of the mathematics curriculum and the mathematics of statistics has been taught as a subset of mathematics.

But this is changing, and it looks as if the change is starting in New Zealand. The NZ school curriculum has leapt ahead of the rest of the world. Statistics is taught at all levels and at the higher levels of high school, statistics is taught as it is actually done in practice – using computers. All analysis is done by a computer package, particularly using iNZight, a purpose-built, free package. The emphasis is on understanding, concepts and critical thinking, rather than the mechanical and slow application of formulas. The rigour has moved from the calculations to the meaning. It is SO exciting!

One big concern for many teachers is access to computers. In many schools there aren’t enough computer suites to schedule the students in for their statistics classes. So how do we deal with this?

It might seem that the computers are needed every day, but in fact they aren’t. And neither is it necessary to have one computer per student.

## Make them share

I’ve never had a problem when students have had to share computers. I find that people who share a computer learn better than those who try to work it out on their own. I actively encourage sharing computers in a lab.

I recently had the opportunity to be on the learning end, with computer instruction. The teacher was showing what to do at the front, and we in the class were echoing her steps on our computers. This is not ideal, as it requires everyone to be at the same pace, but as we were adults it was fine. I hadn’t brought my laptop, so I was sharing with another student. I’m pretty sure I learned more, as I got to follow what was happening on both computers, rather than trying to work it out and keep up. I was also able to help my partner, as she would lose track of what was happening when her computer wasn’t doing what it was meant to.

I have found this to be true at all levels, especially when learning a new package. Having two heads at the computer encourages discussion, which is an important element in learning. Students are also more likely to ask questions when they have already discussed a problem with another student. Pairing is so useful that some software companies get programmers to work in pairs, sharing a computer and work desk (a practice known as pair programming), because they have discovered the benefits it brings.

## Think about what we are trying to teach

I am currently developing resources for a unit in time series analysis, based on the New Zealand curriculum and using the free software iNZight. At first glance, you might think that the entire unit would need to be taught in a computer lab. This is definitely not the case. In fact, because of the layout of many computer labs, you are better off staying out of them for most of the unit, so that students can work in groups.

I find that it is worthwhile to think about the attitudes, skills and knowledge that we wish our pupils to develop in a unit – in that order of importance! These examples are illustrative rather than exhaustive.

## Attitudes

By the end of the topic all students should feel that time series analysis is interesting and relevant (and maybe even fun!).

Time series analysis is pretty straightforward at the beginner level, but can be quite exciting. Once you know the basics, and with a convenient package to speed up the mechanics, you can do some interesting detective work. I would want the students to share some of this excitement, and start to explore on their own.

## Skills

Students should be able to:

• identify elements of a time series, relating them to the real life context.
• write a report on a time series analysis using correct terminology, clear enough for a non-expert reader to understand.
• use iNZight to analyse different time series.

## Knowledge

• Students should be able to explain and apply the following terms correctly: time series, trend, seasonality, stationary, noise, variation.

And that is about it really!

So how do we do this, with or without full computer access?

Even with unlimited computer access I would get students to work in pairs for much of the time. I would start away from the computers. First display graphs of time series to the class and get them to write down sentences about them in their pairs. Then share with the class. We should get sentences like, “It mainly goes up, and then it goes down” and “there is a pattern that repeats”. From that the teacher can introduce the ideas of trend, seasonality and noise, modelling the correct use of specialist language.
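If a computer does become available, trend, seasonality and noise can also be demonstrated by building a series out of those very parts. The sketch below is in Python with invented numbers (a class would use iNZight); because the series is constructed from known components, students can check how well a crude decomposition recovers them:

```python
import numpy as np

rng = np.random.default_rng(1)
months = np.arange(48)                            # four years of monthly data
trend = 100 + 0.5 * months                        # "it mainly goes up"
seasonal = 10 * np.sin(2 * np.pi * months / 12)   # "a pattern that repeats"
noise = rng.normal(0, 2, size=48)                 # unexplained variation
series = trend + seasonal + noise

# A crude decomposition: fit a straight line for the trend, then average
# the detrended values month by month to estimate the seasonal pattern.
slope, intercept = np.polyfit(months, series, 1)
detrended = series - (slope * months + intercept)
seasonal_estimate = detrended.reshape(4, 12).mean(axis=0)
```

The estimated seasonal pattern should come out close to the sine wave that went in, with the leftover wobble showing what the noise does to real analyses.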

Then I would talk about the context – or maybe the context should have come first… The time series chosen should be one with an easy to identify context, such as retail sales of recreational goods, or patterns of tourist arrivals. These series are available in New Zealand at Infoshare or in iNZight format via Statslc. Other countries will have similar series available. Again get the students to write down sentences, this time relating them to the context.

Homework could be to find a graph of a time series on-line or in a magazine. Or to make a list of things that might show seasonality.

Next I would get the students onto the computers in pairs. They should have a worksheet like the one here, so that they can work step-by-step through the package at their own pace in pairs. At some time during the class they could swap roles, if one has been instructing and the other operating.

The data set here RetailNZTS4 has four series in it, which show different behaviours. Students should see if they can get all the graphs they need for further analysis.

Four time series compared using iNZight software

The next class is away from the computers again. Here they are writing sentences about the graphs. They should do this alone, and in pairs, and compare in groups. It would be good to have a computer or two available for students to take turns to get any graphs they might find they need. When people are in front of a computer it tends to dominate their thinking and they can produce far too much output with very little thought. Moving away from the computer encourages a more reflective approach.

Then start on another data set. I would use the one about accommodation, AccRegNZTS13 comparing the seasonal patterns of occupancy in different regions of New Zealand. If there are enough computers, the students can spend one day creating the graphs and exploring, then the next day writing it up. Maybe different groups could take different regions, and find out why the pattern is the way it is for that region, then report back to the class.

Then the teacher may like to give some of the mathematical background to how a computer package would go about producing the output.

The learning is in the writing and the talking.

The point I’m trying to make is that you actually need to move away from the computers quite often. If you are REALLY stuck for computers you could even print off (and laminate?) the outputs from the different time series, so that the students can study and discuss them. Number or name them for easy reference, and have question sheets to go with them. The computer is only the tool, and with a bit of creativity we can still teach the important attitudes, skills and knowledge with limited computer access.

I am aware as I am writing this that it is some time since I taught a class of high school students. I would be thrilled to hear comments from the “chalk-face” as to how realistic you think this is! And of course other suggestions will be welcome for teaching a computer-rich subject in a computer-poor environment.

Having said that, one of my experiences as a trainee teacher was having to teach my first lesson to a class at Rotorua Lakes High School during a powercut – which meant no computers and no OHP. We did desk-checking (how you can use pen and paper to look for bugs in code) and it went surprisingly well.

# The one-armed operations researcher

My mentor, Hans Daellenbach told me a story about a client asking for a one-armed Operations Researcher. The client was sick of getting answers that went, “On the one hand, the best decision would be to proceed, but on the other hand…”

People like the correct answer. They like certainty. They like to know they got it right.

I tease my husband that he has to find the best picnic spot or the best parking place, which involves us driving around for considerably longer than I (or the children) would like. To be fair, we do end up in very nice picnic spots. However, several of the other places would have been just fine too!

In a different context I too am guilty of this – the reason I loved mathematics at school was because you knew whether you were right or wrong and could get a satisfying row of little red ticks (checkmarks) down the page. English and other arts subjects, I found too mushy as you could never get it perfect. Biology was annoying as plants were so variable, except in their ability to die. Chemistry was ok, so long as we stuck to the nice definite stuff like drawing organic molecules and balancing redox equations.

I think most mathematics teachers are mathematics teachers because they like things to be right or wrong. They like to be able to look at an answer and tell whether it is correct, or if it should get half marks for correct working. They do NOT want to mark essays, which are full of mushy judgements.

Again I am sympathetic. I once did a course in basketball refereeing. I enjoyed learning all the rules, and where to stand, and the hand signals etc, but I hated being a referee. All those decisions were just too much for me. I could never tell who had put the ball out, and was unhappy with guessing. I think I did referee two games at a church league and ended up with an angry player bashing me in the face with the ball. Looking back I think it didn’t help that I wasn’t much of a player either.

I also used to find marking exam papers very challenging, as I wanted to get it right every time. I would agonise over every mark, thinking it could be the difference between passing and failing for some poor student. However, as the years went by, I realised that the odd mistake or inconsistency here or there was normal, and within the range of error. To anyone who failed by one mark, my suggestion is not to be borderline. I’m pretty sure we passed more people we shouldn’t have than the other way around.

## Life is not deterministic

The point is, that life in general is not deterministic and certain and rule-based. This is where the great divide lies between the subject of mathematics and the practice of statistics. Generally in mathematics you can find an answer and even check that it is correct. Or you can show that there is no answer (as happened in one of our national exams in 2012!). But often in statistics there is no clear answer. Sometimes it even depends on the context. This does not sit well with some mathematics teachers.

In operations research there is an interesting tension between optimisers and people who use heuristics. Optimisers love to say that they have the optimal solution to the problem. The non-optimisers like to point out that the problem solved optimally is so far removed from the actual problem that all it provides is an upper or lower bound on a practical solution to the real-life problem situation.

Judgment calls occur all through the mathematical decision sciences. They include:

• What method to use – Linear programming or heuristic search?
• Approximations – How do we model a stochastic input in a deterministic model?
• Assumptions – Is it reasonable to assume that the observations are independent?
• P-value cutoff – Does a p-value of exactly 0.05 constitute evidence against the null hypothesis?
• Sample size – Is it reasonable to draw any inferences at all from a sample of 6?
• Grouping – How do we group by age? by income?
• Data cleaning – Do we remove the outlier or leave it in?

A comment from a maths teacher on my post regarding the Central Limit Theorem included the following: “The questions that continue to irk me are i) how do you know when to make the call? ii) What are the errors involved in making such a call? I suppose that Hypothesis testing along with p-values took care of such issues and offered some form of security in accepting or rejecting such a hypothesis. I am just a little worried that objectivity is being lost, with personal interpretation being the prevailing arbiter which seems inadequate.”

These are very real concerns, and reflect the mathematical desire for correctness and security. But I propose that the security was an illusion in the first place. There has always been personal interpretation. Informal inference is a nice introduction to help us understand that, and in fact it would be a good opportunity for lively discussion in a statistics class.

With bootstrapping methods we don’t have any less information than we did using the Central Limit Theorem; we just haven’t assumed normality or independence. There was no security. There was the idea that with a 95% confidence interval, for example, we are 95% sure that the interval contains the true population value. I wonder how often we realised that 1 time in 20 we were just plain wrong, and that in quite a few instances the population parameter would be far from the centre of the interval.
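The 1-in-20 point is easy to check by simulation. The sketch below (Python, with an invented population and design) builds conventional 95% intervals from repeated samples and counts how often they capture the true mean:

```python
import numpy as np

rng = np.random.default_rng(42)
mu, sigma, n, reps = 50.0, 10.0, 30, 2000   # invented population and design
covered = 0
for _ in range(reps):
    sample = rng.normal(mu, sigma, size=n)
    se = sample.std(ddof=1) / np.sqrt(n)    # estimated standard error
    lo = sample.mean() - 1.96 * se
    hi = sample.mean() + 1.96 * se
    if lo <= mu <= hi:
        covered += 1
coverage = covered / reps                   # roughly 0.95: about 1 in 20 miss
```

Running it shows the coverage hovering near 95%, which means that in roughly one repetition in twenty the interval simply does not contain the truth.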

The hopeful thing about teaching statistics via bootstrapping, is that by demystifying it we may be able to inject some more healthy scepticism into the populace.

# Teaching Experimental Design – a cross-curricular opportunity

The elements that make up a statistics, operations research or quantitative methods course cover three different dimensions (and more). There are:

• techniques we wish students to master,
• concepts we wish students to internalise, and
• attitudes and emotions we wish the students to adopt.

Techniques, concepts and attitudes interact in how a student learns and perceives the subject. Sadly it is possible (and not uncommon) for students to master techniques, while staying oblivious to many of the concepts, and with an attitude of resignation or even antipathy towards the discipline.

## Techniques

Often, and less than ideally, course design begins with techniques. The backbone is a list of tests, graphs and procedures that students need to master in order to pass the course. The course outline includes statements like:

• Students will be able to calculate a confidence interval for a mean.
• Students will be able to formulate a linear programming model from data.
• Students will use Excel to make correct histograms. (Good luck with this one!)

Textbooks are organised around techniques, which usually appear in a given sequence, relying on the authors’ perception of how difficult each technique is. Textbooks within a given field are remarkably similar in the techniques they cover in an introductory course.

## Concepts

Concepts are more difficult to articulate. In a first course in statistics we wish students to gain an appreciation of the effects of variation. They need to understand how data from a sample differs from population data. In all of the mathematical decision sciences students struggle to understand the nature of a model. The concept of a mathematical model is far from intuitive, but essential.

## Attitudes

You can’t explicitly teach attitudes. “Today class, you are going to learn to love statistics!” Attitudes are absorbed, formed and reformed as part of the learning process, as a result of prior experiences and attitudes. I have written a post on Anxiety, fear and antipathy for maths, stats and OR, which describes the importance of perseverance, relevance, borrowed self-efficacy and love in the teaching of these subjects. Content and problem-context choices can go a long way towards improving attitudes. The instructor should know whether his or her class is more interested in the trajectories of gummy bears, or in more serious topics such as cancer screening and crime prevention. Classes in business schools will use different examples than classes in psychology or forestry. Whatever the context, the data should be real, so that students can really engage with it.

I was both amused and a little saddened at this quote from a very good book, “Succeed – how we can reach our goals”. The author (Heidi Grant Halvorson) has described the outcomes of some interesting experiments regarding motivation. She then says, “At this point, you may be wondering if social psychologists get a particular pleasure out of asking people to do really odd things, like eating Cheerios with chopsticks, or eating raw radishes, or not laughing at Robin Williams. The short answer is yes, we do. It makes up for all those hours spent learning statistics.” Hmmm

## Experimental Design

So what does this have to do with experimental design?

I have a little confession. I’ve never taught experimental design. I wish I had. I didn’t know as much then as I do now about teaching statistics, and I also taught business students. That’s my excuse, but I regret it. My reasoning was that businesses usually use observational data, not experimental data. And it’s true, except perhaps in marketing research, process control and possibly several other areas. Oh.

George Cobb, whom I have quoted in several previous posts, proposed that experimental design is a mechanism by which students may learn important concepts. The technique is experimental design, but taught well, it is a way to convey important concepts in statistics and decision science. The pivotal concept is that of variation. If there were no variation, there would be no need for statistics or experimentation. It would be a sad, boring deterministic world. But variation exists, some of which is explainable, and some of which is natural, some of which is due to sampling and some of which is due to bad sampling or experimental practices. I have a YouTube video that explains these four sources of variation. Because variation exists, experiments need to be designed in such a way that we can uncover as best we can the explainable variation, without confounding it with the other types of variation.

The new New Zealand curriculum for Mathematics and Statistics includes experimental design at levels 2 and 3 of the National Certificate of Educational Achievement (the last two years of secondary school). The assessments are internal, and teachers help students set up, execute and analyse small experiments. At level two (implemented this year) the experiments generally involve two groups which are given two treatments, or a treatment and a control. The analysis involves boxplots and informal inference. Some schools used paired samples, but found the type of analysis to be limited as a result. At level three (to be implemented in 2013) this is taken a step further, but I haven’t been able to work out what this step is from the curriculum documents. I was hoping it might be things like randomised block design, or even Taguchi methods, but I don’t think so.

## Subjects for Experimentation

Bearing in mind the number of students, many of whom will wish to use other members of the class as subjects, there can be issues of time and fatigue. Here are some possibilities. It would be great if other suggestions could be added as comments to this post.

## Behavioural

Some teachers are reluctant to use psychological experiments, as it can be a bit worrying to use our students as guinea pigs. However, this is probably the easiest option, and provided informed and parental consent is received, it should be acceptable. All sorts of ideas have been suggested, such as the effects of various distractions (and legal stimulants) on task completion. There are possible experiments in Physical Education (evaluate the effectiveness of a performance-enhancing programme), or in Music – how do people respond to different music?

I’d love to see some experiments done on the time taken to solve Rogo puzzles, and on the effects of route length, number choice, size or age!

## Biology

Anything that involves growing things takes a while and can be fraught. (My own recollection of High School biology is that all my plants died.) But things like water uptake could be possible. Use sticks of celery of different lengths and see how much water they take up in a given time. Germination times or strike rates under different circumstances using cress or mustard?  Talk to the Biology teacher. There are assessment standards in NZ NCEA at levels 2 and 3 which mesh well with the statistics standards.

## Technology

Baking. There are various ingredients that could have two or three levels of inclusion – making muffins with and without egg – does it affect the height? Pretty tricky to control, but fun – maybe use uniform amounts of mixture. Talk to the Food tech teacher.

Barbie bungee jumping. How does Barbie’s weight affect how far she falls? By having Barbie with and without a backpack, you get the two treatments. The bungee cords can be made out of rubber bands or elastic.

Things flying through the air from catapults. This has been shown to work as a teaching example. There are a number of variables to alter, such as the weight of the object, the slope of the launchpad, and the person firing.

## Inject statistical ideas in application areas

John Maindonald from ANU made the following comment on a previous post: “I am increasingly attracted to the idea that the place to start injecting statistical ideas is in application areas of the curriculum.  This will however work only if the teaching and learning model changes, in ways that are arguably anyway necessary in order to make effective use of those teachers who have really good and effective mathematics and statistics and computing skills.”

How exciting is that? Teachers from different discipline areas working together! There may well be logistical issues and even problems of “turf”. But wouldn’t it be great for mathematics teachers to help students with experiments and analysis in other areas of the curriculum? The students will gain from the removal of “compartments” in their learning, which will help them to integrate their knowledge. The worth of what they are doing would be obvious.

(Note for teachers in NZ: a quick look through the “assessment matrices” for other subjects uncovered a multitude of possibilities for curricular integration, if the logistics and NZQA allow.)

# Why resampling is better than hypothesis tests and confidence intervals

The author, with kiwi, visiting George Cobb in his office at Mt Holyoke in April 2008. (Not very flattering for either of the humans.)

I love to read George Cobb’s writing. In person I found him a kind and intelligent man, and a generous host. But his writing is laugh-out-loud funny at times, provocative and inspiring. I suspect that he may be the indirect cause of near civil unrest among maths teachers in New Zealand. The link between George Cobb and mathematics teachers’ concerns is resampling. I will provide some background, then explain.

The New Zealand curriculum is divided into eight learning areas, one of which is called Mathematics and Statistics. The separate acknowledgement of Statistics, which I believe occurred in 2007, is indicative of the status which statistics is now afforded in the curriculum. It also sends the message that the subject of statistics is not simply a part of mathematics, but is its own discipline. This has met with approval from statisticians, and a mixed reception from mathematicians, some of whom would still like to keep statistics firmly tucked in as a minor bedfellow of algebra and trigonometry. Regular readers will be aware of my feelings on this. They are expressed in the posts Hey mathematics, leave the stats alone and last week’s offering What mathematics teachers need to know about statistics. How I came to these views, from maths teacher to Operations Researcher to statistics educator, is described in another post, the End of OR at UC.

Along with the change in title to Mathematics and Statistics has come a new approach to the study of statistics at all levels of schooling. In the final year of schooling, there are now enough assessment items to provide a separate subject called Statistics. This overcomes the odd conflation “Mathematics with Statistics”, which evolved into “Statistics and Modelling”, and has now cast off any vestiges of Operations Research (sniff) to stand proudly as “Statistics”. Serious students of mathematics need to take both subjects. In the same way that Science at Year 11 becomes three subjects (Biology, Physics and Chemistry) in Year 12, Mathematics at Year 12 now splits into two subjects, Mathematics and Statistics, at Year 13.

The most difficult aspects of the curriculum changes are experimental design, critiquing reports, and resampling. The rest of this post will address resampling.

## Cast off the t-test

George Cobb, in his article “The Introductory Statistics Course: A Ptolemaic Curriculum”, provides an overview of the problems of inferential statistics and how technological advances have freed us from the shackles of assuming normality, or making myriad corrections to allow for non-normality. But though the shackles are open, it is difficult to cast them aside. Cobb points out that the t-test is the centerpiece of the introductory statistics curriculum because that is what scientists and social scientists use most often. Scientists and social scientists use t-tests most often because that is what they were taught in introductory statistics courses.

Cobb’s argument is that we are living with the legacy of lack of computational power. Until the advent of computers there was no choice but to use analytic methods, as computation was impossible. In operations research we see that neat little work-arounds and approximations to reduce computational time are no longer needed as computers become increasingly powerful. In statistical analysis the normal distribution was used to approximate the true distribution because anything else was prohibitively difficult to compute. This is no longer the case. Elegance, which is desirable in pure mathematics, has no place in the dirty world of statistics and real data. Cobb states, “We need to throw away the old notion that the normal approximation to a sampling distribution belongs at the center of our curriculum, and create a new curriculum whose center is the core logic of inference.”

These are fighting words, and it seems that the New Zealand curriculum is the first to take up the challenge. The University of Auckland has provided computational tools to enable resampling, with supporting materials. Thanks to iNZight it is possible for all students to take repeated samples and explore the outcomes without the burden of repeated hand calculation. The graphical displays further aid understanding.

## Dumbing down or more appropriate?

Mathematics teachers are concerned that the resampling approach is a “dumbing down” of the curriculum. It does seem that way at first – that we are leaving the “difficult” material of proper confidence intervals to university. However, the intention is that students will actually understand what inference is about, which will make the learning of the traditional methods (now also automated) almost trivial. I don’t have a problem with confidence intervals and p-values as they are. They are pretty easy to compute. I do see a problem with an entire exam section which simply required students to select the correct formula, plug in the values and give the result. I am happy to concede that the computational and mathematical requirements in the new statistics curriculum are reduced. But that is because the subject is statistics, not mathematics, and other skills are used and developed. The aim is to develop statistical literacy, reasoning and thinking.

Teachers have expressed concern that traditional statistics is more rigorous than the resampling method. Because traditional statistics encompasses formulas and proofs it SEEMS more rigorous and correct. But they are wrong! Using analytical methods gives us deceptively exact answers to what are often seriously flawed models. Fisher himself explained in 1936 that the analytical method was used because the simple and tedious method of resampling was not possible. (See the link to the Cobb paper above.) I can see why teachers might think traditional methods preferable, as maths teachers are seldom statisticians. This is why there is a national curriculum: so that decisions about what students learn do not rely on the knowledge of one individual. You may notice that my videos generally teach the traditional ideas of p-values and confidence intervals. I am a recent convert to resampling. (Perhaps with the usual evangelistic zeal of a new convert.)

In “Developing Students’ Statistical Reasoning: connecting research and teaching practice”, Garfield and Ben-Zvi suggest that “ideas of informal inference are introduced early in the course, and revisited with growing complexity”. This is what will be happening year by year in the New Zealand setting, if teachers are given enough support to enact the curriculum.

## What is resampling/randomisation/bootstrapping?

Cobb summarises resampling as “three R’s: randomize, repeat, reject. Randomize data production; repeat by simulation to see what’s typical and what’s not; reject any model that puts your data in its tail.”
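The three R’s can be turned into a short sketch for a two-group comparison, similar in spirit to the diet experiment described earlier. The data below are invented for illustration; the randomising is assumed to have happened in the data production, we repeat by shuffling the group labels, and we reject the chance-alone model if the observed difference sits in its tail:

```python
import numpy as np

rng = np.random.default_rng(7)

# Invented weight losses (kg) for a diet group and a control group
diet = np.array([2.1, 3.4, 1.8, 4.0, 2.7, 3.1, 0.9, 2.5, 3.8, 1.6])
control = np.array([0.4, 1.1, -0.2, 0.8, 1.5, 0.3, 0.9, -0.5, 1.2, 0.6])

observed = diet.mean() - control.mean()
pooled = np.concatenate([diet, control])

# Repeat: shuffle the labels many times to see what chance alone produces
diffs = []
for _ in range(10000):
    rng.shuffle(pooled)
    diffs.append(pooled[:diet.size].mean() - pooled[diet.size:].mean())

# Reject: the proportion of shuffles at least as extreme as the observed
# difference plays the role of a p-value
p_value = np.mean(np.array(diffs) >= observed)
```

If hardly any of the shuffled differences reach the observed one, the chance-alone model sits uncomfortably with the data, and we reject it.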

In essence, you use the sample data to take large numbers of random samples and examine the behaviour of these samples. From there you can see how likely a result such as the original one is to occur by chance (similar to a p-value). You can also use multiple samples (taken with replacement) to create confidence intervals. It seems too simple to be true, but it is a better approach than using the flawed approximations of regular statistical analysis. There is time enough to learn those later on, when you want to be published in an academic journal. Once students truly understand inference, learning other techniques will be more straightforward, and one hopes they will have enough understanding to be critical of them.
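A bootstrap confidence interval of the kind described above can be sketched in a few lines. The data here are invented; resample with replacement from the sample itself, and take the middle 95% of the resampled means:

```python
import numpy as np

rng = np.random.default_rng(0)
sample = np.array([12.1, 9.8, 14.2, 10.5, 13.3, 11.7, 9.9, 12.8, 10.1, 13.0])

# Take many resamples (with replacement) and record each resample's mean
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(10000)
])

# The middle 95% of the bootstrap means gives a percentile interval
lo, hi = np.percentile(boot_means, [2.5, 97.5])
```

No normality assumption, no formula for the standard error; the spread of the resampled means does the work that the Central Limit Theorem used to do.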

## How do you teach resampling?

Good question. I would begin with a small example which can be started by hand and then finished off with a simulation. There is a nice little one in the Cobb paper cited earlier. Then work through several more examples using quite different contexts. I would use the iNZight programs or Excel, depending on the nature of the problem. With a class you can get quite a few iterations of a small problem in a reasonably short time. I’m not a great believer in homework for the sake of it (a story for another day), but getting students to hand-iterate an experiment a few times at home sounds ideal. There are materials with suggestions on the Census at School site. There doesn’t seem to be much on the TKI site, however (as at September 2012). Let us hope that there will be more soon for teachers who are planning for the 2013 school year.

I’m excited, and I have already written too much for one day. We are at the start of a wonderful adventure in teaching and curriculum development, and yet again New Zealand is leading the world. I hope I can help to make it happen.

By the way – please comment – I can’t be getting it right all the time, and dissent is important!

# Forget algebra – is Statistics necessary?

There is a popular (amongst statisticians) statement from H. G. Wells. Usually it is quoted as: “Statistical thinking will one day be as necessary for efficient citizenship as the ability to read and write.”

According to a paper in Historia Mathematica, what Wells actually said (in 1929) was:

“The time may not be very remote when it will be understood that for complete initiation as an efficient citizen of one of the new great complex world wide states that are now developing, it is as necessary to be able to compute, to think in averages and maxima and minima, as it is now to be able to read and to write.”

Not quite as pithy as the paraphrase, and sadly he didn’t mention statistics specifically. But the point is, he was correct – or would have been, if he had actually said what he is attributed as having said.

## Statistical understanding is a fundamental literacy for the twenty-first century.

My blog post title today is intentionally provocative, and based on the article by Andrew Hacker in the New York Times, “Is Algebra Necessary?”. This article has many academics and others quite exercised at the thought that algebra might not be essential to all students – or even that someone could dare to suggest that this might be the case. As it is not clear to me what the term “algebra” encompasses, I find it difficult to decide one way or the other. Some aspects of algebra are really handy, and I teach them in my Quantitative Methods for Business course. (Much to the disgust of some of my students.) But an awful lot of algebra, though fun and useful for many professions, is not really essential for the general populace. To me it is more important for someone to be able to interpret statements of causation correctly than to be able to solve a quadratic. Hacker’s point is that this obsession with algebra for all is providing a barrier to students who are otherwise talented and capable.

It seems that people on Team “Algebra for All” have a somewhat privileged view of the populace. They are concerned for college students, particularly physicists, engineers and biologists, and they seem to be focussed on occupational concerns more than citizenship. Surely the subjects that everyone is required to master should be the subjects needed for being an “efficient citizen,” to borrow Wells’s phrase. What skills, attitudes and knowledge do we want all our citizens to have, regardless of their career path? I think an understanding of variability and data is pivotal to effective decision-making.

By the time a person leaves compulsory schooling they should have a working understanding of the nature of variation in the universe and the implications of this variation. They should be able to examine data presented in various forms and make judgments from it. The Guidelines for Assessment and Instruction in Statistics Education (GAISE) Report of 2005 states: “Every high school graduate should be able to use sound statistical reasoning to intelligently cope with the requirements of citizenship, employment, and family and to be prepared for a healthy, happy, and productive life.” How does algebraic reasoning fit in that sentence? It is more difficult to see the direct benefit to citizenship, though for some employment it would be needed.

The study of the discipline of statistics teaches a wide range of skills: number skills, writing, critical thinking, application, lateral thinking, argument, reasoning, visual interpretation, communication, persistence, and coping with ambiguity. These are skills important for citizenship.

Andrew Hacker, in his controversial article “Is Algebra Necessary?”, said, “Ours is fast becoming a statistical age, which raises the bar for informed citizenship.”

And Rob Knop commented in his response to the Hacker article, “So, yes, I would agree that we could and perhaps should de-emphasize algebra in favor of making time for statistical awareness, and perhaps in filling in the basic number sense that students failed to get out of elementary school.”

It is interesting that both sides of the argument agree on the necessity of statistics in education.

In New Zealand the curriculum area previously known as Mathematics is now called Mathematics and Statistics, and statistics is getting a much greater emphasis at all levels of schooling. However, there are mathematics teachers who still perceive statistics as one of many sub-branches of mathematics, though this is not how statisticians perceive their discipline. (For more about this maths/stats divide, see an earlier post, “Hey mathematics – leave the stats alone.”) Problems are arising, as many of the teachers are not as familiar with statistics as they would like to be. It has been interesting reading the bulletin boards where teachers express their concerns. The transition will be challenging for many, and there may arise a new breed of teacher who specialises in teaching statistics.

This is an exciting time to be a statistics educator. The research is there, the will is there, the technology is there and the need is there. Move over, Algebra. Statistics is coming through.

## Afterword

For any loyal followers who tune in each week, there will be a break for a few weeks unless I can convince my colleague to do a guest post. See you in September!