Mark Twain attributed the saying “Lies, damned lies and statistics” to Benjamin Disraeli – and it still holds a modicum of truth.

I was accosted over the weekend by a petition-wielding lady with statistics at her fingertips. She wanted something banned (it doesn’t matter what; it is the process I am discussing) – and she had all the facts and figures to hand. But the problem I found with it was that she ultimately wanted this restriction of freedom because 10% of the population agreed that it should be so.

When I pointed out that this meant 90% of the population disagreed with it, she bristled and accused me of ignoring the minority opinion. I then pointed out that she was ignoring the majority opinion, which she dismissed as ‘unimportant’.

In the end I didn’t sign her petition. I cannot recall what the perceived injustice was, but I was staggered by the naivety of the reasoning. Statistics are being used more and more to justify any number of decisions; and increasingly I find that positive actions are being urged on the basis of negative statistics.

According to Twain, Disraeli didn’t trust statistics either; and I am becoming disenchanted myself. But many speakers will use statistics to prove their ideas or opinions are right. If 75% of school children can’t locate Papua New Guinea on a map, it might indeed indicate that they are not being taught regional geography. But if that figure is only 25%, can we draw the same conclusion?

I am not going into sets and subsets; it’s been a long time since I waded in those murky waters! But if we understand that statistics are largely concerned with discovering possible connections between disparate facts, then we can have a very general idea of what is being presented to us.

Statistics use samples to obtain results, and the Nielsen TV audience ratings are a perfect example of how this works. When we read that 1.5 million people watched a certain programme, we don’t believe that someone went out and did a head count, now do we? Of course not; a sample is taken from certain families which are used, demographically, to represent the whole. That these families are selected carefully goes without saying, but the end result is that one family can be used to represent the viewing habits of thousands of others.
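To make the arithmetic concrete, here is a minimal sketch of that extrapolation. Every number in it is made up for illustration – the panel size, the audience share and the household total are assumptions, not Nielsen’s actual figures:

```python
# Sketch of sample-to-population extrapolation, with hypothetical numbers.
panel_size = 5_000             # households on the ratings panel (assumed)
panel_watching = 375           # panel households tuned to the programme (assumed)
total_households = 20_000_000  # TV households in the whole population (assumed)

share = panel_watching / panel_size    # 375 / 5,000 = 7.5% of the panel
estimate = share * total_households    # scale that share up to everyone

print(f"Panel share: {share:.1%}")                        # Panel share: 7.5%
print(f"Estimated audience: {estimate:,.0f} households")  # 1,500,000 households
```

With these assumed numbers, each panel household stands in for 4,000 others (20,000,000 ÷ 5,000) – which is exactly the ‘one family represents thousands’ principle at work.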

Advertising rates are decided on the basis of these samples, and the same principle is used to generate figures that are said to reflect the views of the population at large. But do they? If so, how can one political party be 5 points ahead of its opposition in Poll A, but trailing in Poll B? Both outcomes are the results of opinion polls which created the statistics.
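Part of the answer is plain sampling error. Here is a minimal sketch, assuming a hypothetical true support level and sample size, of how two honest polls of the very same electorate can report different leads:

```python
import random

TRUE_SUPPORT = 0.51  # assumed true support for Party A in a two-party race
SAMPLE_SIZE = 500    # assumed number of respondents per poll

def run_poll() -> float:
    """Simulate one poll: ask SAMPLE_SIZE random voters, return Party A's share."""
    supporters = sum(random.random() < TRUE_SUPPORT for _ in range(SAMPLE_SIZE))
    return supporters / SAMPLE_SIZE

# Two independent polls drawn from the same underlying opinion.
for name in ("Poll A", "Poll B"):
    share = run_poll()
    print(f"{name}: Party A on {share:.1%} (lead {2 * share - 1:+.1%})")
```

With 500 respondents, the measured share typically wanders a couple of percentage points either side of the truth (the standard error here is roughly √(0.51 × 0.49 / 500) ≈ 2.2 points), so the reported lead can easily swing several points between polls – enough for one poll to show Party A ahead and another to show it behind.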

If we are to have some chance of determining the validity of statistics we must first determine some guidelines:

  • Who conducted the study and how reputable are they?
  • Did those conducting the study have a vested interest in the outcome?
  • What exactly was the study measuring?
  • Who was interviewed?
  • What questions were asked?
  • What were the comparisons?

Who conducted the study?

People using statistics to prove their point often tell us that numbers don’t lie – but we can find statistics to prove that dairy foods are bad for our health and others that show we need them to stay healthy. We can find studies that show some soft drinks cause cancer, and others that say the only thing they do is make you thirstier. All these statistics are available, so numbers don’t lie? Well, it all depends on who is publishing the figures and what they are trying to prove.

The argument about smoking is a case in point – are the statistics provided by a cancer society or a tobacco company? If we look at the soft drink discussion in America, we can find results of studies provided by the American Cancer Society and the US FDA – both are highly reputable organisations, yet they hold different opinions on the subject based on the statistics they have gathered. It’s a minefield out there!

What was the study designed to measure?

Another sound bite we hear a lot when discussing statistics is “You can’t compare apples and oranges” – but of course you can, if you ask the right questions.

If the study is designed to determine colour, then of course you can’t compare them; but if we wanted to find out about the sugar content of fruit, then comparing apples and oranges would be important. Again, asking about vitamin content? Then compare them. Asking about habitat? Then don’t. You see, it all depends on what you are measuring.

So ask yourself: is the comparison behind these statistics a valid one?

Who was interviewed and what was asked?

Is the group representative, and how unambiguous were the questions? Were they designed to measure opinions, emotions or feelings, or were they designed to measure facts? Again, answers will differ according to who you ask. Is crime rising? That depends on whether you ask victims or law enforcement agencies; not all crime is reported, so the figures will vary.

When asking people for their opinion it is even more difficult to get truly reliable figures. Many people will say what they think you want to hear; others will simply lie.

Some opinions, ideas and thoughts are shaped by outside circumstances, such as demographics, cultural background and education. The human factor is difficult to eliminate from surveys designed to tap into public opinion.

And the way in which the questions are asked can also affect the outcome – leading questions are not allowed in a courtroom, but if you have ever taken one of those telephone surveys you will have been asked one or two!

Questions can be framed to obtain the required response – lawyers who engage in cross-examination are well aware of that! So are companies with a vested interest in the outcome of a survey. Loaded questions are a killer: “Have you stopped cheating on your tax?” appears to ask for a simple yes-or-no answer – but think about it!

TV interviewers who claim to be investigative reporters are good at this: when the victim tries to say that they have never cheated, they get shouted down with “Just answer the question! Yes or no!” Of course they can’t answer the question as asked; it doesn’t allow for a truthful alternative.

Now it might seem from all the above that statistics are worthless, but that is not what I mean. I merely want to point out that some thought must be given to the statistics you quote in support of your argument. If you have other evidence for your position, it is wise to use it to back up the statistics; it is up to the user to show that the statistics in question are relevant and reliable.

Statistics are often the easiest and most concise way to express our evidence, but we need to be sure that they are valid and reliable for our audience to accept them as proof positive.

Let’s leave the last word with Mrs Robert Taft –

“I always find that statistics are hard to swallow and impossible to digest. The only one I can ever remember is that if all the people who go to sleep in church were laid end to end they would be a lot more comfortable.”

Michele @ Trischel
