The moment someone mentions statistics, the most common reaction is a big yawn or a sigh of disbelief. This is because most people will have heard the statistic in some ad or marketing journal, or maybe from a not-so-trustworthy source. And that reaction is justified 86% of the time ;). This sense of disbelief / wonder (and it works both ways) mostly comes because the reader can’t vouch for the veracity of the numbers. The assumption always is that the person quoting the numbers (or the statistic, technically speaking) knows what (s)he is talking about. And even if that is not the case, it is very easy to get lost amongst the numbers – averages, percentages, YoY (year-on-year) growth, percentage points, APRs and myriad other measures, all used to make whatever the speaker (who more often than not is selling something) is pitching seem either very good or very bad. It looks like all these people share the same field manual – How to Lie with Statistics. And this doesn’t seem to have changed since 1954, when the book was first published!
OK, before the book gets any more negative connotations – that is not the purpose of the book. The book aims to help the reader see through these marketing ploys: to know the right questions to ask, and when to dismiss a statistic as faulty. It is a field manual for beating the cheaters at their own game. And this delightful book comes in a small package – just about 150 pages – and is an easy read. The book has 10 chapters, each with a specific theme.
- The sample with a built-in bias: the origin of most statistics problems – the sample. Any statistic is based on some sample (because the whole population can’t be tested), and every sample has some sort of bias, even if the person commissioning the statistic tries hard not to create any. Respondents not replying honestly, the market researcher picking a sample that gives better numbers, personal biases based on the respondent’s perception of the researcher, and data simply not being available for some past period – these are a few of the biases that creep in when building a statistic. One of the examples (from the 1950s) that the author mentions is a readership survey of two magazines. Respondents were asked which magazine they read the most – Harpers or True Love Story. Most respondents claimed they read Harpers, but the publishers’ circulation figures showed that True Love Story sold far more copies than Harpers – refuting the results from the sampling. The reason for this discrepancy – people were not willing to answer honestly because of their own biases. As Dr. House says – Everybody Lies! Summary of the chapter – given any statistic, question the sample that was taken. Assume that there is always a bias in the sample.
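To see how badly dishonest answers can skew a survey, here is a minimal simulation sketch of the magazine anecdote. All the numbers are assumptions of mine (an 80% true readership, half of those readers misreporting), not figures from the book – the point is only that misreporting shifts the estimate far from the truth.

```python
import random

random.seed(42)

# Assumed setup: 80% of people actually read the "lowbrow" magazine,
# but half of them are embarrassed and claim the "highbrow" one instead.
N = 10_000
truth = [random.random() < 0.80 for _ in range(N)]  # True = reads lowbrow

def survey_answer(reads_lowbrow):
    # Hypothetical misreporting: 50% of lowbrow readers deny it.
    if reads_lowbrow and random.random() < 0.5:
        return False
    return reads_lowbrow

reported = [survey_answer(t) for t in truth]

print(f"actual lowbrow readership:   {sum(truth) / N:.0%}")
print(f"surveyed lowbrow readership: {sum(reported) / N:.0%}")
```

The survey lands near half the true readership – and no amount of extra respondents fixes it, because the bias is built into the answers, not the sample size.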
- The well-chosen average: how an unqualified average can change the meaning of the data. Before I delve into this, quickly – when I say average, what comes to your mind? Sum(x1…xn) / N, right? The arithmetic mean. But I said average, not arithmetic average, didn’t I? Not many people know that there are 3 averages:
- Arithmetic average / mean – the sum of the quantities divided by the number of quantities
- Median – the middle point of the data: the value that sits at the midpoint when the data is sorted
- Mode – the data point that occurs the most in a given set of data
And when someone says average, leaving it unqualified, there is a lot of room for juggling. The author mentions a very simple example. If an organization publishes a statistic that the average pay of its employees is $1000, what does this mean? It makes most of us think that almost everyone makes around $1000 – the reader assumes it is the median. But the organization could be talking about the arithmetic mean, where the boss earns, say, $10,500 and the remaining 19 employees earn $500 each: (10,500 + 19 × 500) / 20 = $1000. Just by not qualifying the average, the published figure can be completely twisted out of shape from the real facts. The way out – always ask what kind of average someone is talking about.
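The payroll example above can be checked in a few lines with Python’s standard `statistics` module (the salary figures are the review’s illustrative numbers, not data from the book):

```python
from statistics import mean, median, mode

# The review's hypothetical payroll: one boss at $10,500
# and 19 employees at $500 each.
salaries = [10_500] + [500] * 19

print(f"mean:   ${mean(salaries):,.0f}")    # $1,000 -- the "published" average
print(f"median: ${median(salaries):,.0f}")  # $500
print(f"mode:   ${mode(salaries):,.0f}")    # $500
```

Same data, three "averages" – and only the mean makes the pay look respectable, which is exactly why it gets chosen.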
The final chapter – How to talk back to a statistic – distils all of this into a checklist of questions to ask of any published number:
- Who says so – who is publishing the result? Do they have anything to gain from it?
- How does he know – how did they measure this result? Was there any sampling bias?
- Did someone change the subject – are the cause and the effect getting muddled?
- Does it make sense – ’nuff said 🙂
And that summarizes the book. This book is a must-read if you are interested in numbers and their interpretations. The tricks mentioned in this old book are still in use today – so the date of publishing and the relevance have a negative correlation ;). A definitely recommended read. An 8/10 – it loses out because some of the topics were not given a thorough treatment.