
A fortnightly briefing, examining issues at the heart of politics, what they mean for public affairs, and how practitioners should respond.

Friday Focus: Statistics: beware misuse!


In this Friday Focus, Mark Glover considers the use and abuse of statistics within politics and what to bear in mind when interpreting or presenting data.

I find myself a bit of a rarity in the world of public relations and public affairs, surrounded in the main by PPE, History and English graduates. It is, however, occasionally useful to have followed a different route at university and to hold a degree in mathematics and statistics instead. Now, I'm not saying I am any Pythagoras, but a degree and two A-levels in maths, combined with responsibility for running a successful agency, does at least give you a good insight into how numbers should be interpreted in both business and politics, and a healthy scepticism about how numbers are often reported in the media.

As public affairs professionals, a good understanding of numbers is crucial. We need to make sure that our clients make as strong a case as possible. Often their case will rely on statistics, and if the numbers don't add up, this can destroy a reputation. Too often statistics are used incorrectly, or the wrong interpretation is placed on them, in order to support someone's view of what is happening. It is important to understand that statistics is a science and that, properly understood, there should be little doubt as to what a statistic is saying. The art of statistics lies in interpretation: either trying to understand what is happening in the wider world from observations of a smaller sample, or trying to predict what is likely to happen in the future from analysis of past trends. Indeed, the definition of statistics according to the Oxford English Dictionary is:

"the practice or science of collecting and analysing numerical data in large quantities, especially for the purpose of inferring proportions in a whole from those in a representative sample."

Yet errors often occur in attributing understanding to that whole (or a wider audience) from observation of a sample. Too often sweeping conclusions are drawn from statistics while ignoring issues such as sample size, bias and acceptable margins of error, all of which, to the statistician, would indicate that the conclusion was not justified. Below I want to briefly explain some of the mathematical terms and why public affairs and communications practitioners should be aware of their misuse, then look at one or two of the more common mistakes that I have seen in the media or in public relations. This is not a comprehensive review, but covers just two or three things to bear in mind when you next use numbers.

Sample size

To remove absolutely any doubt about what people think in a particular population, you must ask everyone in that population. However, in statistics, using mathematical formulas based on probability, it is possible to identify with a certain degree of confidence what a population is likely to do or think by collecting data from a smaller sample of people with similar characteristics. The bigger the sample, the smaller the margin of error will be. Put another way: the bigger the sample, the higher the confidence limits will be (i.e. the more confidence you can have in the result).
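The relationship between sample size and margin of error can be sketched with the standard formula for the margin of error of a sample proportion, MoE = z × √(p(1 − p)/n). This is an illustrative sketch rather than anything from the briefing itself: the 95% z-value of 1.96 and the worst-case proportion p = 0.5 are my assumptions.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a sample proportion.

    n: sample size; p: observed proportion (0.5 is the worst case);
    z: z-score for the confidence level (1.96 for roughly 95%).
    """
    return z * math.sqrt(p * (1 - p) / n)

# Quadrupling the sample roughly halves the margin of error.
for n in (400, 1000, 4000):
    print(f"n={n}: +/-{margin_of_error(n):.1%}")
```

Running this shows a poll of 400 carries a margin of error of about 4.9%, while a poll of 4,000 narrows it to about 1.5%, which is why reputable pollsters report sample sizes alongside their headline figures.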

A margin of error gives the outside limits of what a result says. For example, if it is reported that 30% of people are likely to vote Conservative in an election and the margin of error is 3%, then the result is saying that there is a strong probability that, when the election occurs, the Conservatives will poll between 27% and 33% of the vote. The "strong probability" is the level of confidence you can have that the result will follow the same pattern as the sample. Polling companies often work at 99% or 95% levels of confidence. (The mathematics used relates to the behaviour of a normal distribution.)

Where errors occur - the wrong sample size

Deciding who is in your sample and how big the sample needs to be is a science in itself. Ideally you would want the sample to be statistically significant, made up of people very similar to those in the population, and randomly selected to avoid the occurrence of bias. What happens, too often, is that results are attributed to a whole when the sample is too small or badly chosen. For example, if you are speaking to your neighbours about an issue to do with council tax, they are all likely to be in the same council tax band as you and receiving similar services to you. Attributing what they say and think to all electors in a borough forgets that most people in the borough are likely to be in different council tax bands and receiving different services.

From a public affairs perspective, this is particularly important when talking to local councillors, who are often influenced by the people most affected by an issue, who tend to be the most vocal. It is hard for councillors to consider the impact on what is often referred to as the silent majority, who may have a different perspective but are not as vocal.

The introduction of bias

Bias can occur in many forms, from overlooking a specific characteristic to introducing bias in the questions someone is asking.
An example of bias would be taking a poll of those supporting independence in a constituency with high levels of SNP voters and applying the findings to the whole of Scotland. Often political parties deliberately introduce bias to produce statistics that support their argument but would not stand up to any form of scrutiny.

Bias can also be introduced in questioning. You can often hear a leading question, whether in court, the media or an opinion survey. Asking the wrong question can give you an inaccurate response, from which you will not be able to attribute the outcome to any wider population. For example, the question "As a believer in women's rights, you must support the imposition of all-women shortlists" makes an assumption about you and guides you towards a specific answer, all of which can be considered the introduction of bias. Indeed, if you are a member of the CIPR and you produce survey responses that include leading questions or deliberate bias, your opponents may have a strong case for an official complaint against you.

Understanding what is meant by the margin of error

Poll 1: Labour 31%, Conservative 28%, Lib Dem 20%, with a margin of error of 5%
Poll 2: Labour 28%, Conservative 31%, Lib Dem 18%, with a margin of error of 5%

The two results above are actually remarkably similar, because the margin of error means that a Conservative or Labour victory is equally probable. However, that will not stop journalists reporting a massive shift from Labour to Conservative, when the results say nothing of the sort. To be able to say that, the sample size would need to increase, reducing the margin of error until a definite result became apparent; in this case you would need to reduce the margin of error to at least 1% before you could interpret any definitive winner. Those of us working in public affairs need to advise our clients on how the political parties are faring across the country. A weak understanding of details such as error margins will lead to flawed advice.

Looking at a change and not what that change is based on

In today's press a number of companies have announced their change in profitability for the year. Say some have increased their profitability by 75% and others by 25%. A simple reading of the figures would suggest the 75% growth was the better-performing company.
However, if we know that this year's 75% growth merely made up for a previous year's loss, so that over two years the growth was 0%, while the company growing at 25% had achieved the same growth the year before, so that over two years its growth was 56.25%, then we would reach a different conclusion. As public affairs practitioners, it is therefore incumbent upon us to delve behind the basic statistic and ensure we know the full story. Those who fail to do this at best send their clients out underprepared. At worst they risk destroying reputations, both their clients' and their own, by reporting untruths or deliberately misleading people.
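The arithmetic behind this comparison is simple compounding of yearly growth rates. A quick sketch, using illustrative figures that mirror the two hypothetical companies above (the roughly 42.9% first-year loss for the first company is my own back-calculation from the stated 0% two-year growth):

```python
def compound_growth(*yearly_rates):
    """Total growth over several years from compounding yearly rates.

    Rates are fractions: 0.75 means +75% growth, -3/7 is a ~42.9% loss.
    """
    total = 1.0
    for r in yearly_rates:
        total *= 1 + r
    return total - 1

# Company A: a ~42.9% loss followed by +75% growth nets out to 0% over two years.
print(f"A: {compound_growth(-3/7, 0.75):.2%}")
# Company B: +25% in consecutive years compounds to +56.25%, not +50%.
print(f"B: {compound_growth(0.25, 0.25):.2%}")
```

The point is that a single-year percentage change says nothing about the base it is measured from; compounding the full history, as above, is what reveals which company actually performed better.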
