
Research and evaluation: PR grows up?

The Holy Grail


In the early 1990s, something strange happened. As recession began to bite and budgets
were slashed, public relations practitioners began to talk about whether or not PR could be
measured, and so prove its worth. Up to that point, no one had really been concerned about
measurement of results. Gut feeling was felt to be an acceptable gauge of whether a PR
campaign had succeeded. People relied on their own experience to tell them what would work in
a particular situation. Puchan et al. (1999: 166) suggested that an interest in evaluation was
stimulated by the consumer interest movement, the implementation of measures to improve
managerial effectiveness and the tendency to professionalisation.
Public relations education in the UK lagged behind the USA, but a master's degree was
introduced at Stirling University, and a postgraduate diploma at West Herts College in Watford.
In 1989, the first undergraduate degrees in public relations opened their doors to students in
Bournemouth and Plymouth. Leeds Metropolitan followed a year later. Public relations had its
own academics now. In an academic environment, lecturers are required to do scholarly research.
A research culture began to develop both within and outside higher education.
Today, some things have changed but others seem remarkably static. The number of public
relations courses has grown, along with the number of staff required to produce erudite
publications on the subject, yet PR research in the UK is still in its infancy. There are very few
academics actually doing research, and little of it connects with the public relations industry.
However, as UK academics interact with European professors, a more lively scholarly debate is
developing.
The level of debate on evaluation has developed, so that the Holy Grail is no longer sought,
although there was a brief interlude where PRE-fix sought an industry standard. There is a
plethora of techniques which are accepted, from media content analysis to market research. How
far these have actually been translated from debate into action is still questionable. Interestingly
enough, the authors' experience is that the debate on evaluation in the USA is still stuck at the
media evaluation level.
Taking research seriously
Some companies are grasping the nettle. In April 1998, a Forum on Communication Research
was held at the London School of Economics. Representatives from academia and industry heard
presentations from practitioners who were using research techniques to inform their management
decisions. Academics and a market research expert talked about some projects with which they
were involved. The main objectives of the forum were to bring together practitioners who are
grappling with questions of measurement and evaluation with researchers . . . who might be able
to help shed some light on the issues, and to identify real-world problems which could be
worked on . . . where companies have a lot of available data but little time to do a usefully
thorough analysis. The latter work would be carried out by graduate students to facilitate the
industry-academia interface.
Others in the industry feel that there is a role for purely academic research which could
feed into and inform practice. Sandra Macleod of Echo Research was involved in the
Communications Forum:

One aim of the Forum was to bring together potential sponsors, to see if we could identify
if there was some major research that we all felt was needed. Then we could all put
something in the pot to fund it. There is also a need for greater involvement of industry and
academia so that both can benefit. There is an interesting trend where more people from a
research background are becoming involved in PR, especially in-house, people who have
an understanding about how behaviours are influenced, what various audiences feel about
organisations. They are turning that into communications messages, turning the PR plan
into a dialogue.
Most of the industry outlook on research seems to concentrate on purely practical aspects.
Research is often linked to clients, and how information could help consultancies win a pitch for
an account (Medhurst 1999). The IPR Toolkit (see case study, pp. 310-12) includes audit as the
first step in the PR planning process. Here research is used as a diagnostic tool to develop PR
strategy, as well as monitoring the success of that strategy as it unfolds. The checklist to
assemble the PR brief encourages the practitioner to work with the client to set out their
commercial background and objectives, and audit the existing PR activity. Gathering information
is seen as essential in order to set realistic measurable objectives. Unless a definite starting point
is defined, it is not possible to effectively measure and evaluate the success of the PR strategy.
There are some encouraging signs that the worlds of academia and practice are coming
together. As discussed in Chapter 5, a body of knowledge and research is one of the prerequisites
for the development of PR into a profession.
A Delphi survey was undertaken in 2000 to attempt to draw up a common definition of the
practice, although it concluded that it is difficult to find any pattern in the naming of the field.
This research led to further definitions of practice and parameters of public relations, suggesting
that there were four characteristics of European public relations:
Reflective: analysing standards in society to enable the organisation to adjust its
own standards.
Managerial: developing plans and maintaining relationships with publics.
Operational: carrying out communication plans.
Educational: helping the members of the organisation to become effective communicators
(van Ruler and Vercic 2002).
Using evaluation in PR
"The public relations industry is worth $24.5 billion worldwide. Public relations evaluation has
grown by 30-40 per cent a year over the past five years," stated Sandra Macleod at the IPRA
conference in 1997.
The evaluation debate has thankfully moved on from the Holy Grail stage mentioned
above. Jon White (1999) said, "It's time for us to move on from the search for evaluation
methods, which already exist, to applying these in terms of improvements to the quality of
management in public relations." This is a development of his statement in 1994 that the need to
evaluate activities in PR is "partly a measure of professional insecurity" (quoted in Houlder 1994:
20). It is true that many areas of management experience difficulty in measuring effectiveness.
Evaluation did not emerge as a distinctive field of professional scientific practice until the late
1960s, and it began to emerge as an issue in PR at about the same time (Noble 1999).
In terms of gaining the budgets to carry out evaluation, Peter Hehir has suggested, "We get
the budgets we deserve. If you can't prove the importance of what you do, you'll stay in a
backwater PR department or struggle to keep your consultancy's head above water" (quoted in
Purdie 1997: 8). As President of the International Committee of Public Relations Consultancies
Associations (ICO), he advised the education of clients to appreciate the issues they faced and to
allocate realistic budgets to allow for research and evaluation.
Models of evaluation
Watson provides a good summary of several models of evaluation, beginning with Cutlip's PII
(preparation, implementation and impact). This is a staircase of suggested measures. The
foundation layer of preparation evaluates strategic planning, the quality of the message and the
adequacy of background information gathered to design the programme. Implementation
examines how many messages are sent to the media and who received them. The capstone of
impact examines changes in attitudes, opinions and behaviour.
MacNamara's macro model has a similar division of categories, this time inputs, outputs
and results. The model is drawn in the form of a pyramid (Figure 19.1), and, unlike Cutlip,
suggests evaluation methodologies which can be used with each level. Thus assessing the
appropriateness of message content in the input section can be evaluated by the use of focus
groups, the number of messages placed in the media in the output level can be assessed using
media monitoring, and the number who change attitudes when evaluating results needs to be
measured with quantitative research. MacNamara puts it forward as a practical model for
planning and managing evaluation of public relations.
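The level-to-method pairing that runs through these three-tier models can be sketched as a simple lookup. This is only an illustrative paraphrase of the pyramid described above, not MacNamara's own notation; the level names and method lists are drawn loosely from the text:

```python
# Illustrative sketch of a three-level evaluation model: each level is
# paired with example methodologies mentioned in the discussion above.
# The mapping is a paraphrase for illustration, not an official taxonomy.
PYRAMID = {
    "inputs": ["focus groups"],           # e.g. testing message content
    "outputs": ["media monitoring"],      # e.g. counting placed messages
    "results": ["quantitative surveys"],  # e.g. measuring attitude change
}

def suggest_methods(level: str) -> list:
    """Return candidate evaluation methods for a given pyramid level."""
    return PYRAMID.get(level.lower(), [])

print(suggest_methods("outputs"))  # ['media monitoring']
```

The point of the structure is that a practitioner chooses the measurement technique according to the level being evaluated, rather than applying one method to the whole programme.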
Lindenmann's public relations yardstick also takes a three-level approach, assuming that
objectives have been set beforehand. The three levels relate to the extent of measurement to
be carried out. The first basic level measures output, such as media impressions. The intermediate
level judges whether the target audience received the messages, using qualitative and quantitative
methods to evaluate awareness. The final, advanced level looks at outcomes, such as attitude
change. Lindenmann emphasises, "there is no one simplistic method for measuring PR
effectiveness . . . an array of different tools and techniques is needed to properly assess PR
impact."
Watson adds his own two models. The first links to Grunig's models of press agentry and
public information, stressing short-term media analysis. The second model emphasises the
continuing, long-term nature of public relations, taking into account the effect of the programme
activities, making sure that judgements at each stage are fed back into the decision-making loop
(Watson in Kitchen 1997: 290-6).
Lip service
The take-up of evaluation in the industry has not been overwhelming, despite a majority view
that it is necessary. IPRA published the results of a survey in 1994, which found that 90 per cent
of its members thought evaluation was necessary, but that only 18 per cent frequently undertook
evaluative research. Furthermore, 31 per cent felt that "trying to measure precisely is next to
impossible" (IPRA 1994).

A year after beginning its PR%F campaign in 1998 to encourage client companies to
allocate 10 per cent of their budgets to research and evaluation, PR Week undertook a major
survey to show the uptake of evaluation methods by the PR industry. The results were
disappointing. While the majority of practitioners believed that their work could be measured
across a variety of disciplines, including crisis management, internal communications and
business to business, those working in government relations overwhelmingly disagreed. A total
of 20 per cent of respondents still felt their success could not be measured. There was widespread
agreement that the PR industry needed to improve its efforts at evaluation, and the vast majority
stated that "I am personally committed to evaluating PR efforts". However, when it came to the
methods which could be used, the most commonly used method was media content analysis and
press cuttings, closely followed by media reach. This was despite the fact that only a third
regarded the latter as an effective tool to convince budget holders to part with resources to fund
it. A worryingly high proportion of 33 per cent said they relied on "gut feel" to judge their
success. The main obstacle to planning and evaluating PR activity was cited as difficulty in
obtaining budgets, followed by lack of time. In addition, 43 per cent of respondents gave no
answer when asked to state their investment in evaluation (Cowlett and Nicholas 1999).
A comparison of theory and practice in the USA was carried out by Ahles and Bosworth in
2003. By analysis of a sample of entries for the PRSA Silver Anvil awards, they found that while
82 per cent of the public relations programmes submitted had objectives relating to changing
behaviour, only 12 per cent addressed attitude change, a key component of behaviour change.
Most campaigns had a clear tie to the organisation's mission and goals, but many did not specify
a time frame or amount of change to be achieved. While 79 per cent of campaigns were
concerned with increasing awareness, only 45 per cent carried out any benchmark research to
quantify this at the start of the campaign, and there was no follow-up to assess what had
changed. Half of the campaigns stated objectives in strengthening relationships, but there was no
evidence of any benchmarking or follow-up research. They recommended improving the quality
of objectives, and improving benchmarking and follow-up research (Ahles and Bosworth 2003).
The value of evaluation
The idea that evaluation is worthwhile and necessary is not new. In his address to the IRR
congress in October 1998, MORI Director Peter Hutton stated,
Evaluation is a sensible part of any PR programme . . . There must come a point when you
have to ask "What effect has my PR spend had?" and "How do I know?" In MORI we have
developed a model based on the idea that business success comes from moving people up a
hierarchy from awareness, through trust, transaction, satisfaction, commitment and
advocacy. There are many different ways of evaluating the success of a PR event or
campaign. The most useful, however, will be part of a well executed PR initiative with
clearly defined measures of success which relate back to equally clearly defined corporate
and communications objectives.
Alison Clarke, of Huntsworth plc, agreed.

Evaluation is part of the planning process. It enables one to quantify the lessons learned
and develop benchmarks for future measurement. There are a variety of measurement tools
now available for all kinds of evaluation, from input (analysis of existing data, focus group,
pilot questionnaire), through output (statistics on distribution, media monitoring, media content
analysis, communication audit), to outcome (focus group discussion, surveys, pre- and post-
tests). PR can influence beliefs, attitudes, opinions and behaviour. If the communications
function is to be considered as a managerial one, it must refine its instruments of
measure . . . to prove it is both useful and beneficial.
The media coverage debate
One of the first and easiest ways of evaluating success in PR was to count the amount of media
coverage gained. As the evaluation debate developed, the question of whether press cuttings
were enough exercised many column inches itself. The Association of Media Evaluation
Companies (AMEC) in conjunction with ICO produced an explanatory booklet in 1997 (AMEC
1997): "Media evaluation is the systematic appraisal of a company's reputation, products or
services, or those of its competitors, as measured by the presence in the media." AMEC put
forward media evaluation as an ongoing management tool which should be part of the business
planning process. It stated that analysis involved the weighting of significant elements, the
combination of measurement and judgement. This could also give information about trends, as
media analysis would be ineffective in a vacuum. To those who argued that media analysis was a
mere part of the picture, and not a very useful part at that, AMEC replied that output generated as
a result of a PR programme could be measured in articles, papers, speaking engagements;
outcomes could be measured by changes in attitudes and behaviours; and out-take by the
degree to which the audience retained the message. By using information gained from media
coverage in different ways, several aspects could be measured. Share of voice would indicate the
total amount of coverage for the industry or topic, and the percentage of that from the client. By
examining message content analysis the extent to which coverage communicated the desired
messages could be assessed. Trend analysis could track performance over time and look forward
from historical information.
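At its simplest, the three measures described above reduce to counting over a set of cuttings. The following sketch uses invented organisation names, messages and figures purely for illustration; real media analysis services weight and judge coverage far more carefully:

```python
from collections import Counter

# Hypothetical cuttings data: each item is (organisation, text, month).
# All names and wording here are invented for the sketch.
cuttings = [
    ("ClientCo", "ClientCo leads on service quality", "Jan"),
    ("RivalCo",  "RivalCo cuts prices again", "Jan"),
    ("ClientCo", "ClientCo praised for service quality", "Feb"),
    ("ClientCo", "ClientCo opens new office", "Feb"),
]

# Share of voice: the client's percentage of all coverage on the topic.
by_org = Counter(org for org, _, _ in cuttings)
share_of_voice = 100 * by_org["ClientCo"] / len(cuttings)

# Message content analysis: how often coverage carried a desired message.
message = "service quality"
carrying = sum(1 for _, text, _ in cuttings if message in text.lower())

# Trend analysis: client coverage per period, tracking performance over time.
trend = Counter(month for org, _, month in cuttings if org == "ClientCo")

print(share_of_voice)  # 75.0
print(carrying)        # 2
print(trend)           # Counter({'Feb': 2, 'Jan': 1})
```

In practice the inputs would be coded cuttings with weightings for prominence and tone, but the underlying arithmetic is of this kind.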
According to Krippendorff (1980: 9), "The pursuit of content analysis is fundamentally
empirical in orientation, exploratory, concerned with real phenomena and predictive in intent."
Content analysis can use the past to show what will happen in the future. "You have to
understand the methodology and trust it," says David Phillips.
By using media content analysis, Phillips suggests, PR practitioners can see what words to
use in a press release, which journals to go to, which journalists, the optimum times to send in
information, and how long it will take to get into print. Some messages are not part of the media
agenda and some words always have negative connotations. Content analysis can identify the
media agenda which the PR practitioner must work within.
Sandra Macleod, speaking at the IPRA conference in 1994, felt that the increasing sophistication
of media content analysis techniques had led to unrealistic demands from clients.
On the communications research side, where clients used to be happy with half-
yearly or monthly reports, now they want all the coverage about themselves and their
competitors online the same day. But few companies have management that could absorb
it daily. We can do that, if it is genuinely part of the decision support system, but I don't
see that yet in companies. It is a tremendous opportunity to respond quickly, and
technology has shown that things have got faster. Media content analysis provides a
regular measurement of how reputation is shifting, why and what can be done about it.
Media coverage can anticipate events.
Phillips added in 2003:
Timely media content analysis, and today this means the online media as well, can move the
practice of public relations into the realm of daily strategic application of corporate
resources. It can reflect the affective changing moods and behaviours in the public sphere as
well as among specific publics hour by hour. At any point in the day, the ability to identify
what people want to know through analyses of search engine queries is now both possible
and inexpensive. This reflects both interest and behaviours of online communities. The
influential media that will change how organisations behave has shifted. The Guardian has
more international influence than The Times because its online reach is greater and there are
more websites linked to its pages. A story in the Guardian can be identified for its
comparative influence and should ring louder bells for the PR practitioner. Such evaluation
techniques applied across the media and delivered as strategic intelligence in a timely
fashion, reflect consumer and other stakeholder influences. The PR industry could push this
open door to the top table if only it could realise the value of evaluation.
Variety of methods
There is certainly no shortage of methods available. Most of these have grown from the various
methods of media measurement carried out by commercial companies such as Media
Measurement and Echo, but content analysis, though great claims are made for its impact on
areas of PR activity other than media relations, is no longer the whole story.
Paul Noble (1999: 15) states, "There is no one simple, single solution to the problem of
public relations evaluation . . . different evaluation methodologies are appropriate at different
stages of the process. Furthermore, evaluation is not undertaken for its own sake, but for a
purpose." The large number of variables with which PR is concerned complicates matters. He
quotes Patton,
The practice of evaluation involves the systematic collection of information about the
activities, characteristics and outcomes of programmes, personnel and products for use by
specific people to reduce uncertainties, improve effectiveness, and make decisions with
regard to what those programmes are doing and affecting. (Noble 1999: 15)
Some practitioners use evaluation in summative form, assessing final outcomes, to offer
accountability to clients and themselves. There is room for both formative and summative
evaluation and different methodologies are required for different steps (Noble 1999: 18).
Curtis (1999) suggests that techniques used in other areas of marketing communications
can be utilised by PR practitioners. "It's more complex, but it's entirely possible," says Jennie
Kettlewell of Bulletin International (quoted in Curtis 1999).

Puchan et al. (1999) suggest that while evaluation methods are fairly well established,
more could be done to assess the impact of public relations at a broader, social level.
At the end of the day, it is the effectiveness of the message that is important. "In
communication, it's not the journey that counts. It's the destination," said Jim Macnamara
(2002). He urged practitioners to evaluate in order to provide evidence of what they contributed
to their clients' business. Macnamara advised that media content analysis, interviews and
mini-surveys could be available at relatively low cost, with expensive professional surveys being
needed only annually to add to the DIY methods. He suggested that not doing evaluation was "at
the least, short-sighted and under-performing as a communication professional. At worst . . . not
doing evaluation is irresponsible . . . and unethical."
The case studies that follow use a variety of methods. The first describes the development
and content of the IPR Toolkit. Other studies show how Echo Research has provided media
analysis to illuminate the effect of PR activity for two clients, and how VisitBritain uses research
on an ongoing basis.
Case study one: The IPR Toolkit
The IPR Taskforce was established in June 1998. PR Week identified a similar initiative with the
PRCA, and joint funding was agreed in September 1998. PR Week then lent support to the
initiative by headlining the PR%F campaign in October. Working with Mike Fairchild, as author,
and AMEC, the group set themselves a goal of publication by June 1999. Alison Clarke,
Vice-President of the IPR, took a presentation to several local group meetings in advance of the main
launch of the finished Toolkit to emphasise the importance of measuring public relations. The
Toolkit was designed for practitioners, whether in-house or consultancy, clients, PR training and
education bodies and journalists. Case studies on best practice were also included. In keeping
with the aims of the campaign, evaluation was built in from the start. Content was tested with a
client panel, and feedback from users was encouraged. An online version was then developed.
The main thrust of the Toolkit rests on the five-step objective setting, measurement and
planning model (Figure 19.2). This was refined in subsequent editions. The process was broken
down into five stages.
Step 1: Audit. Gathering of existing data, such as current communications and leaflets,
identifying benchmarks, analysing the current situation in order to set a good brief. Using
research as a diagnostic tool to define the problem and issues.
Step 2: Setting objectives. This is the key to well-thought-out and executed evaluation. Using the
organisation's aims, communication objectives can be set. Audiences, messages, desired
responses and timing should be identified. Other communication activities need to be taken into
account.
Step 3: Strategy and plan. This is the part of the campaign where the overall strategy is drawn
up, based on the objectives, together with the tactics to be used. Pre-testing of proposed
techniques should be employed here. Whether output, out-take or outcomes are to be evaluated
must be decided and appropriate techniques built into the programme.
Step 4: Ongoing measurement. The main question to be asked at this stage is, "Are we getting
there?" Measurement can be carried out at regular intervals. Analysis at this stage might lead to
some of the tactics being adjusted.

Step 5: Results and evaluation. At the conclusion of the PR programme or campaign, there is a
thorough appraisal of whether the objectives were achieved. Cost-benefit analysis would
determine whether value was given, and might also point out things which could be done
differently next time.
Each step of the process is examined in detail in the Toolkit. Extensive information about, and
definitions of, research techniques are provided. Advice on using media evaluation as a diagnostic
tool, some DIY measurement systems and case studies on pinpointing the effect of public
relations are included. Interaction with advertising, marketing and viral marketing is examined,
as well as adapting the planning, research and evaluation techniques to the web.
The main addition to the latest edition of the Toolkit is a guide to the variety of media
evaluation methods available. Most evaluation in practice is still a measure of media coverage,
and the Toolkit seeks to give an objective view of what the various methods can be used for.
The most effective way of evaluating public relations is to use a combination of tools, and
over time it is hoped that practitioners can persuade clients and employers to use more
sophisticated methods.
Case study two: Fish4 and Echo Research
Fish4 is the umbrella brand name of a range of interactive online services, backed by the
majority of the UK's regional newspapers. Through its entry point on the internet
(www.fish4.co.uk), consumers can search vast databases of cars, homes, jobs and other
directory-based services. The Fish4 site was launched in October 1999, aiming to be the UK's
leading source for finding cars, jobs, homes and local businesses. It is already one of the UK's
top five car websites, the UK's number one jobs website and the UK's number one homes
website.
Local expertise, local content
While the majority of web-based businesses seek to exploit their global reach to the full, Fish4's
primary goal is to be the local expert. The company aims to deliver to its approximately
950,000 users information which is specific to their locality. "In theory," according to Jonathan
Lines, Fish4's energetic Sales and Marketing Director, "the logistics involved in delivering such
highly localised information are nightmarish. But in practical terms, we have access to unrivalled
and extensive content." The content to which Lines refers is that gathered by approximately 800
daily and weekly regional newspapers, around 80 per cent of the UK's local print media.
"We undertake a huge amount of research because we feel it is critical to our future success
and evolution as a company," Lines says. "We have to know how we are perceived across all our
target audiences."
Fish4 commissioned Echo Research to conduct continuous tracking and evaluation from
October 1999 to March 2000, the time of a major drive to raise awareness and drive traffic to the
site. This included evaluation of PR-generated media coverage, together with interviews with
opinion formers, journalists and consumers.
Objectives

The main aims were to:

evaluate brand recall and awareness
assess the effectiveness of above-the-line advertising spend
measure the impact of public relations activity.
Integrated marketing communications research
During the period covered by the monitoring, Fish4 ran an integrated communications campaign
using TV, outdoor, press, radio and PR; appropriate arrangements were made to facilitate
internet searches for the site. After running advertising at a relatively low level through the last
quarter of 1999, Fish4 sharply increased expenditure, primarily on TV and outdoor, in February
2000, backed by an intensifying of the ongoing PR activity.
Research showed that the increased advertising spend led to a dramatic rise in brand
awareness, which was sustained through PR when the TV burst had finished. At the same time,
the number of visitors to the site started to grow rapidly.
Analysis of the sources of brand awareness showed the importance of the TV burst and the
impact of posters, but also spotlighted the continuing value of favourable PR in newspapers,
which was the second most important source of awareness.
Interviews with opinion formers in the internet and marketing fields endorsed Fish4's
creative work and media strategy, and the findings were fed into subsequent campaigns:
"It's a likeable and a friendly campaign."
"I think it's quite clever."
"Makes people instantly link it to the internet."
"They need to raise awareness like any other brand, product or FMCG. TV is the best medium
for this."
Fine-tuning PR strategy
The value of Fish4's PR activity and the support received from its regional newspaper investors
was shown by detailed regional analysis. A statistical relationship between consumer site visits
and the volume of PR-generated coverage indicated that while advertising was driving
awareness, PR activity could be prompting visits. The vast majority of the editorial coverage
occurred in regional publications.
As a result of analysis of more than 1,000 UK press cuttings across the national, regional
and trade media, the company was able to quickly fine-tune its PR strategy. This included
refining media targeting and using Echo's detailed byline reports to identify target journalists.
Conclusions
Echo's work with Fish4 demonstrates how successful research can inform effective
decision-making. By integrating and tailoring media analysis and market research techniques,
Echo was able to identify strategic, operational and communications opportunities.

Advertising tracking and media analysis informed Fish4 about what was happening.
Market research enabled Fish4 to understand why it was happening. Combining the two gave the
company a unique insight into the drivers of its brand, facilitating decisions which helped the
company take control of the media agenda, promote an image in keeping with its strategic aim,
and ensure value for money from scarce resources.
Case study three: Research for communication measurement - British Gas and Echo Research
Echo has worked with British Gas for six years. The main part of its work relates to media
analysis of national coverage in print and broadcast media. This is carried out monthly and is
presented to and discussed regularly with the PR team to enable them to monitor their
performance. Key performance indicators for the team are linked to features of the analysis and
the impact PR has on the general public.
In summer 2002 British Gas launched a new branding and communications strategy to
reposition British Gas as a home services provider. Spearheaded by an ad campaign, "There's no
place like a British Gas home", PR was used to support the new positioning, providing
consumers with a flavour of the new British Gas home products and services.
The three elements of this integrated measurement project were:
media evaluation
impact to influence model by i-to-i tracker
focus groups.
British Gas commissioned Echo Research in October 2002 to undertake media analysis of its
campaigns relating to Home Services and combine this with measurement of the impact on the
general public through various market research techniques, the first time British Gas campaigns
had been evaluated in this way. This combined methodology was designed to provide British Gas
with a more thorough understanding of the level of impact their PR work around Home Services
had on consumers, so that they could measure how much it influenced consumers to consider
using British Gas products or services relating to the home.
The presence of campaign-specific messages in media coverage was tracked through media
analysis, while focus groups and an omnibus survey were conducted to test the relevance and
recall of these messages, as well as the impact of media stories on the general public.
Quantitative measures generated by the media evaluation and survey research were fed into
British Gas's annual planning and target-setting process, to establish benchmarks. These key
performance indicators are now collated and presented in a monthly scorecard provided by Echo
and incorporated into British Gas's evaluation and survey work, which is conducted twice-yearly
for all major Home Services campaigns.
Additionally, Echo provides monthly reports giving detailed evaluation of different aspects of
British Gas's communications activity in its media coverage (for example, press release/message
take-up and spokesperson activity), an assessment of how this compares with its key
competitors, and a two-page executive summary designed for the communications director to
share findings with the senior executive team.
Key findings
Echo Research analysed five home service campaigns in relation to the following areas:
presence of messages (pre-determined campaign specific and overall proposition)
presence of key campaign spokespeople
prominence of BGAS.
The PR campaign reached 90 per cent of homeowners more than six times. British Gas measured the
effectiveness of the PR campaign by analysing and comparing the findings for two groups in the
sample: those who had been exposed to the communications and those who had not. On all the
measures of success for the campaign, an uplift was seen among those who were PR-aware. Those who
had seen the communications were 52 per cent more likely to agree with the main British Gas
proposition related to offering advice, products and services for the home.
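The exposed versus non-exposed comparison behind a figure like this can be sketched in a few lines of Python. The survey data here are invented purely for illustration; only the method of comparison, not the numbers, reflects the case study.

```python
# Hypothetical sketch of an exposed vs non-exposed uplift comparison.
# Each response is a boolean: did the respondent agree with the
# proposition being tested?

def agreement_rate(responses):
    """Share of respondents agreeing with the proposition."""
    return sum(responses) / len(responses)

def uplift(exposed, not_exposed):
    """Relative uplift in agreement among the PR-aware group."""
    base = agreement_rate(not_exposed)
    return (agreement_rate(exposed) - base) / base

# Invented data: 38 of 50 exposed respondents agree, versus
# 25 of 50 who were not exposed.
exposed = [True] * 38 + [False] * 12
not_exposed = [True] * 25 + [False] * 25
print(f"Uplift: {uplift(exposed, not_exposed):.0%}")  # → Uplift: 52%
```

The point of the technique is that the non-exposed group acts as a baseline, so the uplift isolates the effect of having seen the communications from background attitudes.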
Key findings and recommendations from the focus groups centred on message recall and
relevance to British Gas's target audience, and on how the stories and the concept of a 'home'
brand were perceived. Messages were readily understood, although general recall of press stories
was low, even when prompted. Stories had most impact when they concentrated on:
the physical safety side of the brand
demonstrating practical benefits to consumers
relevance to reader in terms of life-stage.
Conclusions
The research showed that the Home Campaign made a positive contribution to British Gas's
overall performance in the media, although the brand was perceived largely in practical terms,
with little sign of emotional attachment or warmth. Marketing based on physical safety and
financial security would therefore be more effective. PR helped improve message recall and
enhanced the TV advertising message.
Case study four: Proactive and reactive strategies in response to media coverage (Jo Leslie
at VisitBritain)
Despite being one of the world's biggest industries (worth £76 billion in the UK alone), tourism is
incredibly volatile. In 2001 Britain lost £1.5 billion in inbound tourism due to the combined
effects of foot and mouth disease and the September 11 attacks. In 2003, early estimates of the
effect of the Iraq war and SARS suggested that they could cost the same again.
The media are central to every crisis, sometimes causing it in the first place and at other
times perpetuating it beyond what it may deserve. Disaster pictures are wired around the world at
the touch of a button. Getting these images removed and putting positive ones in their place is
much harder.
Head of Public Relations, Jo Leslie, comments:
During the past two years VisitBritain has learnt three lessons. The first is that a rigorous
monitoring and evaluation process is essential. The second is that the UK media is our
worst enemy, and finally that we should always be prepared for the worst. Monitoring the
world media during foot and mouth allowed us to spot rumours and inaccuracies and issue
swift rebuttals. More recently a regular reporting system from all markets around the world
has allowed us to assess how international travel has been affected by the Iraq war and
SARS and how they were impacting on perceptions of travel to Britain. From this we were
able to judge exactly what messages to promote and how to do it, whether reactively
through press statements and interviews or proactively through releases, VNRs, press
briefings and bringing journalists to see for themselves.
VisitBritain spends £750,000 on public relations each year. Of this, nearly 10 per cent is
spent on monitoring press coverage in eight international regions covering 31 key markets.
Britain is the fifth most popular destination. Monitoring so many countries in a variety of
languages is a huge logistical challenge. However, says Leslie, 'We are publicly accountable for
every £1 we spend and therefore we must prove ROI.'
The Tourism Industry Emergency Response (TIER) group was set up in September 2001. It
comprises senior representatives from every sector of the tourism industry who meet regularly to
plan scenarios, agree impact assessments and develop recovery plans. It also ensures that
everyone in the industry gives out consistent messages.
In terms of evaluation, VisitBritain operates a four-pronged evaluation system. First,
volume and value of coverage are assessed. For each item of coverage, a standardised score sheet
is completed by overseas PR officers and returned to London to be entered into a database. Twice
a year the database is sent to an outside consultant to calculate AVEs. The data are analysed in a
variety of ways. For example, in the financial year 2002/3, there were 7,785 articles and
broadcasts, up 35 per cent on the previous year. The unweighted advertising value of this
coverage was £289.8 million; weighted according to Price Waterhouse formulas for target
audiences, it was valued at £504.1 million. Data are also broken down according to markets,
regions and themes.
The score sheet includes marks out of 5 (with clear guidelines) for audience, impact and
content. These capture how closely the audience matches the target market segment, how
prominent the piece is, how positive and accurate the coverage is, and whether it reflects key
messages.
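A score sheet of this kind might be modelled as follows. The field names and the weighting formula are illustrative assumptions only: the actual Price Waterhouse formulas are not published in the text, so this simply shows how per-item marks could scale an AVE up or down.

```python
# Illustrative model of a per-item score sheet. The weighting formula
# is an assumption, not VisitBritain's actual method: here marks of
# 10/15 leave the AVE at face value, 15/15 raises it by half.

from dataclasses import dataclass

@dataclass
class ScoreSheet:
    ave: float       # unweighted advertising value equivalent (pounds)
    audience: int    # marks out of 5: match to target market segment
    impact: int      # marks out of 5: prominence of the piece
    content: int     # marks out of 5: positive, accurate, on-message

    def weighted_value(self) -> float:
        """AVE scaled by the total marks (hypothetical formula)."""
        return self.ave * (self.audience + self.impact + self.content) / 10

item = ScoreSheet(ave=15_000, audience=5, impact=4, content=3)
print(item.weighted_value())  # → 18000.0
```

Whatever the real formula, the design choice is the same: quality marks convert a raw volume measure (AVE) into a value weighted towards coverage that actually reaches the target audience with the right messages.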
The next element is an annual survey of travel writers, assessing their attitudes towards
Britain and their opinions about prospects for travel there. Their attitudes towards VisitBritain
are also included. In the most recent survey, two-thirds of journalists thought that Britain
would become a more popular tourist destination. They were positive about the nation's heritage,
countryside, culture, architecture and variety. Negative opinions were expressed about the
weather, public transport, food, and prices due to the strong pound. As for VisitBritain itself,
two-thirds of journalists rated it the best national tourist organisation. The website,
newsletters, knowledge and dynamism all drew positive reactions.
The final plank in the evaluation strategy is carried out every few years: before-and-after
polls of readers of individual newspapers and magazines, undertaken around specific
campaigns. These establish whether coverage has any influence on readers' perceptions and
behaviours. The surveys are supported by dedicated PR lines at British call centres to track
enquiries. For example, 30 per cent of National Geographic readers recalled seeing an article on
London, and 40 per cent of those readers were more interested in visiting London as a result.
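The arithmetic implied by figures like these is a simple funnel: the second percentage applies only to those who cleared the first stage, so the share of the whole readership that was moved is the product of the two rates. A minimal sketch:

```python
# Funnel arithmetic for before-and-after poll results: the interest
# rate applies only to readers who recalled the article, so the two
# rates multiply to give the share of the whole readership affected.

def reach_through(recall_rate, interest_rate_among_recallers):
    """Share of the entire readership moved by the coverage."""
    return recall_rate * interest_rate_among_recallers

share = reach_through(0.30, 0.40)
print(f"{share:.0%} of all readers")  # → 12% of all readers
```

In other words, the 40 per cent figure describes 40 per cent of the recallers, not of the full readership, which is about 12 per cent overall.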
A KPMG audit in June 2002 concluded that VisitBritain had a 'sophisticated media
evaluation system . . . ahead of competitors/analogous organisations'.