
Event Evaluation


Steps Involved in Writing the Evaluation of an Event


Event evaluation is necessary to make you and your team more
efficient and effective the next time you organize an event. It is all
about finding your mistakes and learning from them.
Event evaluation should be done immediately after the event is over or
the next day. Conduct a meeting with your team members to evaluate
your event.

Step 1: Determine the extent to which event and advertising objectives have been achieved.
If you are unable to achieve your event and advertising objectives,
then no matter how much people enjoyed the event or how popular
it was, the event is a failure on a commercial level.

Step 2: Get feedback from your clients and target audience.


One good way of getting feedback is through a feedback form. To make
sure that your clients give you feedback, make the feedback form part of your exit pass form. The exit pass
form is required for the security clearance needed to remove exhibits from the facility.
To get feedback from the target audience/guests, make the feedback form part of your gift voucher. A guest can
redeem the gift voucher only after filling in the feedback form and handing it back to an attendant. These
tactics are needed because people are generally reluctant to give any feedback in writing.
You can ask the following questions in your feedback form:

Q1) Did you enjoy the event? If not, please state the reason.

Q2) What did you like most about the event?

Q3) What did you like least about the event?

Q4) What problems did you face during the event?

Q5) What could have been done to make this event better?

Q6) How do you rate the various services provided by us (please check one option for each):
Hospitality: Excellent / Good / Average / Poor
Catering: Excellent / Good / Average / Poor
Transportation: Excellent / Good / Average / Poor
Management staff behavior: Excellent / Good / Average / Poor
Management staff services: Excellent / Good / Average / Poor

Q7) Would you like to participate in our next event?

Note: Ask only relevant questions and keep the questionnaire short: ideally 5-6 questions, and never
more than 10. You don't want to irritate your guests. Of course, the type of questions you ask may
change from event to event.
And don't forget to include the following line in your feedback form: "Thank you for taking the time to
complete this feedback form".
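
If the completed forms are later entered into a spreadsheet or script, tallying the Q6 ratings is straightforward. Here is a minimal Python sketch; the responses are invented for illustration:

```python
# Tally Q6 service ratings from entered feedback forms (hypothetical data).
from collections import Counter

responses = [
    {"Hospitality": "Excellent", "Catering": "Good", "Transportation": "Average"},
    {"Hospitality": "Good", "Catering": "Good", "Transportation": "Poor"},
    {"Hospitality": "Excellent", "Catering": "Average", "Transportation": "Good"},
]

for service in ("Hospitality", "Catering", "Transportation"):
    counts = Counter(form[service] for form in responses)
    summary = ", ".join(f"{rating}: {n}" for rating, n in counts.most_common())
    print(f"{service}: {summary}")
```
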
Evaluation is an activity that seeks to understand and measure the extent to which an
event has succeeded in achieving its purpose. The purpose of an event will differ with
respect to the category and variation of the event. However, providing reach and
interaction is a generic purpose that all events satisfy.

Evaluation can be approached with two attitudes. The concept of evaluation stated
above treats it as a critical examination that digs out what went wrong. A more
constructive focus for evaluation is to make recommendations about how an event
might be improved to achieve its aims more effectively.

To carry out an evaluation and measurement exercise, it is essential that the predefined
objectives of the event are properly understood. The brief should contain all the data to
be communicated; if an event has been organized without a clearly defined purpose,
any evaluation is rather pointless.

The Basic Event Evaluation Process

In events, the basic evaluation process involves three steps:

• Establishing tangible objectives and incorporating sensitivity in evaluation

• Measuring the performance before, during and after the event

• Correcting deviations from plans

These steps are discussed below:

1. Establishing Tangible Objectives and Sensitivity in Evaluation

Setting objectives for an event is easier said than done. It is more difficult still to set
standards and declare an event successful once it meets them. To give the problem
tangibility, the best approach is to begin by defining the target audience for whom the
event has been organized. In the case of commercial events, the audience could be end
users who use the company’s products. An event might be conceptualized to achieve
different things for different audiences. Once the audience has been defined, the next step
is to identify and put on paper what each audience segment is expected to think, feel and do
after attending the event that it did not think, feel or do beforehand. This adds an element
of tangibility to the evaluation and measurement proceedings.

The number of mega-events has increased dramatically in the past few years and the costs
of organizing events have also increased exponentially. The costs of production in major
events can be enormous and therefore, in the near future one can expect companies to
start asking questions about the effectiveness of their events to see whether their money is
being spent prudently.

Creative inspiration is linked to the Greek word enthousiasmos (the root of
‘enthusiasm’), which literally translates to ‘the god within’. Setting out to evaluate an
effort that is considered to be the work of the gods themselves demands a certain
amount of sensitivity during evaluation. Objective evaluation should also take into
consideration the nature of the concept and the process of execution of the event in
their entirety. However professional the evaluation, there is scope for error and
misjudgment if sensitivity is not maintained. This is because it takes a creative and
sensitive mind to spot wrong questions, or situations where asking questions might be
the wrong method and observation might be more appropriate. One way of nurturing
and encouraging this sensitivity is to place evaluation within the context of a team
approach, all the way from conceptualization to execution of the event.

From experience, it is known that people involved in an event are more open-minded and
less committed to any particular course of action before the event occurs. Another lesson
is that, if things are shown to be wrong after a decision has been taken, the majority of
people involved in the decision-making process may try to wash their hands of the fault.
Thus, adding sensitivity to the evaluation process is very important.

2. Measuring Performance

Although perfect measurement is not always practicable, the measurement of
performance against the objectives should ideally be done on a forward-looking basis,
so that deviations may be detected in advance of their occurrence and avoided by
appropriate actions. Concept research is used to anticipate the viability of a concept
during the conceptualization process. Formative and objective evaluations are carried out
during the customization phase of an event. Summative evaluation can be carried out to
measure performance during the event.

• Concept Research: At the conceptualization stage, if the concept team does not
have a sound basis upon which to decide between the various options,
commissioning audience research to help define the strategic approach to
be adopted in the event is appropriate. It essentially involves presenting the
various options to a representative sample of the target audience in story form
and inviting their reactions. This provides enough material for understanding the
pros and cons of the various available alternatives. The downside to this method is
that it is speculative in nature, since it deals with plans that nobody has yet tried
to implement. This method is called concept research.
• Formative Evaluation: Evaluation at this stage focuses on things that are
actually happening. After the conceptualization team makes an attempt to
customize and implement an agreed strategy, steps can be taken to evaluate the
success with which customization is proceeding. These evaluations are aimed at
shaping the form of the final event. Mock-up displays and presentations of the
event are used to carry out research to check whether they are achieving the
desired reactions from the audience. These evaluations are conducted among a
small representative sample of the target audience, in an open-ended and
qualitative fashion, since the main emphasis is on discovering how the concept
might be better represented. The outcome of these formative evaluations leads to a
discussion among the team in which proposals for rectifying any weak points in
the communications can be put forward. A point to safeguard whilst using this
technique is to interpret consumer reactions with considerable sensitivity, both to
stimulate the creative process further and to ensure that good ideas are not killed
simply because they were not properly presented in mock-up form.
• Objective Evaluation: This is the stage when approval from the client is sought
before starting the execution related activities of an event. The evaluation team
has to provide the objective evidence that has been collected which justifies the
proposed concept solutions. The team also provides reassurance on how and why
the particular event will work among its intended audience. Since taking the client
into confidence requires a certain amount of objectivity and professionalism, this
technique is called objective evaluation.
• Summative Evaluation: After the event has started, the evaluation team should
be concerned with measuring the impact of the event upon its audience. Among
other things, they should establish the extent to which the objectives or aims of
the event have been met, and whether the event can be improved in any way and,
if so, how. This will not apply to short-term events, though. A major purpose of
evaluating an event after it has opened to the public is that it provides the team
with the opportunity of learning from its mistakes. The team should assimilate
the information thus collected so that it can avoid making similar mistakes in
the future.

3. Correcting Deviations

The fundamental reason why event evaluation is carried out is to steer the event so as
to ensure that the event objectives are achieved in full. And since deviations may occur
during any stage in the event design phase, it is important that measurement is carried
out at all possible stages.

Critical Evaluation Points

Events can be evaluated against the critical success factors listed below, from both the
client’s and the event organizer’s viewpoints.

Critical Evaluation Points from Event Organizer’s Point of View

There are multiple criteria for evaluating the success of an event from the event
organizer’s point of view, over and above ensuring perfect reach and interaction
for the client by networking on time and at lowest cost. The client-event-target audience
fit should match the client’s brand/product/company image and personality
perfectly, keeping the target audience as the focal point; this is a very critical evaluation
point. Ensuring the profitability of an event, maximizing profit with minimum
mark-ups, is another critical evaluation point. Since resources are a major
constraint for event organizers, resource management efficiency matters: the resources
committed (financial, human, equipment and infrastructure) and the span of time for
which they stay committed should be kept to a minimum. The number of staff and
volunteers involved should be appropriate to offer quality service.
Logistics and efficiency of event execution, ensuring smooth proceedings without
unnecessary delays and damages, is another critical success factor. Creating avenues for
lead generation and managing them properly during the event is a critical factor: each
completed event should generate more inquiries, and these should be responded to
immediately. Exploring available synergies and expanding the services offered to the
client, so as to keep strategic integration and diversification options open, is also an
important factor. Since an event is essentially a one-off affair and any last-moment
problem can convert an exceptionally well-planned event into a disaster, all care
needs to be taken during event execution. Yet another important critical success
factor is the degree of localization or customization accommodated in the concept to suit
the demographic and other variables of the various places where the event is to be carried
out.

Critical Evaluation Points from Clients’ Point of View

We have discussed earlier that the impact an event has on its target audience is equivalent
to the measure of reach and interaction that occur during the event. Whereas reach is
tangible, interaction to a certain extent is intangible as well as not always quantifiable.
Immediate and long-term benefits that accrue from an event are important when
evaluating an event from the clients’ point of view. A cost-benefit analysis concerning
the effectiveness of reach and interaction is a must as a pre-event activity. A post-event
stock-taking should be done to confirm whether the event occurred as per plans.
This analysis should consider the actual cost of the event, including the non-
budgeted expenditure as well as the actual benefits that accrued to the client from the
event. The accrual of benefits can be judged by measuring the tangible parts of the
objectives that have been achieved.

Measuring Reach

Reach is of two types: external reach and actual event reach. Since events require massive
external publicity, press, radio, television and other media are needed to ensure that the
event is noticed and the benefit of reach is provided to the client. External reach can be
measured using the circulation figures of newspapers and the promotions run on
television and radio. The DART and TRP ratings, which rate the popularity of the on-air
programmes around which the promotion is slotted, are a tangible though approximate
method for measuring the external reach of a promotion campaign on television.
Measurement of external reach should be tempered with the timing of the promotions,
as the effectiveness of recall and of the action initiated amongst the target audience is
highly dependent on this variable. For example, releasing ads and promos one month in
advance should be considered more as an awareness exercise, propagating the event
concept and conveying the time, date and venue to the audience. The entry criteria –
free, invited or ticketed show – should be clearly mentioned here. The measurement of
the actual reach of an event is relatively simple. The capacity of the venue provides the
upper limit for the actual reach. Ticket sales and numbers of invitees are direct
measurement tools. Registration of participants and requests to fill in questionnaires are
also common methods of measuring the actual reach of an event.
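
A minimal sketch of those direct measurements of actual reach, using invented figures:

```python
# Hypothetical figures for measuring the actual reach of one event.
venue_capacity = 800        # upper limit for actual reach
tickets_sold = 640          # direct measurement tool
registrations = 590         # participants who registered at the event

actual_reach = min(tickets_sold, venue_capacity)
print(f"Actual reach: {actual_reach} people "
      f"({actual_reach / venue_capacity:.0%} of venue capacity)")
print(f"Registered participants: {registrations}")
```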

Concept of Event Quality and Measuring the Quality of an Event

Along the same lines as evaluating the effectiveness of an event comes the concept of
event quality. In essence, the quality of an event exists in the client’s perception and thus
varies from client to client. Aiming for quality only by maintaining standards, preventing
mistakes, never cutting corners and using top-quality infrastructure is looking at
quality from a skewed angle.

Unless the target audience and the clients perceive the quality of the job in the same way
as the event organizers do, the big picture of quality is not complete. Therefore, it is critical
to match the client’s expectations and experiences, down to the minutest details,
to arrive at the perceived quality of the event. In matters of dispute, it is value to the client
that finally matters.

For the client, quality of an event is a bundle of attributes. A few of these critical
attributes are quality and reliability of equipment used, aesthetic appeal, appropriate cost
and timely completion of the project.

Each client will care more about some attributes than others. Thus, it is important to find
out how clients would define a quality event service. Competence in project management
from conceptualization to execution, and reliability and integrity as demonstrated in the
organizer’s past events, are very important quality criteria. Responsiveness to the client’s
requirements, i.e., empathy, mutual confidence and trust, are also criteria clients use to
size up the quality of event organizers. In addition, an easy-to-work-with manner, and the
personal involvement and caring that the event organizer exudes, also help. Delivery of
promises and deals should be ensured.

Every client expects the event to provide the ideal audience to associate with, impress
and entice. Thus, the quality of an event can also be defined in terms of audience
quality. Clients should focus on three major statistics that define audience quality
(a small worked sketch follows the list):
• Net buying influences can be defined as the ratio of the number of attendees
who can recommend, specify or approve a purchase to the total population
at the event.
• Total buying plans is the percentage of the audience planning to buy a
product/service from the sponsor’s stable within the next 12 months after the
show.
• Average audience interest is the percentage of the audience that shows an interest in
the sponsor’s products or services during the event itself and immediately after.
This may be measured by keeping track of the number of visitors to the sponsor’s
stall or exhibit area during the event.
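
A minimal sketch of how these three statistics might be computed, with hypothetical counts:

```python
# Illustrative audience-quality statistics (all counts hypothetical).
total_attendance = 1200        # total population at the event
can_approve_purchase = 420     # attendees who can recommend, specify or approve a purchase
plan_to_buy_12m = 300          # attendees planning to buy from the sponsor within 12 months
showed_interest = 540          # e.g. visitors to the sponsor's stall during the event

net_buying_influences = can_approve_purchase / total_attendance
total_buying_plans = plan_to_buy_12m / total_attendance
average_audience_interest = showed_interest / total_attendance

print(f"Net buying influences:     {net_buying_influences:.0%}")
print(f"Total buying plans:        {total_buying_plans:.0%}")
print(f"Average audience interest: {average_audience_interest:.0%}")
```
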
Was your event a success?

1. The standard approach
2. Problems with it
3. An improved approach
4. Triad discussions
5. Evaluating beyond the event

The standard approach


Often, at the end of a seminar, a talk, a course - or in fact any kind of event where an
audience is in a room other than for entertainment - questionnaires are handed out to
audience members. They are asked to fill in the questionnaire, giving their opinion of
the talk (or whatever), and to hand it up.

When writing this web page, I looked for some published research on this kind of
evaluation. I couldn't find much at all. Perhaps this kind of research seems too trivial to
take seriously. However, a lot of people put a lot of effort into these events; they are
genuinely interested in audience reactions, and how the event could be improved next
time.

Because not much seems to have been written on this, and because Audience Dialogue
has helped evaluate several hundred such events in the last few years, I thought it was
worthwhile to try to record some of the principles we've learned: how to do it, and how
not to do it.

The questionnaires are sometimes called "happy sheets," suggesting that the participants
give too favourable an opinion. The implication is that if they were asked the same
questions after the event, by a third party, opinions would be less favourable. Actually,
that's not what Audience Dialogue has found. If you ask audience members to rate the
event they've just attended on a scale between 0 and 10 - where 0 is the worst possible
rating, and 10 the best possible - the average answer seems to be around 7 out of 10, no
matter when the questions are asked or who asked them.

Problems with the standard approach


The standard approach to event evaluation has four big problems:

1. It measures only attitudes, not behaviour or knowledge.
2. It happens too soon. Normally, the questionnaires are filled in at the end of the
event. This allows no time for learning, or behavioural change.
3. But in a long course or event, it can also happen too late. By the end of a day,
participants may have forgotten the points they thought of hours ago.
4. It evaluates only a small part of the process. Think about what an event is trying
to achieve, and you soon realize that questionnaires given out to gather the
audience's opinions are only a small part of the process - see Beyond the event
(below) for more on this.

An improved approach to event evaluation: summary principles

To solve those problems, the scope of the evaluation needs to be extended, and more time
needs to be allowed. Some ways to achieve these goals are...

1. Always think of the event in its context. The study is never solely about the event as
experienced by participants: that's just one part of the evaluation. Every event is done for
some purpose, and those attending it usually don't know the full purpose. It's useful to
think of events using program logic, like this:

• Some inputs were used to do
• some activities - which created
• some outputs - which caused
• some (short term) impacts - which led to
• some (long term) outcomes, among some groups of people, in some situations

With that framework in mind, an event is evaluated by answering these questions:

1. What inputs were used? (Money, time, resources)
2. What activities were done? (An event was organized?)
3. What outputs were produced? (E.g. X people attended the event.) For indicators
of efficiency, calculate how much input it took to produce a given output.
4. What were the impacts? (Partly from the questionnaire given out after the event,
partly from other sources.)
5. What are the outcomes? (E.g. reviewing the event a few months later, what effects
did it have, among what people?) For indicators of effectiveness, compare these
outcomes with the initial goals. (A small sketch of both calculations follows this list.)
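
A minimal sketch of the efficiency and effectiveness indicators from points 3 and 5; all figures and goal names here are hypothetical, not from the text:

```python
# Hypothetical program-logic figures for one event.
inputs = {"money_spent": 50_000.0, "staff_hours": 400}
outputs = {"attendees": 250}
goals = {"attendees": 300, "new_members": 40}      # initial goals
outcomes = {"attendees": 250, "new_members": 25}   # measured a few months later

# Efficiency: how much input it took to produce a given output.
cost_per_attendee = inputs["money_spent"] / outputs["attendees"]
print(f"Cost per attendee: {cost_per_attendee:.2f}")

# Effectiveness: compare outcomes with the initial goals.
for goal, target in goals.items():
    achieved = outcomes.get(goal, 0)
    print(f"{goal}: {achieved}/{target} ({achieved / target:.0%} of goal)")
```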

Considering the entire planning and effects of the event, you can see that a questionnaire
filled in on the spot produces only a small proportion of the information needed to
evaluate the event, and the program it forms part of. Consider all the people involved -
participants, those affected, and those who did not attend but were still affected in some
way. Even if an event is a flop at the box-office, it may still have important effects on
artistic life. Perhaps the spending on sets for a play helped to keep some precious skill
alive in the local area.

And even if this particular event wasn't a box-office success, and had no effects on
artistic life, it can still fulfil a broader purpose. For example, if a local drama group
produces an avant-garde play, this may help to attract the attention of distant funding
sources. (But if that's one of the purposes of such a production, the achievement of that
purpose shouldn't be left to chance: it should be sought as a planned outcome, in the
framework mentioned above.)

Broadening the context further still, consider benchmarking your event against others.
This is done by (i) gathering data in a standard format, then (ii) comparing the results for
your event with other results in the same format. One such format is the Transfer of
Training Evaluation Model (TOTEM) which can be used in a wide variety of educational
evaluation contexts. This can be found at the US Department of Energy's Knowledge
Transfer Website at www.t2ed.com, though benchmarking data doesn't seem to be
available there.

In general, I suggest that you try to find and use a standard evaluation scale, rather than
trying to develop your own. There are many pitfalls in developing a new scale, some of
which are not obvious till it's too late.

Another aspect of context is peer review. Other people and organizations that produce
events of this kind can be useful sources of evaluation - even if they are biased. Though
they'll all have different viewpoints, if a wide range of experts agree on a criticism, you'd
better take it seriously. For successful peer evaluations, you need to have 3 or more
people present, with experience in the same type of event. Get them to fill in a special
questionnaire (based on the same one that ordinary participants fill in, but with extra
questions), and see if they all agree about the strong points or weak points of your event.
If they do, you should take notice.

Even when you have all this information, you can still be left wondering. Perhaps you
asked participants to rate the event on a scale of 0 to 10, and the average rating was 7 out
of 10. Is that high or low? (In fact, it's a little below average - based on our results from
hundreds of surveys). Unless you have a context to place the results in, such figures will
be meaningless. This is an argument that lots of little evaluations are more useful than
one big one.

For long courses and events - more than about half a day - participants often forget
suggestions they thought of making. And if communication isn't working well in a course
that runs for a week, it's no help to discover that at the very end. Quick evaluation
sessions - using both written and spoken form - at the beginning or end of each day can
be very helpful.

Improving event questionnaires

Mix multiple-response questions (easy to answer, but not very informative) with open-
ended ones (which yield more valuable responses, if people take the time to think about
the answers, and if the questions are fully relevant).

Keep it to one page (A4 or letter size) if possible, but definitely to one sheet of paper. If
both sides of the paper are printed, write PTO or OVER or MORE at the bottom right of
both sides. Consider the nature and size of the surface that people will have to write on.
For example, if they are sitting in theatre-type seats, without table tops, will they rest the
questionnaire on their knees to fill it in? In that case, maybe it should be printed on card,
not on thin paper.

Open-ended questions should span a range of generality. For example, if you ask the very
general question "What other comments would you like to make about this seminar?"
nobody's comments are excluded, but many people will not have time to think of
comments. (Usually, at the end of an event, most people are in a rush to leave.) On the
other hand, if you ask only specific questions, such as "Which slides, if any, had writing
that was too small for you to read?" people who had problems you hadn't expected will
have nowhere to give an answer. The solution: use both types of open-ended question, the
specific as well as the general.

Ask behaviour questions as well as attitude questions. Questions such as "How would
you rate the quality of tonight's performance, on a scale of 0 to 10?" are about attitudes or
feelings. While these are perfectly valid, they don't necessarily relate to future behaviour
- which may interest you more. Perhaps what you really want to know is "If we put on
another play like this in a few months' time, how likely are you to attend?" A behavioural
intention question like that, though far from a perfect prediction, normally produces more
useful results than an attitude question.

Other useful behavioural intention questions are along the lines of "What changes will
you make in your organization as a result of attending today's workshop?" A list of
actions can then be presented, and respondents invited to tick those that apply. The
interesting thing about this approach is that it can be (for some people) self-fulfilling: the
act of making the choice on the questionnaire can actually cause them to carry out their
intention.

Even more accurate than behavioural intention is behavioural reporting. For this to work,
you could collect their name and address on the questionnaire given out at the event, and
ask their permission to recontact them later. Perhaps a month or two later you can
recontact those respondents and ask what they have actually done as a result of attending
that event. If the results are favourable - that is, if the respondents have done what they
said they'd do - this can be a very powerful argument for seeking more funding.

Improving the environment for evaluation

Though the questionnaire wording and layout is important, its environment is even more
important. Ideally, you want everybody present to fill in their questionnaire, and you
want honest answers from them.

Improving the response rate

Imagine you're a member of the audience at a seminar. What you heard and saw over the
last hour or two was quite interesting, but it's getting late now, and you have to go home
and cook dinner for the children. Everybody is asked to pick up a questionnaire on their
way out, fill it in, and put it in a box. You don't really feel like doing this (it seems hardly
worth the effort to simply record "It was OK") but the compere asked everybody to make
the effort and fill in the form. So you pick up a form off the heap as you leave. It's long -
about 20 questions - and they seem to be repetitive. Some look quite difficult to answer,
but obviously worthwhile. They want your comments or suggestions for "next time" - not
that you plan to come along "next time." Some of the wording is hard to understand.

For example, one question was "How adequately did the presentation meet your learning
expectations?" This was to be answered on a 0-10 scale. So, when you have figured out
exactly what this question is asking, what might a score of 10 out of 10 mean? "I
expected it to be perfect, and it was." Or (equally valid) "I expected it to be useless - and
it was." In practice, the question was so opaque that you didn't read it very carefully, and
just gave a general rating out of 10 based on what you thought you had learned from the
seminar.

So you decide to fill it in (giving that question 7 out of 10), but then you realize you don't
have a pen with you. Maybe you can borrow one. Also, you have nothing to rest the
paper on when you fill it in. People around you are putting their forms up to the wall, and
trying to write with ballpoints - which don't work well unless pointing down. The lights
are dim, and the questionnaire is printed on blue paper - very hard to read. Also, you can't
see the place where the presenter said the completed forms should be left. So you put the
questionnaire in your bag, and take it home. Maybe you'll fill it in later tonight, and mail
it back to the organizers tomorrow.

But when you get home, there is a minor crisis (perhaps the cat was sick) and you forget
to fill in the form. You put it away for later, then lose it. A week later, it surfaces in a
heap of paper. By then you've forgotten what you were going to write, there's no address
on the questionnaire for you to mail it back to, and by now it's probably too late anyway.
Still, you're reluctant to throw it out, so you move it into a heap of papers that you might
think about some day. Maybe six months later, you find it and finally throw it out.

That story (not so uncommon) shows why response rates for event evaluations are often
so low. Organizers have been known to congratulate themselves for getting a 20%
response rate, falsely believing the average is 3%. If 100 people attend a seminar, and
only 20 forms are returned, what did the other 80 people think? Did they believe the
seminar was so great that they had nothing to add? Did they think it was so terrible that
they'd be embarrassed to hand in a form full of criticism? Or were they so underwhelmed
that it made no impression on them at all? The organizers will never know.

That's why it's vital to get a high response rate. If you get at least two thirds of the
questionnaires back, the other third of the audience would need to have very different
opinions to make a large difference to the results (a short numeric sketch after the five
headings below shows why). And the way to get that two-thirds response is to remove the
barriers that prevent people from completing and returning the questionnaire. The steps
needed can be grouped under five main headings:
1. Make the questionnaire easy to fill in.
- Keep the questionnaire short and relevant.
- Avoid questions that need a lot of thinking time (unless you distribute the
questionnaires before the event begins).
- Also avoid questions that encourage an instant, thoughtless response.

2. Allow enough time.


- If you put the questionnaires on the seats before the audience arrives, people will be
able to fill them in at dull moments during the event.

3. Encourage response.
- The more strongly the presenter encourages people to fill in the forms, the better the
response rate. However the presenter should ask respondents to be critical, and should not
collect the questionnaires in person.

4. Avoid barriers to completion.


- Provide pens for people who haven't brought one. Ballpoint pens are very cheap when
you buy 100 at a time. The cheaper they are, the less likely people are to take them away
- especially if they can't be closed, because you've removed the end-caps. Then you can
use them again next time. Another approach is to tell participants that you're giving them
a free pen, but that in return you'd like them to complete the questionnaire.
- Provide a surface to write on. If that's not possible, print the questionnaires on thick
paper, or in a small size, folded.
- Make the questionnaires easily readable, in the conditions that will exist at the venue.
For example, if the lighting is low, don't use thin, small type on dark-coloured paper.
(Garamond 10 point is about the worst; Comic Sans 12 point is among the best.)

5. Make the form easy to return.


- If audience members will have other papers, print the questionnaire distinctively, so that
it will stand out and be harder to lose - e.g. on bright yellow paper.
- Provide a collection box at every exit from the venue - or better still, have people
standing there to collect the forms. The collectors must not appear to look at the
completed forms, which might inhibit frankness.
- You could provide a reply-paid envelope for people who want to take their forms home
and fill them in later. Though people think they might do this, in practice hardly
anybody does. Giving everybody a reply-paid envelope only encourages them to take the
form home, so it's generally not a good idea. However it is a good idea to print a mailing
address on each questionnaire, for the few people who really will post them back. Better
still, use a freepost address, so they don't have to find a stamp. On a one-page
questionnaire on a business topic, mention your fax number, so that respondents can
easily fax it back to you.
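
To see why the two-thirds response rate mentioned above matters, here is a minimal numeric sketch (hypothetical figures) of the bounds that non-respondents place on an average rating:

```python
# Hypothetical figures: 100 attendees, two thirds respond, average rating 7/10.
attendees = 100
responses = 67
avg_rating = 7.0   # mean rating among respondents, 0-10 scale

# Extreme cases: every non-respondent would have rated 0, or every one 10.
lower = (responses * avg_rating + (attendees - responses) * 0) / attendees
upper = (responses * avg_rating + (attendees - responses) * 10) / attendees
print(f"True average lies between {lower:.1f} and {upper:.1f}")
# With only a 20% response rate, the same arithmetic gives a far wider band,
# which is why low response rates leave organizers guessing.
```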

Triad discussions
Though multiple-choice questions on an evaluation questionnaire enable the comparison
of different events (e.g. in a series of events), they don't provide useful information for
improving an event. If all you know is that 73% of respondents disliked the event, how
can you use this information? You can't, and that is why you need to include open-ended
questions.

But open-ended questions have their problems too. Because they rely on respondents to
think of their own answers, you tend to get a lot of unique responses. This makes it hard
to summarize the results. If 3% of respondents commented favourably about some aspect
of the event, does that mean the other 97% disliked it? Or didn't they even notice it?

Another common problem with open-ended questions is that people write cryptic
comments. They know what they mean, but to the person processing the completed
questionnaires the answer is unclear or ambiguous. This is usually because the answer is
too short.

After thinking about these problems, I developed a solution: triad discussions. It works
like this:

1. Everybody fills in a questionnaire in the normal way: alone. The questionnaire
includes quite a lot of open-ended questions, such as
- "What did you like most about this event?"
- "What did you like least about this event?"
- "How do you think this event could have been improved?"
- "Are there any other comments or suggestions you'd like to make?"
2. The participants then divide into triads: groups of three. One is appointed as
secretary (whoever admits having the most readable handwriting), and is given a
new blank questionnaire, printed on paper of a different colour.
3. Now the people in the triad discuss their answers. Each person in turn reads out
his or her own answers to a question. If one of the other two doesn't understand it,
they say so, and the person who wrote the response should add a few words to
make it clear.
4. To prevent people automatically agreeing, not wanting to upset the others in their
group, each of the three can be assigned a particular role: supporter, clarifier, and
critic.
- The person who first made the statement obviously believes it to be true. For
example, take the statement "Letters were tiny."
- The secretary's role can be to clarify the wording, and make the statement
clearer. The statement might become "Letters on slides were too small to read."
(For statements made by the secretary, somebody else must be the clarifier.)
- The third person's role is to challenge the statement, to limit its scope, or to
admit its subjectivity. After the challenge, the statement might read: "The letters
on some slides were too small to read from the back of the room."
5. When the statement is clear, the others vote on it. The secretary writes the
statement on the different-coloured group questionnaire, followed by the number
who agree (1, 2, or 3) - e.g. "The letters on some slides were too small to read
from the back of the room (3)".
6. The coloured questionnaires from all triads are posted on the walls of the room,
and all participants are given some sticky dots to vote on the statements they
agree with most strongly.
7. Because many statements will be similar, the researchers can later combine their
numbers of votes, as in the small tallying sketch after this list. (It would be better
to do this before the sticky-dot voting, but that's usually not feasible: while the
statements were being compared, the event would be finished and the
participants would be wanting to leave.)
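
A minimal sketch of that combining step; the statements and vote counts are invented, and the text matching is deliberately naive (in practice the researchers judge similarity by hand):

```python
# Combine votes for near-duplicate statements collected from different triads.
import string
from collections import Counter

triad_statements = [
    ("The letters on some slides were too small to read from the back of the room", 3),
    ("The letters on some slides were too small to read from the back of the room.", 2),
    ("The room was too warm", 1),
]

def normalize(text: str) -> str:
    # Case- and punctuation-insensitive key; the lowercased key is also printed below.
    return text.lower().translate(str.maketrans("", "", string.punctuation)).strip()

combined = Counter()
for statement, votes in triad_statements:
    combined[normalize(statement)] += votes

for statement, votes in combined.most_common():
    print(f"({votes}) {statement}")
```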

The advantages of the triad method are that:

• It clarifies the wording of statements.
• It increases the number of responses for each statement, making it easier to judge
the generality of the comments.
• It produces more comprehensive results, because during the discussion, people in
the triads often think of extra statements to add.

If you have read about our consensus group technique, you'll recognize the method of
triads as a miniature consensus group.

Groups of four or more people can also do this. However, the larger the group, the longer
it takes - and there's usually not much time left for an evaluation at the end of an event.
The triad process can often be finished in five minutes, for an event questionnaire with up
to about 6 open-ended questions.

Evaluation beyond the event


Whatever kind of event you are evaluating, consider why it was held. What was the
purpose of having that event in the first place? What are you hoping the audience will do
in response to the event? Even if the event is pure entertainment, you would probably like
the audience to follow it up in some way. Should they urge their friends to attend later
performances? Should they attend the next performance at your venue? Should they
change their lives because of this event?

If the event is some form of training, the Kirkpatrick model will apply. Donald
Kirkpatrick (in his book Evaluating Training Programs, published by Berrett-Koehler,
San Francisco, in 1994) described a 4-level model for evaluating the success of training...

1. Were the trainees pleased with the event? This is an aspect of customer
satisfaction, as commonly assessed in the kind of survey mentioned above.
2. How much did they learn? This can be assessed by educational tests, exams, etc.
3. How much did they change their behaviour? In the case of industrial training, this
can be assessed by supervisors, on-the-job performance measures, etc.
4. How much did that changed behaviour contribute towards the organization's
goals? (E.g. a training department would hope that its activities increased the
organization's operating efficiency).

With the Kirkpatrick model, success at each level depends largely on success at the
previous levels. If the trainees didn't like the course, they probably won't learn much. If
they don't learn much, they probably won't change their behaviour. And if they don't
change their behaviour, the organizational goals for the course probably won't be
achieved. Notice the word "probably" - there might be the odd exception, but it's much
harder to achieve a higher level of success if the lower levels haven't also been achieved.

The higher the level, the more difficult it is to be sure how much difference the course
made. Participant satisfaction is easily measured, but it's often not clear to what extent a
course might have increased a company's profit. For that reason, success at Kirkpatrick's
Level 4 is often judged too difficult to assess.

When we tried the Kirkpatrick model, we found that it omitted some important questions
that Kirkpatrick perhaps took for granted...

• Was the event actually held, in the way that was planned? (For a body that's
funding an event far away, organized by others, this can't be taken for granted.)
• Did people actually attend, of the type and in the numbers planned?
• What indirect influences did the event have - other than on those who attended?
(For example, more benefits may come from personal networks formed at a
conference than from participants acting on the conference papers.)

When answers have been gathered for the above questions, interesting cost ratios can be
worked out - such as how much per person attending it cost to achieve the goals of the
event. If that figure seems unduly high, it's worth considering a different method of
achieving the goals.

If you've read our page on program logic models you'll realize the direction this is
heading: there's nothing as useful as a logic model in evaluating the success of anything -
including a simple event.
