
WHAT DO WE MEAN BY COLLECTING DATA?

WHAT DO WE MEAN BY ANALYZING DATA?


WHY SHOULD YOU COLLECT AND ANALYZE
DATA FOR YOUR EVALUATION?
WHEN AND BY WHOM SHOULD DATA BE
COLLECTED AND ANALYZED?
HOW DO YOU COLLECT AND ANALYZE DATA?
WHAT DO WE MEAN BY COLLECTING
DATA?
Essentially, collecting data means putting your design for collecting
information into operation. You've decided how you're going to get
information (by direct observation, interviews, surveys, experiments and
testing, or other methods), and now you and/or other observers have to
implement your plan. There's a bit more to collecting data, however. If you
are conducting observations, for example, you'll have to define what you're
observing and arrange to make observations at the right times, so you
actually observe what you need to. You'll have to record the observations in
appropriate ways and organize them so they're optimally useful.
Recording and organizing data may take different forms, depending on the
kind of information you're collecting. The way you collect your data should
relate to how you're planning to analyze and use it. Regardless of which
method you decide to use, recording should be done concurrently with data
collection if possible, or soon afterwards, so that nothing gets lost and
memory doesn't fade.
Some of the things you might do with the information you collect
include:
Gathering together information from all sources and observations
Making photocopies of all recording forms, records, audio or video
recordings, and any other collected materials, to guard against loss,
accidental erasure, or other problems
Entering narratives, numbers, and other information into a computer
program, where they can be arranged and/or worked on in various ways
Performing any mathematical or similar operations needed to get
quantitative information ready for analysis. These might, for instance,
include entering numerical observations into a chart, table, or
spreadsheet, or figuring the mean (average), median (midpoint), and/or
mode (most frequently occurring) of a set of numbers.
Transcribing (making an exact, word-for-word text version of) the
contents of audio or video recordings
Coding data (translating data, particularly qualitative data that isn't
expressed in numbers, into a form that allows it to be processed by a
specific software program or subjected to statistical analysis)
Organizing data in ways that make them easier to work with. How you
do this will depend on your research design and your evaluation
questions. You might group observations by the dependent variable
(indicator of success) they relate to, by individuals or groups of
participants, by time, by activity, etc. You might also want to group
observations in several different ways, so that you can study
interactions among different variables.
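As a small illustration of the preparation steps above, here is a sketch in Python of figuring the mean, median, and mode of a set of numbers. The scores are invented for the example:

```python
import statistics

# Hypothetical numerical observations, e.g., test scores collected
# during an evaluation (illustrative data only).
scores = [72, 85, 85, 90, 68, 77, 85, 91, 72, 88]

mean = statistics.mean(scores)      # average
median = statistics.median(scores)  # midpoint
mode = statistics.mode(scores)      # most frequently occurring value

print(f"mean={mean}, median={median}, mode={mode}")
```

The same three summary figures could just as easily be computed in a spreadsheet; the point is to get quantitative information ready for analysis in a consistent form.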
There are two kinds of variables in research. An independent variable (the
intervention) is a condition implemented by the researcher or community to
see if it will create change and improvement. This could be a program,
method, system, or other action. A dependent variable is what may change as
a result of the independent variable or intervention. A dependent
variable could be a behavior, outcome, or other condition. A smoking
cessation program, for example, is an independent variable that may change
group members' smoking behavior, the primary dependent variable.


WHAT DO WE MEAN BY ANALYZING
DATA?
Analyzing information involves examining it in ways that reveal the
relationships, patterns, trends, etc. that can be found within it. That may mean
subjecting it to statistical operations that can tell you not only what kinds of
relationships seem to exist among variables, but also to what level you can
trust the answers you're getting. It may mean comparing your information to
that from other groups (a control or comparison group, statewide figures,
etc.), to help draw some conclusions from the data. The point, in terms of
your evaluation, is to get an accurate assessment in order to better understand
your work and its effects on those you're concerned with, or in order to better
understand the overall situation.
There are two kinds of data you're apt to be working with, although not all
evaluations will necessarily include both. Quantitative data refer to the
information that is collected as, or can be translated into, numbers, which can
then be displayed and analyzed mathematically. Qualitative data are
collected as descriptions, anecdotes, opinions, quotes, interpretations, etc.,
and are generally either not able to be reduced to numbers, or are considered
more valuable or informative if left as narratives. As you might expect,
quantitative and qualitative information needs to be analyzed differently.
QUANTITATIVE DATA
Quantitative data are typically collected directly as numbers. Some
examples include:
The frequency (rate, duration) of specific behaviors or conditions
Test scores (e.g., scores/levels of knowledge, skill, etc.)
Survey results (e.g., reported behavior, or outcomes to environmental
conditions; ratings of satisfaction, stress, etc.)
Numbers or percentages of people with certain characteristics in a
population (diagnosed with diabetes, unemployed, Spanish-speaking,
under age 14, grade of school completed, etc.)
Data can also be collected in forms other than numbers, and turned into
quantitative data for analysis. Researchers can count the number of times an
event is documented in interviews or records, for instance, or assign numbers
to the levels of intensity of an observed event or behavior. For instance,
community initiatives often want to document the amount and intensity of
environmental changes they bring about: the new programs and policies that
result from their efforts. Whether or not this kind of translation is necessary
or useful depends on the nature of what you're observing and on the kinds of
questions your evaluation is meant to answer.
Quantitative data is usually subjected to statistical procedures such as
calculating the mean or average number of times an event or behavior occurs
(per day, month, year). These operations, because numbers are "hard" data
and not interpretation, can give definitive, or nearly definitive, answers to
different questions. Various kinds of quantitative analysis can indicate
changes in a dependent variable related to frequency, duration, timing
(when particular things happen), intensity, level, etc. They can allow you to
compare those changes to one another, to changes in another variable, or to
changes in another population. They might be able to tell you, at a particular
degree of reliability, whether those changes are likely to have been caused by
your intervention or program, or by another factor, known or unknown. And
they can identify relationships among different variables, which may or may
not mean that one causes another.
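One common statistical operation of the kind described above is comparing a dependent variable across two groups. The sketch below computes the difference in group means and a Welch's t statistic (roughly, how many standard errors apart the two means are); the measurements and group labels are invented for the example:

```python
import math
import statistics

# Hypothetical outcome measurements for an intervention group and a
# comparison group (illustrative numbers only).
intervention = [12, 15, 14, 16, 13, 17, 15, 14]
comparison = [10, 11, 12, 9, 13, 10, 11, 12]

m1, m2 = statistics.mean(intervention), statistics.mean(comparison)
v1, v2 = statistics.variance(intervention), statistics.variance(comparison)
n1, n2 = len(intervention), len(comparison)

# Welch's t statistic: a large absolute value suggests the difference
# between group means is unlikely to be chance alone.
t = (m1 - m2) / math.sqrt(v1 / n1 + v2 / n2)
print(f"difference in means = {m1 - m2}, t = {t:.2f}")
```

In practice you would look the t value up against the appropriate degrees of freedom (or let statistical software do so) to get a significance level; the sketch only shows the shape of the comparison.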
QUALITATIVE DATA
Unlike numbers or "hard" data, qualitative information tends to be "soft,"
meaning it can't always be reduced to something definite. That is in some
ways a weakness, but it's also a strength. A number may tell you how well a
student did on a test; the look on her face after seeing her grade, however,
may tell you even more about the effect of that result on her. That look can't
be translated to a number, nor can a teacher's knowledge of that student's
history, progress, and experience, all of which go into the teacher's
interpretation of that look. And that interpretation may be far more valuable
in helping that student succeed than knowing her grade or numerical score on
the test.
Qualitative data can sometimes be changed into numbers, usually by counting
the number of times specific things occur in the course of observations or
interviews, or by assigning numbers or ratings to dimensions (e.g.,
importance, satisfaction, ease of use).
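Counting occurrences is often the simplest such conversion. A minimal sketch in Python, using hypothetical themes that coders might have assigned to interview comments:

```python
from collections import Counter

# Hypothetical coded themes pulled from interview transcripts; each
# entry is the theme assigned to one participant comment.
themes = ["transport", "cost", "cost", "childcare",
          "cost", "transport", "hours", "cost"]

# Counting how often each theme was mentioned turns the qualitative
# material into quantitative data that can be charted or compared.
counts = Counter(themes)
print(counts.most_common())
```

The resulting counts can then be handled like any other quantitative data, e.g., graphed over time or compared across participant groups.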
Qualitative data can sometimes tell you things that quantitative data can't. It
may reveal why certain methods are working or not working, whether part of
what you're doing conflicts with participants' culture, what participants see
as important, etc. It may also show you patterns in behavior, physical or
social environment, or other factors that the numbers in your quantitative
data don't, and occasionally even identify variables that researchers weren't
aware of.
It is often helpful to collect both quantitative and qualitative information.
Quantitative analysis is considered to be "objective" (without any human bias
attached to it) because it depends on the comparison of numbers according
to mathematical computations. Analysis of qualitative data is generally
accomplished by methods that are more subjective (dependent on people's
opinions, knowledge, assumptions, and inferences, and therefore biases) than
those of quantitative data. The identification of patterns, the interpretation of
people's statements or other communication, the spotting of trends: all of
these can be influenced by the way the researcher sees the world. Be aware,
however, that quantitative analysis is influenced by a number of subjective
factors as well. What the researcher chooses to measure, the accuracy of the
observations, and the way the research is structured to ask only particular
questions can all influence the results, as can the researcher's understanding
and interpretation of the subsequent analyses.
WHEN AND BY WHOM SHOULD DATA BE
COLLECTED AND ANALYZED?
As far as data collection goes, the "when" part of this question is relatively
simple: data collection should start no later than when you begin your work
(or before you begin, in order to establish a baseline or starting point) and
continue throughout. Ideally, you should collect data for a period of time
before you start your program or intervention in order to determine if there
are any trends in the data before the onset of the intervention. Additionally, in
order to gauge your program's longer-term effects, you should collect follow-
up data for a period of time following the conclusion of the program.
The timing of analysis can be looked at in at least two ways: one is that it's
best to analyze your information when you've collected all of it, so you can
look at it as a whole. The other is that if you analyze it as you go along,
you'll be able to adjust your thinking about what information you actually
need, and to adjust your program to respond to the information you're
getting. Which of these approaches you take depends on your research
purposes. If you're more concerned with a summative evaluation (finding out
whether your approach was effective), you might be more inclined toward
the first. If you're oriented toward improvement (a formative evaluation),
we recommend gathering information along the way. Both approaches are
legitimate, but ongoing data collection and review can particularly lead to
improvements in your work.
The "who" question can be more complex. If you're reasonably familiar with
statistics and statistical procedures, and you have the resources in time,
money, and personnel, it's likely that you'll do a somewhat formal study,
using standard statistical tests. (There's a great deal of software, both for
sale and free or open-source, available to help you.)
If that's not the case, you have some choices:
You can hire or find a volunteer outside evaluator, such as from a
nearby college or university, to take care of data collection and/or
analysis for you.
You can conduct a less formal evaluation. Your results may not be as
sophisticated as if you subjected them to rigorous statistical procedures,
but they can still tell you a lot about your program. Just the numbers
(the number of dropouts and when most dropped out, for instance, or
the characteristics of the people you serve) can give you important and
usable information.
You can try to learn enough about statistics and statistical software to
conduct a formal evaluation yourself. (Take a course, for example.)
You can collect the data and then send it off to someone (a university
program, a friendly statistician or researcher, or someone you hire) to
process it for you.
You can collect and rely largely on qualitative data. Whether this is an
option depends to a large extent on what your program is about. You
wouldn't want to conduct a formal evaluation of the effectiveness of a new
medication using only qualitative data, but you might be able to draw
some reasonable conclusions about use or compliance patterns from
qualitative information.
If possible, use a randomized or closely matched control group for
comparison. If your control is properly structured, you can draw some
fairly reliable conclusions simply by comparing its results to those of
your intervention group. Again, these results won't be as reliable as if
the comparison were made using statistical procedures, but they can
point you in the right direction. It's fairly easy to tell whether or not
there's a major difference between the numbers for the two or more
groups. If 95% of the students in your class passed the test, and only
60% of those in a similar but uninstructed control group did, you can be
pretty sure that your class made a difference in some way, although you
may not be able to tell exactly what it was that mattered. By the same
token, if 72% of your students passed and 70% of the control group did
as well, it seems pretty clear that your instruction had essentially no
effect, if the groups were starting from approximately the same place.
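The pass-rate comparison above amounts to simple arithmetic. A short Python sketch, using hypothetical raw counts chosen to match the 95%-versus-60% figures in the text:

```python
# Hypothetical pass counts for an instructed class and an uninstructed
# control group of the same size (illustrative numbers only).
class_passed, class_total = 19, 20
control_passed, control_total = 12, 20

class_rate = class_passed / class_total        # 95%
control_rate = control_passed / control_total  # 60%

# A large gap between the rates suggests the class made a difference,
# though it cannot by itself say exactly what mattered.
print(f"class: {class_rate:.0%}, control: {control_rate:.0%}, "
      f"difference: {class_rate - control_rate:.0%}")
```

With counts this small, a formal test of the difference in proportions would still be needed before claiming statistical significance; the arithmetic only points you in the right direction.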
Who should actually collect and analyze data also depends on the form of
your evaluation. If you're doing a participatory evaluation, much of the data
collection (and analysis) will be done by community members or program
participants themselves. If you're conducting an evaluation in which the
observation is specialized, the data collectors may be staff members,
professionals, highly trained volunteers, or others with specific skills or
training (graduate students, for example). Analysis also could be
accomplished by a participatory process. Even where complicated statistical
procedures are necessary, participants and/or community members might be
involved in sorting out what those results actually mean once the math is
done and the results are in. Another way analysis can be accomplished is by
professionals or other trained individuals, depending upon the nature of the
data to be analyzed, the methods of analysis, and the level of sophistication
aimed at in the conclusions.
HOW DO YOU COLLECT AND ANALYZE
DATA?
Whether your evaluation includes formal or informal research procedures,
you'll still have to collect and analyze data, and there are some basic steps
you can take to do so.
IMPLEMENT YOUR MEASUREMENT SYSTEM
We've previously discussed designing an observational system to gather
information. Now it's time to put that system in place.
Clearly define and describe what measurements or observations are
needed. The definition and description should be clear enough to enable
observers to agree on what they're observing and reliably record data in
the same way.
Select and train observers. Particularly if this is part of a participatory
process, observers need training to know what to record; to recognize
key behaviors, events, and conditions; and to reach an acceptable level
of inter-rater reliability (agreement among observers).
Conduct observations at the appropriate times for the appropriate
period of time. This may include reviewing archival material;
conducting interviews, surveys, or focus groups; engaging in direct
observation; etc.
Record data in the agreed-upon ways. These may include pencil and
paper, computer (using a laptop or handheld device in the field,
entering numbers into a program, etc.), audio or video, journals, etc.
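One quick check on the inter-rater reliability mentioned above is percent agreement: the share of observations on which two observers recorded the same thing. A sketch in Python, with invented observer ratings:

```python
# Hypothetical ratings from two observers coding the same ten events.
observer_a = ["on-task", "off-task", "on-task", "on-task", "off-task",
              "on-task", "on-task", "off-task", "on-task", "on-task"]
observer_b = ["on-task", "off-task", "on-task", "off-task", "off-task",
              "on-task", "on-task", "off-task", "on-task", "on-task"]

# Percent agreement: the fraction of events both observers coded the
# same way. Low agreement means observers need retraining or the
# definitions of what to record need tightening.
agreements = sum(a == b for a, b in zip(observer_a, observer_b))
percent_agreement = agreements / len(observer_a)
print(f"agreement: {percent_agreement:.0%}")
```

Percent agreement is the simplest such measure; more demanding statistics (Cohen's kappa, for instance) also correct for agreement that would occur by chance.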
ORGANIZE THE DATA YOU'VE COLLECTED
How you do this depends on what you're planning to do with it, and on
what you're interested in.
Enter any necessary data into the computer. This may mean simply
typing comments, descriptions, etc., into a word processing program, or
entering various kinds of information (possibly including audio and
video) into a database, spreadsheet, a GIS (Geographic Information
Systems) program, or some other type of software or file.
Transcribe any audio- or videotapes. This makes them easier to work
with and copy, and allows the opportunity to clarify any hard-to-
understand passages of speech.
Score any tests and record the scores appropriately.
Sort your information in ways appropriate to your interest. This may
include sorting by category of observation, by event, by place, by
individual, by group, by the time of observation, or by a combination or
some other standard.
When possible, necessary, and appropriate, transform qualitative into
quantitative data. This might involve, for example, counting the
number of times specific issues were mentioned in interviews, or how
often certain behaviors were observed.
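The sorting step above can be sketched in a few lines of Python; the records and categories here are hypothetical:

```python
from collections import defaultdict

# Hypothetical observation records: (participant group, score).
records = [("youth", 7), ("adult", 5), ("youth", 9),
           ("adult", 6), ("youth", 8)]

# Sort the data by one standard of interest, here the participant
# group; the same loop works for sorting by event, place, or time.
by_group = defaultdict(list)
for group, score in records:
    by_group[group].append(score)

print(dict(by_group))
```

Grouping the same records several different ways (by time as well as by group, say) makes it easier to study interactions among variables later.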
CONDUCT DATA GRAPHING, VISUAL INSPECTION,
STATISTICAL ANALYSIS, OR OTHER OPERATIONS
ON THE DATA AS APPROPRIATE
We've referred several times to statistical procedures that you can apply to
quantitative data. If you have the right numbers, you can find out a great deal
about whether your program is causing or contributing to change and
improvement, what that change is, whether there are any expected or
unexpected connections among variables, how your group compares to
another you're measuring, etc.
There are other excellent possibilities for analysis besides statistical
procedures, however. A few include:
Simple counting, graphing and visual inspection of frequency or rates
of behavior, events, etc., over time.
Using visual inspection of patterns over time to identify
discontinuities (marked increases, decreases) in the measures over time
(sessions, weeks, months).
Calculating the mean (average), median (midpoint), and/or mode (most
frequent) of a series of measurements or observations. What was the
average blood pressure, for instance, of people who exercised 30
minutes a day at least five days a week, as opposed to that of people
who exercised two days a week or less?
Using qualitative interviews, conversations, and participant
observation to observe (and track changes in) the people or
situation. Journals can be particularly revealing in this area because
they record peoples experiences and reflections over time.
Finding patterns in qualitative data. If many people refer to similar
problems or barriers, these may be important in understanding the
issue, determining what works or doesn't work and why, or more.
Comparing actual results to previously determined goals or
benchmarks. One measure of success might be meeting a goal for
planning or program implementation, for example.
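The blood-pressure question above, for instance, reduces to computing a mean for each group. A Python sketch with invented readings:

```python
import statistics

# Hypothetical systolic blood pressure readings, grouped by how often
# participants exercised (illustrative numbers only).
frequent = [118, 122, 115, 121, 119]    # 30 min/day, 5+ days a week
infrequent = [134, 129, 138, 131, 133]  # 2 days a week or less

avg_frequent = statistics.mean(frequent)
avg_infrequent = statistics.mean(infrequent)
print(f"5+ days/week: {avg_frequent}, <=2 days/week: {avg_infrequent}")
```

Plotting the same readings over time, rather than averaging them, would support the visual-inspection approaches listed above.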
INTERPRET THE RESULTS
Once you've organized your results and run them through whatever statistical
or other analysis you've planned for, it's time to figure out what they mean
for your evaluation. Probably the most common question that evaluation
research is directed toward is whether the program being evaluated "works" or
"makes a difference." In research terms, that often translates to "What were the
effects of the independent variable (the program, intervention, etc.) on the
dependent variable(s) (the behavior, conditions, or other factors it was meant
to change)?" There are a number of possible answers to this question:
Your program had exactly the effects on the dependent variable(s) you
expected and hoped it would. Statistics or other analysis showed clear
positive effects at a high level of significance for the people in your
program, and (if you used a multiple-group design) none, or far fewer,
of the same effects for a similar control group and/or for a group that
received a different intervention with the same purpose. Your early
childhood education program, for instance, greatly increased
development outcomes for children in the community, and also
contributed to an increase in the percentage of children succeeding in
school.
Your program had no effect. Your program produced no significant
results on the dependent variable, whether alone or compared to other
groups. This would mean no change as a result of your program or
intervention.
Your program had a negative effect. For instance, intimate partner
violence increased (or at least appeared to) as a result of your
intervention. (It is relatively common for reported events, such as
violence or injury, to increase when the intervention results in
improved surveillance and ease of reporting).
Your program had the effects you hoped for and other effects as well.
o These effects might be positive. Your youth violence prevention
program, for instance, might have resulted in greatly reduced
violence among teens, and might also have resulted in
significantly improved academic performance for the kids
involved.
o These effects might be neutral. The same youth violence
prevention program might somehow result in youth watching TV
more often after school.
o These effects might be negative. (These effects are usually called
"unintended consequences.") Youth violence might decrease
significantly, but the incidence of teen pregnancies or alcohol
consumption among youth in the program might increase
significantly at the same time.
o These effects might be multiple, or mixed. For instance, a program
to reduce HIV/AIDS might lower rates of unprotected sex but
might also increase conflict and instances of partner violence.
Your program had no effect or a negative effect and other effects
as well. As with programs with positive effects, these might be
positive, neutral, or negative; single or multiple; or consistent or
mixed.
Careful and insightful interpretation of your data may allow you to answer
questions like these. You may be able to use correlations, for instance, to
generate hypotheses about your results. If positive or negative changes in
particular variables are consistently associated with positive or negative
changes in other variables, the two may be connected. (The word "may" is
important here. The two may be connected, but they may not, or both may be
related to a third variable that you're not aware of or that you consider
trivial.) Such a connection can point the way toward a factor (e.g., access to
support) that is causing the changes in both variables, and that must be
addressed to make your program successful. Correlations may also indicate
patterns in your data, or may lead to an unexpected way of looking at the
issue you're addressing.
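A correlation of the kind described can be computed directly. The sketch below calculates a Pearson correlation coefficient in plain Python; the paired measurements are invented for illustration:

```python
import math

# Hypothetical paired measurements: hours of program attendance and
# the test-score change for each participant (illustrative data only).
attendance = [2, 5, 8, 11, 14]
score_change = [1, 3, 4, 6, 8]

n = len(attendance)
mean_x = sum(attendance) / n
mean_y = sum(score_change) / n

# Pearson r: covariance of the two variables divided by the product of
# their spreads. Values near +1 or -1 suggest a relationship worth
# investigating; correlation alone does not show that one variable
# causes the other.
cov = sum((x - mean_x) * (y - mean_y)
          for x, y in zip(attendance, score_change))
spread_x = math.sqrt(sum((x - mean_x) ** 2 for x in attendance))
spread_y = math.sqrt(sum((y - mean_y) ** 2 for y in score_change))
r = cov / (spread_x * spread_y)
print(f"r = {r:.3f}")
```

A strong r here would only generate the hypothesis that attendance and improvement are connected; ruling out third variables takes further design work, as the passage above cautions.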
You can often use qualitative data to understand the meaning of an
intervention, and people's reactions to the results. The observation that
participants are continually suffering from a variety of health problems may
be traced, through qualitative data, to nutrition problems (due either to
poverty or ignorance), to lack of access to health services, or to cultural
restrictions (some Muslim women may be unwilling, or unable because of
family prohibition, to accept care and treatment from male doctors, for
example).


Analyzing and interpreting the data you've collected brings you, in a sense,
back to the beginning. You can use the information you've gained to adjust
and improve your program or intervention, evaluate it again, and use that
information to adjust and improve it further, for as long as it runs. You have
to keep up the process to ensure that you're doing the best work you can and
encouraging changes in individuals, systems, and policies that make for a
better and healthier community.
