
Data collection is the process of gathering and measuring information on variables of interest, in an established systematic fashion that enables one to answer stated research questions, test hypotheses, and evaluate outcomes.
The data collection component of research is common to all fields of study including physical and social sciences,
humanities, business, etc. While methods vary by discipline, the emphasis on ensuring accurate and honest
collection remains the same.

The importance of ensuring accurate and appropriate data collection


Regardless of the field of study or preference for defining data (quantitative, qualitative), accurate data collection is
essential to maintaining the integrity of research. Both the selection of appropriate data collection instruments
(existing, modified, or newly developed) and clearly delineated instructions for their correct use reduce the likelihood
of errors occurring.
Consequences from improperly collected data include:

inability to answer research questions accurately

inability to repeat and validate the study

distorted findings resulting in wasted resources

misleading other researchers to pursue fruitless avenues of investigation

compromising decisions for public policy

causing harm to human participants and animal subjects

While the degree of impact from faulty data collection may vary by discipline and the nature of investigation, there is
the potential to cause disproportionate harm when these research results are used to support public policy
recommendations.

Issues related to maintaining integrity of data collection:


The primary rationale for preserving data integrity is to support the detection of errors in the data collection process,
whether they are made intentionally (deliberate falsifications) or not (systematic or random errors).
Most, Craddick, Crawford, Redican, Rhodes, Rukenbrod, and Laws (2003) describe quality assurance and quality
control as two approaches that can preserve data integrity and ensure the scientific validity of study results. Each
approach is implemented at different points in the research timeline (Whitney, Lind, Wahl, 1998):
1. Quality assurance - activities that take place before data collection begins
2. Quality control - activities that take place during and after data collection

Quality Assurance
Since quality assurance precedes data collection, its main focus is 'prevention' (i.e., forestalling problems with data
collection). Prevention is the most cost-effective activity to ensure the integrity of data collection. This proactive
measure is best demonstrated by the standardization of protocol developed in a comprehensive and detailed
procedures manual for data collection. Poorly written manuals increase the risk of failing to identify problems and
errors early in the research endeavor. These failures may be demonstrated in a number of ways:

Uncertainty about the timing, methods, and identity of person(s) responsible for reviewing data

Partial listing of items to be collected

Vague description of data collection instruments to be used in lieu of rigorous step-by-step instructions on
administering tests

Failure to identify specific content and strategies for training or retraining staff members responsible for data
collection

Obscure instructions for using, making adjustments to, and calibrating data collection equipment (if
appropriate)

No identified mechanism to document changes in procedures that may evolve over the course of the investigation.

An important component of quality assurance is developing a rigorous and detailed recruitment and training plan.
Implicit in training is the need to effectively communicate the value of accurate data collection to trainees (Knatterud,
Rockhold, George, Barton, Davis, Fairweather, Honohan, Mowery, O'Neill, 1998). The training aspect is particularly
important to address the potential problem of staff who may unintentionally deviate from the original protocol. This
phenomenon, known as drift, should be corrected with additional training, a provision that should be specified in the
procedures manual.
Given the range of qualitative research strategies (non-participant/participant observation, interview, archival, field study, ethnography, content analysis, oral history, biography, unobtrusive research), it is difficult to make generalized statements about how one should establish a research protocol in order to facilitate quality assurance. Certainly, researchers conducting non-participant/participant observation may have only the broadest research questions to guide the initial research efforts. Since the researcher is the main measurement device in such a study, there are often few or no other data collection instruments. Indeed, instruments may need to be developed on the spot to accommodate unanticipated findings.

Quality Control
While quality control activities (detection/monitoring and action) occur during and after data collection, the details
should be carefully documented in the procedures manual. A clearly defined communication structure is a necessary
pre-condition for establishing monitoring systems. There should not be any uncertainty about the flow of information
between principal investigators and staff members following the detection of errors in data collection. A poorly
developed communication structure encourages lax monitoring and limits opportunities for detecting errors.
Detection or monitoring can take the form of direct staff observation during site visits, conference calls, or regular and frequent reviews of data reports to identify inconsistencies, extreme values or invalid codes. While site visits may not be appropriate for all disciplines, failure to regularly audit records, whether quantitative or qualitative, will make it difficult for investigators to verify that data collection is proceeding according to procedures established in the manual. In addition, if the structure of communication is not clearly delineated in the procedures manual, transmission of any change in procedures to staff members can be compromised.
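As a minimal sketch of this kind of automated report review, the short Python example below flags extreme values and invalid codes in a batch of collected records. The field names, valid ranges, and code lists are hypothetical; in a real study they would come from the procedures manual.

# Minimal sketch of a quality-control check on collected records.
# Field names, ranges, and valid codes are hypothetical examples;
# in practice they would be taken from the procedures manual.

records = [
    {"id": 1, "age": 34, "sex": "F", "systolic_bp": 118},
    {"id": 2, "age": 210, "sex": "M", "systolic_bp": 125},   # extreme value
    {"id": 3, "age": 41, "sex": "X", "systolic_bp": 122},    # invalid code
]

VALID_RANGES = {"age": (0, 120), "systolic_bp": (70, 250)}
VALID_CODES = {"sex": {"F", "M"}}

def check_record(record):
    """Return a list of problems found in one record."""
    problems = []
    for field, (low, high) in VALID_RANGES.items():
        if not (low <= record[field] <= high):
            problems.append(f"{field}={record[field]} outside {low}-{high}")
    for field, codes in VALID_CODES.items():
        if record[field] not in codes:
            problems.append(f"{field}={record[field]!r} is not a valid code")
    return problems

for rec in records:
    issues = check_record(rec)
    if issues:
        print(f"record {rec['id']}: " + "; ".join(issues))

Flagged records would then be routed back to the responsible staff member through the communication structure described above.
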
Quality control also identifies the required responses, or actions, necessary to correct faulty data collection practices and to minimize future occurrences. These actions are less likely to occur if data collection procedures are vaguely written and the necessary steps to minimize recurrence are not implemented through feedback and education (Knatterud et al., 1998).
Examples of data collection problems that require prompt action include:

errors in individual data items

systematic errors

violation of protocol

problems with individual staff or site performance

fraud or scientific misconduct

In the social/behavioral sciences, where primary data collection involves human subjects, researchers are taught to incorporate one or more secondary measures that can be used to verify the quality of information being collected from the human subject. For example, a researcher conducting a survey might be interested in gaining better insight into the occurrence of risky behaviors among young adults, as well as the social conditions that increase the likelihood and frequency of these risky behaviors.
To verify data quality, respondents might be queried about the same information but asked at different points of the survey and in a number of different ways. Measures of social desirability might also be used to gauge the honesty of responses. There are two points that need to be raised here: 1) cross-checks within the data collection process, and 2) data quality being as much an observation-level issue as it is a complete data set issue. Thus, data quality should be addressed for each individual measurement, for each individual observation, and for the entire data set.
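As a minimal sketch of such a cross-check, the Python snippet below compares two hypothetical survey items that ask about the same behavior in different ways and flags respondents whose answers disagree; the item names and the consistency rule are invented for illustration only.

# Sketch of a cross-check between two survey items that ask for the
# same information in different ways. Item names and the agreement
# rule are hypothetical illustrations.

responses = [
    {"respondent": "A", "drinks_per_week": 4, "drinks_yesterday": 1},
    {"respondent": "B", "drinks_per_week": 0, "drinks_yesterday": 3},  # inconsistent
]

def is_consistent(answer):
    """Flag respondents who report zero weekly drinking but some drinking yesterday."""
    return not (answer["drinks_per_week"] == 0 and answer["drinks_yesterday"] > 0)

flagged = [a["respondent"] for a in responses if not is_consistent(a)]
print("Respondents needing review:", flagged)   # -> ['B']

Checks like this operate at the level of the individual observation, while summaries of the flag rates address quality for the data set as a whole.
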
Each field of study has its preferred set of data collection instruments. The hallmark of laboratory sciences is the
meticulous documentation of the lab notebook while social sciences such as sociology and cultural anthropology may
prefer the use of detailed field notes. Regardless of the discipline, comprehensive documentation of the collection
process before, during and after the activity is essential to preserving data integrity.
References:
Knatterud, G.L., Rockhold, F.W., George, S.L., Barton, F.B., Davis, C.E., Fairweather, W.R., Honohan, T., Mowery, R., O'Neill, R. (1998). Guidelines for quality assurance in multicenter trials: a position paper. Controlled Clinical Trials, 19: 477-493.
Most, M.M., Craddick, S., Crawford, S., Redican, S., Rhodes, D., Rukenbrod, F., Laws, R. (2003). Dietary quality assurance processes of the DASH-Sodium controlled diet study. Journal of the American Dietetic Association, 103(10): 1339-1346.
Whitney, C.W., Lind, B.K., Wahl, P.W. (1998). Quality assurance and quality control in longitudinal studies. Epidemiologic Reviews, 20(1): 71-80.

Once a research question has been agreed upon, a research design is formulated which includes an appropriate and effective method for collecting data. As Sherlock Holmes rightly said, it is a capital mistake to theorize before one has data.
In business research, data is collected from various sources and a variety of methods or techniques are used in its collection. It may come from a secondary source or a primary source. In the case of the latter, it could be a census or survey, a laboratory experiment or a field experiment, open or hidden observation. Each method has pros and cons. When the population is large, it is neither necessary nor advisable to cover each and every member; a sample will suffice. Sampling is quicker, cheaper and, if well designed, it can provide precise information about the characteristics of the population. On the other hand, causal inference can only be drawn from experiments. So the choice of method depends upon time, money and objectives.

OBSERVATIONS

Observation is a primary method of collecting data by human, mechanical, electrical or electronic means, with direct or indirect contact. As Langley P. puts it, observation involves looking and listening very carefully. We all watch other people sometimes, but we do not usually watch them in order to discover particular information about their behavior. This is what observation in social science involves.
Observation is the main source of information in field research. The researcher goes into the field and observes the conditions in their natural state.
There are many types of observation: direct or indirect, participant or non-participant, obtrusive or non-obtrusive, structured or non-structured. Observation is important because the actual behavior of people is observed, not what people say they did or feel. For example, people value health, yet they will pick up food they know to be fatty. Observation is useful when the subject cannot provide information, or can only provide inaccurate information, such as people addicted to drugs. At the same time, however, observation gives the researcher no insight into what people may be thinking.

OBTRUSIVE AND UNOBTRUSIVE


Obtrusive means visible, thrusting out or evident, like a class monitor, traffic warden or inspector. On the other hand, unobtrusive means hidden, camouflaged or low-key. In an unobtrusive method, the researcher is not required to intrude, which also reduces bias. It enables the researcher to obtain data from a source other than people; for example, instead of asking people which soft drink they like, an unobtrusive approach would be to collect empty cans from garbage dumps and analyse their brands. There are other such examples: to ascertain the popularity of a journal, one can observe its wear and tear in a library. Entry counters in a supermarket provide strong evidence of which side customers come in from. Likewise, the number of hits on a website can be related to its usefulness or popularity. Hidden cameras can show consumer behavior in the store.

PARTICIPATIVE OR NON-PARTICIPATIVE
In participative observation, the observer becomes a participant in the program, culture or context being observed. It may require a long time, as the researcher needs to become accepted as a natural part of the culture. On the other hand, if one needs to observe child-mother interactions, one would resort to non-participative observation, looking through a one-way mirror to note verbal and non-verbal cues given by the mother and the child.

IN-DEPTH TECHNIQUES
In a survey, usually general questions are asked to learn what customers or subjects do and think. But if one wants to know why they feel that way, one has to conduct in-depth research. In a survey, answers may depend on the mood of the respondent; as such, the survey shows how one feels at one particular time. In in-depth research, by contrast, long and probing interviews are conducted to find out about customer satisfaction and loyalty, usage, awareness, brand recognition, etc., as discussed below:

FOCUS GROUP
A group is formed of 8 to 12 persons, selected keeping in view the targeted market. The group members are asked about their attitude towards a concept, product, service, packaging or advertisement. Questions are asked in an interactive group setting where members are free to talk to each other. A moderator guides the discussion. Through a one-way mirror, the client or its representative observes the discussion and interprets facial expressions and body language.
There are some drawbacks. The moderator or researcher has less control, and the discussion can drift into irrelevant topics. Moreover, individual members may consciously or unconsciously conform to what they perceive to be the consensus of the group, a situation called groupthink.
Technology has given rise to the modern group, where group members participate online and can share financial and operating data, pictures, voices, drawings, etc.

PANELS
These are more or less like focus groups, but focus groups are formed for a one-time discussion of a particular issue, whereas panels are long-term in nature and meet frequently to resolve an issue. Panels can consist of current or potential customers, and can be static or dynamic (with members coming and going).
Another difference is that panels are selected by the organizers according to certain criteria such as education, exposure and interest. Of course, there are exceptions, such as the Microsoft panel for research and evaluation of its software, which is formed through open invitation.

IN-DEPTH INTERVIEWS
In-depth interviews, also called one-to-one interviews, are expensive in terms of time and money but are good for exploring several factors. Problems identified in an interview may be symptoms of a more serious problem.
The interviews may be conducted face to face, by telephone, or as computer-assisted interviews. Nowadays, television interviews have also become common.
But interviews are fraught with bias from three sources: the interviewer, the interviewee and the interview setting. The interviewer may misinterpret the response or distort it while writing it down, or may unintentionally encourage certain responses through gestures and facial expressions. The interviewee may not give his or her true opinion, or may avoid difficult questions. The setting may be good or bad, creating comfort or discomfort; it may be open or in the presence of colleagues or a senior, or the level of trust may be inadequate.
In order to minimize bias, the interviewer should have knowledge, skill and confidence. Rapport and
trust should be established in the interview.

PROJECTIVE METHODS
A projective method is a psychological test in which a subject's responses to ambiguous or unstructured standard stimuli, such as a series of cartoons, abstract patterns, or incomplete sentences, are analyzed in order to determine underlying personality traits and feelings. It entails indirect questioning, which enables the respondent to project beliefs and feelings onto a third party. The respondents are expected to interpret the situation through their own experience, attitude and personality and to express hidden opinions and emotions.
Of the many techniques, word association, sentence completion and ink-blot tests are very common. In these techniques, both verbal and non-verbal responses (hesitation, time lag and facial expression) are noted and interpreted.
Such tests are useful for finding out consumer preferences, buying attitudes and behavior. Eventually, these findings are used for product development or for finding out the reason for the failure of an apparently sound product.

FIELD AND LAB EXPERIMENTS


Considerable data is generated or collected through experimentation in business research. A bank may conduct an experiment to learn what attracts depositors: profit, security or liquidity. In a lab experiment, a group of 50 participants would be given chips representing money and would be shown three banks, of which one offers the highest profit, the second is more "secure" and the third provides liquidity through ATMs and a quick cheque-processing system. The participants would be asked to deposit their chips with any of them. The total with each bank would be calculated and the chips returned to the participants. In the next round, the three banks would be rated equal except that one would be giving high returns, and the participants would again be asked to make their deposits. The experiment would go on, and the data generated may point out a cause-and-effect relationship between any of the three motives and deposits.
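To make the round-by-round tallying concrete, here is a minimal Python sketch of counting the chips deposited with each bank in one round; the participant choices and chip counts are hypothetical stand-ins for what a real session would produce.

# Sketch of tallying one round of the hypothetical lab experiment.
# Each participant deposits all of their chips with one bank;
# the choices below are invented for illustration only.
from collections import Counter

chips_per_participant = 10
choices = ["high_profit"] * 28 + ["secure"] * 15 + ["liquid"] * 7   # 50 participants

totals = Counter()
for bank in choices:
    totals[bank] += chips_per_participant

print(totals)   # e.g. Counter({'high_profit': 280, 'secure': 150, 'liquid': 70})

Comparing such totals across rounds, as the banks' attributes are varied, is what allows the researcher to look for a cause-and-effect relationship.
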
In the case of a field experiment, a bank may advertise that one of its branches is celebrating its 100th anniversary and is offering 3% over and above its normal return of 6%. If high return attracts more deposits, clients will shift their savings towards that branch; but if clients feel convenience is more important, there will be no change in deposit levels.

One of the differences between lab and field experiments is the use of real versus mock players, in other words the use of actual clients rather than volunteers.

Problem
A problem is something that has to be solved, or an unpleasant or undesirable condition that needs to be corrected.


Statement of the problem
A problem statement is a concise description of the issues that need to be addressed by a problem-solving team and should be presented to them (or created by them) before they try to solve the problem. On the other hand, a statement of the problem is a claim of one or two sentences in length that outlines the problem addressed by the study. The statement of the problem should briefly address the question: What is the problem that the research will address?[1]
When bringing together a team to achieve a particular purpose, provide them with a problem
statement. A good problem statement should answer these questions:
1. What is the problem? This should explain why the team is needed.
2. Who has the problem or who is the client/customer? This should explain who needs the
solution and who will decide the problem has been solved.
3. What form can the resolution be? What is the scope and limitations (in time, money,
resources, technologies) that can be used to solve the problem? Does the client want
a white paper? A web-tool? A new feature for a product? A brainstorming on a topic?
The primary purpose of a problem statement is to focus the attention of the problem-solving team. However, if the focus of the problem is too narrow or the scope of the solution too limited, the creativity and innovation of the solution can be stifled.
In project management, the problem statement is part of the project charter. It lists what's essential
about the project and enables the project manager to identify the project scope as well as the project
stakeholders.[2]
A research-worthy problem statement is the description of an active challenge (i.e. problem) faced by researchers and/or practitioners that does not have adequate solutions available. An adequate solution includes a simplified theoretical foundation and argumentation for a viable solution based on respected peer-reviewed sources. The research-worthy problem statement should address all six questions: what, how, where, when, why, and who.

INTRODUCTION

In an essay, article, or book, an introduction (also known as a prolegomenon) is a beginning section which states the purpose and goals of the following writing. This is generally followed by the body and conclusion.
The introduction typically describes the scope of the document and gives a brief explanation or summary of its contents. It may also explain certain elements that are important to the essay if explanations are not part of the main text. This gives readers an idea of the text that follows before they actually start reading it.
In technical writing, the introduction typically includes one or more standard subsections: abstract or summary, preface, acknowledgments, and foreword. Alternatively, the section labelled introduction itself may be a brief section found side-by-side with the abstract, foreword, etc. (rather than containing them). In this case the set of sections that come before the body of the book are known as the front matter. When the book is divided into numbered chapters, by convention the introduction and any other front-matter sections are unnumbered and precede chapter 1.
Keeping the concept of the introduction the same, different documents have different styles of introducing the written text. For example, the introduction of a functional specification consists of information that the rest of the document will explain. If a user guide is written, the introduction is about the product. In a report, the introduction gives a summary of the report contents.

Scope and limitation
A scope limitation is a restriction on the applicability of an auditor's report that may arise from the inability to obtain sufficient appropriate evidence about a component in the financial statements. When an auditor is unable to apply all the audit procedures that are considered necessary, whether because of circumstances, the terms of the engagement, or client-imposed limitations, the audit is limited in scope.
Auditing standards suggest that when restrictions imposed by the client significantly limit the scope of the engagement, the auditor should consider disclaiming the opinion.
Some scope limitations arise for reasons that are beyond the control of the client, such as fire and flood. Alternative procedures can overcome the risk of the auditor issuing a qualified opinion or a disclaimer of opinion. Simple procedures to provide sufficient evidence would be necessary for the auditor to adhere to US GAAP.
CONCEPTUAL FRAMEWORK
A conceptual framework is an analytical tool with several variations and contexts. It is used to
make conceptual distinctions and organize ideas. Strong conceptual frameworks capture something
real and do this in a way that is easy to remember and apply.
Review of related literature and studies

What is the importance of a review of related literature?

- to review what has been done already
- to identify problems and answer specific questions
- to provide a rationale for the proposed study
- to relate the work to previous studies
- to detect conflicting points
