
How to Read Research

http://www.faculty.english.ttu.edu/Rickly/5320/critassign/1rdg.htm

Quoted from McMillan, J. and Schumacher, S. (1997). Research in education: A conceptual introduction (4th edition), pp. 47-74. NY: HarperCollins College Publishers.

Research is reported in a variety of ways, most commonly as a published article or as a paper delivered at a conference. The purpose of the report is to indicate clearly what the researcher has done, why it was done, and what it means. To do this effectively, researchers use a more or less standard format. The format is similar to the process of conceptualizing and conducting the research. Since the process of doing research is different for quantitative as compared to qualitative approaches, there are differences in the reporting formats used for each approach. Thus, we will review the basic formats for reporting research for each approach separately. [See links to the critique research assignment.]

When reading research it is important to judge the overall credibility of the study.
This judgement is based on an evaluation of each of the major sections of the
report. Each part of the report contributes to the overall credibility of the study.
Thus, following a description of the format of each type of research we introduce
guidelines that are useful in evaluating each section of the report. [Use these
guidelines linked to the list below to evaluate 3 research articles of your choice.]

The guidelines or standards to use to evaluate research include:

(link 1) how to read research

(link 2) how to read quantitative research

(link 3) standards of adequacy for true experimental designs, quasi-experimental designs, and single-subject designs;

(link 4) standards of adequacy for descriptive research, correlational research, survey research, and ex post facto research;

(link 5) standards of adequacy for a narrative literature review (use these criteria to critique a literature review chapter in a dissertation);

(link 6) standards of adequacy for qualitative designs--case studies


(link 7) standards of adequacy for ethnographic methodology

(link 8) credibility standards for analytical research such as historical and legal
studies

(link 9) guidelines for a research proposal (these guidelines will be used to constructively critique your research proposal due on May 7, 2002).


How to Read Quantitative Research: A Nonexperimental Example
Quoted from McMillan, J. and Schumacher, S. (1997). Research in education: A conceptual introduction (4th edition), pp. 48-59. NY: HarperCollins College Publishers.

Although there is no universally accepted format for reporting quantitative research, most studies adhere to the sequence of scientific inquiry. There is variation in the terms used, but the components indicated below are included in most studies:

1. Abstract
2. Introduction
3. Statement of research problem
4. Review of literature
5. Statement of research hypotheses or questions
6. Methodology
a. subjects
b. instruments
c. procedures
7. Results
8. Discussion, implications, conclusions
9. References

In writing a research report, the writer begins with the introduction and continues
sequentially to the conclusion. In planning to conduct research, the researchers
begin by formulating a research problem.

Abstract: The abstract is a paragraph that summarizes the journal article. It follows
the authors' names and is usually italicized or printed in type that is smaller than
the type of the article itself. Most abstracts contain a statement of the purpose of
the study, a brief description of the subjects and what they did during the study,
and a summary of important results. The abstract is useful because it provides a
quick overview of the research, and after studying it, the reader usually will know
whether to read the entire article.

Introduction [context of & background leading to problem studied]: The introduction is usually limited to the first [one or two] paragraph[s] of the article. The purpose of the introduction is to put the study in context. This is often accomplished by quoting previous research on the general topic, citing leading researchers in the area, or developing the historical context of the study. The introduction acts as a lead-in to a statement of the more specific purpose of the study.

Research Problem: The first step in planning a quantitative study is to formulate a research problem. The research problem is a clear and succinct statement that indicates the purpose of the study. Researchers begin with a general idea of what they intend to study, such as the relationship of self-concept to achievement, and then they refine this general goal to a concise sentence that indicates more specifically what is being investigated--for example, what is the relationship between fourth graders' self-concept of ability in mathematics and their achievement in math as indicated by standardized test scores?

The statement of the research problem can be found in one of several locations in
articles. It can be the last sentence of the introduction, or it may follow the review
of literature and come just before the methods section.

Review of Literature: After researchers formulate a research problem, they conduct a search for studies that are related to the problem. The review summarizes and analyzes previous research and shows how the present study is related to this research. The length of the review can vary, but it should be selective and should concentrate on the way the present study will contribute to existing knowledge. It should be long enough to demonstrate to the reader that the researcher has a sound understanding of the relationship between what has been done and what will be done. There is usually no separate heading to identify the review of literature, but it is always located before the methods section.

Research Hypothesis or Question: Following the literature review, researchers state the hypothesis, hypotheses, or question(s). Based on information from the review, researchers write a hypothesis that indicates what they predict will happen in the study. A hypothesis can be tested empirically, and it provides focus for the research. For some research it is inappropriate to make a prediction of results, and in some studies a research question rather than a hypothesis is indicated. Whether it is a question or a hypothesis, the sentence should contain objectively defined terms and state relationships in a clear, concise manner.

Methodology: In the methods or methodology section, the researcher indicates the subjects, instruments, and procedures used in the study. Ideally, this section contains enough information so that other researchers could replicate the study. There is usually a subheading for each part of the methods section.

See specific standards or criteria to evaluate quantitative research by clicking on the links below.

(link 3) standards of adequacy for:
true experimental designs,
quasi-experimental designs, and
single-subject designs

(link 4) standards of adequacy for:
descriptive research,
correlational research,
survey research, and
ex post facto research

Standards of Adequacy for True Experimental Designs, Quasi-Experimental Designs, and Single-Subject Designs
Quoted from McMillan, J. and Schumacher, S. (1997). Research in education: A conceptual introduction (4th edition), pp. 348-349. NY: HarperCollins College Publishers.

In judging the adequacy of the designs, focus on a few key criteria. These criteria are listed below in the form of questions that should be asked for each type of design.

True Experimental Designs

1. Was the research design described in sufficient detail to allow for replication of the study?

2. Was it clear how statistical equivalence of the groups was achieved? Was there a full
description of the specific manner in which subjects were assigned randomly to groups?

3. Was a true experimental design appropriate for the research problem?

4. Was there manipulation of the independent variable?

5. Was there maximum control over extraneous variables and errors of measurement?

6. Was the treatment condition sufficiently different from the comparison condition for a
differential effect on the dependent variable to be expected?

7. Were potential threats to internal validity reasonably ruled out or noted and discussed?

8. Was the time frame of the study described?

9. Did the design avoid being too artificial or restricted for adequate external validity?

10. Was an appropriate balance achieved between control of variables and natural
conditions?

11. Were appropriate tests of inferential statistics used? (see the sketch after this list)
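
As a concrete companion to questions 2 and 11, the sketch below shows what random assignment and one common inferential test can look like in code. It is a minimal, hypothetical illustration: the subject IDs and scores are invented, the group sizes are arbitrary, and SciPy's ttest_ind is assumed to be available and appropriate for a simple two-group comparison.

```python
# Hypothetical sketch of random assignment (question 2) and an
# inferential test (question 11); all IDs and scores are invented.
import random

from scipy import stats  # assumed available

subjects = [f"S{i:02d}" for i in range(1, 41)]  # 40 hypothetical subjects
random.shuffle(subjects)                        # the randomization step
treatment_group, control_group = subjects[:20], subjects[20:]
print(f"first treatment subjects: {treatment_group[:3]}")

# Invented post-test scores collected after the (hypothetical) study.
treatment_scores = [82, 75, 88, 79, 91, 84, 77, 80, 86, 83,
                    78, 85, 90, 74, 81, 87, 76, 89, 82, 80]
control_scores = [71, 68, 74, 70, 66, 73, 69, 72, 65, 75,
                  70, 67, 74, 71, 68, 72, 66, 73, 69, 70]

# An independent-samples t test is one appropriate inferential statistic
# for comparing two randomly assigned groups on one dependent variable.
t_statistic, p_value = stats.ttest_ind(treatment_scores, control_scores)
print(f"t = {t_statistic:.2f}, p = {p_value:.4f}")
```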

Quasi-Experimental Designs
1. Was the research design described in sufficient detail to allow for replication of the study?

2. Was a true experiment possible?

3. Was it clear how extraneous variables were controlled or ruled out as plausible rival
hypotheses?

4. Were all potential threats to internal validity addressed?

5. Were the explanations ruling out plausible rival hypotheses reasonable?

6. Would a different quasi-experimental design have been better?

7. Did the design approach a true experiment as closely as possible?

8. Was there an appropriate balance between control for internal validity and for external
validity?

9. Was every effort made to use groups that were as equivalent as possible?

10. If a time-series design was used,
(a) Was there an adequate number of observations to suggest a pattern of results?
(b) Was the treatment intervention introduced distinctly at one point in time?
(c) Was the measurement of the dependent variable consistent?
(d) Was it clear, if a comparison group was used, how equivalent the groups were?

Single-Subject Designs

1. Was the sample size one?

2. Was a single-subject design most appropriate, or would a group design have been better?

3. Were the observation conditions standardized?

4. Was the behavior that was observed defined operationally?

5. Was the measurement highly reliable?


6. Were sufficient repeated measures made?

7. Were the conditions in which the study was conducted described fully?

8. Was there stability in the baseline condition before the treatment was introduced? (a stability check is sketched after this list)

9. Did the length of time or the number of observations differ between the baseline and the treatment conditions?

10. Was only one variable changed during the treatment condition?

11. Were threats to internal and external validity addressed?
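
As one way to make question 8 concrete, the sketch below quantifies baseline stability from repeated measures. The observation counts are invented, and the coefficient-of-variation heuristic is an illustrative assumption, not a criterion taken from the authors.

```python
# Hypothetical sketch for question 8: is the baseline stable before the
# treatment is introduced? All observation counts are invented.
from statistics import mean, stdev

baseline_obs = [12, 11, 13, 12, 12, 11]  # repeated baseline measures
treatment_obs = [7, 6, 5, 5, 4, 4]       # repeated treatment-phase measures

# One plausible heuristic: a small coefficient of variation in the
# baseline suggests a stable level against which change can be judged.
cv = stdev(baseline_obs) / mean(baseline_obs)
print(f"baseline mean = {mean(baseline_obs):.1f}, cv = {cv:.1%}")
print(f"treatment mean = {mean(treatment_obs):.1f}")
```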

Standards of Adequacy for Descriptive Research, Correlational Research, Survey Research, and Ex Post Facto Research
Quoted from McMillan, J. and Schumacher, S. (1997). Research in education: A conceptual introduction (4th edition), pp. 306-308. NY: HarperCollins College Publishers.

In judging the adequacy of the designs, focus on a few key criteria. These criteria are listed below in the form of questions that should be asked for each type of design.

Descriptive Research

1. Is the research problem clearly descriptive in nature, or is a relationship implied?

2. Is there a clear description of the sample, population, and procedures for sampling?

3. Will the sample provide biased or distorted results?

4. Is the instrumentation reliable and valid?

5. Do graphic presentations of the results distort the findings?

6. Are inappropriate relationship or causal conclusions made on the basis of descriptive results?

7. If cross-sectional, do subject differences affect the results?

8. If longitudinal, is loss of subjects a limitation?

9. Are differences between groups used to identify possible relationships?

Correlational Research

1. Does the research problem clearly indicate that relationships will be investigated?

2. Is there a clear description of the sampling? Will the sample provide sufficient variability of responses to obtain a correlation?

3. Is the instrumentation valid and reliable?

4. Is there a restricted range on the scores?

5. Are there any factors that might contribute to spurious correlations?

6. Is a shotgun approach used in the study?

7. Are inappropriate causal inferences made from the results?

8. How large is the sample? Could sample size affect the "significance" of the
results?

9. Is the correlation coefficient confused with the coefficient of determination? (see the sketch after this list)

10. If predictions are made, are they based on a different sample?

11. Is the size of the correlation large enough for the conclusions?
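
Questions 4, 8, and 9 lend themselves to a worked example. The sketch below, built on invented scores, shows the difference between the correlation coefficient r and the coefficient of determination r-squared, and how restricting the range of one variable tends to shrink r; it assumes Python 3.10+ for statistics.correlation.

```python
# Hypothetical sketch: r versus r^2 (question 9) and restriction of
# range (question 4). All scores are invented for illustration.
from statistics import correlation  # available in Python 3.10+

self_concept = [2.1, 2.4, 2.7, 2.9, 3.1, 3.4, 3.6, 3.9]  # invented ratings
math_scores = [58, 62, 61, 67, 70, 74, 73, 80]           # invented test scores

r = correlation(self_concept, math_scores)
print(f"r = {r:.2f}, r^2 = {r * r:.2f}")
# Even a large r "explains" only r^2 of the variance; reporting r as if
# it were explained variance overstates the relationship.

# Restriction of range: correlating only the high-self-concept half of
# the sample typically yields a smaller |r| than the full sample does.
r_restricted = correlation(self_concept[4:], math_scores[4:])
print(f"restricted-range r = {r_restricted:.2f}")
```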

Survey Research
1. Are the objectives and purposes of the survey clear?

2. Is it likely that the target population and sampling procedure will provide a credible answer to the research question(s)?

3. Is the instrument clearly designed and worded? Has it been pilot tested? Is it
appropriate for the characteristics of the sample?

4. Is there assurance of confidentiality of responses? If not, is this likely to affect the results?

5. Does the letter of transmittal establish the credibility of the research? Is there
any chance that what is said in the letter will bias the responses?

6. What is the return rate? If borderline, has there been any follow-up with nonrespondents? (a return-rate check is sketched after this list)

7. Do the conclusions reflect return rate and possible limitations?
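
Question 6 turns on simple arithmetic that is easy to check while reading. The sketch below computes a return rate from invented counts; the 60% borderline threshold is an illustrative assumption, not a standard stated by the authors.

```python
# Hypothetical sketch for question 6: computing a survey return rate.
# The counts and the 60% "borderline" threshold are invented.
surveys_sent = 400
surveys_returned = 212

return_rate = surveys_returned / surveys_sent
print(f"return rate = {return_rate:.0%}")  # 53%

# A borderline rate invites the follow-up question: were nonrespondents
# contacted, and might they differ systematically from respondents?
if return_rate < 0.60:
    print("borderline/low return rate -- check for nonrespondent follow-up")
```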

Ex Post Facto Research

1. Was the primary purpose of the study to investigate cause-and-effect relationships?

2. Have the presumed cause-and-effect conditions already occurred?

3. Was there manipulation of the independent variable?

4. Were groups being compared already different with respect to the independent
variable?

5. Were potential extraneous variables recognized and considered as plausible rival hypotheses?

6. Were causal statements regarding the results made tenuously?


7. Were threats to external validity addressed in the conclusions?

Standards of Adequacy for a Narrative Literature Review
Quoted from McMillan, J. and Schumacher, S. (1997). Research in education: A conceptual introduction (4th edition), pp. 152-153. NY: HarperCollins College Publishers.

A narrative literature review is judged by three criteria: its selection of the sources, its criticism of the literature, and its summary and overall interpretation of the literature on the problem. Below are questions that aid a reader in determining the quality of the literature review.

These criteria will be useful to apply to your chapter for a self-evaluation when you write your thesis or dissertation after the proposal is approved (beyond the scope of this course). Therefore you may want to print this page for future reference.

Literature Review Chapter Critique

A literature review is judged adequate in the context of the proposal or the completed study. The problem, the significance of the study, and the specific research questions or hypotheses influence the type of literature review. A literature review is judged neither by its length nor by the number of references included. The quality of the literature review is evaluated according to whether it furthers the understanding of the status of knowledge of the problem and provides a rationale for the study.

Selection of the Literature


1. Is the purpose of the review (preliminary or exhaustive) indicated?

2. Are the parameters of the review reasonable?


a. Why were certain bodies of literature included in the search and others excluded
from it?

b. Which years were included in the search?


3. Is the primary literature emphasized in the review and secondary literature, if cited, used
selectively?

4. Are recent developments in the problem emphasized in the review?

5. Is the literature selected relevant to the problem?

6. Are complete bibliographic data provided for each reference?

Criticism of the Literature


1. Is the review organized by topics or ideas, not by author?

2. Is the review organized logically?

3. Are major studies or theories discussed in detail and minor studies with similar limitations
or results discussed as a group?

4. Is there adequate criticism of the design and methodology of important studies so that the
reader can draw his or her own conclusions?

5. Are studies compared and contrasted and conflicting or inconclusive results noted?

6. Is the relevance of each reference to the problem explicit?

Summary & Interpretation


1. Does the summary provide an overall interpretation and understanding of our knowledge
of the problem?

2. Do the implications provide theoretical or empirical justification for the specific research
questions or hypotheses to follow?

3. Do the methodological implications provide a rationale for the design to follow?

Standards of Adequacy for Qualitative Designs (i.e., case studies)
Quoted from McMillan, J. and Schumacher, S. (1997). Research in education: A conceptual introduction (4th edition), pp. 421 & 73-74. NY: HarperCollins College Publishers.

Qualitative designs are judged by several criteria. Below are typical questions that researchers might ask of their designs or that reviewers may use to critique a qualitative design.

Qualitative research designs are often difficult to judge because of the flexibility
and emergent nature of the design. Designs, if really emergent and for discovery,
will be modified as the study progresses. Many of the standards are related to data
collection. See also the standards for Ethnographic Methodology.

Qualitative Designs (i.e., case studies)

1. Is the one phenomenon to be studied clearly articulated and delimited?

2. Is the purpose of the case study described?

3. Which purposeful sampling technique to identify information-rich cases will be used? Does the sampling strategy seem likely to obtain information-rich groups or cases? (Usually preliminary information is necessary before the sampling strategy can be chosen.)

4. Is the desired minimum sample size stated? Does the sample size seem logical to yield rich data about the phenomenon within a reasonable length of time?

5. Is the design presented in sufficient detail to enhance reliability--that is, are the planned researcher role, informant selection, social context, data collection and analysis strategies, and analytical premises specified?

6. Which multiple data collection strategies are planned to increase the agreement
on the description of the phenomenon between the researcher and participants?
Does the researcher have knowledge and experience with the proposed strategies
or has he or she done a preliminary study?

7. Does the design suggest the emergent nature of the study?


8. Which strategies does the researcher plan to employ to minimize potential bias
and observer effect?

9. Which design components are included to encourage the usefulness and the
logical extension of the findings?

10. Does the researcher specify how informed consent, confidentiality, anonymity, and other ethical principles will be handled in the field?

Standards of Adequacy for Ethnographic Methodology
Quoted from McMillan, J. and Schumacher, S. (1997). Research in education: A conceptual introduction (4th edition), p. 458. NY: HarperCollins College Publishers.

Standards for assessing the quality of ethnographic studies differ from those applied to quantitative studies. Many ethnographic studies are published as books or reports rather than as journal articles. Studies published in journals are highly synthesized, or only one of many findings is reported, so the procedures that would be explicit in the full study may be condensed.

A reader appraises the quality of an ethnographic study in four aspects: the focus
and purpose of the study, the research design and methodology, the presentation
of the findings and conclusions, and the contribution to educational research and
knowledge. The focus of the criteria below is on methodology standards.

Ethnographic Methodology

1. How long was the field residence? What social scenes were observed? Which
participants were interviewed?

2. Were the selection criteria reasonable for the purpose of the study?

3. What was the research role assumed by the ethnographer?

4. How did this research role affect data collection?


5. What was the training, background, and previous fieldwork experience of the ethnographer?

6. Did the ethnographer actively seek different perspectives? Were multiple data
collection strategies employed?

7. Is the evidence presented, such as the use of participants' language, appropriate for an inductive analysis?

8. Are the limitations of the data collection strategies recognized?

9. How was corroboration of data accomplished?

Credibility Standards for Analytical Research: Historical & Legal/Policy Studies
Quoted from McMillan, J. and Schumacher, S. (1997). Research in education: A conceptual introduction (4th edition), pp. 494-496. NY: HarperCollins College Publishers.

Analytical research requires methodological procedures to phrase an analytical topic, locate and critique primary sources, establish facts, and form generalizations for causal explanations or principles. These research processes suggest criteria for judging a historical, legal, or policy-making study as credible research. Criteria for judging the adequacy of historical studies are followed by criteria for evaluating legal research.

Historical Studies

The reader judges a study in terms of the logical relationship among the problem
statement, sources, generalizations, and causal explanations. The logic for the
entire study flows from the problem statement. Implicit in the evaluation of a
study is the question, "Did the analyst accomplish the stated purpose?" If all the
elements of the research are not made explicit, the study can be criticized as
biased or containing unjustifiable conclusions.
A. Problem statements in the introduction delineate the study and are evaluated
by the following questions:
1a. Is the topic appropriate for analytical research--that is, does it focus on the past or recent past?

2a. Does the problem statement indicate clearly the information that will be
included in the study and the information that is excluded from the study?

3a. Is the analytical framework or viewpoint stated?

B. Selection and criticism of sources are evaluated in terms of relevance to the problem statement. Sources are listed in the bibliography, and the criticism of the sources may be discussed in the study, in the footnotes, or in a methodological appendix.
1b. Does the study use primary sources relevant to the topic?

2b. Are the criteria for selection of primary sources stated?

3b. Were authentic sources used for documentation?

4b. Does the analyst indicate criticism of sources?

C. Facts and generalizations presented in the text are assessed by asking the
following questions.
1c. Does the study indicate the application of external criticism to ascertain
the facts? If conflicting facts are presented, is a reasonable explanation
offered?

2c. Are the generalizations reasonable and related logically to the facts?

3c. Are the generalizations appropriate for the type of analysis? One would,
for example, expect minimal generalization in a study that restores a series
of documents to their original text or puts a series of policy statements into
chronological order. One would expect some synthesis in a descriptive or
comparative analysis.
4c. Are the generalizations qualified or stated in a tentative manner?

D. Causal explanations, presented as conclusions, are evaluated by the following criteria. [Not all historical studies are designed to reveal causal explanations.]
1d. Are the causal explanations reasonable and logically related to the facts
and generalizations presented in the study?

2d. Do the explanations suggest multiple causes for complex human events?

3d. Does the study address all the questions stated in the introduction--that is, does it fulfill the purpose of the study?

Legal or Policy Studies

Because commentaries in legal research do not follow the formats of other analytical research, the criteria for judging a study as credible differ somewhat. A reader first notes the reputation of the institution or organization that sponsors the journal and the reputation of the authors.

1. Is the legal issue or topic clearly stated with the scope and limitations of the
problem explained?

2. Is the commentary organized logically for the analysis?

3. How were the sources selected, and are they appropriate for the problem (e.g., case law, statutes, federal regulations, and so on)? The reader needs to scrutinize the bibliography and footnotes.

4. Is the topic or issue treated logically in an unbiased manner?


5. Do the conclusions logically relate to the analysis?

Criticism of a Proposal
Quoted from McMillan, J. and Schumacher, S. (1997). Research in education: A conceptual introduction (4th edition), pp. 602-603. NY: HarperCollins College Publishers.

After completing a draft of a proposal, authors read it critically in terms of research criteria appropriate for the purpose and design of the study. In addition to self-criticism, researchers give a draft to colleagues for feedback. Students give a draft to their advisory chair, and if the chair feels it is ready for full committee feedback, the chair asks the student to provide a draft to his or her dissertation or thesis committee members. Once revisions are complete, the student is ready to present the proposal for committee approval to go forward with the study.

Below are some common weaknesses of proposals to avoid:

1. The problem is trivial. Problems that are of only peripheral interest to the field
are seldom approved. The problem should be related to current knowledge,
scholarly thinking, research, and practices in the field.

2. The problem is not delimited. A problem must be focused for both research and practical reasons. Designs cannot yield valid data for every possible variable, nor can qualitative researchers encompass extremely broad questions in a single study. Experienced researchers know how time-consuming research processes are, from the initial conceptualization of an idea through the final report. Researchers rationally delimit the problem. The specific research questions and/or hypotheses, or the qualitative foreshadowed problems, are focused by the theoretical frame, which is stated in such a way that the delineation of the focus is apparent.

3. The objectives of the proposal are too general. Sometimes hypotheses are stated in such broad, general terms that only the research design really conveys what the study is about. If the research design does not logically match the specific research questions and/or hypotheses or the qualitative research questions, then the planned study is not capable of meeting proposal objectives. Failure to consider extraneous or confounding variables is a serious error in a quantitative proposal. Qualitative proposals also need to be focused, with a theoretical frame that provides the lens for collecting, analyzing, and interpreting data.
4. The methodology is lacking in detail appropriate for the proposed study.
Quantitative proposals should be detailed sufficiently in subjects, instrumentation,
and data analysis to allow for replication. Qualitative proposals, by their inductive
nature, are less specific in certain aspects. A qualitative proposal, however, can be
sufficiently specific to connote possible purposeful sampling, planned data
collection strategies, and inductive data analysis techniques. This specification
assures a review committee that the researcher is aware of subsequent decisions
to be made. Much of the specificity for either quantitative or qualitative proposals
depends on the extent of the researcher's preliminary work.

5. The design limitations are addressed insufficiently.
