
INVITED COMMENTARY

RESEARCH DESIGNS IN SPORTS PHYSICAL THERAPY


Phil Page, PhD, PT, ATC, CSCS, FACSM1

ABSTRACT
Research is designed to answer a question or to describe a phenomenon in a scientific process. Sports
physical therapists must understand the different research methods, types, and designs in order to imple-
ment evidence-based practice. The purpose of this article is to describe the most common research designs
used in sports physical therapy research and practice. Both experimental and non-experimental methods
will be discussed.
Key words: Research design, research methods, scientific process

CORRESPONDING AUTHOR
Phil Page, PhD, PT, ATC, LAT, CSCS, FACSM
Baton Rouge, Louisiana, USA
E-mail: drphilpage@gmail.com; ppage100@gmail.com

INTRODUCTION
Evidence-based practice requires that physical therapists are able to analyze and interpret scientific research. When performing or evaluating research for clinical practice, sports physical therapists must first be able to identify the appropriate study design. Research begins by identifying a specific aim or purpose; researchers should always attempt to use a methodologically superior design when performing a study. Research design is one of the most important factors to understand because:

1. Research design provides validity to the study;
2. The design must be appropriate to answer the research question; and
3. The design provides a level of evidence used in making clinical decisions.

Validity
Research study designs must have appropriate validity, both internally and externally. Internal validity refers to the design itself, while external validity refers to the study's applicability in the real world. While a study may have internal validity, it may not have external validity; however, a study without internal validity is not useful at all.

Most clinical research suffers from a conflict between internal and external validity. Internally valid studies are well-controlled, with appropriate designs to ensure that changes in the dependent variable result from manipulation of an independent variable. Well-designed research provides controls for managing or addressing extraneous variables that may influence changes in the dependent variable. This is often accomplished by ensuring a homogenous population; however, clinical populations are rarely homogenous. An internally valid study with control of extraneous variables may not represent a more heterogeneous clinical population; therefore, clinicians should always consider the conflict between internal and external validity both when choosing a research design and when applying the results of research in order to make evidence-based clinical decisions.

Furthermore, research can be basic or applied. Basic science research is often done on animals or in a controlled laboratory setting using tissue samples, for example. Applied research involves humans, including patient populations; therefore, applied research provides more clinical relevance and clinical application (i.e., external validity) than basic science research.

One of the most important considerations in research design for internal validity is to minimize bias. Bias represents the intentional or unintentional favoring of something in the research process. Within research designs, there are 5 important features to consider in establishing the validity of a study: sample, perspective, randomization, control, and blinding.

Sample size and representation are very important for both internal and external validity. Sample size is important for statistical power, but also increases the representativeness of the target population. Unfortunately, some studies use a convenience sample, often consisting of college students, which may not represent a typical clinical population. Obviously, a representative clinical population can provide a higher level of external validity than a convenience sample.

In terms of perspective, a study can be prospective (before the fact) or retrospective (after the fact). A prospective study has more validity because of more control of the variables at the beginning of and throughout the study, whereas a retrospective study has less control since it is performed after the end of an event. A prospective design provides a higher level of evidence to support cause-and-effect relationships, while retrospective studies are often associated with confounding variables and bias.

Random assignment to an experimental or control group is performed to represent a normal distribution of the population. Randomization reduces selection bias to ensure one group doesn't have an advantage over the other. Sometimes groups, rather than individual subjects, are randomly assigned to an experimental or control group; this is referred to as block randomization. Sample bias can also occur when a convenience sample is used that might not be representative of the target population. This is often seen when healthy, college-aged students are included, rather than a representative sample of the population.
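As a minimal sketch of the difference between randomizing individual subjects and randomly assigning intact groups (clusters), the example below uses only Python's standard library; the subject and team names, and the two-arm split, are hypothetical and are not a substitute for a formal allocation procedure.

```python
import random

random.seed(42)  # fixed seed so the illustration is reproducible

# Simple randomization: each subject is allocated independently.
subjects = [f"subject_{i:02d}" for i in range(1, 21)]  # hypothetical subjects
simple_allocation = {s: random.choice(["experimental", "control"]) for s in subjects}

# Group-level (cluster) randomization: whole teams, not individuals, are allocated.
teams = {
    "team_A": subjects[:5],
    "team_B": subjects[5:10],
    "team_C": subjects[10:15],
    "team_D": subjects[15:],
}  # hypothetical intact groups
shuffled = random.sample(list(teams), k=len(teams))
cluster_allocation = {
    team: ("experimental" if i < len(shuffled) // 2 else "control")
    for i, team in enumerate(shuffled)
}

print(simple_allocation["subject_01"])
print(cluster_allocation)
```

Note that simple randomization does not guarantee equal group sizes, whereas assigning intact groups keeps teammates together at the cost of fewer independent units.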

A control group helps ensure that changes in the dependent variable are due to changes in the independent variable, and not due to chance. A control group receives no intervention, while the experimental group receives some type of intervention. In some situations, a true control group is not possible or ethical; therefore, quasi-experimental designs are often used in clinical research, where the control group receives a standard treatment. Sometimes, the experimental group can be used as its own control by testing different conditions over time.

Blinding (also known as masking) is performed to minimize bias. Ideally, both the subjects and the investigator should be blinded to group assignment and intervention. For example, a double-blind study is one in which the subjects are not aware if they are receiving the experimental intervention or a placebo, and at the same time the examiner is not aware which intervention the subjects received.

Considering these 5 features, a prospective, randomized, controlled, double-blinded clinical outcome study with a large sample of patients would likely provide the best design to assure very high internal and external validity.

Design
Most research follows the scientific method. The scientific method progresses through four steps:

1. Identification of the question or problem;
2. Formulation of a hypothesis (or hypotheses);
3. Collection of data; and
4. Analysis and interpretation of data.

Different research designs are used to answer a question or address a problem. Different authors provide different classifications of research designs.1-4

Within the scientific method, there are 2 main classifications of research methodology: experimental and non-experimental. Both employ systematic collection of data. Experimental research is used to determine cause-and-effect relationships, while non-experimental research is used to describe observations or relationships in a systematic manner. Both experimental and non-experimental research consist of several types and designs (Table 1).

Experimental Methods
Experimental methods follow the scientific method in order to examine changes in one variable by manipulating other variables to attempt to establish cause-and-effect. The dependent variable is measured under controlled conditions while controlling for confounding variables. It is important to remember that statistics do not establish cause-and-effect; rather, the design of the study does. Experimental statistics can only reject a null hypothesis and identify variance accounted for by the independent variable. Thomas et al.4 provide three criteria to establish cause-and-effect:

1. Cause must precede effect in time;
2. Cause and effect must be correlated with each other; and
3. The relationship cannot be explained by another variable.

Table 1. Research Designs.

There are 3 elements of research to consider when evaluating experimental designs: groups, measures, and factors. Subjects in experimental research are generally classified into groups, such as an experimental group (those receiving treatment) or a control group. Technically speaking, however, "groups" refers to the treatment of the data, not how the treatment is administered.2 Groups are sometimes called "treatment arms" in order to denote subjects receiving different treatments. True experimental designs generally use randomized assignment to groups, while quasi-experimental research may not.

Next, the order of measurements and treatments should be considered. Time refers to the course of the study from start to finish. Observations, or measurements of the dependent variables, can be performed one or several times throughout a study. The term "repeated measures" denotes any measurement that is repeated on a group of subjects in the study. Repeated measures are often used in pseudo-experimental research when the subjects act as their own control in one group, while true experimental research can use repeated measurements of the dependent variable as a single factor (time).

Since experimental designs are used to identify changes in a dependent variable by manipulating an independent variable, factors are used. Factors are essentially the independent variables. Individual factors can also have several levels. Single-factor designs are referred to as "one-way" designs, with one independent variable and any number of levels. One-way designs may have multiple dependent variables (measurements), but only one independent variable (treatment). Studies involving more than one independent variable are considered multi-factorial and are referred to as two-way or three-way (and so on) designs. Multi-factorial designs are used to investigate interactions within and between different variables. A mixed-design factorial study includes 2 or more independent variables, with one repeated across all subjects and the other randomized to independent groups. Figure 1 is an example of a 2-way repeated measures design including a true control group.

Figure 1. Two-way repeated measures experimental design to determine interactions within and between groups.

Factorial designs are denoted with numbers representing the number of levels of each factor. A two-way factorial (2 independent variables) with 2 levels of each factor is designated by 2 × 2. The total number of groups in a factorial design can be determined by multiplying the factors together; for example, a 2 × 2 factorial has 4 groups, while a 2 × 3 × 2 factorial has 12. Table 2 describes the differences in factorial designs using an example of 3 studies examining strength gains of the biceps during exercise. Each factor has multiple levels. In the 1-way study, strength of the biceps is examined after performing flexion or extension with standard isotonic resistance. In the 2-way study, a 3-level factor is added by comparing different types of resistance during the same movements. In the 3-way study, 2 different intensity levels are added to the design.

Statistical analysis of a factorial design begins by determining a main effect, which is an overall effect of a single independent variable on dependent variables. If a main effect is found, post-hoc analysis examines the interaction between independent variables (factors) to identify the variance in the dependent variable.

Table 2. Examples of progressive factorial designs.
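A small numeric sketch can make the group counts, main effects, and interactions concrete. The cell means below are invented purely for illustration (they are not data from Table 2); the two factors loosely mirror the biceps example, with movement (2 levels) and resistance type (3 levels).

```python
import numpy as np

# Hypothetical cell means (strength gain, kg) for a 2 x 3 factorial design:
# rows = movement (flexion, extension); columns = resistance type (isotonic, elastic, isokinetic).
cell_means = np.array([[4.0, 5.0, 6.0],
                       [2.0, 3.0, 4.0]])

n_groups = cell_means.size                 # 2 x 3 = 6 independent groups
movement_means = cell_means.mean(axis=1)   # marginal means -> main effect of movement
resistance_means = cell_means.mean(axis=0) # marginal means -> main effect of resistance type

# A simple interaction check: is the effect of movement the same at every level of
# resistance? If these simple effects are not all equal, an interaction is present.
simple_effects = cell_means[0] - cell_means[1]

print(n_groups)          # 6
print(movement_means)    # [5. 3.] -> overall difference between movements
print(resistance_means)  # [3. 4. 5.] -> overall differences between resistance types
print(simple_effects)    # [2. 2. 2.] -> constant, so no interaction in this made-up data
```

In an actual study these comparisons would be tested with a factorial ANOVA rather than inspected by eye; the sketch only shows where main effects and interactions come from.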

As described in Table 1 previously, there are 2 types of experimental designs: true experimental and quasi-experimental.

True Experimental Designs
True experimental designs are used to determine cause-and-effect by manipulating an independent variable and measuring its effect on a dependent variable. These designs always have at least 2 groups for comparison.

In a true experimental design, subjects are randomized into at least 2 independent, separate groups, including an experimental group and a true control. This provides the strongest internal validity to establish a cause-and-effect relationship within a population. A true control group consists of subjects that receive no treatment, while the experimental group receives treatment. The randomized controlled trial design is the gold standard in experimental designs, but may not be the best choice for every project.

Table 3 provides common true experimental designs that include 2 independent, randomly assigned groups and a true control group. Notation is often used to illustrate these research designs.

Quasi-Experimental Designs
Clinical researchers often find it difficult to use true experimental designs with a true control because it may be unethical, and sometimes illegal, to withhold treatment within a patient population. In addition, clinical trials are often affected by a conflict between internal and external validity. Internal validity requires rigorous control of variables; however, that control does not support real-world generalizability (external validity). As previously described, clinical researchers must seek balance between internal and external validity.

Quasi-experimental designs are those that do not include a true control group or randomization of subjects. While these types of designs may reduce the internal validity of a study, they are often used to maximize a study's external validity. Quasi-experimental designs are used when true randomization or a true control group is unethical or difficult. For example, a pseudo-control group may include a group of patients receiving traditional treatment rather than a true control group receiving nothing.

Block randomization or cluster grouping may also be more practical when examining groups, rather than individual randomization. Subjects are grouped by similar variables (age, gender, etc.) to help control for extraneous factors that may influence differences between groups. The block factor must be related to the dependent variable (i.e., the factor affecting response to treatment).

A cross-over or counterbalanced design may also be used in a quasi-experimental study. This design is often used when only 2 levels of an independent variable are repeated to control for order effects.3 A cross-over study may require twice as long since both groups must undergo the intervention at different times. During the cross-over, both groups usually go through a washout period of no intervention to be sure prolonged effects are not a factor in the outcome.

Examples of quasi-experimental designs can include both single and multiple groups (Table 4). Quasi-experimental designs generally do not randomize group assignment or use true control groups. (Note: One-group pre-post test designs are sometimes classified as pre-experimental designs.)

Table 3. Common true experimental designs.

Table 4. Quasi-Experimental designs.

Single-subject designs are also considered quasi-experimental, as they draw conclusions about the effects of a treatment based on responses of single patients under controlled conditions.3 These designs are used when withholding treatment is considered unethical, when random assignment is not possible, or when it is difficult to recruit subjects, as is commonly seen in rare diseases or conditions. Single-subject designs have 2 essential elements: design phases and repeated measures.3 Design phases include baseline and intervention phases. The baseline measure serves as a pseudo-control. Repeated measurement over time (for example, during each treatment session) can occur during the baseline and intervention phases.

Single-subject designs are commonly denoted by the letters A (baseline phases) and B (intervention phases): A-B; A-B-A; and A-B-A-B. Other single-subject designs include withdrawal, multiple baseline, alternating treatment, multiple treatment, and interactive designs. For more detailed descriptions of single-subject designs, see Portney and Watkins.3

Non-Experimental Methods
Studies involving non-experimental methods include descriptive, exploratory, and analytic designs. These designs do not infer cause-and-effect by manipulating variables; rather, they are designed to describe or explain phenomena. Non-experimental designs help provide an early understanding about clinical conditions or situations, without a full clinical study, through systematic collection of data.

Descriptive Designs
Descriptive designs are used to describe populations or phenomena, and can help identify groups and variables for new research questions.3 Descriptive designs can be prospective or retrospective, and may use longitudinal or cross-sectional methods. Phenomena can be evaluated in subjects either over a period of time (longitudinal studies) or through sampling different age-grouped subjects (cross-sectional studies). Descriptive research designs are used to describe results of surveys, provide norms or descriptions of populations, and to describe cases. Descriptive designs generally focus on describing one group of subjects, rather than comparing different groups.

Surveys. Surveys are one of the most common descriptive designs.4 They can be in the form of questionnaires or interviews. The most important component of an effective survey is to have an appropriate sample that is representative of the population of interest.

There are generally 2 types of survey questions: open-ended and closed-ended. Open-ended questions have no fixed answer, while closed-ended questions have definitive answers, including rank, scale, or category. Investigators should be careful not to lead answers of subjects one way or another, and to keep true to the objectives of the study. Surveys are limited by the sample and the questions asked. External validity is threatened, for example, if the sample was not representative of the research question and design.

A special type of survey is the Delphi technique, which uses expert opinions to make decisions about practices, needs, and goals.4 The Delphi technique uses a series of questionnaires in successive stages called rounds. The first round of the survey focuses on opinions of the respondents, and the second round of questions is based on the results of the first round, where respondents are asked to reconsider their answers in the context of others' responses. Delphi surveys are common in establishing expert guidelines where consensus around an issue is needed.

Observational. A descriptive observational study evaluates specific behaviors or variables in a specific group of subjects. The frequency and duration of the observations are noted by the researcher. An investigator observing a classroom for specific behaviors from students or teachers would use an observational design.

Normative. Normative research describes typical or standard values of characteristics within a specific population.3 These norms are usually determined by averaging the values of large samples and providing an acceptable range of values. For example, goniometric measures of joint range of motion are reported with an accepted range of degrees, which may be recorded as "within normal limits." Samples for normative studies must be large, random, and representative of the population heterogeneity.3 The larger the target population, the larger the sample required to establish norms; however, sample sizes of at least 100 are often used in normative research. Normative data is extremely useful in clinical practice because it serves as a basis for determining the need for an intervention, as well as an expected outcome or goal.
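As a minimal sketch of how a normative range might be summarized, the values below are simulated rather than real goniometric data, and the mean ± 2 standard deviations rule is only one common way of defining an "acceptable range."

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated knee-flexion range-of-motion values (degrees) for 120 subjects -- hypothetical data.
rom = rng.normal(loc=135.0, scale=8.0, size=120)

mean_rom = rom.mean()
sd_rom = rom.std(ddof=1)  # sample standard deviation
lower, upper = mean_rom - 2 * sd_rom, mean_rom + 2 * sd_rom  # one common normative range

print(f"mean = {mean_rom:.1f} deg; 'within normal limits' ~ {lower:.1f} to {upper:.1f} deg")
```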

Table 5. Comparison of quantitative and qualitative designs (Adapted from Thomas et al4 and Carter et al2).

Developmental. Developmental research helps describe the developmental change and the sequencing of human behavior over time.3 This type of research is particularly useful in describing the natural course of human development. For example, understanding the normal developmental sequencing of motor skills can be useful in both the evaluation and treatment of young athletes. Developmental designs are classified by the method used to collect data; they can be either cross-sectional or longitudinal.

Case Designs. Case designs offer thoughtful descriptions and analysis of clinical information;2 they include case reports, case studies, and case series. A case report is an in-depth understanding of a unique patient, while a case study focuses on a unique situation. These cases may involve a series of patients or situations, which is referred to as a case series design. Case designs are often useful in developing new hypotheses and contributing to theory and practice. They also provide a springboard for moving toward more quasi-experimental or experimental designs in order to investigate cause and effect.

Qualitative. Research measures can also be classified as quantitative or qualitative. Quantitative measures explain differences, determine causal relationships, or describe relationships; these designs include those previously discussed. Qualitative research, on the other hand, emphasizes attempting to discern process and meaning without measuring quantity. Qualitative studies focus on analysis in trying to describe a phenomenon. Qualitative research examines beliefs, understanding, and attitudes through skillful interview and content analysis.5 These designs are used to describe specific situations, cultures, or everyday activities. Table 5 provides a comparison between qualitative and quantitative designs.

Exploratory Designs
Exploratory designs establish relationships without manipulating variables while using non-experimental methods. These designs include cohort studies, case control studies, epidemiological research, correlational studies, and methodological research. Exploratory research usually involves comparison of 2 or more groups.

Cohort Studies. A cohort is a group of subjects being studied. Cohort studies may evaluate single groups or differences between specific groups. These observations may be made in subjects one time, or over periods of time, using either cross-sectional or longitudinal methods.

In contrast to experimental designs, non-experimentally designed cohort studies do not manipulate the independent variable, and lack randomization and blinding. A prospective analysis of differences in cohort groups is similar to an experimental design, but the independent variable is not manipulated. For example, outcomes after 2 different surgeries in 2 different groups can be followed without randomization of subjects using a prospective cohort design.

Some authors2 have classified Outcomes Research as a retrospective, non-experimental cohort design, where differences in groups are evaluated after the fact without random allocation to groups or manipulation of an independent variable. This design would include chart reviews examining outcomes of specific interventions.

Case Control Studies. Case control studies are similar to cohort studies, comparing groups of subjects with a particular condition to a group without the condition.

Table 6. Measurement terminology used in epidemiological research.

Both groups are observed over the same period of time, therefore requiring a shorter timeframe compared to cohort studies. Case control studies are better for investigations of rare diseases or conditions because the sample size required is less than that of a cohort study. The control group (injury/disease-free) is generally matched to the injury/disease group by confounding variables consistent in both groups, such as age, gender, and ethnicity.

Case control studies sometimes use odds ratios in order to estimate the relative risk if a cohort study would have been done.4 An odds ratio greater than 1 suggests an increased risk, while a ratio less than 1 suggests reduced risk.

Epidemiological Research. Studies that evaluate the exposure, incidence rates, and risk factors for disease, injury, or mortality are descriptive studies of epidemiology. According to Thomas et al,4 epidemiological studies evaluate naturally occurring differences in a population. Epidemiological studies are used to identify a variety of measures in populations (Table 6).

Relative risk (RR) is associated with exposure and incidence rates. Portney and Watkins3 use a contingency table (Table 7) to determine the relative risk and odds ratio. Usually, incidence rates are compared between 2 groups by dividing the incidence of one group by the other.

Table 7. Contingency table to determine risk (Adapted from Portney and Watkins3).

Using the cells of Table 7:

Relative Risk = [a / (a + b)] / [c / (c + d)]        Odds Ratio = (a / c) / (b / d)

With these formulas, the null value is 1.0. A risk or odds ratio less than 1.0 suggests reduced risk or odds, while a value greater than 1.0 suggests increased risk or odds. For example, if the risk is 1.5 in a group, there is a 1.5 times greater risk of suffering an injury in that group. Relative risk should be reported with a confidence interval, typically 95%.
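A small helper function makes these formulas, and the suggested 95% confidence intervals, concrete. Because Table 7 is not reproduced here, the cell layout is an assumption (a = exposed with the outcome, b = exposed without, c = unexposed with, d = unexposed without), and the counts in the example are invented.

```python
import math

def risk_estimates(a, b, c, d, z=1.96):
    """Relative risk and odds ratio from a 2 x 2 table, with approximate 95% confidence intervals.

    Assumed layout: a = exposed & outcome, b = exposed & no outcome,
                    c = unexposed & outcome, d = unexposed & no outcome.
    """
    rr = (a / (a + b)) / (c / (c + d))
    odds_ratio = (a / c) / (b / d)  # equivalent to (a * d) / (b * c)

    # Standard log-scale standard errors (Katz method for RR, Woolf method for OR).
    se_log_rr = math.sqrt(1 / a - 1 / (a + b) + 1 / c - 1 / (c + d))
    se_log_or = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)

    rr_ci = (rr * math.exp(-z * se_log_rr), rr * math.exp(z * se_log_rr))
    or_ci = (odds_ratio * math.exp(-z * se_log_or), odds_ratio * math.exp(z * se_log_or))
    return rr, rr_ci, odds_ratio, or_ci

# Hypothetical counts: 30 of 100 exposed athletes injured vs. 20 of 100 unexposed athletes injured.
rr, rr_ci, odds_ratio, or_ci = risk_estimates(a=30, b=70, c=20, d=80)
print(f"RR = {rr:.2f} (95% CI {rr_ci[0]:.2f}-{rr_ci[1]:.2f})")
print(f"OR = {odds_ratio:.2f} (95% CI {or_ci[0]:.2f}-{or_ci[1]:.2f})")
```

With these invented counts the relative risk is 1.5, matching the interpretation above: the exposed group has 1.5 times the risk of injury.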
Epidemiological studies can also be used to test a hypothesis of the effectiveness of an intervention on injury prevention by using incidence as a dependent variable. These studies help link exposures and outcomes with observations, and can include case control and cohort studies mentioned previously.

Correlational Studies. Correlational studies examine relationships among variables. Correlations are expressed using the Pearson's r value, which can range from −1 to +1. A Pearson's r value of +1 indicates a perfect linear correlation, noting the increase in one variable is directly dependent on the other. In contrast, an r value of −1 indicates a perfect inverse relationship. An r value of 0 indicates that the variables are independent of each other. The most important thing to remember is that correlation does not infer causation; in other words, correlational studies can't be used to establish cause-and-effect. In addition, 2 variables may have a high correlation (r > .80), but lack statistical significance if the p-value is not sufficient. Finally, be aware that correlational studies must have a representative sample in order to establish external validity.
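For example, Pearson's r for two invented sets of paired measurements can be computed directly with NumPy; even a strong and statistically significant r of this kind would say nothing about cause-and-effect.

```python
import numpy as np

# Hypothetical paired measurements in 8 athletes: hamstring flexibility (deg) and hop distance (cm).
flexibility = np.array([70, 75, 80, 82, 85, 88, 90, 95])
hop_distance = np.array([120, 128, 131, 136, 139, 142, 150, 151])

r = np.corrcoef(flexibility, hop_distance)[0, 1]  # Pearson's r, bounded by -1 and +1
print(f"r = {r:.2f}")  # a strong positive linear association, not evidence of causation
```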

Table 8. Different types of validity in scientific research.

Methodological. The usefulness of clinical research and decision-making heavily depends on the validity and reliability of measurements.3 Methodological research is used to develop and test measuring instruments and methods used in practice and research. Methodological studies are important because they provide the reliability and validity of other studies. First, the reliability of the rater (inter-rater and intra-rater reliability) must be established when administering a test in order to support the accuracy of measurements. Inter-rater reliability supports consistent measurements between different raters, while intra-rater reliability supports consistent measures for the same individual rater. Reliability can also be established for instruments by demonstrating consistent measurements over time. Reliability is related to the ability to control error, and thus is associated with internal validity.
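One common way to quantify inter-rater reliability for a categorical clinical test is Cohen's kappa, which corrects observed agreement for the agreement expected by chance; the sketch below computes it from scratch for two hypothetical raters, with invented ratings.

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater1)
    p_observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    counts1, counts2 = Counter(rater1), Counter(rater2)
    categories = set(rater1) | set(rater2)
    p_chance = sum((counts1[c] / n) * (counts2[c] / n) for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

# Two hypothetical raters grading the same 10 special-test results as positive or negative.
rater_a = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "pos", "neg", "neg"]
rater_b = ["pos", "pos", "neg", "pos", "pos", "neg", "pos", "neg", "neg", "neg"]
print(round(cohens_kappa(rater_a, rater_b), 2))  # 0.6: agreement beyond chance, but not perfect
```

For continuous measurements, an intraclass correlation coefficient would typically be used instead; kappa is shown here only because it keeps the arithmetic transparent.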
Methodological studies are also used to establish validity for a measurement, which may include clinical diagnostic tests, performance batteries, or measurement devices. Measurement validity establishes the extent to which an instrument measures what it intends to measure. Different types of validity can be measured, including face validity, content validity, criterion-related validity, and construct validity (Table 8).

Sports physical therapists may also be interested in the sensitivity and specificity of clinical tests. Sensitivity refers to the ability of a test to correctly identify those with a condition, while specificity refers to the ability to correctly identify those without the condition. Unfortunately, few clinical tests possess both high sensitivity and specificity.6
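Both properties are calculated from a 2 × 2 table of test results against a reference standard; the sketch below uses invented counts simply to show the arithmetic.

```python
def sensitivity_specificity(true_pos, false_neg, true_neg, false_pos):
    """Sensitivity = TP / (TP + FN); Specificity = TN / (TN + FP)."""
    sensitivity = true_pos / (true_pos + false_neg)
    specificity = true_neg / (true_neg + false_pos)
    return sensitivity, specificity

# Hypothetical clinical test evaluated against a reference standard in 100 patients:
# 40 with the condition (32 detected), 60 without the condition (48 correctly cleared).
sens, spec = sensitivity_specificity(true_pos=32, false_neg=8, true_neg=48, false_pos=12)
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")  # 0.80 and 0.80
```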

Analytical Designs
Analytical research designs are not just a review or summary, but a method of evaluating the existing research to reach a conclusion. These designs provide a synthesis of the literature for empirical and theoretical conclusions.4 Analytical designs explain phenomena and analyze existing data using systematic reviews and meta-analysis techniques. In contrast to systematic reviews, meta-analyses include statistical analysis of data.

Systematic Reviews. Systematic reviews most commonly examine the effectiveness of interventions, but may also examine the accuracy of diagnostic tools.3 Systematic reviews of randomized controlled trials provide the highest level of evidence possible.7 Systematic reviews should describe their methodology in detail, including the inclusion and exclusion criteria for studies reviewed, study designs, and outcome measures. In addition, the method of literature search should be detailed, including the databases, dates, and keywords used.

Meta-Analysis. Systematic reviews can be extended into a meta-analysis if multiple studies contain the necessary information and data. Meta-analysis techniques are particularly useful when trying to analyze and interpret smaller studies and studies with inconsistent outcomes. Meta-analysis of randomized controlled trials provides a high level of evidence, but may suffer in quality from heterogeneous samples, bias, outliers, and methodological differences.

Table 9. Levels of Evidence (Adapted from the Center for Evidence-Based Medicine7).

Table 10. Grades of Evidence (Adapted from the Center for Evidence-Based Medicine7).

Meta-analysis quantifies the results of various studies into a standard metric that allows for statistical analysis to calculate effect sizes. The effect size, calculated by Cohen's d value, is defined as a standardized value of the relationship between two variables. Effect size provides the magnitude and direction of the effect of a treatment, and is determined by the difference in means divided by the standard deviation (M/SD). A Cohen's d value of .2 is considered small, .5 is considered moderate, and .8 and greater is a large effect size. Confidence intervals are then reported to provide an interval of certainty.
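A brief sketch of the effect-size calculation, using the common pooled-standard-deviation form of Cohen's d for two independent groups; the group summary statistics are invented.

```python
import math

def cohens_d(mean1, mean2, sd1, sd2, n1, n2):
    """Cohen's d for two independent groups: difference in means divided by the pooled SD."""
    pooled_sd = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (mean1 - mean2) / pooled_sd

# Hypothetical trial: treatment group gained 12.0 units (SD 5.0), control gained 8.0 (SD 5.5), n = 30 each.
d = cohens_d(12.0, 8.0, 5.0, 5.5, 30, 30)
print(round(d, 2))  # ~0.76 -> a moderate-to-large effect by the conventions above
```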
Levels of Evidence
Research designs are often viewed in a hierarchy of evidence. These designs have been discussed in this paper, but bear repeating in the context of evidence-based practice. Levels of Evidence have been established by the Center for Evidence-Based Medicine in Oxford, England (Table 9), as well as by other research consortiums. Each level is based on controlling as many factors (variables) as possible to confidently make conclusions without bias, the highest of which is cause-and-effect. In addition, grades of evidence have been established based on the quality and number of various levels of evidence to make recommendations in reviews and guidelines (Table 10). Thus, a research publication could be described and labeled using a combination of a level and a grade, such as Level II-A or Level II-B.

CONCLUSION
In conclusion, it is important for sports physical therapists to understand different research designs not only to support evidence-based practice, but also to contribute to the body of knowledge by using appropriate research designs. Clinicians should be aware of appropriate research design, validity, and levels of evidence in order to make informed clinical decisions. This commentary described the most common and relevant experimental and non-experimental designs used and encountered by sports physical therapists who contribute to and utilize evidence-based practice.

REFERENCES
1. Payton OD. Research: The validation of clinical practice. 3rd ed. Philadelphia: F. A. Davis; 1994.
2. Carter RE, Lubinsky J, Domholdt E. Rehabilitation research: principles and applications. 4th ed. St. Louis: Elsevier; 2011.
3. Portney LG, Watkins MP. Foundations of clinical research: applications to practice. 3rd ed. New Jersey: Pearson Prentice Hall; 2009.
4. Thomas JR, Nelson JK, Silverman SJ. Research methods in physical activity. 5th ed. Champaign, IL: Human Kinetics; 2005.
5. Labuschagne A. Qualitative research - airy fairy or fundamental? The Qualitative Report 2003; http://www.nova.edu/ssss/QR/QR8-1/labuschagne.html. Accessed August 29, 2011.
6. Reiman MP, Manske RC. Functional testing in human performance. Champaign, IL: Human Kinetics; 2009.
7. Oxford Centre for Evidence-based Medicine - Levels of Evidence. 2009; http://www.cebm.net/index.aspx?o=1025. Accessed August 28, 2011.

