
What are some of the criticisms voiced against evidence-based practice in social work or social intervention? To what extent do you agree with these criticisms, and why?


Golshan Mahdi-Nau
Word Count: 2230




Introduction

This paper will outline some of the criticisms voiced against evidence-based practice in social work and social intervention and argue that while many of these criticisms are misinformed about what evidence-based practice (EBP) actually is, others reflect an ignorance of why the need for EBP arose in the first place and of the potential for harm that interventions can have.
It will do so by first defining evidence-based practice as described by its originators and outlining the steps involved; second, by examining criticisms that are based on a fundamental misunderstanding of what EBP actually is; and finally, by examining criticisms based on the view that EBP is not applicable in the social sciences, and arguing that it is in fact necessary in order to prevent harm.

Figure 1 - An Updated Model for Evidence-Based Decisions. Source: Haynes, Devereaux, & Guyatt (2002, p. 36)

What is Evidence-Based Practice

Evidence-based practice has been defined by its originators as the "conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual [clients]" (Sackett, Richardson, Rosenberg, & Haynes, 1997, p. 2). It is based on the idea that if practitioners are to intervene in people's lives, they have a duty to ensure their interventions use the best available evidence. If they do not, there is a very real risk not only of wasting resources but of actually causing harm (Newman, Moseley, Tierney, & Ellis, 2005).

Evidence-based practice comprises five distinct steps:
1. Formulating an answerable question.
2. Finding, with maximum efficiency, the best evidence to answer that question.
3. Critically appraising the evidence for its validity and applicability by applying a hierarchy of evidence.
4. Applying the results to practice. This involves assessing similarities between the client and those studied, as well as the client's own preferences.
5. Evaluating the outcome and seeking ways to improve in future.

(Sackett et al., 1997, p. 3; Gibbs & Gambrill, 2002, p. 454)

Figure 1 illustrates how EBP is the integration of clinical expertise, research evidence and patient preferences. All three aspects must be taken into account in an EBP approach. One of its main features is that it is anti-authoritarian. As opposed to an "I know best" approach, there is much more focus on sharing knowledge and clinical expertise, whereby the client is involved in the decision-making process. Transparency is also a vital hallmark of EBP, whereby uncertainties are highlighted rather than hidden away (Gambrill, 2003, p. 4). In short, EBP can be described as "closing the gaps between research and practice to maximise opportunities to help clients and to avoid harm" (Gambrill, 2006, p. 339).

Criticisms of EBP
Businesses that parade as EBP

It is important to distinguish between criticisms aimed at EBP as described by its originators above and criticisms that are not.

In the aptly titled "Evidence-Based Practice: Sea Change or the Emperor's New Clothes?", Gambrill points out that criticisms of EBP are often based not on true EBP but rather on authoritarian businesses that have simply rebranded as "evidence-based" without possessing any of the aforementioned characteristics. The paper points out how such businesses are a major obstacle to gaining credibility for EBP, as they display exactly what EBP is actively trying to suppress: pseudoscience, fads and practices that lack transparency and ignore the real potential for harm that interventions can have (Gambrill, 2006, p. 352). Such criticisms of EBP are invalid simply because they are not actually referring to EBP.

Misunderstandings of EBP

A second, related category of criticisms can be grouped together because they are largely ignorant of the actual process of EBP.
Ignores clinical expertise
One unfounded criticism of EBP, as pointed out by Gibbs and Gambrill, is that it ignores clinical expertise. Although clinical expertise is not seen as sufficient evidence that an intervention is working, it is vital and necessary for working through the steps mentioned above in order to obtain objective evidence. In fact, Figure 1 clearly shows clinical expertise as a core component of EBP (Gibbs & Gambrill, 2002, p. 459).

Cookbook approach
Some authors, such as Webb, argue that EBP is nothing more than an impersonal cookbook approach because it "ignor[es] the process of deliberation and choice involved in decision making" (Webb, 2001, p. 67). Webb additionally argues that EBP ignores client values and preferences because, according to EBP, "social work decisions should rest solely on evidence leading to effective outcomes" (Webb, 2001, p. 62). However, as Gibbs and Gambrill point out, client values and preferences form a core part of EBP, as can be seen in Figure 1 as well as in step 4 of the EBP steps (Gibbs & Gambrill, 2002, p. 459).

Nothing new about EBP
Some critics suggest that there is nothing new about EBP and that social workers have long been required to incorporate research findings into their practice. Such critics point to Joel Fischer (1978), who encouraged practitioners to use methods where "evidence indicates that such application has a substantial chance to produce successful outcome" (Fischer, 1978, p. 67). However, Gibbs and Gambrill point out that EBP describes a set of steps that integrate research and ethical guidelines with practice, and that no print social science textbook mentions how to pose a well-formulated question. Additionally, they point out that modern advances in the internet and database searching have been necessary for the proper practice of EBP. Finally, they note that when the term "evidence-based" was searched in the SWAB social sciences database for 1991-2001, only 18 hits were produced, whereas the same term in the MEDLINE medicine database produced 8,805 hits. This suggests that, at least within the realm of the social sciences, there is definitely something new about EBP (Gibbs & Gambrill, 2002, pp. 460-461).

Social intervention is different to medicine, therefore EBP is inappropriate
The other area of criticism against EBP argues that, although EBP can be applied in medicine, it is fundamentally inappropriate to apply in social situations.
Removes/supplants professional judgement
Webb argues that EBP is inappropriate in a social setting because it supplants professional judgement. He notes that there is a preference to change professional practice from decisions based on opinion to those made on the basis of evidence, and concludes that, according to EBP:

"Opinion-based judgement is inferior to evidence-based decision making, and that extraneous influences such as resource constraints and professional values should not contaminate the evaluative process. According to this view, social work decisions should rest solely on evidence leading to effective outcomes." (Webb, 2001, p. 62)


This is completely in keeping with the earlier definition of EBP. Although Webb presents this as an inherent flaw in EBP, it is in fact its greatest strength. As Chalmers points out, "the road to hell can be paved with the best of intentions". Chalmers uses the example of Benjamin Spock's 1966 advice that babies should not sleep on their backs (advice which led to thousands of avoidable sudden infant deaths) to show that even theoretically sound professional advice designed to be beneficial can have devastating consequences (Chalmers, 2003).

The classic example illustrating how an intervention can result in negative outcomes despite practitioner belief that it has worked is the Cambridge-Somerville Youth Study, which analysed the effects of early intervention on crime prevention. Using random allocation and matching of study participants, 253 matched pairs of 10-year-olds were identified as "at risk" of delinquency. The treatment group of boys received multiple interventions thought to be beneficial, such as counselling, sports participation, tuition and employment assistance.

At the end of the intervention, the boys in the treatment group had made some fairly good adjustments from the time the intervention had started. However, in order to determine whether the improvements could be attributed to the intervention itself, the researchers also tracked down the control group, who did not receive the intervention. Surprisingly, they found that almost equal numbers of the control and intervention groups did better than anticipated at the beginning of the study. Disappointingly, after a 35-year follow-up they found that the boys who received the intervention were statistically more likely to have negative outcomes than the control group, including death by age 35, serious criminal convictions and serious mental illness. The results reflected a dose-response relationship, in the sense that the boys who received the highest levels of intervention were more likely to be worse off (McCord, 2003).

Surprisingly, two thirds of the men who received the intervention believed the programme had helped them. And disturbingly, when the staff who delivered the intervention were asked who benefited the most from the programme, the boys they selected were in fact more likely to have turned out worse than their matched controls (McCord, 2003).

This study shows not only that interventions have the capacity to do more harm than good, but that even where practitioners and clients believed the intervention had been beneficial, the objective evidence failed to show this. McCord pointed out that had there been no control group, the researchers would have concluded that the intervention was beneficial when it was in fact harmful (McCord, 2003, p. 23). Ironically, Webb himself points out in the very next sentence: "What is meant by effectiveness, of course, is often a matter of personal interpretation" (Webb, 2001, p. 62). This is precisely why social work decisions should be based on sound evidence and not on subjective expert opinion.

Webb further argues that EBP assumes professionals are rational actors, ignoring their complexity. He argues that by underplaying "the values and anticipations of social workers at the level of ideas [EBP] ignores the processes of deliberation and choice involved in their decision making" (Webb, 2001, p. 67). However, once again Webb misses the point: it is precisely because social workers and professionals are not rational actors that EBP is needed. It is because of their complexity that a framework is required to minimise individual biases and judgement, which can ultimately lead to harm, as seen in the Cambridge-Somerville Study.
Systematic reviews and RCTs not relevant to social sciences
Another criticised aspect of EBP is the applicability of systematic reviews to assessing the effects of social interventions. Certain criticisms, however, are simply misinformed. One such criticism is that systematic reviews exclude qualitative data. Webb, for instance, argues that EBP relies on numerical data, suggesting that qualitative data is not included. However, a simple search of the Cochrane Library of systematic reviews shows many examples containing qualitative data, for example a paper titled "Barriers and facilitators to the implementation of lay health worker programmes to improve access to maternal and child health: qualitative evidence synthesis" (Glenton et al., 2013).

Webb further criticises EBP for prioritising systematic reviews and randomised trials while ignoring other research methods taught on sociology and cultural studies courses (Webb, 2001). However, Chalmers points out that critics such as Webb generally fail to address the bigger problem: that different methods of reviewing evidence can lead to different conclusions (Chalmers, 2003). One of the main advantages of the EBP approach is that it also sheds light on what is not known, involving clients, as opposed to cherry-picking positive results to present a distorted image (Gambrill, 2003, p. 14).

Criticism of Randomised Controlled Trials

Despite the established place that randomised controlled trials (RCTs) hold in the medical field, social scientists still tend to view them with suspicion, arguing that they are inappropriate for evaluating social interventions (Oakley, 1998, p. 1239).

However, the main argument EBP proponents use in favour of RCTs is that without a control group you can never be completely sure that any effects observed are actually caused by the intervention. This can be seen in the Cambridge-Somerville study: had it not been for the matched control group of boys, the practitioners would have falsely believed their intervention was doing good when it was in fact doing harm. Oakley argued that RCTs offer the same thing in social interventions that they promise in medicine: "protection of the public from potentially damaging uncontrolled experimentation and a more rational knowledge about the benefits to be derived from professional intervention" (Oakley, 1998, p. 1242).

Conclusion
In conclusion, many of the criticisms of EBP are simply misinformed and will hopefully diminish as the field becomes more established and the literature grows. Other criticisms, however, not only demonstrate an ignorance of EBP but also underestimate the inherent dangers of interventions that are not grounded in evidence.



Bibliography


Chalmers, I. (2003). Trying to do more good than harm in policy and practice: The role of rigorous, transparent, up-to-date evaluations. The ANNALS of the American Academy of Political and Social Science, 589(1), 22-40. doi:10.1177/0002716203254762
Fischer, J. (1978). Effective casework practice. McGraw-Hill College.
Gambrill, E. (2003). From the Editor: Evidence-based practice: Sea change or the Emperor's new clothes? Journal of Social Work Education.
Gambrill, E. (2006). Evidence-based practice and policy: Choices ahead. Research on Social Work Practice, 16(3), 338-357. doi:10.1177/1049731505284205
Gibbs, L., & Gambrill, E. (2002). Evidence-based practice: Counterarguments to objections. Research on Social Work Practice, 12(3), 452-476. doi:10.1177/1049731502012003007
Glenton, C., Colvin, C. J., Carlsen, B., Swartz, A., Lewin, S., Noyes, J., & Rashidian, A. (2013). Barriers and facilitators to the implementation of lay health worker programmes to improve access to maternal and child health: qualitative evidence synthesis. Cochrane Database of Systematic Reviews, (10).
Haynes, R. B., Devereaux, P. J., & Guyatt, G. H. (2002). Clinical expertise in the era of evidence-based medicine and patient choice. Evidence Based Medicine, 7(2), 36-38. doi:10.1136/ebm.7.2.36
McCord, J. (2003). Cures that harm: Unanticipated outcomes of crime prevention programs. The ANNALS of the American Academy of Political and Social Science, 587, 16-30. doi:10.2307/1049945
Newman, T., Moseley, A., Tierney, S., & Ellis, A. (2005). Evidence-based social work: A guide for the perplexed.
Oakley, A. (1998). Experimentation and social interventions: A forgotten but important history. BMJ, 317(7167), 1239-1242. doi:10.1136/bmj.317.7167.1239
Sackett, D. L., Richardson, W. S., Rosenberg, W., & Haynes, R. B. (1997). How to practice and teach evidence-based medicine. New York: Churchill Livingstone.
Webb, S. A. (2001). Some considerations on the validity of evidence-based practice in social work. British Journal of Social Work, 31(1), 57-79.
