
EDUCATION IN A COMPETITIVE AND GLOBALIZING WORLD

ASSISTING STUDENTS
STRUGGLING IN MATH
AND SCIENCE

EDUCATION IN A COMPETITIVE
AND GLOBALIZING WORLD

Additional books and e-books in this series can be found
on Nova’s website under the Series tab.

TIMOTHY WINDER
Copyright © 2018 by Nova Science Publishers, Inc.

All rights reserved. No part of this book may be reproduced, stored in a retrieval system or transmitted
in any form or by any means: electronic, electrostatic, magnetic, tape, mechanical photocopying,
recording or otherwise without the written permission of the Publisher.

We have partnered with Copyright Clearance Center to make it easy for you to obtain permissions to
reuse content from this publication. Simply navigate to this publication’s page on Nova’s website and
locate the “Get Permission” button below the title description. This button is linked directly to the
title’s permission page on copyright.com. Alternatively, you can visit copyright.com and search by
title, ISBN, or ISSN.

For further questions about using the service on copyright.com, please contact:
Copyright Clearance Center
Phone: +1-(978) 750-8400 Fax: +1-(978) 750-4470 E-mail: info@copyright.com.

NOTICE TO THE READER


The Publisher has taken reasonable care in the preparation of this book, but makes no expressed or
implied warranty of any kind and assumes no responsibility for any errors or omissions. No liability is
assumed for incidental or consequential damages in connection with or arising out of information
contained in this book. The Publisher shall not be liable for any special, consequential, or exemplary
damages resulting, in whole or in part, from the readers’ use of, or reliance upon, this material. Any
parts of this book based on government reports are so indicated and copyright is claimed for those parts
to the extent applicable to compilations of such works.

Independent verification should be sought for any data, advice or recommendations contained in this
book. In addition, no responsibility is assumed by the publisher for any injury and/or damage to
persons or property arising from any methods, products, instructions, ideas or otherwise contained in
this publication.

This publication is designed to provide accurate and authoritative information with regard to the subject
matter covered herein. It is sold with the clear understanding that the Publisher is not engaged in
rendering legal or any other professional services. If legal or any other expert assistance is required, the
services of a competent person should be sought. FROM A DECLARATION OF PARTICIPANTS
JOINTLY ADOPTED BY A COMMITTEE OF THE AMERICAN BAR ASSOCIATION AND A
COMMITTEE OF PUBLISHERS.

Additional color graphics may be available in the e-book version of this book.

Library of Congress Cataloging-in-Publication Data

ISBN: 978-1-53613-740-8

Published by Nova Science Publishers, Inc., New York


CONTENTS

Preface
Chapter 1 Assisting Students Struggling with Mathematics:
Response to Intervention (RtI) for Elementary and Middle Schools
Russell Gersten, Sybilla Beckmann, Benjamin Clarke,
Anne Foegen, Laurel Marsh, Jon R. Star and Bradley Witzel
Chapter 2 Encouraging Girls in Math and Science: IES Practice Guide
Diane Halpern, Joshua Aronson, Nona Reimer,
Sandra Simpkins, Jon R. Star and Kathryn Wentzel
Index
PREFACE

Students struggling with mathematics may benefit from early interventions aimed at improving their mathematics ability and ultimately
preventing subsequent failure. This guide provides eight specific
recommendations intended to help teachers, principals, and school
administrators use Response to Intervention (RtI) to identify students who
need assistance in mathematics and to address the needs of these students
through focused interventions. The guide provides suggestions on how to
carry out each recommendation and explains how educators can overcome
potential roadblocks to implementing the recommendations. The
recommendations were developed by a panel of researchers and
practitioners with expertise in various dimensions of this topic. The panel
includes a research mathematician active in issues related to K–8
mathematics education, two professors of mathematics education, several
special educators, and a mathematics coach currently providing
professional development in mathematics in schools. The panel members
worked collaboratively to develop recommendations based on the best
available research evidence and their expertise in mathematics, special
education, research, and practice. The body of evidence they considered in
developing these recommendations included evaluations of mathematics
interventions for low-performing students and students with learning
disabilities. The panel considered high-quality experimental and quasi-
experimental studies, such as those meeting the criteria of the What Works
Clearinghouse (http://www.whatworks.ed.gov), to provide the strongest evidence of effectiveness. They also examined studies of the technical adequacy of batteries of screening and progress monitoring measures for
recommendations relating to assessment. In some cases, recommendations
reflect evidence-based practices that have been demonstrated as effective
through rigorous research. In other cases, when such evidence is not
available, the recommendations reflect what this panel believes are best
practices. Throughout the guide, they clearly indicate the quality of the
evidence that supports each recommendation. This practice guide also aims
to formulate specific and coherent evidence-based recommendations that
educators can use to encourage girls in the fields of math and science. The
target audience is teachers and other school personnel with direct contact
with students, such as coaches, counselors, and principals. The practice
guide includes specific recommendations for educators and the quality of
evidence that supports these recommendations. The authors are a small
group with expertise on this topic. The range of evidence they considered
in developing this document is vast, ranging from experiments, to trends in
the National Assessment of Educational Progress (NAEP) data, to
correlational and longitudinal studies. For questions about what works
best, high-quality experimental and quasi-experimental studies, such as
those meeting the criteria of the What Works Clearinghouse, have a
privileged position. In all cases, they pay particular attention to findings
that are replicated across studies. Although the authors draw on evidence
about the effectiveness of specific practices, the authors use this
information to make broader points about improving practice. In this
document, they have tried to take findings from research or practices
recommended by experts and describe how the use of this recommendation
might actually unfold in school settings. In other words, they aim to
provide sufficient detail so that educators will have a clear sense of the
steps necessary to make use of the recommendation. A unique feature of
practice guides is the explicit and clear delineation of the quality and
quantity of evidence that supports each claim. To this end, they adapted a
semi-structured hierarchy suggested by the Institute of Education Sciences.
In: Assisting Students Struggling in Math … ISBN: 978-1-53613-740-8
Editor: Timothy Winder © 2018 Nova Science Publishers, Inc.

Chapter 1

ASSISTING STUDENTS STRUGGLING WITH MATHEMATICS:
RESPONSE TO INTERVENTION (RTI) FOR ELEMENTARY AND MIDDLE SCHOOLS*

Russell Gersten1, Sybilla Beckmann2, Benjamin Clarke1, Anne Foegen3, Laurel Marsh4, Jon R. Star5 and Bradley Witzel6

1 Instructional Research Group, Los Alamitos, CA, US
2 University of Georgia, Athens, GA, US
3 Iowa State University, Ames, IA, US
4 Howard County Public School System, Ellicott City, MD, US
5 Harvard University, Cambridge, MA, US
6 Winthrop University, Rock Hill, SC, US

* This is an edited, reformatted, and augmented version of Assisting Students Struggling with Mathematics: Response to Intervention (RtI) for Elementary and Middle Schools (NCEE 2009-4060). Washington, DC: National Center for Education Evaluation and Regional Assistance, Institute of Education Sciences, U.S. Department of Education, dated April 2009.

INTRODUCTION

Students struggling with mathematics may benefit from early interventions aimed at improving their mathematics ability and ultimately
preventing subsequent failure. This guide provides eight specific
recommendations intended to help teachers, principals, and school
administrators use Response to Intervention (RtI) to identify students who
need assistance in mathematics and to address the needs of these students
through focused interventions. The guide provides suggestions on how to
carry out each recommendation and explains how educators can overcome
potential roadblocks to implementing the recommendations.
The recommendations were developed by a panel of researchers and
practitioners with expertise in various dimensions of this topic. The panel
includes a research mathematician active in issues related to K–8
mathematics education, two professors of mathematics education, several
special educators, and a mathematics coach currently providing
professional development in mathematics in schools. The panel members
worked collaboratively to develop recommendations based on the best
available research evidence and our expertise in mathematics, special
education, research, and practice.
The body of evidence we considered in developing these
recommendations included evaluations of mathematics interventions for
low-performing students and students with learning disabilities. The panel
considered high-quality experimental and quasi-experimental studies, such
as those meeting the criteria of the What Works Clearinghouse
(http://www.whatworks.ed.gov), to provide the strongest evidence of
effectiveness. We also examined studies of the technical adequacy of
batteries of screening and progress monitoring measures for
recommendations relating to assessment.
In some cases, recommendations reflect evidence-based practices that
have been demonstrated as effective through rigorous research. In other
cases, when such evidence is not available, the recommendations reflect
what this panel believes are best practices. Throughout the guide, we

clearly indicate the quality of the evidence that supports each recommendation.
Each recommendation receives a rating based on the strength of the
research evidence that has shown the effectiveness of a recommendation
(Table 1). These ratings—strong, moderate, or low—have been defined as
follows:
Strong refers to consistent and generalizable evidence that an
intervention program causes better outcomes.1
Moderate refers either to evidence from studies that allow strong
causal conclusions but cannot be generalized with assurance to the
population on which a recommendation is focused (perhaps because the
findings have not been widely replicated)—or to evidence from studies that
are generalizable but have more causal ambiguity than offered by experimental designs (such as statistical models of correlational data or group
comparison designs for which the equivalence of the groups at pretest is
uncertain).
Low refers to expert opinion based on reasonable extrapolations from
research and theory on other topics and evidence from studies that do not
meet the standards for moderate or strong evidence.

Table 1. Institute of Education Sciences levels of evidence for practice guides

Strong
In general, characterization of the evidence for a recommendation as strong requires both studies with high internal validity (i.e., studies whose designs can support causal conclusions) and studies with high external validity (i.e., studies that in total include enough of the range of participants and settings on which the recommendation is focused to support the conclusion that the results can be generalized to those participants and settings). Strong evidence for this practice guide is operationalized as:
• A systematic review of research that generally meets the standards of the What Works Clearinghouse (WWC) (see http://ies.ed.gov/ncee/wwc/) and supports the effectiveness of a program, practice, or approach with no contradictory evidence of similar quality; OR
• Several well-designed, randomized controlled trials or well-designed quasi-experiments that generally meet the standards of WWC and support the effectiveness of a program, practice, or approach, with no contradictory evidence of similar quality; OR
• One large, well-designed, randomized controlled, multisite trial that meets WWC standards and supports the effectiveness of a program, practice, or approach, with no contradictory evidence of similar quality; OR
• For assessments, evidence of reliability and validity that meets the Standards for Educational and Psychological Testing.a

1 Following WWC guidelines, we consider a positive, statistically significant effect or large effect size (i.e., greater than 0.25) as an indicator of positive effects.

Moderate
In general, characterization of the evidence for a recommendation as moderate requires studies with high internal validity but moderate external validity, or studies with high external validity but moderate internal validity. In other words, moderate evidence is derived from studies that support strong causal conclusions but for which generalization is uncertain, or from studies that support the generality of a relationship but for which causality is uncertain. Moderate evidence for this practice guide is operationalized as:
• Experiments or quasi-experiments generally meeting the standards of WWC and supporting the effectiveness of a program, practice, or approach with small sample sizes and/or other conditions of implementation or analysis that limit generalizability and no contrary evidence; OR
Moderate  Comparison group studies that do not demonstrate equivalence of groups at
pretest and therefore do not meet the standards of WWC but that (a)
consistently show enhanced outcomes for participants experiencing a
particular program, practice, or approach and (b) have no major flaws related
to internal validity other than lack of demonstrated equivalence at pretest (e.g.,
only one teacher or one class per condition, unequal amounts of instructional
time, highly biased outcome measures); OR
 Correlational research with strong statistical controls for selection bias and
for discerning influence of endogenous factors and no contrary evidence; OR
 For assessments, evidence of reliability that meets the Standards for
Educational and Psychological Testingb but with evidence of validity from
samples not adequately representative of the population on which the
recommendation is focused.
Low
In general, characterization of the evidence for a recommendation as low means that the recommendation is based on expert opinion derived from strong findings or theories in related areas and/or expert opinion buttressed by direct evidence that does not rise to the moderate or strong levels. Low evidence is operationalized as evidence not meeting the standards for the moderate or high levels.
a American Educational Research Association, American Psychological Association, and National
Council on Measurement in Education (1999).
b Ibid.
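The 0.25 effect-size threshold in footnote 1 can be made concrete. The sketch below is illustrative only: the scores, group sizes, and the use of Cohen's d with a pooled standard deviation are our assumptions for demonstration, not the WWC's exact computation.

```python
from statistics import mean, stdev

def cohens_d(treatment, control):
    """Standardized mean difference between two groups,
    using a pooled sample standard deviation."""
    n_t, n_c = len(treatment), len(control)
    s_t, s_c = stdev(treatment), stdev(control)
    pooled_sd = (((n_t - 1) * s_t**2 + (n_c - 1) * s_c**2)
                 / (n_t + n_c - 2)) ** 0.5
    return (mean(treatment) - mean(control)) / pooled_sd

# Hypothetical screening scores for an intervention and a comparison group.
treatment = [52, 55, 49, 60, 58, 54]
control = [48, 50, 47, 53, 51, 49]

d = cohens_d(treatment, control)
print(round(d, 2), d > 0.25)  # d ≈ 1.56, which exceeds the 0.25 threshold
```

An effect this large would count as an indicator of positive effects under the footnote's criterion; a d of, say, 0.10 would not, even if statistically significant samples were unavailable to confirm it.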

The What Works Clearinghouse Standards and Their Relevance to This Guide

The panel relied on WWC evidence standards to assess the quality of evidence supporting mathematics intervention programs and practices. The
WWC addresses evidence for the causal validity of instructional programs
and practices according to WWC standards. Information about these
standards is available at http://ies.ed.gov/ncee/wwc/references/standards/.
The technical quality of each study is rated and placed into one of three
categories:

• Meets Evidence Standards—for randomized controlled trials and regression discontinuity studies that provide the strongest evidence of causal validity.
• Meets Evidence Standards with Reservations—for all quasi-experimental studies with no design flaws and randomized controlled trials that have problems with randomization, attrition, or disruption.
• Does Not Meet Evidence Screens—for studies that do not provide strong evidence of causal validity.

Following the recommendations and suggestions for carrying out the recommendations, Appendix B presents information on the research evidence to support the recommendations.
The panel would like to thank Kelly Haymond for her contributions to
the analysis, the WWC reviewers for their contribution to the project, and
Jo Ellen Kerr and Jamila Henderson for their support of the intricate
logistics of the project. We also would like to thank Scott Cody for his
oversight of the overall progress of the practice guide.

Dr. Russell Gersten
Dr. Sybilla Beckmann
Dr. Benjamin Clarke
Dr. Anne Foegen
Ms. Laurel Marsh
Dr. Jon R. Star
Dr. Bradley Witzel

OVERVIEW

Response to Intervention (RtI) is an early detection, prevention, and support system that identifies struggling students and assists them before they fall behind. In the 2004 reauthorization of the Individuals with Disabilities Education Act (PL 108-446), states were encouraged to use RtI to
accurately identify students with learning disabilities and encouraged to
provide additional supports for students with academic difficulties
regardless of disability classification. Although many states have already
begun to implement RtI in the area of reading, RtI initiatives for
mathematics are relatively new.
Students’ low achievement in mathematics is a matter of national
concern. The recent National Mathematics Advisory Panel Report released
in 2008 summarized the poor showing of students in the United States on
international comparisons of mathematics performance such as the Trends
in International Mathematics and Science Study (TIMSS) and the Program
for International Student Assessment (PISA).2 A recent survey of algebra
teachers associated with the report identified key deficiencies of students
entering algebra, including aspects of whole number arithmetic, fractions, ratios,
and proportions.3 The National Mathematics Advisory Panel concluded that
all students should receive preparation from an early age to ensure their
later success in algebra. In particular, the report emphasized the need for
mathematics interventions that mitigate and prevent mathematics
difficulties.
This panel believes that schools can use an RtI framework to help
struggling students prepare for later success in mathematics. To date, little
research has been conducted to identify the most effective ways to initiate
and implement RtI frameworks for mathematics. However, there is a rich
body of research on effective mathematics interventions implemented
outside an RtI framework. Our goal in this practice guide is to provide

2 See, for example, National Mathematics Advisory Panel (2008) and Schmidt and Houang (2007). For more information on the TIMSS, see http://nces.ed.gov/timss/. For more information on PISA, see http://www.oecd.org.
3 National Mathematics Advisory Panel (2008).

suggestions for assessing students’ mathematics abilities and implementing mathematics interventions within an RtI framework, in a way that reflects
the best evidence on effective practices in mathematics interventions.
RtI begins with high-quality instruction and universal screening for all
students. Whereas high-quality instruction seeks to prevent mathematics
difficulties, screening allows for early detection of difficulties if they
emerge. Intensive interventions are then provided to support students in
need of assistance with mathematics learning.4 Student responses to
intervention are measured to determine whether they have made adequate
progress and (1) no longer need intervention, (2) continue to need some
intervention, or (3) need more intensive intervention. The levels of
intervention are conventionally referred to as “tiers.” RtI is typically
thought of as having three tiers.5 Within a three-tiered RtI model, each tier
is defined by specific characteristics.

• Tier 1 is the mathematics instruction that all students in a classroom receive. It entails universal screening of all students, regardless of mathematics proficiency, using valid measures to identify students at risk for future academic failure—so that they can receive early intervention.6 There is no clear consensus on the characteristics of instruction other than that it is “high quality.”7
• In tier 2 interventions, schools provide additional assistance to students who demonstrate difficulties on screening measures or who demonstrate weak progress.8 Tier 2 students receive supplemental small group mathematics instruction aimed at building targeted mathematics proficiencies.9 These interventions are typically provided for 20 to 40 minutes, four to five times each week.10 Student progress is monitored throughout the intervention.11
• Tier 3 interventions are provided to students who are not benefiting from tier 2 and require more intensive assistance.12 Tier 3 usually entails one-on-one tutoring along with an appropriate mix of instructional interventions. In some cases, special education services are included in tier 3, and in others special education is considered an additional tier.13 Ongoing analysis of student performance data is critical in this tier. Typically, specialized personnel, such as special education teachers and school psychologists, are involved in tier 3 and special education services.14 However, students often receive relevant mathematics interventions from a wide array of school personnel, including their classroom teacher.

4 Fuchs, Fuchs, Craddock et al. (2008).
5 Fuchs, Fuchs, and Vaughn (2008) make the case for a three-tier RtI model. Note, however, that some states and school districts have implemented multitier intervention systems with more than three tiers.
6 For reviews see Jiban and Deno (2007); Fuchs, Fuchs, Compton et al. (2007); Gersten, Jordan, and Flojo (2005).
7 National Mathematics Advisory Panel (2008); National Research Council (2001).
8 Fuchs, Fuchs, Craddock et al. (2008); National Joint Committee on Learning Disabilities (2005).
9 Fuchs, Fuchs, Craddock et al. (2008).
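The tier-movement logic in the Overview (students who respond exit intervention, partial responders continue, non-responders move to a more intensive tier) can be sketched as a hypothetical decision rule. The function and its inputs are illustrative; the guide prescribes no specific algorithm or thresholds.

```python
def next_tier(current_tier, made_adequate_progress, still_at_risk):
    """Illustrative RtI movement rule: responders exit intervention,
    partial responders stay put, and non-responders move to a more
    intensive tier (capped at tier 3)."""
    if made_adequate_progress and not still_at_risk:
        return 1                       # no longer needs intervention
    if made_adequate_progress:
        return current_tier            # continue current support
    return min(current_tier + 1, 3)    # intensify support

print(next_tier(2, True, False))   # responded fully: back to tier 1
print(next_tier(2, False, True))   # did not respond: move to tier 3
```

In practice this decision would rest on progress-monitoring data gathered throughout the intervention, not a single boolean, and on local criteria for "adequate progress."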

SUMMARY OF THE RECOMMENDATIONS

This practice guide offers eight recommendations for identifying and supporting students struggling in mathematics (Table 2). The recommendations are intended to be implemented within an RtI framework (typically three-tiered). The panel chose to limit its discussion of tier 1 to universal screening practices (i.e., the guide does not make recommendations for general classroom mathematics instruction). Recommendation 1 provides specific suggestions for conducting universal screening effectively. For RtI tiers 2 and 3, recommendations 2 through 8 focus on the most effective content and pedagogical practices that can be included in mathematics interventions.
Throughout this guide, we use the term “interventionist” to refer to those teaching the intervention. At a given school, the interventionist may be the general classroom teacher, a mathematics coach, a special education instructor, other certified school personnel, or an instructional assistant. The panel recognizes that schools rely on different personnel to fill these roles depending on state policy, school resources, and preferences.

10 For example, see Jitendra et al. (1998) and Fuchs, Fuchs, Craddock et al. (2008).
11 National Joint Committee on Learning Disabilities (2005).
12 Fuchs, Fuchs, Craddock et al. (2008).
13 Fuchs, Fuchs, Craddock et al. (2008); National Joint Committee on Learning Disabilities (2005).
14 National Joint Committee on Learning Disabilities (2005).

Table 2. Recommendations and corresponding levels of evidence

Tier 1
1. Screen all students to identify those at risk for potential mathematics difficulties and provide interventions to students identified as at risk. (Level of evidence: Moderate)

Tiers 2 and 3
2. Instructional materials for students receiving interventions should focus intensely on in-depth treatment of whole numbers in kindergarten through grade 5 and on rational numbers in grades 4 through 8. These materials should be selected by committee. (Level of evidence: Low)
3. Instruction during the intervention should be explicit and systematic. This includes providing models of proficient problem solving, verbalization of thought processes, guided practice, corrective feedback, and frequent cumulative review. (Level of evidence: Strong)
4. Interventions should include instruction on solving word problems that is based on common underlying structures. (Level of evidence: Strong)
5. Intervention materials should include opportunities for students to work with visual representations of mathematical ideas and interventionists should be proficient in the use of visual representations of mathematical ideas. (Level of evidence: Moderate)
6. Interventions at all grade levels should devote about 10 minutes in each session to building fluent retrieval of basic arithmetic facts. (Level of evidence: Moderate)
7. Monitor the progress of students receiving supplemental instruction and other students who are at risk. (Level of evidence: Low)
8. Include motivational strategies in tier 2 and tier 3 interventions. (Level of evidence: Low)
Source: Authors’ compilation based on analysis described in text.
Recommendation 1 addresses the type of screening measures that
should be used in tier 1. We note that there is more research on valid
screening measures for students in kindergarten through grade 2,15 but
there are also reasonable strategies to use for students in more advanced
grades.16 We stress that no one screening measure is perfect and that
schools need to monitor the progress of students who score slightly above
or slightly below any screening cutoff score.
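The point about scores near the cutoff can be illustrated with a small sketch. The band width of plus or minus 10 percent, and all names and scores, are hypothetical; the guide sets no specific monitoring margin.

```python
def screening_groups(scores, cutoff, band=0.1):
    """Partition students by a screening cutoff, flagging a monitoring
    band of +/-10% around the cutoff (the band width is illustrative)."""
    at_risk, monitor, on_track = [], [], []
    low, high = cutoff * (1 - band), cutoff * (1 + band)
    for name, score in scores.items():
        if score < low:
            at_risk.append(name)       # clearly below the cutoff
        elif score <= high:
            monitor.append(name)       # slightly above or below the cutoff
        else:
            on_track.append(name)
    return at_risk, monitor, on_track

scores = {"Ana": 31, "Ben": 39, "Cam": 42, "Dee": 55}
print(screening_groups(scores, cutoff=40))
```

With a cutoff of 40, the students scoring 39 and 42 both land in the monitoring band, which reflects the guide's caution that a single cut score should not be treated as a hard boundary.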
Recommendations 2 through 6 address the content of tier 2 and tier 3
interventions and the types of instructional strategies that should be used.
In recommendation 2, we translate the guidance by the National
Mathematics Advisory Panel (2008) and the National Council of Teachers
of Mathematics Curriculum Focal Points (2006) into suggestions for the
content of intervention curricula. We argue that the mathematical focus
and the in-depth coverage advocated for proficient students are also
necessary for students with mathematics difficulties. For most students, the
content of interventions will include foundational concepts and skills
introduced earlier in the student’s career but not fully understood and
mastered. Whenever possible, links should be made between foundational
mathematical concepts in the intervention and grade-level material.
At the center of the intervention recommendations is that
instruction should be systematic and explicit (recommendation 3). This
is a recurrent theme in the body of valid scientific research.17 We
explore the multiple meanings of explicit instruction and indicate which
components of explicit instruction appear to be most related to
improved student outcomes. We believe this information is important
for districts and state departments to have as they consider selecting
materials and providing professional development for interventionists.
15 Gersten, Jordan, and Flojo (2005); Gersten, Clarke, and Jordan (2007).
16 Jiban and Deno (2007); Foegen, Jiban, and Deno (2007).
17 Darch, Carnine, and Gersten (1984); Fuchs et al. (2003a); Jitendra et al. (1998); Schunk and Cox (1986); Tournaki (2003); Wilson and Sindelar (1991).

Next, we highlight several areas of research that have produced promising findings in mathematics interventions. These include
systematically teaching students about the problem types associated with a
given operation and its inverse (such as problem types that indicate
addition and subtraction) (recommendation 4).18 We also recommend
practices to help students translate abstract symbols and numbers into
meaningful visual representations (recommendation 5).19 Another feature
that we identify as crucial for long-term success is systematic instruction to
build quick retrieval of basic arithmetic facts (recommendation 6). Some
evidence exists supporting the allocation of time in the intervention to
practice fact retrieval using flash cards or computer software.20 There is
also evidence that systematic work with properties of operations and
counting strategies (for younger students) is likely to promote growth in
other areas of mathematics beyond fact retrieval.21
The final two recommendations address other considerations in
implementing tier 2 and tier 3 interventions. Recommendation 7 addresses
the importance of monitoring the progress of students receiving
interventions. Specific types of formative assessment approaches and
measures are described. We argue for two types of ongoing assessment.
One is the use of curriculum-embedded assessments that gauge how well
students have learned the material in that day’s or week’s lesson(s). The
panel believes this information is critical for interventionists to determine
whether they need to spend additional time on a topic. It also provides the
interventionist and other school personnel with information that can be
used to place students in groups within tiers. In addition, we recommend
that schools regularly monitor the progress of students receiving

18
Jitendra et al. (1998); Xin, Jitendra, and Deatline-Buchman (2005); Darch, Carnine, and
Gersten (1984); Fuchs et al. (2003a); Fuchs et al. (2003b); Fuchs, Fuchs, Prentice et al.
(2004); Fuchs, Fuchs, and Finelli (2004); Fuchs, Fuchs, Craddock et al. (2008) Fuchs,
Seethaler et al. (2008).
19
Artus and Dyrek (1989); Butler et al. (2003); Darch, Carnine, and Gersten (1984); Fuchs et al.
(2005); Fuchs, Seethaler et al. (2008); Fuchs, Powell et al. (2008); Fuchs, Fuchs, Craddock
et al. (2008); Jitendra et al. (1998); Walker and Poteet (1989); Wilson and Sindelar (1991);
Witzel (2005); Witzel, Mercer, and Miller (2003); Woodward (2006).
20
Beirne-Smith (1991); Fuchs, Seethaler et al. (2008); Fuchs et al. (2005); Fuchs, Fuchs, Hamlett
et al. (2006); Fuchs, Powell et al. (2008).
21
Tournaki (2003); Woodward (2006).

interventions and those with scores slightly above or below the cutoff score
on screening measures with broader measures of mathematics proficiency.
This information provides the school with a sense of how the overall
mathematics program (including tier 1, tier 2, and tier 3) is affecting a
given student.
Recommendation 8 addresses the important issue of motivation.
Because many of the students struggling with mathematics have experienced
failure and frustration by the time they receive an intervention, we suggest
tools that can encourage active engagement of students and acknowledge
student accomplishments.

SCOPE OF THE PRACTICE GUIDE


Our goal is to provide evidence-based suggestions for screening
students for mathematics difficulties, providing interventions to students
who are struggling, and monitoring student responses to the interventions.
RtI intentionally cuts across the borders of special and general education
and involves school-wide collaboration. Therefore, our target audience for
this guide includes teachers, special educators, school psychologists and
counselors, as well as administrators. Descriptions of the materials and
instructional content in tier 2 and tier 3 interventions may be especially
useful to school administrators selecting interventions, while
recommendations that relate to content and pedagogy will be most useful
to interventionists.22
The focus of this guide is on providing RtI interventions in
mathematics for students in kindergarten through grade 8. This broad grade
range is in part a response to the recent report of the National Mathematics
Advisory Panel (2008), which emphasized a unified progressive approach
to promoting mathematics proficiency for elementary and middle schools.
Moreover, given the growing number of initiatives aimed at supporting

22. Interventionists may be any number of school personnel, including classroom teachers, special educators, school psychologists, paraprofessionals, and mathematics coaches and specialists. The panel does not specify the interventionist.
Assisting Students Struggling with Mathematics 13

students to succeed in algebra, the panel believes it essential to provide tier
2 and tier 3 interventions to struggling students in grades 4 through 8.
Because the bulk of research on mathematics interventions has focused on
students in kindergarten through grade 4, some recommendations for
students in older grades are extrapolated from this research.
The scope of this guide does not include recommendations for special
education referrals. Although enhancing the validity of special education
referrals remains important and an issue of ongoing discussion23 and
research,24 we do not address it in this practice guide, in part because
empirical evidence is lacking.
The discussion of tier 1 in this guide revolves only around effective
screening, because recommendations for general classroom mathematics
instruction were beyond the scope of this guide. For this reason, studies of
effective general mathematics instruction practices were not included in
the evidence base for this guide.25
The studies reviewed for this guide included two types of comparisons
among groups. First, several studies of tier 2 interventions compare
students receiving multicomponent tier 2 interventions with students
receiving only routine classroom instruction.26 This type of study provides
evidence of the effectiveness of providing tier 2 interventions but does not
permit conclusions about which component is most effective. The reason is
that it is not possible to identify whether one particular component or a
combination of components within a multicomponent intervention
produced an effect. Second, several other studies examined the effects of

two methods of tier 2 or tier 3 instruction.27 This type of study offers
evidence for the effectiveness of one approach to teaching within a tier
relative to another approach and assists with identifying the most beneficial
approaches for this population.

23. Kavale and Spaulding (2008); Fuchs, Fuchs, and Vaughn (2008); VanDerHeyden, Witt, and Gilbertson (2007).
24. Fuchs, Fuchs, Compton et al. (2006).
25. There were a few exceptions in which general mathematics instruction studies were included in the evidence base. When the effects of a general mathematics instruction program were specified for low-achieving or disabled students and the intervention itself appeared applicable to teaching tier 2 or tier 3 (e.g., teaching a specific operational strategy), we included them in this study. Note that disabled students were predominantly learning disabled.
26. For example, Fuchs, Seethaler et al. (2008) examined the effects of providing supplemental tutoring (i.e., a tier 2 intervention) relative to regular classroom instruction (i.e., tier 1).
The panel reviewed only studies for practices that sought to improve
student mathematics outcomes. The panel did not consider interventions
that improved other academic or behavioral outcomes. Instead, the panel
focused on practices that addressed the following areas of mathematics
proficiency: operations (either computation or estimation), concepts
(knowledge of properties of operations, concepts involving rational
numbers, prealgebra concepts), problem solving (word problems), and
measures of general mathematics achievement. Measures of fact fluency
were also included because quick retrieval of basic arithmetic facts is
essential for success in mathematics and a persistent problem for students
with difficulties in mathematics.28
Technical terms related to mathematics and technical aspects of
assessments (psychometrics) are defined in a glossary at the end of the
recommendations.

CHECKLIST FOR CARRYING OUT THE RECOMMENDATIONS

Recommendation 1. Screen all students to identify those at risk for
potential mathematics difficulties and provide interventions to students
identified as at risk.

27. For example, Tournaki (2003) examined the effects of providing supplemental tutoring in an operations strategy (a tier 2 intervention) relative to supplemental tutoring with a drill and practice approach (also a tier 2 intervention).
28. Geary (2004); Jordan, Hanich, and Kaplan (2003).

 As a district or school sets up a screening system, have a team
evaluate potential screening measures. The team should select
measures that are efficient and reasonably reliable and that
demonstrate predictive validity. Screening should occur in the
beginning and middle of the year.
 Select screening measures based on the content they cover, with an
emphasis on critical instructional objectives for each grade.
 In grades 4 through 8, use screening data in combination with state
testing results.
 Use the same screening tool across a district to enable analyzing
results across schools.

Recommendation 2. Instructional materials for students receiving
interventions should focus intensely on in-depth treatment of whole
numbers in kindergarten through grade 5 and on rational numbers in
grades 4 through 8. These materials should be selected by committee.

 For students in kindergarten through grade 5, tier 2 and tier 3
interventions should focus almost exclusively on properties of
whole numbers and operations. Some older students struggling
with whole numbers and operations would also benefit from
in-depth coverage of these topics.
 For tier 2 and tier 3 students in grades 4 through 8, interventions
should focus on in-depth coverage of rational numbers as well as
advanced topics in whole number arithmetic (such as long
division).
 Districts should appoint committees, including experts in
mathematics instruction and mathematicians with knowledge of
elementary and middle school mathematics curricula, to ensure
that specific criteria are covered in-depth in the curriculum they
adopt.

Recommendation 3. Instruction during the intervention should be
explicit and systematic. This includes providing models of proficient
problem solving, verbalization of thought processes, guided practice,
corrective feedback, and frequent cumulative review.

 Ensure that instructional materials are systematic and explicit. In
particular, they should include numerous clear models of easy and
difficult problems, with accompanying teacher think-alouds.
 Provide students with opportunities to solve problems in a group
and communicate problem-solving strategies.
 Ensure that instructional materials include cumulative review in
each session.

Recommendation 4. Interventions should include instruction on solving
word problems that is based on common underlying structures.

 Teach students about the structure of various problem types, how
to categorize problems based on structure, and how to determine
appropriate solutions for each problem type.
 Teach students to recognize the common underlying structure
between familiar and unfamiliar problems and to transfer known
solution methods from familiar to unfamiliar problems.

Recommendation 5. Intervention materials should include
opportunities for students to work with visual representations of
mathematical ideas and interventionists should be proficient in the use of
visual representations of mathematical ideas.

 Use visual representations such as number lines, arrays, and strip
diagrams.
 If visuals are not sufficient for developing accurate abstract thought
and answers, use concrete manipulatives first. Although this can also
be done with students in upper elementary and middle school
grades, use of manipulatives with older students should be
expeditious because the goal is to move toward understanding of—
and facility with—visual representations, and finally, to the abstract.

Recommendation 6. Interventions at all grade levels should devote
about 10 minutes in each session to building fluent retrieval of basic
arithmetic facts.

 Provide about 10 minutes per session of instruction to build quick
retrieval of basic arithmetic facts. Consider using technology, flash
cards, and other materials for extensive practice to facilitate
automatic retrieval.
 For students in kindergarten through grade 2, explicitly teach
strategies for efficient counting to improve the retrieval of
mathematics facts.
 Teach students in grades 2 through 8 how to use their knowledge
of properties, such as commutative, associative, and distributive
law, to derive facts in their heads.

Recommendation 7. Monitor the progress of students receiving
supplemental instruction and other students who are at risk.

 Monitor the progress of tier 2, tier 3, and borderline tier 1 students
at least once a month using grade-appropriate general outcome
measures.
 Use curriculum-embedded assessments in interventions to
determine whether students are learning from the intervention.
These measures can be used as often as every day or as
infrequently as once every other week.
 Use progress monitoring data to regroup students when necessary.

Recommendation 8. Include motivational strategies in tier 2 and tier 3
interventions.

 Reinforce or praise students for their effort and for attending to and
being engaged in the lesson.

 Consider rewarding student accomplishments.


 Allow students to chart their progress and to set goals for
improvement.

RECOMMENDATION 1. SCREEN ALL STUDENTS
TO IDENTIFY THOSE AT RISK FOR POTENTIAL
MATHEMATICS DIFFICULTIES AND PROVIDE
INTERVENTIONS TO STUDENTS IDENTIFIED AS AT RISK

The panel recommends that schools and districts systematically
administer universal screening to all students to determine which have
mathematics difficulties and require research-based interventions. Schools
should evaluate and select screening measures based on their reliability
and predictive validity, with particular emphasis on the measures’
specificity and sensitivity. Schools should also consider the efficiency of
the measure to enable screening many students in a short time.

Level of Evidence: Moderate

The panel judged the level of evidence supporting this recommendation to
be moderate. This recommendation is based on a series of high-quality
correlational studies with replicated findings that show the ability of measures
to predict performance in mathematics one year after administration (and in
some cases two years).29

Brief Summary of Evidence to Support the Recommendation

A growing body of evidence suggests that there are several valid and
reliable approaches for screening students in the primary grades. All these

29. For reviews see Jiban and Deno (2007); Fuchs, Fuchs, Compton et al. (2007); Gersten, Jordan, and Flojo (2005).

approaches target aspects of what is often referred to as number sense.30
They assess various aspects of knowledge of whole numbers—properties,
basic arithmetic operations, understanding of magnitude, and applying
mathematical knowledge to word problems. Some measures contain only
one aspect of number sense (such as magnitude comparison) and others
assess four to eight aspects of number sense. The single-component
approaches with the best ability to predict students’ subsequent
mathematics performance include screening measures of students’
knowledge of magnitude comparison and/or strategic counting.31 The
broader, multicomponent measures seem to predict with slightly greater
accuracy than single-component measures.32
Effective approaches to screening vary in efficiency, with some taking
as little as 5 minutes to administer and others as long as 20 minutes.
Multicomponent measures, which by their nature take longer to administer,
tend to be time-consuming for administering to an entire school population.
Timed screening measures33 and untimed screening measures34 have
been shown to be valid and reliable.
For the upper elementary grades and middle school, we were able to
locate fewer studies. They suggest that brief early screening measures that
take about 10 minutes and cover a proportional sampling of grade-level
objectives are reasonable and provide sufficient evidence of reliability.35 At
the current time, this research area is underdeveloped.

How to Carry Out This Recommendation

1. As a district or school sets up a screening system, have a team
evaluate potential screening measures. The team should select measures
that are efficient and reasonably reliable and that demonstrate predictive
validity. Screening should occur in the beginning and middle of the year.

30. Berch (2005); Dehaene (1999); Okamoto and Case (1996); Gersten and Chard (1999).
31. Gersten, Jordan, and Flojo (2005).
32. Fuchs, Fuchs, Compton et al. (2007).
33. For example, Clarke and Shinn (2004).
34. For example, Okamoto and Case (1996).
35. Jiban and Deno (2007); Foegen, Jiban, and Deno (2007).

The team that selects the measures should include individuals with
expertise in measurement (such as a school psychologist or a member of
the district research and evaluation division) and those with expertise in
mathematics instruction. In the opinion of the panel, districts should
evaluate screening measures on three dimensions.

 Predictive validity is an index of how well a score on a screening
measure earlier in the year predicts a student’s later mathematics
achievement. Greater predictive validity means that schools can be
more confident that decisions based on screening data are accurate.
In general, we recommend that schools and districts employ
measures with predictive validity coefficients of at least .60 within
a school year.36
 Reliability is an index of the consistency and precision of a measure.
We recommend measures with reliability coefficients of .80 or
higher.37
 Efficiency is how quickly the universal screening measure can be
administered, scored, and analyzed for all the students. As a
general rule, we suggest that a screening measure require no more
than 20 minutes to administer, which enables collecting a substantial
amount of information in a reasonable time frame. Note that many
screening measures take five minutes or less.38 We recommend that
schools select screening measures that have greater efficiency if their
technical adequacy (predictive validity, reliability, sensitivity, and
specificity) is roughly equivalent to less efficient measures.
Remember that screening measures are intended for administration to
all students in a school, and it may be better to invest more time in
diagnostic assessment of students who perform poorly on the
universal screening measure.

36. A coefficient of .0 indicates that there is no relation between the early and later scores, and a coefficient of 1.0 indicates a perfect positive relation between the scores.
37. A coefficient of .0 indicates that there is no relation between the two scores, and a coefficient of 1.0 indicates a perfect positive relation between the scores.
38. Foegen, Jiban, and Deno (2007); Fuchs, Fuchs, Compton et al. (2007); Gersten, Clarke, and Jordan (2007).

Keep in mind that screening is just a means of determining which
students are likely to need help. If a student scores poorly on a screening
measure or screening battery—especially if the score is at or near a cut
point—the panel recommends monitoring her or his progress carefully to
discern whether extra instruction is necessary.
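As a rough illustration of applying the predictive validity benchmark discussed above, the sketch below computes a Pearson correlation between fall screening scores and end-of-year mathematics scores and compares it with the .60 criterion. All scores here are invented for illustration; in practice, a measurement team would rely on published technical reports or the district's own evaluation data.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical fall screening scores and spring achievement scores
# for ten students (invented data).
fall = [12, 18, 25, 30, 14, 22, 28, 9, 16, 27]
spring = [310, 335, 360, 372, 318, 350, 365, 295, 330, 370]

r = pearson_r(fall, spring)
print(f"predictive validity r = {r:.2f}, meets .60 benchmark: {r >= 0.60}")
```

The same correlation routine could be applied to two administrations of the measure to estimate a reliability coefficient against the .80 benchmark.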
Developers of screening systems recommend that screening occur at
least twice a year (e.g., fall, winter, and/or spring).39 This panel recommends
that schools alleviate concern about students just above or below the cut
score by screening students twice during the year. The second screening in
the middle of the year allows another check on these students and also serves
to identify any students who may have been at risk and grown substantially
in their mathematics achievement—or those who were on-track at the
beginning of the year but have not shown sufficient growth. The panel
considers these two universal screenings to determine student proficiency as
distinct from progress monitoring (Recommendation 7), which occurs on a
more frequent basis (e.g., weekly or monthly) with a select group of
intervention students in order to monitor response to intervention.

2. Select screening measures based on the content they cover, with an
emphasis on critical instructional objectives for each grade.

The panel believes that content covered in a screening measure should
reflect the instructional objectives for a student’s grade level, with an
emphasis on the most critical content for the grade level. The National
Council of Teachers of Mathematics (2006) released a set of focal points
for each grade level designed to focus instruction on critical concepts for
students to master within a specific grade. Similarly, the National
Mathematics Advisory Panel (2008) detailed a route to preparing all
students to be successful in algebra. In the lower elementary grades, the
core focus of instruction is on building student understanding of whole
numbers. As students establish an understanding of whole numbers,
rational numbers become the focus of instruction in the upper elementary
grades. Accordingly, screening measures used in the lower and upper
elementary grades should have items designed to assess students’
understanding of whole and rational number concepts—as well as
computational proficiency.

39. Kaminski et al. (2008); Shinn (1989).

3. In grades 4 through 8, use screening data in combination with state
testing results.

In the panel’s opinion, one viable option that schools and districts can
pursue is to use results from the previous year’s state testing as a first stage
of screening. Students who score below or only slightly above a
benchmark would be considered for subsequent screening and/or
diagnostic or placement testing. The use of state testing results would
allow districts and schools to combine a broader measure that covers more
content with a screening measure that is narrower but more focused.
Because of the lack of available screening measures at these grade levels,
districts, county offices, or state departments may need to develop
additional screening and diagnostic measures or rely on placement tests
provided by developers of intervention curricula.

4. Use the same screening tool across a district to enable analyzing
results across schools.

The panel recommends that all schools within a district use the same
screening measure and procedures to ensure objective comparisons across
schools and within a district. Districts can use results from screening to
inform instructional decisions at the district level. For example, one school
in a district may consistently have more students identified as at risk, and
the district could provide extra resources or professional development to
that school. The panel recommends that districts use their research and
evaluation staff to reevaluate screening measures annually or biannually.
This entails examining how screening scores predict state testing results
and considering resetting cut scores or other data points linked to
instructional decisionmaking.

Potential Roadblocks and Solutions

Roadblock 1.1. Districts and school personnel may face resistance in
allocating time and resources to the collection of screening data.

Suggested Approach. The issue of time and personnel is likely to be
the most significant obstacle that districts and schools must overcome to
collect screening data. Collecting data on all students will require
structuring the data collection process to be efficient and streamlined.
The panel notes that a common pitfall is a long, drawn-out data
collection process, with teachers collecting data in their classrooms “when
time permits.” If schools are allocating resources (such as providing an
intervention to students with the 20 lowest scores in grade 1), they must
wait until all the data have been collected across classrooms, thus delaying
the delivery of needed services to students. Furthermore, because many
screening measures are sensitive to instruction, a wide gap between when
one class is assessed and another is assessed means that many students in
the second class will have higher scores than those in the first because they
were assessed later.
One way to avoid these pitfalls is to use data collection teams to screen
students in a short period of time. The teams can consist of teachers,
special education staff including such specialists as school psychologists,
Title I staff, principals, trained instructional assistants, trained older
students, and/or local college students studying child development or
school psychology.

Roadblock 1.2. Implementing universal screening is likely to raise
questions such as, “Why are we testing students who are doing fine?”

Suggested Approach. Collecting data on all students is new for many
districts and schools (this may not be the case for elementary schools,
many of which use screening assessments in reading).40 But screening
allows schools to ensure that all students who are on track stay on track
and collective screening allows schools to evaluate the impact of their
instruction on groups of students (such as all grade 2 students). When
schools screen all students, a distribution of achievement from high to low
is created. If students considered not at risk were not screened, the
distribution of screened students would consist only of at-risk students.
This could create a situation where some students at the “top” of the
distribution are in reality at risk but not identified as such. For upper-grade
students whose scores were high on the previous spring’s state assessment,
additional screening typically is not required.

Roadblock 1.3. Screening measures may identify students who do not
need services and not identify students who do need services.

Suggested Approach. All screening measures will misidentify some
students as either needing assistance when they do not (false positive) or
not needing assistance when they do (false negative). When screening
students, educators will want to maximize both the number of students
correctly identified as at risk—a measure’s sensitivity—and the number of
students correctly identified as not at risk—a measure’s specificity. As
illustrated in table 3, screening students to determine risk can result in four
possible categories indicated by the letters A, B, C, and D. Using these
categories, sensitivity is equal to A/(A + C) and specificity is equal to D/(B
+ D).
The sensitivity and specificity of a measure depend on the cut score to
classify children at risk.41 If a cut score is high (where all students below
the cut score are considered at risk), the measure will have a high degree of

sensitivity because most students who truly need assistance will be
identified as at risk. But the measure will have low specificity since many
students who do not need assistance will also be identified as at risk.
Similarly, if a cut score is low, the sensitivity will be lower (some students
in need of assistance may not be identified as at risk), whereas the
specificity will be higher (most students who do not need assistance will
not be identified as at risk).

40. U.S. Department of Education, Office of Planning, Evaluation and Policy Development, Policy and Program Studies Service (2006).
41. Sensitivity and specificity are also influenced by the discriminant validity of the measure and its individual items. Measures with strong item discrimination are more likely to correctly identify students’ risk status.

Table 3. Sensitivity and specificity

                                        STUDENTS ACTUALLY AT RISK
                                        Yes                    No
STUDENTS IDENTIFIED      Yes            A (true positives)     B (false positives)
AS BEING AT RISK         No             C (false negatives)    D (true negatives)
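The sensitivity and specificity formulas can be made concrete with a short calculation over the four cells of table 3. The counts below are invented for illustration; this is a sketch of the arithmetic, not a procedure from the guide.

```python
def sensitivity(a, c):
    """Proportion of truly at-risk students the screen flags: A / (A + C)."""
    return a / (a + c)

def specificity(b, d):
    """Proportion of not-at-risk students the screen clears: D / (B + D)."""
    return d / (b + d)

# Hypothetical screening outcome for 200 students:
# A = 30 true positives, B = 20 false positives,
# C = 10 false negatives, D = 140 true negatives.
a, b, c, d = 30, 20, 10, 140

print(sensitivity(a, c))  # 30 / (30 + 10) = 0.75
print(specificity(b, d))  # 140 / (20 + 140) = 0.875
```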

Schools need to be aware of this tradeoff between sensitivity and
specificity, and the team selecting measures should be aware that decisions
on cut scores can be somewhat arbitrary. Schools that set a cut score too
high run the risk of spending resources on students who do not need help,
and schools that set a cut score too low run the risk of not providing
interventions to students who are at risk and need extra instruction. If a
school or district consistently finds that students receiving intervention do
not need it, the measurement team should consider lowering the cut score.
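The cut score tradeoff can also be demonstrated numerically by sweeping candidate cut scores over one set of screening results and recomputing both indices at each cut. The scores and risk labels below are invented for illustration: as the cut score rises, more students are flagged, so sensitivity climbs while specificity falls.

```python
# Hypothetical (screening_score, truly_at_risk) pairs for 12 students.
students = [(5, True), (8, True), (11, True), (12, False), (14, True),
            (15, False), (17, False), (18, True), (21, False),
            (24, False), (27, False), (30, False)]

for cut in (10, 15, 20, 25):
    # Students scoring below the cut are identified as at risk.
    tp = sum(1 for s, risk in students if s < cut and risk)
    fp = sum(1 for s, risk in students if s < cut and not risk)
    fn = sum(1 for s, risk in students if s >= cut and risk)
    tn = sum(1 for s, risk in students if s >= cut and not risk)
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    print(f"cut={cut}: sensitivity={sens:.2f}, specificity={spec:.2f}")
```

With these particular data, a cut of 10 catches only 40 percent of the truly at-risk students but flags no one unnecessarily, while a cut of 25 catches them all at the cost of flagging most students who are not at risk.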

Roadblock 1.4. Screening data may identify large numbers of students
who are at risk and schools may not immediately have the resources to
support all at-risk students. This will be a particularly severe problem in
low-performing Title I schools.

Suggested Approach. Districts and schools need to consider the
amount of resources available and the allocation of those resources when
using screening data to make instructional decisions. Districts may find
that on a nationally normed screening measure, a large percentage of their
students (such as 60 percent) will be classified as at risk. Districts will have
to determine the resources they have to provide interventions and the
number of students they can serve with their resources. This may mean not
providing interventions at certain grade levels or providing interventions
only to students with the lowest scores, at least in the first year of
implementation.
There may also be cases when schools identify large numbers of
students at risk in a particular area and decide to provide instruction to all
students. One particularly salient example is in the area of fractions.
Multiple national assessments show many students lack proficiency in
fractions,42 so a school may decide that, rather than deliver interventions at
the individual child level, they will provide a school-wide intervention to
all students. A school-wide intervention can range from a supplemental
fractions program to professional development involving fractions.

RECOMMENDATION 2. INSTRUCTIONAL MATERIALS
FOR STUDENTS RECEIVING INTERVENTIONS SHOULD
FOCUS INTENSELY ON IN-DEPTH TREATMENT OF WHOLE
NUMBERS IN KINDERGARTEN THROUGH GRADE 5
AND ON RATIONAL NUMBERS IN GRADES 4 THROUGH 8.
THESE MATERIALS SHOULD BE SELECTED
BY COMMITTEE

The panel recommends that individuals knowledgeable in instruction
and mathematics look for interventions that focus on whole numbers
extensively in kindergarten through grade 5 and on rational numbers
extensively in grades 4 through 8. In all cases, the specific content of the
interventions will be centered on building the student’s foundational
proficiencies. In making this recommendation, the panel is drawing on
consensus documents developed by experts from mathematics education
and research mathematicians that emphasized the importance of these
topics for students in general.43 We conclude that the coverage of fewer
topics in more depth, and with coherence, is as important, and probably
more important, for students who struggle with mathematics.

42. National Mathematics Advisory Panel (2008); Lee, Grigg, and Dion (2007).

Level of Evidence: Low

The panel judged the level of evidence supporting this
recommendation to be low. This recommendation is based on the
professional opinion of the panel and several recent consensus documents that
reflect input from mathematics educators and research mathematicians
involved in issues related to kindergarten through grade 12 mathematics
education.44

Brief Summary of Evidence to Support the Recommendation

The documents reviewed demonstrate a growing professional
consensus that coverage of fewer mathematics topics in more depth and
with coherence is important for all students.45 Milgram and Wu (2005)
suggested that an intervention curriculum for at-risk students should not be
oversimplified and that in-depth coverage of key topics and concepts
involving whole numbers and then rational numbers is critical for future
success in mathematics. The National Council of Teachers of Mathematics
(NCTM) Curriculum Focal Points (2006) called for the end of brief
ventures into many topics in the course of a school year and also suggested
heavy emphasis on instruction in whole numbers and rational numbers.
This position was reinforced by the 2008 report of the National
Mathematics Advisory Panel (NMAP), which provided detailed
benchmarks and again emphasized in-depth coverage of key topics
involving whole numbers and rational numbers as crucial for all students.
Although the latter two documents addressed the needs of all students, the
panel concludes that the in-depth coverage of key topics is especially
important for students who struggle with mathematics.

43. National Council of Teachers of Mathematics (2006); National Mathematics Advisory Panel (2008).
44. National Council of Teachers of Mathematics (2006); National Mathematics Advisory Panel (2008); Milgram and Wu (2005).
45. National Mathematics Advisory Panel (2008); Schmidt and Houang (2007); Milgram and Wu (2005); National Council of Teachers of Mathematics (2006).

How to Carry Out This Recommendation

1. For students in kindergarten through grade 5, tier 2 and tier 3
interventions should focus almost exclusively on properties of whole
numbers46 and operations. Some older students struggling with whole
numbers and operations would also benefit from in-depth coverage of
these topics.

In the panel’s opinion, districts should review the interventions they
are considering to ensure that they cover whole numbers in depth. The goal
is proficiency and mastery, so in-depth coverage with extensive review is
essential and has been articulated in the NCTM Curriculum Focal Points
(2006) and the benchmarks determined by the National Mathematics
Advisory Panel (2008). Readers are recommended to review these
documents.47
Specific choices for the content of interventions will depend on the
grade level and proficiency of the student, but the focus for struggling
students should be on whole numbers. For example, in kindergarten
through grade 2, intervention materials would typically include significant
attention to counting (e.g., counting up), number composition, and number
decomposition (to understand place-value multidigit operations).
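For instance, the composition and decomposition work described above can be illustrated as follows (an example of ours, not drawn from any particular intervention materials):

```latex
% Composing and decomposing numbers to expose place value:
8 = 5 + 3, \qquad 14 = 10 + 4, \qquad 37 = 3 \times 10 + 7
```

Decompositions like the last one make the base-10 structure of multidigit numbers explicit before students encounter multidigit operations.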

46. Properties of numbers, including the associative, commutative, and distributive properties.
47. More information on the National Mathematics Advisory Panel (2008) report is available at www.ed.gov/about/bdscomm/list/mathpanel/index.html. More information on the National Council of Teachers of Mathematics Curriculum Focal Points is available at www.nctm.org/focalpoints. Documents elaborating the National Council of Teachers of Mathematics Curriculum Focal Points are also available (see Beckmann et al., 2009). For a discussion of why this content is most relevant, see Milgram and Wu (2005).
Assisting Students Struggling with Mathematics 29

Interventions should cover the meaning of addition and subtraction and the
reasoning that underlies algorithms for addition and subtraction of whole
numbers, as well as solving problems involving whole numbers. This focus
should include understanding of the base-10 system (place value).
Interventions should also include materials to build fluent retrieval of
basic arithmetic facts (see recommendation 6). Materials should
extensively use—and ask students to use—visual representations of whole
numbers, including both concrete and visual base-10 representations, as
well as number paths and number lines (more information on visual
representations is in recommendation 5).

2. For tier 2 and tier 3 students in grades 4 through 8, interventions
should focus on in-depth coverage of rational numbers as well as
advanced topics in whole number arithmetic (such as long division).

The panel believes that districts should review the interventions they
are considering to ensure that they cover concepts involving rational
numbers in depth. The focus on rational numbers should include
understanding the meaning of fractions, decimals, ratios, and percents,
using visual representations (including placing fractions and decimals on
number lines,48 see recommendation 5), and solving problems with
fractions, decimals, ratios, and percents.
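As a simple illustration of the number line work described here (our example, not taken from the studies cited), a fraction and its decimal form name the same point, reached by laying off equal segment lengths:

```latex
% Three segments of length 1/4, laid end to end from 0:
\tfrac{3}{4} = \tfrac{1}{4} + \tfrac{1}{4} + \tfrac{1}{4} = 0.75
```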
In the view of the panel, students in grades 4 through 8 will also
require additional work to build fluent retrieval of basic arithmetic facts
(see recommendation 6), and some will require additional work involving
basic whole number topics, especially for students in tier 3. In the opinion
of the panel, accurate and fluent arithmetic with whole numbers is
necessary before understanding fractions. The panel acknowledges that there
will be periods when both whole numbers and rational numbers should
be addressed in interventions. In these cases, the balance of concepts
should be determined by the student’s need for support.

48. When using number lines to teach rational numbers to students who have difficulties, it is important to emphasize that the focus is on the length of the segments between the whole number marks (rather than on counting the marks).

3. Districts should appoint committees, including experts in
mathematics instruction and mathematicians with knowledge of elementary
and middle school mathematics curriculum, to ensure that specific criteria
(described below) are covered in depth in the curricula they adopt.

In the panel’s view, intervention materials should be reviewed by
individuals with knowledge of mathematics instruction and by
mathematicians knowledgeable in elementary and middle school
mathematics. They can often be experts within the district, such as
mathematics coaches, mathematics teachers, or department heads. Some
districts may also be able to draw on the expertise of local university
mathematicians.
Reviewers should assess how well intervention materials meet four
criteria. First, the materials integrate computation with solving problems
and pictorial representations rather than teaching computation apart from
problem-solving. Second, the materials stress the reasoning underlying
calculation methods and focus student attention on making sense of the
mathematics. Third, the materials ensure that students build algorithmic
proficiency. Fourth, the materials include frequent review, both to
consolidate knowledge and to reinforce the links among mathematical
principles.
Also in the panel’s view, the intervention program should include an
assessment to assist in placing students appropriately in the intervention
curriculum.

Potential Roadblocks and Solutions

Roadblock 2.1. Some interventionists may worry that the intervention
program is not aligned with the core classroom instruction.

Suggested Approach. The panel believes that alignment with the core
curriculum is not as critical as ensuring that instruction builds students’
foundational proficiencies. Tier 2 and tier 3 instruction focuses on
foundational and often prerequisite skills that are determined by the
students’ rate of progress. So, in the opinion of the panel, acquiring these

skills will be necessary for future achievement. Additionally, because tier 2
and tier 3 are supplemental, students will still be receiving core classroom
instruction aligned to a school or district curriculum (tier 1).

Roadblock 2.2. Intervention materials may cover topics that are not
essential to building basic competencies, such as data analysis,
measurement, and time.

Suggested Approach. In the panel’s opinion, it is not necessary to
cover every topic in the intervention materials. Students will gain exposure
to many supplemental topics (such as data analysis, measurement, and
time) in general classroom instruction (tier 1). Depending on the student’s
age and proficiency, it is most important to focus on whole and rational
numbers in the interventions.

RECOMMENDATION 3. INSTRUCTION DURING
THE INTERVENTION SHOULD BE EXPLICIT
AND SYSTEMATIC. THIS INCLUDES PROVIDING MODELS
OF PROFICIENT PROBLEM SOLVING, VERBALIZATION
OF THOUGHT PROCESSES, GUIDED PRACTICE,
CORRECTIVE FEEDBACK, AND FREQUENT
CUMULATIVE REVIEW

The National Mathematics Advisory Panel defines explicit instruction
as follows (2008, p. 23):

 “Teachers provide clear models for solving a problem type using
an array of examples.”
 “Students receive extensive practice in use of newly learned
strategies and skills.”
 “Students are provided with opportunities to think aloud (i.e., talk
through the decisions they make and the steps they take).”

 “Students are provided with extensive feedback.”

The NMAP notes that this does not mean that all mathematics
instruction should be explicit. But it does recommend that struggling
students receive some explicit instruction regularly and that some of the
explicit instruction ensure that students possess the foundational skills and
conceptual knowledge necessary for understanding their grade-level
mathematics.49 Our panel supports this recommendation and believes that
districts and schools should select materials for interventions that reflect
this orientation. In addition, professional development for interventionists
should contain guidance on these components of explicit instruction.

Level of Evidence: Strong

Our panel judged the level of evidence supporting this
recommendation to be strong. This recommendation is based on six
randomized controlled trials that met WWC standards or met standards
with reservations and that examined the effectiveness of explicit and
systematic instruction in mathematics interventions.50 These studies have
shown that explicit and systematic instruction can significantly improve
proficiency in word problem solving51 and operations52 across grade levels
and diverse student populations.

Brief Summary of Evidence to Support the Recommendation

The results of six randomized controlled trials of mathematics
interventions show extensive support for various combinations of the
following components of explicit and systematic instruction: teacher

49. National Mathematics Advisory Panel (2008).
50. Darch, Carnine, and Gersten (1984); Fuchs et al. (2003a); Jitendra et al. (1998); Schunk and Cox (1986); Tournaki (2003); Wilson and Sindelar (1991).
51. Darch, Carnine, and Gersten (1984); Jitendra et al. (1998); Fuchs et al. (2003a); Wilson and Sindelar (1991).
52. Schunk and Cox (1986); Tournaki (2003).

demonstration,53 student verbalization,54 guided practice,55 and corrective
feedback.56 All six studies examined interventions that included teacher
demonstrations early in the lessons.57 For example, three studies included
instruction that began with the teacher verbalizing aloud the steps to solve
sample mathematics problems.58 The effects of this component of explicit
instruction cannot be evaluated from these studies because the
demonstration procedure was used in instruction for students in both
treatment and comparison groups.
Scaffolded practice, a transfer of control of problem solving from the
teacher to the student, was a component in four of the six studies.59
Although it is not possible to parse the effects of scaffolded instruction
from the other components of instruction, the intervention groups in each
study demonstrated significant positive gains on word problem
proficiencies or accuracy measures.
Three of the six studies included opportunities for students to verbalize
the steps to solve a problem.60 Again, although effects of the interventions
were statistically significant and positive on measures of word problems,
operations, or accuracy, the effects cannot be attributed to a single
component of these multicomponent interventions.
Similarly, four of the six studies included immediate corrective
feedback,61 and the effects of these interventions were positive and
significant on word problems and measures of operations skills, but the

53. Darch, Carnine, and Gersten (1984); Jitendra et al. (1998); Fuchs et al. (2003a); Schunk and Cox (1986); Tournaki (2003); Wilson and Sindelar (1991).
54. Jitendra et al. (1998); Fuchs et al. (2003a); Schunk and Cox (1986); Tournaki (2003).
55. Darch, Carnine, and Gersten (1984); Jitendra et al. (1998); Fuchs et al. (2003a); Tournaki (2003).
56. Darch, Carnine, and Gersten (1984); Jitendra et al. (1998); Schunk and Cox (1986); Tournaki (2003).
57. Darch, Carnine, and Gersten (1984); Fuchs et al. (2003a); Jitendra et al. (1998); Schunk and Cox (1986); Tournaki (2003); Wilson and Sindelar (1991).
58. Schunk and Cox (1986); Jitendra et al. (1998); Darch, Carnine, and Gersten (1984).
59. Darch, Carnine, and Gersten (1984); Fuchs et al. (2003a); Jitendra et al. (1998); Tournaki (2003).
60. Schunk and Cox (1986); Jitendra et al. (1998); Tournaki (2003).
61. Darch, Carnine, and Gersten (1984); Jitendra et al. (1998); Tournaki (2003); Schunk and Cox (1986).

effects of the corrective feedback component cannot be isolated from the
effects of other components in three cases.62
With only one study in the pool of six including cumulative review as
part of the intervention,63 the support for this component of explicit
instruction is not as strong as it is for the other components. But this study
did have statistically significant positive effects in favor of the instructional
group that received explicit instruction in strategies for solving word
problems, including cumulative review.

How to Carry Out This Recommendation

1. Ensure that instructional materials are systematic and explicit. In
particular, they should include numerous clear models of easy and difficult
problems, with accompanying teacher think-alouds.

To be considered systematic, mathematics instruction should gradually
build proficiency by introducing concepts in a logical order and by
providing students with numerous applications of each concept. For
example, a systematic curriculum builds student understanding of place
value in an array of contexts before teaching procedures for adding and
subtracting two-digit numbers with regrouping.
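A regrouping computation of this kind can be unpacked through place value (a worked example of ours, illustrating the idea rather than any particular curriculum):

```latex
% 47 + 38: the 15 ones regroup into one ten and five ones
47 + 38 = (40 + 30) + (7 + 8) = 70 + 15 = 85
```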
Explicit instruction typically begins with a clear, unambiguous
exposition of concepts and step-by-step models of how to perform
operations and reasons for the procedures.64 Interventionists should think
aloud (make their thinking processes public) as they model each step of the
process.65,66 They should not only tell students about the steps and
procedures they are performing, but also allude to the reasoning behind
them (link to the underlying mathematics).

62. Darch, Carnine, and Gersten (1984); Jitendra et al. (1998); Tournaki (2003).
63. Fuchs et al. (2003a).
64. For example, Jitendra et al. (1998); Darch, Carnine, and Gersten (1984); Woodward (2006).
65. See an example in the summary of Tournaki (2003) in appendix B.
66. Darch, Carnine, and Gersten (1984); Jitendra et al. (1998); Fuchs et al. (2003a); Schunk and Cox (1986); Tournaki (2003); Wilson and Sindelar (1991).

The panel suggests that districts select instructional materials that
provide interventionists with sample think-alouds or possible scenarios for
explaining concepts and working through operations. A criterion for
selecting intervention curriculum materials should be whether they
provide materials that help interventionists model or think through difficult
and easy examples.
In the panel’s view, a major flaw in many instructional materials is that
teachers are asked to provide only one or two models of how to approach a
problem and that most of these models are for easy-to-solve problems.
Ideally, the materials will also assist teachers in explaining the reasoning
behind the procedures and problem-solving methods.

2. Provide students with opportunities to solve problems in a group
and communicate problem-solving strategies.

For students to become proficient in performing mathematical
processes, explicit instruction should include scaffolded practice, where the
teacher plays an active role and gradually transfers the work to the
students.67 This phase of explicit instruction begins with the teacher and
the students solving problems together. As this phase of instruction
continues, students should gradually complete more steps of the problem
with decreasing guidance from the teacher. Students should proceed to
independent practice when they can solve the problem with little or no
support from the teacher.
During guided practice, the teacher should ask students to
communicate the strategies they are using to complete each step of the
process and provide reasons for their decisions.68 In addition, the panel
recommends that teachers ask students to explain their solutions.69 Note
that not only interventionists—but fellow students—can and should
communicate how they think through solving problems to the
interventionist and the rest of the group. This can facilitate the development
of a shared language for talking about mathematical problem solving.70

67. Tournaki (2003); Jitendra et al. (1998); Darch, Carnine, and Gersten (1984).
68. For example, Schunk and Cox (1986).
69. Schunk and Cox (1986); Tournaki (2003).
Teachers should give specific feedback that clarifies what students did
correctly and what they need to improve.71 They should provide
opportunities for students to correct their errors. For example, if a student
has difficulty solving a word problem or solving an equation, the teacher
should ask simple questions that guide the student to solving the problem
correctly. Corrective feedback can also include re-teaching or clarifying
instructions when students are not able to respond to questions or their
responses are incorrect.

3. Ensure that instructional materials include cumulative review in
each session.

Cumulative reviews provide students with an opportunity to practice
topics previously covered in depth. For example, when students are
working with fractions, a cumulative review activity could provide them
with an opportunity to solve some problems involving multiplication and
division of whole numbers. In the panel’s opinion, this review can ensure
that the knowledge is maintained over time and helps students see
connections between various mathematical ideas.

Potential Roadblocks and Solutions

Roadblock 3.1. Interventionists may be unfamiliar with how to
implement an intervention that uses explicit instruction, and some may
underestimate the amount of practice necessary for students in tiers 2 and
3 to master the material being taught.

Suggested Approach. Districts and schools should set up professional
development sessions for interventionists to observe and discuss sample
lessons. The panel believes that it is important for professional

70. For example, Jitendra et al. (1998); Darch, Carnine, and Gersten (1984).
71. Tournaki (2003); Jitendra et al. (1998); Darch, Carnine, and Gersten (1984).

development participants to observe the intervention first hand. Watching a
DVD or video of the intervention being used with students can give the
participants a model of how the program should be implemented.
Interventionists should also have hands-on experience, teaching the
lessons to each other and practicing with students. Role-playing can give
interventionists practice with modeling and think-alouds, since it is
important for them to stop and reflect before formulating an explanation
for their thinking processes. The trainers can observe these activities,
provide feedback on what participants did well, and offer explicit
suggestions for improving instruction.
As a part of professional development, be sure to convey the benefits
that extended practice (not only worksheets) and cumulative review can
have for student performance. If professional development is not an option,
teachers can also work with mathematics coaches to learn how to
implement the intervention.

Roadblock 3.2. Interventionists may not be expert in the underlying
mathematics content.

Suggested Approach. For interventionists to explain a mathematical
process accurately and develop a logical think-aloud, it is important for
them to understand the underlying mathematics concept and the
mathematical reasoning for the process. Professional development should
provide participants with in-depth knowledge of the mathematics content
in the intervention, including the mathematical reasoning underlying
procedures, formulas, and problem-solving methods.72 The panel believes
that when interventionists convey their knowledge of the content, student
understanding will increase, misconceptions will decrease, and the chances
that students solve problems by rote memory will be reduced.

Roadblock 3.3. The intervention materials may not incorporate enough
models, think-alouds, practice, and cumulative review.

72. National Mathematics Advisory Panel (2008); Wu (2005), http://math.berkeley.edu/~wu/Northridge2004a2.pdf.

Suggested Approach. Intervention programs might not incorporate
enough models, think-alouds, practice, or cumulative review to improve
students’ mathematics performance.73
Consider using a mathematics coach or specialist to develop a template
listing the essential parts of an effective lesson, including the number of
models, accompanying think-alouds, and practice and cumulative review
items students need to understand, learn, and master the content.
A team of teachers, guided by the mathematics coach/specialist, can
determine the components that should be added to the program.

RECOMMENDATION 4. INTERVENTIONS SHOULD INCLUDE
INSTRUCTION ON SOLVING WORD PROBLEMS THAT IS
BASED ON COMMON UNDERLYING STRUCTURES

Students who have difficulties in mathematics typically experience
severe difficulties in solving word problems related to the mathematics
concepts and operations they are learning.74 This is a major impediment
for future success in any math-related discipline.75
Based on the importance of building proficiency and the convergent
findings from a body of high-quality research, the panel recommends that
interventions include systematic explicit instruction on solving word
problems, using the problems’ underlying structure. Simple word problems
give meaning to mathematical operations such as subtraction or
multiplication. When students are taught the underlying structure of a
word problem, they not only have greater success in problem solving but
can also gain insight into the deeper mathematical ideas in word
problems.76 The panel also recommends systematic instruction on the
structural connections between known, familiar word problems and
unfamiliar, new problems. By making explicit the underlying structural

73. Jitendra et al. (1996); Carnine et al. (1997).
74. Geary (2003); Hanich et al. (2001).
75. National Mathematics Advisory Panel (2008); McCloskey (2007).
76. Peterson, Fennema, and Carpenter (1989).

connections between familiar and unfamiliar problems, students will know
when to apply the solution methods they have learned.77

Level of Evidence: Strong

The panel judged the level of evidence supporting this
recommendation to be strong. This recommendation is based on nine
randomized controlled trials that met WWC standards or met standards
with reservations and that examined the effectiveness of word problem-
solving strategies.78 Interventions that teach students the structure of
problem types79—and how to discriminate superficial from substantive
information to know when to apply the solution methods they have
learned80—positively and marginally or significantly affect proficiency in
solving word problems.

Brief Summary of Evidence to Support the Recommendation

Research demonstrates that instruction on solving word problems
based on underlying problem structure leads to statistically significant
positive effects on measures of word problem solving.81 Three randomized
controlled trials isolated this practice. In these studies, interventionists
taught students to identify problems of a given type by focusing on the
problem structure and then to design and execute appropriate solution
strategies for each problem. These techniques typically led to significant

77. Fuchs, Fuchs, Finelli et al. (2004).
78. Jitendra et al. (1998); Xin, Jitendra, and Deatline-Buchman (2005); Darch, Carnine, and Gersten (1984); Fuchs et al. (2003a); Fuchs et al. (2003b); Fuchs, Fuchs, Prentice et al. (2004); Fuchs, Fuchs, Finelli et al. (2004); Fuchs, Fuchs, Craddock et al. (2008); Fuchs, Seethaler et al. (2008).
79. Jitendra et al. (1998); Xin, Jitendra, and Deatline-Buchman (2005); Darch, Carnine, and Gersten (1984).
80. Fuchs et al. (2003a); Fuchs et al. (2003b); Fuchs, Fuchs, Prentice et al. (2004); Fuchs, Fuchs, Finelli et al. (2004); Fuchs, Fuchs, Craddock et al. (2008); Fuchs, Seethaler et al. (2008).
81. Jitendra et al. (1998); Xin, Jitendra, and Deatline-Buchman (2005); Darch, Carnine, and Gersten (1984).

and positive effects on word-problem outcomes for students experiencing
difficulties in mathematics across grade levels.82
Six other randomized controlled trials took the instructional
intervention on problem structure a step further. They demonstrated that
teaching students to distinguish superficial from substantive information in
problems also leads to marginally or statistically significant positive effects
on measures of word problem solving.83 After students were explicitly
taught the pertinent structural features and problem-solution methods for
different problem types, they were taught superficial problem features that
can change a problem without altering its underlying structure. They were
taught to distinguish substantive information from superficial information
in order to solve problems that appear new but really fit into one of the
categories of problems they already know how to solve. They were also
taught that the same underlying problem structures can be applied to
problems that are presented in graphic form (for example, with tables or
maps). These are precisely the issues that often confuse and derail students
with difficulties in mathematics. These six studies consistently
demonstrated marginally or statistically significant positive effects on an
array of word problem-solving proficiencies for students experiencing
difficulties in mathematics.84

How to Carry Out This Recommendation

1. Teach students about the structure of various problem types, how to
categorize problems based on structure, and how to determine appropriate
solutions for each problem type.

82. Jitendra et al. (1998); Xin, Jitendra, and Deatline-Buchman (2005); Darch, Carnine, and Gersten (1984).
83. Fuchs et al. (2003a); Fuchs et al. (2003b); Fuchs, Fuchs, Prentice et al. (2004); Fuchs, Fuchs, Finelli et al. (2004); Fuchs, Fuchs, Craddock et al. (2008); Fuchs, Seethaler et al. (2008).
84. Fuchs et al. (2003a); Fuchs et al. (2003b); Fuchs, Fuchs, Prentice et al. (2004); Fuchs, Fuchs, Finelli et al. (2004); Fuchs, Fuchs, Craddock et al. (2008); Fuchs, Seethaler et al. (2008).

Students should be explicitly taught about the salient underlying
structural features of each problem type.85 Problem types are groups of
problems with similar mathematical structures. For example, change
problems describe situations in which a quantity (such as children or
pencils) is either increased or decreased (example 1). Change problems
always include a time element. For these problems, students determine
whether to add or subtract by establishing whether the change in the
quantity is an increase or a decrease.

Example 1. Change Problems

The two problems here are addition and subtraction problems that
students may be tempted to solve using an incorrect operation. In each
case, students can draw a simple diagram like the one shown below,
record the known quantities (two of three of A, B, and C) and then use
the diagram to decide whether addition or subtraction is the correct
operation to use to determine the unknown quantity.

Problem 1. Brad has a bottlecap collection. After Madhavi gave Brad
28 more bottlecaps, Brad had 111 bottlecaps. How many bottlecaps did
Brad have before Madhavi gave him more?
Problem 2. Brad has a bottlecap collection. After Brad gave 28 of his
bottlecaps to Madhavi, he had 83 bottlecaps left. How many bottlecaps did
Brad have before he gave Madhavi some?
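A short worked solution (ours) shows how the diagram determines the operation; here A stands for the unknown starting amount, one of the quantities A, B, and C mentioned above:

```latex
% Problem 1: the starting amount A is unknown.
A + 28 = 111 \;\Rightarrow\; A = 111 - 28 = 83 \text{ bottlecaps}
% Problem 2: the starting amount A is again unknown.
A - 28 = 83 \;\Rightarrow\; A = 83 + 28 = 111 \text{ bottlecaps}
```

Note that the word "gave" appears in both problems, yet Problem 1 is solved by subtracting and Problem 2 by adding, which is why the structure, not the wording, must drive the choice of operation.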

In contrast, compare problems have no time element (example 2).
They focus on comparisons between two different types of items in two
different sets (pears and apples, boys and girls, hot and cold items).

85. Xin, Jitendra, and Deatline-Buchman (2005).

Students add or subtract by determining whether they need to calculate the
unknown difference (subtract), unknown compared amount (add), or
unknown referent amount (subtract).
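For instance (an illustration of ours, not reproduced from example 2), if a class has 14 girls and 9 boys, each of the three unknowns resolves as follows:

```latex
14 - 9 = 5  \quad\text{(unknown difference: subtract)}
9 + 5 = 14  \quad\text{(unknown compared amount: add)}
14 - 5 = 9  \quad\text{(unknown referent amount: subtract)}
```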

Example 2. Compare Problems

Although these problem types seem simple and intuitive to adults and
mathematically precocious students, they are not necessarily obvious for
students requiring mathematics interventions. To build understanding of
each problem type, we recommend initially teaching solution rules (or
guiding questions that lead to a solution equation) for each problem type
through fully and partially worked examples, followed by student practice
in pairs.86
Visual representations such as those in example 2 can be effective for
teaching students how to categorize problems based on their structure and
determine a solution method appropriate for the underlying structure (see
recommendation 5 for more information on visual representations).87
Teachers can present stories with unknown information and work with
students in using diagrams to identify the problem type and transform the
information in the diagram into a mathematics equation to solve for the
unknown quantity.

86. Fuchs, Fuchs, Finelli et al. (2004).
87. Xin, Jitendra, and Deatline-Buchman (2005).

2. Teach students to recognize the common underlying structure
between familiar and unfamiliar problems and to transfer known solution
methods from familiar to unfamiliar problems.

A known familiar problem often appears as a new and unfamiliar
problem to a student because of such superficial changes as format changes
(whether it is written in traditional paragraph form or as an advertisement
for a brochure), key vocabulary changes (half, one-half, 1/2), or the
inclusion of irrelevant information (additional story elements such as the
number of buttons on a child’s shirt or the size of a storage container for a
compare problem).88 These superficial changes are irrelevant to
understanding the mathematical demands of a problem. But while focusing
on these irrelevant superficial changes, students can find it difficult to
discern the critical common underlying structure between the new and the
old problems and to apply the solution that is part of their repertoire to the
new unfamiliar problem.
To facilitate the transfer of the known solution from the familiar to the
unfamiliar problem, students should first be shown explicitly that not all
pieces of information in the problem are relevant to discerning the
underlying problem structure.89 Teachers should explain these irrelevant
superficial features explicitly and systematically, as described in
recommendation 3.90 This instruction may be facilitated by the use of a
poster displayed in the classroom that lists the ways familiar problems can
become unfamiliar because of new wording or situations (such as
information displayed in chart versus paragraph form) while the features
relevant to the problem type stay the same. Students must also be provided
with opportunities to explain why a piece of information is relevant or
irrelevant.91
We suggest that students practice sets of problems with varied
superficial features and cover stories. Students who know how to

88. Fuchs et al. (2003a); Fuchs et al. (2003b); Fuchs, Fuchs, Prentice et al. (2004); Fuchs, Fuchs, Finelli et al. (2004); Fuchs, Fuchs, Craddock et al. (2008); Fuchs, Seethaler et al. (2008).
89. Fuchs et al. (2003a); Fuchs et al. (2003b); Fuchs, Fuchs, Prentice et al. (2004); Fuchs, Fuchs, Finelli et al. (2004); Fuchs, Fuchs, Craddock et al. (2008); Fuchs, Seethaler et al. (2008).
90. Fuchs, Fuchs, Finelli et al. (2004).
91. Fuchs, Fuchs, Craddock et al. (2008); Fuchs, Fuchs, Finelli et al. (2004); Fuchs, Fuchs, Prentice et al. (2004).

recognize and solve a “change” problem type with whole numbers
should know that they can apply the same strategy to a structurally
similar word problem that looks different because of changes in wording
and the presence of additional story elements (example 3).92

Example 3. Solving Different Problems with the Same Strategy

 Mike wants to buy 1 pencil for each of his friends. Each packet of
pencils contains 12 pencils. How many packets does Mike have to
buy to give 1 pencil to each of his 13 friends?
 Mike wants to buy 1 pencil for each of his friends. Sally wants to buy
10 pencils. Each box of pencils contains 12 pencils. How many boxes
does Mike have to buy to give 1 pencil to each of his 13 friends?
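In both versions the same computation applies; Sally's 10 pencils in the second version are superficial information that does not change it (worked solution ours):

```latex
% 13 friends, 12 pencils per packet: one packet is not enough,
% so Mike needs the next whole packet:
\left\lceil \tfrac{13}{12} \right\rceil = 2 \text{ packets}
```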

Potential Roadblocks and Solutions

Roadblock 4.1. In the opinion of the panel, the curricular material may
not classify problems into problem types.

Suggested Approach. The interventionist may need the help of a
mathematics coach, a mathematics specialist, or a district or state
curriculum guide in determining the problem types and an instructional
sequence for teaching them to students. The key issue is that students are
taught to understand a set of problem structures related to the mathematics
they are learning in their intervention.

Roadblock 4.2. As problems become more complex, so do the problem
types and the task of discriminating among them.

Suggested Approach. As problems get more intricate (such as
multistep problems), it becomes more difficult for students to determine
the problem type, a critical step that leads to solving the problem correctly.

92. Fuchs, Fuchs, Finelli et al. (2004).
Assisting Students Struggling with Mathematics 45

It is important to explicitly and systematically teach students how to
differentiate one problem type from another.
Interventionists will need high-quality professional development
to ensure that they convey the information clearly and accurately. The
professional development program should include opportunities for
participants to determine problem types, justify their responses, and
practice explaining and modeling problem types to peers and children.
Trainers should provide constructive feedback during the practice
sessions by telling participants both what they did well and what
aspects of their instruction need improvement.

RECOMMENDATION 5. INTERVENTION MATERIALS
SHOULD INCLUDE OPPORTUNITIES FOR STUDENTS
TO WORK WITH VISUAL REPRESENTATIONS
OF MATHEMATICAL IDEAS AND INTERVENTIONISTS
SHOULD BE PROFICIENT IN THE USE OF VISUAL
REPRESENTATIONS OF MATHEMATICAL IDEAS

A major problem for students who struggle with mathematics is weak
understanding of the relationships between the abstract symbols of
mathematics and the various visual representations.93 Student
understanding of these relationships can be strengthened through the use
of visual representations of mathematical concepts such as solving
equations, fraction equivalence, and the commutative property of addition
and multiplication (see the glossary). Such representations may include
number lines, graphs, simple drawings of concrete objects such as blocks
or cups, or simplified drawings such as ovals to represent birds.
In the view of the panel, the ability to express mathematical ideas
using visual representations and to convert visual representations into
symbols is critical for success in mathematics. A major goal of

93. Hecht, Vagi, and Torgesen (2007).

interventions should be to systematically teach students how to develop
visual representations and how to transition these representations to
standard symbolic representations used in problem solving. Occasional
and unsystematic exposure (the norm in many classrooms) is insufficient
and does not facilitate understanding of the relationship between the
abstract symbols of mathematics and various visual representations.

Level of Evidence: Moderate

The panel judged the level of evidence supporting this
recommendation to be moderate. This recommendation is based on 13
randomized controlled trials that met WWC standards or met standards
with reservations.94 These studies provide support for the systematic use of
visual representations or manipulatives to improve achievement in general
mathematics,95 prealgebra concepts,96 word problems,97 and operations.98
But these representations were part of a complex multicomponent
intervention in each of the studies. So, it is difficult to judge the impact of
the representation component alone, and the panel believes that a moderate
designation is appropriate for the level of evidence for this
recommendation.

Brief Summary of Evidence to Support the Recommendation

Research shows that the systematic use of visual representations and
manipulatives may lead to statistically significant or substantively

94. Artus and Dyrek (1989); Butler et al. (2003); Darch, Carnine, and Gersten (1984); Fuchs et al. (2005); Fuchs, Seethaler et al. (2008); Fuchs, Powell et al. (2008); Fuchs, Fuchs, Craddock et al. (2008); Jitendra et al. (1998); Walker and Poteet (1989); Wilson and Sindelar (1991); Witzel (2005); Witzel, Mercer, and Miller (2003); Woodward (2006).
95. Artus and Dyrek (1989); Fuchs et al. (2005).
96. Witzel, Mercer, and Miller (2003).
97. Darch, Carnine, and Gersten (1984); Fuchs et al. (2005); Fuchs, Seethaler et al. (2008); Fuchs, Fuchs, Craddock et al. (2008); Jitendra et al. (1998); Wilson and Sindelar (1991).
98. Woodward (2006).

important positive gains in math achievement.99 Four studies used visual
representations to help pave the way for students to understand the abstract
version of the representation.100 For example, one of the studies taught
students to use visual representations such as number lines to understand
mathematics facts.101 The four studies demonstrated gains in mathematics
facts and operations102 and word problem proficiencies,103 and may provide
evidence that using visual representations in interventions is an effective
technique.
Three of the studies used manipulatives in the early stages of
instruction to reinforce understanding of basic concepts and operations.104
One used concrete models such as groups of boxes to teach rules for
multiplication problems.105 The three studies largely showed significant
and positive effects and provide evidence that using manipulatives may be
helpful in the initial stages of an intervention to improve proficiency in
word problem solving.106
In six of the studies, both concrete and visual representations were
used, and overall these studies show that using some combination of
manipulatives and visual representations may promote mathematical
understanding.107 In two of the six, instruction did not include fading of the
manipulatives and visual representations to promote understanding of math
at a more abstract level.108 One of these interventions positively affected
general math achievement,109 but the other had no effect on outcome
measures tested.110

99. Following WWC guidelines, an effect size greater than 0.25 is considered substantively important.
100. Jitendra et al. (1998); Walker and Poteet (1989); Wilson and Sindelar (1991); Woodward (2006).
101. Woodward (2006).
102. Woodward (2006).
103. Jitendra et al. (1998); Walker and Poteet (1989); Wilson and Sindelar (1991).
104. Darch et al. (1984); Fuchs, Seethaler et al. (2008); Fuchs, Fuchs, Craddock et al. (2008).
105. Darch et al. (1984).
106. Darch et al. (1984); Fuchs, Seethaler et al. (2008); Fuchs, Fuchs, Craddock et al. (2008).
107. Artus and Dyrek (1989); Butler et al. (2003); Fuchs, Powell et al. (2008); Fuchs et al. (2005); Witzel (2005); Witzel, Mercer, and Miller (2003).
108. Artus and Dyrek (1989); Fuchs, Powell et al. (2008).
109. Artus and Dyrek (1989).
110. Fuchs, Powell et al. (2008).

In the other four studies, manipulatives and visual representations were
presented to the students sequentially to promote understanding at a more
abstract level.111 One intervention that used this
method for teaching fractions did not show much promise,112 but the other
three did result in positive gains.113 One of them taught 1st graders basic
math concepts and operations,114 and the other two taught prealgebra
concepts to low-achieving students.115

How to Carry Out This Recommendation

1. Use visual representations such as number lines, arrays, and strip
diagrams.

In the panel’s view, visual representations such as number lines,
number paths, strip diagrams, drawings, and other forms of pictorial
representations help scaffold learning and pave the way for understanding
the abstract version of the representation. We recommend that
interventionists use such abstract visual representations extensively and
consistently. We also recommend that interventionists explicitly link visual
representations with the standard symbolic representations used in
mathematics.
In early grades, number lines, number paths, and other pictorial
representations are often used to teach students foundational concepts and
procedural operations of addition and subtraction. Although number lines
or number paths may not be a suitable initial representation in some
situations (as when working with multiplication and division), they can
help conceptually and procedurally with other types of problems.
Conceptually, number lines and number paths show magnitude and allow
for explicit instruction on magnitude comparisons. Procedurally, they help
teach principles of addition and subtraction operations such as “counting
down,” “counting up,” and “counting down from.”
111. Fuchs et al. (2005); Butler et al. (2003); Witzel et al. (2003); Witzel (2005).
112. Butler et al. (2003).
113. Fuchs et al. (2005); Witzel, Mercer, and Miller (2003); Witzel (2005).
114. Fuchs et al. (2005).
115. Witzel, Mercer, and Miller (2003); Witzel (2005).

The figure in example 4 shows how a number line may be used to
assist with counting strategies. The top arrows show how a child learns to
count on. He adds 2 + 5 = __. To start, he places his finger on 2. Then, he
jumps five times to the right and lands on 7. The arrows under the number
line show how a child subtracts using a counting down strategy.
For 10 – 3 = __, she starts with her finger on the 10. Then, she jumps
three times to the left on the number line, where she finishes on 7.
The goal of using a number line should be for students to create a
mental number line and establish rules for movement along the line
according to the more or less marking arrows placed along the line. Such
rules and procedures should be directly tied to the explicit instruction that
guided the students through the use of the visual representation.116
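The counting strategies just described can be modeled as movements along a number line. The sketch below is our illustration of that logic (the function names are ours, not the guide's):

```python
def count_on(start: int, jumps: int) -> int:
    """Counting on: place a finger on `start`, then jump right `jumps` times."""
    position = start
    for _ in range(jumps):
        position += 1  # one jump to the right per count
    return position

def count_down(start: int, jumps: int) -> int:
    """Counting down: place a finger on `start`, then jump left `jumps` times."""
    position = start
    for _ in range(jumps):
        position -= 1  # one jump to the left per count
    return position

print(count_on(2, 5))     # 2 + 5 = 7
print(count_down(10, 3))  # 10 - 3 = 7
```

Each loop iteration corresponds to one jump on the physical number line, which is the rule for movement students are meant to internalize.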
Pictorial representations of objects such as birds and cups are also
often used to teach basic addition and subtraction, and simple
drawings can help students understand place value and multidigit
addition and subtraction. Example 5 (p. 34) shows how a student can draw
a picture to solve a multidigit addition problem. In the figure, circles
represent one unit and lines represent units of 10.
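The place-value drawing in example 5 can be expressed as simple arithmetic. In this sketch, lines stand for tens and circles for ones; the addends 36 and 27 are our own, since the original figure is not reproduced here:

```python
def drawing(n: int) -> str:
    """Render a two-digit number as in example 5: '|' per ten, 'o' per one."""
    return "|" * (n // 10) + "o" * (n % 10)

def add_with_drawings(a: int, b: int) -> int:
    """Add by combining tens (lines) and ones (circles),
    regrouping every ten circles into one line."""
    tens = a // 10 + b // 10
    ones = a % 10 + b % 10
    tens, ones = tens + ones // 10, ones % 10  # regroup ten ones into a ten
    return tens * 10 + ones

print(drawing(36))                # |||oooooo
print(add_with_drawings(36, 27))  # 63
```

The regrouping step mirrors what the student does on paper: when ten circles accumulate, they are traded for one line.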
In upper grades, diagrams and pictorial representations used to teach
fractions also help students make sense of the basic structure underlying
word problems. Strip diagrams (also called model diagrams and bar
diagrams) are one type of diagram that can be used. Strip diagrams are
drawings of narrow rectangles that show relationships among quantities.
Students can use strip diagrams to help them reason about and solve a
wide variety of word problems about related quantities. In example 6 (p.
34), the full rectangle (consisting of all three equal parts joined together)
represents Shauntay’s money before she bought the book. Since she spent
2⁄3 of her money on the book, two of the three equal parts represent the $26
she spent on the book. Students can then reason that if two parts stand for
$26, then each part stands for $13, so three parts stand for $39. So,
Shauntay had $39 before she bought the book.
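The strip-diagram reasoning in example 6 follows two arithmetic steps, sketched below (our illustration of the logic, not part of the guide):

```python
def whole_from_parts(known_amount: int, known_parts: int, total_parts: int) -> int:
    """Strip-diagram reasoning: find the value of one equal part,
    then scale up to the whole strip."""
    one_part = known_amount // known_parts  # two parts are $26, so one part is $13
    return one_part * total_parts           # three parts make $39

# Shauntay spent 2/3 of her money ($26) on a book; what did she start with?
print(whole_from_parts(26, 2, 3))  # 39
```

The code makes the diagram's structure explicit: the rectangle's equal parts turn a fraction problem into division followed by multiplication.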

116. Manalo, Bunnell, and Stillman (2000). Note that this study was not eligible for review because it was conducted outside the United States.

2. If visuals are not sufficient for developing accurate abstract thought
and answers, use concrete manipulatives first. Although this can also be
done with students in upper elementary and middle school grades, use of
manipulatives with older students should be expeditious because the goal
is to move toward understanding of—and facility with—visual
representations, and finally, to the abstract.

Manipulatives are usually used in lower grades in the initial stages of
learning as teachers introduce basic concepts with whole numbers. This
exposure to concrete objects is often fleeting and transitory. The use of
manipulatives in upper elementary school grades is virtually nonexistent.117
The panel suggests that the interventionist use concrete objects in two
ways.
First, in lower elementary grades, use concrete objects more
extensively in the initial stages of learning to reinforce the understanding
of basic concepts and operations.118
Concrete models are routinely used to teach basic foundational
concepts such as place value.119 They are also useful in teaching other
aspects of mathematics such as multiplication facts. When a mul-
tiplication fact is memorized by question and answer alone, a student may
believe that numbers are to be memorized rather than understood. For
example, 4 × 6 equals 24. When shown using manipulatives (as in
example 7, p. 35), 4 × 6 means 4 groups of 6, which total 24 objects.
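The "groups of" meaning of multiplication can also be made concrete in code as repeated addition (a sketch of the idea, not a prescribed activity):

```python
def groups_of(groups: int, objects_per_group: int) -> int:
    """Count 4 x 6 as 4 groups of 6 objects, one group at a time."""
    total = 0
    for _ in range(groups):
        total += objects_per_group  # add one complete group of objects
    return total

print(groups_of(4, 6))  # 24 objects in all
```

Counting group by group reinforces that the product is a quantity of objects, not merely a memorized answer.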
Second, in the upper grades, use concrete objects when visual
representations do not seem sufficient in helping students understand
mathematics at the more abstract level.

117. Howard, Perry, and Lindsay (1996); Howard, Perry, and Conroy (1995).
118. Darch et al. (1984); Fuchs, Seethaler et al. (2008); Fuchs, Fuchs, Craddock et al. (2008).
119. Fuchs et al. (2005); Fuchs, Seethaler et al. (2008); Fuchs, Powell et al. (2008).

Example 4. Representation of the counting on strategy using
a number line

Example 5. Using visual representations for multidigit addition

Example 6. Strip diagrams can help students make sense of fractions



Example 7. Manipulatives can help students understand that four
multiplied by six means four groups of six, which means
24 total objects

Example 8. A set of matched concrete, visual, and abstract
representations to teach solving single-variable equations

Use manipulatives expeditiously, and focus on fading them away
systematically to reach the abstract level.120 In other words, explicitly teach
students the concepts and operations when students are at the concrete
level and consistently repeat the instructional procedures at the visual and
abstract levels. Using consistent language across representational systems
(manipulatives, visual representations, and abstract symbols) has been an
important component in several research studies.121 Example 8 (p. 35)
shows a set of matched concrete, visual, and abstract representations of a
concept involving solving single-variable equations.

Potential Roadblocks and Solutions

Roadblock 5.1. In the opinion of the panel, many intervention
materials provide very few examples of the use of visual representations.

Suggested Approach. Because many curricular materials do not include
sufficient examples of visual representations, the interventionist may need
the help of the mathematics coach or other teachers in developing the
visuals. District staff can also arrange for the development of these
materials for use throughout the district.

Roadblock 5.2. Some teachers or interventionists believe that
instruction in concrete manipulatives requires too much time.

Suggested Approach. Expeditious use of manipulatives cannot be
overemphasized. Since tiered interventions often rely on foundational
concepts and procedures, the use of instruction at the concrete level allows
for reinforcing and making explicit the foundational concepts and
operations. Note that overemphasis on manipulatives can be
counterproductive, because students manipulating only concrete objects
may not be learning to do math at an abstract level.122 The interventionist

120. Fuchs et al. (2005); Witzel (2005); Witzel, Mercer, and Miller (2003).
121. Fuchs et al. (2005); Butler et al. (2003); Witzel (2005); Witzel, Mercer, and Miller (2003).
122. Witzel, Mercer, and Miller (2003).

should use manipulatives in the initial stages strategically and then scaffold
instruction to the abstract level. So, although it takes time to use
manipulatives, this is not a major concern since concrete instruction will
happen only rarely and expeditiously.

Roadblock 5.3. Some interventionists may not fully understand the
mathematical ideas that underlie some of the representations. This is likely
to be particularly true for topics involving negative numbers, proportional
reasoning, and interpretations of fractions.

Suggested Approach. If interventionists do not fully understand the
mathematical ideas behind the material, they are unlikely to be able to
teach it to struggling students.123 It is perfectly reasonable for districts to
work with a local university faculty member, high school mathematics
instructor, or mathematics specialist to provide relevant mathematics
instruction to interventionists so that they feel comfortable with the concepts.
This can be coupled with professional development that addresses ways to
explain these concepts in terms their students will understand.

RECOMMENDATION 6. INTERVENTIONS AT ALL GRADE
LEVELS SHOULD DEVOTE ABOUT 10 MINUTES IN EACH
SESSION TO BUILDING FLUENT RETRIEVAL
OF BASIC ARITHMETIC FACTS

Quick retrieval of basic arithmetic facts is critical for success in
mathematics.124 Yet research has found that many students with difficulties
in mathematics are not fluent in such facts.125 Weak ability to retrieve
arithmetic facts is likely to impede understanding of concepts students
encounter with rational numbers since teachers and texts often assume
automatic retrieval of facts such as 3 × 9 = __ and 11 – 7 = __ as they

123. Hill, Rowan, and Ball (2005); Stigler and Hiebert (1999).
124. National Mathematics Advisory Panel (2008).
125. Geary (2004); Jordan, Hanich, and Kaplan (2003); Goldman, Pellegrino, and Mertz (1988).

explain concepts such as equivalence and the commutative property.126 For
that reason, we recommend that about 10 minutes be devoted to building
this proficiency during each intervention session. Acknowledging that time
may be short, we recommend a minimum of 5 minutes a session.

Level of Evidence: Moderate

The panel judged the level of evidence supporting this
recommendation to be moderate. This recommendation is based on seven
randomized controlled trials that met WWC standards or met standards
with reservations and that included fact fluency instruction in the
intervention.127 These studies reveal a series of small but positive effects on
measures of fact fluency128 and procedural knowledge for diverse student
populations in the elementary grades.129 In some cases, fact fluency
instruction was one of several components in the intervention, and it is
difficult to judge the impact of the fact fluency component alone.130
However, because numerous research teams independently produced
similar findings, we consider this practice worthy of serious consideration.
Although the research is limited to the elementary school grades, in the
panel’s view, building fact fluency is also important for middle school
students when used appropriately.

Brief Summary of Evidence to Support the Recommendation

The evidence demonstrates small positive effects on fact fluency and
operations for the elementary grades and thus provides support for
including fact fluency activities as either stand-alone interventions or

126. Gersten and Chard (1999); Woodward (2006); Jitendra et al. (1996).
127. Beirne-Smith (1991); Fuchs, Seethaler et al. (2008); Fuchs, Fuchs, Hamlett et al. (2006); Fuchs et al. (2005); Fuchs, Powell et al. (2008); Tournaki (2003); Woodward (2006).
128. Fuchs, Seethaler et al. (2008); Fuchs, Fuchs, Hamlett et al. (2006); Fuchs et al. (2005); Fuchs, Powell et al. (2008); Tournaki (2003); Woodward (2006).
129. Beirne-Smith (1991); Fuchs, Seethaler et al. (2008); Fuchs et al. (2005); Tournaki (2003); Woodward (2006).
130. Fuchs, Seethaler et al. (2008); Fuchs et al. (2005).

components of larger tier 2 interventions.131 These positive effects did not,
however, consistently reach statistical significance, and the findings cannot
be extrapolated to areas of mathematics outside of fact fluency and
operations.
Two studies examined the effects of being taught mathematics facts
relative to the effects of being taught spelling or word identification using
similar methods.132 In both studies, the mathematics facts group
demonstrated positive gains in fact fluency relative to the comparison
group, but the effects were significant in only one of the studies.133
Another two interventions included a facts fluency component in
combination with a larger tier 2 intervention.134 For example, in the Fuchs
et al. (2005) study, the final 10 minutes of a 40-minute intervention session
were dedicated to practice with addition and subtraction facts. In both
studies, tier 2 interventions were compared against typical tier 1 classroom
instruction. In each study, the effects on mathematics facts were small and
not significant, though the effects were generally positive in favor of
groups that received the intervention. Significant positive effects were
detected in both studies in the domain of operations, and the fact fluency
component may have been a factor in improving students’ operational
abilities.
Many of the studies in the evidence base included one or more of a
variety of components such as teaching the relationships among facts,135
making use of a variety of materials such as flash cards and computer-
assisted instruction,136 and teaching math facts for a minimum of 10 minutes
per session.137 Since these components were typically not independent

131. Beirne-Smith (1991); Fuchs et al. (2005); Fuchs, Fuchs, Hamlett et al. (2006); Fuchs, Seethaler et al. (2008); Fuchs, Powell et al. (2008); Tournaki (2003); Woodward (2006).
132. Fuchs, Fuchs, Hamlett et al. (2006); Fuchs, Powell et al. (2008).
133. In Fuchs, Fuchs, Hamlett et al. (2006), the effects on addition fluency were statistically significant and positive while there was no effect on subtraction fluency.
134. Fuchs, Seethaler et al. (2008); Fuchs et al. (2005).
135. Beirne-Smith (1991); Fuchs et al. (2005); Fuchs, Fuchs, Hamlett et al. (2006); Fuchs, Seethaler et al. (2008); Woodward (2006).
136. Beirne-Smith (1991); Fuchs et al. (2005); Fuchs, Fuchs, Hamlett et al. (2006); Fuchs, Seethaler et al. (2008); Fuchs, Powell et al. (2008).
137. Beirne-Smith (1991); Fuchs et al. (2005); Fuchs, Fuchs, Hamlett et al. (2006); Fuchs, Seethaler et al. (2008); Fuchs, Powell et al. (2008); Tournaki (2003); Woodward (2006).

variables in the studies, it is difficult to attribute any positive effects to the
component itself. There is evidence, however, that strategy-based
instruction for fact fluency (such as teaching the counting-on procedure) is
superior to rote memorization.138

How to Carry Out This Recommendation

1. Provide about 10 minutes per session of instruction to build quick
retrieval of basic arithmetic facts. Consider using technology, flash cards,
and other materials for extensive practice to facilitate automatic retrieval.

The panel recommends providing about 10 minutes each session for
practice to help students become automatic in retrieving basic arithmetic
facts, beginning in grade 2. The goal is quick retrieval of facts using the
digits 0 to 9 without any access to pencil and paper or manipulatives.
Presenting facts in number families (such as 7 × 8 = 56, 8 × 7 = 56,
56/7 = 8, and 56/8 = 7) shows promise for improving student fluency.139 In
the panel’s view, one advantage of this approach is that students
simultaneously learn about the nature of inverse operations.
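The structure of a number family can be generated mechanically, which makes the inverse relationship visible. A small sketch (the function name is ours):

```python
def fact_family(a: int, b: int) -> list[str]:
    """List the four related facts for a single multiplication pair."""
    product = a * b
    return [
        f"{a} x {b} = {product}",
        f"{b} x {a} = {product}",  # commutativity
        f"{product} / {a} = {b}",  # inverse operation
        f"{product} / {b} = {a}",
    ]

for fact in fact_family(7, 8):
    print(fact)
```

A single pair of numbers yields all four facts, which is exactly why presenting facts in families exposes students to inverse operations at no extra cost.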
In the opinion of the panel, cumulative review is critical if students are
to maintain fluency and proficiency with mathematics facts. An efficient
way to achieve this is to integrate previously learned facts into practice
activities. To reduce frustration and provide enough extended practice so
that retrieval becomes automatic (even for those who tend to have limited
capacity to remember and retrieve abstract material), interventionists can
individualize practice sets so students learn one or two new facts, practice
several recently acquired facts, and review previously learned facts.140 If
students are proficient in grade-level mathematics facts, then the panel
acknowledges that students might not need to practice each session,
although periodic cumulative review is encouraged.
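One way such an individualized practice set might be assembled is sketched below. The mix sizes (two new, four recent, four review) are illustrative assumptions, not proportions prescribed by the panel:

```python
import random

def build_practice_set(new_facts, recent_facts, mastered_facts,
                       n_new=2, n_recent=4, n_review=4):
    """Mix one or two new facts with recently acquired facts and a
    cumulative-review sample of previously learned facts."""
    review = random.sample(mastered_facts, min(n_review, len(mastered_facts)))
    return list(new_facts[:n_new]) + list(recent_facts[:n_recent]) + review

deck = build_practice_set(
    new_facts=["7x8", "8x8"],
    recent_facts=["6x7", "6x8", "7x7", "9x6"],
    mastered_facts=["2x3", "4x5", "3x3", "5x5", "2x9"],
)
print(len(deck))  # 10 facts per practice set
```

Sampling the review items keeps the cumulative-review portion fresh from session to session while the new and recent facts stay fixed until mastered.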

138. Beirne-Smith (1991); Tournaki (2003); Woodward (2006).
139. Fuchs et al. (2005); Fuchs, Fuchs, Hamlett et al. (2006); Fuchs, Seethaler et al. (2008).
140. Hasselbring, Bransford, and Goin (1988). Note that there was not sufficient information to do a WWC review.

2. For students in kindergarten through grade 2, explicitly teach
strategies for efficient counting to improve the retrieval of mathematics
facts.

It is important to provide students in kindergarten through grade 2 with
strategies for efficiently solving mathematics facts as a step toward
automatic, fluent retrieval. The counting-up strategy has been used to
increase students’ fluency in addition facts.141 This is a simple, effective
strategy that the majority of students teach themselves, sometimes as early
as age 4.142 But students with difficulties in mathematics tend not to
develop this strategy on their own, even by grade 2.143 There is evidence
that systematic and explicit instruction in this strategy is effective.144
Students can be explicitly taught to find the smaller number in the
mathematics fact, put up the corresponding number of fingers, and count
up that number of fingers from the larger number. For example, to solve
3 + 5 = __, the teacher identifies the smaller number (3) and puts up three
fingers. The teacher simultaneously says and points to the larger number
before counting three fingers, 6, 7, 8.
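The counting-up steps described above can be traced in code. This is our sketch; it mirrors the finger counting, including starting from the larger addend:

```python
def count_up(a: int, b: int) -> int:
    """Solve an addition fact by counting up from the larger addend."""
    larger, smaller = max(a, b), min(a, b)
    total = larger
    for _ in range(smaller):  # one raised finger per count
        total += 1
    return total

print(count_up(3, 5))  # start at 5, then count 6, 7, 8
```

Because `max` and `min` reorder the addends, the same trace also illustrates the commutativity point made in the next paragraph: 3 + 5 and 5 + 3 produce identical counting sequences.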
Note that learning the counting-up strategy not only improves students’
fact fluency145 but also immerses students in the commutative property of
addition. For example, students learn that when the larger number is
presented second (3 + 5 = __), they can rearrange the order and start
counting up from 5. In the view of the panel, this linkage is an important part
of intervention. After this type of instruction, follow-up practice with flash
cards might help students make the new learning automatic.

3. Teach students in grades 2 through 8 how to use their knowledge of
properties, such as the commutative, associative, and distributive laws, to
derive facts in their heads.

141. Beirne-Smith (1991); Tournaki (2003).
142. Siegler and Jenkins (1989).
143. Tournaki (2003).
144. Tournaki (2003).
145. Tournaki (2003).

Some researchers have argued that rather than solely relying on rote
memorization and drill and practice, students should use properties of
arithmetic to solve complex facts involving multiplication and division.146
These researchers believe that by teaching the use of composition and
decomposition, and applying the distributive property to situations
involving multiplication, students can increasingly learn how to quickly (if
not automatically) retrieve facts. For example, to understand and quickly
produce the seemingly difficult multiplication fact 13 × 7 = __, students
are reminded that 13 = 10 + 3, something they should have been taught
consistently during their elementary career. Then, since 13 × 7 = (10 + 3) ×
7 = 10 × 7 + 3 × 7, the fact is parsed into easier, known problems 10 × 7 =
__ and 3 × 7 = __ by applying the distributive property. Students can
then rely on the two simpler multiplication facts (which they had already
acquired) to quickly produce an answer mentally.
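The 13 × 7 derivation can be written out directly, a sketch of the decomposition with our own function name:

```python
def times_via_distribution(n: int, factor: int) -> int:
    """Compute n x factor by splitting n into tens and ones:
    13 x 7 = (10 + 3) x 7 = 10 x 7 + 3 x 7."""
    tens, ones = (n // 10) * 10, n % 10
    return tens * factor + ones * factor  # two easier, known facts

print(times_via_distribution(13, 7))  # 70 + 21 = 91
```

The two partial products are exactly the simpler facts the student already knows, so the "hard" fact reduces to one mental addition.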
The panel recommends serious consideration of this approach as an
option for students who struggle with acquisition of facts in grades 2
through 8. When choosing an intervention curriculum, consider one that
teaches this approach to students in this age range. Note, however, that the
panel believes students should also spend time after instruction with
extensive practice on quick retrieval of facts through the use of materials
such as flash cards or technology.

Potential Roadblocks and Solutions

Roadblock 6.1. Students may find fluency practice tedious and boring.

Suggested Approach. Games that provide students with the opportunity
to practice new facts and review previously learned facts by encouraging
them to beat their previous high score can help the practice be less
tedious.147 Players may be motivated when their scores rise and the
challenge increases. Further recommendations for motivating students are
in recommendation 8.

146. Robinson, Menchetti, and Torgesen (2002); Woodward (2006).
147. Fuchs, Seethaler et al. (2008).

Roadblock 6.2. Curricula may not include enough fact practice or may
not have materials that lend themselves to teaching strategies.

Suggested Approach. Some contemporary curricula deemphasize fact
practice, so this is a real concern. In this case, we recommend using a
supplemental program, either flash card or technology based.

RECOMMENDATION 7. MONITOR THE PROGRESS
OF STUDENTS RECEIVING SUPPLEMENTAL INSTRUCTION
AND OTHER STUDENTS WHO ARE AT RISK

Assess the progress of tier 2 and tier 3 students regularly with general
outcome measures and curriculum-embedded measures. Also monitor
regularly the progress of tier 1 students who perform just above the cutoff
score for general outcome measures so they can be moved to tier 2 if they
begin to fall behind.
In addition, use progress monitoring data to determine when
instructional changes are needed. This includes regrouping students who
need continuing instructional support within tier 2 or tier 3, or moving
students who have met benchmarks out of intervention groups and back to
tier 1.
Information about specific progress monitoring measures is available
in Appendix B. A list of online resources is in the text below.

Level of Evidence: Low

The panel judged the level of evidence supporting this
recommendation to be low. No studies that met WWC standards supported
this recommendation.148 Instead, the recommendation is based on the

148. The technical adequacy studies of mathematics progress monitoring measures were not experimental; the researchers typically used correlation techniques to evaluate the reliability

panel’s expert opinion as well as consideration of the standards for
measurement established by a joint committee of national organizations.149

Brief Summary of Evidence to Support the Recommendation

Although we found no studies that addressed the use of valid measures
for struggling students within an RtI framework, nonexperimental studies
demonstrate the technical adequacy of various progress monitoring
measures.150 Measures for the primary grades typically reflect aspects of
number sense, including strategic counting, numeral identification, and
magnitude comparisons.151 Studies investigating measures for the
elementary grades focus mostly on the characteristics of general outcome
measures that represent grade-level mathematics curricula in computation
and in mathematics concepts and applications.152 Widely used, these
measures are recommended by the National Center for Student Progress
Monitoring.153 Less evidence is available to support progress monitoring in
middle school.154 But research teams have developed measures focusing on
math concepts typically taught in middle school,155 basic facts,156 and
estimation.157

and criterion validity of the measures and regression methods to examine sensitivity to growth.
149. The American Psychological Association, the American Educational Research Association, and the National Council on Measurement in Education (1999).
150. For example, Clarke et al. (2008); Foegen and Deno (2001); Fuchs et al. (1993); Fuchs, Fuchs, Hamlett, Thompson et al. (1994); Leh et al. (2007); Lembke et al. (2008).
151. For example, Clarke et al. (2008); Lembke et al. (2008); Bryant, Bryant, Gersten, Scammacca, and Chavez (2008).
152. Fuchs and Fuchs (1998); Fuchs et al. (1999).
153. www.studentprogress.org.
154. Foegen (2008).
155. Helwig, Anderson, and Tindal (2002).
156. Espin et al. (1989).
157. Foegen and Deno (2001); Foegen (2000).
62 Russell Gersten, Sybilla Beckmann, Benjamin Clarke et al.

How to Carry Out This Recommendation

1. Monitor the progress of tier 2, tier 3, and borderline tier 1 students
at least once a month using grade-appropriate general outcome measures.

General outcome measures typically take 5 to 10 minutes to administer
and should be used at least monthly to monitor tier 2 and tier 3 students.
General outcome measures use a sample of items from the array of
concepts covered over one year to assess student progress. They provide a
broad perspective on student proficiency in mathematics. They target
concepts such as magnitude comparison, counting ability, and knowledge
of place value for students in kindergarten and grade 1, and increasingly
complex aspects of place value and proficiency with operations for
students in grades 2 through 6. Examining student performance on these
measures allows teachers to determine whether students are integrating and
generalizing the concepts, skills, and strategies they are learning in the core
curriculum and the intervention.158
In addition to monitoring the progress of tier 2 and tier 3 students, the
panel recommends monitoring the progress of borderline tier 1 students
with general outcome measures on a monthly basis. Since these students
scored just above the cut score, they were not selected for supplemental
instruction. The panel suggests using one standard error of measurement (a
statistic available in the technical information for the measures) above the
cut score to define the range of scores for borderline students. Using this
approach, teachers can continue to monitor the progress of students whose
scores fell just above the cut score and determine whether they should
receive supplemental instruction.
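The borderline range described above can be sketched in code. This is an illustrative sketch only: the function name, cut score, SEM value, and student names are all invented for the example and are not taken from any published screening tool.

```python
# Hypothetical helper for flagging borderline tier 1 students: those whose
# screening scores fall above the cut score but within one standard error
# of measurement (SEM) of it. All names and numbers here are illustrative.

def borderline_students(scores, cut_score, sem):
    """Return students scoring above the cut score but within one SEM of it."""
    return {
        student: score
        for student, score in scores.items()
        if cut_score < score <= cut_score + sem
    }

# Suppose the cut score is 20 and the SEM is 3, so the borderline
# range covers scores of 21 through 23.
fall_screen = {"Student A": 19, "Student B": 21, "Student C": 23, "Student D": 30}
print(borderline_students(fall_screen, cut_score=20, sem=3))
# {'Student B': 21, 'Student C': 23}
```

Students at or below the cut score (Student A here) would already have been selected for supplemental instruction; the helper flags only those just above it for continued monitoring.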
Choose progress monitoring measures with evidence supporting their
reliability, validity, and ability to identify growth. This will require input
from individuals with expertise in these areas, typically school
psychologists or members of district research departments. Consider
whether the measure produces consistent results (reliability) and provides
information that correlates with other measures of mathematics
achievement (criterion validity). Ability to identify growth helps
interventionists ensure that students are learning and making progress
toward an annual goal through the full array of services they are receiving.

158
Fuchs, Fuchs, and Zumeta (2008).
In some cases, general outcome measures may also be used for
screening, as described in recommendation 1. Resources that teachers can
turn to for identifying appropriate measures include the National Center on
Student Progress Monitoring’s review of available tools
(http://www.studentprogress.org/) and the Research Institute on Progress
Monitoring (http://www.progressmonitoring.org/).

2. Use curriculum-embedded assessments in interventions to determine
whether students are learning from the intervention. These measures can
be used as often as every day159 or as infrequently as once every other
week.160

Many tier 2 and tier 3 intervention programs (commercially developed,
researcher developed, or district developed) include curriculum-embedded
assessments (sometimes called unit tests, mastery tests, or daily probes).
The results of these assessments can be used to determine which concepts
need to be reviewed, which need to be re-taught, and which have been
mastered. Curriculum-embedded assessments are often administered daily
for students in kindergarten161 and grade 1 and biweekly for students in
grades 2 through 6.162 These assessments usually do not possess the same
high technical characteristics as the general outcome measures.
Curriculum-embedded assessments often result in very useful information

159
Bryant, Bryant, Gersten, Scammacca, and Chavez (2008).
160
Jitendra (2007).
161
For example, one tier 2 intervention program for 1st and 2nd grade students reported by
Bryant, Bryant, Gersten, Scammacca, and Chavez (2008) included daily activity-level
progress monitoring that consisted of four oral or written problems drawn from the content
focus for that day. Teachers were instructed that a majority of the students in the group had
to complete at least three of the four problems correctly to consider the daily lesson
successful.
162
A parallel example in grades 3 and beyond can be found in Jitendra’s Solving math word
problems instructional materials on teaching word problems (2007). There are many other
examples in available commercial programs.
for interventionists because they can detect changes in student performance
in the concepts and skills being taught at the time. Interventionists need to
be cautious about assuming that mastery of individual skills and concepts
will translate into improvements in overall proficiency. As a result, the
panel recommends using both general outcome measures and curriculum-
embedded assessments for students receiving interventions.
If the intervention program does not include curriculum-embedded
assessments, use efficient, reliable, and valid screening measures, which
can also be used as progress monitoring measures (see recommendation 1).

3. Use progress monitoring data to regroup students when necessary.

Since student skill levels change over time and in varying degrees, the
panel recommends using progress monitoring data to regroup students
within tiers so that the small groups used in tier 2 interventions are as
homogeneous as possible. If a student does not fit into any of the
intervention groups from his or her class, consider putting the child in an
intervention group from another class if the schedule permits.

Roadblocks and Solutions

Roadblock 7.1. Students within classes are at very different levels. This
can make it difficult to group students into appropriate tier 2 and tier 3
intervention groups.

Suggested Approach. If students within a class are at such diverse
levels that appropriate tier 2 and tier 3 intervention groups cannot be made,
consider grouping students across classes. This will facilitate clustering
students with similar needs. For example, teachers of upper elementary
students may find that students who have not yet mastered basic concepts
in a particular area (fractions) are spread across several classrooms. Putting
these students in a single tier 2 intervention group would be the most
efficient means of meeting their needs, rather than trying to provide one or
two students in each class with services duplicated across classrooms. In
such a case, a math specialist, paraprofessional, or other school personnel
who have received training can conduct the intervention.

Roadblock 7.2. There is insufficient time for teachers to implement
progress monitoring.

Suggested Approach. If teachers are too busy to assess student progress
with monitoring measures, consider training paraprofessionals or other
school staff to do so.

RECOMMENDATION 8. INCLUDE MOTIVATIONAL
STRATEGIES IN TIER 2 AND TIER 3 INTERVENTIONS

Adults can sometimes forget how challenging so-called “basic”
arithmetic is for students in tier 2 and tier 3 interventions. Many of these
students have had experiences of failure and frustration with mathematics
by the time they receive an intervention. They may also have a particularly
difficult time storing and easily retrieving information in their memories.163
Therefore, it seems particularly important to provide additional motivation
for these students.164
Praising students for their effort and for being engaged as they work
through mathematics problems is a powerful motivational tool that can be
effective in increasing students’ academic achievement.165 Tier 2 and tier 3
interventions should include components that promote student effort
(engagement-contingent rewards), persistence (completion-contingent
rewards), and achievement (performance-contingent rewards). These

163
Geary (2003).
164
The scope of this practice guide limited the motivational strategies reviewed to strategies used
in studies of students struggling with mathematics. For a wider review of effective
motivational strategies used in classrooms, see Epstein et al. (2008) and Halpern et al. (2007).
165
Schunk and Cox (1986); Fuchs et al. (2005); Fuchs, Fuchs, Craddock et al. (2008).
components can include praise and rewards. Even a well-designed
intervention curriculum may falter without such behavioral supports.

Level of Evidence: Low

The panel judged the level of evidence supporting this
recommendation to be low. This recommendation is based on the
professional opinion of the panel, and on nine studies that met WWC
standards or met standards with reservations that included motivational
strategies in the intervention.166 Although one of these studies
demonstrated that praising struggling students for their effort significantly
improved their ability to solve subtraction problems with regrouping,167
other studies included a motivational component as one of several
components of the intervention. In the opinion of the panel, these studies
did not show that a motivational component is essential but suggest that it
may be useful for improving mathematics achievement.168

Brief Summary of Evidence to Support the Recommendation

One study that met WWC standards examined the effects of a
motivational component by comparing the performance of students who
received praise for their effort during subtraction instruction with those
who did not receive praise.169 This study found significant positive effects
on student subtraction scores in favor of providing effort feedback.170
Although this study provides some evidence of the effectiveness of a
motivational strategy, it is the only study that explicitly tested the effects of
motivational strategies on mathematics outcomes.

166
Fuchs et al. (2005); Fuchs, Fuchs, Craddock et al. (2008); Schunk and Cox (1986); Fuchs,
Seethaler et al. (2008); Heller and Fantuzzo (1993); Artus and Dyrek (1989); Fuchs, Fuchs
et al. (2003b); Fuchs, Fuchs, Hamlett, Phillips et al. (1994); Fuchs, Fuchs, Finelli et al.
(2006).
167
Schunk and Cox (1986).
168
There is an extensive literature on motivational strategies outside the scope of this practice
guide. For more information on motivational strategies see Epstein et al. (2008) and
Halpern et al. (2007).
169
Schunk and Cox (1986).
170
Schunk and Cox (1986).
In two studies, students received points for engagement and
attentiveness,171 and in three studies, students were provided with prizes as
tangible reinforcers for accurate mathematics problem-solving.172 However,
in each of these studies, it was not possible to isolate the effects of
reinforcing attentiveness and accuracy. For example, in two of the studies,
students in tier 2 tutoring earned prizes for accuracy.173 Although in both
studies, the tier 2 intervention group demonstrated substantively important
positive and sometimes significant gains on a variety of mathematics
measures relative to the students who remained in tier 1, it is not possible to
isolate the effects of the reinforcers from the provision of tier 2 tutoring.
Another study examined the impact of parental involvement on students’
mathematics achievement and found statistically significant positive effects
on operations and general math achievement.174 However, because the
parental involvement component was multifaceted, it is not possible to
attribute the positive effects to rewards alone.
Five studies in the evidence base included interventions in which students
graphed their progress and in some cases set goals for improvement on future
assessments.175 One experimental study examined the effects of student
graphing and goal setting as an independent variable and found substantively
important positive effects on measures of word problems in favor of students
who graphed and set goals.176 The other four studies did not isolate the
effects of graphing progress.177 Because this recommendation is based
primarily on the opinion of the panel, the level of evidence is identified as
low.

171
Fuchs et al. (2005); Fuchs, Fuchs, Craddock et al. (2008).
172
Fuchs et al. (2005); Fuchs, Seethaler et al. (2008); Fuchs, Fuchs, Craddock et al.
(2008).
173
Fuchs et al. (2005); Fuchs, Seethaler et al. (2008).
174
Heller and Fantuzzo (1993).
175
Artus and Dyrek (1989); Fuchs, Seethaler et al. (2008); Fuchs et al. (2003b); Fuchs,
Fuchs, Hamlett, Phillips et al. (1994); Fuchs, Fuchs, Finelli et al. (2006).
176
Fuchs et al. (2003b).
177
Artus and Dyrek (1989); Fuchs, Seethaler et al. (2008); Fuchs, Fuchs, Hamlett, Phillips et al.
(1994); Fuchs, Fuchs, Finelli et al. (2006).

How to Carry Out This Recommendation

1. Reinforce or praise students for their effort and for attending to and
being engaged in the lesson.

Verbally praise students for their effort178 and for listening carefully
and following the lesson in a systematic fashion (engagement-contingent
rewards).179 The panel believes that praise should be immediate and
specific to highlight student effort and engagement. But we also believe
that it is ineffective to offer generic and empty praise (“good job!” or
“keep up the good work!”) that is not related to actual effort. Instead,
praise is most effective when it points to specific progress that students are
making and recognizes students’ actual effort.180 Systematically praising
students for their effort and engagement may encourage them to remain
focused on the completion of their work.

2. Consider rewarding student accomplishments.

Consider using rewards to acknowledge completion of math tasks
(completion-contingent rewards) and accurate work (performance-contingent
rewards). This can be done by applauding or verbally praising
students for actual accomplishments, such as finishing assignments,
improving their score from 70 percent to 80 percent correct, or giving
students points or tokens each time they answer a problem correctly, which
they can use to “buy” tangible rewards at a later time.181 Again, praise
should be specific rather than generic.182 Consider notifying the student’s
parents to inform them of their child’s successes in mathematics by phone
or email or in a note sent home with the student.183 Remember that parents
of these students are likely to receive notification of problems rather than
178
Schunk and Cox (1986).
179
For example, Fuchs et al. (2005); Fuchs, Fuchs, Craddock et al. (2008).
180
See Bangert-Drowns et al. (1991) and Halpern et al. (2007) for a review.
181
For example, Fuchs et al. (2005); Fuchs, Seethaler et al. (2008); Fuchs, Fuchs, Craddock et al.
(2008).
182
Halpern et al. (2007).
183
For example, Heller and Fantuzzo (1993).
successes, and some evidence suggests that this specific positive attention
might support achievement growth.184

3. Allow students to chart their progress and to set goals for
improvement.

Several of the interventions in the evidence base for this practice guide
had students graph their progress on charts185 and set goals for improving
their assessment scores.186 For example, students might graph their scores
on a chart showing a series of thermometers, one for each session of the
intervention.187 At the beginning of each session, students can examine
their charts and set a goal to beat their previous score or to receive the
maximum score. This type of goal setting is believed to help students
develop self-regulated learning because students take independent
responsibility for setting and achieving goals.188

Roadblocks and Suggested Approaches

Roadblock 8.1. Rewards can reduce genuine interest in mathematics
by directing student attention to gathering rewards rather than learning
math.

Suggested Approach. It is important to inform interventionists that
research in other content areas has demonstrated that rewards and praise
increase the likelihood of students’ academic success without diminishing
their interest in learning.189 Given the frequent history of failure for many
of these students, at least in the elementary grades, we suggest using
rewards and praise to encourage effort, engagement, and achievement. As

184
For example, Heller and Fantuzzo (1993).
185
Artus and Dyrek (1989); Fuchs, Fuchs, Hamlett, Phillips et al. (1994); Fuchs, Fuchs, Finelli et
al. (2006); Fuchs, Seethaler et al. (2008).
186
Fuchs et al. (2003b).
187
See the procedure in Fuchs et al. (2003b).
188
Fuchs et al. (1997).
189
Epstein et al. (2008).
students learn and succeed more often in mathematics, interventionists can
gradually fade the use of rewards because student success will become an
intrinsic reward. The WWC Reducing Behavior Problems in the Elementary
School Classroom Practice Guide190 is a good reference for more
information on the use of rewards and praise.

Roadblock 8.2. It is difficult to determine appropriate rewards for
individual students.

Suggested Approach. Consider each student’s interests before choosing
an appropriate reward. Also consider using opportunities to engage in
activities students are interested in as rewards to reinforce effort,
engagement, and accurate work. Parents may also have ideas for rewards
that will help motivate their children. Schools can engage parents in
rewarding students and coordinate efforts to reward children at home as
well.191

Roadblock 8.3. Providing feedback and rewarding achievement
detracts from classroom instructional time. It is difficult to fit it into the
classroom schedule.

Suggested Approach. Verbally praising students for their effort
individually and their engagement in small group lessons requires very
little time. Awarding points or tokens for correct responses can be done
when the teacher grades the student’s work. To reduce the amount of time
it takes for students to “buy” prizes with accumulated points or tokens, ask
students to choose which prize they want before they “buy” it. The prizes
can then be distributed quickly at the end of the day, so that students are
not distracted by the items throughout the school day.

190
Epstein et al. (2008).
191
For example, Heller and Fantuzzo (1993).

GLOSSARY OF TERMS AS USED IN THIS CHAPTER

The associative property of addition states that (A + B) + C = A + (B +
C) for all numbers A, B, C. This property allows for flexibility in
calculating sums. For example, to calculate 85 + 97 + 3, we do not have to
add 85 and 97 first but may instead calculate the easier sum 85 + (97 + 3),
which is 85 + 100, which equals 185. The associative property is also used
in deriving basic addition facts from other basic facts and therefore helps
with learning these basic facts. For example, to add 8 + 5, a child can think
of breaking the 5 into 2 + 3, combining the 2 with the 8 to make a 10, and
then adding on the 3 to make 13. In an equation, this can be recorded as: 8
+ 5 = 8 + (2 + 3) = (8 + 2) + 3 = 10 + 3 = 13.

Example 9: Commutative property of multiplication

Concurrent validity refers to the correlation between the assessment
that is being investigated and a similar assessment when the assessments
are completed at the same point in time. Correlation coefficients range
from -1 to 1. A correlation coefficient close to 1 indicates a strong overlap
between the assessments.
Counting up/on is a strategy that young children can use to solve
addition (and subtraction) problems. To calculate 8 + 3 by counting on, the
child starts with 8 and then “counts on” 3 more, saying “9, 10, 11.” The
child may use fingers in order to determine when to stop counting.
Counting up/on from a larger addend is a strategy in which children
apply counting on to solve an addition problem, but first apply the
commutative property, if necessary, in order to count on from the larger
number. For example, to calculate 3 + 9, a child first changes the problem
to 9 + 3 and then counts on 3 from 9.
Counting up/on to solve unknown addend and subtraction
problems. Counting on can be used to solve unknown addend problems
such as 11 + ? = 15. To solve this problem, the child can count on from 11
to 15, counting 12, 13, 14, 15 and raising one finger for each number.
Since 4 fingers were raised, 11 + 4 = 15. To solve a subtraction problem
such as 15 minus 11 = ? using counting on, the child must first understand
that the problem can be reformulated as 11 + ? = 15. Then the child can
use counting on as described previously. Note that solving this
subtraction problem using the counting on strategy is much easier than
counting down 11. Note too that the reformulation of a subtraction
problem as an unknown addend problem is important in its own right
because it connects subtraction with addition.
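The counting up/on strategies above can be sketched as code. The function names are ours, purely for illustration, and each loop mirrors the counting a child would do aloud.

```python
# Illustrative sketch of the counting up/on strategies (not from the guide).

def count_on(a, b):
    """Add by counting on from the larger addend (commutative property)."""
    start, count = max(a, b), min(a, b)
    total = start
    for _ in range(count):      # for 3 + 9: count "10, 11, 12" from 9
        total += 1
    return total

def count_on_to(known_addend, total):
    """Solve an unknown-addend problem: known_addend + ? = total."""
    fingers = 0                 # one finger raised per number counted
    current = known_addend
    while current < total:      # for 11 + ? = 15: count "12, 13, 14, 15"
        current += 1
        fingers += 1
    return fingers

print(count_on(3, 9))       # 12
print(count_on_to(11, 15))  # 4, so 15 - 11 = 4
```

Reformulating 15 minus 11 as 11 + ? = 15 and calling `count_on_to(11, 15)` takes 4 counting steps, versus 11 steps for counting down, which is the efficiency point made in the glossary entry.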
In mathematics assessment, criterion-related validity means that
student scores on an assessment should correspond to their scores or
performance on other indicators of mathematics competence, such as
teacher ratings, course grades, or standardized test scores.
Derived fact strategies in addition and subtraction are strategies in
which children use addition facts they already know to find related facts.
Especially important among derived fact strategies are the make-a-10
methods because they emphasize base 10 structure. As shown in example
10, to add 8 + 5, a child can think of breaking the 5 into 2 + 3, combining
the 2 with the 8 to make a 10, and then adding on the 3 to make 13 (see the
associative property of addition). Note that to use this make-a-10 strategy,
children must know the “10 partner” (number that can be added to make
10) for each number from 1 to 9 and must also know how to break each
number into a sum of two (positive whole) numbers in all possible ways.
Furthermore, the child must understand all the “teen” numbers (from 11 to
19) as a 10 and some ones (for example, 15 is 10 and 5 ones).

Example 10: Make-a-10 strategy
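As a sketch in code, the make-a-10 method for a one-digit sum that crosses 10 might look like this; the function and its name are ours, invented for illustration.

```python
# Illustrative sketch of the make-a-10 strategy for one-digit addends
# whose sum crosses 10 (e.g., 8 + 5). Not taken from the guide.

def make_a_ten(a, b):
    """Compute a + b by first completing a to 10, then adding the leftover."""
    partner = 10 - a        # the "10 partner" of a (e.g., 2 for 8)
    leftover = b - partner  # break b into partner + leftover (5 = 2 + 3)
    return 10 + leftover    # a + b = (a + partner) + leftover = 10 + leftover

# 8 + 5 = 8 + (2 + 3) = (8 + 2) + 3 = 10 + 3 = 13
print(make_a_ten(8, 5))  # 13
```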

Derived fact strategies in multiplication and division are strategies in
which children use multiplication facts they already know to find related
facts. For example, 5 × 8 is half of 10 × 8, and similarly for all the “5
times” facts (these are examples of applying the associative property).
Also, 9 × 8 is 8 less than 10 × 8, and similarly for all the “9 times” facts
(these are examples of applying the distributive property). To calculate 4 ×
7, we can double the double of 7, that is, the double of 7 is 14 and the
double of 14 is 28, which is 4 times 7. All the “4 times” facts can be
derived by doubling the double (these are examples of applying the
associative property).

The distributive property relates addition and multiplication. It states
that A × (B + C) = (A × B) + (A × C) for all numbers A, B, C. This
property allows for flexibility in calculating products. For example, to
calculate 7 × 13, we can break 13 apart by place value as 10 + 3 and
calculate 7 × 10 = 70 and 7 × 3 = 21 and add these two results to find 7 ×
13 = 91. In an equation, this can be recorded as: 7 × 13 = 7 × (10 + 3) = (7
× 10) + (7 × 3) = 70 + 21 = 91. This strategy of breaking numbers apart by
place value and applying the distributive property is the basis for the
common method of longhand multiplication. The distributive property is
also used in deriving basic multiplication facts from other basic facts and
therefore helps in learning these facts. As shown in example 11, to
calculate 6 × 7, a child who already knows 6 × 5 = 30 and 6 × 2 = 12 can
add the 30 and 12 to calculate that 6 × 7 = 30 + 12 = 42. In an equation,
this can be recorded as: 6 × 7 = 6 × (5 + 2) = (6 × 5) + (6 × 2) = 30 + 12 =
42.

Example 11: Distributive property

Efficiency is how quickly the universal screening measure can be
administered, scored, and analyzed for all the students tested.
False positives and false negatives are technical terms used to describe
the misidentification of students. The numbers of false positives and false
negatives are related to sensitivity and specificity. As depicted in table 3
(p. 25) of this guide, sensitivity is equal to the number of true positives
(students properly identified as needing help in mathematics) divided by
the sum of this value and the number of false negatives, while specificity is
equal to the number of true negatives divided by the sum of this value and
the number of false positives (students misidentified during screening).
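The two formulas in this entry can be written directly as code. This is a generic sketch of the standard definitions; the counts below describe a purely hypothetical screening of 100 students.

```python
# Sensitivity and specificity computed from screening counts
# (all numbers here are invented for illustration).

def sensitivity(true_pos, false_neg):
    """Proportion of students who truly needed help that the screen flagged."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of students not needing help that the screen did not flag."""
    return true_neg / (true_neg + false_pos)

# Hypothetical screen of 100 students: 20 later struggled, 80 did not.
# The screen flagged 18 of the 20 strugglers and cleared 72 of the other 80.
print(sensitivity(true_pos=18, false_neg=2))   # 0.9
print(specificity(true_neg=72, false_pos=8))   # 0.9
```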
A general outcome measure refers to a measure of specific
proficiencies within a broader academic domain. These proficiencies are
related to broader outcomes. For example, a measure of oral reading
fluency serves as a general outcome measure of performance in the area of
reading. The measures can be used to monitor student progress over time.
Interventionist refers to the person teaching the intervention. The
interventionist might be a classroom teacher, instructional assistant, or
other certified school personnel.
The magnitude of a quantity or number is its size, so a magnitude
comparison is a comparison of size. The term magnitude is generally used
when considering size in an approximate sense. In this case, we often
describe the size of a quantity or number very roughly by its order of
magnitude, which is the power of ten (namely 1, 10, 100, 1000, . . . or 0.1,
0.01, 0.001, . . .) that the quantity or number is closest to.
Number composition and number decomposition are not formal
mathematical terms but are used to describe putting numbers together, as in
putting 2 and 3 together to make 5, and breaking numbers apart, as in
breaking 5 into 2 and 3. For young children, a visual representation like the
one shown on the next page is often used before introducing the traditional
mathematical notation 2 + 3 = 5 and 5 = 2 + 3 for number composition and
decomposition.
A number line is a line on which locations for 0 and 1 have been
chosen (1 is to the right of 0, traditionally). Using the distance between 0
and 1 as a unit, a positive real number, N, is located N units to the right of
0 and a negative real number, -N, is located N units to the left of 0. In this
way, every real number has a location on the number line and every
location on the number line corresponds to a real number.

Example 12: Number decomposition

A number path is an informal precursor to a number line. It is a path of
consecutively numbered “steps,” such as the paths found on many
children’s board games along which game pieces are moved. Determining
locations on number paths only requires counting, whereas determining
locations on number lines requires the notion of distance.
Reliability refers to the degree to which an assessment yields
consistency over time (how likely are scores to be similar if students take
the test a week or so later?) and across testers (do scores change when
different individuals administer the test?). Alternate form reliability tells us
the extent to which an educator can expect similar results across
comparable forms or versions of an assessment.
Predictive validity is the extent to which a test can predict how well
students will do in mathematics a year or even two or three years later.
Response to Intervention (RtI) is an early detection, prevention, and
support system in education that identifies struggling students and assists
them before they fall behind.
Sensitivity indicates how accurately a screening measure predicts
which students are at risk. Sensitivity is calculated by determining the
number of students who end up having difficulty in mathematics and then
examining the percentage of those students predicted to be at risk on the
screening measure. A screening measure with high sensitivity would have
a high degree of accuracy. In general, sensitivity and specificity are related
(as one increases the other usually decreases).
Specificity indicates how accurately a screening measure predicts
which students are not at risk. Specificity is calculated by determining the
number of students who do not have a deficit in mathematics and then
examining the percentage of those students predicted to not be at risk on
the screening measure. A screening measure with high specificity would
have a high degree of accuracy. In general, sensitivity and specificity are
related (as one increases the other usually decreases).
Strip diagrams (also called model diagrams and bar diagrams) are
drawings of narrow rectangles that show relationships among quantities.
A validity coefficient serves as an index of the relation between two
measures and can range from -1.0 to 1.0, with a coefficient of .0 meaning
there is no relation between the two scores and increasing positive scores
indicating a stronger positive relation.
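As a sketch, a validity coefficient is simply a Pearson correlation between two sets of scores. The implementation and the score lists below are ours, invented for illustration.

```python
# Pearson correlation as a validity coefficient (illustrative data only).
from math import sqrt

def validity_coefficient(x, y):
    """Pearson correlation between two score lists; ranges from -1.0 to 1.0."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / sqrt(var_x * var_y)

screen_scores = [12, 15, 19, 22, 30]        # hypothetical fall screening scores
test_scores   = [310, 360, 340, 380, 410]   # hypothetical spring test scores
print(round(validity_coefficient(screen_scores, test_scores), 2))  # 0.91
```

A coefficient near 1 here would mean the screening measure ranks students much as the later test does; a coefficient near 0 would mean the two scores are unrelated.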

APPENDIX A. POSTSCRIPT FROM THE INSTITUTE
OF EDUCATION SCIENCES

What Is a Practice Guide?

The health care professions have embraced a mechanism for
assembling and communicating evidence-based advice to practitioners
about care for specific clinical conditions. Variously called practice
guidelines, treatment protocols, critical pathways, best practice guides, or
simply practice guides, these documents are systematically developed
recommendations about the course of care for frequently encountered
problems, ranging from physical conditions, such as foot ulcers, to
psychosocial conditions, such as adolescent development.192
Practice guides are similar to the products of typical expert
consensus panels in reflecting the views of those serving on the panel
and the social decisions that come into play as the positions of
individual panel members are forged into statements that all panel members

192
Field and Lohr (1990).
are willing to endorse. Practice guides, however, are generated under
three constraints that do not typically apply to consensus panels. The
first is that a practice guide consists of a list of discrete
recommendations that are actionable. The second is that those
recommendations taken together are intended to be a coherent approach
to a multifaceted problem. The third, which is most important, is that
each recommendation is explicitly connected to the level of evidence
supporting it, with the level represented by a grade (strong, moderate,
or low).
The levels of evidence, or grades, are usually constructed around the
value of particular types of studies for drawing causal conclusions
about what works. Thus, one typically finds that a strong level of
evidence is drawn from a body of randomized controlled trials, the
moderate level from well-designed studies that do not involve
randomization, and the low level from the opinions of respected
authorities (see table 1, p. 3). Levels of evidence also can be
constructed around the value of particular types of studies for other
goals, such as the reliability and validity of assessments.
Practice guides also can be distinguished from systematic reviews or
meta-analyses such as What Works Clearinghouse (WWC) intervention
reviews or statistical meta-analyses, which employ statistical methods to
summarize the results of studies obtained from a rule-based search of the
literature. Authors of practice guides seldom conduct the types of
systematic literature searches that are the backbone of a meta-analysis,
although they take advantage of such work when it is already published.
Instead, authors use their expertise to identify the most important
research with respect to their recommendations, augmented by a search
of recent publications to ensure that the research citations are up-to-date.
Furthermore, the characterization of the quality and direction of the
evidence underlying a recommendation in a practice guide relies less on a
tight set of rules and statistical algorithms and more on the judgment of
the authors than would be the case in a high-quality meta-analysis.
Another distinction is that a practice guide, because it aims for a
comprehensive and coherent approach, operates with more numerous and
Assisting Students Struggling with Mathematics 79

more contextualized statements of what works than does a typical
meta-analysis.
Thus, practice guides sit somewhere between consensus reports and
meta-analyses in the degree to which systematic processes are used for
locating relevant research and characterizing its meaning. Practice guides
are more like consensus panel reports than meta-analyses in the breadth
and complexity of the topic that is addressed. Practice guides are different
from both consensus reports and meta-analyses in providing advice at the
level of specific action steps along a pathway that represents a more-or-less
coherent and comprehensive approach to a multifaceted problem.

Practice Guides in Education at the Institute of Education Sciences

IES publishes practice guides in education to bring the best available
evidence and expertise to bear on the types of systemic challenges that
cannot currently be addressed by single interventions or programs.
Although IES has taken advantage of the history of practice guides in
health care to provide models of how to proceed in education, education is
different from health care in ways that may require that practice guides in
education have somewhat different designs. Even within health care, where
practice guides now number in the thousands, there is no single template in
use. Rather, one finds descriptions of general design features that permit
substantial variation in the realization of practice guides across
subspecialties and panels of experts.193 Accordingly, the templates for IES
practice guides may vary across practice guides and change over time and
with experience.
The steps involved in producing an IES sponsored practice guide are
first to select a topic, which is informed by formal surveys of practitioners
and requests. Next, a panel chair is recruited who has a national reputation
and up-to-date expertise in the topic. Third, the chair, working in
collaboration with IES, selects a small number of panelists to co-author
the practice guide. These are people the chair believes can work well
together and have the requisite expertise to be a convincing source of
recommendations.

193 American Psychological Association (2002).

IES recommends that at least one of the panelists be a
practitioner with experience relevant to the topic being addressed. The
chair and the panelists are provided a general template for a practice guide
along the lines of the information provided in this appendix. They are also
provided with examples of practice guides. The practice guide panel works
under a short deadline of six to nine months to produce a draft document.
The expert panel members interact with and receive feedback from staff at
IES during the development of the practice guide, but they understand that
they are the authors and, thus, responsible for the final product.
One unique feature of IES-sponsored practice guides is that they are
subjected to rigorous external peer review through the same office that is
responsible for independent review of other IES publications. A critical
task of the peer reviewers of a practice guide is to determine whether the
evidence cited in support of particular recommendations is up-to-date and
that studies of similar or better quality that point in a different direction
have not been ignored. Peer reviewers also are asked to evaluate whether
the evidence grade assigned to particular recommendations by the practice
guide authors is appropriate. A practice guide is revised as necessary to
meet the concerns of external peer reviews and gain the approval of the
standards and review staff at IES. The process of external peer review is
carried out independent of the office and staff within IES that instigated the
practice guide.
Because practice guides depend on the expertise of their authors and
their group decisionmaking, the content of a practice guide is not and
should not be viewed as a set of recommendations that in every case
depends on and flows inevitably from scientific research. It is not only
possible but also likely that two teams of recognized experts working
independently to produce a practice guide on the same topic would
generate products that differ in important respects. Thus, consumers of
practice guides need to understand that they are, in effect, getting the
advice of consultants. These consultants should, on average, provide
substantially better advice than an individual school district might obtain
on its own because the authors are national authorities who have to reach
agreement among themselves, justify their recommendations in terms of
supporting evidence, and undergo rigorous independent peer review of
their product. (Institute of Education Sciences)

APPENDIX B. TECHNICAL INFORMATION ON THE STUDIES

Recommendation 1. Screen All Students to Identify Those at Risk
for Potential Mathematics Difficulties and Provide Interventions
to Students Identified as at Risk

Level of Evidence: Moderate


In making this recommendation, the panel examined reviews of the
technical adequacy of screening measures for students identified as at
risk. The panel rated the level of evidence for
recommendation 1 as moderate because several reviews were available for
evidence on screening measures for younger students. However, there was
less evidence available on these measures for older students. The panel
relied on the standards of the American Psychological Association, the
American Educational Research Association, and the National Council on
Measurement in Education194 for valid screening instruments along with
expert judgment to evaluate the quality of the individual studies and to
determine the overall level of evidence for this recommendation.
Relevant studies were drawn from recent comprehensive literature
reviews and reports195 as well as literature searches of databases using key
terms (such as “formative assessment”). Journal articles summarizing
research studies on screening in mathematics,196 along with summary

194 American Educational Research Association, American Psychological Association, and National Council on Measurement in Education (1999).
195 For example, the National Mathematics Advisory Panel (2008).
196 Gersten et al. (2005); Fuchs, Fuchs, Compton et al. (2007); Foegen et al. (2007).

information provided by the Research Institute on Progress Monitoring197
and the National Center on Progress Monitoring were also used.198
The studies of screening measures all used appropriate correlational
designs.199 In many cases, the criterion variable was some type of
standardized assessment, often a nationally normed test (such as the
Stanford Achievement Test) or a state assessment. In a few cases, however,
the criterion measure was also tightly aligned with the screening
measure.200 The latter set is considered much weaker evidence of validity.
Studies also addressed inter-tester reliability,201 internal consistency,202
test-retest reliability,203 and alternate form reliability.204 Many researchers
discussed the content validity of the measure.205 A few even discussed the
consequential validity206—the consequences of using screening data as a tool
for determining what requires intervention.207 However, these studies all used
standardized achievement measures as the screening measure.
In recent years, a number of studies of screening measures have also
begun to report sensitivity and specificity data.208 Because sensitivity and
specificity provide information on the false negative and false positive
rates, respectively, they are critical in determining the utility of a measure
used in screening decisions linked to resource allocation. Note that work
on sensitivity and specificity in educational screening is in its infancy and
no clear standards have been developed.
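Concretely, sensitivity and specificity are simple ratios over the four possible screening outcomes. A minimal sketch (the student counts here are hypothetical, chosen only to illustrate the computation):

```python
def screening_accuracy(true_pos, false_neg, true_neg, false_pos):
    """Sensitivity and specificity of a screening measure.

    Sensitivity = TP / (TP + FN): the share of truly at-risk students the
    screen flags, so 1 - sensitivity is the false negative rate.
    Specificity = TN / (TN + FP): the share of not-at-risk students the
    screen passes, so 1 - specificity is the false positive rate.
    """
    sensitivity = true_pos / (true_pos + false_neg)
    specificity = true_neg / (true_neg + false_pos)
    return sensitivity, specificity

# Hypothetical results for 100 screened students: of 25 truly at risk,
# the screen flags 13; of 75 not at risk, it correctly passes 63.
sens, spec = screening_accuracy(true_pos=13, false_neg=12,
                                true_neg=63, false_pos=12)
print(sens, spec)  # 0.52 0.84
```

These illustrative counts were chosen to reproduce the .52 sensitivity and .84 specificity reported later in this section for Locuniak and Jordan (2008): roughly half the truly at-risk students are caught, while about 16 percent of not-at-risk students are flagged unnecessarily.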
The remainder of this section presents evidence in support of the
recommendation. We discuss the evidence for measures used in both the
early elementary and upper elementary grades and conclude with a more
in-depth example of a screening study to illustrate critical variables to
consider when evaluating a measure.

197 http://www.progressmonitoring.net/.
198 www.studentprogress.org.
199 Correlational studies are not eligible for WWC review.
200 For example, Bryant, Bryant, Gersten, Scammacca, and Chavez (2008).
201 For example, Fuchs et al. (2003a).
202 For example, Jitendra et al. (2005).
203 For example, VanDerHeyden, Witt, and Gilbertson (2003).
204 For example, Thurber, Shinn, and Smolkowski (2002).
205 For example, Clarke and Shinn (2004); Gersten and Chard (1999); Foegen, Jiban, and Deno (2007).
206 Messick (1988); Gersten, Keating, and Irvin (1995).
207 For example, Compton, Fuchs, and Fuchs (2007).
208 Locuniak and Jordan (2008); VanDerHeyden et al. (2001); Fuchs, Fuchs, Compton et al. (2007).

Summary of Evidence

In the early elementary grades, measures examined included general
outcome measures reflecting a sampling of objectives for a grade level that
focused on whole numbers and number sense. These included areas of
operations and procedures, number combinations or basic facts, concepts,
and applications.209 Measures to assess different facets of number sense—
including measures of rote and strategic counting, identification of
numerals, writing numerals, recognizing quantities, and magnitude
comparisons—were also prevalent.210 Some research teams developed
measures focused on a single aspect of number sense (such as strategic
counting),211 and others developed batteries to create a composite score
from single proficiency measures.212 Still others developed a broader
measure that assessed multiple proficiencies in their screening.213 An
example of a single proficiency embedded in a broader measure is having
students compare magnitudes of numbers. As an individual measure,
magnitude comparison has predictive validity in the .50 to .60 range,214 but
having students make magnitude comparisons is also included in broader
measures. For example, the Number Knowledge Test (NKT)215 requires
students to name the greater of two verbally presented numbers and
includes problems assessing strategic counting, simple addition and
subtraction, and word problems. The broader content in the NKT provided

209 Fuchs, Fuchs, Compton et al. (2007).
210 Gersten, Clarke, and Jordan (2007); Fuchs, Fuchs, Compton et al. (2007).
211 Clarke and Shinn (2004).
212 Bryant, Bryant, Gersten, Scammacca, and Chavez (2008).
213 Okamoto and Case (1996).
214 Lembke et al. (2008); Clarke and Shinn (2004); Bryant, Bryant, Gersten, Scammacca, and Chavez (2008).
215 Okamoto and Case (1996).

stronger evidence of predictive validity216 than did single proficiency
measures.
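The predictive validity coefficients quoted in this section (.50 to .60 for magnitude comparison, for example) are correlations between a screening score and a later criterion score. A sketch of the computation, using invented scores for an invented eight-student sample:

```python
import math

def pearson_r(x, y):
    """Pearson correlation; serves as a predictive validity coefficient
    when x is a fall screening score and y is a later criterion score."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented scores: a magnitude comparison screen (fall) and a criterion
# achievement test (spring) for eight students.
screen = [3, 5, 4, 8, 7, 2, 6, 9]
criterion = [410, 430, 435, 460, 450, 415, 425, 470]
print(round(pearson_r(screen, criterion), 2))
```

A coefficient near the .50 to .60 range means the screen orders students roughly, but far from perfectly, the way the later criterion does, which is why sensitivity and specificity are examined alongside correlations.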
Further information on the characteristics and technical adequacy of
curriculum-based measures (CBM) for use in screening in the elementary
grades was summarized by Foegen, Jiban, and Deno (2007). They
explained that measures primarily assessed the objectives of operations or
the concepts and applications standards for a specific grade level. A
smaller number of measures assessed fluency in basic facts, problem
solving, or word problems. Measures were timed and administration time
varied between 2 and 6 minutes for operations probes and 2 to 8 minutes
for concepts and applications. Reliability evidence included test-retest,
alternate form, internal consistency, and inter-scorer, with most reliabilities
falling between .80 and .90, meeting acceptable standards for educational
decisionmaking. Similar evidence was found for validity with most
concurrent validity coefficients in the .50 to .70 range. Lower coefficients
were found for basic fact measures ranging from .30 to .60. Researchers
have also begun to develop measures that validly assess magnitude
comparison, estimation, and prealgebra proficiencies.217

A Study Evaluating a Mathematics Screening Instrument—Locuniak and
Jordan (2008)
A recent study by Locuniak and Jordan (2008) illustrates factors that
districts should consider when evaluating and selecting measures for use in
screening. The researchers examined early mathematics screening
measures from the middle of kindergarten to the end of second grade. The
two-year period differs from many of the other screening studies in the
area by extending the interval from within a school year (fall to spring) to
across several school years. This is critical because the panel believes
that the longer the interval between administration of the screening
measure and the criterion measure, the more confidence schools can have
that students identified have a significant deficit in mathematics that
requires intervention. The Locuniak and Jordan (2008) study also went beyond

216 Chard et al. (2005).
217 Foegen et al. (2007).

examining traditional indices of validity to also examine specificity and
sensitivity. Greater sensitivity and specificity of a measure help ensure
that schools provide resources to those students truly at risk and not to
students misidentified.
The various measures studied by Locuniak and Jordan (2008) also
reflected mathematics content that researchers consider critical in the
development of a child’s mathematical thinking and that many researchers
have devised screening measures to assess. Included were number sense
measures that assessed knowledge of counting, number combinations,
nonverbal calculation, story problems, number knowledge, and short-term
and working memory. The authors used block regression to examine the
added value of the math measures in predicting achievement above and
beyond measures of cognition, age, and reading ability (block 1), which
accounted for 26 percent of the variance on 2nd grade calculation
fluency. Adding the number sense measures (block 2) increased the
variance explained to 42 percent. Although the research team found
strong evidence for the measures assessing working memory (digit span),
number knowledge, and number combinations, the array of measures
investigated is indicative that the field is still attempting to understand
which critical variables (mathematical concepts) best predict future
difficulty in mathematics. A similar process has occurred in screening for
reading difficulties where a number of variables (such as alphabetic
principle) are consistently used to screen students for reading difficulty.
Using the kindergarten measures with the strongest correlations to grade
2 mathematics achievement (number knowledge and number
combinations), the researchers found rates of .52 for sensitivity and .84
for specificity.
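Block (hierarchical) regression quantifies added predictive value as the increase in R² when a second block of predictors enters the model. A sketch with simulated data (variable names and effect sizes are invented; this is not the study's dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Simulated stand-ins: block 1 covariates (cognition, age, reading) and
# block 2 predictors (number sense measures), predicting grade 2
# calculation fluency.
block1 = rng.normal(size=(n, 3))
block2 = rng.normal(size=(n, 2))
fluency = (block1 @ np.array([0.4, 0.2, 0.3])
           + block2 @ np.array([0.5, 0.4])
           + rng.normal(size=n))

def r_squared(X, y):
    """R^2 from an ordinary least squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_block1 = r_squared(block1, fluency)
r2_both = r_squared(np.column_stack([block1, block2]), fluency)
print(f"block 1 R^2 = {r2_block1:.2f}; added by block 2 = {r2_both - r2_block1:.2f}")
```

In the study's terms, block 1 accounted for 26 percent of the variance and adding the number sense block raised that to 42 percent; the quantity of interest is the increment, not either R² alone.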
Another feature that schools will need to consider when evaluating and
selecting measures is whether the measure is timed. The measures studied
by Locuniak and Jordan (2008) did not include a timing component. In
contrast, general outcome measures include a timing component.218 No

218 Deno (1985).

studies were found by the panel that examined a timed and untimed version
of the same measure.

Recommendation 2. Instructional Materials for Students Receiving
Interventions Should Focus Intensely on In-Depth Treatment of Whole
Numbers in Kindergarten Through Grade 5 and on Rational Numbers in
Grades 4 Through 8. These Materials Should Be Selected by Committee

Level of Evidence: Low


The panel based this recommendation on professional opinion;
therefore, the evidence rating is low. The professional opinion included not
only the views of the panel members, but also several recent consensus
documents that reflect input from mathematics educators and research
mathematicians involved in issues related to K–12 mathematics educa-
tion.219 Each of these documents was influenced to some extent by
comparisons of curricula standards developed by the 50 states in the
United States with nations with high scores on international tests of
mathematics performance, such as the Trends in International Mathematics
and Science Study (TIMSS) and the Program for International Student
Assessment (PISA) (including the Czech Republic, Flemish Belgium,
Korea, and Singapore).220 We note, however, that these international
comparisons are merely descriptive and thus do not allow for causal
inferences. In other words, we do not know whether their more focused
curricula or other factors contribute to higher performance.
We note that some of the other reports we describe here do not directly
address the needs of students who receive interventions to boost their
knowledge of foundational concepts and procedures. However, we
concluded that the focus on coverage of fewer topics in more depth, and

219 Milgram and Wu (2005); National Council of Teachers of Mathematics (2006); National Mathematics Advisory Panel (2008).
220 For more information on the TIMSS, see http://nces.ed.gov/timss/. For more information on PISA, see www.oecd.org.

with coherence, advocated for general education students is as important,
and probably more important, for students who struggle with mathematics.
We could not locate any experimental research that supported our belief,
however. Therefore, we indicate clearly that we are reflecting a growing
consensus of professional opinion, not a convergent body of scientific
evidence—and conclude that the level of evidence is low.

Summary of Evidence
Three seminal publications were consulted in forming our opinion.221
Milgram and Wu (2005) were among the first to suggest that an
intervention curriculum for at-risk students should not be oversimplified
and that in-depth coverage of key topics and concepts involving whole
numbers and then rational numbers was critical for future success in
mathematics. They stressed that mastery of this material was critical,
regardless of how long it takes. Many before them had argued for the
importance of mastering units before proceeding.222 Milgram and
Wu argued that stress on precise definitions and abstract reasoning was
“even more critical for at-risk students” (p. 2). They acknowledged this
would entail extensive practice with feedback and considerable
instructional time.
The National Council of Teachers of Mathematics Curriculum Focal
Points (2006) made a powerful statement about reform of mathematics
curriculum for all students by calling for the end of brief ventures into
many topics in the course of a school year.
The topics it suggests emphasize whole numbers (properties,
operations, problem solving) and especially fractions and related topics
involving rational numbers (proportion, ratio, decimals). The report is
equally clear that algorithmic proficiency is critical for understanding
properties of operations and related concepts and that algorithmic
proficiency, quick retrieval of mathematics facts, and in-depth knowledge
of such concepts as place value and properties of whole numbers are all

221 Milgram and Wu (2005); National Council of Teachers of Mathematics (2006); National Mathematics Advisory Panel (2008).
222 For example, Bloom (1980); Guskey (1984); Silbert, Carnine, and Stein (1989).

equally important instructional goals. This position was reinforced by the
report of the National Mathematics Advisory Panel (2008) two years later,
which provided detailed benchmarks and again emphasized in-depth
coverage of key topics involving whole numbers and rational numbers as
crucial for all students.
In the view of the panel, students in intervention programs need to
master material on whole numbers and rational numbers, and they must
ultimately work with these concepts and principles at an abstract level. We
feel that it is less important for 4th graders in an intervention program to
cover the entire scope and sequence of topics from the year before. Instead,
the aim is to cover the key benchmarks articulated in the National
Mathematics Advisory Panel report, involving whole numbers and rational
numbers that students do not fully grasp, and build proficiencies they lack.

Recommendation 3. Instruction During the Intervention Should Be
Explicit and Systematic. This Includes Providing Models of Proficient
Problem Solving, Verbalization of Thought Processes, Guided Practice,
Corrective Feedback, and Frequent Cumulative Review

Level of Evidence: Strong


The panel judged the level of evidence supporting the recommendation
to be strong.
The panel found six studies223 conducted with low achieving or
learning disabled students224 between 2nd and 8th grades that met WWC
standards or met standards with reservations and included components of
explicit and systematic instruction.225 Appendix Table D1 provides an
overview of the components of explicit instruction in each intervention,

223 Darch, Carnine, and Gersten (1984); Fuchs et al. (2003a); Jitendra et al. (1998); Tournaki (2003); Schunk and Cox (1986); Wilson and Sindelar (1991).
224 These students specifically had difficulties with mathematics.
225 For this practice guide, the components of explicit and systematic mathematics instruction are identified as providing models of proficient problem solving, verbalizing teacher and student thought processes, scaffolded practice, cumulative review, and corrective feedback.

including the use of teacher demonstration (such as verbalization during
demonstration and the use of multiple examples), student verbalization
(either as a procedural requirement or as a response to teacher questions),
scaffolded practice, cumulative review, and corrective feedback. The
relevant treatment and comparison groups compared in each study and the
outcomes found for each domain are included in the table, as are grade-
level, typical session length, and duration of the intervention.
Because of the number of high-quality randomized and quasi-
experimental design studies using explicit and systematic mathematics
instruction across grade levels and diverse student populations, the
frequency of significant positive effects, and the fact that numerous
research teams independently produced similar findings, the panel
concluded that there is strong evidence to support the recommendation to
provide explicit and systematic instruction in tier 2 mathematics
interventions.

Summary of Evidence

Teacher Demonstrations and Think-Alouds


The panel suggests that teachers verbalize the solution to problems as
they model problem solving for students. Tournaki (2003) assessed this
approach by comparing a group of students whose teachers had
demonstrated and verbalized an addition strategy (the Strategy group)
against a group of students whose teacher did not verbalize a strategy (the
Drill and Practice group). As depicted in appendix Table D1, the effects on
an assessment of single-digit addition were significant, positive, and
substantial in favor of the students whose teacher had verbalized a
strategy.226
All six studies examined interventions that included teacher
demonstrations early in the mathematics lessons.227 For example, Schunk

226 Note that during the intervention, students in the Strategy condition were also encouraged to verbalize the problem-solving steps and that this may also be a factor in the success of the intervention. The Tournaki (2003) study is described in more detail below.
227 Darch, Carnine, and Gersten (1984); Fuchs et al. (2003a); Jitendra et al. (1998); Tournaki (2003); Schunk and Cox (1986); Wilson and Sindelar (1991).

and Cox (1986), Jitendra et al. (1998), and Darch, Carnine, and Gersten
(1984) all conducted studies in which instruction began with the teacher
verbalizing the steps to solve sample mathematics problems. Because this
demonstration procedure was used to instruct students in both treatment
and comparison groups, the effects of this component of explicit
instruction cannot be evaluated from these studies. However, the widespread
use of teacher demonstration in interventions that include other
components of explicit instruction supports the panel’s contention that this
is a critical component of explicit instructional practice.
For teacher demonstration, the panel specifically recommends that
teachers provide numerous models of solving easy and hard problems
proficiently. Demonstration with easy and hard problems and the use of
numerous examples were not assessed as independent variables in the
studies reviewed. However, Wilson and Sindelar (1991) did use numerous
examples in instruction for both groups evaluated. The key difference
between the groups was that students in the treatment group were explicitly
taught problem-solving strategies through verbal and visual demonstrations
while students in the comparison group were not taught these strategies.
This study demonstrated substantively important positive effects with
marginal significance in favor of the treatment group.228

Scaffolded Practice
Scaffolded practice, a transfer of control of problem solving from the
teacher to the student, was a component of mathematics interventions in
four of the six studies.229 In each study, the intervention groups that
included scaffolded practice demonstrated significant positive effects;
however, it is not possible to parse the effects of scaffolded instruction
from the other components of explicit instruction in these multicomponent
interventions.

228 For this guide, the panel defined marginally significant as a p-value in the range of .05 to .10. Following WWC guidelines, an effect size greater than 0.25 is considered substantively important.
229 Darch, Carnine, and Gersten (1984); Fuchs et al. (2003a); Jitendra et al. (1998); Tournaki (2003).

Student Verbalization
Three of the six studies230 included student verbalization of problem-
solution steps in the interventions. For example, Schunk and Cox (1986)
assessed the effect of having students verbalize their subtraction problem-
solving steps versus solving problems silently. There were significant and
substantial positive effects in favor of the group that verbalized steps. The
Tournaki (2003) intervention also included student verbalization among
other components and had significant positive effects. Among other
intervention components, Jitendra et al. (1998) included student
verbalization through student responses to a teacher’s facilitative
questions. Again, the effects were substantively important or statistically
significant and positive, but they cannot be attributed to a single
component in this multicomponent intervention.

Corrective Feedback
Four of the six studies included immediate corrective feedback in the
mathematics interventions.231 For example, in the Darch, Carnine, and
Gersten (1984) study, when a student made an error, teachers in the
treatment group would first model the appropriate response, then prompt
the students with questions to correct the response, then reinforce the
problem-solving strategy steps again. In three of the studies,232 the effects
of the corrective feedback component cannot be isolated from the effects of
the other instructional components; however, the effects of the
interventions including corrective feedback were positive and significant.

Cumulative Review
The panel’s assertion that cumulative review is an important
component of explicit instruction is based primarily on expert opinion
because only one study in the evidence base included cumulative review as
a component of the intervention.233 This study had positive significant

230 Schunk and Cox (1986); Jitendra et al. (1998); Tournaki (2003).
231 Darch, Carnine, and Gersten (1984); Jitendra et al. (1998); Tournaki (2003); Schunk and Cox (1986).
232 Darch, Carnine, and Gersten (1984); Jitendra et al. (1998); Tournaki (2003).
233 Fuchs et al. (2003a).

effects in favor of the instructional group that received explicit instruction
in strategies for solving word problems.
In summary, the components of explicit and systematic instruction are
consistently associated with significant positive effects on mathematics
competency, most often when these components are delivered in
combination. An example of a study that examines the effects of a
combination of these components is described here.

A Study of Explicit and Systematic Instruction—Tournaki (2003)


Explicit and systematic instruction is a multicomponent approach, and
an intervention examined in Tournaki (2003) exemplifies several
components in combination. This study was conducted with 42 students in
grade 2 special education classrooms.234 The students, between 8 and 10
years old, were classified as learning disabled with weaknesses in both
reading and mathematics. Twenty-nine were boys, and 13 were girls.
Prior to the intervention, the students completed a pretest assessment
consisting of 20 single-digit addition problems (such as 6 + 3 = __). Internal
consistency of the assessment was high (Cronbach’s alpha of .91). Student
performance on the assessment was scored for accuracy and latency (the
time it took each student to complete the entire assessment). The accuracy
score is a measure of student ability to perform mathematical operations,
and the latency score is an indication of student fluency with single-digit
addition facts. After the intervention, students completed a posttest
assessment that was identical to the pretest.
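Cronbach's alpha, the internal consistency index reported for this assessment, compares summed item variances with the variance of total scores. A sketch with invented right/wrong item scores (not the study's data):

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (examinees x items) matrix of scores."""
    items = np.asarray(item_scores, dtype=float)
    k = items.shape[1]
    sum_item_vars = items.var(axis=0, ddof=1).sum()   # variance per item
    total_var = items.sum(axis=1).var(ddof=1)         # variance of totals
    return k / (k - 1) * (1 - sum_item_vars / total_var)

# Invented 1/0 (right/wrong) scores for six students on four items.
scores = [[1, 1, 1, 1],
          [1, 1, 1, 0],
          [1, 0, 1, 0],
          [0, 1, 0, 0],
          [1, 1, 1, 1],
          [0, 0, 0, 0]]
print(round(cronbach_alpha(scores), 2))  # 0.8
```

Values near .90, as reported for the Tournaki pretest, indicate that the items hang together tightly enough for the total score to be treated as a single measure.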
Students were randomly assigned to one of three groups (two
instruction groups and a comparison group).235 Students in the two
instruction groups met individually with a graduate assistant for a
maximum of eight 15-minute supplemental mathematics sessions on
consecutive school days. Instructional materials for both groups consisted

234 The sample also included 42 grade 2 students from general education classrooms, but only the results for the special education students are presented as relevant to this practice guide.
235 Students in the comparison group received only the pretest and posttest without any supplemental mathematics instruction outside their classroom. Because the scope of the practice guide is examining the effects of methods of teaching mathematics for low-achieving students, the comparison group findings are not included here.

of individual worksheets for each lesson with a group of 20 single-digit
addition problems covering the range from 2 + 2 = __ to 9 + 9 = __.
The Strategy instruction group received explicit and systematic
instruction to improve fact fluency. The instruction began with the teacher
modeling the minimum addend strategy for the students and thinking
aloud. This strategy is an efficient approach for solving single-digit
addition problems (such as 5 + 3 = __). The teacher began by saying,
“When I get a problem, what do I do? I read the problem: 5 plus 3 equals
how many? Then I find the smaller number.” Pointing to the number, the
teacher says, “Three. Now I count fingers. How many fingers am I going to
count? Three.” The teacher counts three fingers, points to the larger
number and says, “Now, starting from the larger number, I will count the
fingers.” The teacher points to the 5, then touches each finger as she says,
“5, 6, 7, 8. How many did I end up with? Eight. I’ll write 8 to the right of
the equal sign.” After writing the number, the teacher finishes modeling by
saying, “I’ll read the whole problem: 5 plus 3 equals 8.”
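For concreteness, the counting-on logic the teacher models above can be sketched as a short function. This is a hypothetical illustration of the minimum addend strategy, not part of the study's materials:

```python
def count_on_from_larger(a: int, b: int) -> int:
    """Minimum addend strategy for single-digit addition:
    find the smaller addend, then count up from the larger
    number once for each 'finger' raised for the smaller one."""
    larger, smaller = max(a, b), min(a, b)
    total = larger
    for _ in range(smaller):  # one finger per count: "5, 6, 7, 8"
        total += 1
    return total

print(count_on_from_larger(5, 3))  # 8, as in the modeled example
```

The efficiency of the strategy lies in always starting from the larger addend, so the student never counts more than the smaller number of steps.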
The teacher and student solved two problems together through
demonstration and structured probing. The student was then asked to solve
a third problem independently while verbalizing the strategy steps aloud.
When a student made an error, the teacher gave corrective feedback. The
student was asked to solve the remaining problems without verbalization
and to work as fast as possible, but when an error occurred, the teacher
interrupted the lesson and reviewed the steps in the strategy.
Students in the Drill and Practice group were asked to solve the
problems as quickly as possible. At the completion of each lesson, the
teacher marked the student’s errors and asked the student to recompute. If
the error persisted, the teacher told the student the correct answer. Results
indicate significant and substantial positive effects in favor of the Strategy group, which received explicit and systematic instruction, relative to the Drill and Practice group, which received a more traditional approach. In this
study, the combination of teacher demonstration, student verbalization, and
corrective feedback was successful in teaching students with mathematics
difficulties to accurately complete single-digit addition problems.
Table D1. Studies of interventions that included explicit instruction and met WWC Standards (with and without reservations)

Source: Authors’ analysis based on studies in table.
a Outcomes are reported as effect sizes. For a p-value < .05, the effect size is significant (*); for a p-value < .10, the effect size is marginally significant (~); for a p-value ≥ .10, the effect size is not significant (n.s.).
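The significance notation used throughout these appendix tables follows fixed p-value thresholds, which can be captured in a small sketch (a hypothetical helper for illustration, not part of the guide):

```python
def significance_marker(p_value: float) -> str:
    """Map a p-value to the tables' notation:
    significant (*), marginally significant (~),
    or not significant (n.s.)."""
    if p_value < 0.05:
        return "*"
    if p_value < 0.10:
        return "~"
    return "n.s."

print(significance_marker(0.03))  # *
print(significance_marker(0.07))  # ~
print(significance_marker(0.20))  # n.s.
```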

Recommendation 4. Interventions Should Include Instruction on Solving Word Problems That Is Based on Common Underlying Structures

Level of Evidence: Strong


The panel rated the level of evidence as strong. We located nine
studies that met the standards of the WWC or met the standards with
reservations and demonstrated support for the practice of teaching students
to solve word problems based on their underlying structures.236 Appendix
Table D2 provides an overview of each of the interventions examined in
these nine studies.
In all nine interventions, students were taught to recognize the
structure of a word problem in order to solve it, and they were taught how
to solve each problem type.237 Six of the studies took the instruction on
problem structure a step further. Students were taught to distinguish
superficial from substantive information in word problems in order to
transfer solution methods from familiar problems they already knew how
to solve to problems that appeared unfamiliar.238 Because of the large number of high-quality randomized studies that examined this practice, and because most of the interventions examined led to significant and positive effects on word problem outcomes for children designated as low achieving and/or learning disabled, we conclude that there is strong evidence to support this recommendation.

236 Jitendra et al. (1998); Xin, Jitendra, and Deatline-Buchman (2005); Darch, Carnine, and Gersten (1984); Fuchs et al. (2003a); Fuchs et al. (2003b); Fuchs, Fuchs, Prentice et al. (2004); Fuchs, Fuchs, Finelli et al. (2004); Fuchs, Seethaler et al. (2008); Fuchs, Fuchs, Craddock et al. (2008).
237 Jitendra et al. (1998); Xin, Jitendra, and Deatline-Buchman (2005); Darch, Carnine, and Gersten (1984); Fuchs et al. (2003a); Fuchs et al. (2003b); Fuchs, Fuchs, Prentice et al. (2004); Fuchs, Fuchs, Finelli et al. (2004); Fuchs, Seethaler et al. (2008); Fuchs, Fuchs, Craddock et al. (2008).
238 Fuchs et al. (2003a); Fuchs et al. (2003b); Fuchs, Fuchs, Prentice et al. (2004); Fuchs, Fuchs, Finelli et al. (2004); Fuchs, Fuchs, Craddock et al. (2008); Fuchs, Seethaler et al. (2008).
96 Russell Gersten, Sybilla Beckmann, Benjamin Clarke et al.

Summary of Evidence

Teach the Structure of Problem Types


In three of the studies, students were taught to identify problems of a
given type and then to design and execute appropriate solution strategies
for each type.239 In one of these interventions, students learned to represent
the problem using a schematic diagram.240 Once students learned to identify
the key problem features and map the information onto the diagram, they
learned to solve for unknown quantities in word problems while still
representing the problem using a schematic diagram. This intervention had
significant and positive effects on a word problem outcome based on a test
of problems similar to those taught during the intervention.
In another intervention that also led to a significant and positive effect on a
word problem outcome, students were taught to discriminate between
multiplication and addition problems, and between multiplication and division
problems.241 To discriminate multiplication from addition problems, students
were taught that if a problem asks them to use the same number multiple times
(sometimes signaled by the words “each” and “every”) to obtain the total
number, the problem requires multiplication. If the problem does not ask the
student to use the same number multiple times to obtain the total number, the
problem requires addition. Next, after students learned the relationship between
multiplication and division through the concept of number families, they
learned to multiply when the two smaller numbers are given without the big
number and to divide when the big number is given.
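The two decision rules taught in this intervention can be sketched in code. The function names and boolean inputs are illustrative assumptions; the study itself taught these rules verbally, not as a program:

```python
def choose_add_or_multiply(uses_same_number_repeatedly: bool) -> str:
    """If the problem asks for the same number to be used multiple
    times (often signaled by 'each' or 'every') to obtain the total,
    multiply; otherwise add."""
    return "multiply" if uses_same_number_repeatedly else "add"

def choose_multiply_or_divide(big_number_given: bool) -> str:
    """Number-family rule: multiply when the two smaller numbers are
    given without the big number; divide when the big number is given."""
    return "divide" if big_number_given else "multiply"

print(choose_add_or_multiply(True))     # multiply
print(choose_multiply_or_divide(True))  # divide
```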

Transfer Solution Methods from Familiar Problem Types to Problems That Appear Unfamiliar
In addition to teaching students to recognize and solve different
problem types, six of these studies taught students how to transfer solution
methods to problems that appear different but really require the same

239 Jitendra et al. (1998); Xin, Jitendra, and Deatline-Buchman (2005); Darch, Carnine, and Gersten (1984).
240 Xin, Jitendra, and Deatline-Buchman (2005).
241 Darch, Carnine, and Gersten (1984).

solution methods as those they already know how to solve.242 In each of these interventions, students were first taught the pertinent structural
features and problem-solution methods for different problem types. Next,
they were taught about superficial problem features that can change a
problem without altering its structure or solution (for example, different
format, different key vocabulary, additional or different question,
irrelevant information) and how to solve problems with varied cover
stories and superficial features.
In all six studies, word problem outcome measures ranged from those
where the only source of novelty was the cover story (immediate transfer),
to those that varied one or more superficial features (near or far transfer).
In five cases243 the average impact of the intervention on these outcome
measures was positive and significant for the samples designated as low
achieving and/or learning disabled, and in one case,244 the impact was
marginally significant. These studies show that instruction on problem
structure and transferring known solution methods to unfamiliar problems
is consistently associated with marginally or statistically significant
positive effects on word problem solving proficiencies for students
experiencing mathematics difficulties.

A Study of Teaching Students to Transfer Solution Methods—Fuchs, Fuchs, Finelli, Courey, and Hamlett (2004)
Fuchs and colleagues (2004) conducted a study that investigated the
effects of teaching students how to transfer known solution methods to
problems that are only superficially different from those they already know
how to solve.245 The authors randomly assigned 24 teachers to three groups:
1) transfer instruction, 2) expanded transfer instruction, and 3) regular

242 Fuchs et al. (2003a); Fuchs et al. (2003b); Fuchs, Fuchs, Prentice et al. (2004); Fuchs, Fuchs, Finelli et al. (2004); Fuchs, Fuchs, Craddock et al. (2008); Fuchs, Seethaler et al. (2008).
243 Fuchs et al. (2003a); Fuchs et al. (2003b); Fuchs, Fuchs, Prentice et al. (2004); Fuchs, Fuchs, Finelli et al. (2004); Fuchs, Fuchs, Craddock et al. (2008).
244 Fuchs, Seethaler et al. (2008).
245 Fuchs, Fuchs, Finelli et al. (2004).

basal instruction (comparison group).246 The 351 students in these 24 classes who were present for each of the pretests and posttests were participants in the study.
The intervention included 25- to 40-minute lessons, approximately
twice per week for 17 weeks.247 Students in the expanded transfer condition
learned basic math problem-solving strategies in the first unit of instruction
(six sessions over three weeks). They were taught to verify that their
answers make sense; line up numbers from text to perform math operations;
check operations; and label their work with words, monetary signs, and
mathematical symbols.
The remaining units each focused on one of four problem types: 1)
shopping list problems (buying multiple quantities of items, each at a different
price); 2) buying bag problems (determining how many bags containing a
specified number of objects are needed to come up with a desired total number
of objects); 3) half problems (determining what half of some total amount
is); and 4) pictograph problems (summing two addends, one derived from a
pictograph). There were seven sessions within each unit.
In sessions one through four, with the help of a poster listing the steps,
students learned problem-solution rules for solving the type of problem
being taught in that particular unit. In the first session, teachers discussed
the underlying concepts related to the problem type, presented a worked
example, and explained how each step of the solution method was applied
in the example. After presenting several worked examples, the teachers
presented partially worked examples while the students applied the steps of
the solution method. Students then completed one to four problems in
pairs. Sessions two through four were similar, but more time was spent on
partially worked examples and practice, and at the end of each session,
students completed a problem independently.

246 Since the comparison between the expanded transfer condition and the control condition (regular basal instruction) is most relevant to this practice guide, we do not discuss the transfer instruction condition here.
247 Although this intervention was taught in a whole-class format, the authors reported separate effects for students classified as low achieving and for students classified as learning disabled; therefore, the results are relevant to this practice guide.

In sessions five and six, teachers taught students how to transfer the
solution methods using problems that varied cover stories, quantities, and
one transfer feature per problem. In session five, the teachers began by
explaining that transfer means to move and presented examples of how
students transfer skills. Then, teachers taught three transfer features that
change a problem without changing its type or solution, including
formatting, unfamiliar vocabulary, and posing a different question. These
lessons were facilitated by a poster displayed in the classroom about the
three ways problems change. Again, teachers presented the information
and worked examples, and moved gradually to partially worked examples
and practice in pairs. Session six was similar to session five, but the
students spent more time working in pairs, and they completed a transfer
problem independently.
In the seventh session, teachers instructed students on three additional
superficial problem features including irrelevant information, combining
problem types, and mixing superficial problem features. Teachers taught
this lesson by discussing how problems encountered in “real life”
incorporate more information than most problems that the students know
how to solve. They used a poster called Real-Life Situations to illustrate
each of these superficial problem features with a worked example. Next,
students worked in pairs to solve problems that varied real-life superficial
problem features and then completed a problem independently.
The authors used four measures to determine the results of their
intervention on word problem-solving proficiencies. The first measure
used novel problems structured the same way as problems used in the
intervention. The second incorporated novel problems that varied from
those used in instruction in terms of the look or the vocabulary or question
asked. The third incorporated novel problems that varied by the three
additional transfer features taught in session seven. The fourth was a
measure designed to approximate real-life problem solving. Although this
intervention was taught in a whole-class format, the authors separated results for students classified as low performing248 and for students classified as learning disabled. The average impacts on these four outcome measures were positive and significant for both the sample designated as low performing and the sample designated as learning disabled. It is notable that the intervention had a positive and significant impact on the far transfer measure (the measure that approximated real-life problem solving). This study demonstrates a successful approach for instructing students with mathematics difficulties on solving word problems and transferring solution methods to novel problems.

Table D2. Studies of interventions that taught students to discriminate problem types that met WWC standards (with or without reservations)

Source: Authors’ analysis based on studies in table.
a Outcomes are reported as effect sizes. For a p-value < .05, the effect size is significant (*); for a p-value < .10, the effect size is marginally significant (~); for a p-value ≥ .10, the effect size is not significant (n.s.).
b Thirteen students in this sample were classified as having a learning disability, one as having mental retardation, eight as having a speech disorder, and two as having attention deficit/hyperactivity disorder.
c Fifteen students in this sample were classified as having a learning disability and five were classified as having an “other” disability.
d Seventeen students in this sample were classified as having a learning disability, five as being educable mentally retarded, and three as being seriously emotionally disturbed.
e Eighteen students in this sample were classified as having a learning disability, one as being seriously emotionally disturbed, and three were not labeled.
f Twenty-two students in this sample were classified as having a learning disability, one as being mildly mentally retarded, one as having a behavior disorder, and three as having speech delay.

Recommendation 5. Intervention Materials Should Include Opportunities for Students to Work with Visual Representations of Mathematical Ideas and Interventionists Should Be Proficient in the Use of Visual Representations of Mathematical Ideas

Level of Evidence: Moderate


The panel judged the level of evidence for this recommendation to be
moderate. We found 13 studies conducted with students classified as
learning disabled or low achieving that met WWC standards or met
standards with reservations.249 Four in particular examined the impact of
tier 2 interventions against regular tier 1 instruction.250 Appendix Table D3
provides an overview of these 13 studies. Note that in an attempt to
acknowledge meaningful effects regardless of sample size, the panel
followed WWC guidelines and considered a positive statistically
significant effect, or an effect size greater than 0.25, as an indicator of
positive effects.251

248 Using pretest scores on the first transfer problem-solving measure, the authors designated each student as low performing, average performing, or high performing.
249 Artus and Dyrek (1989); Darch, Carnine, and Gersten (1984); Fuchs, Powell et al. (2008); Fuchs, Fuchs, Craddock et al. (2008); Fuchs, Seethaler et al. (2008); Fuchs et al. (2005); Jitendra et al. (1998); Butler et al. (2003); Walker and Poteet (1989); Wilson and Sindelar (1991); Witzel, Mercer, and Miller (2003); Witzel (2005); Woodward (2006).
250 Fuchs, Powell et al. (2008); Fuchs, Fuchs, Craddock et al. (2008); Fuchs, Seethaler et al. (2008); Fuchs et al. (2005).
251 For more details on WWC guidelines for substantively important effects, see the What Works Clearinghouse Procedures and Standards Handbook (WWC, 2008).

Summary of Evidence
The representations in 11 of the 13 studies were used mainly to teach
word problems and concepts (fractions and prealgebra).252
In one study, visual representations were used to teach mathematics
facts.253 In all 13 studies, representations were used to understand the
information presented in the problem. Specifically, the representations helped answer such questions as what type of problem it is and what operation is required. In all 13 studies, visual representations were part of a
complex multicomponent instructional intervention. Therefore, it is not
possible to ascertain the role and impact of the representation component.
Of the 13 studies, 4 used visual representations, such as drawings or
other forms of pictorial representations, to scaffold learning and pave the
way for the understanding of the abstract version of the representation.254
Jitendra et al. (1998) examined the differential effects of two instructional
strategies, an explicit strategy using visual representations and a traditional
basal strategy. Students were taught explicitly to identify and differentiate
among word problems types and map the features of the problem onto the
given diagrams specific to each problem type. The intervention
demonstrated a nonsignificant substantively important positive effect.
Wilson and Sindelar (1991) used a diagram to teach students the “big number” rule (e.g., when a big number is given, subtract) (ES = .82~).
Woodward (2006) explored the use of visuals such as a number line to help students understand what an abstract fact such as 6 × 7 = __ meant. The study
yielded a substantively important positive effect on mathematics facts, and
a positive and marginally significant average effect on operations.
Three studies used manipulatives in the early stages of instruction to
reinforce understanding of basic concepts and operations.255 For
example, Darch et al. (1984) used concrete models such as groups of
boxes to teach rules for multiplication problems. Similarly, Fuchs,
252 Jitendra et al. (1998); Butler et al. (2003); Witzel (2005); Darch, Carnine, and Gersten (1984); Fuchs, Powell et al. (2008); Fuchs, Fuchs, Craddock et al. (2008); Fuchs, Seethaler et al. (2008); Fuchs et al. (2005); Walker and Poteet (1989); Wilson and Sindelar (1991); Witzel, Mercer, and Miller (2003).
253 Woodward (2006).
254 Jitendra et al. (1998); Walker and Poteet (1989); Wilson and Sindelar (1991); Woodward (2006).
255 Darch et al. (1984); Fuchs, Seethaler et al. (2008); Fuchs, Fuchs, Craddock et al. (2008).

Fuchs, Craddock et al. (2008) used manipulatives in their tutoring sessions to target and teach the most difficult concepts observed in the classroom. In another study, Fuchs, Seethaler et al. (2008) used concrete
classroom. In another study, Fuchs, Seethaler et al. (2008) used concrete
materials and role playing to help students understand the underlying
mathematical structure of each problem type. In all these studies,
manipulatives were one aspect of a complex instructional package. The
studies resulted in mostly significant positive domain average effect
sizes in the range of .60 to 1.79.
In six studies, both concrete and visual representations were used to
promote mathematical understanding.256 For example, Artus and Dyrek
(1989) used concrete objects (toy cars, empty food wrappers) and visuals
(drawings) to help students understand the story content, action, and
operation in the word problems (ES = .87~). Likewise, Fuchs, Powell et
al. (2008) used manipulatives in the initial stages and later pictorial
representations of ones and tens in their software program (ES = .55, n.s.).
However, in both studies, concrete objects and visual representations were
not part of an instructional format that promoted systematic scaffolded
learning. In other words, instruction did not include fading the
manipulatives and visual representations to promote understanding of
math at the more typical abstract level.
In the remaining four studies, manipulatives and visual representations
were presented to the students sequentially to promote scaffolded
instruction.257 This model of instruction, with its underpinning in Bruner’s
(1966) work, is referred to as a concrete to representation to abstract (CRA) method of instruction. The CRA method is a process by which students learn through the manipulation of concrete objects, then through visual representations of the concrete objects, and then by solving
problems using abstract notation.258 Fuchs et al. (2005) taught 1st grade
students basic math concepts (e.g., place value) and operations initially
using concrete objects, followed by pictorial representations of blocks, and
finally at the abstract level (e.g., 2 + 3 = __) without the use of
256 Artus and Dyrek (1989); Butler et al. (2003); Fuchs, Powell et al. (2008); Fuchs et al. (2005); Witzel (2005); Witzel, Mercer, and Miller (2003).
257 Butler et al. (2003); Fuchs et al. (2005); Witzel (2005); Witzel, Mercer, and Miller (2003).
258 Witzel (2005).

manipulatives or representations. Butler et al. (2003) examined the differential impact of using two types of scaffolded instruction for teaching
fractions, one that initiated scaffolding at the concrete level (concrete-
representation-abstract) and the other that started at the representation level
(representation-abstract). Neither variation resulted in significant
differences.
Witzel (2005) and Witzel, Mercer, and Miller (2003) investigated the
effectiveness of the scaffolded instruction using the CRA method to teach
prealgebra (e.g., X – 4 = 6) to low-achieving students and students with
disabilities. Using an explicit instructional format, Witzel taught students
initially using manipulatives such as cups and sticks. These were replaced
with drawings of the same objects and finally faded to typical abstract
problems using Arabic symbols (as seen in most textbooks and
standardized exams). Both studies resulted in statistically significant or substantively important positive gains (Witzel, Mercer, and Miller, 2003, ES = .83*; Witzel, 2005, ES = .54, n.s.). One of these studies is
described in more detail here.

A Study of CRA Instruction—Witzel, Mercer, and Miller (2003)


In 2003, Witzel, Mercer, and Miller published a study that investigated
the effects of using the CRA method to teach prealgebra.259 The
participants in the study were teachers and students in 12 grade 6 and 7
classrooms in a southeastern urban county. Each teacher taught one of two
math classes using CRA instruction (treatment group) and the other using
abstract-only traditional methods (traditional instruction group). Of those
participating, 34 students with disabilities260 or at risk for algebra
difficulty261 in the treatment group were matched with 34 students with
similar characteristics across the same teacher’s classes in the traditional
instruction group.
259 Witzel, Mercer, and Miller (2003).
260 These students were identified through school services as those who needed additional support, had a 1.5 standard deviation discrepancy between ability and achievement, and had math goals listed in their individualized education plans.
261 These students met three criteria: performed below average in the classroom according to the teacher, scored below the 50th percentile in mathematics on the most recent statewide achievement test, and had socioeconomically disadvantaged backgrounds.
Table D3. Studies of interventions that used visual representations that met WWC standards (with and without reservations)

Source: Authors’ analysis based on studies in table.
a Instructional interventions in all the studies listed were multicomponent in nature, with visuals being one of those components.
b Outcomes are reported as effect sizes. For a p-value < .05, the effect size is significant (*); for a p-value < .10, the effect size is marginally significant (~); for a p-value ≥ .10, the effect size is not significant (n.s.).

The students in both groups were taught to transform equations with single variables using a five-step 19-lesson sequence of algebra equations.
In each session, the teacher introduced the lesson, modeled the new
procedure, guided students through procedures, and began to have students
working independently. For the treatment group, these four steps were used
for instruction at the concrete, representational, and abstract stages of each
concept. Teachers taught the concrete lessons using manipulative objects
such as cups and sticks, the representational lessons using drawings of the
same objects, and the abstract lessons using Arabic symbols. For the
traditional instruction group, the teachers covered the same content for the
same length of time (50 minutes), but the teachers used repeated abstract
lessons rather than concrete objects and pictorial representations.
A 27-item test to measure knowledge on single-variable equations and
solving for a single variable in multiple-variable equations was
administered to the students one week before treatment (pretest), after the
last day of the treatment (posttest), and three weeks after treatment ended
(follow-up). The CRA intervention had a positive and significant effect on
knowledge of the prealgebra concepts assessed.

Recommendation 6. Interventions at All Grade Levels Should Devote About 10 Minutes in Each Session to Building Fluent Retrieval of Basic Arithmetic Facts

Level of Evidence: Moderate


The panel judged the level of evidence supporting the recommendation
to be moderate. We found seven studies conducted with low-achieving or
learning disabled students between grades 1 and 4 that met WWC
standards or met standards with reservations and included fact fluency
instruction in an intervention.262 Appendix Table D4 provides an overview
of the studies and indicates whether fact fluency was the core content of

262 Fuchs, Fuchs, Hamlett et al. (2006); Fuchs, Seethaler et al. (2008); Fuchs et al. (2005); Beirne-Smith (1991); Fuchs, Powell et al. (2008); Tournaki (2003); Woodward (2006).

the intervention or a component of a larger intervention. The relevant treatment and comparison groups in each study and the outcomes for each
domain are included in the table. Grade level, typical session length, and
duration of the intervention are also in the table.
Given the number of high-quality randomized and quasi-experimental
design studies conducted across grade levels and diverse student
populations that include instruction in fact fluency as either an intervention
or a component of an intervention, the frequency of small but substantively
important or significant positive effects on measures of fact fluency and
mathematical operations (effect sizes ranged from .11 to 2.21), and the fact
that numerous research teams independently produced similar findings, the
panel concluded that there is moderate evidence to support the
recommendation to provide instruction in fact fluency for both tier 2 and
tier 3 mathematics interventions across grade levels. The panel
acknowledges that the broader implication that general mathematics
proficiency will improve when fact fluency improves is the opinion of
the panel.

Summary of Evidence
The panel recognizes the importance of knowledge of basic facts
(addition, subtraction, multiplication, and division) for students in
kindergarten through grade 4 and beyond. Two studies examined the effects of teaching mathematics facts relative to the effects of teaching
spelling or word identification using similar methods.263 In both studies, the
mathematics facts group demonstrated substantively important or
statistically significant positive gains in facts fluency relative to the
comparison group, although the effects were significant in only one of
these two studies.264
Another two interventions included a facts fluency component in
combination with a larger tier 2 intervention.265 For example, in the Fuchs
et al. (2005) study, the final 10 minutes of a 40-minute intervention session

263
Fuchs, Fuchs, Hamlett et al. (2006); Fuchs, Powell et al. (2008).
264
In Fuchs, Fuchs, Hamlett et al. (2006), the effects on addition fluency were positive while
there was no effect on subtraction fluency.
265
Fuchs, Seethaler et al. (2008); Fuchs et al. (2005).

were dedicated to practice with addition and subtraction facts. In both studies, tier 2 interventions were compared against typical tier 1 classroom
studies, tier 2 interventions were compared against typical tier 1 classroom
instruction. In each study, the effects on mathematics facts were not
significant. Significant positive effects were detected in both studies in the
domain of operations, and the fact fluency component may have been a
factor in improving students’ operational abilities.

Relationships among Facts


The panel suggests emphasizing relationships among basic facts, and five of the studies examined exemplify this practice.266 In Woodward (2006), the Integrated Strategies group was specifically taught the connection between single-digit facts (e.g., 4 × 2 = __) and extended facts (40 × 2 = __). In Fuchs et al. (2005, 2006c, 2008e), mathematics facts are presented in number families (e.g., 1 + 2 = 3 and 3 – 2 = 1). Beirne-Smith (1991) examined the effects of a counting up/on procedure that highlighted the relationship between facts versus a rote memorization method that did not highlight this relationship.
important nonsignificant positive effect in favor of the group that was
taught the relationship between facts. Note that fact relationships were not
isolated as independent variables in this study.
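The two kinds of fact relationships described above — number families and extended facts — can be made concrete with a small sketch. The function names and output formats are illustrative assumptions, not the studies' own materials:

```python
def number_family(a: int, b: int) -> list[str]:
    """Related facts sharing the same three numbers,
    e.g., 1 + 2 = 3 and 3 - 2 = 1."""
    s = a + b
    return [f"{a} + {b} = {s}", f"{b} + {a} = {s}",
            f"{s} - {b} = {a}", f"{s} - {a} = {b}"]

def extended_fact(a: int, b: int) -> str:
    """Connect a single-digit fact to its extended fact,
    e.g., 4 x 2 = 8 implies 40 x 2 = 80."""
    return f"{a * 10} x {b} = {a * 10 * b}"

print(number_family(1, 2))
print(extended_fact(4, 2))  # 40 x 2 = 80
```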

Materials to Teach Math Facts


The studies used a variety of materials to teach mathematics facts.
Woodward (2006) used worksheets, number lines, and arrays of blocks
projected on overheads to help students visualize fact strategies. Tournaki
(2003) also used worksheets. Three studies included flash cards.267 For
example, the Fuchs, Seethaler et al. (2008) intervention included flash
cards with individual addition and subtraction problems on each card.
Students had up to two minutes to respond to as many cards as they could,
and they were provided with corrective feedback on up to five errors each
session.

266 Beirne-Smith (1991); Fuchs et al. (2005); Fuchs, Fuchs, Hamlett et al. (2006); Fuchs, Seethaler et al. (2008); Woodward (2006).
267 Beirne-Smith (1991); Fuchs, Seethaler et al. (2008); Fuchs, Powell et al. (2008).
Assisting Students Struggling with Mathematics 111

Three studies included computer assisted instruction to teach
mathematics facts.268 In all three interventions, students used a computer
program designed to teach addition and subtraction facts. In this program,
a mathematics fact was presented briefly on the computer screen. When the
fact disappeared, the student typed the fact. If the student made an error,
the correct fact was displayed with accompanying audio, and the student
had the opportunity to type the fact again. For two of the studies, the
duration of the presentation on the screen was tailored to the student’s
performance (with less time as the student gained proficiency) and the
difficulty of facts increased as competency increased.269
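The tailored presentation time described above can be sketched as a simple update rule. The function, step sizes, and floor below are assumptions for illustration only; the published interventions tailored duration to performance but did not report an exact rule:

```python
def next_duration(current_ms, correct, floor_ms=400, step_ms=100):
    """Shorten the on-screen presentation after a correct answer,
    and relax it after an error, never dropping below floor_ms."""
    if correct:
        return max(floor_ms, current_ms - step_ms)
    return current_ms + step_ms

# Starting from a 1.3-second presentation, two correct answers, one
# error, and another correct answer leave the duration at 1,100 ms.
duration = 1300
for answer_correct in [True, True, False, True]:
    duration = next_duration(duration, answer_correct)
print(duration)  # 1100
```

The same pattern (adjust one parameter per response, bounded below) could govern the difficulty of the facts presented as competency increases.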

Time
The panel advocates dedicating about 10 minutes a session to building
fact fluency in addition to the time dedicated to tier 2 and tier 3
interventions. The seven studies supporting this recommendation dedicated
a minimum of 10 minutes a session to fact fluency activities.

Explicit Teaching Strategies for Building Fact Fluency


Another three studies in the evidence base address methods for teaching
basic facts to students by comparing instructional approaches.270 Both
Beirne-Smith (1991) and Tournaki (2003) investigated the effects of being
taught a counting up/on strategy relative to a rote memorization procedure
for promoting fact fluency. In the Beirne-Smith (1991) study, perhaps not
surprisingly as both interventions were taught to enhance fact fluency, the
addition facts competency of students in both groups improved. However,
there was a substantively important nonsignificant positive effect in favor
of the counting-on group when the two groups were compared. In the
Tournaki (2003) study, the latency of responses on a fact fluency posttest
decreased271 while the accuracy of posttest responses significantly
increased for the counting-on group relative to the rote memorization
group.

268 Fuchs et al. (2005); Fuchs, Fuchs, Hamlett et al. (2006); Fuchs, Powell et al. (2008).
269 Fuchs et al. (2005); Fuchs, Fuchs, Hamlett et al. (2006).
270 Beirne-Smith (1991); Tournaki (2003); Woodward (2006).
271 The latency decrease was marginally significant. A decrease in latency indicates that students in the counting-on group were answering fact problems more quickly than students in the rote memorization group.
Similarly, Woodward (2006) examined an integrated approach that
combined instruction in strategies, visual representations, and timed
practice drills (the Integrated Strategies group) versus a traditional timed
drill and practice approach for building multiplication fact fluency (the
Timed-Practice Only group). In the Integrated Strategies group, difficult
facts were taught through derived fact strategies or doubling and doubling
again strategies. When WWC multiple comparison adjustments were
applied to outcomes, none of the multiplication fact outcomes were
significant, though effects were substantively important and positive in
favor of the integrated approach.272 The operations domain showed mixed
effects with approximation scores in favor of the integrated approach and
operations scores in favor of the Timed-Practice Only group.
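The WWC's multiple comparison adjustment, which guards against overstating significance when many outcomes are tested at once, is commonly implemented as the Benjamini–Hochberg procedure; a minimal sketch, with the significance level as an assumed parameter:

```python
def benjamini_hochberg(p_values, alpha=0.05):
    """Return, for each finding, whether it survives a Benjamini-Hochberg
    correction: find the largest rank k (in ascending p-value order) with
    p_(k) <= (k / m) * alpha, and accept all findings ranked at or below k."""
    m = len(p_values)
    order = sorted(range(m), key=lambda i: p_values[i])
    max_k = 0
    for rank, i in enumerate(order, start=1):
        if p_values[i] <= rank / m * alpha:
            max_k = rank
    survives = [False] * m
    for rank, i in enumerate(order, start=1):
        if rank <= max_k:
            survives[i] = True
    return survives

# With three outcomes, only the smallest p-value clears its threshold here.
print(benjamini_hochberg([0.01, 0.2, 0.04]))  # [True, False, False]
```

Note that an individually significant p-value (e.g., .04) can fail the corrected threshold once several outcomes are tested together, which is the situation the Woodward (2006) adjustment addresses.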
In summary, the evidence demonstrates substantively important or
statistically significant positive effects for including fact fluency activities
as either stand-alone interventions or components of larger tier 2
interventions. However, because these effects did not consistently reach
statistical significance, the panel is cautious and acknowledges that the
level of evidence for this recommendation is moderate. There is also
evidence that strategy-based instruction for fact fluency (e.g., teaching the
counting-on procedure) is superior to rote memorization.
Further, many of the studies included here taught the relationships among
facts, used a variety of materials such as flash cards and computer assisted
instruction, and taught math facts for a minimum of 10 minutes a session.
Although these components of the interventions were most often not inde-
pendent variables in the studies, they are all advocated by the panel. An
example of a study investigating the effects of a fact fluency intervention is
detailed here.

272 When a study examines many outcomes or findings simultaneously, the statistical significance of findings may be overstated. The WWC makes a multiple comparison adjustment to prevent drawing false conclusions about the number of statistically significant effects (WWC, 2008).

A Study of a Fact Fluency Intervention—Fuchs, Powell, Hamlett,
and Fuchs (2008)
This study was conducted with 127 students in grade 3 classrooms in
Tennessee and Texas.273 The students were all identified as having either
math difficulties or math and reading difficulties.
Before the intervention, the students completed several pretest
assessments. The assessment that related to fact retrieval consisted of one
subtest with three sets of 25 addition fact problems and a second subtest
with three sets of 25 subtraction fact problems. Students had one minute to
write answers for each set within a subtest. Internal consistency for the sets
ranged between .88 and .93. Scores on sets of items were combined into a
single fact retrieval score. After the intervention, students completed this
same fact retrieval assessment among a battery of posttests.
Students were randomly assigned to one of four groups (three
mathematics instruction groups and a reading instruction comparison
group). For this recommendation, we report only on the comparison
between the Fact Retrieval group (n = 32) and the Word Identification
comparison group (n = 35).274 Students in both groups met individually
with a tutor275 for 15 to 18 minutes during three sessions each week for 15
weeks.
Sessions for the Fact Retrieval instruction group consisted of three
activities. First, the students received computer assisted instruction (CAI).
In the computer program, an addition or subtraction mathematics fact
appeared on the screen for 1.3 seconds. When the fact disappeared, the
student typed the fact from short-term memory. A number line illustrated
the mathematics fact on the screen with colored boxes as the student typed.
If the student typed the fact correctly, applause was heard, and the student
was awarded points. Each time the student accumulated five points,
animated prizes (e.g., a picture of a puppy) appeared in the student’s
animated “treasure chest.” If the student typed the mathematics fact
incorrectly, the fact reappeared and the student was prompted to type it
again.

273 This study met standards with reservations because of high attrition. The sample initially included 165 students randomized to the conditions and 127 in the postattrition sample. The authors did demonstrate baseline equivalence of the postattrition sample.
274 The third group was Procedural/Estimation Tutoring, which targeted computation of two-digit numbers. The fourth group was a combination of Procedural/Estimation Tutoring and Fact Retrieval.
275 There were 22 tutors. Some were masters or doctoral students. Most had teaching or tutoring experience.

Table D4. Studies of interventions that included fact fluency practices that met WWC standards (with and without reservations)

Source: Authors’ analysis based on studies in table.
a. Intervention means that the entire intervention was on math facts. Component means that math facts were just one part of the intervention.
b. Outcomes are reported as effect sizes. For a p-value < .05, the effect size is significant (*); for a p-value < .10, the effect size is marginally significant (~); for a p-value ≥ .10, the effect size is not significant (n.s.).
The second instructional activity, flash card practice, began after 7.5
minutes of CAI. Flash card practice with corrective feedback included two
types of flash cards. The first set of flash cards depicted written facts
without answers. Students were encouraged to answer as many problems
as possible in two minutes. After three consecutive sessions with a
minimum of 35 correct responses, the student was presented with a second
set of flash cards that contained a number line similar to the CAI number
line. The student was asked to respond with the appropriate mathematics
facts to accompany the number line for as many cards as possible within
the time frame. Corrective feedback was provided for a maximum of five
errors per flash card activity. The third activity during Fact Retrieval
instruction focused on cumulative review. Students were allotted two
minutes to complete 15 mathematics fact problems using paper and pencil.
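The mastery criterion for advancing from the first to the second set of flash cards (at least 35 correct responses in three consecutive sessions) can be expressed directly; the function name and parameters below are this sketch's own:

```python
def should_advance(session_scores, criterion=35, streak_needed=3):
    """True once the most recent `streak_needed` sessions each met the criterion."""
    if len(session_scores) < streak_needed:
        return False
    return all(score >= criterion for score in session_scores[-streak_needed:])

print(should_advance([40, 36, 35]))  # True: three consecutive sessions at or above 35
print(should_advance([40, 34, 36]))  # False: the streak was broken
```

Requiring a streak rather than a single good session protects against advancing a student on the basis of one lucky day.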
Students in the Word Identification comparison group received
computer assisted instruction and participated in repeated reading with
corrective feedback during their sessions. The content was tailored to the
student’s reading competency level as determined by a pretest.
Results indicated significant positive effects on fact fluency in favor of
the group that received fact retrieval instruction relative to the comparison
group that received instruction in word identification. These results suggest
that it is possible to teach struggling students mathematics facts in as little
as 45 minutes of instruction a week when using flash cards and CAI.

Recommendation 7. Monitor the Progress of Students Receiving
Supplemental Instruction and Other Students Who Are at Risk

Level of Evidence: Low


The panel rated the level of evidence for this recommendation as low.
The panel relied on the standards of the American Psychological
Association, the American Educational Research Association, and the
National Council on Measurement in Education276 for valid assessment
instruments, along with expert judgment, to evaluate the quality of the
individual studies and to determine the overall level of evidence for this
recommendation.
Evidence for the recommendation included research studies on
mathematics progress monitoring,277 summary reviews of mathematics
progress monitoring research,278 and summary information provided by the
Research Institute on Progress Monitoring279 and the National Center on
Progress Monitoring.280 Very little research evidence specifically addresses
the use of mathematics progress monitoring data within the context of RtI.
Most research on mathematics progress monitoring measures falls into
two categories. One group of studies examines the technical adequacy of
the measures, including their reliability, validity, and sensitivity to growth.
The second investigates teacher use of the measures to modify instruction
for individual students in order to enhance achievement; the bulk of this
second body of research has been conducted in special education
settings and therefore is less relevant to the RtI focus of this practice guide.
As a result, we focus on the technical adequacy studies in this appendix.
Note that because similar and often identical measures are used for
screening and progress monitoring, many of the studies reviewed here
overlap with those discussed for recommendation 1 on screening. The
same measure may be used as both a screening measure and a progress
monitoring measure; however, the psychometric properties of these
measures are more firmly established when used as screening measures,
with fewer researchers investigating the function of the measures for
modeling growth when used for progress monitoring. This disparity in the
research base leads the panel to assign a moderate level of evidence to
Recommendation 1 and a low level of evidence to Recommendation 7.

276 American Educational Research Association, American Psychological Association, and National Council on Measurement in Education (1999).
277 Clarke et al. (2008); Foegen and Deno (2001); Fuchs et al. (1993); Fuchs, Fuchs, Hamlett, Thompson et al. (1994); Leh et al. (2007); Lembke et al. (2008).
278 Foegen, Jiban, and Deno (2007).
279 www.progressmonitoring.net/.
280 www.studentprogress.org/.
The technical adequacy studies of mathematics progress monitoring
measures were not experimental; the researchers typically used
correlational techniques to evaluate the reliability and criterion validity of
the measures and regression methods to examine sensitivity to growth. If
progress monitoring measures are to be deemed trustworthy, relevant
empirical evidence includes data on reliability, concurrent criterion
validity, and sensitivity to growth. Evidence of reliability generally
includes data on inter-scorer agreement,281 internal consistency,282 test-
retest reliability,283 and alternate form reliability.284 Evidence of concurrent
criterion validity is gathered by examining relations between scores on the
progress monitoring measures and other indicators of proficiency in
mathematics. Common criterion measures include scores on group and
individual standardized tests of mathematics, course grades, and teacher
ratings.285
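Each of these coefficients is, at bottom, a correlation between two score vectors: two alternate forms, two scorers, or a progress monitoring measure and a criterion. A minimal sketch of the underlying computation, using hypothetical scores:

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation between two sets of scores -- the statistic behind
    alternate-form reliability and concurrent criterion validity coefficients."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / (sx * sy)

# Hypothetical scores on two alternate forms of the same probe; a
# coefficient near .8 or above would be read as strong reliability.
form_a = [12, 15, 9, 20, 17, 11]
form_b = [13, 14, 10, 19, 18, 10]
print(round(pearson_r(form_a, form_b), 2))  # 0.96
```

Substituting criterion scores (e.g., a standardized achievement test) for the second form yields a concurrent criterion validity coefficient instead.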
Although issues of reliability and criterion validity are common to both
screening and progress monitoring measures, a critical feature specific to
progress monitoring is that the measures be sensitive to growth. If teachers
are to use progress monitoring measures to evaluate the effects of in-
struction on student learning, researchers must provide evidence that
student scores on the measures change over time, thus providing an
indication of their learning. Most research studies have examined sen-
sitivity to growth by administering parallel forms of a measure across a
period of several weeks or months. In some studies, students receive
typical instruction, and in others, teachers adapt and refine the instruction
in response to the progress monitoring data (often in special education
contexts). In either case, evidence of sensitivity to growth typically
involves computing regression equations to determine slopes of
improvement and reporting these as mean weekly growth rates for a group
of students. As an example, if a progress monitoring measure has a mean
weekly growth rate of .5, teachers could expect that, on average, a
student’s score would increase by 1 point every two weeks. Growth rates
reported in the literature vary considerably across measures and grade
levels; no established standards exist for acceptable rates of student growth
under typical instruction.

281 For example, Fuchs, Fuchs, Hamlett, Thompson et al. (1994).
282 For example, Jitendra, Sczesniak, and Deatline-Buchman (2005).
283 For example, Clarke and Shinn (2004).
284 VanDerHeyden et al. (2001).
285 For example, Foegen and Deno (2001); Fuchs et al. (2003a); Chard et al. (2005).
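The slope computation behind these growth rates can be sketched as an ordinary least-squares regression of scores on week number; the data below are hypothetical:

```python
def weekly_growth_rate(scores):
    """Least-squares slope of weekly scores regressed on week number (0, 1, 2, ...)."""
    n = len(scores)
    mean_week = (n - 1) / 2
    mean_score = sum(scores) / n
    numerator = sum((w - mean_week) * (s - mean_score) for w, s in enumerate(scores))
    denominator = sum((w - mean_week) ** 2 for w in range(n))
    return numerator / denominator

# Hypothetical weekly probe scores gaining half a problem per week:
# a slope of .5 implies roughly 1 additional point every two weeks.
print(weekly_growth_rate([10, 10.5, 11, 11.5, 12]))  # 0.5
```

Studies that regress scores on calendar days instead, as in the Fuchs, Fuchs, Hamlett, Thompson et al. (1994) example discussed later, convert the daily slope to a weekly rate before reporting it.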
We discuss the evidence for measures used across the elementary and
middle school grades and conclude with a more in-depth example of a
technical adequacy study of mathematics progress monitoring measures.

Summary of Evidence

Progress Monitoring in the Primary Grades


Measures for the primary grades typically reflect aspects of number
sense, including strategic counting, numeral identification, and magnitude
comparison. Among the studies examining sensitivity to growth in the
early grades, researchers have relied on separate measures for each of the
different aspects of numeracy.286 Other researchers have combined
individual measures to create composite scores287 or used more
comprehensive multiskill measures.288 But so far, the focus of these studies
has been on screening rather than on progress monitoring. Reliability coef-
ficients for these measures generally exceed .85. Concurrent criterion
validity coefficients with standardized achievement tests are generally in
the .5 to .7 range.289 Mean rates of weekly growth reported in the literature
vary widely, ranging from .1 to .3290 problems a week to .2 to more than 1.0
problems.291

286 For example, Clarke et al. (2008); Lembke et al. (2008).
287 Bryant, Bryant, Gersten, Scammacca, and Chavez (2008).
288 Fuchs, Fuchs, Compton et al. (2007).
289 Chard et al. (2005); Clarke and Shinn (2004); Clarke et al. (2008); Lembke et al. (2008).

Progress Monitoring in the Elementary Grades


Two types of measures have been investigated for monitoring the
mathematics learning of students in the elementary grades. The bulk of the
research, conducted by a research group led by Dr. Lynn Fuchs,
investigates the characteristics of general outcome measures that represent
grade-level mathematics curricula in computation and in mathematics
concepts and applications.292 These measures were developed in the late
1980s and early 1990s, reflecting the Tennessee state elementary
mathematics curriculum of that time. The measures continue to be widely
used and are recommended by the National Center for Student Progress
Monitoring.293 Teachers should carefully examine the content of the
measures to ensure that they are representative of the existing mathematics
curricula in their states and districts.
The second type of measure is not broadly representative of the
instructional curriculum as a whole, but instead serves as an indicator of
general proficiency in mathematics. Examples of such measures include
basic facts (number combinations)294 and word problem solving.295 Because
the general outcome measures are representative of the broader curriculum,
they offer teachers more diagnostic information about student performance
in multiple aspects of mathematics competence; this advantage is often
gained by using measures that are longer and require more administration
time. The indicator measures are more efficient to administer for regular
progress monitoring but may not be as useful for diagnostic purposes.
Evidence of the reliability of the measures is generally strong, with
correlation coefficients above .8, except for the word problem-solving
measures developed by Jitendra’s research team, which are slightly
lower.296 Concurrent criterion validity measures have included group and
individual achievement tests. Validity correlation coefficients range widely
across measure types and grade levels. At the lower end, Espin et al.
(1989) found correlations between the Wide Range Achievement Test and
basic fact measures in the .3 to .5 range. In contrast, Fuchs, Hamlett, and
Fuchs (1998) found correlations between the Stanford Achievement Test
Math Computation subtest297 and general outcome measures of
computation to range from .5 to .9 across grades 2 through 5. In general,
concurrent criterion validity coefficients for elementary mathematics
progress monitoring measures are in the .5 to .6 range.

290 Lembke et al. (2008).
291 Chard et al. (2005).
292 Fuchs and Fuchs (1998); Fuchs, Hamlett, and Fuchs (1998).
293 www.studentprogress.org.
294 Espin et al. (1989); VanDerHeyden, Witt, and Naquin (2003).
295 Jitendra, Sczesniak, and Deatline-Buchman (2005); Leh et al. (2007).
Evidence of sensitivity to growth for elementary measures exists for
the computation and concepts/applications measures developed by Fuchs
and for the word problem-solving measures developed by Jitendra. Mean
growth rates for the Fuchs measures range from .25 to .70. A study by
Shapiro and colleagues,298 using the same measures for students with
disabilities, resulted in mean growth rates of .38 points per week for both
types of measures. Mean weekly growth rates for the Jitendra measures
were .24 points per week.

Progress Monitoring in Middle School


Less evidence is available to support progress monitoring in middle
school.299 Research teams have developed measures focusing on math
concepts typically taught in middle school,300 basic facts,301 and estimation.302
Criterion validity across the types of measures varies, but the majority of
correlation coefficients fall in the .4 to .5 range. Helwig and colleagues303
found higher correlation coefficients with high-stakes state tests in the
range of .6 to .8. Reliability estimates including alternate form, inter-rater,
and test-retest were all of sufficient quality. Greater rates of growth were
found for the non–concept-based measures, with rates around .25 units per
week. A recent study304 compared these measure types with two grade 6
measures305 similar to the measures described above, assessing student
understanding of operations and concepts for their grade level. In this case,
middle school students in grades 6, 7, and 8 were assessed using multiple
measures. Evidence was found that even into grades 7 and 8, grade 6
measures focusing on operations and mathematical concepts still show
reliability, validity, and sensitivity to growth.

296 Jitendra, Sczesniak, and Deatline-Buchman (2005); Leh et al. (2007).
297 Gardner, Rudman, Karlsen, and Merwin (1982).
298 Shapiro, Edwards, and Zigmond (2005).
299 Foegen (2008).
300 Helwig, Anderson, and Tindal (2002).
301 Espin et al. (1989).
302 Foegen and Deno (2001); Foegen (2000).
303 Helwig, Anderson, and Tindal (2002).

An Example of a Study of the Technical Adequacy of Mathematics
Progress Monitoring Measures—Fuchs, Fuchs, Hamlett, Thompson,
Roberts, Kupek, and Stecker (1994)
A study conducted by Fuchs, Fuchs, Hamlett, Thompson, Roberts,
Kupek, and Stecker (1994) illustrates the type of technical adequacy
evidence that education professionals can use when evaluating and
selecting mathematics progress monitoring measures. The research team
examined the technical features of grade-level general outcome measures
of mathematics concepts and applications. The measures were developed
by analyzing the Tennessee mathematics curriculum at grades 2 through 6
to identify critical objectives essential for mathematics proficiency at each
grade level. The researchers created 30 alternate forms at each grade level
and conducted numerous pilot tests to refine the items and determine
appropriate time limits for administration.
A total of 140 students in grades 2 through 4 participated in the study,
completing weekly versions of the measures for 20 weeks. All students
were enrolled in general education classrooms; about 8 percent of the
students had been identified as having learning disabilities. The students’
general education teachers administered the measures using standardized
procedures, including administration time limits of 6 to 8 minutes, de-
pending on grade level.
The results of the study illustrate the types of data educators should
consider to determine if a mathematics progress monitoring measure is
trustworthy. The authors report evidence of the reliability of the concepts
and applications measures by describing internal consistency coefficients
for students at each grade level (which ranged from .94 to .98). Concurrent
criterion validity was examined by computing correlations between student
scores on the concepts and applications general outcome measures and
their scores on three subscales of the Comprehensive Test of Basic Skills
(Computation, Concepts and Applications, and Total Math Battery).
Results are reported for each subscale at each of the three grade levels,
with coefficients ranging from .63 to .81. Considering these results in the
general context of mathematics progress monitoring measures summarized
above, teachers could feel confident that the concepts and applications
measures demonstrated strong levels of reliability and criterion validity in
this study.

304 Foegen (2008).
305 Fuchs, Hamlett, and Fuchs (1998); Fuchs et al. (1999).
A final consideration is the degree to which the measures are sensitive
to student growth. To explore this feature, the researchers completed a
least-squares regression analysis between calendar days and scores on the
progress monitoring measures; the scores were then converted to represent
weekly rates of improvement. The results ranged from an average increase
of .40 problems per week in grade 2 to .69 in grade 4. Together with the
evidence of reliability and criterion validity, the mean growth rate data
suggest that teachers can have confidence that students will show
improvements in their scores on the measures as their mathematics learn-
ing progresses.
One factor not evident in the technical adequacy data in this study, but
critical for teachers to consider, is the alignment between the general
outcome measure, the instructional curriculum, and expected learning
outcomes. This study produced strong technical adequacy data when it was
conducted in the early 1990s. Teachers considering alternative
mathematics progress monitoring measures to represent the instructional
curriculum are advised to review the content of these measures in light of
current learning expectations for students at each grade level. Given
changes in mathematics curricula over the past 10 to 15 years, it is impor-
tant to evaluate the degree to which the measures continue to represent
important mathematics outcomes.

Recommendation 8. Include Motivational Strategies in Tier 2
and Tier 3 Interventions

Level of Evidence: Low


The panel judged the level of evidence supporting this
recommendation to be low. The panel found nine studies306 conducted with
low-achieving or learning disabled students between grades 1 and 8 that
met WWC standards or met standards with reservations and included
motivational strategies in an intervention. However, because only two of
these studies investigated a motivational strategy in a tier 2 or tier 3
mathematics intervention as an independent variable, the panel concluded
that there is low evidence to support the recommendation. The panel
recommends this practice for students in tier 2 and tier 3 based both on our
opinion and on the limited evidence base.307 The evidence base is described
below.

Summary of Evidence
Reinforce effort. The panel advocates reinforcing or praising students
for their effort. Schunk and Cox (1986) examined the effects of providing
effort-attributional feedback (e.g., “You’ve been working hard”) during
subtraction instruction versus no effort feedback and found significant
positive effects on subtraction posttests in favor of providing effort
feedback. This study, described in greater detail below, was one of two
studies in the evidence base that examined a motivational strategy as an
independent variable.
Reinforce engagement. The panel also recommends reinforcing
students for attending to and being engaged in lessons. In two of the
studies, students received “points” for engagement and attentiveness as
well as for accuracy.308 Accumulated points could be applied toward
“purchasing” tangible reinforcers. It is not possible to isolate the effects of
reinforcing attentiveness in the studies. In Fuchs et al. (2005), both the
treatment and comparison groups received reinforcement, and in Fuchs,
Fuchs, Craddock et al. (2008), the contrast between reinforcement and no
reinforcement was not reported. But the presence of reinforcers for
attention and engagement in these two studies echoes the panel’s
contention that providing reinforcement for attention is particularly
important for students who are struggling.

306 Fuchs et al. (2005); Fuchs, Fuchs, Craddock et al. (2008); Fuchs, Seethaler et al. (2008); Artus and Dyrek (1989); Fuchs et al. (2003b); Fuchs, Fuchs, Hamlett, Phillips et al. (1994); Fuchs, Fuchs, Finelli et al. (2006); Heller and Fantuzzo (1993); Schunk and Cox (1986).
307 The scope of this practice guide limited the evidence base for this recommendation to studies that investigated mathematics interventions for students with mathematics difficulties and included motivational components. There is an extensive literature on motivational strategies outside the scope of this practice guide, and the panel acknowledges that there is considerable debate in that literature on the use of rewards as reinforcers.

Consider Rewarding Accomplishments


The panel recommends that interventionists consider using rewards to
acknowledge accurate work and possibly notifying parents when students
demonstrate gains. In three of the studies, students were provided prizes as
tangible reinforcers for accurate mathematics problem solving.309 In both
Fuchs et al. (2005) and Fuchs, Seethaler et al. (2008), students in tier 2
tutoring earned prizes for accuracy. In both studies, the tier 2 intervention
group demonstrated substantively important positive and sometimes
significant gains relative to the students who remained in tier 1. But it is
not possible to isolate the effects of the reinforcers from the provision of
tier 2 tutoring. In Fuchs, Fuchs, Craddock et al. (2008), the authors note
that the provision of “dollars” that could be exchanged for prizes was more
effective than rewarding students with stickers alone. Because this was not
the primary purpose of the study, the reporting of the evidence for that
finding was not complete; therefore, a WWC review was not conducted for
that finding.
In a fourth study, Heller and Fantuzzo (1993) examined the impacts of
a parental involvement supplement to a mathematics intervention. The
parental involvement component included parents providing rewards for
student success as well as parental involvement in the classroom. The
performance of students who received the parental involvement component
in addition to the school-based intervention significantly exceeded the
performance of students in only the school-based intervention. Because the
parental involvement component was multifaceted, it is not possible to
attribute the statistically significant positive effects to rewards alone.

308 Fuchs et al. (2005); Fuchs, Fuchs, Craddock et al. (2008).
309 Fuchs et al. (2005); Fuchs, Seethaler et al. (2008); Fuchs, Fuchs, Craddock et al. (2008).

Allow Students to Chart Their Progress and to Set Goals for
Improvement
Five studies included interventions in which students graphed their
progress and in some cases set goals for improvement on future as-
sessments.310 One experimental study examined the effects of student
graphing and goal setting as an independent variable and found
substantively important positive nonsignificant effects in favor of students
who graphed and set goals.311 In two studies, the interventions included
graphing in both groups being compared; therefore, it was not possible to
isolate the effects of this practice.312 In another two studies, students in the
treatment groups graphed their progress as one component of
multicomponent interventions.313 Although it is not possible to discern the
effect of graphing alone, in Artus and Dyrek (1989), the treatment group
made marginally significant gains over the comparison group on a general
mathematics assessment, and in Fuchs, Seethaler et al. (2008), there were
substantively important positive nonsignificant effects on fact retrieval in
favor of the treatment group.
In summary, the evidence base for motivational components in studies of
students struggling with mathematics is limited. One study that met evidence
standards demonstrated benefits for praising struggling students for their
effort. Other studies included reinforcement for attention, engagement, and
accuracy. Because the effects of these practices were not examined as
independent variables, no inferences can be drawn about effectiveness based
on these studies. Because this recommendation is based primarily on the
opinion of the panel, the level of evidence is identified as low.

310
Artus and Dyrek (1989); Fuchs, Seethaler et al. (2008); Fuchs et al. (2003b); Fuchs, Fuchs,
Hamlett, Phillips et al. (1994); Fuchs, Fuchs, Finelli et al. (2006).
311
Fuchs et al. (2003b).
312
Fuchs, Fuchs, Hamlett, Phillips et al. (1994); Fuchs, Fuchs, Finelli et al. (2006).
313
Artus and Dyrek (1989); Fuchs, Seethaler et al. (2008).
126 Russell Gersten, Sybilla Beckmann, Benjamin Clarke et al.

An Example of a Study That Investigated a Motivational Component—Schunk and Cox (1986)

This study was conducted with 90 students in grades 6 through 8
classrooms in six schools in Texas. The mean age of the students was 13
years and 7 months, and all students were identified as having learning
disabilities in mathematics.
Before the intervention, the students completed a pretest assessment
that consisted of 25 subtraction problems that required regrouping
operations. After the intervention, a similar assessment of 25 subtraction
problems was completed as a posttest. A separate reliability assessment
demonstrated that the two forms of the subtraction assessment were
highly correlated (r = .82).
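The reported coefficient is an alternate-forms reliability: the Pearson correlation between students' scores on the two 25-problem forms. A minimal sketch of that computation, using invented score pairs rather than the study's data:

```python
import math

def pearson_r(x, y):
    """Pearson correlation between paired scores on two test forms."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical numbers of problems correct (out of 25) on the two forms:
form_a = [12, 18, 7, 22, 15, 9, 20, 14]
form_b = [14, 17, 9, 21, 13, 11, 22, 12]
print(round(pearson_r(form_a, form_b), 2))
```

A coefficient near the study's r = .82 indicates that students were ranked similarly by the two forms, which is what licenses using one as pretest and the other as posttest.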
Students were stratified by gender and school and then randomly
assigned to one of nine experimental groups. In all groups, the students
received instruction for solving subtraction problems in 45-minute
sessions conducted over six consecutive school days. For this
recommendation, we report only on the comparison between three groups.
One group (n = 30) received effort feedback in addition to performance
feedback during the first three sessions. Another group (n = 30) received
effort feedback in addition to performance feedback during the last three
sessions. A third group (n = 30) did not receive effort feedback (received
only performance feedback).314 Effort feedback consisted of the proctor
commenting to the student, “You’ve been working hard.” Students in both
effort feedback groups received 15 statements of effort feedback across the
entire intervention.
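The assignment procedure (shuffling students within each gender-by-school stratum, then distributing them evenly across the nine conditions) can be sketched as follows; this is a hypothetical illustration of stratified random assignment, not the study's actual procedure:

```python
import random

def stratified_assign(students, n_groups, seed=0):
    """Shuffle within each (gender, school) stratum, then deal students
    round-robin into groups so strata stay balanced across conditions."""
    rng = random.Random(seed)
    strata = {}
    for s in students:
        strata.setdefault((s["gender"], s["school"]), []).append(s)
    groups = [[] for _ in range(n_groups)]
    i = 0
    for members in strata.values():
        rng.shuffle(members)
        for s in members:
            groups[i % n_groups].append(s)
            i += 1
    return groups

# Hypothetical roster of 90 students across two genders and three schools:
students = [{"id": k, "gender": k % 2, "school": k % 3} for k in range(90)]
groups = stratified_assign(students, n_groups=9)
print([len(g) for g in groups])  # 90 students dealt into 9 groups of 10
```

Stratifying before randomizing keeps each condition's gender and school mix comparable, which is why such designs can attribute outcome differences to the feedback manipulation.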
Results indicated significant positive effects for effort feedback
relative to the comparison group regardless of when the student received
the effort feedback. These results suggest that effort feedback is beneficial
for learning disabled students who may not otherwise recognize the causal
link between effort and outcomes.

314
Other group distinctions were related to student verbalization and are described in the
discussion of recommendation 3 (explicit instruction).

APPENDIX C. DISCLOSURE OF POTENTIAL CONFLICTS OF INTEREST

Practice guide panels are composed of individuals who are nationally
recognized experts on the topics about which they are rendering
recommendations. The Institute of Education Sciences (IES) expects that
such experts will be involved professionally in a variety of matters that
relate to their work as a panel. Panel members are asked to disclose their
professional involvements and to institute deliberative processes that
encourage critical examination of the views of panel members as they
relate to the content of the practice guide. The potential influence of panel
members’ professional engagements is further muted by the requirement
that they ground their recommendations in evidence documented in the
practice guide. In addition, the practice guide undergoes independent
external peer review prior to publication, with particular focus on whether
the evidence related to the recommendations in the practice guide has been
appropriately presented.
The professional engagements reported by each panel member that
appear most closely associated with the panel recommendations are noted
below.
Russell Gersten has written articles on issues related to assessment and
screening of young children with potential difficulties in learning
mathematics and is currently revising a manuscript on this topic for the
scholarly journal, Exceptional Children. However, there is no fiscal reward
for this or other publications on the topic. He is a royalty author for what
may become the Texas or national edition of the forthcoming (2010/11)
Houghton Mifflin reading series, Journeys. At the time of publication,
Houghton Mifflin has not determined whether they will ever release this
series. Dr. Gersten provided guidance on the product as it relates to
struggling and English language learner students. This topic is not covered
in this practice guide. The panel never discussed the Houghton Mifflin
series. Russell Gersten has no financial stake in any program or practice
mentioned in the practice guide.

Sybilla Beckmann receives royalties on her textbook, Mathematics for
Elementary Teachers, published by Addison-Wesley, a division of Pearson
Education. This textbook, used in college mathematics courses for
prospective elementary teachers, is not within the scope of the review of
this practice guide.
Ben Clarke has developed and authored journal articles about early
numeracy measures that are referenced in the RtI Mathematics practice
guide. Dr. Clarke does not have a current financial stake in any company
or publishing of the measures.
Anne Foegen conducts research and has developed progress
monitoring assessments that are referred to in the RtI-Mathematics practice
guide. Currently, she has no financial interests in these products, which are
not available commercially. The algebra measures are disseminated
through Iowa State University on a fee-based schedule to cover the costs of
personnel for training and materials (not for profit). Dr. Foegen also has
published papers that describe the measures and received a grant that is
supporting a portion of the research. She occasionally does private
consulting related to research on use of the mathematics progress
monitoring measures.
Jon R. Star consults for a company owned by Scholastic, which
produces mathematics educational software. Scholastic may produce other
curricula related to mathematics, but the panel makes no recommendations
for selecting specific curricula.
Bradley Witzel wrote the book, Computation of Fractions, and is
currently writing Computation of Integers and Solving Equations, through
Pearson with Dr. Paul Riccomini. He is also writing the book, RtI
Mathematics, through Corwin Press with Dr. Riccomini. Additionally, Dr.
Witzel has delivered workshop presentations on the structure of RtI (not
associated with the RtI-Mathematics practice guide). The work on his
books is separate from that of the RtI-Mathematics practice guide panel,
and he does not share his work from the panel with the books’ co-authors.

APPENDIX D. ABOUT THE AUTHORS

Panel

Russell Gersten, PhD, is the director of the Instructional Research
Group in Los Alamitos, California, as well as professor emeritus in the
College of Education at the University of Oregon. Dr. Gersten recently
served on the National Mathematics Advisory Panel, where he co-chaired
the Work Group on Instructional Practices. He has conducted meta-
analyses and research syntheses on instructional approaches for teaching
students with difficulties in mathematics, early screening in mathematics,
RtI in mathematics, and research on number sense. Dr. Gersten has
conducted numerous randomized trials, many of them published in major
education journals. He has either directed or co-directed 42 applied
research grants addressing a wide array of issues in education and has been
a recipient of many federal and non-federal grants (more than $17.5
million). He has more than 150 publications and serves on the editorial
boards of 10 prestigious journals in the field. He is the director of the Math
Strand for the Center on Instruction (which provides technical assistance to
the states in terms of implementation of No Child Left Behind) and the
director of research for the Southwest Regional Educational Laboratory.
Sybilla Beckmann, PhD, is a professor of mathematics at the
University of Georgia. Prior to arriving at the University of Georgia, Dr.
Beckmann taught at Yale University as a J. W. Gibbs Instructor of
Mathematics. Dr. Beckmann has done research in arithmetic geometry; her
current main interests are the mathematical education of teachers and
mathematics content for students at all levels, especially Pre-K
through the middle grades. Dr. Beckmann developed three mathematics
content courses for prospective elementary school teachers at the
University of Georgia and wrote a book for such courses, Mathematics for
Elementary Teachers, published by Addison-Wesley, now in a second
edition. She is especially interested in helping college faculty learn to teach
mathematics content courses for elementary and middle grade teachers,
and she works with graduate students and postdoctoral fellows toward that

end. As part of this effort, Dr. Beckmann directs the Mathematicians
Educating Future Teachers component of the University of Georgia
Mathematics Department’s VIGRE II grant. Dr. Beckmann was a member
of the writing team of the National Council of Teachers of Mathematics
Curriculum Focal Points for Prekindergarten through Grade 8
Mathematics, is a member of the Committee on Early Childhood
Mathematics of the National Research Council, and has worked on the
development of several state mathematics standards. Recently, Dr.
Beckmann taught an average grade 6 mathematics class every day at a
local public school in order to better understand school mathematics
teaching. She has won several teaching awards, including the General
Sandy Beaver Teaching Professorship, awarded by the College of Arts and
Sciences at the University of Georgia.
Benjamin Clarke, PhD, is a research associate at the Instructional
Research Group and Pacific Institutes for Research. He serves as a co-
principal investigator on three federally funded research grants in
mathematics instruction and assessment. His current research includes
testing the efficacy of a kindergarten mathematics curriculum, evaluating
the effectiveness of a grade 1 mathematics intervention program for at-risk
students and examining the effects of a computer software program to
build student understanding of and fluency with computational procedures.
He also serves as the deputy director of the Center on Instruction
Mathematics. Dr. Clarke was a 2002 graduate of the University of Oregon
School Psychology program. He was the recipient of the AERA Special
Education Dissertation Award for his work in early mathematics
assessment. He has continued to investigate and publish articles and materials in
this area and has presented his work at national conferences.
Anne Foegen, PhD, is an associate professor in the Department of
Curriculum and Instruction at Iowa State University. Her research focuses
on the mathematics development of students with disabilities, including
efforts to develop measures that will allow teachers to monitor the progress
of secondary students in mathematics. Dr. Foegen has also been involved
in examining the mathematics performance of students with disabilities on
large-scale assessments, such as the National Assessment of Educational

Progress. Her current work in progress monitoring extends research in
curriculum-based measurement (CBM) in mathematics from kindergarten
through grade 12. Her particular focus is on studies at the middle school
and high school levels. In a related project, Dr. Foegen developed CBM
measures and conducted technical adequacy research on the use of the
measures to track secondary students’ progress in algebra and prealgebra.
Laurel Marsh, M.S.E. and M.A.T., is a professional education
instructor who serves as a math coach at Swansfield Elementary School,
Howard County School District, in Columbia, Maryland. Also an instructor
for the University of Maryland Baltimore County and Johns Hopkins
University, she has served as an elementary teacher at multiple levels. Ms.
Marsh received a Master of Arts in Teaching from Johns Hopkins
University, with a concentration in both Early Childhood Education and
Elementary Education, as well as a Master of Science in Education from
Johns Hopkins University, with a concentration in School Administration
and Supervision. As a math support teacher, she provides professional
development to teachers of kindergarten through grade 5 for multiple
schools in Howard County. She works with both general educators and
special educators through demonstration lessons, co-teaching situations,
and school-based professional development. She also oversees and
coordinates interventions for students struggling in mathematics.
Jon R. Star, PhD, is an assistant professor of education at Harvard
University’s Graduate School of Education. Dr. Star is an educational
psychologist who studies children’s learning of mathematics in middle and
high school, particularly algebra. His current research explores the
development of flexibility in mathematical problem solving, with
flexibility defined as knowledge of multiple strategies for solving
mathematics problems and the ability to adaptively choose among known
strategies on a particular problem. He also investigates instructional and
curricular interventions that may promote the development of
mathematical understanding. Dr. Star’s most recent work is supported by
grants from the Institute of Education Sciences at the U.S. Department of
Education and the National Science Foundation. In addition, he is
interested in the preservice preparation of middle and secondary

mathematics teachers. Before his graduate studies, he spent six years
teaching middle and high school mathematics.
Bradley Witzel, PhD, is an associate professor and coordinator of
special education at Winthrop University in Rock Hill, South Carolina. He
has experience in the classroom as an inclusive and self-contained teacher
of students with higher incidence disabilities as well as a classroom
assistant and classroom teacher of students with low incidence disabilities.
Dr. Witzel has taught undergraduate and graduate courses in educational
methods for students with disabilities and secondary students with
disabilities, coupled with the needs of English language learners. He has
supervised interns in elementary, secondary, and special education
certification tracks as well as inclusion practices. He has given numerous
professional presentations including strategic math, algebra instruction,
word problem solving, parent involvement, and motivational classroom
management. He has published research and practitioner articles in algebra
and math education as well as positive behavior interventions for students
with and without learning disabilities. He has also written several books
and book chapters on mathematics education and interventions. Overall, he
is concerned with the development of special education teachers and works
to provide research-validated practices and interventions to professionals
and preprofessionals.

Staff

Joseph A. Dimino, PhD, is a senior research associate at the
Instructional Research Group in Los Alamitos, California, where he is the
coordinator of a national research project investigating the impact of
teacher study groups as a means to enhance the quality of reading
instruction for 1st graders in high-poverty schools and co-principal
investigator for a study assessing the impact of collaborative strategic
reading on the comprehension and vocabulary skills of English language
learners and English-speaking 5th graders. Dr. Dimino has 36 years of
experience as a general education teacher, special education teacher,

administrator, behavior specialist, and researcher. He has extensive
experience working with teachers, parents, administrators, and
instructional assistants in instruction, early literacy, reading comprehension
strategies, and classroom and behavior management in urban, suburban,
and rural communities. He has published in numerous scholarly journals
and co-authored books in reading comprehension and early reading
intervention. Dr. Dimino has delivered papers at various state, national, and
international conferences, including the American Educational Research
Association, Society for the Scientific Study of Reading, Council for
Exceptional Children, International Reading Association, and Association
for Supervision and Curriculum Development. He consults nationally in
early literacy and reading comprehension instruction.
Madhavi Jayanthi, PhD, is a research associate at Instructional
Research Group, Los Alamitos, California. Dr. Jayanthi has more than 10
years of experience working on grants from the Office of Special Education
Programs and the Institute of Education Sciences. She has published
extensively in peer-reviewed journals such as Journal of Learning
Disabilities, Remedial and Special Education, and Exceptional Children.
Her research interests include effective instructional techniques for
students with disabilities and at-risk learners, both in the areas of reading
and mathematics.
Shannon Monahan, PhD, is a survey researcher at Mathematica Policy
Research. She has served as a reviewer for the What Works Clearinghouse
for the Reducing Behavior Problems in the Elementary School Classroom
practice guide and the early childhood interventions topic area, and she
coordinated the reviews for this practice guide. Dr. Monahan has worked
extensively on the development and evaluation of mathematics curricula
for low-income children. Currently, she contributes to several projects that
evaluate programs intended to influence child development. Her current
interests include early childhood program evaluation, emergent numeracy,
culturally competent pedagogy, measures development, and research
design.
Rebecca A. Newman-Gonchar, PhD, is a research associate with the
Instructional Research Group. She has experience in project management,

study design and implementation, and quantitative and qualitative analysis.
Dr. Newman-Gonchar has worked extensively on the development of
observational measures for beginning and expository reading instruction
for two major IES-funded studies of reading interventions for Title I
students. She currently serves as a reviewer for the What Works
Clearinghouse for reading and mathematics interventions and Response to
Intervention. Her scholarly contributions include conceptual, descriptive,
and quantitative publications on a range of topics. Her current interests
include Response to Intervention, observation measure development for
reading and mathematics instruction, and teacher study groups. She has
served as the technical editor for several publications and is a reviewer for
Learning Disability Quarterly.
Libby Scott, M.P.P., is a research analyst at Mathematica Policy
Research and a former classroom educator. She has experience providing
research support and conducting data analysis for various projects on
topics related to out-of-school time programs, disconnected youth, home
schooling households, and item development for a teacher survey. She also
has experience evaluating an out-of-school time program for middle school
students. Ms. Scott used her background in classroom teaching and
education-related research to support the panel in translating research
findings into practitioner-friendly text.

REFERENCES

American Educational Research Association, American Psychological
Association, & National Council on Measurement in Education.
(1999). The standards for educational and psychological testing.
Washington, DC: American Educational Research Association
Publications.
American Psychological Association. (2001). Publication manual of the
American Psychological Association (5th ed.). Washington, DC:
Author.

American Psychological Association. (2002). Criteria for practice
guideline development and evaluation. American Psychologist,
57(December), 1058–1061.
Artus, L. M., & Dyrek, M. (1989). The effects of multiple strategy
intervention on achievement in mathematics. Unpublished master’s
thesis, Saint Xavier College, Chicago.
Baker, S. K., Gersten, R., Dimino, J., & Griffiths, R. (2004). The sustained
use of research-based instructional practice: A case study of peer-
assisted learning strategies in mathematics. Remedial and Special
Education, 25(1), 5–24.
Baker, S., Gersten, R., Flojo, J., Katz, R., Chard, D., & Clarke, B. (2002).
Preventing mathematics difficulties in young children: Focus on
effective screening of early number sense delays (Technical Report No.
0305). Eugene, OR: Pacific Institutes for Research.
Bangert-Drowns, R. L., Kulik, C. C., Kulik, J. A., & Morgan, M. (1991).
The instructional effects of feedback in test-like events. Review of
Educational Research, 61(2), 213–238.
Beckmann, S., Fuson, K., SanGiovanni, J., & Lott Adams, T. (Eds.),
(2009). Focus in grade 5: Teaching with curriculum focal points.
Reston, VA: National Council of Teachers of Mathematics.
Berch, D. B. (2005). Making sense of number sense. Journal of Learning
Disabilities, 38(4), 333–339.
Beirne-Smith, M. (1991). Peer tutoring in arithmetic for children with
learning disabilities. Exceptional Children, 57 (4), 330–337.
Bloom, B. S. (1980). All our children learning. New York: McGraw-Hill.
Bruner, J. (1966). Toward a theory of instruction. Cambridge, MA:
Belknap Press of Harvard University Press.
Bryant, B. R., Bryant, D. P., Kethley, C., Kim, S. A., Pool, C., & Seo, Y.
(2008). Preventing mathematics difficulties in the primary grades: The
critical features of instruction in textbooks as part of the equation.
Learning Disability Quarterly, 31(1), 21.
Bryant, D. P., & Bryant, B. R. (2005). Early mathematics identification
and intervention: Working with students at-risk for mathematics
difficulties. [PowerPoint presentation]. University of Texas
System/TEA. Retrieved January 2007 from www.k8accesscenter.org/documents/SERP-Math.DCAIRppt.ppt.
Bryant, D. P., Bryant, B. R., Gersten, R., Scammacca, N., & Chavez, M.
M. (2008). Mathematics intervention for first- and second-grade
students with mathematics difficulties: The effects of tier 2 in-
tervention delivered as booster lessons. Remedial and Special
Education, 29(1), 20–32.
Bryant, D. P., Bryant, B. R., Gersten, R. M., Scammacca, N. N., Funk, C.,
Winter, A., et al. (2008). The effects of tier 2 intervention on the
mathematics performance of first-grade students who are at risk for
mathematics difficulties. Learning Disability Quarterly, 31(2), 47–63.
Bryant, D. P., Smith, D. D., & Bryant, B. R. (2008). Teaching students
with special needs in inclusive classrooms. Boston, MA: Allyn &
Bacon.
Butler, F. M., Miller, S. P., Crehan, K., Babbitt, B., & Pierce, T. (2003).
Fraction instruction for students with mathematics disabilities:
Comparing two teaching sequences. Learning Disabilities Research &
Practice, 18(2), 99–111.
Carnine, D., Jitendra, A., & Silbert, J. (1997). A descriptive analysis of
mathematics curricular materials from a pedagogical perspective.
Remedial and Special Education, 18(2), 66–81.
Chard, D. J., Clarke, B., Baker, S., Otterstedt, J., Braun, D., & Katz, R.
(2005). Using measures of number sense to screen for difficulties in
mathematics: Preliminary findings. Assessment for Effective
Intervention, 30(2), 3–14.
Clarke, B., Baker, S., Smolkowski, K., & Chard, D. J. (2008). An analysis
of early numeracy curriculum-based measurement: Examining the role
of growth in student outcomes. Remedial and Special Education,
29(1), 46–57.
Clarke, B., & Shinn, M. R. (2004). A preliminary investigation into the
identification and development of early mathematics curriculum-based
measurement. School Psychology Review, 33(2), 234–248.

Compton, D., Fuchs, L., & Fuchs, D. (2007). The course of reading
disability in first grade: Latent class trajectories and early predictors.
Unpublished manuscript.
Darch, C. (1989). Comprehension instruction for high school learning
disabled students. Research in Rural Education, 5(3), 43–49.
Darch, C., Carnine, D., & Gersten, R. (1984). Explicit instruction in
mathematics problem solving. Journal of Educational Research, 77(6),
351–359.
Dehaene, S. (1999). The number sense: How the mind creates
mathematics. New York: Oxford University Press.
Deno, S. (1985). Curriculum-based measurement: The emerging
alternative. Exceptional Children, 52(3), 219–232.
Epstein, M., Atkins, M., Cullinan, D., Kutash, K., and Weaver, R. (2008).
Reducing Behavior Problems in the Elementary School Classroom:
A Practice Guide (NCEE #2008-012). Washington, DC: National
Center for Education Evaluation and Regional Assistance, Institute of
Education Sciences, U.S. Department of Education. Retrieved January
2009 from http://ies.ed.gov/ncee/wwc/publications/practiceguides.
Espin, C. A., Deno, S. L., Maruyama, G., & Cohen, C. (1989, April). The
basic academic skills sample (BASS): An instrument for the screening
and identification of children at risk for failure in regular education
classrooms. Paper presented at annual meeting of American
Educational Research Association, San Francisco, CA.
Field, M., & Lohr, K. (Eds.). (1990). Clinical practice guidelines:
Directions for a new program. Washington, DC: National Academies
Press.
Foegen, A. (2000). Technical adequacy of general outcome measures for
middle school mathematics. Diagnostique, 25(3), 175–203.
Foegen, A. (2008). Progress monitoring in middle school mathematics:
options and issues. Remedial and Special Education, 29(4), 195–207.
Foegen, A., & Deno, S. L. (2001). Identifying growth indicators for low-
achieving students in middle school mathematics. Journal of Special
Education, 35(1), 4–16.
Foegen, A., Jiban, C., & Deno, S. (2007). Progress monitoring measures
in mathematics: A review of the literature. The Journal of Special
Education, 41(2), 121–139.
Fuchs, D., Compton, D. L., & Fuchs, L. (2007). Innovations in identifying
learning disabilities using responsiveness-to instruction. Unpublished
manuscript.
Fuchs, L. S., Compton, D. L., Fuchs, D., Paulsen, K., Bryant, J. D., &
Hamlett, C. L. (2005). The prevention, identification, and cognitive
determinants of math difficulty. Journal of Educational Psychology,
97(3), 493–513.
Fuchs, L. S., & Fuchs, D. (1998). Treatment validity: A unifying concept
for reconceptualizing the identification of learning disabilities.
Learning Disabilities Research & Practice, 13(4), 204–219.
Fuchs, L. S., & Fuchs, D. (2008). Best practices in progress monitoring
reading and mathematics at the elementary grades. In A. Thomas & J.
Grimes (Eds.), Best practices in school psychology (5th ed.). Bethesda,
MD: National Association of School Psychologists.
Fuchs, L. S., Fuchs, D. L., Compton, D. L., Bryant, J. D., Hamlett, C. L., &
Seethaler, P. M. (2007). Mathematics screening and progress
monitoring at first grade: Implications for responsiveness to
intervention. Exceptional Children, 73(3), 311–330.
Fuchs, L. S., Fuchs, D., Compton, D. L., Powell, S. R., Seethaler, P. M.,
Capizzi, A. M., et al. (2006). The cognitive correlates of third-grade
skill in arithmetic, algorithmic computation, and arithmetic word
problems. Journal of Educational Psychology, 98, 29–43.
Fuchs, L. S., Fuchs, D., Craddock, C., Hollenbeck, K. N., & Hamlett, C. L.
(2008). Effects of small-group tutoring with and without validated
classroom instruction on at-risk students’ math problem solving: Are
two tiers of prevention better than one? Journal of Educational
Psychology, 100(3), 491–509.
Fuchs, L. S., Fuchs, D., Finelli, R., Courey, S. J., & Hamlett, C. L. (2004).
Expanding schema-based transfer instruction to help third graders
solve real-life mathematical problems. American Educational
Research Journal, 41(2), 419–445.

Fuchs, L. S., Fuchs, D., Finelli, R., Courey, S. J., Hamlett, C. L., Sones, E.
M., et al. (2006). Teaching third graders about real-life mathematical
problem solving: A randomized controlled study. Elementary School
Journal, 106(4), 293–312.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., & Karns, K. (1998). High
achieving students’ interactions and performance on complex
mathematical tasks as a function of homogeneous and heterogeneous
pairings. American Educational Research Journal, 35(2), 227–267.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., Phillips, N. B., & Bentz, J. L.
(1994). Class-wide curriculum-based measurement: Helping general
educators meet the challenge of student diversity. Exceptional
Children, 60(6), 518–537.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., Powell, S. R., Capizzi, A. M., &
Seethaler, P. M. (2006). The effects of computer-assisted instruction
on number combination skill in at-risk first graders. Journal of
Learning Disabilities, 39(5), 467–475.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., Thompson, A., Roberts, P. H., &
Kupek, P., et al. (1994). Formative evaluation of academic progress:
How much growth can we expect? Diagnostique, 19(4), 23–49.
Fuchs, L. S., Fuchs, D., Hamlett, C. L., Walz, L., & Germann, G. (1993).
Formative evaluation of academic progress: How much growth can we
expect? School Psychology Review, 22(1), 27–48.
Fuchs, L. S., Fuchs, D., & Hollenbeck, K. N. (2007). Extending
responsiveness to intervention to mathematics at first and third grades.
Learning Disabilities Research & Practice, 22(1), 13–24.
Fuchs, L. S., Fuchs, D., & Karns, K. (2001). Enhancing kindergartners’
mathematical development: Effects of peer-assisted learning strategies.
Elementary School Journal, 101(5), 495–510.
Fuchs, L. S., Fuchs, D., Karns, K., Hamlett, C. L., Katzaroff, M., & Dutka,
S. (1997). Effects of task-focused goals on low-achieving students with
and without learning disabilities. American Educational Research
Journal, 34(3), 513–543.
Fuchs, L. S., Fuchs, D., Karns, K., Hamlett, C., Katzaroff, M., & Dutka, S.
(1998). Comparisons among individual and cooperative performance

assessments and other measures of mathematics competence.
Elementary School Journal, 99(1), 23–51.
Fuchs, L. S., Fuchs, D., Phillips, N. B., Hamlett, C. L., & Karns, K. (1995).
Acquisition and transfer effects of classwide peer-assisted learning
strategies in mathematics for students with varying learning histories.
School Psychology Review, 24(4), 604–620.
Fuchs, L. S., Fuchs, D., Prentice, K., Burch, M., Hamlett, C. L., Owen, R., et
al. (2003a). Explicitly teaching for transfer: Effects on third-grade
students’ mathematical problem solving. Journal of Educational
Psychology, 95(2), 293–305.
Fuchs, L. S., Fuchs, D., Prentice, K., Burch, M., Hamlett, C. L., Owen, R.,
et al. (2003b). Enhancing third-grade students’ mathematical problem
solving with self-regulated learning strategies. Journal of Educational
Psychology, 95(2), 306–315.
Fuchs, L. S., Fuchs, D., Prentice, K., Hamlett, C. L., Finelli, R., & Courey,
S. J. (2004). Enhancing mathematical problem solving among third-
grade students with schema-based instruction. Journal of Educational
Psychology, 96(4), 635–647.
Fuchs, L. S., Fuchs, D., Stuebing, K., Fletcher, J. M., Hamlett, C. L., &
Lambert, W. (2008). Problem solving and computational skill: Are
they shared or distinct aspects of mathematical cognition? Journal of
Educational Psychology, 100(1), 30–47.
Fuchs, D., Fuchs, L. S., & Vaughn, S. (Eds.). (2008). Response to
intervention. Newark, DE: International Reading Association.
Fuchs, L. S., Fuchs, D., & Zumeta, R. O. (2008). A curricular-sampling
approach to progress monitoring: Mathematics concepts and
applications. Assessment for Effective Intervention, 33(4), 225–233.
Fuchs, L. S., Hamlett, C. L., & Fuchs, D. (1998). Monitoring basic skills
progress: Basic math computation (2nd ed.). Austin, TX: PRO-ED.
Fuchs, L. S., Hamlett, C. L., & Fuchs, D. (1998). Monitoring basic skills
progress: Basic math concepts and applications. Austin, TX: PRO-
ED.
Assisting Students Struggling with Mathematics 141

Fuchs, L. S., Powell, S. R., Hamlett, C. L., & Fuchs, D. (2008). Remediating computational deficits at third grade: A randomized field trial. Journal of Research on Educational Effectiveness, 1(1), 2–32.
Fuchs, L. S., Seethaler, P. M., Powell, S. R., Fuchs, D., Hamlett, C. L., &
Fletcher, J. M. (2008). Effects of preventative tutoring on the
mathematical problem solving of third-grade students with math and
reading difficulties. Exceptional Children, 74(2), 155–173.
Gardner, E. F., Rudman, H. C., Karlsen, G. A., & Merwin, J. C. (1982).
Stanford achievement test (17th ed.). Cleveland, OH:
Psychological Corporation.
Geary, D. C. (2003). Learning disabilities in arithmetic: Problem-solving
differences and cognitive deficits. In H. L. Swanson, K. R. Harris, & S.
Graham (Eds.), Handbook of learning disabilities (pp. 199–212). New
York: Guilford Press.
Geary, D. C. (2004). Mathematics and learning disabilities. Journal of
Learning Disabilities, 37(1), 4–15.
Gersten, R., & Chard, D. (1999). Number sense: Rethinking arithmetic
instruction for students with mathematical disabilities. The Journal of
Special Education, 33(1), 18–28.
Gersten, R., Clarke, B. S., & Jordan, N. C. (2007). Screening for
mathematics difficulties in K–3 students. Portsmouth, NH: RMC
Research Corporation, Center on Instruction.
Gersten, R., Jordan, N. C., & Flojo, J. R. (2005). Early identification and
interventions for students with mathematics difficulties. Journal of
Learning Disabilities, 38(4), 293–304.
Gersten, R., Keating, T., & Irvin, L. (1995). The burden of proof: Validity
as improvement of instructional practice. Exceptional Children, 61,
510–519.
Goldman, S. R., Pellegrino, J. W., & Mertz, D. L. (1988). Extended
practice of basic addition facts: Strategy changes in learning disabled
students. Cognition and Instruction, 5(3), 223–265.
Guskey, T. R. (1984). The influence of change in instructional
effectiveness upon the affective characteristics of teachers. American
Educational Research Journal, 21(2), 245–259.
Haager, D., Klingner, J., & Vaughn, S. (2007). Evidence-based reading practices for response to intervention. Baltimore, MD: Paul H. Brookes Publishing Co.
Halpern, D., Aronson, J., Reimer, N., Simpkins, S., Star, J., & Wentzel, K.
(2007). Encouraging girls in math and science (NCER 2007-2003).
Washington, DC: National Center for Education Research, Institute of
Education Sciences, U.S. Department of Education. Retrieved January
2009, from http://ies.ed.gov/ncee/wwc/publications/practiceguides.
Hanich, L. B., Jordan, N. C., Kaplan, D., & Dick, J. (2001). Performance
across different areas of mathematical cognition in children with
learning difficulties. Journal of Educational Psychology, 93(3), 615–
626.
Harry, B., & Klingner, J. K. (2006). Why are so many minority students in
special education? Understanding race and disability in schools. New
York: Teachers College Press.
Hasselbring, T. S., Bransford, J. D., & Goin, L. I. (1988). Developing math
automaticity in learning handicapped children: The role of
computerized drill and practice. Focus on Exceptional Children, 20(6),
1–7.
Hecht, S. A., Vagi, K. J., & Torgesen, J. K. (2007). Fraction skills and
proportional reasoning. In D. B. Berch & M. M. Mazzocco (Eds.), Why
is math so hard for some children? The nature and origins of
mathematical learning difficulties and disabilities (pp. 121–132).
Baltimore, MD: Paul H. Brookes Publishing Co.
Heller, L. R., & Fantuzzo, J. W. (1993). Reciprocal peer tutoring and
parent partnership: Does parent involvement make a difference?
School Psychology Review, 22(3), 517–534.
Heller, K., Holtzman, W., & Messick, S. (1982). Placing children in
special education: A strategy for equity. Washington, DC: National
Academies Press.
Helwig, R., Anderson, L., & Tindal, G. (2002). Using a concept-grounded,
curriculum-based measure in mathematics to predict statewide test
scores for middle school students with LD. The Journal of Special
Education, 36(2), 102–112.
Hill, H. C., Rowan, B., & Ball, D. L. (2005). Effects of teachers’ mathematical knowledge for teaching on student achievement. American Educational Research Journal, 42(2), 371–406.
Howard, P., Perry, B., & Conroy, J. (1995, November). Manipulatives in K-
6 mathematics learning and teaching. Paper presented at the annual
conference of the Australian Association for Research in Education,
Hobart.
Howard, P., Perry, B., & Lindsay, M. (1996, November). Mathematics and
manipulatives: Views from the secondary schools. Paper presented at
the joint annual meeting of the Educational Research Association,
Singapore and Australian Association for Research in Education,
Singapore.
Jiban, C., & Deno, S. L. (2007). Using math and reading curriculum-based
measurements to predict state mathematics test performance.
Assessment for Effective Intervention, 32(2), 78–89.
Jitendra, A. K. (2007). Solving math word problems: Teaching students
with learning disabilities using schema-based instruction. Austin, TX:
PRO-ED.
Jitendra, A., Carnine, D., & Silbert, J. (1996). Descriptive analysis of fifth
grade division instruction in basal mathematics programs: Violations
of pedagogy. Journal of Behavioral Education, 6(4), 381–403.
Jitendra, A. K., Griffin, C. C., McGoey, K., Gardill, M. C., Bhat, P., &
Riley, T. (1998). Effects of mathematical word problem solving by
students at risk or with mild disabilities. The Journal of Educational
Research, 91(6), 345–355.
Jitendra, A. K., Sczesniak, E., & Deatline-Buchman, A. (2005). An
exploratory validation of curriculum-based mathematical word-
problem-solving tasks as indicators of mathematics proficiency for
third graders. School Psychology Review, 34(3), 358–371.
Johnson, E. S., Jenkins, J. R., Petscher, Y., & Catts, H. W. (2008). How
can we improve the accuracy of screening instruments? Manuscript
submitted for publication.
Jordan, N. C., Hanich, L. B., & Kaplan, D. (2003). A longitudinal study of
mathematical competencies in children with specific mathematics
difficulties versus children with comorbid mathematics and reading difficulties. Child Development, 74(3), 834–850.
Kaminski, R., Cummings, K. D., Powell-Smith, K. A., & Good, R. H., III.
(2008). Best practices in using Dynamic Indicators of Basic Early
Literacy Skills (DIBELS) for formative assessment and evaluation.
In A. Thomas & J. Grimes (Eds.), Best practices in school psychology
V: Vol. 4 (pp. 1181–1204). Bethesda, MD: National Association of
School Psychologists.
Kavale, K., & Spaulding, L. (2008). Is response to intervention good
policy for specific learning disability? Learning Disabilities Research
& Practice, 23(4), 169–179.
Lee, J., Grigg, W., & Dion, G. (2007). The nation’s report card:
Mathematics 2007 (NCES 2007–494). Washington, DC: National
Center for Education Statistics, Institute of Education Sciences, U.S.
Department of Education.
Leh, J. M., Jitendra, A. K., Caskie, G. I. L., & Griffin, C. C. (2007). An
evaluation of curriculum-based measurement of mathematics word
problem-solving measures for monitoring third-grade students’
mathematics competence. Assessment for Effective Intervention, 32(2),
90–99.
Lembke, E. S., & Foegen, A. (2009). Identifying early numeracy indicators
for kindergarten and first-grade students. Learning Disabilities
Research & Practice, 24(1), 12–20.
Lembke, E. S., Foegen, A., Whittaker, T. A., & Hampton, D. (2008).
Establishing technically adequate measures of progress in early
numeracy. Assessment for Effective Intervention, 33(4), 206–214.
Locuniak, M. N., & Jordan, N. C. (2008). Using kindergarten number
sense to predict calculation fluency in second grade. Journal of
Learning Disabilities, 41(5), 451–459.
Manalo, E., Bunnell, J. K., & Stillman, J. A. (2000). The use of process
mnemonics in teaching students with mathematics learning disabilities.
Learning Disability Quarterly, 23(2), 137–156.
McCloskey, M. (2007). Quantitative literacy and developmental
dyscalculias. In D. B. Berch & M. M. Mazzocco (Eds.), Why is math
so hard for some children? The nature and origins of mathematical learning difficulties and disabilities. Baltimore, MD: Paul H. Brookes Publishing Co.
Messick, S. (1988). The once and future issues of validity: Assessing the
meaning and consequences of measurement. In H. Wainer & H. I.
Braun (Eds.), Test validity (pp. 33–44). Hillsdale, NJ: Lawrence
Erlbaum Associates.
Milgram, R. J., & Wu, H. S. (2005). The key topics in a successful math
curriculum. Retrieved November 26, 2008, from http://math.berkeley.edu/~wu/
National Academy of Sciences. (2002). Minority students in special and
gifted education. Washington, DC: National Academies Press.
National Council of Teachers of Mathematics. (2006). Curriculum focal
points for prekindergarten through grade 8 mathematics: A quest for
coherence. Reston, VA: Author.
National Joint Committee on Learning Disabilities. (2005).
Responsiveness to intervention and learning disabilities. Learning
Disability Quarterly, 28(4), 249–260.
National Mathematics Advisory Panel. (2008). Foundations for success:
The final report of the national mathematics advisory panel.
Washington, DC: U.S. Department of Education.
National Research Council. (2001). Adding it up: Helping children learn
mathematics. Washington, DC: National Academies Press.
Okamoto, Y., & Case, R. (1996). Exploring the microstructure of
children’s central conceptual structures in the domain of number.
Monographs of the Society for Research in Child Development, 61(1–
2), 27–58.
Organisation for Economic Co-operation and Development. (2008).
Programme for International Student Assessment (PISA). Retrieved
December 5, 2008, from http://www.pisa.oecd.org/.
Parmar, R. S. (1992). Protocol analysis of strategies used by students with
mild disabilities when solving arithmetic word problems.
Diagnostique, 17(4), 227–243.
Peterson, P. L., Fennema, E., & Carpenter, T. (1989). Using knowledge of how students think about mathematics. Educational Leadership, 46(4), 42–46.
President’s Commission on Excellence in Special Education. (2002). A new
era: Revitalizing special education for children and their families.
Washington, DC: U.S. Department of Education.
Robinson, C. S., Menchetti, B. M., & Torgesen, J. K. (2002). Toward a
two-factor theory of one type of mathematics disabilities. Learning
Disabilities Research & Practice, 17(2), 81–89.
Schmidt, W. H., & Houang, R. T. (2007). Lack of focus in mathematics:
Symptom or cause? Lessons learned. In T. Loveless (Ed.), What
international assessments tell us about math achievement. Washington,
DC: Brookings Institution Press.
Schunk, D. H., & Cox, P. D. (1986). Strategy training and attributional
feedback with learning disabled students. Journal of Educational
Psychology, 78(3), 201–209.
Shapiro, E. S., Edwards, L., & Zigmond, N. (2005). Progress monitoring of
mathematics among students with learning disabilities. Assessment for
Effective Intervention, 30(2), 15–32.
Shinn, M. R. (Ed.). (1989). Curriculum-based measurement: Assessing
special children. New York: Guilford Press.
Siegler, R. S., & Jenkins, E. (1989). How children discover new strategies.
Hillsdale, NJ: Lawrence Erlbaum Associates.
Silbert, J., Carnine, D., & Stein, M. (1989). Direct instruction
mathematics. Columbus, OH: Merrill.
Stigler, J. W., & Hiebert, J. (1999). The teaching gap: Best ideas from the
world’s teachers for improving education in the classroom. New York:
Free Press.
Thurber, R. S., Shinn, M. R., & Smolkowski, K. (2002). What is measured
in mathematics tests? Construct validity of curriculum-based
mathematics measures. School Psychology Review, 31(4), 498.
Tournaki, N. (2003). The differential effects of teaching addition through strategy instruction versus drill and practice to students with and without learning disabilities. Journal of Learning Disabilities, 36(5), 449–458.
University of Minnesota. (2008). Research Institute on Progress
Monitoring. Retrieved December 5, 2008, from http://www.progressmonitoring.net/
U.S. Department of Education. (n.d.). Trends in international mathematics
and science study (TIMSS). [Overview]. Retrieved December 5, 2008,
from http://nces.ed.gov/timss/
U.S. Department of Education, Office of Planning, Evaluation and Policy
Development, Policy and Program Studies Service. (2006). Reading
first implementation evaluation: Interim report. Washington, DC:
Author.
U.S. Office of Special Education. (2008). National Center on Student
Progress Monitoring. [Web site]. Retrieved December 5, 2008, from
http://www.studentprogress.org/.
VanDerHeyden, A. M., Witt, J. C., & Gilbertson, D. (2007). A multiyear
evaluation of the effects of a response to intervention (RtI) model on
identification of children for special education. Journal of School
Psychology, 45(2), 225–256.
VanDerHeyden, A. M., Witt, J. C., & Naquin, G. (2003). Development and
validation of a process for screening referrals to special education.
School Psychology Review, 32(2), 204–227.
VanDerHeyden, A. M., Witt, J. C., Naquin, G., & Noell, G. (2001). The
reliability and validity of curriculum-based measurement readiness
probes for kindergarten students. School Psychology Review, 30(3),
363.
Vaughn, S., & Fuchs, L. S. (2003). Redefining learning disabilities as
inadequate response to instruction: The promise and potential
problems. Learning Disabilities Research & Practice, 18, 137–146.
Vaughn, S., & Fuchs, L. S. (2006). A response to Competing views: A
dialogue on response to intervention: Why response to intervention is
necessary but not sufficient for identifying students with learning disabilities. Assessment for Effective Intervention, 32(1), 58–61.
Walker, D. W., & Poteet, J. A. (1989). A comparison of two methods of
teaching mathematics story problem-solving with learning disabled
students. National Forum of Special Education Journal, 1, 44–51.
What works clearinghouse: Procedures and standards handbook (version
2.0). (2008). Retrieved March 2009, from http://ies.ed.gov/ncee/wwc/pdf/wwc_procedures_v2_standards_handbook.pdf.
Wilson, C. L., & Sindelar, P. T. (1991). Direct instruction in math word
problems: Students with learning disabilities. Exceptional Children,
57(6), 512–519.
Witzel, B. S. (2005). Using CRA to teach algebra to students with math
difficulties in inclusive settings. Learning Disabilities: A
Contemporary Journal, 3(2), 49–60.
Witzel, B. S., Mercer, C. D., & Miller, M. D. (2003). Teaching algebra to
students with learning difficulties: An investigation of an explicit
instruction model. Learning Disabilities Research & Practice, 18(2),
121–131.
Woodward, J. (2006). Developing automaticity in multiplication facts:
Integrating strategy instruction with timed practice drills. Learning
Disability Quarterly, 29(4), 269–289.
Wu, H. (2005). Must content dictate pedagogy in mathematics education
(No. 3840). University of California: Department of Mathematics.
Xin, Y. P., Jitendra, A. K., & Deatline-Buchman, A. (2005). Effects of
mathematical word-problem-solving instruction on middle school
students with learning problems. Journal of Special Education, 39(3),
181–192.
In: Assisting Students Struggling in Math … ISBN: 978-1-53613-740-8
Editor: Timothy Winder © 2018 Nova Science Publishers, Inc.

Chapter 2

ENCOURAGING GIRLS IN MATH AND SCIENCE: IES PRACTICE GUIDE*

Diane Halpern¹, Joshua Aronson², Nona Reimer³,
Sandra Simpkins⁴, Jon R. Star⁵ and Kathryn Wentzel⁶

¹ Claremont McKenna College, Claremont, CA, US
² New York University, New York, NY, US
³ John Malcom Elementary School, Laguna Niguel, CA, US,
and California State University, Fullerton, CA, US
⁴ Arizona State University, Tempe, AZ, US
⁵ Harvard University, Cambridge, MA, US
⁶ University of Maryland, College Park, MD, US

PREAMBLE FROM THE INSTITUTE OF EDUCATION SCIENCES

What Is a Practice Guide?

The health care professions have embraced a mechanism for assembling and communicating evidence-based advice to practitioners

* This is an edited, reformatted, and augmented version of Encouraging Girls in Math and Science (NCER 2007-2003). Washington, DC: National Center for Education Research, Institute of Education Sciences, U.S. Department of Education, dated September 2007.
about care for specific clinical conditions. Variously called practice guidelines, treatment protocols, critical pathways, best-practice guides, or
simply practice guides, these documents are systematically developed
recommendations about the course of care for frequently encountered
problems, ranging from physical conditions such as foot ulcers to
socioemotional issues such as navigating the demands of adolescence.1
Practice guides are similar to the products of typical expert consensus
panels in that they reflect the views of those serving on the panel, as well as
the social decision processes that come into play as the positions of individual
panel members are forged into statements that all are willing to endorse.
However, practice guides are generated under three constraints that do not
typically apply to consensus panels. The first is that a practice guide consists
of a list of discrete recommendations that are intended to be actionable. The
second is that those recommendations taken together are intended to comprise
a coherent approach to a multifaceted problem. The third, which is most
important, is that each recommendation is explicitly connected to the level of
evidence supporting it (e.g., strong, moderate, and low). The levels of
evidence are usually constructed around the value of particular types of
studies for drawing causal conclusions about what works. Thus one typically
finds that the top level of evidence is drawn from a body of randomized
controlled trials, the middle level from well-designed studies that do not
involve randomization, and the bottom level from the opinions of respected
authorities. Levels of evidence can also be constructed around the value of
particular types of studies for other goals, such as the reliability and validity
of assessments.
Practice guides can also be distinguished from systematic reviews or meta-
analyses, which employ statistical methods to summarize the results of studies
obtained from a rule-based search of the literature. Authors of practice guides
seldom conduct the types of systematic literature searches that are the backbone
of a meta-analysis, though they take advantage of such work when it is already
published. Instead they use their expertise to identify the most important
research with respect to their recommendations, augmented by a search of

1 Field and Lohr (1990).

recent publications to assure that the research citations are up-to-date. Further,
the characterization of the quality and direction of evidence underlying a
recommendation in a practice guide relies less on a tight set of rules and
statistical algorithms and more on the judgment of the authors than would be
the case in a high-quality meta-analysis. Another distinction is that a practice
guide, because it aims for a comprehensive and coherent approach, operates
with more numerous and more contextualized statements of what works than
does a typical meta-analysis.
Thus practice guides sit somewhere between consensus reports and
meta-analyses in the degree to which systematic processes are used for
locating relevant research and characterizing its meaning. Practice guides
are more like consensus panel reports than meta-analyses in the breadth
and complexity of the topic that is addressed. Practice guides are different
from both consensus reports and meta-analyses in providing advice at the
level of specific action steps along a pathway that represents a more or less
coherent and comprehensive approach to a multifaceted problem.

Practice Guides in Education at the Institute of Education Sciences

The Institute of Education Sciences (IES) publishes practice guides in education to bring the best available evidence and expertise to bear on the
types of systemic challenges that cannot be addressed by single
interventions or approaches. Although IES has taken advantage of the
history of practice guides in health care to provide models of how to
proceed in education, education is different from health care in ways that
may require that practice guides in education have somewhat different
designs. Even within health care, where practice guides now number in the
thousands, there is no single template in use. Rather, one finds descriptions
of general design features that permit substantial variation in the realization
of practice guides across subspecialties and panels of experts.2

2 American Psychological Association (2002).

Accordingly, the templates for IES practice guides may vary across
practice guides and change over time and with experience.
The steps involved in producing an IES-sponsored practice guide are
first to select a topic, which is informed by formal surveys of practitioners
and spontaneous requests from the field. Next, a panel chair is recruited
who has a national reputation and up-to-date expertise in the topic. Third,
the chair, working in collaboration with IES, selects a small number of
panelists to co-author the practice guide. These are people the chair
believes can work well together and have the requisite expertise to be a
convincing source of recommendations. IES recommends that at least one
of the panelists be a practitioner with considerable experience relevant to
the topic being addressed. The chair and the panelists are provided a
general template for a practice guide along the lines of the information
provided in this preamble. They are also provided with examples of
practice guides. The practice guide panel works under a short deadline of 6
to 9 months to produce a draft document. The expert panel interacts with
and receives feedback from staff at IES during the development of the
practice guide, but the panel members understand that they are the authors
and thus responsible for the final product.
One unique feature of IES-sponsored practice guides is that they are
subjected to rigorous external peer review through the same office that is
responsible for independent review of other IES publications. A critical
task of the peer reviewers of a practice guide is to determine whether the
evidence cited in support of particular recommendations is up-to-date, and
that studies of similar or better quality that point in a different direction
have not been ignored. Peer reviewers also are asked to evaluate whether
the evidence grades assigned to particular recommendations by the practice
guide authors are appropriate. A practice guide is revised as necessary to
meet the concerns of external peer reviews and to gain the approval of the
standards and review staff at IES. The process of external peer review is
carried out independently of the office and staff within IES that initiated
the practice guide.
Because practice guides depend on the expertise of their authors and
their group decision-making, the content of a practice guide is not and
should not be viewed as a set of recommendations that in every case depends on and flows inevitably from scientific research. It is not only
possible but also likely that two teams of recognized experts working
independently to produce a practice guide on the same topic would
generate products that differ in important respects. Thus consumers of
practice guides need to understand that they are, in effect, getting the
advice of consultants. These consultants should, on average, provide
substantially better advice than educators might obtain on their own
because the authors are national authorities who have to achieve consensus
among themselves, justify their recommendations in terms of supporting
evidence, and undergo rigorous independent peer review of their product.

INTRODUCTION

The goal of this practice guide is to formulate specific and coherent evidence-based recommendations that educators can use to encourage girls
in the fields of math and science. The target audience is teachers and other
school personnel with direct contact with students, such as coaches,
counselors, and principals. The practice guide includes specific
recommendations for educators and the quality of evidence that supports
these recommendations.
We, the authors, are a small group with expertise on this topic. The
range of evidence we considered in developing this document is vast,
ranging from experiments, to trends in the National Assessment of
Educational Progress (NAEP) data, to correlational and longitudinal
studies. For questions about what works best, high-quality experimental
and quasi-experimental studies, such as those meeting the criteria of the
What Works Clearinghouse, have a privileged position. In all cases, we
pay particular attention to findings that are replicated across studies.
Although we draw on evidence about the effectiveness of specific
practices, we use this information to make broader points about improving
practice. In this document, we have tried to take findings from research or
practices recommended by experts and describe how the use of this
recommendation might actually unfold in school settings. In other words,
we aim to provide sufficient detail so that educators will have a clear sense
of the steps necessary to make use of the recommendation.
A unique feature of practice guides is the explicit and clear delineation
of the quality and quantity of evidence that supports each claim. To this
end, we adapted a semi-structured hierarchy suggested by the Institute of
Education Sciences. This classification system helps determine whether the
quality and quantity of available evidence in support of a practice is of
strong, moderate, or low quality. This system appears in table 1 below.
Strong refers to consistent and generalizable evidence that an approach
or practice causes improved performance in math or science among girls or
that an assessment is reliable and valid. Moderate refers either to evidence
from (a) studies that allow strong causal conclusions but which cannot be
generalized with assurance to the target population because, for example,
the findings have not been sufficiently replicated, or (b) studies that are
generalizable but have more causal ambiguity than offered by experimental
designs, for example, statistical models of correlational data or group
comparison designs where equivalence of the groups at pretest is uncertain.
For assessments, moderate refers to high quality studies from a small
number of samples that are not representative of the whole population. Low
refers to expert opinion based on reasonable extrapolations from research
and theory on other topics and/or evidence from studies that do not meet
the standards for moderate or strong evidence.

Table 1. Institute of Education Sciences Levels of Evidence

Strong
In general, characterization of the evidence for a recommendation as strong requires both studies with high internal validity (i.e., studies whose designs can support causal conclusions), as well as studies with high external validity (i.e., studies that in total include enough of the range of participants and settings on which the recommendation is focused to support the conclusion that the results can be generalized to those participants and settings). Strong evidence for this practice guide is operationalized as:
• A systematic review of research that generally meets the standards of the What Works Clearinghouse (see http://ies.ed.gov/ncee/wwc/) and supports the effectiveness of a program, practice, or approach with no contradictory evidence of similar quality; OR
• Several well-designed, randomized, controlled trials or well-designed quasi-experiments that generally meet the standards of the What Works Clearinghouse and support the effectiveness of a program, practice, or approach, with no contradictory evidence of similar quality; OR
• One large, well-designed, randomized, controlled, multisite trial that meets the standards of the What Works Clearinghouse and supports the effectiveness of a program, practice, or approach, with no contradictory evidence of similar quality; OR
• For assessments, evidence of reliability and validity that meets the Standards for Educational and Psychological Testing.3

Moderate
In general, characterization of the evidence for a recommendation as moderate requires studies with high internal validity but moderate external validity, or studies with high external validity but moderate internal validity. In other words, moderate evidence is derived from studies that support strong causal conclusions but where generalization is uncertain, or studies that support the generality of a relationship but where the causality is uncertain. Moderate evidence for this practice guide is operationalized as:
• Experiments or quasi-experiments generally meeting the standards of the What Works Clearinghouse and supporting the effectiveness of a program, practice, or approach with small sample sizes and/or other conditions of implementation or analysis that limit generalizability, and no contrary evidence; OR
• Comparison group studies that do not demonstrate equivalence of groups at pretest and therefore do not meet the standards of the What Works Clearinghouse but that (a) consistently show enhanced outcomes for participants experiencing a particular program, practice, or approach and (b) have no major flaws related to internal validity other than lack of demonstrated equivalence at pretest (e.g., only one teacher or one class per condition, unequal amounts of instructional time, highly biased outcome measures); OR
• Correlational research with strong statistical controls for selection bias and for discerning influence of endogenous factors and no contrary evidence; OR
• For assessments, evidence of reliability that meets the Standards for Educational and Psychological Testing4 but with evidence of validity from samples not adequately representative of the population on which the recommendation is focused.

Low
In general, characterization of the evidence for a recommendation as low means that the recommendation is based on expert opinion derived from strong findings or theories in related areas and/or expert opinion buttressed by direct evidence that does not rise to the moderate or strong levels. Low evidence is operationalized as evidence not meeting the standards for the moderate or high levels.
3 American Educational Research Association, American Psychological Association, and
National Council on Measurement in Education (1999).
4 Ibid.
156 Diane Halpern, Joshua Aronson, Nona Reimer et al.
For each recommendation, we include an appendix that provides more
technical information about the studies and our decisions regarding level of
evidence for the recommendation. To illustrate the types of studies
reviewed, we describe one study in considerable detail for each
recommendation. Our goal in doing this is to provide interested readers
with more detail about the research designs, the intervention components,
and how impact was measured. By including a particular study, we do not
mean to suggest that it is the best study reviewed for the recommendation
or necessarily an exemplary study in any way.
OVERVIEW

Although there is a general perception that men do better than women
in math and science, researchers have found that the differences between
women’s and men’s math- and science-related abilities and choices are
much more subtle and complex than a simple “men are better than women
in math and science.” 5 In fact, experts disagree among themselves on the
degree to which women and men differ in their math- and science-related
abilities.6 A quick review of the postsecondary paths pursued by women
and men highlights the areas in math and science where women are not
attaining degrees at the same rate as men.
In 2004, women earned 58 percent of all bachelor’s degrees, 78
percent of bachelor’s degrees in psychology, 62 percent in biological
sciences, 51 percent in chemistry, 46 percent in mathematics, 25 percent in
computer sciences, 22 percent in physics, and 21 percent in engineering.7
In general, women earn substantial proportions of the bachelor’s degrees in
math and the sciences, except in computer sciences, physics, and
engineering. At the master’s level, women earned 59 percent of all
master’s degrees. The pattern at the master’s degree level is similar (see
5 See Hyde (2005), Spelke (2005), and Halpern (2000) for recent discussions of the literature.
6 See Gallagher and Kaufman (2005) for a collection of chapters representing different
researchers’ views.
7 National Science Foundation (2006d).
Encouraging Girls in Math and Science: IES Practice Guide 157
figure 1). At the doctoral level, however, gender imbalances become more
prevalent, including in math and chemistry (see figure 1). Women earned
45 percent of all doctoral degrees, but they earn less than one-third of all
doctoral degrees in chemistry, computer sciences, math, physics, and
engineering.8 In contrast, women earn 67 percent of the doctoral degrees in
psychology and 44 percent in other social sciences.9 This disproportionate
representation in math and science graduate degrees is also reflected in
math and science career pathways. While women make up nearly half of
the U.S. workforce, they make up only 26 percent of the science and
engineering workforce.10 The question many are asking is why women are
choosing not to pursue degrees and careers in the physical sciences,
engineering, or computer science. Several potential reasons for the gender
disparity include previous coursework, ability, interests, and beliefs.
Source: National Science Foundation, Science and Engineering Degrees: 1966-2004.

Figure 1. Percent of degrees awarded to women by major field.
An examination of course-taking patterns shows that girls are taking
math and science courses in high school. On the 2005 National Assessment
of Educational Progress (NAEP) High School Transcript Study, girls who
graduated from high school, on average, earned slightly more credits in
mathematics and science (7.3) than boys earned (7.1). Boys, however,
earned slightly more credits in computer-related courses (1.1) than girls
8 Ibid.
9 Ibid.
10 National Science Foundation (2006c).
earned (0.8).11 Figure 2 shows the percentages of female and male high
school graduates in 2000 that completed math and science courses.
Although a greater percentage of boys completed physics (34 percent) and
calculus (12 percent) than girls (physics, 29 percent; calculus, 11 percent),
girls were more likely to complete biology (girls, 93 percent; boys, 89
percent), advanced placement (AP) or honors biology (girls, 19 percent;
boys, 14 percent), and chemistry (girls, 66 percent; boys, 58 percent) than
boys were. Although some gender differences are present in high school
math and science course enrollments, similarities between the genders are
also common. This gender parity in course-taking patterns may be less
surprising than it appears, given that high school graduation requirements
typically include multiple science courses, as well as mathematics.
A second reason for the observed differences in college and
occupational choices may be that males and females have different math and
science abilities, as measured by standardized tests. Although girls
generally do as well as, or better than, boys on homework assignments and
course grades in math and science classes,12 boys tend to outscore girls
when tested on the same content in high-pressure situations, such as
standardized tests with time limits. These tests are typically not linked to
instructed curriculum, and so can be understood to be measures of more
general abilities in math and science.13 For example, on the 2005 NAEP
math and science assessments, girls scored lower than boys when
controlling for highest course completed at all levels, except the lowest
level (see figures 3 and 4).14 Performance differences on timed
standardized tests do not necessarily mean that girls are not as capable as
boys in math or science. Researchers have found, for instance, that SAT
math scores underpredict young women’s performance in college math
courses.15 This suggests that it is not ability, per se, that hinders girls and
women from pursuing careers in math and science. If not ability, then
what?
11 Shettle, Roey, Mordica, et al. (2007).
12 College Board (2006, August 29); Shettle, Roey, Mordica, et al. (2007).
13 See Halpern, Benbow, Geary, et al. (2007) for a more thorough discussion of this point.
14 Shettle, Roey, Mordica, et al. (2007).
15 Wainer and Steinberg (1992).
Source: U.S. Department of Education, Institute of Education Sciences, National Center
for Education Statistics, Trends in Educational Equity of Girls & Women: 2004.

Figure 2. Percent of public high school graduates who completed various mathematics
and science courses in high school, by gender: 2000.

Source: U.S. Department of Education, Institute of Education Sciences, National Center
for Education Statistics, High School Transcript Study (HSTS), 2005.
Note: Advanced mathematics courses, other than calculus, are courses generally taken
after Algebra (e.g., AP statistics and precalculus).

Figure 3. NAEP mathematics scores by highest course completed and gender: 2005.
Areas where consistent gender differences have emerged are children’s
and adolescents’ beliefs about their abilities in math and science, their
interest in math and science, and their perceptions of the importance of
math and science for their futures. In general, researchers have found that
girls and women have less confidence in their math abilities than males do
and that from early adolescence, girls show less interest in math or science
careers.16 This gender difference is interesting, and somewhat puzzling,
given that males and females generally enroll in similar courses and
display similar abilities (at least as measured by course grades). In other
words, girls, particularly as they move out of elementary school and into
middle and high school and beyond, often underestimate their abilities in
mathematics and science. However, it is important to note that not all girls
have less confidence and interest in mathematics and science, and that
girls, as well as boys, who have a strong self-concept regarding their
abilities in math or science are more likely to choose and perform well in
elective math and science courses and to select math- and science-related
college majors and careers.17 This is noteworthy because it suggests that
improving girls’ beliefs about their abilities could alter their choices and
performance. Theory and empirical research suggest that children’s beliefs
about their abilities are central to determining their interest and
performance in different subjects, the classes they choose to take, the after-
school activities they pursue, and, ultimately, the career choices they
make.18
What can teachers do to encourage girls to choose career paths in
math- and science-related fields? One major way to encourage girls to
choose careers in math and science is to foster the development of strong
beliefs about their abilities in these subjects – beliefs that more accurately
reflect their abilities – and more accurate beliefs about the participation of
women in math- and science-related careers (see table 2). Our first two
recommendations, therefore, focus on strategies that teachers can use to
16 Andre, Whigham, Hendrickson, et al. (1999); Herbert and Stipek (2005); Jacobs, Lanza,
Osgood, et al. (2002); Simpkins and Davis-Kean (2005); Wigfield, Eccles, Mac Iver, et al.
(1991).
17 Simpkins and Davis-Kean (2005); Updegraff and Eccles (1996).
18 Pajares (2006).
strengthen girls’ beliefs regarding their abilities in math and science: (1)
Teach students that academic abilities are expandable and improvable
(Level of Evidence: Moderate); and (2) Provide prescriptive, informational
feedback (Level of Evidence: Moderate). Our third recommendation
addresses girls’ beliefs about both their own abilities and the participation
of women in math- and science-related careers: (3) Expose girls to female
role models who have succeeded in math and science (Level of Evidence:
Low).
Table 2. Recommendations and corresponding Levels of Evidence to support each

1. Teachers should explicitly teach students that academic
abilities are expandable and improvable in order to enhance
girls’ beliefs about their abilities. Students who view their
cognitive abilities as fixed from birth or unchangeable are more
likely to experience decreased confidence and performance when
faced with difficulties or setbacks. Students who are more
confident about their abilities in math and science are more
likely to choose elective math and science courses in high school
and more likely to select math- and science-related college majors
and careers. (Level of Evidence: Moderate)

2. Teachers should provide students with prescriptive,
informational feedback regarding their performance.
Prescriptive, informational feedback focuses on strategies, effort,
and the process of learning (e.g., identifying gains in children’s
use of particular strategies or specific errors in problem solving).
Such feedback enhances students’ beliefs about their abilities,
typically improves persistence, and improves performance on
tasks. (Level of Evidence: Moderate)

3. Teachers should expose girls to female role models who have
achieved in math or science in order to promote positive beliefs
regarding women’s abilities in math and science. Even in
elementary school, girls are aware of the stereotype that men are
better in math and science than women are. Exposing girls to
female role models (e.g., through biographies, guest speakers, or
tutoring by older female students) can invalidate these
stereotypes. (Level of Evidence: Low)

4. Teachers can foster girls’ long-term interest in math and
science by choosing activities connecting math and science
activities to careers in ways that do not reinforce existing gender
stereotypes and choosing activities that spark initial curiosity
about math and science content. Teachers can provide ongoing
access to resources for students who continue to express interest
in a topic after the class has moved on to other areas. (Level of Evidence: Moderate)

5. Teachers should provide opportunities for students to engage
in spatial skills training. Spatial skills training is associated with
performance in mathematics and science. (Level of Evidence: Low)
Girls are more likely to choose courses and careers in math and science
if their interest in these fields is sparked and cultivated throughout the
school years.19 Our fourth recommendation focuses on the importance of
fostering long-term interest (Level of Evidence: Moderate) and provides
concrete strategies that teachers might use to promote greater interest in
math and science.
A final way to encourage girls in math and science is to help them
build the spatial skills that are crucial to success in many math- and
science-related fields, such as physics, engineering, architecture, geometry,
topology, chemistry, and biology. Research suggests that spatial skills, on
which boys have typically outperformed girls, can be improved through
specific types of training. Thus, our final recommendation is that teachers
provide students, especially girls, with specific training in spatial skills
(Level of Evidence: Low).
SCOPE OF THE PRACTICE GUIDE

This practice guide provides five recommendations for encouraging
girls in math and science. These recommendations together form a
coherent statement: To encourage girls in math and science, we need to
begin first with their beliefs about their abilities in these areas, second with
19 Wigfield, Eccles, Schiefele, et al. (2006).
sparking and maintaining greater interest in these topics, and finally with
building associated skills. Our specific recommendations cover these three
domains in a representative but not exhaustive way. In particular, we have
chosen to focus on specific recommendations that have the strongest
research backing available. In addition, we limit our focus to
recommendations that teachers can carry out in the classroom and that do
not require systemic change within a school district. We remind the reader
that students’ choices to pursue careers in math and science reflect multiple
influences that accumulate over time. We have identified practices that
elementary, middle, and high school teachers can implement during
instruction that we believe would increase the likelihood that girls and
women will not prematurely decide that careers in math and science are not
for them.

CHECKLIST FOR CARRYING OUT THE RECOMMENDATIONS

Recommendation 1: Teach Students That Academic Abilities
Are Expandable and Improvable

 Teach students that working hard to learn new knowledge leads to
improved performance.
 Remind students that the mind grows stronger with use and that
over time and with continued effort, understanding the material
will get easier.

Recommendation 2: Provide Prescriptive, Informational
Feedback

 Provide students with feedback that focuses on strategies used
during learning, as opposed to simply telling them whether they
got an answer correct. This strategy encourages students to correct
misunderstandings and learn from their mistakes.
 Provide students with positive feedback about the effort they
expended on solving a difficult problem or completing other work
related to their performance.
 Avoid using general praise, such as “good job,” when providing
feedback to individual students or an entire class.
 Make sure that there are multiple opportunities for students to
receive feedback on their performance.

Recommendation 3: Expose Girls and Young Women to Female
Role Models Who Have Succeeded in Math and Science

 Invite older girls and women who have succeeded in math- or
science-related courses and professions to be guest speakers or
tutors in your class.
 Assign biographical readings about women scientists,
mathematicians, and engineers as part of students’ assignments.
 Call attention to current events highlighting the achievements of
women in math or science.
 When talking about potential careers, make students aware of the
numbers of women who receive advanced degrees in math- and
science-related disciplines.
 Provide girls and young women with information about mentoring
programs designed to support students who are interested in
mathematics and science.
 Encourage parents to take an active role in providing opportunities
for girls to be exposed to women working in the fields of math and
science.
Recommendation 4: Create a Classroom Environment That
Sparks Initial Curiosity and Fosters Long-Term Interest
in Math and Science

 Embed mathematics word problems and science activities in
contexts that are interesting to both boys and girls.
 Provide students with access to rich, engaging, relevant
informational and narrative texts as they participate in classroom
science investigations.
 Capitalize on novelty to spark initial interest. That is, use project-
based learning, group work, innovative tasks, and technology to
stir interest in a topic.
 Encourage middle and high school students to examine their
beliefs about which careers are typically female-oriented and
which are typically male-oriented. Encourage these students to
learn more about careers that are interesting to them but that they
believe employ more members of the opposite gender.
 Connect mathematics and science activities to careers in ways that
do not reinforce existing gender stereotypes of these careers.

Recommendation 5: Provide Spatial Skills Training

 Recognize that children may not automatically recognize when
spatial strategies can be used to solve problems and that girls are
less likely to use spatial strategies than boys. Teach students to
mentally image and draw spatial displays in response to
mathematics and science problems.
 Require students to answer mathematics and science problems
using both verbal responses and spatial displays.
 Provide opportunities for specific training in spatial skills such as
mental rotation of images, spatial perspective, and embedded
figures.
RECOMMENDATION 1: TEACH STUDENTS THAT
ACADEMIC ABILITIES ARE EXPANDABLE
AND IMPROVABLE

To enhance girls’ beliefs about their abilities, we recommend that
teachers understand and communicate this understanding to students: Math
and science abilities—like all abilities—can be improved through
consistent effort and learning. Research shows that even students with
considerable ability who view their cognitive abilities as fixed or
unchangeable are more likely to experience greater discouragement, lower
performance, and, ultimately, reduce their effort when they encounter
difficulties or setbacks. Such responses may be more likely in the context
of math, given stereotypes about girls’ innate mathematics abilities.20 In
contrast, students who tend to view their abilities as expandable tend to
keep trying in the face of frustration in order to increase their performance.
To help girls and young women resist negative reactions to the difficulty of
math and science work, it can be very helpful for them to learn that their
math and science abilities can improve over time with continuous effort
and engagement.
20 See Dweck (2006) for a recent discussion focused on girls’ beliefs about intelligence.
Level of Evidence: Moderate

The panel judges the level of evidence supporting this recommendation
to be moderate, based on the two small experimental studies that
examined the effect of this practice for improving K–12 students’
performance on math,21 one experimental study that examined the effect of
this strategy for improving college students’ general academic
performance,22 and supporting correlational research demonstrating the
relation between students’ beliefs about the stability and malleability of
intellectual abilities and their performance.23
Brief Summary of Evidence to Support the Recommendation

What students believe about the nature of intelligence and ability
affects their achievement.24 Some students believe that people, in general,
are born with a fixed amount of intelligence, such that some people are
born smart and others less smart and that little can be done to change this.
Similarly, some students believe that their abilities were determined at
birth and cannot be changed. An alternative way to think about intelligence
or ability is that it is not fixed but can be improved through hard work and
effort. Research shows that children who view intelligence as a fixed trait
or believe that their own abilities cannot be changed tend to pursue
“performance goals.” That is, they tend to be more concerned with
demonstrating their intelligence and prefer to complete tasks that will show
that they are “smart.” In contrast, students who believe that intelligence or
ability can be improved with effort are more likely to pursue learning
goals.25 That is, they tend to be more concerned with learning new material
and are more likely to seek to master difficult material, even if doing so
does not make them look “smart” (e.g., they might not be able to solve
problems initially). When tasks become more challenging, students who
believe that abilities or intelligence cannot be changed are more likely to
21 Good, Aronson, and Inzlicht (2003); Blackwell, Trzesniewski, and Dweck (2007).
22 Aronson, Fried, and Good (2002).
23 Weiner (1986); Graham (1991); see Dweck (1999) for an overview of research in this area.
24 Grant and Dweck (2003).
25 Dweck (1999); Dweck and Leggett (1988).
become anxious, downgrade their assessment of their ability, and give up.
However, students who believe that abilities can be improved through
effort and hard work are more likely to respond to challenge with increased
effort. In the long run, the students who are able to persist in their attempts
to master difficult material perform better than the students who doubt
their ability and give up.26
These different orientations toward learning may have long-term
implications. For example, if a child’s goal is to look smart, she may shy
away from challenging tasks with potential for failure in favor of easier
tasks with higher potential for success. In addition, if a child believes that
intelligence or abilities are fixed, then she is likely to attribute failure to
lack of ability, and her belief in her own abilities may eventually decline.
Thus, failures or challenges can have a negative impact on children who
view intelligence or abilities as a fixed trait. In contrast, research shows
that when students are taught that intelligence and abilities can be
increased with hard work, their test scores and their grades improve.27
Finally, why is it important to foster girls’ belief in the malleability of
intellectual abilities? As discussed in the overview, girls tend to lack
confidence in their math and science abilities even when they do well in
their math and science courses. Teaching girls that knowledge and
intellectual skills increase (for example, when students learn how to solve
problems that they previously could not do) explicitly provides girls with a
way to interpret failure that does not discourage them from persevering to
master new material in class.

How to Carry Out the Recommendation

To help modify students’ beliefs about their intelligence or abilities,
teachers can:

 Expose students to and discuss the neuroscience research that
shows that brains grow new synaptic connections when new
26 Utman (1997).
27 Aronson, Fried, and Good (2002); Good, Aronson, and Inzlicht (2003); Blackwell,
Trzesniewski, and Dweck (2007).
material is learned and practiced, thus making the brain more
complex and “smarter”—that working hard to learn new
knowledge leads to improved intelligence. Sports analogies can
buttress this learning: practicing academic skills, like solving math
problems, improves performance much like practicing free throws
improves basketball performance or practicing serves helps one’s
tennis game.28
 When students are struggling, teachers can explicitly remind their
students that the mind grows stronger as a result of doing hard
work, and that over time and with continued effort, understanding
the material and solving the problems will get easier.29
 Teachers can also remind students about the malleability of
intelligence when they make progress, pointing out that their
brains are actively building new connections as they study.

Potential Roadblocks and Solutions

Roadblock 1.1. Some adults may believe that intelligence and abilities
are innate or fixed and that people who are “naturally” good at something
will excel in that domain. It can be difficult to convince students that effort
will make a significant difference when some adults seem to favor “natural
ability” explanations for success over “effort and hard work”
explanations.

Solution. Some teachers may view abilities and intelligence as static
characteristics that are fixed at birth. Neurologists used to believe this as
well, but they no longer do. Thus, teachers as well as students need to take
the neuroscience seriously and examine and modify their preconceptions
about the nature of human intelligence. Only then can they genuinely help
both students and parents understand that our brains are constantly creating
and refining new synaptic connections, based on our experiences and the
activities we regularly practice. Teachers should consider the following
28 Good, Aronson, and Inzlicht (2003).
29 Ibid.
two studies. In one, researchers found that cab drivers had enlarged
portions of the part of the brain that is important in performing spatial tasks
(right posterior hippocampus) relative to a control group of adults whose
employment required less use of spatial navigational skills.30 Among the
cab drivers in this study, the number of years spent driving taxis was
positively correlated with the size of the right posterior hippocampus. In a
related study, researchers found that in a sample of adults who were not
taxi drivers, there was no correlation between size of the posterior
hippocampus and navigational expertise.31 These two sets of findings
suggest that experience with complex route finding caused the increased
size of the relevant brain structure. This is just one example of research
that underlines the fundamental flexibility of the human brain in creating
new synaptic connections for tasks that are repeated and practiced over
time.

Roadblock 1.2. By the time students enter high school, some girls
perceive themselves to be less capable than boys in math and science. They
may believe that their abilities in this domain are not significantly
expandable or that they are innately less likely than boys to do well in
these domains.

Solution. Teachers will need to keep in mind that girls perform as well
as or even better than boys in school, including on exams and course
grades in math and science classes; boys outperform girls only when we
look at scores on advanced standardized tests. Experts disagree as to why
girls’ equal or superior classroom performance in math and science does
not carry over to their performance on high-stakes standardized tests.32
Thus, teachers can emphasize to students that high scores on advanced
standardized tests do not in the long run determine success in science- and
math-related fields. Many different skills are needed for success in these
domains, including the content knowledge gained in coursework and
through experience, as well as excellent writing and communication skills.
30 Maguire, Gadian, Johnsrude, et al. (2000).
31 Maguire, Spiers, Good, et al. (2003).
32 Gallagher and Kaufman (2005).
There is no reason to believe that girls are biologically programmed to
perform less well than boys in math- and science-related careers, and there
are many reasons to believe that women, once rare in math and science
careers, will continue to close the gap, as they have for the past several
decades. Again, the best way for teachers to respond with genuine
encouragement to girls’ doubts about their innate aptitudes in math and
science is to be clear that the male advantage in math has little grounding
in science and is limited to standardized tests. Appreciating the fact that
girls perform better on standardized math tests when they believe that their
math abilities are not fixed is one powerful way to start.

RECOMMENDATION 2: PROVIDE PRESCRIPTIVE,
INFORMATIONAL FEEDBACK

We recommend that teachers provide students with prescriptive,
informational feedback regarding their performance in math and science
courses. Prescriptive, informational feedback focuses on strategies, effort,
and the process of learning. Examples include identifying gains in
children’s strategy use, praising effort, or identifying gaps or errors in
problem-solving. Although this type of feedback overlaps with the type of
feedback that teachers provide during formative assessment, this
recommendation specifically targets feedback that focuses students’
attention on their beliefs about why they did or did not perform well on a
particular task. Prescriptive, informational feedback enhances students’
beliefs about their abilities, typically improves persistence, and improves
performance on tasks. In addition, students’ beliefs about their abilities are
related to their math- and science-related choices.33

Level of Evidence: Moderate

The panel judges the quality of the evidence on the relation between
prescriptive, informational feedback and students’ beliefs about their math
and science abilities and their performance on math- and science-related
tasks to be moderate, based on a set of small experimental studies using
random assignment that focus specifically on children performing math or
math-related tasks34 and supporting research on the effects of different
types of feedback on a variety of tasks.35 The supporting research on
feedback includes many studies that vary in terms of design, including
small experimental studies, longitudinal and cross-sectional correlational
studies, and qualitative studies. Many of the experimental studies on the
effects of different types of feedback have been conducted with children.

Brief Summary of Evidence to Support the Recommendation

Students often receive feedback regarding their performance in the
form of grades, test scores, or statements from teachers regarding the
accuracy of a response. However, all forms of feedback are not equal in
their impact on students’ beliefs about their abilities in a given domain,
such as math or science, nor in their impact on performance. In particular,
when teachers provide specific, informational feedback in terms of
strategies, effort, and the process of learning (e.g., “you worked really hard
33 See Hackett (1985) for a classic study supporting this conclusion in the context of
mathematics; Fouad and Smith (1996) discuss this relationship in middle school students.
34 Mueller and Dweck (1998); Elawar and Corno (1985); Miller, Brickman, and Bolen (1975).
35 See Bangert-Drowns, Kulik, Kulik, et al. (1991) for a synthesis of studies on feedback and
Henderlong and Lepper (2002) for a recent review on the effects of praise on children’s
intrinsic motivation.
174 Diane Halpern, Joshua Aronson, Nona Reimer et al.

at that subtraction problem”), rather than general praise (e.g., “good job”)
or feedback regarding global intelligence (e.g., “you’re smart”), students’
beliefs about their abilities and their performance are positively
influenced.36
Many teachers know that providing informational feedback helps
create a positive learning environment. Indeed, the use of classroom
formative assessment is linked to substantial learning gains.37 When
teachers give informational feedback (e.g., pointing out to a student a
specific problem in her logic rather than simply noting that the answer is
incorrect) students’ achievement and attitudes improve.38 During whole-
class instruction, when teachers combine positive comments with specific
information about how to solve a problem, students are less likely to report
that they engage in self-defeating behaviors (e.g., putting off doing their
homework until the last minute) or avoid asking for help when they don’t
understand assignments.39 In addition, research suggests that positive
substantive feedback that provides information about students’ progress
toward goals and progress in learning is related to children’s motivational
beliefs, such as their self-concept of ability and self-efficacy. An
observational study of math classrooms illustrates how including such
feedback during instruction can support students’ self-efficacy in
mathematics.40 Even though the research demonstrates the critical and
potentially powerful role that appropriate feedback can play, it does not
appear that teachers typically use prescriptive, informational feedback. In
fact, a recent descriptive study of teacher feedback used in 58 third-grade
mathematics classrooms suggests that the primary form of feedback
teachers use during instruction is general praise, such as “that’s very
good,” which does not provide any useful information to students.41

36. Mueller and Dweck (1998); Elawar and Corno (1985); Miller, Brickman, and Bolen (1975).
37. See Black and Wiliam (1998) for a recent discussion of the literature on feedback and
formative assessment.
38. Elawar and Corno (1985).
39. Turner, Midgley, Meyer, et al. (2002).
40. Schweinle, Turner, and Meyer (2006).
41. Foote (1999).
Encouraging Girls in Math and Science: IES Practice Guide 175

Experimental work suggests that feedback given in the form of praise
focused on global intelligence (e.g., “you’re smart”) may have a negative
impact on future learning behavior in comparison to praise about effort
(e.g., “you must have worked hard”).42 Elementary school students who
were given praise about their intelligence after correctly solving a problem
were likely to attribute future failures to lack of ability, have lower interest,
show less persistence on future tasks, and have a goal for future tasks of
looking smart. In contrast, children who were given praise about their
effort were more likely to believe that subsequent failure was due to lack
of effort, show higher persistence on difficult tasks, and have a goal of
mastering challenging tasks or concepts rather than just “looking smart.”
Thus, teacher feedback that attributes student success to effort (e.g.,
“you’ve been working hard”) and task-specific ability (e.g., “you did very
well at solving this division problem”) strengthens self-efficacy beliefs
about mathematics. These beliefs, in turn, influence a child’s future
persistence on difficult tasks and, ultimately, overall performance.
Finally, why is prescriptive, informational feedback important to
enhancing girls’ beliefs about their abilities? As discussed in the overview,
girls tend to lack confidence in their math and science abilities even when
they do well in math and science courses. Providing informative feedback
focuses students’ attention on what to do when they do not solve a problem
correctly rather than letting girls attribute wrong answers to a lack of
ability. When students experience success, providing informative feedback
directs their attention to what they did to achieve that success (e.g., worked
hard, tried multiple strategies, used the procedures in the correct order)
rather than allowing girls to attribute that success to having a certain
amount of ability.

How to Carry Out the Recommendation


What can teachers do to make sure that the feedback they give students
will help improve both their motivation to learn new material and their
performance, even in the face of failure?

42. Mueller and Dweck (1998).

 Provide positive, substantive feedback to students as they solve
problems to encourage students to correct misunderstandings and
learn from their mistakes.43 Teachers should create a classroom
environment in which learning, improving, and understanding are
emphasized. In such an environment, when children give an
incorrect answer, it becomes an opportunity for learning.
 Highlight the importance of effort for succeeding at difficult tasks.
Attributing success to effort rather than to global intelligence
supports expectations for future success. Praising general
intelligence implies that natural intellectual gifts determine success
(and failure) rather than effort; this can be a debilitating mindset
for students when confronted with failure on a difficult task.44
 Keep a balance between learning on the one hand and performance
on the other. Grades matter, but students who focus single-
mindedly on their grades may come to care so much about
performance that they sacrifice learning opportunities.45

Potential Roadblocks and Solutions


Roadblock 2.1. Some teachers may find it difficult to focus on effort
and strategy use rather than on performance. Too often, the attention of
many students (and sometimes their parents) is on report card grades and
exam scores. In addition, many teachers are required to assess and report
performance in terms of grades or exam scores.

Solution. Teachers can draw attention to students’ efforts when
possible. When explaining exam scores or grades on an assignment,
teachers can provide comments on effort and strategy. Teachers can
routinely comment on the combined efforts of a class as students are

43. Schweinle, Turner, and Meyer (2006); Turner, Midgley, Meyer, et al. (2002).
44. Mueller and Dweck (1998); see Foersterling (1985) for a review of research on attributional
feedback.
45. Dweck (2002); Mueller and Dweck (1998).

working on assignments or projects.46 Feedback specific to individual
students is best delivered in a one-on-one context.47 Teachers can also
design assignments that reward effort. For example, students can be
encouraged to submit drafts on which feedback can be given and then
revised versions submitted for a grade.

Roadblock 2.2. Teachers whose schedules are already stretched may
find it difficult in the course of the average school day to give each student
detailed feedback on problem-solving and strategy use.

Solution. Feedback or praise does not need to be given all the time.48 In
fact, informative feedback, and particularly praise focused on effort, should
be given only when it is genuine. Giving students praise on simple tasks
may undermine motivation. When praise is warranted, teachers can focus
on effort, using phrases such as, “you worked really hard.” Teachers can be
strategic in when and how they provide detailed informative feedback. For
example, it often is appropriate to give such feedback to an entire class
after a test or exam, especially when most students make a specific error. A
class review after an assignment or test also is a good way to provide all
students with informative feedback.

Roadblock 2.3. Many teachers rely heavily on standardized assessment
techniques, which provide little feedback and can foster a performance
rather than a learning orientation regarding scores.

Solution. Effective math and science programs provide continual,
multiple assessments of student knowledge so that appropriate adjustments
can be made throughout the year. Lessons should include formative and
summative assessments of student progress, providing feedback to students
long before the annual standardized assessments are taken. Teachers can
also use peer feedback and critique as a classroom activity—providing
clear criteria for feedback and critique to students at the outset.

46. Turner, Midgley, Meyer, et al. (2002).
47. Ward (1976).
48. Henderlong and Lepper (2002) provide a recent discussion of the research on praise.

RECOMMENDATION 3: EXPOSE GIRLS TO FEMALE ROLE
MODELS WHO HAVE SUCCEEDED IN MATH AND SCIENCE

We recommend that teachers expose girls to female role models who
have succeeded in math and science. Research demonstrates that triggering
negative gender stereotypes can create problems for girls and women on
tests of mathematics and spatial reasoning.49 Exposure to female role
models who have succeeded in math has been shown to improve
performance on math tests and to invalidate these stereotypes.50

Level of Evidence: Low

We rated the level of evidence that supports this recommendation as
low. This recommendation is based on our extrapolation of relevant
research, including four small experimental studies with college students.51
Although the experiments that support the recommendation have strong
research designs (internal validity) for supporting causal claims, these
studies were conducted with college students rather than girls from

49. Aronson (2002); Aronson and Steele (2005); Steele, Spencer, and Aronson (2002).
50. Marx and Roman (2002); McIntyre, Paulson, and Lord (2003).
51. Ibid.

kindergarten through high school and were short laboratory experiments,
rather than real-world classroom studies conducted with students over
extended periods of time. Thus, the applicability of these studies to the
effects of exposing girls to female role models in natural contexts (e.g.,
classrooms) is limited.
In addition to these studies that explicitly address the effect of
exposure to female role models on young women’s math performance and
beliefs about their math abilities, there is related research that supports this
recommendation. This research includes experimental evidence that
negative stereotypes can impede performance52 and one small, cross-
sectional observational study showing that children are aware of math-
related gender stereotypes.53

Brief Summary of Evidence to Support the Recommendation

Researchers have found that negative stereotypes can affect
performance in test-taking situations54 and have labeled this phenomenon
“stereotype threat.” Stereotype threat arises from a psychologically
threatening concern about confirming a negative stereotype, both in one’s
own eyes and in the eyes of others. For both self and others, the existence
of the stereotype fosters negative beliefs about the meaning of difficulty or
low performance—namely, that one lacks ability. Thus, when a woman is
told that her math abilities are being evaluated, she is likely to perform
worse on a standardized math test than a man with similar course grades
and performance on assignments, because of the anxiety, deficits in short-
term memory, and negative thoughts that have been shown to accompany
stereotype threat. Studies also show that stereotype threat can lead young

52. For reviews of the research, see Aronson and Steele (2005); Steele, Spencer, and Aronson
(2002).
53. Steele (2003).
54. Several experimental studies have been conducted with college students that demonstrate the
stereotype threat phenomenon. For example, see Gonzales, Blanton, and Williams (2002);
Spencer, Steele, and Quinn (1999); Steele and Aronson (1995).

adolescent girls and women to choose unchallenging problems to solve,55
lower their performance expectations,56 and devalue mathematics as a
career choice.57 Thus, negative stereotypes can impair engagement and
confident performance of girls and women in science, technology,
engineering, and mathematics.
Research also indicates that when some women take tests, a certain amount
of stereotype threat is generally operative—that is, stereotype threat is the default
unless measures are taken to counter it in the testing situation.58 It is also the case
that circumstances can be more or less threatening depending on the number of
stereotype cues in the environment. For example, when men outnumber women
in the room when taking a test, women perform worse than when they
outnumber men, and women seem to perform better still when no men are
present.59
Evidence from four random-assignment experiments indicates that
exposing women to female role models who are high-achieving or who are
perceived as math experts can mitigate the effects of stereotype threat on math
test performance.60 These studies show that even brief exposure to women
who are perceived to be experts in math can improve female students’
performance on math tests.
Although we did not identify experimental studies testing the effect of
exposure to successful role models on the math or science performance of girls,
experimental studies demonstrate that calling attention to gender decreased
performance on math tests in 12-year-old girls but not in younger girls (10-
and 11-year-olds).61 In addition, researchers have found that by age 10 or so,
simply being evaluated is enough to evoke stereotype threat with respect to
academically stigmatized ethnic minority students.62 These studies
demonstrate that stereotype threat can be a problem by the time girls reach
middle school.

55. Aronson and Good (2002).
56. Stangor, Carr, and Kiang (1998).
57. Davies, Spencer, Quinn, et al. (2002).
58. E.g., Inzlicht and Ben-Zeev (2000); Inzlicht, Aronson, Good, et al. (2006).
59. Ibid.
60. Marx and Roman (2002); McIntyre, Paulson, and Lord (2003).
61. Good and Aronson (2007).
62. McKown and Weinstein (2003).

Because the research on the effects of stereotype threat and exposure to
positive female role models has primarily been documented with adult women,
we considered related research on awareness of gender stereotypes by girls in
general. Evidence from one small, cross-sectional observational study suggests
that elementary-school-aged girls are aware of the stereotype that men are
considered to be better at math than women, but that they view girls and boys to
be equally good at math.63

Table 3. Percent of degrees awarded to women in engineering
subfields in 1966, 1985, and 2004

                                          Bachelor’s           Master’s           Doctorate
                                       1966  1985  2004     1966  1985  2004   1966  1985  2004
Aeronautical Engineering                 .3   8.4  17.8       .8   5.1  17.1     0    4.0  11.9
Chemical Engineering                     .8  23.4  35.4       .7  15.7  27.7     .5   8.1  23.9
Civil Engineering                        .4  13.8  24.2       .4  11.7  27.2     0    5.1  19.6
Electrical Engineering                   .3  11.5  14.2       .6   8.8  19.6     .4   4.9  13.5
Industrial Engineering                   .4  29.1  33.2       .5  15.5  21.3     0    6.5  19.4
Mechanical Engineering                   .2  10.5  13.6       .3   7.0  12.4     .2   5.1  11.1
Materials & Metallurgical Engineering    .9  22.4  31.2       .8  15.8  24.9     .9  10.6  17.7
Other Engineering Subfields              .8  16.0  29.9       .8  11.5  24.7     .4   6.3  24.9

Source: National Science Foundation. Division of Science Resources Statistics. 2006. Science
and Engineering Degrees: 1966-2004.

Although the negative stereotype that women cannot perform as well
as men in math and science exists and can affect students’ performance,
the stereotype is not necessarily accurate. As we indicated in the overview,
women earn a substantial proportion of the bachelor’s and master’s degrees
in math and in many science disciplines (see figure 2). Even in
engineering, the proportions of degrees that women now earn have
substantially changed over the last 40 years.64 Table 3 shows the percent of
degrees awarded to women by engineering subfields in 1966, 1985, and

63. Steele (2003).
64. National Science Foundation (2006b).

2004. In 1966, women earned less than 1 percent of the bachelor’s degrees
in any engineering subfield. By 2004, women earned about one-third of the
bachelor’s degrees in chemical engineering (35 percent), industrial
engineering (33 percent), and materials and metallurgical engineering (31
percent). Exposing girls to female role models may help negate the
stereotype and encourage more girls to pursue math- and science-related
careers.
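The magnitude of the shift can be checked directly against the Table 3 figures. The short Python sketch below is purely illustrative (the dictionary and variable names are ours, not part of the guide): it hard-codes the bachelor’s-degree percentages for 1966 and 2004 from Table 3 and computes each subfield’s gain in percentage points.

```python
# Illustrative only: women's share (percent) of bachelor's degrees by
# engineering subfield, copied from Table 3 (NSF, Science and Engineering
# Degrees: 1966-2004). Each pair is (1966 share, 2004 share).
bachelors = {
    "Aeronautical": (0.3, 17.8),
    "Chemical": (0.8, 35.4),
    "Civil": (0.4, 24.2),
    "Electrical": (0.3, 14.2),
    "Industrial": (0.4, 33.2),
    "Mechanical": (0.2, 13.6),
    "Materials & Metallurgical": (0.9, 31.2),
    "Other Subfields": (0.8, 29.9),
}

# Gain in percentage points over the 38-year span.
gains = {field: round(pct_2004 - pct_1966, 1)
         for field, (pct_1966, pct_2004) in bachelors.items()}

# Print subfields from largest to smallest gain.
for field, gain in sorted(gains.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{field}: +{gain} percentage points")
```

Every subfield shows a double-digit gain, with chemical and industrial engineering gaining the most, consistent with the roughly one-third shares the text cites for 2004.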

How to Carry Out the Recommendation


To counteract the negative stereotypes regarding women’s math and
science abilities, teachers should provide exposure to female role models
who are experts in math and science fields. Experimental studies with
college women indicate that learning about women who have achieved
success in math or science can help attenuate the effects of negative
stereotypes.65 Teachers can expose students to female role models in a
number of ways:

 Assign biographical readings about women scientists,
mathematicians, and engineers.
 Call attention to current events highlighting the achievements of
women in math or science.

When talking about potential careers, make students aware of the
numbers of women who receive advanced degrees in math- and science-
related disciplines. The National Science Foundation publishes statistics on
women in the sciences, math, and engineering each year. This information
is available on its website (www.nsf.gov/statistics).
In addition, we suggest that teachers invite women or older students
who can serve as role models in math or science to be guest speakers or
tutors. Learning about role models who have achieved in math or science,
whether through biographies or personal conversations:

65. Marx and Roman (2002); McIntyre, Paulson, and Lord (2003).

 Teaches students that struggle and eventual success are normal.
This knowledge seems to reduce anxiety and boost motivation
when the student encounters challenges in work related to science,
technology, engineering, or mathematics. A role model who
communicates this may serve as a greater inspiration to persist
through difficulty than someone for whom achievement appears
effortless.66
 Conveys to students that becoming good at math or science takes
hard work and that self-doubts are a normal part of the process of
becoming expert at anything worthwhile.67

At least some female role models should be “attainable.” Research
supports the idea that older students who overcame initial difficulty with
hard work and eventually became high performers can be effective role
models.68 A famously gifted female engineer for whom math and science
always came easily and naturally is not always the best role model because
she can be written off as the exception—a rare case who triumphed
through talent rather than hard work.
In addition, many mentoring programs have been created for young
women in an attempt to provide them with role models and foster their
interest in mathematics and science.69 Mentoring is a broad term used to
describe formal and informal programs in which mentors, who are people
with expertise in a field, help people develop and accomplish their
educational and career goals. Little rigorous research exists assessing
mentors’ effectiveness in math and science per se, but mentoring programs
may provide many high-school-age girls with exposure to and connections
with a woman who has succeeded in math and science. Some promising
research does illustrate the effects of mentoring programs in
increasing the numbers of minority students pursuing advanced degrees in
science, technology, mathematics, and engineering.70 In addition, rigorous

66. Ibid.
67. Ibid.; Wilson and Linville (1985).
68. Good, Aronson, and Inzlicht (2003).
69. Building Engineering and Science Talent (2004).
70. Maton and Hrabowski (2004); Summers and Hrabowski (2006).

experimental research has shown that mentors can positively affect young
adolescents’ behaviors (e.g., school attendance, drug and alcohol use).71
Teachers may choose to support a young girl’s interest in math or science
by helping her to find a suitable mentoring program.

Potential Roadblocks and Solutions


Roadblock 3.1. School hours alone may not provide enough
opportunities for girls to be exposed to female role models, particularly if
many of the science and math teachers in a given school are male.

Solution. Teachers need to encourage parents to take an active role in
providing opportunities for girls to be exposed to women working in the
fields of math and science. Teachers might encourage parents to sign up
girls for activities where women work in math- and science-related careers,
such as at an aquarium, a hospital, or a scientific laboratory. In addition,
several national organizations provide conferences designed to encourage
girls’ participation in math and science careers.

Roadblock 3.2. Many girls see math- and science-related careers as
male oriented. If girls believe that careers in mathematics and science are
nontraditional choices for women, they may be less likely to take advanced
courses in math and science in high school or to choose a college path that
leads to careers in math and science.

Solution. It is certainly true that the stereotype of the high-achieving
woman in math is a nontraditional one, and media images do not help
matters.72 To counteract this stereotype, teachers can call attention to the fact
that many women today are becoming mathematicians and scientists. The
National Science Foundation publishes current statistics on the numbers of
women and men receiving bachelor’s, master’s, and doctoral degrees in
math and science.73

71. Tierney and Grossman (2000).
72. E.g., Davies, Spencer, Quinn, et al. (2002).
73. See National Science Foundation at http://www.nsf.gov/statistics/wmpd/ for current statistics
on the numbers of women and men receiving degrees in math and science.

RECOMMENDATION 4: CREATE A CLASSROOM
ENVIRONMENT THAT SPARKS INITIAL CURIOSITY
AND FOSTERS LONG-TERM INTEREST IN MATH
AND SCIENCE

To encourage more girls to choose careers in the fields of science,
technology, engineering, and math, we recommend that teachers use
strategies designed to generate initial interest in specific math and science
activities and build on this initial interest to foster sustained interest in
math and science content. Few teachers would be surprised by research
indicating that students’ interest is linked to academic performance and
choices for both girls and boys. When students are interested in
mathematics and science, they tend to get better grades in mathematics and
science, take more advanced mathematics and science courses, and are
more likely to pursue mathematics- and science-related college majors.74

Level of Evidence: Moderate

A number of studies have shown that students’ interest predicts both
concurrent and long-term performance in and choice of math and science

74. Simpkins, Davis-Kean, and Eccles (2006); Updegraff and Eccles (1996).

courses, majors, and careers.75 In addition, several small-scale
experimental studies have examined the effect of specific strategies (e.g.,
using practical problems as a context for learning math or science skills) on
students’ interest and learning. The panel judges the quality of the evidence
supporting this recommendation to be moderate, based on five small-scale
experiments that focus specifically on children learning in math and
science contexts76 as well as supporting experimental studies on interest in
other domains or with adult populations and correlational studies of
students’ interest and course-taking or career choices.77

Brief Summary of Evidence to Support the Recommendation


Researchers have found gender differences in students’ interests, with
boys typically more interested in activities and careers involving scientific,
technical, and mechanical pursuits and girls more interested in activities
and careers that involve social and artistic pursuits.78 These differences in
interests are apparent by middle school and are reflected in undergraduate
degree and career choices. Even highly mathematically able youth who
were identified in adolescence and followed longitudinally showed gender
effects in their choice of degrees. The ratio of male to female
undergraduate degrees in math and science is 2.4:1.79
To make math and science content more interesting to students, it is
helpful to think about what “interest” means. In research on learning, the
word interest is used in two different ways. The first, which might be
called long-term or individual interest, is a relatively stable and personal
preference toward certain types of activities. A second kind of interest,
which might be called curiosity or situational interest, is a more immediate
response to particular aspects of situations or problems.80 It is true, as
noted above, that gender differences have been found in students’ long-term
interests. But many expert teachers realize that an important way to

75. Ibid.
76. Cordova and Lepper (1996); Parker and Lepper (1992); Ginsburg-Block and Fantuzzo (1998);
Turner and Lapan (2005); Phelps and Damon (1989).
77. For a recent review of this literature, see Hidi and Renninger (2006).
78. Lapan, Adams, Turner, et al. (2000).
79. Webb, Lubinski, and Benbow (2002).
80. Mitchell (1993); Hidi and Renninger (2006).

cultivate students’ long-term interests in math and science is to build upon
their initial curiosity. Research (as well as intuition) suggests that curiosity
can serve as a hook to engage students in math and science content.81 Once
students’ interest in a topic or content area is sparked, teachers can then
build on that curiosity, providing students with opportunities to engage
with interesting material and potentially transforming that initial curiosity
into long-term interest.

How to Carry Out the Recommendation


Teachers can take several actions to create a classroom environment
that sparks initial interest and may foster long-term interest.82 To spark
initial interest, teachers can provide students the opportunity to see how the
skills and knowledge they are learning in mathematics and science classes
can be used to solve interesting and meaningful problems. For example,
teachers can:

 Embed mathematics word problems and science activities in
interesting contexts.83 For elementary school children, embedding
mathematical practice in fantasy contexts (e.g., saving a planet
from an alien invasion, searching for buried treasure or criminals)
has been found to be highly motivating and interesting and has led
to improved performance.84 For middle school children,
embedding mathematical practice in the contexts of real-world
problems (e.g., figuring out how to build an effective skateboard
ramp given a limited budget or a hovercraft that “flies” multiple
students) can be motivating and can support learning.85
 Provide students with access to rich, engaging, relevant
informational and narrative texts as they participate in classroom
science investigations. For example, during weeklong series of

81. Hidi and Renninger (2006).
82. Mitchell (1993); Hidi and Renninger (2006).
83. Renninger, Ewen, and Lasher (2002).
84. Cordova and Lepper (1996); Parker and Lepper (1992).
85. Bottge, Rueda, Serlin, et al. (2007).

science investigation activities where elementary school children
learn about owls and birds and have the opportunity to dissect owl
pellets and discuss what they find, the children are also asked to
read from both informational and narrative texts that focus on
owls. Embedding the reading activities into the context of science
investigation serves to generate situational interest and further
curiosity.86
 Use project-based learning, group work, innovative tasks, and
technology to stir interest in a topic. Using web-based
presentations of content and animated presentations of changes
that occur during chemical bonding supports student exploration of
key chemical concepts and is associated with improved learning.
Other research shows that when students are asked to solve the
series of problems presented in a video-based adventure as
members of teams, they outperform their counterparts who are
asked to solve the problems individually.87
 Examine the large variety of tools designed for teachers to use with
girls and young women to spark initial curiosity about science and
mathematics content. Many of these tools incorporate the
principles that have been discussed in this practice guide. A listing
of these tools is published in a volume entitled New tools for
America’s workforce: girls in science and engineering and can be
downloaded at http://www.nsf.gov/publications/.88

Teachers can capitalize on that initial situational curiosity by providing
students with opportunities to deepen their knowledge and understanding
of particular math and science content and to broaden their understanding
of how what they are learning today connects to their future goals.
Teachers should also consider providing students with access to female
role models who have been successful in mathematics and science careers
to nurture long-term individual interest (see Recommendation 3).

86. Guthrie, Wigfield, Humenick, et al. (2006).
87. Barron (2000); Linn, Lee, Tinker, et al. (2006); Phelps and Damon (1989); Kaelin, Huebner,
Nicolich, et al. (2007).
88. National Science Foundation (2006a).

Encourage middle and high school students to examine their beliefs
about which careers are typically female-oriented and which are typically
male-oriented. Encourage these students to learn more about careers that
are interesting to them but that they believe employ more members of the
opposite gender. Connect mathematics and science activities to careers in
ways that do not reinforce existing gender stereotypes of these careers.89

Potential Roadblocks and Solutions


Roadblock 4.1. Teachers may be reluctant to incorporate aspects of
youth culture into a lesson because they lack familiarity with students’
interests. Teachers may be concerned that they might look silly by
connecting to something long out-of-date in students’ minds.

Solution. Certainly it is a challenge to follow cultural trends and fads in
students’ interests; it is likely that students do not expect their teachers to
be able to keep up and may even be surprised when one is able to do so.
However, connecting to trends in popular culture is only one potential
strategy for arousing students’ curiosity. Teachers can also seek to connect
math and science content with other contexts that are of interest to students
of all ages and generations, including history and current events. In
addition, incorporating hands-on activities, group work, and technology
into lessons can make the content more interesting for students.

Roadblock 4.2. Teachers may be reluctant to incorporate interesting
activities because of the belief that some activities might distract students
and not serve the goals of the lesson.

Solution. Teachers should not interpret this recommendation as
suggesting that all activities in math and science classes should take the
form of games or fun activities. While games can be fun and exciting for
students, it is also true that some games and activities that are not tightly
linked to learning objectives may indeed distract students and be
counterproductive to learning. Teachers should strive to achieve a balance
between enhancing activities so that they incorporate some material that
sparks interest while ensuring that content learning remains the primary
goal of the lesson.

89 Turner and Lapan (2005); Ji, Lapan, and Tate (2004); Lapan, Adams, Turner, et al. (2000).

190 Diane Halpern, Joshua Aronson, Nona Reimer et al.

Roadblock 4.3. Teachers may be hesitant to use time-intensive methods
such as hands-on activities and group work, given the pressure that many
feel to keep up with curriculum pacing guides.

Solution. It is true that creating interesting and innovative tasks is
time-consuming on many levels: increased preparation time for teachers, more
time needed to cover the material in class, and frequently more time for
teachers to grade or evaluate students’ work. Teachers need to achieve a
balance between incorporating interesting tasks and lesson formats and
covering the required curriculum. But, despite increased time demands,
using interesting tasks and instructional formats may be worth the extra
effort: Research indicates a strong link between interest and academic
performance, for both girls and boys. In addition, as a teacher and her
students become more accustomed to doing things a bit differently (e.g.,
using group work or using project-based learning), the extra effort required
to implement these kinds of activities can decrease.

RECOMMENDATION 5: PROVIDE SPATIAL SKILLS TRAINING

We recommend that teachers provide spatial skills training for girls.
Researchers have found that spatial skills are associated with performance
on math tests and that spatial skills can be improved with practice on
certain types of tasks.90

Encouraging Girls in Math and Science: IES Practice Guide 191

Level of Evidence: Low

We consider the level of evidence to support this recommendation to
be low because of the lack of experimental or rigorous quasi-experimental
studies that directly examine the effects of spatial skills training on girls’
math or science performance. Our recommendation to improve math and
science performance through spatial skills training represents an
extrapolation based on the broader relevant research literature.91

Brief Summary of Evidence to Support the Recommendation


When we look at girls’ math and science performance in school, we
see that they are doing well on tests that are closely related to the
curriculum they are taught in school. As noted earlier, girls, on average,
graduate from high school with slightly more math and science classes
than boys and with higher grades. Girls now obtain almost half of all
undergraduate degrees in mathematics and the majority of degrees in
health-related fields. But, as noted at the beginning of this guide, girls and
women are underrepresented in physics, computer science, engineering,
and chemistry at the undergraduate level, and their numbers fall off as they
move beyond the undergraduate degree.92
We find the largest between-gender differences with standardized
examinations that do not match any particular curriculum. We are referring
to the standardized tests that are used for admission to undergraduate and
graduate programs and for state and international comparisons. As already
discussed, one possible reason that girls and women perform less well on
these tests in mathematics and science is stereotype threat, but the
differences are found on specific items on these tests, which suggests that

90 Doolittle (1989); Newcombe (2002); McGraw, Lubienski, and Struchens (2006).
91 Ben-Chaim, Lappan, and Houang (1988); Piburn, Reynolds, McAuliffe, et al. (2005).
92 A summary of research can be found in Halpern, Benbow, Geary, et al. (2007).

there are gender differences in how certain types of questions are solved or
not solved. For example, numerous researchers have found that when math
items are highly spatial in nature, boys solve more of these questions
correctly.93 Consider the conclusion from a study of 24,000 ninth-graders,
which showed that males perform better on items that require significant
spatial processing and females outperform males on items requiring
memorization.94 Additional support comes from an analysis of the
mathematical test questions that showed the largest gender differences
favoring males on an international math assessment.95 The items included
calculating the height of a mountain, calculating the distance between two
intercepts on a plane, calculating the length of a string, calculating the
perimeter of a polygon, and other similar problems that are spatial in
nature. There is evidence that gender differences in math problem-solving
strategies begin as early as first grade, with girls using more overt
strategies, such as counting, and boys using more conceptual spatial
strategies.96
A large research literature shows that boys outperform girls on many
tests of spatial skills, especially ones that require visualizing what an object
will look like when it is rotated in space.97 Researchers have established
that spatial skill performance is correlated with performance in
mathematics and science.98 For example, researchers have found that
kindergarteners’ ability to perceive and discriminate among various shapes
and geometric forms predicts their later performance in fourth-grade
math.99 Scores on a spatial visualization test, for example, have correlated
with subsequent test scores in geology.100 Other evidence supporting the
idea that spatial abilities are important in math and science was provided

93 E.g., Bielinski and Davison (2001); Doolittle (1989); Gallagher, De Lisi, Holst, et al. (2000); Geary, Saults, Liu, et al. (2000); Gierl, Bisanz, Bisanz, et al. (2003).
94 Gierl, Bisanz, Bisanz, et al. (2003).
95 Casey, Nuttall, and Pezaris (2001).
96 Carr, Jessup, and Fuller (1999).
97 E.g., Battista (1990); Bielinski and Davison (2001); Doolittle (1989); Harris and Carlton (1993); McGraw, Lubienski, and Struchens (2006).
98 Casey, Nuttall, and Pezaris (2001); Kurdek and Sinclair (2001); Piburn, Reynolds, McAuliffe, et al. (2005).
99 Kurdek and Sinclair (2001).
100 Piburn, Reynolds, McAuliffe, et al. (2005).

by researchers who found that scores on a test of mental rotation, which
measures how well students can visualize an irregular shape when it is
rotated in space, accounted for a large proportion of the male-female
difference on standardized examinations.101 Additionally, geometry items
comprise close to one-third of the questions on the math portion of the
SAT, and the largest differences between males and females are found on
geometry items.102 Spatial skills, therefore, figure importantly in the male-
female test score gap.103
In mathematics, researchers have studied the way that representations
affect the learning of complex mathematical ideas. For example, in a study
of the strategies used to solve mathematical problems, researchers found
that overall, the male students were more likely than female students to use
a flexible set of general strategies and more likely to solve problems
correctly when the solution required a spatial representation, a short cut, or
the maintenance of information in spatial working memory.104
Several studies have documented that women who undergo specific
spatial skills training programs show improved performance in the specific
domain of their training.105 In one study, for example, researchers designed
and implemented a course to improve the spatial visualization skills of
first-year engineering students.106 In this course, students learned effective
strategies for mentally representing objects, and for using graphs,
diagrams, charts, and maps as tools for thinking about topics in science and
mathematics. Notably, retention in the engineering program for female
engineering students who took the spatial visualization course was 77
percent, whereas retention was only 47 percent among those who did not
take the course. Although both females and males participated in the
spatial training course, more females scored low on tests of spatial
visualization at the start of the program, and females showed greater gains
in retention in the engineering program as well as in grades. Other

101 Casey, Nuttall, and Pezaris (2001).
102 Harris and Carlton (1993).
103 E.g., Bielinski and Davison (2001); McGraw, Lubienski, and Struchens (2006).
104 Gallagher, De Lisi, Holst, et al. (2000).
105 Halpern (2000).
106 Sorby (2001).

researchers examined the effects of spatial training when using information
about maps and found that when students were taught how to project lines
of longitude and latitude and to visualize the curve of the Earth’s surface,
their map skills improved.107
Several studies, including a few experiments using random
assignment, have directly examined spatial skills training in school-age
children. In these studies, children who received training in a specific
spatial skill (e.g., mental rotation, spatial perspective, embedded figures)
improved their performance on related visual and spatial measures more
than children who did not receive this training.108 In a recent synthesis of
the spatial skills training literature, such training was found to improve the
visual and spatial skills of both children and adults.109 Thus, targeted
training can serve to improve spatial skills performance beginning in early
childhood and continuing into adulthood.
To reiterate, the mathematical test items that show the greatest
difference favoring boys are spatial in nature. Spatial skills can be
improved with training. When spatial skills training was given to college
students in engineering, grades in courses that use spatial skills improved
and retention in engineering programs, especially for women students,
improved. In addition, spatial ability predicted which courses high school
students liked best (with math and science courses positively related to
spatial ability) and the careers in which students were employed when they
were in their 30s (with spatial ability positively related to careers in math,
science, and engineering).110

How to Carry Out the Recommendation


Teachers can provide spatial skills training with a variety of age-
appropriate activities. In particular, teachers can:

107 Muehrcke and Muehrcke (1992).
108 Connor, Schackman, and Serbin (1978); DeLisi and Wolford (2002).
109 Marulis, Liu, Warren, et al. (2007).
110 Shea, Lubinski, and Benbow (2001).

 Encourage young girls to play with toys that require the
application of spatial knowledge, such as building toys.111
 Teach older girls to mentally image and draw mathematics or other
assignments so that they become as comfortable with spatially
displayed information as they are with verbal information.112
 Require answers that use both words and a spatial display.113
 Provide opportunities for specific training in spatial skills, such as
mental rotation of images, spatial perspective, and embedded
figures.114

Potential Roadblocks and Solutions


Roadblock 5.1. Some teachers may not feel prepared to provide spatial
skills training.

Solution. Many materials are available to help teachers include spatial
skills in their everyday curriculum. Free software and lesson plans for grades
K to 12 are available on websites dedicated to the topic of spatial skills
training. In addition, published materials are available offering ideas to
teachers for incorporating spatial skills training in their everyday
curriculum.115

Roadblock 5.2. Learners in any classroom will vary in their spatial
abilities, making it difficult for teachers to know how to target their
teaching in this domain.

Solution. Tools and lesson plans available on the Web can be used by
learners at different levels of ability. In addition to published materials,
grade-specific teacher workshops, ready-to-use sample exercises, and
online training programs could also help teachers begin to plan lessons.

111 Deno (1995).
112 Gerson, Sorby, Wysocki, et al. (2001).
113 Casey, Nuttall, and Pezaris (2001).
114 Sorby and Baartmans (2000).
115 Casey (2003).

CONCLUSION

To conclude, let us revisit our recommendations. What do we
recommend that teachers do to encourage girls and young women to
choose career paths in math- and science-related fields? One major way is
to foster beliefs that more accurately reflect girls’ abilities in these
subjects, along with more accurate beliefs about the participation of
women in math- and science-related careers. Our first two
recommendations, therefore, focus on
strategies that teachers can use to strengthen girls’ beliefs regarding their
abilities in math and science: (1) Teach students that academic abilities are
expandable and improvable; and (2) Provide prescriptive, informational
feedback. Our third recommendation addresses girls’ beliefs about both
their abilities and the participation of women in math- and science-related
careers: (3) Expose girls to female role models who have succeeded in
math and science.
In addition to beliefs about abilities, girls are more likely to choose
courses and careers in math and science if their interest in these fields is
sparked and cultivated throughout the school years.116 Our fourth
recommendation focuses on the importance of fostering both situational and
long-term interest in math and science, and provides concrete strategies that
teachers can use to do so.
In addition to beliefs and interests, a final way to encourage girls in math
and science is to help them build the spatial skills that are crucial to success in
many math- and science-related fields, such as physics, engineering,
architecture, geometry, topology, chemistry, and biology. Research suggests
that spatial skills, on which boys have typically outperformed girls, can be
improved through specific types of training. Thus, our final recommendation
is that teachers provide students, especially girls, with specific training in
spatial skills.

116 Wigfield, Eccles, Schiefele, et al. (2006).

APPENDIX: TECHNICAL INFORMATION ON THE STUDIES

Recommendation 1: Teach Students That Academic Abilities Are Expandable and Improvable

Level of Evidence: Moderate


The panel rated the level of evidence as Moderate. We were able to
locate two small experiments demonstrating support for the practice of
improving students’ performance by teaching them that academic abilities
are expandable and improvable.117 In the first study, 138 girls and boys in
seventh grade participated in a study that examined the effect of teaching
students about the expandability of their abilities on end-of-year math and
reading achievement test scores.118 The second study was conducted with
95 girls and boys in seventh grade and examined the effect of such
instruction on students’ end-of-semester math grades.119 In addition, we

117 Blackwell, Trzesniewski, and Dweck (2007); Good, Aronson, and Inzlicht (2003).
118 Good, Aronson, and Inzlicht (2003).
119 Blackwell, Trzesniewski, and Dweck (2007).

identified a small experiment with 79 female and male college students
that demonstrated support for this practice improving students’ end-of-term
grade point averages.120 All three studies included students from ethnically
and socioeconomically diverse backgrounds.
These studies share the following features: Students or groups of
students were randomly assigned to treatment and control conditions,
where some were taught that intelligence is malleable and abilities are
expandable, while others were not taught this but spent an equal amount of
time engaged in learning something new. In all three studies, students in
the group that was taught that abilities are expandable performed better
than students in the control condition.

Example of an Intervention That Teaches Students That Academic Abilities Are Expandable and Improvable
In our example study,121 shortly after the school year began, 138
seventh-graders (both boys and girls) were randomly assigned a mentor,
with whom they communicated throughout the school year. The mentors
were 25 college students who were completing a required mentor-training
course designed by the school district. The mentors’ role was to offer
advice to their assigned students regarding study skills and the transition to
junior high, explicitly teach one of four messages, and help the students
design and create a web page for their computer skills class in which the
students advocated the experimental message conveyed to them by the
mentor throughout the year.
The participating seventh-graders were randomly assigned to one of
four conditions, representing a message taught by a mentor: (a) the
incremental condition, in which students learned that intelligence is an
expandable capacity that increases with mental effort; (b) the attribution
condition, in which participants learned that many students experience
difficulty when they move to a new educational level (such as junior high),
but then improve their performance once they are familiar with their new
environment; (c) a combined condition, in which students learned both the

120 Aronson, Fried, and Good (2002).
121 Good, Aronson, and Inzlicht (2003).

incremental and attribution messages; and (d) an anti-drug control condition,
in which students learned about the dangers of drug use. Each mentor had about
six randomly assigned students across three of the four messages.
Students’ math achievement was measured at the end of the school
year using the Texas Assessment of Academic Skills, a statewide
standardized achievement test administered to all students in the district.
Study results indicate that the boys in the anti-drug condition
performed significantly better on the math test than the girls; that is, the
math score mean for the boys was 81.55, but the mean math score for girls
was 74.00. However, in the three other conditions, the gender gap in math
performance disappeared. The math test scores of the girls in the
experimental conditions were statistically equivalent to the scores of the
boys across all conditions (i.e., in the incremental condition, the average
math score for males was 85.25 and for females was 82.11; in the
attribution condition, the average math score of both boys and girls was
84.53; in the combined condition, the average math score for females was
84.06 and for males was 82.30). Importantly, the large difference in test
scores between boys and girls that was found in the control condition (i.e.,
anti-drug message) was eliminated in the three experimental conditions.
The experimental manipulations were particularly beneficial for the female
students and demonstrated significant effects closing the gender gap in
performance, indicating that the intervention procedures meaningfully
increased girls’ math scores compared to the control condition.

RECOMMENDATION 2: PROVIDE PRESCRIPTIVE, INFORMATIONAL FEEDBACK

Level of Evidence: Moderate

The panel rated the level of evidence as Moderate. We were able to
locate one high-quality study, which presented a series of six random
assignment experiments demonstrating support for the practice of
providing K–12 students with prescriptive, informational feedback as a
way to improve student motivation and performance.122 Two additional
classroom-based experimental studies provide support for providing
prescriptive, informational feedback during mathematics instruction.123 We
also drew on a recent substantive review of the literature that discusses the
effects of praise on children’s intrinsic motivation.124 In addition, two high-
quality longitudinal studies demonstrated the link between students’ self-
efficacy beliefs and their math- and science-related choices.125
In one of the longitudinal studies, researchers used a large random
sample of students in 10 different school districts in southeastern Michigan
to test the utility of students’ math-related self-beliefs to explain variation
in math course-taking.126 Students examined in this study (n = 1,039) were
followed from 9th to 12th grade, and were primarily from middle and
lower middle class European American backgrounds. Self-concept of
ability in math, utility of math, and interest in math were measured in the
ninth grade using questionnaires. Grades in math, standardized test scores
in math, specific course enrollment choices, and the number of math
classes taken throughout high school were gathered from school record
data. Findings indicated a strong and significant association between self-
concept of math ability and performance in mathematics. In addition, the
perceived utility of math had the strongest and most consistent association
with the number of high school math courses taken, with boys perceiving
math as having significantly greater utility than girls.
In the second longitudinal study, researchers analyzed data from a
sample of 227 youths from 5th through 12th grade to address longitudinal
associations among students’ math- and science-related activities, their
math- and science-related beliefs, and the math and science courses taken
throughout high school.127 The researchers focused on the physical
sciences. Results indicated that youths’ participation in out-of-school math

122 Mueller and Dweck (1998).
123 Elawar and Corno (1985); Miller, Brickman, and Bolen (1975).
124 Henderlong and Lepper (2002).
125 Simpkins, Davis-Kean, and Eccles (2006); Updegraff and Eccles (1996).
126 Updegraff and Eccles (1996).
127 Simpkins, Davis-Kean, and Eccles (2006).

and science activities during fifth grade significantly predicted their math
and science beliefs at sixth grade (e.g., math and science self-concepts,
perceptions of math and science importance, and interest in math and
science). These beliefs measured at 6th grade predicted beliefs at 10th
grade, and the 10th-grade beliefs predicted the number of high school
courses students took, even after taking into account the predictive power
of their math and science grades in 10th grade.
Together, these two studies suggest a strong relation between the math
and science courses that students choose to take in high school and their
beliefs regarding their abilities and interest in these subjects, as well as
their perception of the importance of these subjects. Although these studies
have high external validity, they do not answer questions regarding the
direction of causality between students’ self-concepts in math and science
and their performance in these areas. The third study128 presents a series of
six randomized controlled experiments that demonstrate that performance
on a math-related skill can be improved by manipulating fifth-graders’
attributions regarding success and failure, thus increasing their beliefs
regarding the likelihood that they will succeed on future related tasks.

Example of an Intervention That Uses Prescriptive, Informational Feedback
In a series of six experiments, the impact of ability versus effort
feedback on fifth-graders’ subsequent performance on the Standard
Progressive Matrices,129 designed to measure ability to form perceptual
relations and to reason by analogy, was examined. The Standard
Progressive Matrices is a nonverbal measure that correlates strongly with
the Stanford Binet and Wechsler Scales of intelligence. In addition, studies
of the Wechsler Preschool and Primary Test of Intelligence (Revised) show
that math performance is moderately related to perceptual organization for
every age group sampled. Thus, the Standard Progressive Matrices
measures perceptual relations skills that are related to mathematics
performance.

128 Mueller and Dweck (1998).
129 Ibid.

The number of participating fifth-graders for each of the six studies
was: 128, 51, 88, 51, 46, and 48. The ethnically and socioeconomically
diverse students (both boys and girls) were drawn from public schools in
the Midwest and the Northeast. Students were randomly assigned to one of
three conditions: (a) ability or intelligence praise; (b) effort praise; and (c)
a control condition in which students received a general statement of praise
with no attributions regarding either ability or effort.
In this series of studies, students were asked to do three sets of
Standard Progressive Matrices. Following the first set, students received
either positive praise regarding their intelligence or ability; positive praise
regarding their effort; or nonspecific praise that did not offer an attribution
for success. Students then completed a second set of matrices, following
which they were told that they had performed “a lot worse” on this set than
they had on the first set. After completing items measuring dimensions
such as desire to persist on the problems, enjoyment of the problems, and
attributions for their failure, students were then given a third and easier set
of matrices to complete, thus yielding a measure of post-failure
performance.
Findings from this series of studies indicate that fifth-graders who
were praised for ability performed less well following subsequent failure,
whereas students who were praised for effort did not. Following failure,
students praised for effort rather than ability also showed greater task
persistence and enjoyment and greater orientation toward mastery learning.
Students in the control condition typically had scores between those of
students in the ability-praise and effort-praise conditions.
In this series of studies, several alternative explanations were ruled out,
including the possibility that: (a) praise for ability might have led children
to have higher expectations for future performance, which might then have
led to greater disappointment following failure; (b) having the same
teacher administer both positive and negative feedback following task
completion might have increased the desire of ability-praised children to
perform well on the second set of matrices; and (c) ability-praised children
might have interpreted the task as an intelligence test, thus leading to
greater disappointment following failure. With these competing

explanations ruled out through experimental manipulation, the authors
conclude that ability-focused praise led the fifth-graders to view
performance as an index of ability, resulting in decreased motivation and
performance following failure. Students who were praised for effort rather
than ability did not show these post-failure deficits in motivation and
performance.

RECOMMENDATION 3: PROMOTE POSITIVE BELIEFS REGARDING WOMEN’S ABILITIES BY EXPOSING GIRLS TO FEMALE ROLE MODELS WHO HAVE SUCCEEDED IN MATH AND SCIENCE

Level of Evidence: Low

The panel rated the level of evidence that supports this
recommendation as Low, based on four small experimental studies with
college students. Although the experiments that support the
recommendation have strong internal validity for supporting causal claims,
these studies were conducted with college students rather than girls in
kindergarten through high school and were short laboratory experiments,
rather than real-world classroom studies conducted with students over
extended periods of time. Thus, the generalizability of these studies to the
effects of exposing girls to female role models over, for example, the
course of a school year is limited.
In addition to these studies that explicitly address the effect of female
role models on young women’s math performance and beliefs about their
math abilities, there is evidence that exposure to positive role models or
mentors can have a positive effect on students’ behaviors.130
In a set of three small experimental studies with college students,
researchers demonstrated that the presence of female role models who are
competent in math mitigated the negative impact of gender stereotypes on

130 E.g., Tierney and Grossman (2000).

the math performance of female college students.131 These studies provide
evidence that exposure to competent female role models can close the
female-male math gap in testing and improve young women’s beliefs about
their math abilities. However, these studies do not examine the efficacy of
female role models with girls from kindergarten through high school.
In another small, random-assignment experiment with college women,
a team of researchers found significantly improved performance on a test
of items similar to those used for the GRE math test when the students first
read about women’s accomplishments in architecture, law, medicine, and
invention.132
These four studies provide evidence of the positive effect of female
role models on the test performance of young college women and on their
beliefs about their math abilities. Some evidence suggests that exposure to
positive role models or mentors can have a positive effect on students’
behaviors. For example, a large-scale evaluation of Big Brothers/Big
Sisters found positive effects of mentoring on young adolescents’ school
attendance and drug and alcohol use.133 We cite these studies as providing
evidence that the strategy (i.e., exposing girls to positive role models) in
general has been shown to be effective for elementary to secondary school
students in some outcome domains, although not specifically for improving
outcomes in math or science.

Example of Intervention Using Competent Female Role Models with College Students
In our example study, a research team conducted a series of three
random-assignment experiments examining the impact of competent
female role models on the performance of college women in a setting that
was represented to participants as a diagnostic testing situation for math.134
The participants in each of these experiments were students who
considered themselves to be good in math and to be motivated to do well
on math exams. In experiment 1, 43 undergraduate women and men were

131 Marx and Roman (2002).
132 McIntyre, Paulson, and Lord (2003).
133 Tierney and Grossman (2000).
134 Marx and Roman (2002).

randomly assigned to undergo the math testing situation with either a
female or a male experimenter, both of whom were presented to the
participants as being competent in math. Women performed as well as their
male counterparts when the experimenter was a woman; women did not do
as well on the math test when the experimenter was a man.
In experiments 2 and 3, participants were provided with the biographical
sketch of a fictitious female experimenter who was not actually present; in the
biographical sketches, the female experimenter’s level of math competence
was manipulated to be either high or low. The results of both experiments
demonstrated that the improved performance on the math exam obtained in
experiment 1 was dependent on the undergraduate women perceiving the
female role model to be very competent in math, as opposed to simply taking
the test with a female experimenter with average math skills. In addition, in
experiment 3, the researchers found that young women’s perceptions of their
math ability were higher when they were exposed to a female role model who
was competent in math.
These studies provide evidence that a brief exposure to a competent female
role model can affect college women’s performance on tests and their
perceptions of their math abilities. An important limitation of these studies for
our purposes is that the participants were students who self-identified as
being good in math and being motivated to do well on math tests.
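The logic of experiment 1 — random assignment to one of two experimenter conditions, followed by a comparison of group means on the math test — can be sketched in a few lines of Python. This is purely an illustrative sketch with made-up scores and hypothetical participant IDs; it is not the authors' analysis or data.

```python
import random

random.seed(0)  # reproducible for this illustration

# 43 undergraduates, as in experiment 1 (IDs are hypothetical).
participants = list(range(43))
random.shuffle(participants)

# Random assignment: roughly half to each experimenter condition.
female_experimenter = participants[:21]
male_experimenter = participants[21:]

# Made-up math test scores, for illustration only.
scores = {p: random.gauss(70, 10) for p in participants}

def group_mean(group):
    return sum(scores[p] for p in group) / len(group)

# The headline comparison is simply a difference of condition means.
diff = group_mean(female_experimenter) - group_mean(male_experimenter)
print(f"mean difference (female - male experimenter): {diff:.1f}")
```

Random assignment is what licenses reading that mean difference causally: with assignment determined by the shuffle alone, the two groups differ systematically only in which experimenter they encountered.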

RECOMMENDATION 4: CREATE A CLASSROOM ENVIRONMENT
THAT SPARKS INITIAL CURIOSITY AND FOSTERS
LONG-TERM INTEREST IN MATH AND SCIENCE

Level of Evidence: Moderate

The panel rated the level of evidence as Moderate. There is a long and
rich tradition of exploring ways to increase student interest in mathematics
and science content in education research.

206 Diane Halpern, Joshua Aronson, Nona Reimer et al.

In determining the level of evidence supporting this recommendation, we
drew on several experimental and quasi-experimental studies. For example, we found two
small-scale, randomized controlled trials that demonstrate support for the
practice of providing elementary school students with an environment that
improves learning in the areas of mathematics and computer science by
fostering greater interest.135 Others have experimentally evaluated the
facilitative effect of peer collaboration on solving mathematics
problems,136 while yet another study137 examined how participating in a
career exploration module fostered middle school girls’ interest in
nontraditional careers (e.g., those in the areas of mathematics and science).
Two additional quasi-experimental studies provide support for the benefits
of this practice on students’ motivation and comprehension of science
texts.138
One study examined the impact of three interest-enhancing strategies
on the motivation and performance of 70 fourth- and fifth-grade students in
a related math task.139 Another examined the influence of embedding a
computer programming task in a fantasy context with 27 children.140 In
another program of research, two studies involving a total of 885 third-
grade students investigated reading comprehension of science texts.141
Finally, another set of studies examined 133 third-grade and 106 fifth-
grade students from two low-income schools with predominantly African-
American populations.142
Across these studies, students in the high-interest conditions
outperformed students in control conditions on study-specific indices of
interest, motivation, and learning. The learning activities in each of these
interventions share some or all of the following features: They were
designed around contexts that students found engaging, made available a

135. Cordova and Lepper (1996); Parker and Lepper (1992).
136. Ginsburg-Block and Fantuzzo (1998); Phelps and Damon (1989).
137. Turner and Lapan (2005).
138. Guthrie, Wigfield, Barbosa, et al. (2004); Guthrie, Anderson, Alao, et al. (1999).
139. Cordova and Lepper (1996).
140. Parker and Lepper (1992).
141. Guthrie, Wigfield, Barbosa, et al. (2004).
142. Guthrie, Anderson, Alao, et al. (1999).
range of texts, used real-life settings, used technology, provided students
with choice, and used group work.

Example of Intervention That Improves Math Performance through Increasing Interest
In this study, mathematics instruction was embedded in fantasy or
nonfantasy computer games in order to study the impact of intrinsic
interest on math performance.143 Seventy students (both boys and girls)
were randomly assigned to play computer games designed to teach specific
math skills (the hierarchy of the order of operations and the proper use of
parentheses in arithmetic expressions) in one of five conditions. In a
basic, unembellished version that served as the control condition, students
played a math game on the computer in which they were required to learn
and correctly use the target skills in order to win the game. In two
generic fantasy conditions, the activity was presented within fantasy
contexts (space trip and treasure island) designed to increase children’s
intrinsic interest in the game; in the first of these conditions, the
students had some choice regarding at least six details that were
incidental to the game itself (e.g., type and color of spaceship to use),
whereas in the second, these details were provided randomly by the
computer. In two personalized fantasy conditions, several generic referents
in the program were replaced with details that were personally relevant to
the child (e.g., the child’s name, favorite foods to bring along on the
space trip), obtained from a pretest questionnaire; in the first of these
conditions, the students had some choice regarding at least six details
incidental to the game itself, whereas in the second, they did not. Thus,
the five conditions were: no-fantasy control; generic fantasy, no choice;
generic fantasy, choice; personalized fantasy, no choice; personalized
fantasy, choice.
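A balanced random assignment of the 70 students across these five conditions might be sketched as follows. This is illustrative only: the condition labels paraphrase the text, the student IDs are hypothetical, and the study's actual randomization procedure is not described at this level of detail.

```python
import random

random.seed(42)  # reproducible for this illustration

conditions = [
    "no-fantasy control",
    "generic fantasy, no choice",
    "generic fantasy, choice",
    "personalized fantasy, no choice",
    "personalized fantasy, choice",
]

# 70 students (hypothetical IDs), shuffled and dealt out evenly,
# giving 70 / 5 = 14 students per condition.
students = [f"student_{i:02d}" for i in range(70)]
random.shuffle(students)
assignment = {cond: students[i::5] for i, cond in enumerate(conditions)}

for cond, group in assignment.items():
    print(f"{cond}: {len(group)} students")
```

Dealing the shuffled list round-robin (`students[i::5]`) guarantees equal group sizes while keeping membership random, a common way to implement balanced assignment.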
The intervention consisted of five sessions. In the first session,
participating students were pretested on their knowledge of the target math

143. Cordova and Lepper (1996).

skills, and also completed several measures of their global motivational
orientation. In the next three sessions, each child played with one randomly
assigned version of the computer games. These sessions were scheduled about
five days apart and lasted about 30 minutes each. In the fifth session, students
completed posttests on the target math skills, as well as several additional
attitudinal measures. Dependent measures included self-reports of interest and
enjoyment, behavioral commitments to continued task engagement,
preferences for increasingly challenging tasks, and direct measures of
students’ online involvement in the activities.
The results of the intervention indicate that students exposed to each of
the three specific strategies designed to heighten student interest
(contextualization, choice, and personalization) showed significantly
greater motivation, involvement, and learning of the target math skills than
those students in the control condition.
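The target skills in these games — the hierarchy of the order of operations and the proper use of parentheses — come down to the fact that grouping changes an expression's value, which a quick check makes concrete (the "game item" below is a hypothetical example, not an item from the study):

```python
# Multiplication binds more tightly than addition, so without
# parentheses the product is evaluated first.
assert 3 + 4 * 2 == 11

# Parentheses override the default hierarchy.
assert (3 + 4) * 2 == 14

# A game item might ask which grouping of 2, 3, and 4 yields 20:
assert (2 + 3) * 4 == 20   # grouping the sum first reaches the target
assert 2 + 3 * 4 == 14     # the ungrouped expression gives a different value
```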

RECOMMENDATION 5:
PROVIDE SPATIAL SKILLS TRAINING

Level of Evidence: Low

The panel rated the level of evidence as Low. That is, the evidence for
the recommendation is based on the expert opinion of panel members,
justified by high-quality research in related domains.
The panel located two high-quality studies of spatial skills training
focused on skills that are generally considered important in math and
science achievement. One study was a small-scale, random-assignment
experiment with college students144; the other was a quasi-experiment with
elementary or middle school students.145 Both studies focused on the
improvement of specific spatial skills. Although there is agreement among
experts that the relevant skills are important to specific aspects of math and

144. Piburn, Reynolds, McAuliffe, et al. (2005).
145. Ben-Chaim, Lappan, and Houang (1988).
science performance, the transfer of these skills more generally to
achievement in math and science courses has not been directly
demonstrated. The evidence supporting the practice of spatial skills
training as a way to improve performance in math and science courses in
K–12 is thus indirect.
In the first study, 103 college students taking an introductory geology
course were enrolled in one of four sections, each of which was randomly
assigned to either an experimental or a control condition.146 Both experimental and control
sections used the same laboratory manual; however, the two experimental
sections used additional computer-based modules that allowed extensive
student involvement with images that could be manipulated. Students were
administered a content assessment and two spatial-visual measures as
pretests and posttests. Study results indicate that students in the
experimental sections improved their spatial visualization skills
significantly more than did students in the control sections and that pre-
existing gender differences with regard to spatial visualization were
eliminated in the experimental sections. That is, final posttest scores of
males and females on the spatial visualization tasks were not different from
one another.
In the second study, researchers examined the impact of spatial
visualization skills training on about 1,000 fifth- through eighth-grade
students.147 Before instruction, there were significant differences by sex in
spatial visualization performance (favoring boys). After the intervention,
middle school students, regardless of sex, gained significantly from the
training program. Retention of the impact persisted after a 4-week period
and after 1 year. The persistent gender differences found in performance
after training in this younger sample are important to note. This research
suggests that the spatial skills of both boys and girls are susceptible to
training; however, it leaves open the question of how to close the gender
gap in spatial skills performance.

146. Piburn, Reynolds, McAuliffe, et al. (2005).
147. Ben-Chaim, Lappan, and Houang (1988).
Example of Intervention That Improves Spatial Skills
The quasi-experiment supports the practice of using spatial skills
training to improve specific skills that experts consider to bear a
relationship to performance in mathematics, science, and engineering.148
Spatial visualization is a subset of spatial skills that involves the ability to
mentally manipulate and rotate objects. In this study, about 1,000 fifth- to
eighth-grade students at three sites in and around a large midwestern city
participated in a 3-week spatial visualization unit that engaged them in
concrete activities with small cubes, such as building and drawing
representations. The spatial visualization test was used as a pretest and
posttest measure. After the instruction intervention, middle school
students, regardless of sex, gained significantly from the training program
in spatial visualization tasks. Boys and girls responded similarly to the
training program, indicating that spatial visualization skills can be
improved when appropriate training is provided. The retention of the
training effects 4 weeks and 1 year later underscores the potential long-
term benefits of spatial skills training.
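The kind of item used to measure spatial visualization — deciding whether one figure is a rotated copy of another — can be sketched with small 0/1 grids standing in for the cube drawings. This is an illustrative sketch of the task format, not materials from the study:

```python
def rotate90(grid):
    """Rotate a square grid of 0/1 cells 90 degrees clockwise."""
    n = len(grid)
    return [[grid[n - 1 - c][r] for c in range(n)] for r in range(n)]

def is_rotation(a, b):
    """True if grid b matches grid a under some multiple of 90 degrees."""
    g = a
    for _ in range(4):
        if g == b:
            return True
        g = rotate90(g)
    return False

# An L-shaped figure, a rotated copy, and a mirror image.
L_shape = [
    [1, 0, 0],
    [1, 0, 0],
    [1, 1, 0],
]
rotated = [
    [1, 1, 1],
    [1, 0, 0],
    [0, 0, 0],
]
mirrored = [
    [0, 0, 1],
    [0, 0, 1],
    [0, 1, 1],
]

print(is_rotation(L_shape, rotated))   # a rotated copy matches
print(is_rotation(L_shape, mirrored))  # a mirror image is not a rotation
```

Distinguishing a rotation from a reflection, as in the last comparison, is exactly the discrimination that mental rotation items require of students.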

DISCLOSURES OF POTENTIAL
CONFLICTS OF INTEREST

Practice guide panels are composed of individuals who are nationally
recognized experts on the topics about which they are rendering
recommendations. IES expects that such experts will be involved
professionally in a variety of matters that relate to their work as a panel.
Panel members are asked to disclose their professional involvements and to
institute deliberative processes that encourage critical examination of the
views of panel members as they relate to the content of the practice guide.
The potential influence of panel members’ professional engagements
reported by each panel member is further muted by the requirement that
they ground their recommendations in evidence that is documented in the

148. Ibid.
practice guide. In addition, the practice guide is subjected to independent
external peer review prior to publication, with particular focus on whether
the evidence related to the recommendations in the practice guide has been
appropriately presented.
The professional engagements reported by each panel member that
appear most closely associated with the panel recommendations are noted
below.
Dr. Aronson has developed interventions aimed at changing students’
conceptions of intelligence in order to improve their academic
achievement. He has also developed interventions to reduce stereotype
threat. These interventions are not referenced in the practice guide.

REFERENCES

American Educational Research Association, American Psychological
Association, & National Council on Measurement in Education.
(1999). Standards for educational and psychological testing.
Washington, DC: AERA Publications.
American Psychological Association (2002). Criteria for practice guideline
development and evaluation. American Psychologist, 57, 1048-1051.
Andre, T., Whigham, M., Hendrickson, A., and Chambers, S. (1999).
Competency beliefs, positive affect, and gender stereotypes of
elementary students and their parents about science versus other school
subjects. Journal of Research in Science Teaching, 36, 719-747.
Aronson, J. (2002). Stereotype threat: Contending and coping with
unnerving expectations. In J. Aronson (Ed.), Improving academic
achievement: Impact of psychological factors on education. San
Diego, CA: Academic Press.
Aronson, J., Fried, C. and Good, C. (2002). Reducing the effects of
stereotype threat on African American college students by shaping
theories of intelligence. Journal of Experimental Social Psychology, 38,
113-125.
Aronson, J., and Good, C. (2002). The development and consequences of
stereotype vulnerability in adolescents. In F. Pajares and T. Urdan
(Eds.), Academic motivation of adolescents (pp. 299–330). Greenwich,
CT: Information Age Publishing.
Aronson, J., and Steele, C.M. (2005). Stereotypes and the fragility of
human competence, motivation, and self-concept. In C. Dweck & E.
Elliot (Eds.), Handbook of competence & motivation. New York:
Guilford.
Bangert-Drowns, R.L., Kulik, C-L. C., Kulik, J.A., and Morgan, M.
(1991). The instructional effect of feedback in test-like events. Review
of Educational Research, 61(2), 213-238.
Barron, B. (2000). Problem solving in video-based microworlds:
Collaborative and individual outcomes of high-achieving sixth-grade
students. Journal of Educational Psychology, 92(2), 391-398.
Battista, M. (1990). Spatial visualization and gender differences in high
school geometry. Journal for Research in Mathematics Education, 21,
47–60.
Ben-Chaim, D., Lappan, G., and Houang, R. T. (1988). Effect of
instruction on spatial visualization skills of middle school boys and
girls. American Educational Research Journal, 25, 51–71.
Bielinski, J., and Davison, M.L. (2001). A sex difference by item
interaction in multiple choice mathematics items administered to
national probability samples. Journal of Educational Measurement, 38,
51–77.
Black, P. and Wiliam, D. (1998). Assessment and classroom learning.
Assessment in Education: Principles, Policy & Practice, 5(1), 7-74.
Blackwell, L., Trzesniewski, K., and Dweck, C.S. (2007). Implicit theories
of intelligence predict achievement across an adolescent transition: A
longitudinal study and an intervention. Child Development, 78, 246-
263.
Bottge, B.A., Rueda, E., Serlin, R.C., Hung, Y., and Kwon, J.M. (2007).
Shrinking achievement differences with anchored math problems:
Challenges and possibilities. The Journal of Special Education, 41(1),
31-49.
Building Engineering and Science Talent. (2004). What it takes: PreK-12
design principles to broaden participation in science, technology,
engineering, and mathematics. Retrieved September 12, 2007 from
http://www.bestworkforce.org/.
Carr, M., Jessup, D.L., and Fuller, D. (1999). Gender differences in first
grade strategy use: Parent and teacher contributions. Journal for
Research in Mathematics Education, 30, 20–46.
Casey, B. (2003). Mathematics problem-solving adventures: A language-
arts based supplementary series for early childhood that focuses on
spatial sense. In D. Clements, J. Sarama, and A.-M. DiBaise (Eds.),
Engaging young children in mathematics: Standards for early
childhood mathematics education (pp. 7-75). Mahwah, NJ: Erlbaum.
Casey, M.B., Nuttall, R.L., and Pezaris, E. (2001). Spatial-mechanical
reasoning skills versus mathematics self-confidence as mediators of
gender differences on mathematics subtests using cross-national
gender-based items. Journal for Research in Mathematics Education,
32, 28–57.
College Board. (2006, August 29). College Board announces scores for
new SAT with writing section. Retrieved November 15, 2006, from
http://www.collegeboard.com/press/releases/150054.html.
Connor, J.M., Schackman, M., and Serbin, L.A. (1978). Sex-related
differences in response to practice on a visual-spatial test and
generalization to a related test. Child Development, 49, 24–29.
Cordova, D., and Lepper, M.R. (1996). Intrinsic motivation and the
process of learning: Beneficial effects of contextualization,
personalization, and choice. Journal of Educational Psychology, 88(4),
715–730.
Davies, P.G., Spencer, S.J., Quinn, D.M., and Gerhardstein, R. (2002).
Consuming images: How television commercials that elicit stereotype
threat can restrain women academically and professionally.
Personality and Social Psychology Bulletin, 28, 1615–1628.
DeLisi, R., and Wolford, J.L. (2002). Improving children’s mental rotation
accuracy with computer game playing. Journal of Genetic Psychology,
163(3), 272–282.
Deno, J.A. (1995). The relationship of previous experiences to spatial
visualization ability. Engineering Design Graphics Journal, 59, 5–17.
Doolittle, A.E. (1989). Gender differences in performance on mathematics
achievement items. Applied Measurement in Education, 2, 161–178.
Dweck, C.S. (1999). Self-theories: Their role in motivation, personality,
and development. Philadelphia: Taylor & Francis.
Dweck, C.S. (2002). Messages that motivate: How praise molds students’
beliefs, motivation, and performance (in surprising ways). In J.
Aronson (Ed.), Improving academic achievement: Impact of
psychological factors on education (pp. 37–60). San Diego: Academic
Press.
Dweck, C.S. (2006). Is math a gift? Beliefs that put females at risk. In S.J.
Ceci and W. Williams (Eds.), Why aren’t more women in science? Top
researchers debate the evidence. Washington, DC: American
Psychological Association.
Dweck, C.S., and Leggett, E.L. (1988). A social cognitive approach to
motivation and personality. Psychological Review, 95, 256–273.
Elawar, M.C., and Corno, L. (1985). A factorial experiment in teachers’
written feedback on student homework: Changing teacher behavior a
little rather than a lot. Journal of Educational Psychology, 77(2), 162–
173.
Field, M.J., and Lohr, K.N. (Eds.). (1990). Clinical practice guidelines:
Directions for a new program. Washington, DC: National Academy
Press.
Foersterling, F. (1985). Attributional retraining: A review. Psychological
Bulletin, 98, 495–512.
Foote, C.J. (1999). Attribution feedback in the elementary classroom.
Journal of Research in Childhood Education, 13(2), 155–166.
Fouad, N.A., and Smith, P.L. (1996). A test of a social cognitive model for
middle school students: Math and science. Journal of Counseling
Psychology, 43(3), 338–346.
Gallagher, A.M., De Lisi, R., Holst, P.C., McGillicuddy-De Lisi, A.V.,
Morely, M., and Cahalan, C. (2000). Gender differences in advanced
mathematical problem solving. Journal of Experimental Child
Psychology, 75, 165–190.
Gallagher, A.M., and Kaufman, J.C. (Eds.) (2005). Gender differences in
mathematics: An integrative psychological approach. New York:
Cambridge University Press.
Geary, D.C., Saults, S.J., Liu, F., and Hoard, M.K. (2000). Sex differences
in spatial cognition, computational fluency, and arithmetical reasoning.
Journal of Experimental Child Psychology, 77, 337–353.
Gerson, H.B.P., Sorby, S.A., Wysocki, A.F., and Baartmans, B.J. (2001).
The development and assessment of multimedia software for
improving 3-D spatial visualization skills. Computer Applications in
Engineering Education, 9, 105–113.
Gierl, M.J., Bisanz, J., Bisanz, G.L., and Boughton, K.A. (2003).
Identifying content and cognitive skills that produce gender differences
in mathematics: A demonstration of the multidimensionality-based
DIF analysis. Journal of Educational Measurement, 40, 281–306.
Ginsburg-Block, M., and Fantuzzo, J. (1998). An evaluation of the relative
effectiveness of NCTM standards-based interventions for low-
achieving urban elementary students. Journal of Educational
Psychology, 90(3), 560–569.
Gonzales, P.M., Blanton, H., and Williams, K.J. (2002). The effects of
stereotype threat and double-minority status on the test performance of
Latino women. Personality and Social Psychology Bulletin, 28(5),
659–670.
Good, C., and Aronson, J. (2007). The development of stereotype threat:
Consequences for educational and social equality. In E. Turiel, J.
Smetana, and C. Wainryb (Eds.), Social development, social
inequalities, and social justice (pp. 155-185). Mahwah, NJ: Erlbaum.
Good, C., Aronson, J., and Inzlicht, M. (2003). Improving adolescents’
standardized test performance: An intervention to reduce the effects of
stereotype threat. Applied Developmental Psychology, 24, 645–662.
Graham, S. (1991). A review of attribution theory in achievement contexts.
Educational Psychology Review, 3, 5–39.
Grant, H., and Dweck, C.S. (2003). Clarifying achievement goals and their
impact. Journal of Personality and Social Psychology, 85(3), 541–553.
Guthrie, J.T., Anderson, E.A., Alao, S., and Rhinehart, J. (1999).
Influences of concept-oriented reading instruction on strategy use and
conceptual learning from text. Elementary School Journal, 99(4), 343–
366.
Guthrie, J.T., Wigfield, A., Barbosa, P., Perencevich, K., Taboada, A.,
Davis, M.H., Scafiddi, N.T., and Tonks, S. (2004). Increasing reading
comprehension and engagement through concept-oriented reading
instruction. Journal of Educational Psychology, 96(3), 403–423.
Guthrie, J.T., Wigfield, A., Humenick, N.M., Perencevich, K.C., Taboada,
A., and Barbosa, P. (2006). Influences of stimulating tasks on reading
motivation and comprehension. The Journal of Educational Research,
99(4), 232–245.
Hackett, G. (1985). The role of mathematics self-efficacy in the choice of
math-related majors of college women and men: A path analysis.
Journal of Counseling Psychology, 32, 47–56.
Harris, A.M., and Carlton, S.T. (1993). Patterns of gender differences on
mathematics items on the Scholastic Aptitude Test. Applied
Measurement in Education, 6, 137–151.
Halpern, D.F. (2000). Sex differences in cognitive abilities (3rd ed.).
Mahwah, NJ: Erlbaum.
Halpern, D.F., Benbow, C.P., Geary, D.C., Gur, R.C., Hyde, J.S., and
Gernsbacher, M.A. (2007). The science of sex differences in science
and mathematics. Psychological Science in the Public Interest, 8(1), 1–
51.
Henderlong, J., and Lepper, M.R. (2002). The effects of praise on
children’s intrinsic motivation: A review and synthesis. Psychological
Bulletin, 128(5), 774–795.
Herbert, J., and Stipek, D. (2005). The emergence of gender differences in
children’s perceptions of their academic competence. Applied
Developmental Psychology, 26, 276–295.
Hidi, S., and Renninger, K.A. (2006). The four-phase model of interest
development. Educational Psychologist, 41(2), 111–127.
Hyde, J.S. (2005). The gender similarities hypothesis. American
Psychologist, 60(6), 581–592.
Inzlicht, M., Aronson, J., Good, C., and McKay, L. (2006). A particular
resiliency to threatening environments. Journal of Experimental Social
Psychology, 42, 323–336.
Inzlicht, M., and Ben-Zeev, T. (2000). A threatening intellectual
environment: Why females are susceptible to experiencing problem-
solving deficits in the presence of males. Psychological Science, 11,
365–371.
Jacobs, J.E., Lanza, S., Osgood, D.W., Eccles, J.S., and Wigfield, A.
(2002). Changes in children’s self-competence and values: Gender and
domain differences across grades one through twelve. Child
Development, 73, 509–527.
Ji, P.Y., Lapan, R.T., and Tate, K. (2004). Vocational interests and career
efficacy expectations in relation to occupational sex-typing beliefs for
eighth grade students. Journal of Career Development, 31(2), 143–
154.
Kaelin, M.A., Huebner, W.W., Nicolich, M.J., and Kimbrough, M.L.
(2007). Field test of an epidemiology curriculum for middle school
students. American Journal of Health Education, 38(1), 16–31.
Kurdek, L.A., and Sinclair, R.J. (2001). Predicting reading and
mathematics achievement in fourth-grade children from kindergarten
readiness scores. Journal of Educational Psychology, 93, 451–455.
Lapan, R. T., Adams, A., Turner, S.L., and Hinkleman, J.M. (2000).
Seventh graders’ vocational interest and efficacy expectation patterns.
Journal of Career Development, 26(3), 215–229.
Linn, M.C., Lee, H.S., Tinker, R., Husic, F., and Chiu, J.L. (2006).
Teaching and assessing knowledge integration in science. Science,
313(5790), 1049–1050.
Maguire, E.A., Gadian, D.G., Johnsrude, I.S., Good, C.D., Ashburner, J.,
Frackowiak, R.S., and Frith, C.D. (2000). Navigation-related structural
change in the hippocampi of taxi drivers. Proceedings of the National
Academy of Sciences of the United States of America, 97, 4398–4403.
Maguire, E.A., Spiers, H.J., Good, C.D., Hartley, T., Frackowiak, R.S.J.,
and Burgess, N. (2003). Navigation expertise and the human
hippocampus: A structural brain imaging analysis. Hippocampus,
13(2), 250–259.
Marulis, L., Liu, L., Warren, C., Uttal, D., and Newcombe, N. (2007,
March). Effects of training or experience on spatial cognition in
children and adults: A meta-analysis. Poster presented at the biennial
meetings of the Society for Research in Child Development, Boston.
Marx, D.M., and Roman, J.S. (2002). Female role models: Protecting
women’s math performance. Personality and Social Psychology
Bulletin, 28(9), 1183–1193.
Maton, K., and Hrabowski III, F.A. (2004). Increasing the number of
African American PhDs in the sciences and engineering: A strengths-
based approach. American Psychologist, 59(6), 547–556.
McGraw, R., Lubienski, S.T., and Strutchens, M.E. (2006). A closer look
at gender in NAEP mathematics achievement and affect data:
Intersections with achievement, race/ethnicity, and socioeconomic
status. Journal for Research in Mathematics Education, 37, 129–150.
McIntyre, R.B., Paulson, R.M., and Lord, C.G. (2003). Alleviating
women’s mathematics stereotype threat through salience of group
achievements. Journal of Experimental Social Psychology, 39(1), 83–
90.
McKown, C., and Weinstein, R.S. (2003). The development and
consequences of stereotype-consciousness in middle childhood. Child
Development, 74, 498–515.
Miller, R.L., Brickman, P., and Bolen, D. (1975). Attribution versus
persuasion as a means for modifying behavior. Journal of Personality
and Social Psychology, 31(3), 430–441.
Mitchell, M. (1993). Situational interest: Its multifaceted structure in the
secondary school mathematics classroom. Journal of Educational
Psychology, 85(3), 424–436.
Muehrcke, P.C., and Muehrcke, J.O. (1992). Map use: Reading, analysis,
and interpretation (3rd ed). Madison, WI: JP Publications.
Mueller, C.M., and Dweck, C.S. (1998). Praise for intelligence can
undermine children’s motivation and performance. Journal of
Personality and Social Psychology, 75(1), 33–52.
National Science Foundation. (2006a). New tools for America’s workforce:
Girls in science and engineering. Retrieved August 15, 2007, from
http://www.nsf.gov/publications/.
National Science Foundation. (2006b). Science and engineering degrees:
1966–2004. Retrieved September 12, 2007, from http://www.nsf.gov/
statistics/nsf07307/pdf/nsf07307.pdf.
National Science Foundation. (2006c). Science and engineering indicators
2006. Retrieved August 17, 2007, from http://www.nsf.gov/statistics/
seind06/c3/c3s1.htm.
National Science Foundation. (2006d). Women, minorities, and persons
with disabilities in science and engineering. Retrieved June 25, 2006,
from http://www.nsf.gov/statistics/wmpd/.
Newcombe, N. (2002). Maximization of spatial competence: More
important than finding the cause of sex differences. In A.
McGillicuddy-De Lisi and R. De Lisi (Eds.), Biology, society, and
behavior: The development of sex differences in cognition. Advances in
applied developmental psychology, Vol. 21. (pp. 183–206). New York:
Ablex Publishing.
Pajares, F. (2006). Self-efficacy during childhood and adolescence:
Implications for teachers and parents. In F. Pajares and T. Urdan
(Eds.), Self-efficacy beliefs of adolescents (pp. 339–367). Greenwich,
CT: Information Age Publishing.
Pajares, F., and Urdan, T. (Eds.). (2002). Academic motivation of
adolescents. Greenwich, CT: Information Age Publishing.
Parker, L., and Lepper, M.R. (1992). Effects of fantasy contexts on
children’s learning and motivation: Making learning more fun. Journal
of Personality and Social Psychology, 62, 625–633.
Phelps, E., and Damon, W. (1989). Problem solving with equals: Peer
collaboration as a context for learning mathematics and spatial
concepts. Journal of Educational Psychology, 81(4), 639–646.
Piburn, M.D., Reynolds, S.J., McAuliffe, C., Leedy, D.E., Birk, J.P., and
Johnson, J.K. (2005). The role of visualization in learning from
computer-based images. International Journal of Science Education,
27, 513–527.
Renninger, K.A., Ewen, L., and Lasher, A. (2002). Individual interest as
context in expository text and mathematical word problems. Learning
and Instruction, 12, 467–491.
Schweinle, A., Turner, J.C., and Meyer, D.K. (2006). Striking the right
balance: Students’ motivation and affect in elementary mathematics.
The Journal of Educational Research, 99(5), 271–290.
Shea, D.L., Lubinski, D., and Benbow, C.P. (2001). Importance of
assessing spatial ability in intellectually talented young adolescents: A
20-year longitudinal study. Journal of Educational Psychology, 93,
604–614.
Shettle, C., Roey, S., Mordica, J., Perkins, R., Nord, C., Teodorovic, J.,
Brown, J., Lyons, M., Averett, C., and Kastberg, D. (2007). The
nation’s report card: America’s high school graduates (NCES 2007-
467). National Center for Education Statistics, Institute of Education
Sciences, U.S. Department of Education. Washington, DC.
Simpkins, S.D., and Davis-Kean, P.E. (2005). The intersection between
self-concept and values: Links between beliefs and choices in high
school. New Directions for Child and Adolescent Development, 110,
31–47.
Simpkins, S.D., Davis-Kean, P.E., and Eccles, J.S. (2006). Math and
science motivation: A longitudinal examination of the links between
choices and beliefs. Developmental Psychology, 42, 70–83.
Sorby, S.A. (2001). A course in spatial visualization and its impact on the
retention of women engineering students. Journal of Women and
Minorities in Science and Engineering, 7, 153–172.
Sorby, S.A., and Baartmans, B.J. (2000). The development and assessment
of a course for enhancing the 3-D spatial skills of first year engineering
students. Journal of Engineering Education, 89, 301–307.
Spelke, E.S. (2005). Sex differences in intrinsic aptitude for mathematics
and science? A critical review. American Psychologist, 60(9), 950–
958.
Spencer, S.J., Steele, C.M., and Quinn, D.M. (1999). Stereotype threat and
women’s math performance. Journal of Experimental Social
Psychology, 35(1), 4–28.
Stangor, C., Carr, C., and Kiang, L. (1998). Activating stereotypes
undermines task performance expectations. Journal of Personality and
Social Psychology, 75, 1191–1197.
Steele, C.M., and Aronson, J. (1995). Stereotype threat and the intellectual
test performance of African Americans. Journal of Personality and
Social Psychology, 69(5), 797–811.
Steele, C.M., Spencer, S., and Aronson, J. (2002). Contending with group
image: The psychology of stereotype and social identity threat. In M.
Zanna (Ed.), Advances in experimental social psychology, Vol. 37. San
Diego: Academic Press.
Steele, J. (2003). Children’s gender stereotypes about math: The role of
stereotype stratification. Journal of Applied Social Psychology, 33(12),
2587–2606.
Summers, M.F., and Hrabowski III, F.A. (2006). Preparing minority
scientists and engineers. Science, 311(5769), 1870–1871.
Tierney, J.P., and Grossman, J.B. (2000). Making a difference: An impact
study of Big Brothers/Big Sisters. Retrieved May 18, 2007, from
http://www.ppv.org/ppv/publications/publicationsdescription.asp?searchid=7&publicationid=111.
Turner, J.C., Midgley, C., Meyer, D.K., Gheen, M., Anderman, E.M.,
Kang, Y., and Patrick, H. (2002). The classroom environment and
students’ reports of avoidance strategies in mathematics: A
multimethod study. Journal of Educational Psychology, 94(1), 88–106.
Turner, S.L., and Lapan, R.T. (2005). Evaluation of an intervention to
increase non-traditional career interests and career-related self-efficacy
among middle-school adolescents. Journal of Vocational Behavior, 66,
516–531.
Updegraff, K.A., and Eccles, J.S. (1996). Course enrollment as self-
regulatory behavior: Who takes optional high school math courses?
Learning and Individual Differences, 8, 239–259.
Utman, C.H. (1997). Performance effects of motivational state: A meta-
analysis. Personality and Social Psychology Review, 1, 170–182.
222 Diane Halpern, Joshua Aronson, Nona Reimer et al.

Wainer, H., and Steinberg, L.S. (1992). Sex differences in performance on
the mathematics section of the Scholastic Aptitude Test: A bidirectional
validity study. Harvard Educational Review, 62(3), 323–336.
Ward, J. (1976). Behaviour modification in education: An overview and a
model for programme implementation. Bulletin of the British
Psychological Society, 29, 257–268.
Webb, R.M., Lubinski, D., and Benbow, C.P. (2002). Mathematically
facile adolescents with math-science aspirations: New perspectives on
their educational and vocational development. Journal of Educational
Psychology, 94(4), 785–794.
Weiner, B. (1986). An attributional theory of motivation and emotion. New
York: Springer-Verlag.
Wigfield, A., Eccles, J.S., Mac Iver, D., Reuman, D.A., and Midgley, C.
(1991). Transitions during early adolescence: Changes in children’s
domain-specific self-perceptions and general self-esteem across the
transition to junior high school. Developmental Psychology, 27, 552–
565.
Wigfield, A., Eccles, J.S., Schiefele, U., Roeser, R.W., and Davis-Kean, P.
(2006). Development of achievement motivation. In W. Damon and R.
M. Lerner (Series Eds.) and N. Eisenberg (Vol. Ed.), Handbook of
child psychology, Vol. 3: Social, emotional, and personality
development (6th ed.). New York: Wiley.
Wilson, T.D., and Linville, P.W. (1985). Improving the performance of
college freshmen with attributional techniques. Journal of Personality
and Social Psychology, 49, 287–293.

ABOUT THE AUTHORS

Diane F. Halpern is the Director of the Berger Institute for Work,
Family, and Children, and Professor of Psychology at Claremont McKenna
College. She earned her Ph.D. in Psychology from the University of
Cincinnati. Her research focuses on the development of critical thinking
skills and on gender differences.

Joshua Aronson is an associate professor of Developmental, Social,
and Educational Psychology at New York University. He earned his Ph.D.
in Psychology from Stanford University. His research focuses on
“stereotype threat” and minority student achievement.
Nona Reimer has been an elementary school teacher in Capistrano
Unified School District, California, for 20 years. She is also a part-time
faculty member in biology at the California State University at Fullerton.
She earned her B.A. in History and Environmental Studies from the
University of California, Santa Barbara and her Multiple Subjects
Credential from California State University, Hayward. She currently
teaches fifth grade at John S. Malcom Elementary School, a National Blue
Ribbon School.
Sandra Simpkins is an assistant professor in the School of Social and
Family Dynamics at Arizona State University. She earned her Ph.D. in
Psychology from the University of California, Riverside. Her research
focuses on children’s pursuits, such as their participation in after-school
activities or enrollment in an elective science course.
Jon R. Star is an assistant professor in the Graduate School of
Education at Harvard University. He earned a Ph.D. in Education and
Psychology from the University of Michigan. Star is a former middle and
high school mathematics teacher. His research focuses on students’
learning of middle and secondary school mathematics.
Kathryn Wentzel is a professor of Human Development at the
University of Maryland, College Park. She earned a Ph.D. in Psychological
Studies in Education from Stanford University. Her research focuses on
teachers, peers, and parents as social motivators of young adolescents’
classroom behavior and academic accomplishments.
INDEX

A

access, 64, 191, 195, 220, 222
achievement test, 120, 137, 138, 163, 232, 233
administrators, ix, 2, 15, 153
adolescents, 188, 216, 239, 248, 253, 257, 258, 260, 261, 262
age, 7, 36, 65, 67, 95, 145, 212, 216, 228, 229, 233, 236
American Educational Research Association, 5, 69, 91, 134, 154, 155, 159, 182, 247
American Psychological Association, 5, 69, 89, 91, 134, 155, 156, 178, 182, 247, 251
arithmetic, 7, 12, 13, 17, 18, 20, 22, 33, 34, 61, 64, 66, 73, 149, 156, 160, 163, 164, 169, 243
assessment, x, 3, 14, 24, 28, 35, 78, 80, 81, 85, 91, 92, 100, 104, 129, 134, 145, 147, 150, 167, 169, 181, 199, 203, 205, 209, 225, 245, 252, 259
assistance in mathematics, ix, 2

B

base, 16, 33, 63, 76, 78, 81, 103, 128, 135, 142, 143, 145, 151, 158, 162, 167, 170, 176, 180, 221, 223, 245, 258
benefits, 42, 145, 242, 246

C

calculus, 185, 188
childhood, 154, 228, 249, 256, 257
children, 28, 47, 51, 79, 80, 81, 82, 84, 85, 108, 147, 152, 154, 156, 157, 159, 165, 167, 168, 169, 170, 171, 188, 191, 196, 198, 199, 203, 204, 205, 206, 207, 211, 218, 220, 228, 234, 237, 242, 243, 249, 250, 254, 255, 256, 257, 261, 262
classes, 73, 111, 120, 186, 188, 201, 220, 223, 225, 235
classroom, 8, 10, 15, 16, 35, 36, 49, 63, 79, 84, 104, 112, 118, 120, 126, 144, 152, 153, 155, 160, 170, 193, 195, 201, 205, 207, 209, 210, 219, 220, 230, 234, 238, 249, 251, 256, 260, 262
classroom environment, 207, 219, 260
classroom teacher, 10, 15, 84, 152
cognition, 95, 162, 164, 252, 255, 257
college students, 27, 198, 210, 211, 228, 232, 233, 238, 239, 244, 245, 248
computation, 17, 35, 69, 130, 137, 138, 139, 160, 163
control condition, 111, 232, 233, 234, 237, 242, 243, 244, 245
controlled trials, 4, 6, 37, 38, 45, 46, 52, 62, 87, 176, 182, 241
correlation, 68, 80, 138, 139, 201
curriculum, 14, 18, 20, 32, 34, 35, 39, 50, 67, 68, 70, 71, 72, 74, 94, 98, 137, 138, 140, 141, 150, 151, 156, 158, 161, 165, 166, 167, 168, 170, 171, 186, 223, 224, 225, 229, 255

D

data analysis, 36, 155
data collection, 27
decomposition, 33, 66, 84, 85
demonstrations, 38, 100, 101
Department of Education, 1, 28, 152, 159, 164, 167, 169, 171, 175, 258
depth, 11, 12, 18, 31, 32, 33, 34, 41, 43, 93, 97, 98, 136
disability, 7, 115, 158, 165, 167

E

education, ix, 2, 9, 10, 14, 15, 27, 31, 85, 89, 97, 103, 120, 135, 136, 140, 149, 151, 152, 153, 155, 159, 165, 168, 169, 170, 171, 173, 178, 241, 248, 250, 251, 260
educators, ix, 2, 15, 28, 31, 96, 140, 152, 161, 179, 180, 181
elementary school, 27, 57, 62, 150, 188, 191, 220, 241, 262
elementary students, 73, 248, 252
engineering, 184, 192, 211, 213, 214, 215, 216, 217, 221, 225, 227, 228, 231, 246, 249, 256, 257, 259
environment, 205, 207, 212, 220, 233, 241, 254, 260
examinations, 225, 226
experimental design, 3, 100, 125, 181
expertise, ix, 2, 23, 35, 71, 88, 89, 90, 177, 178, 179, 180, 201, 215, 255
exposure, 36, 52, 56, 210, 212, 214, 216, 239, 240
external validity, 4, 5, 182, 236

F

false negative, 28, 83, 92
false positive, 28, 83, 92
families, 64, 109, 126, 169

G

gender differences, 186, 188, 218, 219, 225, 245, 248, 250, 252, 253, 254, 261
general education, 14, 97, 103, 140, 153
generalizability, 5, 182, 238
geometry, 149, 192, 226, 231, 248
grades, 11, 15, 18, 20, 22, 23, 25, 31, 33, 34, 55, 56, 57, 62, 63, 66, 67, 69, 70, 72, 78, 79, 81, 87, 93, 94, 99, 124, 136, 137, 138, 140, 142, 145, 149, 157, 160, 161, 179, 186, 188, 199, 201, 204, 207, 208, 211, 218, 225, 227, 228, 229, 232, 236, 254
grants, 149, 150, 152, 154
group work, 195, 221, 222, 223, 242
growth, 13, 25, 68, 71, 77, 134, 135, 136, 137, 139, 141, 158, 159, 161
growth rate, 136, 139, 141
guidance, 12, 37, 41, 147
guidelines, 3, 53, 86, 101, 116, 117, 159, 176, 251

H

high school, 61, 151, 152, 158, 185, 187, 188, 190, 193, 195, 201, 210, 217, 222, 224, 228, 235, 236, 238, 239, 248, 258, 259, 260, 261, 262
homework, 186, 205, 251

I

identification, 63, 69, 93, 125, 133, 137, 157, 158, 159, 160, 164, 171
images, 196, 217, 229, 245, 250, 258
improvements, 72, 141
independent variable, 64, 76, 101, 126, 129, 142, 143, 144
individual students, 79, 134, 194, 208
individuals, 23, 30, 34, 71, 85, 146, 246
Individuals with Disabilities Education Act, 7
Institute of Education Sciences (IES), xi, 4, 89, 90, 91, 147, 152, 154, 155, 159, 164, 167, 175, 178, 179, 181, 187, 246, 258
instructional materials, 19, 39, 40, 41, 72
instructional practice, 101, 156, 164
instructional time, 5, 79, 98, 183
intelligence, 197, 198, 199, 200, 204, 205, 207, 232, 233, 236, 237, 238, 247, 248, 249, 256
internal consistency, 92, 94, 135, 141
internal validity, 4, 5, 181, 182, 183, 210, 238
issues, ix, 2, 31, 46, 97, 136, 147, 149, 159, 168, 176

J

junior high school, 261

K

kindergarten, 10, 11, 15, 18, 20, 30, 31, 32, 33, 65, 70, 72, 95, 96, 125, 150, 151, 168, 171, 210, 238, 239, 255

L

learner(s), 153, 154, 230
learning difficulties, 165, 168, 172
learning disabilities, x, 3, 7, 140, 145, 153, 157, 159, 160, 162, 163, 166, 168, 169, 170, 171, 172
learning environment, 205
learning outcomes, 141
lesson plan, 229, 230
literacy, 153, 168
longitudinal study, 167, 235, 249, 258
low-performing students, x, 3

M

majority, 65, 72, 139, 225
materials, 11, 12, 13, 15, 18, 19, 20, 33, 34, 35, 36, 37, 39, 40, 41, 43, 60, 64, 67, 72, 104, 118, 127, 129, 148, 151, 158, 213, 229, 230
mathematical knowledge, 22, 165
mathematical learning difficulties, 165, 168
mathematics ability, ix, 2
mathematics coach, ix, 2, 10, 15, 34, 42, 43, 50, 60
mathematics education, ix, 2, 31, 97, 148, 153, 172, 250
mathematics interventions, x, 3, 7, 8, 9, 10, 13, 15, 37, 38, 48, 100, 102, 125, 142, 155
measurement, 23, 29, 36, 68, 70, 151, 158, 161, 167, 168, 170, 171
meta-analysis, 88, 177, 255, 260
minority students, 165, 212, 216
models, 4, 11, 19, 36, 39, 40, 43, 53, 57, 89, 99, 101, 118, 178, 181, 189, 191, 209, 210, 212, 214, 215, 216, 222, 230, 238, 239, 240, 255
motivation, 14, 74, 204, 206, 208, 215, 234, 238, 242, 244, 248, 250, 251, 253, 254, 256, 257, 258, 259, 261
multiplication, 42, 44, 52, 53, 55, 57, 66, 80, 82, 109, 118, 125, 128, 172

N

National Academy of Sciences, 168, 255
National Assessment of Educational Progress (NAEP), x, 151, 180, 185, 186, 188, 256
National Center for Education Statistics (NCES), 167, 258
National Research Council, 9, 150, 169
No Child Left Behind, 149

O

operations, 13, 16, 17, 18, 22, 32, 33, 37, 38, 39, 40, 44, 52, 53, 54, 55, 57, 59, 60, 63, 64, 70, 76, 93, 94, 98, 104, 111, 118, 119, 125, 126, 128, 139, 145, 243
opportunities, 12, 19, 36, 38, 40, 41, 49, 51, 79, 191, 194, 195, 196, 207, 216, 219, 221, 229

P

parents, 77, 79, 143, 144, 153, 195, 201, 207, 216, 248, 257, 262
participants, 4, 5, 42, 43, 51, 111, 120, 182, 233, 240, 241
peer review, 90, 91, 147, 179, 180, 247
physics, 184, 185, 192, 225, 231
practice guides, xi, 4, 87, 88, 89, 90, 176, 177, 178, 179, 181
predictive validity, 17, 21, 23, 24, 93
preparation, 7, 152, 223
prevention, 7, 85, 160
principals, ix, 2, 27, 180
principles, 35, 55, 99, 221, 249
problem solving, 11, 17, 19, 37, 38, 41, 44, 45, 46, 52, 54, 94, 98, 99, 100, 102, 110, 113, 116, 138, 143, 152, 153, 158, 160, 161, 162, 163, 166, 191, 252
problem-solving, 19, 35, 40, 43, 45, 46, 75, 100, 101, 102, 103, 111, 112, 116, 138, 139, 166, 167, 172, 173, 203, 208, 226, 249, 254
professional development, ix, 2, 13, 26, 30, 37, 42, 51, 61, 151
project, 6, 151, 153, 155, 195, 221, 223, 227
public schools, 237

R

random assignment, 204, 228, 234
reading, 7, 28, 84, 95, 104, 129, 130, 133, 147, 153, 154, 155, 158, 160, 163, 164, 166, 167, 220, 232, 242, 253, 255
reading difficulties, 96, 129, 163, 167
reasoning, 33, 35, 40, 43, 61, 98, 165, 210, 250, 252
recommendation(s), ix, 2, 3, 6, 10, 12, 14, 15, 16, 17, 67, 87, 88, 89, 90, 146, 147, 148, 176, 177, 178, 179, 180, 189, 193, 230, 246, 247
regression, 6, 68, 95, 135, 136, 141
reliability, 4, 5, 21, 23, 24, 68, 71, 85, 87, 92, 134, 135, 136, 138, 140, 141, 146, 171, 177, 182, 183
researchers, ix, 2, 66, 68, 92, 95, 135, 136, 137, 140, 141, 183, 184, 188, 201, 212, 225, 226, 227, 235, 239, 240, 245, 251
response, 15, 25, 100, 103, 136, 164, 167, 171, 172, 196, 204, 219, 250
Response to Intervention (RtI), ix, 1, 2, 7, 8, 10, 14, 15, 69, 85, 134, 135, 148, 149, 171
rewards, 74, 76, 77, 78, 79, 142, 143, 144
risk, 9, 11, 12, 17, 20, 25, 26, 28, 29, 30, 32, 86, 91, 95, 98, 120, 150, 154, 157, 159, 160, 161, 166, 251
rules, 48, 53, 55, 88, 111, 118, 177

S

Scholastic Aptitude Test, 253, 260
school activities, 189, 262
school administrator(s), ix, 2, 15
school learning, 158
school psychology, 27, 160, 167
schooling, 155
scope, 15, 16, 74, 75, 99, 104, 142, 148
secondary school students, 239
secondary schools, 166
secondary students, 151, 152
self-concept, 188, 205, 235, 236, 248, 258
self-efficacy, 205, 206, 234, 253, 260
sensitivity, 21, 24, 28, 29, 68, 83, 86, 92, 95, 96, 134, 135, 136, 137, 139, 140
sex, 245, 246, 249, 254, 257
skills training, 191, 224, 227, 228, 229, 244, 245, 246
solution, 19, 44, 45, 46, 48, 49, 100, 102, 108, 109, 110, 111, 112, 116, 227
spatial ability, 228, 258
special education, x, 2, 9, 10, 15, 27, 103, 135, 152, 153, 165, 169, 171
special educators, ix, 2, 15, 152
state(s), 7, 8, 10, 13, 18, 25, 26, 28, 50, 79, 82, 92, 97, 137, 138, 139, 149, 150, 154, 166, 225, 260
statistics, 187, 188, 214, 217, 257
stereotypes, 191, 196, 197, 210, 211, 212, 214, 222, 239, 248, 259, 260
strategy use, 203, 207, 208, 249, 253
stress, 11, 35, 98
student achievement, 166, 262
student populations, 37, 62, 100, 125
students with learning disabilities, x, 3, 7, 166, 170, 172
subtraction, 13, 33, 44, 47, 55, 63, 74, 75, 80, 81, 94, 102, 125, 126, 127, 129, 130, 143, 145, 146, 204

T

teacher(s), ix, 2, 7, 9, 12, 15, 25, 27, 31, 32, 33, 35, 36, 40, 41, 42, 43, 48, 49, 56, 60, 61, 70, 71, 72, 73, 97, 98, 100, 101, 102, 111, 112, 120, 124, 136, 138, 140, 141, 147, 149, 151, 152, 153, 156, 164, 165, 168, 170, 180, 189, 190, 191, 192, 193, 197, 199, 200, 201, 203, 204, 205, 206, 207, 208, 209, 214, 216, 217, 218, 219, 221, 222, 223, 224, 229, 230, 231, 251, 257, 262
teaching strategies, 67, 128
techniques, 45, 68, 135, 154, 209, 261
technology, 20, 64, 67, 195, 211, 215, 216, 217, 221, 222, 242, 249
test scores, 81, 165, 199, 204, 226, 232, 233, 235
testing, 18, 25, 26, 27, 150, 156, 212, 239, 240, 247
test-retest reliability, 92, 135
textbook(s), 119, 147, 157
training, 73, 148, 170, 191, 192, 196, 224, 227, 228, 229, 230, 231, 233, 245, 246, 255
treatment, 11, 18, 38, 87, 100, 101, 103, 120, 124, 125, 143, 145, 176, 232
tutoring, 9, 16, 75, 118, 130, 143, 156, 160, 163, 165, 191

V

variables, 64, 93, 96, 101, 124, 126, 129, 145
visualization, 226, 227, 245, 246, 248, 249, 250, 252, 258, 259

W

What Works Clearinghouse (WWC), 3, 4, 5, 6, 37, 45, 52, 53, 62, 65, 68, 74, 75, 78, 88, 92, 99, 101, 106, 108, 114, 116, 117, 121, 124, 128, 131, 142, 144

Y

young women, 186, 195, 197, 210, 215, 221, 230, 238, 239, 240
