Research Report
No 662
Kenneth Leithwood
Professor
OISE/University of Toronto
Ben Levin
Deputy Minister of Education
Government of Ontario
The views expressed in this report are the authors' and do not necessarily reflect those of the Department for
Education and Skills.
© Kenneth Leithwood and Ben Levin 2005
ISBN 1 84478 527 0
Table of Contents
Summary
Introduction
The Meaning of Leadership
Leadership Effects
Leadership Programme Effects
A General Framework to Guide Research on Programme and Leader Effects on Pupil Learning
Leadership Practices: The Independent Variables
Leadership Effects: The Dependent Variables
Mediating Variables for Leadership Effects Research
Variables Moderating Leadership Effects
The Antecedents of Leaders' Practices
Evaluating Leadership Programme Effects
Methodological Challenges
Conclusions and Recommendations
Summary
Leadership is widely considered a variable critical to school improvement; considerable
evidence now justifies the claim that leadership has important effects on pupil learning.
Evidence summarized in this paper indicates that such effects, while largely indirect,
explain as much as a quarter of the variation in pupil learning across schools accounted
for by school factors. These leadership effects, furthermore, are usually greatest where
they are needed most; leadership acts as a catalyst unleashing the potential of other
factors contributing to the improvement of pupil learning.
Given the key role of leadership in school improvement, the Department for Education
and Skills (DfES) commissioned this paper to identify the pathways by which school
leadership programmes and practices influence, or positively impact on, pupil learning
and to inform future research and development in this field. More precisely, our goals
were to develop a model or framework of variables and relationships clarifying how
leadership programmes and practices impact on pupil outcomes.
Our paper describes challenges associated with conceptualizing the relationship between
leadership programmes, changes in leader practices and the effects of such changed
practices on schools and pupils. We also grapple with some of the methodological
challenges facing evaluators and researchers with an interest in programme and leader
effects, offering suggestions about how these challenges might be addressed.
Conceptual Challenges
Four conceptual challenges are addressed in our paper:
How to most usefully frame the relationship between leadership programmes, changes
in leaders' practices, and the effects of those changed practices on schools and pupils?
What can leadership theory and research developed outside of schools contribute to
our understanding of school leadership practices and their effects? Building on our
earlier description of leadership models in non-school contexts, we identified
perspectives on leadership yet to be explored in school-based leadership research but
which hold potential for it.
How should we define - and what do we know about - dependent, moderating and
mediating variables in school leader and leadership programme effects studies? We illustrate
the state-of-the-art of knowledge about these variables through a review of leadership
research carried out exclusively in U.K. contexts. It is clear from this review that
recently published U.K. leadership research is almost exclusively small scale and
qualitative in nature.
Methodological Challenges
Although not as extensive as our treatment of conceptual challenges, the paper also
responds to a series of methodological challenges typical of field based leadership
research; these challenges are addressed more fully in a subsequent paper1. One of the
problems we take up in this paper is the measurement of leadership practices, examining
a small set of instruments commonly used for this purpose.
We also discuss such common difficulties in conducting both programme and leadership
effects research as the narrow and unreliable nature of commonly used student
assessment instruments and how to deal with missing data for individual schools. The
paper presents evidence that, while the school is the unit of analysis in much leadership
effects research, there is typically greater within- than across-school variation in
measures of leadership - an important, largely unaddressed, challenge for future research.
Promising approaches to the evaluation of leadership programme effects are also
outlined.
Conclusions and Recommendations
Our conclusions and recommendations bear on selected aspects of future DfES-sponsored research that might be carried out on both leadership, and leadership
programme, effects.
Future leadership effects research should:
measure a more comprehensive set of leadership practices than has been included in
most research to date; these measures should be explicitly based on coherent images
of desirable leadership practices. Such research is likely to produce larger estimates
of leadership effects on pupil outcomes than has been provided to date;
examine more closely how leaders influence the variables mediating their effects on
pupils: we now have considerable evidence about which variables most powerfully
mediate school leaders' effects, but we know much less about how leaders influence
these mediators;
Introduction
Purposes and Methods
The Department for Education and Skills (DfES) commissioned this paper to identify the
pathways by which school leadership programmes and practices influence, or positively
impact on, pupil learning and to inform future research and development in this field.
More precisely, our goals were to develop a model or framework of variables and
relationships clarifying how leadership programmes and practices impact on pupil
outcomes.
In this paper we describe alternative approaches to conceptualizing the relationship
between leadership programmes, changes in leader practices and the effects of such
changed practices on schools and students. We also identify some of the methodological
challenges facing evaluators with an interest in programme and leader effects and offer
some suggestions about how these challenges might be addressed.
Our methods for developing this paper entailed systematic analyses of current research
and theory. In the case of each challenge or issue taken up in the paper, we aimed to
reflect the best of current thinking and to be quite comprehensive about the relevant
literatures to which we paid attention; at least a sample of our sources are cited
throughout the paper. We have grappled with many of these issues in our own recent
research, as well, and our commentary also reflects that experience.
The Meaning of Leadership
We begin by reflecting on the thorny question of 'Just what is leadership, anyway?' - not
because we are able to provide a precise answer but because the concept at least deserves
some preliminary exploration in a paper with the focus of this one. It is perfectly
reasonable to ask 'How could we understand leadership effects if we don't have a clear
notion of what produced those effects?' That said, we raise this question in the face of
many claims that there is no agreed definition (e.g., Antonakis, Cianciolo & Sternberg,
2004). Indeed, the leadership literature contains literally hundreds of at least slightly
different conceptions of the concept.
One way often used to clarify the meaning of leadership is to compare it to the concept of
management. Some of these comparisons seem largely unhelpful, as in Bennis and
Nanus' (1985) claim that management is 'doing things right' and leadership is 'doing the
right things'. More helpful, we think, is a distinction offered by Kotter (1990). According
to this source, management is about producing order and consistency, whereas leadership
is about generating constructive change. Adopting this perspective, the primary effect of
organizational leadership would be significant change in a direction valued by the
organization. In practice, of course, distinguishing between leadership and management
behaviours can be extremely difficult. This is because the distinction rests not on the
nature of the behaviour but its effects. If behaviour produces order and consistency then it
must be management; if it produces change in a valued direction it must be leadership.
Most conceptions of leadership do associate it with productive change. And at the core of
most of these conceptions are two functions generally considered indispensable to its
meaning:
here very briefly with an exclusive focus on pupil effects. It is important to note that most
of this evidence has come from research on school-level leaders, especially heads, deputy
heads or their equivalent in schools outside of the United Kingdom (U.K.). Local
Education Authority (LEA) or district leadership effects on pupils have, until recently,
been considered too indirect and complex to sort out, and research on teacher leadership
has rarely inquired about pupil effects (Barr & Duke, 2004).2
Claims about the important effects of leadership on pupils are justified by five quite
different types of research evidence:
Effects of specific leadership practices on pupil test scores: a third source of evidence
about leadership effects on pupils is also large-scale and quantitative in nature.
However, instead of examining overall leadership effects, it inquires about the effects
of specific leadership practices on pupil test scores. Evidence of this sort can be found
2 This very comprehensive review of empirical evidence was able to locate only five studies assessing the
effects of teacher leadership on pupils. Only four of these studies actually included direct measures of pupil
learning and three of the four found no effects of teacher leadership on such learning. In light of how
popular teacher leadership has become as a strategy for school reform, this is an astonishing demonstration
of just how susceptible schools are to evidence-free claims about what they ought to do.
3 While this quantitative synthesis of research produces clearly interesting data, extrapolations from such
data to principal effects on pupil learning in real-world conditions must be treated with considerable
caution. First of all, the data are correlational in nature, but cause-and-effect assumptions are required for the extrapolated
effects of leadership improvement on pupil learning. Second, the illustrative effects on pupil achievement
described in the study depend on leaders improving their capacities across all 21 practices at the same
time, an extremely unlikely occurrence! Some of these practices are dispositional in nature, or rooted in
deeply held beliefs unlikely to change much, if at all, within adult populations (e.g., ideals/beliefs,
flexibility). And just one of the 21 practices, increasing the extent to which the principal is knowledgeable
about current curriculum, instruction and assessment practices, would be a major professional development
challenge, by itself. Nonetheless, this line of research is a useful addition to other lines of evidence which
justify a strong belief in the contributions of successful leadership to pupil learning.
the nature of school leadership practices that are successful in improving pupil
learning;
the conditions or circumstances under which some forms of school leadership are
more successful than others;
how successful leadership practices are connected to changes in the school
knowledge is not because leadership preparation programmes are never evaluated; rather,
the vast majority of such evaluations do not provide the type and quality of evidence
required to confidently answer questions about their organizational or pupil effects. Most
evaluations are limited to assessing participants' satisfaction with their programmes and
sometimes their perception of how such programmes have contributed to participants'
work in schools (McCarthy, 2002).
[Figure 1: A framework for studying leadership effects on pupil learning. Antecedents
lead to the independent variables (leadership practices), which act on the dependent
variables (short-term and long-term pupil outcomes) through mediating variables
(school conditions, class conditions, individual teachers, professional community);
moderating variables (e.g. family background, family culture, gender, formal
education, reward structure) condition these relationships.]
activities directly affecting the learning of pupils. The more fully developed models
in this category (e.g., Hallinger, 2003) also include attention to broader sets of
organizational variables, such as school culture or climate, thought to influence
teachers' classroom practices.
Moral leadership: concerned with the ethics and values of those exercising
leadership. Specifically, it aims to clarify the nature of the values used by leaders in
their decision making and how conflicts among values are best adjudicated (Begley &
Leonard, 1999; Begley & Johansson, 2003). A strand within this approach to
leadership specifically aims to promote democratic values and the empowerment of a
large proportion of organizational members (e.g. Starratt, 2003; Johansson, 2003).
Contemporary approaches: with 'a more explicit focus on both the leaders and the
development of their followers' (p. 254), this superordinate category includes
charismatic and transformational approaches to leadership. It also includes Leader-Member
Exchange theory, which argues that leaders have unique relationships with individual
organizational members depending on such factors as trust, perceived competence
and the like.6
Alternative approaches: expand the focus of attention beyond either the individual
leader or followers to relationships and interactions. Included in this superordinate
4 See, for example, the special issue of School Leadership and Management (2004, vol. 24, no. 1) edited by
Brent Davies.
5 We use the term 'academic leadership research' in reference to systematic, theoretically informed,
empirical inquiry about leadership - as distinct from the highly popular genre of leadership literature which
is autobiographical, anecdotal and/or exclusively case based.
6 For one of the few studies of this model of leadership in a school context, see Devereaux's (2004) recent
dissertation.
evaluations neither specify nor measure the leadership practices which they aim to
improve, opting instead for more global measures of participant satisfaction with the
contribution of the programme to participants' personal and implicit leadership efforts or
espoused leadership theories.
These shortcomings in the actual measurement of leadership point to the importance of
clearly specifying those leadership practices which are hypothesized to affect pupil
outcomes. Failure to do this arises from both practical and conceptual sources.
Practically, available resources will often press researchers and evaluators to rely on
existing evidence, evidence that is an imperfect match for their purposes. Conceptually, a
major source of the problem is lack of agreement about the definition of leadership, as we
discussed earlier.
These challenges notwithstanding, subsequent improvements in our understanding of
leadership effects on pupils depend, to a significant degree, on the use of reliable
leadership measures explicitly based on clearly defined and conceptually coherent images
of desirable leadership practices.
As these arguments make clear, our current preoccupation with pupil test scores, as the
dependent measure of choice in inquiries about leadership effects, is open to serious
challenge. That said, the preference for assessing leadership effects on pupil test scores is
not likely to go away anytime soon. In fact, Hallinger and Heck (1996a) decried the
extent of use of such measures in studies of leadership a decade ago. But the press to use
them has grown rather than diminished in the interim. So what are the challenges
associated with this measure of pupil outcomes?
While purpose-built achievement measures could be used by researchers and evaluators
(although they would have their own limitations), in practice, both levels of funding and
national, state or LEA policies mean that most research studies and programme
evaluations end up using existing measures. These measures are typically part of national,
state or LEA pupil testing programmes which have three well-known limitations as
estimates of leadership effects: a narrow focus, questionable or unknown reliability, and
the questionable accuracy with which they are able to estimate change over time. We
have encountered a handful of less pervasive, practical limitations in some of our own
recent work which we also identify in this section.
Narrow focus. First, most large-scale testing programmes confine their focus to
maths and language achievement with occasional forays into science. Only in relatively
rare cases (e.g. Kentucky) have efforts been made to test pupils in most areas of the
curriculum, not to mention cross-curricular areas such as problem-solving or teamwork.
Technical measurement challenges, lack of resources and concerns about the amount of
time for testing explain this typically narrow focus of large-scale testing programmes.
But this means that evidence of leaders' effects on pupil achievement using these sources
is evidence of effects on pupils' literacy and numeracy.
Because improving literacy and numeracy are such pervasive priorities in so many
schools at the moment, this is a limitation that will not concern many researchers and
evaluators. There is evidence, however, that leadership effects are of a different
magnitude for even these two areas of achievement. The lesson for researchers and
programme evaluators is that the size and significance of leadership effects on other areas
of achievement cannot be assumed or extrapolated.
Reliability. Lack of reliability at the school level is a second limitation of many
large-scale testing programmes. Most of these programmes are designed to provide
reliable results only for large groups of pupils. So results aggregated to national, state or
LEA levels are likely to be reliable. But as the number of pupils diminishes, as in the case
of a single school or even a small district or region, few testing systems claim to even
know how reliable their results are (e.g., Wolfe, Childs & Elgie, 2004). The likelihood,
however, is that they are not very reliable, thereby challenging the accuracy of judgments
about leadership effects. Researchers and programme evaluators would do well to limit
analysis of achievement to data aggregated above the level of the individual school or
leader.
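The instability of small-school means is easy to demonstrate with a minimal simulation (all figures below are illustrative assumptions, not taken from any actual testing programme): if pupil scores have a standard deviation of 15 points, the observed mean of a 20-pupil tested cohort varies from sample to sample far more than a 500-pupil aggregate does.

```python
import random

random.seed(1)

def school_mean(n_pupils, true_mean=100.0, sd=15.0):
    # One simulated cohort: the observed mean for a school of n_pupils,
    # assuming pupil scores are drawn from N(true_mean, sd).
    return sum(random.gauss(true_mean, sd) for _ in range(n_pupils)) / n_pupils

def spread_of_means(n_pupils, n_replications=1000):
    # Standard deviation of the observed mean across repeated cohorts:
    # a direct measure of how unreliable a single year's figure is.
    means = [school_mean(n_pupils) for _ in range(n_replications)]
    grand = sum(means) / len(means)
    return (sum((m - grand) ** 2 for m in means) / len(means)) ** 0.5

small = spread_of_means(20)    # a small school's tested cohort
large = spread_of_means(500)   # an LEA-sized aggregate
# The small cohort's mean wobbles roughly five times as much
# (theoretical standard errors: 15/sqrt(20) vs 15/sqrt(500)).
```

Under these assumed figures, a single small school's annual mean can move by several points from one cohort to the next through sampling noise alone, which is one reason for the caution above about school-level achievement figures.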
Estimating change. Conceptually speaking, monitoring the extent to which a school
improves the achievement of its pupils over time is a much better reflection of a school's
(and leader's) effectiveness than is its annual mean achievement score. Technically
speaking, however, arriving at a defensible estimate of such change is difficult. Simply
attributing the difference between the mean achievement scores of this year's and last
year's Key Stage Two pupils on the country's literacy test to changes in a school's
(and/or leader's) effectiveness overlooks a host of other possible explanations:
Cohort differences: This year's pupils may have been significantly more or less advanced
in their literacy capacities when they entered the cohort. Such cohort differences are
quite common, as any teacher will attest;
Test differences: While most large-scale assessment programmes take pains to ensure
equivalency of test difficulty from year to year, this is an imperfect process and there
are often subtle and not-so subtle adjustments in the tests that can amount to
unanticipated but significant differences in scores;
External environment differences: Perhaps the weather this winter was more severe
than last winter and pupils ended up with six more snow days - six fewer days of
instruction; or a teacher left halfway through the year, or was sick for a significant
time;
Regression to the mean: this is a term used by statisticians to capture the highly
predictable tendency for extreme scores on one test administration to change in the
direction of the mean performance on a second administration. So schools scoring
either very low or very high one year can be expected to score less extremely the
second year, quite aside from anything else that might be different.
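Regression to the mean can be demonstrated directly. In the hypothetical sketch below, each school has a fixed 'true' score, so nothing about any school's effectiveness changes between the two administrations; the bottom-decile schools nonetheless 'improve' in year two purely through selection on noisy scores.

```python
import random

random.seed(0)

n_schools = 500
# Each school's stable 'true' performance, plus independent
# test-to-test noise in each administration (illustrative figures).
true = [random.gauss(100, 5) for _ in range(n_schools)]
year1 = [t + random.gauss(0, 5) for t in true]
year2 = [t + random.gauss(0, 5) for t in true]

# Select the schools in the bottom decile of the year-one scores.
cutoff = sorted(year1)[n_schools // 10]
low = [i for i in range(n_schools) if year1[i] <= cutoff]

mean_y1 = sum(year1[i] for i in low) / len(low)
mean_y2 = sum(year2[i] for i in low) / len(low)
# mean_y2 sits noticeably closer to the overall mean of 100 than
# mean_y1 does, despite no school having changed at all.
```

The same selection effect runs in the opposite direction for schools scoring very high in year one, so single-year gains or losses at the extremes say little by themselves about leadership.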
Linn (2003) has demonstrated that these challenges to change scores become less severe
as change is traced over three or four years. It is the conclusions drawn from simply
comparing this year's and last year's scores that are especially open to misinterpretation.
Unfortunately, it is the year-over-year comparisons that are most commonly made by
those who report achievement results. The lesson here for researchers and programme
evaluators is to use, as measures of their dependent variable, changes in pupil
achievement over relatively long periods of time (three or more years).
Two further limitations. While the three limitations of national or state achievement
data reviewed above have attracted considerable recent attention from researchers and
evaluators, we encountered two others in the course of conducting a recent leadership
programme evaluation (Leithwood et al., 2003) using U.S. state (Louisiana)
achievement data as the measure of programme and leadership effects.
One of these limitations was changing measures over time. Ours was a five year
longitudinal evaluation which was complicated by changes in the state's achievement
measures. Given the frequency of policy shifts in pupil assessment practices in many
jurisdictions, it may become impossible to maintain a consistent set of data over several
years. Not only can the tests change, but scoring rubrics and cut-offs may also be
modified.
Missing or incorrect data may also be a problem for researchers and evaluators. Indeed,
school records of achievement data, should they be used, are quite likely to be inadequate
for research purposes. Data may be missing, or coded incorrectly, or simply misplaced
over the years. In our own recent evaluation, achievement data for a few schools were
available for one year but not subsequent years, even though the grades to which the tests
were administered were included in the schools. This reduced the number of schools
available for comparison across years. A few schools changed the grade levels tested,
further complicating the comparability issue by the lack of data for the same grade(s)
each year.
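The practical consequence of such gaps is a shrinking complete-case sample. A minimal sketch (school names and figures are hypothetical) of the filtering this forces on an evaluator:

```python
# Hypothetical panel of school mean scores; None marks a missing record.
scores = {
    ("Avon Primary", 2001): 61.0, ("Avon Primary", 2002): 63.5,
    ("Avon Primary", 2003): 66.0,
    ("Birch Primary", 2001): 58.0, ("Birch Primary", 2002): None,
    ("Birch Primary", 2003): 60.5,
    ("Cedar Primary", 2001): 55.0,  # tested grades changed after 2001
}

years = [2001, 2002, 2003]
schools = sorted({school for school, _ in scores})

def complete_cases(scores, schools, years):
    # Keep only schools with a usable score in every year of the panel;
    # schools with any missing year drop out of cross-year comparisons.
    return [s for s in schools
            if all(scores.get((s, y)) is not None for y in years)]

usable = complete_cases(scores, schools, years)
# Only one of the three schools survives the three-year requirement.
```

Each school lost this way shrinks the sample available for estimating leadership effects, and the loss is rarely random, since data problems cluster in particular kinds of schools.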
Level Three Effects In Education: Toward Organizational Survival
Level three effects are the long-term outcomes of schooling. Such effects reflect, fairly
closely, a number of the important reasons for society's heavy investment in public
education. These reasons, as we have argued elsewhere (Leithwood, Aitken & Jantzi,
2002), concern both the individual welfare of students and the general public good.
Public investment in primary and secondary schooling is typically justified by the
assumption that it prepares people for life after secondary school - employment,
tertiary education and commitments to learning over the life span. So two important
effects suggested by this justification are students' commitment to further education
and preparation for work.
The publics investment in schooling is also justified by the contributions to communities
that accrue from a well educated population. These aggregate rather than individual
effects include, for example, contributions to the community's economic productivity.
Contributions of this sort might be gleaned from employers' perceptions of how well
graduates are prepared for the workplace and from claims on the part of employers that
their attraction to the community and their decisions to stay can be at least partly
attributed to the quality of its schools.
Economic productivity is not the only indicator of schools' contributions to their
communities. Such contributions may also take the form of:
leadership effect. But that was before the complete capture of the public sector by an
ideology and ethic of accountability, one which has proven extraordinarily hostile to the
monopolistic provision of public services. By now, competition from private education
alternatives is quite common and public schools on both sides of the Atlantic failing to
meet their performance targets are sometimes closed and reconstituted, after being
placed in 'special measures', often with new leadership and staff.
Less obvious but perhaps even more pervasive is the growth of private providers of a
wide range of services to public schools and LEAs, services formerly provided within the
public school system itself. This trend is viewed by its advocates as a highly desirable
development within the public school sector allowing it to focus on its core competences
and mission. But it could as easily be viewed as 'death by a thousand cuts'. Wherever
this trend takes us in the future, there can be little doubt that leaders in public education
systems are faced with a challenge to the survival of their systems. So institutional
survival is a rational long term criterion by which to judge their effectiveness.
7 First, we re-examined evidence summarized in an extensive review of literature carried out for another
major research project (Leithwood, Seashore Louis, Anderson & Wahlstrom, 2004). The recentness and
comprehensiveness of this review allowed us to assume that it adequately represented the current state of
knowledge from North American research sources.
Second, in order to reflect information reported in the U.K. context about variables both mediating and
moderating successful leadership, we conducted a search of relevant empirical evidence - both qualitative
and quantitative - published over the past 12 years using the international ERIC data base (1976-2004) for
U.K. sources only, Education-line (an exclusively British data base) and the Web of Science (U.K sources
only). We also reviewed all empirical studies conducted in the U.K. relevant to leadership effects, very
broadly defined, published in Educational Management, Administration and Leadership between 2000 and
2004.
Finally, to augment our search for variables which moderate leadership effects, we analyzed empirical
studies reported between 2000 and 2004 in The Leadership Quarterly, one of the most widely respected
outlets for research on leadership conducted in non-school organizations. While limited to only one journal,
we assumed that this source would include most, if not all, of the relevant variables that would have
surfaced had an expanded number of comparable journals been included in our analysis. This assumption
was confirmed by scanning chapters in a small number of quite recent edited texts in this field.
point for our present work, we then supplemented the results with evidence from the U.K.
sources described above.
Table 1 lists the mediating variables found in these studies, as a whole. It should be
stressed that these variables were not always conceptualized by the authors of the
U.K. studies as we have conceptualized them for this paper. One of the reasons for this is
that by far the majority of the U.K. studies we were able to locate were qualitative in
nature and carried out using case study methodologies. Research of this sort is not often
designed to explore causal relationships in as direct a manner as the overall framework
for this paper indicates was our intention. Nonetheless, the decision to include a variable
in Table 1 was based on evidence either that leadership had influenced it or that it had
influenced pupil learning. We found no studies that had systematically tested both sets of
relationships. Indeed, we found no U.K. leadership effects research explicitly guided by a
framework approximating Figure 1.
The left-hand column of Table 1 names each of the mediating variables (four categories)
and the middle column cites published evidence about each mediator collected in U.K.
contexts. We provide here a brief explanation of what is included in each category of
mediating variables.
The School Conditions category of mediating variables in Table 1 includes:
school structures: this variable includes school size and reflects evidence about the
value of small structures in building a sense of identity with the school and increased
chances for meaningful relationships between teachers and pupils. Also included is
the extent to which governance is centralized or decentralized and the degree to
which there are opportunities for staff to participate in school-level decision making.
school culture: this mediator encompasses evidence about the impact on pupils of a
school's shared norms and values - what its members aspire to for both pupils and the
nature of their organization;
organizational learning processes: these are largely self-reflective and critically
reflective processes on the part of teaching staff, as described in the literature in
which they appeared;
relationships with families: the extent to which parents are considered partners with
schools in the education of pupils;
instructional policies and practices: concerned directly with the schools core
technology, this variable encompasses school-wide policies shaping decisions about
student retention and promotion, the coherence of instructional programmes, and the
nature and availability of extra-curricular programmes;
human resources: how teacher time is allocated, as well as a wide range of working
conditions that support the instructional work of teachers, are part of this variable;
extra-curricular activities: the types of activities in which students engage outside of
classrooms but within the sponsorship of the school.
teacher recruitment and retention practices;
class size: suited especially to leadership studies at the primary level, this variable
reflects evidence of the effects on learning in the primary grades of small classes
(approximating 15 students) when teachers adapt their instruction to take advantage
of smaller classes;
teaching loads: suited especially to leadership studies in secondary schools, this
school leadership studies, this variable assumes that pupil learning is significantly
associated with teachers' subject matter knowledge;
homework: although difficult to implement in a widely acceptable manner, this
foundation for this variable; this evidence clearly favours heterogeneous over
homogeneous grouping strategies;
curriculum and instruction: this variable reflects the impact, on the learning of most
basic skills: especially important to measure are teachers' literacy skills in primary
schools;
pedagogical content knowledge and skill: this is knowledge and skill about how to
teach particular ideas or concepts;
motivation: the nature and extent of commitment staff have toward pupils and their
school.
shared norms and values among teachers concerning student and teaching standards;
a focus on pupil learning as the criterion for virtually all teacher decisions;
reflective dialogue: places a high priority on inquiry and collaborative study of one's
own and colleagues' practices;
collective professional development: the nature and extent to which professional
development is undertaken collectively by staff;
distribution of leadership: the nature and extent to which leadership in the school is
shared.
Table 1: A Summary of U.K. Evidence about Variables which Mediate Leaders' Effects on
Pupil Learning

Mediating Variables
School conditions: structure; school culture (e.g., shared norms and values, shared
decision making, collaboration); instructional policies/practices (e.g., performance
targets); resources (human, financial); extra-curricular activities; teacher recruitment
and retention.
Classroom conditions: class size; teaching loads; formal preparation; homework;
curriculum and instruction.
Teachers (individual): basic skills; content knowledge; pedagogical content knowledge;
experience; professional development; motivation.
Teachers (collective): shared norms; focus on students; deprivatized practice; reflective
dialogue; collective professional development; distribution of leadership.

U.K. Research
Barker (2001); Nicolaidou & Ainscow (2002); Colwell & Hammersley-Fletcher (2004);
Morris (2000); Harris et al (2003); Devos & Verhoeven (2003); Rutherford (2002);
Wilson & McPake (2000); Jones (2003); Caddell (1996); James & Colebourne (2004);
Jennings & Lomas (2003); Wilkins (2002); Milgram (2003); Farrell & Morris (2004);
Poulson (2001); Penny (2003); Durrant (2003); Wallace (2001); Adey (2000)
We do not claim to have uncovered all potential mediators or all of the relevant evidence
about those we have identified. But we do believe that most mediators conceptually
suited to leadership effects studies are evident in Table 1, and that there is sufficient
empirical evidence about the influence of each on pupil learning to warrant their
consideration in the design of leadership effects research.
Since moderating variables help explain when, or under what conditions, certain effects
will hold, the careful selection of moderating variables is a key step in designing
leadership effects research, and one that has been badly neglected in educational
leadership research to date. The consequences of this neglect can be easily illustrated.
Suppose, for example, that we design a study to assess the effects of leaders' goal-setting
practices on teachers' organizational commitment. Reflecting evidence from previous
research, we might decide that teachers' sense of professional self-efficacy is a key
mediator of the effects of goal-setting practices on such commitment. So we measure it.
But suppose goal-setting practices only influence teachers' sense of efficacy in a positive
direction when trust in leaders is high. Unless we measure trust in leaders as well, it is
likely that at least some subsequent studies will disconfirm our results. They will do so
because the different teacher samples in those subsequent studies will vary in unknown
ways with respect to their levels of trust in leaders.
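The goal-setting example can be made concrete with an interaction-term regression, the standard statistical test for a moderator. The sketch below uses simulated data; all variable names, sample sizes and effect magnitudes are invented for illustration, not drawn from any study reviewed here. It shows how a model that omits the trust moderator reports only a diluted average effect:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 600

# Simulated, standardized illustrative data (names and effect sizes invented).
goal_setting = rng.normal(size=n)                  # leaders' goal-setting practices
trust = rng.integers(0, 2, size=n).astype(float)   # 0 = low, 1 = high trust in leader
# Assumption for illustration: goal-setting builds efficacy only under high trust.
efficacy = 0.6 * goal_setting * trust + rng.normal(size=n)

def ols(y, *cols):
    """OLS coefficients (intercept first) via least squares."""
    X = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0]

# A model that ignores the moderator reports a diluted average effect...
pooled = ols(efficacy, goal_setting)[1]
# ...while the interaction model recovers the conditional effects.
_, b_goal, b_trust, b_inter = ols(efficacy, goal_setting, trust, goal_setting * trust)

print(f"pooled slope={pooled:.2f}")
print(f"slope when trust is low={b_goal:.2f}, when trust is high={b_goal + b_inter:.2f}")
```

A replication drawing a sample with mostly low-trust teachers would estimate an effect near zero, which is exactly the disconfirmation pattern described above.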
As this example makes clear, inadequate attention to moderating variables is one of the
more plausible explanations for contradictory research findings, and for the general
scepticism that often follows about the potential of research to provide clear guidance for
policy and practice. This charge, it should be noted, is in no way unique to large-scale
quantitative research, which at least has a tradition of worrying about such matters. Case
study and other qualitative research is especially well suited to the task of surfacing
promising moderators for subsequent consideration.8
A final point of clarification about moderators concerns the basis on which a variable is
assigned moderator status. Indeed, the same variable might be assigned moderator,
mediator or dependent variable status depending entirely on the theory or framework
used to guide a leadership effects study. For example, employee trust was used above
as an example of a moderating variable in research concerned with the effects of
leaders' goal-setting practices.
8 Qualitative research should not be considered exempt from the obligation of providing evidence
about moderating variables, even though some of its advocates eschew even the idea of causal
relationships and the concept of dependent, independent and mediating variables.
To extend our illustrative menu of possible moderators for future research beyond those
typically found in our original North American sources, we reviewed the same published
U.K. sources used to identify mediating variables. In addition, we analyzed all issues of
the 2000 to 2004 volumes of The Leadership Quarterly, a journal reporting research
conducted mostly in non-school contexts.
As Table 2 indicates, our review of empirical studies carried out in the U.K. and studies
published in The Leadership Quarterly identified five categories of moderators (twenty-one
specific moderators) that may warrant additional attention in future studies of
educational leadership effects on pupils. The first category of moderators, Pupils,
includes a range of features associated with pupils' family background, family
educational culture, and pupils' gender and ethnicity. The second category, Teachers,9
includes gender, formal education and tenure (age and experience have been included
here). Also part of this category are teachers' ethnicity, their beliefs and values, morale,
the trust and confidence they have in their leaders, and their leadership prototypes.10 Two
characteristics of Leaders themselves are identified in Table 2: their gender and their
level in the organizational hierarchy.
Table 2 identifies five features of the Organization which may moderate responses to
leadership practices, including school size, what it is people are rewarded for, and
opportunities for job enrichment. The difficulty of the tasks people are expected to
accomplish, the interpersonal dynamics among people in the organization, and the
availability and use of information in decision making are also identified as moderators in
this category. Finally, in the category Organizational context, the nature of other
stakeholders in the school or district and their relationships with the school, as well as the
policy environment in which the organization finds itself, moderate the amount and
nature of leaders influence.
9 We exclude from this category teachers in leadership roles. In the non-school leadership research
literature, the equivalent category label would be employees or followers.
10 The citation in Table 2 to Lord and Maher is not found in the issues of The Leadership Quarterly that we
reviewed. But leadership prototypes have gained extensive attention in other publications and should not
go unnoticed, in our view. Leadership prototypes are the mental models people have developed about the
meaning of leadership. Such prototypes serve as the source of the criteria people use in judging whether or
not someone is exercising leadership and the extent to which that leadership is desirable.
Table 2: A Summary of U.K. and The Leadership Quarterly Evidence About Variables
Which Moderate Leaders' Effects

Moderating Variables
Pupils: background, mobility, social identity, class.
Teachers: gender; formal education; tenure/age/experience; ethnicity; beliefs and
values; morale; trust/confidence in leader; leaders' prototypes.
Leaders: gender; hierarchical level.
Organization: size; reward structure; job enrichment opportunities; goal or task
difficulty; interpersonal dynamics/norms; availability/use of information technology.
Organizational context: stakeholders; policy environment.

U.K. Research
McLay (2003); Moyo-Robbins (2002); Jones (2003); Oduro (2004); Johnston (1997);
Pepin (2000); Olson & Davidson (2003); Selwyn (2001); Radnor, Ball & Vincent (2002);
Leithwood et al (2004)

The Leadership Quarterly
Koene et al (2002); Kahai, Sosik & Avolio (2003); Whittington, Goodwin & Murray (2004)
schools. The highest levels of transformational leadership were associated with moderate
(rather than vigorous or conservative) levels of organizational innovativeness prompted
by increased competition. This study also demonstrated a significant relationship between
principal proactivity and the use of transformational practices.
Both Ross (2004) and Leithwood et al (2004b) found evidence that two substantially
different formal leadership experiences had significant effects on the development of
transformational leadership behaviours among school principals. The training programme
in Ross's study extended over four sessions (one full day and three half days), was
conducted with principals and a team of their teachers, and aimed to improve students'
reading and writing achievement indirectly by directly changing teachers' assessment
practices. Leithwood et al's study was based on a five-year longitudinal evaluation of the
effects of a leadership centre programme which provided a variety of sometimes intense
experiences for principals over several years, both inside and outside their schools. Both
studies reported significant positive effects of the training experiences on principals'
transformational leadership as well as student achievement.
[Figure: Six models for evaluating leadership preparation programmes, linking qualities
of effective programmes and leadership preparation experiences to participant
satisfaction, changes in knowledge, skills and dispositions, changes in leadership
practices in schools, altered classroom conditions, and improved student outcomes
(models 1 to 6)]
Evaluations guided by models 3 and 4 are, in many respects, highly defensible responses
to the outcomes usually demanded of education programmes in most fields: that is, that
programmes change the capacities and/or actual practices of participants. Indeed, it is
reasonable to argue, given the methodological difficulties associated with models 5
and 6, that this ought to be viewed as the near-term standard for evaluating leadership
preparation programmes. Very few examples of such evaluation can be found at present.
One such example is Leithwood et al's (1996) summative evaluation of 11 university-based
programmes sponsored by the Danforth Foundation. With Wallace Foundation
support, Darling-Hammond (2004) and her colleagues are just now beginning a second
such example.
These first three models may be parts of the more complex models 4 through 6, and
model 6 potentially subsumes all the others. In the case of model 4, the criterion variable
for judging leadership programme effects is change in programme participants' actual
practices in their organizations. Model 5 requires, in addition, evidence that such
leadership practices lead to desirable changes in school and classroom conditions.
Finally, model 6 expects everything: the criterion for judging a leadership programme
successful is that the students in the participants' schools learn more.
By mixing and matching, other hybrid models are possible, although we do not have
much to say about them in this paper. For example, one could easily create a direct
effects model in which only leadership programmes and/or leadership practices and
student outcomes were linked. But the likelihood of such a model detecting changes in
pupils' learning due to either experience in a leadership programme or changes in
leaders' practices alone is remote (Hallinger & Heck, 1996).
[Figure 3: Framework for the evaluation of school leadership centre programmes:
programmes influence participants' internal processes and their leadership practices in
their schools; these shape school and classroom conditions (mission/goals, culture,
school planning, instruction, structure/organization, information collection/decision
making, policies and procedures), which in turn influence student participation and
engagement and student achievement]
Three recent studies with which we are associated provide more specific illustration of
the range of alternatives within model 6. Figure 3 summarizes the framework used to
guide a recently completed, five-year evaluation of an annual series of development
initiatives provided for a selected set of practicing school principals in the greater New
Orleans, Louisiana, region of the United States (Leithwood et al, 2003). In this case,
internal processes were primarily cognitive in nature, and a transformational model (after
Leithwood, Jantzi, & Steinbach, 1999) was used to conceptualize leadership practices.
Variables in the school and classroom, influenced by organizational design theory, were
derived from evidence reported in Leithwood and Aitken (1995). Two sets of student
outcomes served as dependent variables in this evaluation: student participation and
engagement in class and school, and achievement as measured by the state's annual tests.
[Figure: A framework for leadership effects research linking (1) state leadership, policies
and practices (e.g., standards, testing, funding), (2) district leadership, policies and
practices (e.g., standards, curriculum alignment, use of data), (3) student/family
background (e.g., family educational culture), (4) school leadership, (5) other
stakeholders (e.g., unions, community groups, business, media), (6) school conditions
(e.g., culture/community, school improvement planning), (7) teachers (e.g., individuals'
capacity, professional community), (8) classroom conditions (e.g., content of instruction,
nature of instruction, student assessment), and (9) leaders' professional learning
experiences (e.g., socialization, mentoring, formal programmes) to (10) student learning]
[Figure: A model in which school leadership practices influence teachers' motivation,
capacity and work setting, which shape teachers' classroom practices and, in turn,
student learning]
Methodological Challenges
Earlier sections of this paper have identified some of the challenges of measuring both
leadership and student outcomes. Evaluators of leadership programmes, and researchers
inquiring about leadership effects, face additional thorny methodological challenges in
their efforts to demonstrate programme and leader effects on pupil outcomes, in
particular. We describe six such challenges in this section of the paper.
The first four methodological challenges are based on the direct experience of Leithwood
and his colleagues (2003) during the evaluation of the New Orleans programme described
above. These challenges arose from the attempt to use state test data as the basis for
assessing programme and leader effects on student achievement; data of this sort will
often be the cheapest and most accessible achievement measures for evaluators and
researchers. Leithwood et al (2003) describe their four challenges to illustrate what they
believed ought to be the appropriate standard of evidence for evaluating leadership
programme effects on pupil learning: plausible evidence of effects, along with a
convincing explanation for such effects. Certainty of effects, they claimed, is an
unrealistic standard for programme evaluations. In the case of the New Orleans
evaluation, while achievement data were available through state sources, at least
five important limitations on their use had to be addressed.
School Organisation
Considerable variation in the organisation of programme participants' schools meant that
there were very small numbers of students available for analysis within any one type of
organization. Few schools had identical sets of data. Across participants' schools, at least
eleven different grade configurations could be found. None of these configurations
included enough schools to carry out detailed analyses by configuration using the same
measures. For example, a school mean for the Iowa achievement test was calculated to
develop a gain score from 2000 to 2002. The number of grades contributing to the school
mean varied across schools from one to five. This raised questions about the
comparability of, for example, a mean score for grades 3 and 5, a mean score for a single
grade 3, and a mean score for a grade 6 and 7 combination. But limiting comparisons only
to identical grades would have virtually ruled out most analyses, particularly for schools
within the same participant cohort.
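The gain-score calculation just described can be sketched in a few lines; the school names, grades and scores below are fabricated purely to illustrate the comparability problem, not taken from the New Orleans data:

```python
# Hypothetical school-level test data: {school: {year: {grade: mean_score}}}.
scores = {
    "School A": {2000: {3: 48.0, 5: 52.0}, 2002: {3: 51.0, 5: 55.0}},
    "School B": {2000: {3: 45.0},          2002: {3: 50.0}},
    "School C": {2000: {6: 60.0, 7: 58.0}, 2002: {6: 61.0, 7: 62.0}},
}

def school_gain(by_year):
    """Mean-of-grades gain from 2000 to 2002, plus the grades that contributed."""
    common = sorted(set(by_year[2000]) & set(by_year[2002]))
    mean = lambda year: sum(by_year[year][g] for g in common) / len(common)
    return mean(2002) - mean(2000), tuple(common)

gains = {school: school_gain(data) for school, data in scores.items()}
# The gains are numerically comparable, but each rests on a different grade
# configuration -- exactly the comparability problem noted in the text.
for school, (gain, grades) in gains.items():
    print(f"{school}: gain={gain:+.1f} over grades {grades}")
```

Each school yields a single gain number, yet one summarizes grades 3 and 5, another a single grade 3, and another grades 6 and 7, so treating the three as interchangeable is questionable.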
School Type
A mix of state or public (82%) and parochial (18%) schools in the New Orleans sample
meant that approximately one fifth of the schools did not have results for two of the main
tests used by the state. Data for parochial schools could only be obtained directly from
the schools, some of which were not able or willing to provide data without more
encouragement than the external evaluator managed to provide. However, even if data
had been provided, the small number of such schools would have permitted only the
simplest forms of analysis.
There are also problems with using entire schools as the unit of analysis for leadership
impact. A school model assumes that leadership impacts are direct and evenly distributed,
but as our earlier discussion shows, neither of these assumptions is necessarily correct.
Take the issue of staff mobility. One reasonable expectation for good leadership is that
staff stability should be high, since many changes in staff are likely to make it more
difficult to improve learning and teaching. However, changes in leadership are associated
with increased turnover of staff as leaders recruit new staff who are more aligned with the
leaders' values.
A further difficulty with using school-level results as the measure of leadership
effectiveness relates to the situational nature of leadership. If we want to assess the
impact of leadership in situ, then presumably we need a definition of leadership that is
precise enough for us to recognize it. Yet we also know that many factors shape
school outcomes, and that these factors may also affect the nature of leadership. For
example, presumably one important task of school leaders is to recruit and retain good
teachers, but that task may be much more difficult in settings that also face other
constraints on good outcomes, such as geographical isolation, high poverty or high ethnic
diversity.
Many other aspects of leader activity, such as strategies for involving parents, relations
with students and approaches to curriculum, may also vary considerably within schools. If
this is so, how is it possible to assess leadership impact across settings? Two strategies
are possible. The most commonly used strategy is simply to rely on an average impact
score. While this doesn't tell us much about the areas of low and high impact in a school,
it does tell us the overall level of impact, which may be sufficient for some evaluation
purposes; indeed, most of the quantitative evidence we now have of leadership impact is
based on this strategy.
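Whether an average impact score is a fair summary depends on how much ratings vary within a school relative to between schools, and a quick variance decomposition makes that check explicit. A minimal sketch with invented teacher ratings (the schools, scale and values are assumptions for illustration):

```python
import statistics as st

# Hypothetical teacher ratings of leadership impact, grouped by school (1-5 scale).
ratings = {
    "School A": [3.1, 4.6, 2.8, 4.9],
    "School B": [3.3, 4.4, 2.9, 4.6],
    "School C": [3.0, 4.7, 3.2, 4.9],
}

school_means = {school: st.mean(r) for school, r in ratings.items()}
# Average spread of ratings inside each school vs spread of the school means.
within = st.mean(st.pvariance(r) for r in ratings.values())
between = st.pvariance(school_means.values())

print(f"between-school variance={between:.3f}, within-school variance={within:.3f}")
# When within-school variance dwarfs between-school variance, a single school
# mean hides most of the variation in perceived impact.
```

In this invented case the school means are nearly identical while ratings inside each school diverge sharply, so the average impact score would mask almost all of the interesting variation.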
A second strategy, used quite extensively in leadership research, is to start with schools
judged to be exemplary on the basis of student outcomes and then determine what their
leaders do. Even better is a design that compares outliers: exceptionally high- versus
low-scoring schools. While these designs have some value as research strategies, they are
of no value for programme evaluation purposes: one cannot pick leaders and schools for
evaluation; the programme picks them for you.
Research design
Statistical models used in quantitative research on leader effects, typically some path
analytic technique (e.g., LISREL), are capable of measuring both leaders' direct effects
on pupil learning and their indirect effects through influence on school and
classroom conditions which, in turn, also influence pupil learning. Statistical models of
this type are extremely useful in the evaluation of leadership programmes beyond
models 1 to 3, in the vast majority of evaluations in which only data related to
participants, their schools and students can be collected.
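The direct-plus-indirect decomposition these path models estimate can be illustrated without specialist software: with a single mediator and ordinary least squares, the indirect effect is the product of the two path coefficients, and direct plus indirect equals the total effect. The toy sketch below uses simulated data; the variable names and magnitudes are assumptions for illustration, not results from any study cited here:

```python
import numpy as np

rng = np.random.default_rng(2)
n = 800

# Standardized toy variables (all effect sizes are illustrative assumptions).
leadership = rng.normal(size=n)
# School/classroom conditions mediate most of the leadership effect.
conditions = 0.6 * leadership + 0.8 * rng.normal(size=n)
# Pupil learning: a small direct path from leadership, a larger path via conditions.
learning = 0.1 * leadership + 0.5 * conditions + rng.normal(size=n)

def coefs(y, *cols):
    """OLS slope coefficients (intercept dropped) via least squares."""
    X = np.column_stack([np.ones(len(y)), *cols])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

a = coefs(conditions, leadership)[0]                 # path: leadership -> conditions
direct, b = coefs(learning, leadership, conditions)  # direct path; conditions -> learning
indirect = a * b                                     # mediated (indirect) path
total = coefs(learning, leadership)[0]               # total effect

print(f"direct={direct:.2f}, indirect={indirect:.2f}, total={total:.2f}")
```

The decomposition total = direct + indirect holds exactly for OLS estimates on the same sample, which is why a direct-effects-only model (linking leadership straight to outcomes) understates what such path models can reveal.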
our understanding of school leadership practices and their effects? Building on our
earlier description of leadership models in non-school contexts, we identified
perspectives on leadership yet to be explored in school-based leadership research but
with promising potential.
How to define, and what do we know about, dependent, moderating and mediating
variables in school leader and leadership programme effects studies? We define and
illustrate the state of the art of knowledge about these variables through a review of
leadership research carried out exclusively in U.K. contexts. It is clear from this
review that most recently published U.K. leadership research is almost exclusively
small-scale and qualitative in nature.
Although not as extensive as our treatment of conceptual challenges, the paper also
responded to a series of methodological challenges typical of field-based leadership
research. One of the problems we took up in the paper is the measurement of leadership
practices; we examined a small set of instruments commonly used for this purpose.
We also discussed such common difficulties in conducting both programme and
leadership effects research as the narrow and unreliable nature of commonly used student
assessment instruments and how to deal with missing data for individual schools. The
paper presents evidence that, while the school is the unit of analysis in much leadership
effects research, there is typically greater within- than across-school variation in
measures of leadership: an important, largely unaddressed challenge for future research.
Promising approaches to the evaluation of leadership programme effects are also
outlined.
Our conclusions and recommendations bear on selected aspects of future DfES-sponsored
research and evaluation that might be carried out in the United Kingdom on
both leadership, and leadership programme, effects. Future leadership effects research
should, in our view:
measure a more comprehensive set of leadership practices than has been included in
most research to date: these measures should be explicitly based on coherent images
of desirable leadership practices. Such research is likely to produce larger estimates
of leadership effects on pupil outcomes than have been provided to date;
measure an expanded set of dependent (outcome) variables: these are variables
beyond just short-term pupil learning, including longer-term effects as well.
Examples of such long-term effects include pupil success in tertiary education,
employment and commitment to learning over the life span.
incorporate leadership programmes into leadership effects research: our review
suggests that the vast majority of previous efforts to evaluate leadership programme
effects, in most parts of the world, have not generated the type and quality of
evidence required to confidently answer questions about their organizational or pupil
effects. This problem could be addressed by introducing leadership programmes,
conceptualized as one category of antecedent variables stimulating changes in
leadership practices, into the conceptual frameworks we have suggested for
leadership effects studies.
learning as well as leaders' practices, then a different level of funding will be required
than has been typical to date.
Implementing these recommendations will require considerable attention to a significant
number of conceptual and technical issues at the core of designing and conducting high
quality, high impact research and evaluation. Just doing more research of the type
typically being done in the country now - at least as it is reflected in the published
literature - seems unlikely to significantly advance our understanding of how
leadership programmes and leaders most productively improve pupil learning.
References
Adey, K. (2000). Professional development priorities: The views of middle managers in
secondary schools. Educational Management and Administration, 28(4), 419-431.
Antonakis, J., Avolio, B., & Sivasubramaniam, N. (2003). Context and leadership: An
examination of the nine-factor full-range leadership theory using the Multifactor
Leadership Questionnaire. The Leadership Quarterly, 14(3), 261-295.
Antonakis, J., Cianciolo, A. T., & Sternberg, R. J. (Eds.). (2004). The nature of
leadership. Thousand Oaks, CA: Sage Publications.
Avolio, B., & Yammarino, F. (2002). Reflections, closing thoughts, and future directions.
In B. Avolio & F. Yammarino (Eds.), Transformational and charismatic
leadership: The road ahead. Oxford: Elsevier Science Ltd.
Barker, B., & Busher, H. (2001). The nub of leadership: Managing the culture and policy
contexts of educational organizations. Paper presented at the British Educational
Research Association Annual Conference, Leeds.
Baron, R. M., & Kenny, D. A. (1986). The moderator-mediator variable distinction in
social psychological research: Conceptual, strategic and statistical considerations.
Journal of Personality and Social Psychology, 51(6), 1173-1182.
Bass, B. M. (1985). Leadership and performance beyond expectations. New York: The
Free Press.
Begley, P., & Johansson, O. (Eds.). (2003). The ethical dimensions of school leadership.
Dordrecht: Kluwer Academic Publishers.
Begley, P., & Leonard, P. E. (Eds.). (1999). The values of educational administration.
London: Falmer Press.
Bell, L., Bolam, R., & Cubillo, L. (2003). A systematic review of the impact of school
headteachers and principals on student outcomes. Retrieved March 20, 2004, from
http://eppi.ioe.ac.uk/EPPIWebContent/reel/review_groups/leadership/
lea_rv1/lea_rv1.pdf
Bennis, W., & Nanus, B. (1985). Leaders: The strategies for taking charge. New York:
Harper & Row.
Blake, R. R., & Mouton, J. S. (1964). The managerial grid. Houston, TX: Gulf.
Brown, D., & Lord, R. (1999). The utility of research in the study of transformational and
charismatic leadership. The Leadership Quarterly, 10(4), 531-539.
Bryman, A. (2004). Qualitative research on leadership: A critical but appreciative review.
The Leadership Quarterly, 15(6), 729-769.
Caddell, D. (1996). Roles, responsibilities and relationships: Engendering parental
involvement. Paper presented at the Scottish Educational Research Association
Conference, Dundee.
Caldwell, B. J. (2000). Leadership in the creation of world-class schools. In K. A. Riley
& K. S. Louis (Eds.), Leadership for Change and School Reform. London:
Routledge Falmer.
Campbell, D., & Stanley, J. (1963). Experimental and quasi-experimental designs for
research. Boston, MA: Houghton Mifflin.
Chatterji, M. (2004). Evidence on "what works": An argument for extended-term mixed
method (ETMM) evaluation designs. Educational Researcher, 33(9), 3-13.
Clark, K. E., & Clark, M. B. (Eds.). (1990). Measures of Leadership. West Orange, NJ:
Leadership Library of America, Inc.
Clark, M., Walter, H., & Madaus, G. (2000). High stakes testing and high school
completion. NBETPP Statements, 1(3). Chestnut Hill, MA: National Board on
Educational Testing and Public Policy.
Collins, D. (2002). Performance-level evaluation methods used in management
development studies from 1986 to 2000. Human Resource Development Review,
1(1), 91-110.
Colwell, H., & Hammersley-Fletcher, L. (2004). The emotionally literate primary school.
Paper presented at the British Educational Research Association Annual
Conference, Manchester.
Cook, T. D. (1991). Clarifying the warrant for generalized causal inferences in
quasi-experimentation. In M. McLaughlin & D. Phillips (Eds.), Evaluation and
education at quarter century (pp. 115-144). Chicago: National Society for the
Study of Education.
Conley, S. (1991). Review of research on teacher participation in school decision making.
In G. Grant (Ed.), Review of research in education. Washington, DC: American
Educational Research Association.
Creemers, B. P. M., & Reezigt, G. J. (1996). School level conditions affecting the
effectiveness of instruction. School Effectiveness and School Improvement, 7,
197-228.
Dansereau, F., & Yammarino, F. (Eds.). (1998). Leadership: The multi-level approaches.
Stamford, CT: JAI Press.
Dansereau, F., Yammarino, F., & Markham, S. (1995). Leadership: The multiple-level
approaches. The Leadership Quarterly, 6(3), 251-263.
Davies, B. (Ed.). (2004). School Leadership and Management, 24(1).
Day, C., Hadfield, M., & Harris, A. (1999). Leading schools in times of change. Paper
presented at the European Conference on Educational Research, Lahti, Finland.
Day, C., Harris, A., & Hadfield, M. (2001a). Grounding knowledge of schools in
stakeholder realities: A multi-perspective study of effective school leaders. School
Leadership and Management, 21(1), 19-42.
Day, C., Harris, A., & Hadfield, M. (2001b). Challenging the orthodoxy of effective
school leadership. International Journal of Leadership in Education, 4(1), 39-56.
Day, D. V. (2001). Assessment of leadership outcomes. In S. J. Zaccaro & R. J. Klimoski
(Eds.), The nature of organizational leadership: Understanding the performance
imperatives confronting today's leaders (pp. 384-410). San Francisco: Jossey-Bass.
Day, D. V., & Lord, R. G. (1988). Executive leadership and organizational performance:
Suggestions for a new theory and methodology. Journal of Management, 14,
453-464.
Devereaux. (2004). Merging role-negotiation and leadership practices that influence
organizational learning. University of Toronto, Toronto.
Devos, G., & Verhoeven, J. C. (2003). School self-evaluation - Conditions and caveats:
The case of secondary schools. Educational Management and Administration,
31(4), 403-420.
Dror, Y. (1986). Policymaking under adversity. New Brunswick, NJ: Transaction Press.
Earl, L., Watson, N., Levin, B., Leithwood, K., Fullan, M., & Torrance, N. (2003).
Watching and learning 3: Final report of the OISE/UT evaluation of the
implementation of the National Literacy and Numeracy Strategies. Prepared for
the Department for Education and Skills, England. Toronto: OISE/University of
Toronto. www.standards.dfes.gov.uk/literacy/publications.
Evers, C., & Lakomski, G. (2000). A plea for strong practice. Educational Leadership,
61(3), 6-10.
Durrant, J. (2003). Teachers leading learning. Paper presented at the British Educational
Research Association Annual Conference, Edinburgh.
Farrell, C., & Morris, J. (2004). Resigned compliance: Teacher attitudes towards
performance-related pay in schools. Educational Management Administration and
Leadership, 21(1), 81-104.
Finn, J. D. (1989). Withdrawing from school. Review of Educational Research, 59(2),
117-143.
Fredricks, J., Blumenfeld, P., & Paris, A. (2004). School engagement: Potential of the
concept, state of the evidence. Review of Educational Research, 74(1), 59-109.
Gezi, K. (1990). The role of leadership in inner-city schools. Educational Research
Quarterly, 12(4), 4-11.
Gillborn, D. (1997). Ethnicity and educational performance in the United Kingdom:
Racism, ethnicity and variability in achievement. Anthropology and Education
Quarterly, 28(3), 375-393.
Graham, M., & Robinson, G. (2004). The silent catastrophe: Institutional racism in the
British educational system and the underachievement of black boys. Journal of
Black Studies, 34(5), 653-671.
Gronn, P. (2002). Distributed leadership. In K. Leithwood & P. Hallinger (Eds.), Second
international handbook of educational leadership and administration (pp.
653-696). Dordrecht: Kluwer Academic Publishers.
Gronn, P. (2000). Distributed properties: A new architecture for leadership. Educational
Management and Administration 28(3), 317-338.
Guskey, T. R. (2000). Evaluating Professional Development. Thousand Oaks, CA:
Corwin Press.
Hallinger, P. (2003). Reflections on the practice of instructional and transformational
leadership. Cambridge Journal of Education.
Hallinger, P., & Murphy, J. (1985). Assessing the instructional management behavior of
principals. Elementary School Journal, 86, 217-247.
Hallinger, P., & Heck, R. H. (1996a). Reassessing the principal's role in school
effectiveness: A review of empirical research, 1980-1995. Educational
Administration Quarterly, 32(1), 5-44.
Hallinger, P., & Heck, R. H. (1996b). The principal's role in school effectiveness: A
review of methodological issues, 1980-1995. In K. Leithwood, J. Chapman,
D. Corson, P. Hallinger, & A. Weaver-Hart (Eds.), International handbook of
educational leadership and administration (pp. 723-784). New York: Kluwer.
Hallinger, P., & Heck, R. H. (1998). Exploring the principal's contribution to school
effectiveness: 1980-1995. School Effectiveness and School Improvement, 9, 157-191.
Hargreaves, A., Moore, S., Fink, D., Brayman, C., & White, R. (2003). Succeeding
leaders? A study of principal rotation and succession. Toronto, ON: Ontario
Principals' Council.
Harris, A., & Chapman, C. (2002). Democratic leadership for school improvement in
challenging contexts. Paper presented at the International Congress of School
Effectiveness and Improvement, Copenhagen.
Harris, A., Muijs, D., Chapman, C., Stoll, L., & Russ, J. (2003). Raising attainment in
schools in former coalfield areas. London: Department for Education and Skills.
Heck, R., & Marcoulides, G. (1996). School culture and performance: Testing the
invariance of an organizational model. School Effectiveness and School Improvement,
7(1), 76-95.
Hesketh, B. (1997). Dilemmas in training for transfer and retention. Applied Psychology:
An International Review 46(4), 339-361.
James, C., & Colebourne, D. (2004). Managing the performance of staff in LEAs in
Wales. Educational Management Administration and Leadership, 32(1), 45-65.
Jennings, K., & Lomas, L. (2003). Implementing performance management for
headteachers in English secondary schools. Educational Management and
Administration, 31(4), 369-383.
Johansson, O. (2003). School leadership as a democratic arena. In P. Begley & O.
Johansson (Eds.), The ethical dimensions of school leadership. Dordrecht: Kluwer
Academic Publishers.
Johnston, J. (1997). Primary school teachers' perceptions of dynamic process in working
together: A case study. Paper presented at the Educational Research Association
Annual Conference, York, UK.
Jones, S. (2003). School leadership in disadvantaged communities. Paper presented at the
British Educational Research Association Annual Conference, Edinburgh.
Kahai, S. S., Sosik, J. J., & Avolio, B. J. (2003). Effects of leadership style, anonymity,
and rewards on creativity-relevant processes and outcomes in an electronic
meeting system context. The Leadership Quarterly, 14(4-5), 499-524.
Koene, B. A. S., Vogelaar, A. L. W., & Soeters, J. L. (2002). Leadership effects on
organizational climate and financial performance: Local leadership effect in chain
organizations. The Leadership Quarterly, 13(3), 193-215.
Kotter, J. (1990). A force for change: How leadership differs from management. New
York: The Free Press.
Kouzes, J. M., & Posner, B. Z. (1995). The leadership challenge: How to keep getting
extraordinary things done in organizations (Revised ed.). San Francisco: Jossey-Bass.
Leithwood, K., & Levin, B. (2004a). Approaches to the evaluation of leadership effects
and leadership programmes. Paper prepared for the UK Department for Education
and Skills, February. Toronto: OISE/UT.
Leithwood, K., & Levin, B. (2004b). Understanding leadership effects on pupil learning.
Paper prepared for the UK Department for Education and Skills, December. Toronto:
OISE/UT.
Leithwood, K., & Levin, B. (2005). Assessing leadership effects on pupil learning. Part 2:
Methodological issues. Paper prepared for the UK Department for Education and
Skills, March. Toronto: OISE/UT.
Leithwood, K., Aitken, R., & Jantzi, D. (2001). Making schools smarter: A system for
monitoring school and district progress (2nd ed.). Thousand Oaks, CA: Corwin
Press.
Leithwood, K., & Duke, D. (1999). A century's quest to understand school leadership. In
J. Murphy & K. Seashore Louis (Eds.), Handbook of research on educational
administration (pp. 45-72). San Francisco: Jossey-Bass.
Leithwood, K., & Jantzi, D. (1999). The relative effects of principal and teacher sources
of leadership on student engagement with school. Educational Administration
Quarterly, 35(Supplemental), 679-706.
Leithwood, K., & Jantzi, D. (2000). The Transformational School Leadership Survey.
Toronto: OISE/University of Toronto.
Leithwood, K., Jantzi, D., & Steinbach, R. (1999). Changing leadership for changing
times. Buckingham, UK: Open University Press.
Leithwood, K., Riedlinger, B., Bauer, S., & Jantzi, D. (2003). Leadership programme
effects on pupil learning: The case of the Greater New Orleans School Leadership
Center. Journal of School Leadership, 13(6), 707-738.
Leithwood, K., Jantzi, D., Earl, L., Watson, N., Levin, B., & Fullan, M. (2004). Strategic
leadership for large-scale reform: The case of England's National Literacy and
Numeracy Strategies. School Leadership and Management, 24(1), 57-80.
Leithwood, K., Seashore-Louis, K., Anderson, S., & Wahlstrom, K. (2004). How
leadership influences pupil learning: A review of research for the Learning from
Leadership Project. New York: The Wallace Foundation.
Leithwood, K., Jantzi, D., Coffin, G., & Wilson, P. (1996). Preparing school leaders:
What works? Journal of School Leadership, 6(3), 316-342.
Leithwood, K., & Steinbach, R. (1995). Expert problem solving: Evidence from school
and district leaders. Albany, NY: SUNY Press.
Levin, B. (in press). Students at-risk: A review of research. Paper prepared for The
Learning Partnership, Toronto.
Linn, R. (2003). Accountability: Responsibility and reasonable expectations. Educational
Researcher, 32(7), 3-13.
Lord, R. G., & Maher, K. J. (1993). Leadership and information processing. London:
Routledge.
Olson, M., & Davidson, J. (2003). School leadership in networked schools: Deciphering
the impact of large technical systems on education. International Journal of
Leadership in Education, 6(3).
Penny, R. (2003). Transforming schools, transforming learning: Integrating CPD and
knowledge management strategies to build collegiate practice. Paper presented at
the British Educational Research Association Annual Conference, Edinburgh.
Pepin, B. (2000). Cultures of didactics: Teachers' perceptions of their work and their role
as teachers in England, France and Germany. Paper presented at the European
Conference on Educational Research, Edinburgh.
Peterson, K. D. (2001). The professional development of principals: Innovations and
opportunities. Paper commissioned for the first meeting of the National
Commission for the Advancement of Educational Leadership Preparation,
Racine, WI.
Popper, M., & Mayseless, O. (2002). Internal world of transformational leaders. In B.
Avolio & F. Yammarino (Eds.), Transformational and charismatic leadership:
The road ahead. Oxford: Elsevier Science Ltd.
Poulson, L. (2001). Paradigm lost? Subject knowledge, primary teachers and education
policy. British Journal of Educational Studies, 49(1), 40-55.
Radnor, H. A., Ball, S. J., & Vincent, C. (2002). Local educational governance,
accountability, and democracy in the United Kingdom. Educational Policy, 12(12).
Reitzug, U., & Patterson, J. (1998). "I'm not going to lose you!" Empowerment through
caring in an urban principal's practice with pupils. Urban Education, 33(2), 150-181.
Roland, E., & Galloway, D. (2004). Professional cultures in schools with high and low
rates of bullying. School Effectiveness and School Improvement, 15(3-4), 241-260.
Ross, J. (2004). Effects of a running records assessment on early literacy achievement:
Results of a controlled experiment. Journal of Educational Research, 97(4), 186-194.
Rost, J. C. (1991). Leadership for the twenty-first century. New York: Praeger Publishers.
Rutherford, D. (2002). Changing times and changing roles: The perspectives of primary
headteachers on the senior management teams. Educational Management and
Administration, 30(4), 447-459.
Scheurich, J. J. (1998). Highly successful and loving, public elementary schools
populated mainly by low-SES children of color: Core beliefs and cultural
characteristics. Urban Education, 33(4), 451-491.
Selwyn, N., & Fitz, J. (2001). The politics of connectivity: The role of big business in UK
education technology policy. Policy Studies Journal, 29(4).
Shadish, W., Cook, T., & Campbell, D. (2002). Experimental and quasi experimental
designs for generalized causal inference. Boston, MA: Houghton Mifflin.
Shavelson, R., & Towne, L. (Eds.). (2002). Scientific research in education. Washington,
DC: National Academy Press.
Silins, H., & Mulford, B. (2002). Schools as learning organizations: The case for system,
teacher and pupil learning. Journal of Educational Administration, 40, 425-446.
Silins, H., & Mulford, B. (2004). Schools as learning organizations - effects on teacher
leadership and student outcomes. School Effectiveness and School Improvement,
15(3-4), 443-466.
Smith, F., Hardman, F., & Mroz, M. (1999). Evaluating the effectiveness of the National
Literacy Strategy: Identifying indicators of success. Paper presented at the
European Conference on Educational Research, Lahti, Finland.
Spector, P. (1981). Research designs. Beverly Hills, CA: Sage Publications.
Spillane, J., Halverson, R., & Diamond, J. (2000). Toward a theory of leadership
practice: A distributed leadership perspective. Paper presented at the annual
meeting of the American Educational Research Association, New Orleans, LA.
Spillane, J., Halverson, R., & Diamond, J. (2001). Investigating school leadership
practice: A distributed perspective. Educational Researcher, 30(3), 23-28.
Stake, R. (1997). Case study methods. In R. Jaeger (Ed.), Complementary methods for
research in education (pp. 4001-4421). Washington, DC: American Educational
Research Association.
Starratt, R. J. (2003). Democratic leadership theory in late modernity: An oxymoron or
ironic possibility? In P. Begley & O. Johansson (Eds.), The ethical dimensions of
school leadership. Dordrecht: Kluwer Academic Publishers.
Thomas, A. B. (1988). Does leadership make a difference to organizational performance?
Administrative Science Quarterly, 33, 388-400.
Townsend, T. (1994). Goals for effective schools: The view from the field. School
Effectiveness and School Improvement, 5(2), 127-148.
Tyler, T. R., & Degoey, P. (1996). Trust in organizational authorities: The influence of
motive attributes on willingness to accept decisions. In R. M. Kramer & T. R.
Tyler (Eds.), Trust in organizations: Frontiers of theory and research. Thousand
Oaks, CA: Sage Publications.
Vecchio, R. P., & Boatwright, K. J. (2002). Preferences for idealized styles of
supervision. The Leadership Quarterly, 13(4), 327-342.
Viadero, D. (2004). Math program seen to lack a research base. Education Week, 24(1),
1.
Wallace, M. (2001). Sharing leadership of schools through teamwork: A justifiable risk?
Educational Management and Administration, 29(2), 153-167.
Waters, T., Marzano, R. J., & McNulty, B. (2003). Balanced leadership: What 30 years
of research tells us about the effect of leadership on pupil achievement. Working
paper. Aurora, CO: Mid-continent Research for Education and Learning (McREL).
Whittington, J. L., Goodwin, V. L., & Murray, B. (2004). Transformational leadership,
goal difficulty, and job design: Independent and interactive effects on employee
outcomes. The Leadership Quarterly, 15(5), 593-606.
Wilkins, R. (2002). Linking resources to learning: Conceptual and practical problems.
Educational Management and Administration, 30(3), 313-326.
Wilson, V., & McPake, J. (2000). Managing change in small Scottish primary schools.
Educational Management and Administration, 28(2), 119-132.
Witziers, B., Bosker, R., & Kruger, M. (2003). Educational leadership and pupil
achievement: The elusive search for an association. Educational Administration
Quarterly, 39(3), 398-425.