
Value-Added in Higher Education

April 30, 2009

Jesse Cunha
Department of Economics
Stanford University
Darwin Miller
Department of Economics
Stanford University

Abstract: We explore issues surrounding the measurement of the value-added by individual colleges and offer preliminary estimates for all public colleges in Texas. Our
problem differs from that of the primary and secondary educational system, as college students specialize their instruction by choosing both school and major. This implies that the standard approach to measuring school-level value-added, averaging standardized test score changes, is impractical. We thus explore the use of labor market returns as a
measure of value-added. As wages in a competitive labor market reflect productivity,
they allow for meaningful comparisons across various courses of study. Furthermore, if
selection into college is controlled for, mean wage differentials across colleges are a
measure of value-added by the institution. Using administrative data from Texas, we
estimate the labor market return to attending each of the 33 individual public colleges in
the state. We present unconditional estimates and find that earnings differences across colleges are large. We then control for a rich array of observables that might be correlated with college choice, including demographics, SAT scores, parental income and education, local labor market conditions, high school GPA and course-taking patterns, college major, and the vectors of college application decisions and subsequent acceptances. Upon conditioning, labor market returns across colleges tend to converge, yet significant differences remain. We discuss the potential for using this method to measure value-added for practical policy purposes and the feasibility of alternative identification strategies.

This work was conducted while Darwin Miller was supported by the Stanford Institute for Economic and Policy Research (SIEPR) Dissertation
Fellowship, and Jesse Cunha was supported by the Schulz Graduate Student Fellowship in Economic Policy Research through a grant to
SIEPR. We would like to thank the Texas Higher Education Coordinating Board for its support of this project. We thank Rick Hanushek,
Giacomo De Giorgi, Caroline Hoxby, Doug Bernheim, Saar Golde, Isaac McFarlin, Paco Martorell, and Lee Holcombe for their helpful
comments and suggestions.

1. Introduction
Value-added models that measure the relative effectiveness of teachers and
schools are becoming increasingly pervasive in the primary and secondary sector. The
literature suggests that there are large differences in value-added across teachers, but that
teacher training, wages, and other observable characteristics explain very little of this
variation in teacher quality (Rivkin, Hanushek and Kain 2005; Koedel & Betts 2007;
Harris & Sass 2006). Similarly, there are large differences in value-added across schools,
but educational inputs such as class size and school funding explain little of the variation
(Hanushek 2003; Hoxby 2000). These findings have encouraged education policy
makers to consider the use of value-added measures to hold teachers and schools
accountable for the performance of their students. Some states and school districts have begun to publish annual value-added rankings of schools, and some have introduced pay-for-performance contracts that remunerate teachers and/or schools for improvements in value-added. The hope is that holding schools and teachers accountable for the
performance of their students will lead to better outcomes and, indeed, the literature
suggests that such policies work (Carnoy & Loeb 2002; Hanushek & Raymond 2005;
Ladd 1999; Ballou 2001).
Given this positive evidence from the K-12 setting, it is reasonable to conjecture
that such measures could be useful in improving outcomes in the higher education sector
as well. A credible value-added ranking provides information about the quality of education provided by individual colleges that might not otherwise be observable. Such information can be used by administrators to monitor a school's progress relative to peer institutions and by college applicants to help decide which college to attend. Moreover, value-added estimates would allow colleges to consider adopting incentive-based accountability policies.
Unfortunately, there has been little substantive progress on implementing such policies in colleges, and the question has largely been ignored by the academic community. Existing research has been plagued by selection problems and omitted variable bias (Rodgers 2007) or focuses only on specific courses of study (Cengiz & Yuki 1998 and Yunker 2005 on CPAs; Kreutzer & Wood 2007, Fisher 2007, and Tracy & Waldfogel 1997 on MBAs and undergraduate business schools; Oyer & Schaefer 2009 on lawyers). To the best of our knowledge, we are the first to develop a comprehensive methodology appropriate for measuring the value added to all undergraduate students at the institution level.
Several factors render the wholesale importation of primary education value-added methodologies impractical for higher education. Most importantly, colleges intend to add different value to different students; nurses learn vastly different skills than engineers, making it difficult to compare achievement gains across the entire student body. The difference between general skills, such as the ability to communicate and reason, and specific skills, such as nursing or engineering knowledge, is paramount. Thus, achievement is an unobserved, student-specific construct that is difficult to measure with standardized tests.
Nationally-recognized tests of both general skills and some major-specific skills do exist. However, they are of limited value, as tests do not exist for all majors, and subjective assumptions would be required to weight the importance of general and specific knowledge.[1] For example, different majors place varying emphasis on general versus specific skills. It is not unreasonable to think that engineering majors learn more practical knowledge that directly increases productivity than do English majors. How then are policy makers to place objective weights on the general and specific components of standardized tests?
One solution to this aggregation problem is to use causal labor market returns as a measure of value-added. In a competitive labor market, workers earn the marginal product of their labor, which, in turn, is a function of human capital. Perhaps more importantly, market wages weight the return to general and specific skills in a coherent and meaningful way, that is, in accordance with their effect on productivity. A nursing major's wages reflect her productivity in caring for her patients, while a political science major's wages reflect his writing and reasoning skills. However, this solution requires a researcher to isolate the causal labor market return to attending individual colleges. As rational students select a college based on both observed and unobserved factors, identification of causal effects can be difficult to obtain. The academic literature suggests
that positive selection into higher quality and/or more selective colleges is large (Estelle 1986; Dale & Krueger 2002; Black & Smith 2006; Black 2005). Furthermore, accounting for such selection is subject to the credibility of the identifying assumptions. Failure to account for such selection would bias estimates of the labor market return to enrolling at particular colleges in favor of more selective schools.

[1] A further complication arises in that not all colleges offer all majors.
We discuss the relative merits of econometric methods that account for this selection problem and conclude that conditional OLS regressions provide the most convincing preliminary estimates. Obviously, causal inference is achieved in this model only to the extent that the included observable characteristics (demographics, SAT scores, parental income and education, local labor market conditions, high school GPA and course-taking patterns, college major, and the vectors of college application decisions and subsequent acceptances) adequately control for unobserved covariates that jointly influence wages and choice of college. However, our set of observable student characteristics is exceptionally large. Of the other available techniques, Instrumental
Variables are difficult to implement as we would need as many valid instruments as there
are colleges in the choice set. Data envelope methods are problematic because they
impose implausible assumptions about the educational production process. Berg and Krueger's (2002) method of matching student application profiles holds promise for
estimating value-added models for more selective colleges, but is of limited utility for
studying less selective schools whose students do not apply to a sufficient number of
colleges. Finally, we plan to explore the use of a Regression Discontinuity design in the
future if more information can be obtained about the acceptance decisions of individual
colleges.
For the empirical exercise, we use state administrative data which includes
detailed information on all applicants to Texas public colleges in the 1998-99 and 99-00
school years. This database includes individual demographics, parental education,
household income, SAT scores, high school GPA and rank, course-taking patterns in high
school, college application decisions, college acceptance outcomes, college attendance
choice, college major, and quarterly earnings.
Not surprisingly, there are large unconditional differences in earnings across
colleges in Texas. For example, students from Texas A&M, one of the state's two flagship institutions, earn roughly 58 percent more per year on average than the typical

matriculant at the historically black college, Texas Southern University. Observable student covariates account for a substantial portion, but not all, of these differences. To
continue the example, the earnings premium for Texas A&M students over Texas
Southern University students drops to 27 percent upon controlling for all observables.
We obviously cannot test whether there are unobservable determinants of college choice
that are correlated with wages, and thus we cannot adopt a causal interpretation of our
results. Nevertheless, our results provide an interesting first picture of the labor market
returns associated with attending particular colleges in Texas, and future work will hopefully uncover a causal identification strategy that can lead to meaningful value-added estimates.
2. Value-Added in Higher Education
We define value-added as the increase in students' skills and knowledge over their tenure in school. As such, it is student-specific and inherently difficult to measure. For nursing students, the value-added by a college encompasses what is learned in the core liberal arts curriculum as well as practical knowledge about the nursing profession, like how to inject a vaccine and accurately measure a patient's blood pressure. For a math major, value-added is quite different; while it still encompasses the same core liberal arts curriculum, we do not care if a math major knows how to clean a bedpan. However, we do hope that he completes college with a thorough understanding of proof-based logic and a grasp of at least one branch of modern mathematics.
In general, there are two types of knowledge that an educational institution may seek to impart upon students: specific knowledge and general knowledge. Specific
knowledge is what one would learn in a major course of study. In the previous example,
nursing students learn how to inject a vaccine and math majors learn how to prove
theorems. These practical, major-specific skills are what allow students to specialize,
increase their productivity, and ultimately increase their income. However, there is a
strong consensus in the academic community that increased productivity not only stems
from job-specific skills, but also from the more general ability to function in a complex
and rapidly evolving economy (Dwyer, 2006). These general skills include the ability
to think critically, to communicate effectively, to reason, and to interact with others. The

ubiquitous tradition in higher education of combining a core curriculum with a skill-specific major stands as a testament to the fact that both general and specific skills add important value to students.
To be more explicit, we define human capital for student i at time t as follows:
(1) HCit = F(Git, Sit),
where Git denotes student i's achievement level in general skills at time t, Sit is a vector whose element j, Sijt, denotes student i's achievement level in specific skill j at time t, and F: G × S → V is a function mapping knowledge and skills to human capital. We call F the value mapping. For now, we do not place any restrictions on the value mapping; F is an arbitrary function chosen by the researcher so as to measure achievement in some combination of skills. Value-added for student i is then simply the growth in human capital from period t-1 to t:
(2) VAit = HCit − HCit-1.
Theorem 1: Suppose that either F is additively separable in G or Sit-1 = St-1 for all
students i. Further suppose that the course of study is such that students are taught only
general knowledge. Then one can estimate value-added using two administrations of a
single standardized test of general knowledge.
The proof of Theorem 1 is trivial, but relies on the fact that in either case, we can find a function f such that for all students i, VAit = f(Git) − f(Git-1). In this case, the function f is simply a scaling, so we can get an estimate of value-added by simply plugging in estimates of G before and after treatment.
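For concreteness, both cases of the argument can be written out explicitly; this rendering is our own sketch, not the authors' notation:

```latex
% Case 1: F additively separable, F(G,S) = f(G) + g(S), and instruction
% changes only general knowledge, so S_{it} = S_{i,t-1}:
\begin{align*}
VA_{it} &= HC_{it} - HC_{i,t-1} \\
        &= \bigl[f(G_{it}) + g(S_{it})\bigr] - \bigl[f(G_{i,t-1}) + g(S_{i,t-1})\bigr] \\
        &= f(G_{it}) - f(G_{i,t-1}).
\end{align*}
% Case 2: S_{i,t-1} = S_{t-1} for all students i. Define f(.) := F(., S_{t-1});
% since specific skills do not change, the same cancellation gives
% VA_{it} = f(G_{it}) - f(G_{i,t-1}). Either way, two administrations of a
% general-knowledge test identify value-added up to the scaling f.
```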
Theorem 1 gives us some plausible conditions under which the standard value-added methodology may be appropriate for the K-12 setting. Particularly for
primary school students, education in the K-12 setting tends to emphasize the attainment
of general knowledge: mathematics, reading and writing. There is little emphasis on
specific knowledge. Thus, if either the value-mapping is additively separable in G or all
students come to school with the same level of specific knowledge, then measuring
value-added is tantamount to measuring growth in general knowledge.

A problem arises for the standard differencing technique, however, when the
course of study is not standardized across students, as is unfortunately the case for the
higher education sector. When different students add to different components of value, it
is impossible to determine value with a single standardized exam. In theory, we could
measure value component-wise. Indeed, there exist standardized tests of general
knowledge like the Collegiate Learning Assessment (CLA), and there are standardized
tests of specific knowledge for some majors, like the ETS Subject Tests. However, there
are not standardized tests of specific knowledge for all majors, so to directly measure
value-added using standardized exams would require a considerable investment in test
development.
Potentially more damning for the case for direct measurement of value-added
using standardized tests in higher education is the choice of the value mapping, F. When
different students learn vastly different skill sets in college, how do we translate what is
learned to a rigorous concept of value that is applicable to all students? The crux of this
concern is that students take courses in many majors, and the value of what they learn in
those courses likely depends on their previous stock of knowledge. For example, a
calculus course is highly valuable to most science majors, but arguably of little value to
English majors. This example illustrates that the value mapping F is likely not additively
separable because there are obvious complementarities between certain components of S.
Thus, even if the researcher could come up with an appropriate value mapping, value depends on the previous stock of knowledge in all components of S, so each student must be tested in all components of value.[2]

2.2 Labor Market Returns as a Proxy for Value

In light of the problems with using standardized tests to directly estimate value-added in the higher education sector, it makes sense to look at other potential proxies for human capital. Our solution uses labor market earnings to proxy for human capital.
More explicitly, we assume HCit = wit, where wit is the wage rate that student i would earn at time t if he were participating fully in the labor market. Assuming a competitive labor market, workers earn the marginal product of their labor. Thus, intuitively, our method uses the labor market to properly value knowledge and skills in the way that they best translate to worker productivity.
To be more explicit, assume that the marginal product of student i's labor can be written as follows:
(3) MPit = L(Git, Sit) + e,
where Git and Sit represent student i's stock of general and specific skills at time t, and e is an idiosyncratic error term. Then, since we are assuming a competitive labor market, the assumption that HCit = wit is tantamount to the following:
F(Git, Sit) = L(Git, Sit) + e.
That is, using wages to proxy for human capital is equivalent to directly measuring all components of human capital and using the function L, which provides a direct mapping from knowledge and skills to worker productivity, as the value mapping. Thus, we can bypass testing altogether and use the labor market to implicitly value knowledge and skills in a way that is applicable to all students.

[2] Also important is the fact that different majors place different emphases on the relative importance of specific versus general skills. Engineering majors spend the majority of their academic career learning practical knowledge related specifically to their field; English majors learn how to think creatively and write persuasively, skills useful in a wide range of jobs and specializations.
The problem now is that students do not typically participate fully in the labor market until after completing college, so we cannot obtain a valid measure of wit until then. Unfortunately, this means that we cannot use wages to estimate
value-added at the individual level. However, most of the utility of value-added
measures comes from the ability to estimate the impact of individual schools or teachers
on the value-added of their students. Fortunately, we are still able to use wages to
estimate the impact of individual institutions of higher education on the value-added of
their students. Consider the following model:
(4) VAit = XitB + SiA + e,
where Si is a vector of college dummies whose jth component equals one if and only if student i enrolled in college j, Xit is a vector of student-level observables at time t, and e is an idiosyncratic error term. The coefficients of interest are the elements of A. Substituting wages for human capital, we get:
(5) wit − wit-1 = XitB + SiA + e.

Now, the problem is that we do not observe wit-1. However, if condition (6) holds, so that wit-1 is independent of Si conditional on the covariates Xit, then wit-1 can be subsumed into the constant term:
(6) wit-1 ⊥ Si | Xit.
In this case, we need only regress w on S and X to get an unbiased value-added measure A.
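As a concrete illustration of this conditioning logic, the sketch below (all data simulated; variable names and magnitudes are our own invention, not estimates from the Texas data) regresses log wages on college dummies with and without a control for observed ability:

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_colleges = 5000, 4

# Simulated student ability (observable here, e.g. an SAT score)
ability = rng.normal(size=n)
# Positive selection: higher-ability students sort into higher-indexed colleges
college = np.clip((ability + rng.normal(size=n)).round().astype(int) + 2,
                  0, n_colleges - 1)

true_va = np.array([0.00, 0.05, 0.10, 0.15])  # true college value-added (log points)
log_wage = true_va[college] + 0.2 * ability + rng.normal(scale=0.1, size=n)

S = np.eye(n_colleges)[college]               # college dummy matrix

def ols(y, X):
    """Least-squares coefficients via numpy's lstsq."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

# Unconditional: wage gaps absorb selection on ability
a_raw = ols(log_wage, S)
# Conditional on the observable: gaps move toward true value-added
a_ctrl = ols(log_wage, np.column_stack([S, ability]))[:n_colleges]

gap_raw = a_raw[-1] - a_raw[0]
gap_ctrl = a_ctrl[-1] - a_ctrl[0]
print(gap_raw, gap_ctrl)  # conditional gap is closer to the true 0.15
```

With selection operating only through the observed control, the conditional estimates recover the simulated value-added gap; the concern raised below is precisely that real selection also operates through unobservables, which no such regression can absorb.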
Unfortunately, condition (6) is an unreasonable assumption because students likely
select into colleges based on private information. Indeed, the academic literature on the
return to college quality provides substantial evidence that there is positive selection into
higher quality or more selective colleges (Estelle 1986; Dale & Krueger 2002; Black &
Smith 2006; Black 2005). Failure to account for such selection would bias our value-added
estimates in favor of more selective colleges, whose students come to college better prepared.
In terms of our model, we are concerned that condition (6) is violated because wt-1, the wage
rate that students would have earned if they had fully participated in the labor market directly
after high school, is higher for students attending more selective schools. These students
enter college with more knowledge and skills, and hence would have earned more on the
labor market than their counterparts at less selective schools. In the next section, we discuss
several methods one might use to account for this selection problem.
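The direction of this bias follows from the textbook omitted variable argument; the two-college illustration below is our own sketch, not a derivation from the paper:

```latex
% Let D_i = 1 if student i attends the more selective of two colleges,
% and suppose the true model is
%   w_i = \alpha D_i + w_{i,t-1} + e_i,  with e_i independent of D_i.
% Regressing w on D alone while omitting w_{i,t-1} gives
\begin{equation*}
\operatorname{plim}\hat{\alpha}
  = \alpha
  + \underbrace{\mathbb{E}[\,w_{i,t-1} \mid D_i = 1\,]
              - \mathbb{E}[\,w_{i,t-1} \mid D_i = 0\,]}_{>\,0
                \text{ under positive selection}},
\end{equation*}
% so the selective college's estimated value-added is inflated by the gap
% in its students' pre-college earning capacity.
```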

3. Methods to Account for Selection into Colleges: The Return to College Quality
While, to the best of our knowledge, this is the first academic study to attempt to quantify the economic return to attending particular undergraduate institutions, a handful of studies look at the economic return to enrolling in particular educational programs such as MBA programs (Tracy & Waldfogel 1997; Cengiz & Yuki 1998; Fisher 2007) and undergraduate business programs (Kreutzer & Wood 2007). Tracy and Waldfogel (1997) and Kreutzer and Wood (2007) both use OLS to address selection into graduate business programs, finding significant differences in labor market returns across MBA programs that persist even after conditioning on fairly extensive sets of control variables. Nevertheless, the concern persists that these studies are unable to capture all

factors that jointly determine labor market outcomes and college choice, thereby limiting
their argument for causality.
Several studies employ data envelope methods to address selection into
educational programs. These methods are typically used in Operations Research to
measure production possibility frontiers, and assume that individual producers maximize
output given a set of inputs and a production process. Cengiz and Yuki (1998) adapt this
technique to address selection into MBA programs, finding significant differences in
value-added across institutions. Fisher (2007) obtains similar results for undergraduate
business programs. Unfortunately, the assumptions underlying the data envelope
methodology are not well suited to the educational production process since it is unclear
what objective, if any, colleges intend to maximize.
A highly related strand of literature looks at the economic return to attending a
higher quality college. These studies aggregate institutions into groups based on
selectivity, ranking or other means, thereby reducing the dimensionality of the problem
and permitting a wider range of estimation strategies.
In general, the college quality literature attempts to estimate models of the following form:
Yi = a + XiB + c·qualityi + e,
where Yi is the log of student i's earnings, Xi is a vector of student-level observables, and qualityi is a measure of the quality of student i's college. Quality is typically measured by selectivity (e.g., median SAT scores) or prestige.
The main mechanical difference between our problem and the one above is that we replace the quality measure with a vector of school dummies. Conceptually, however, the problems are quite different: in essence, we are attempting to estimate college quality, while the college quality literature takes school quality as given and attempts to estimate the labor market return to attending a higher quality school.
The main concern of the college quality literature is the consistent estimation of c,
the economic return to college quality. The earliest work on this topic used OLS to control explicitly for factors in X that might jointly determine college quality and earnings. Using this technique, Estelle et al. find significant returns to attending elite private colleges on the East Coast. More recent work obtains similar results with


propensity score matching methods, which dispense with the functional form assumptions inherent in OLS (Black and Smith 2004; Oyer and Schaefer 2009). Unfortunately, in order for the authors of these studies to adopt a causal interpretation of their estimates, both methods require the researcher to control for all factors jointly determining college choice and labor market outcomes, an unreasonable assumption.
Brewer et al. (1996) use Lee's (1983) model for polychotomous choice with selectivity to estimate the labor market return to attending more selective colleges. They instrument for college choice using relative prices, and gain identification by assuming a normally distributed error term. Behrman et al. use a similar approach but utilize parental education as an added excluded instrument, thereby dispensing with the distributional assumption of Brewer et al. Both studies find significant returns to college selectivity that persist once accounting for selection. However, these methods are difficult to employ in our context because of the dimensionality problem: since we disaggregate college quality into a series of college dummies, we require as many exclusion restrictions as there are colleges in the choice set, in our case thirty-three.
Two studies use more convincing experimental designs to look at the return to
college quality in more narrow contexts. Behrman et al. (1996) use twin data to parse out the effects of genetic influences and college quality on earnings, finding significant differences in the outcomes of twins who attend colleges of differing quality. Hoekstra (2009) uses a regression discontinuity approach to examine the labor market outcomes of students near the cusp of admission to a state flagship university. Those just under the cutoff end up at a college with a median SAT score that is 80-90 points lower than the flagship's. Hoekstra also finds significant positive impacts associated with attending a more selective college. While the results of these studies are informative, our question is more ambitious, requiring an estimation strategy that permits value-added estimates applicable to all students and all colleges.
One approach with particular promise for our setting is the matched applicant
method developed by Berg and Krueger (2002). The model assumes rational college
applicants who make college application decisions based on all of the private information
they know about themselves. If applicants make decisions in such a way, then all of the


private information, including information unobservable to the researcher, can be captured through the applicant's college application profile. Thus, by comparing only students with identical college application profiles, Berg and Krueger implicitly control for all unobservable information that might be correlated with college choice. Berg and Krueger use their model to study the labor market return to college quality, finding only marginal effects of college quality on earnings.
The Berg and Krueger method has been criticized based on evidence that students do
not make college choices in a rational way. For example, Avery and Hoxby (2004) find that
students facing a menu of college application and aid offers respond in a way that is
inconsistent with rational choice, responding to seemingly trivial aspects of financial aid
packages, like whether or not a scholarship has a formal name. Others have criticized the Berg and Krueger model on the grounds that it breaks down once students receive their admission decisions. For example, given rational applicants, one might argue that once a student knows where he has been accepted, he should choose the school with the highest return. However, for the Berg and Krueger model to be identified, the authors must assume that, conditional on being in a matched applicant group, students enrolling in different colleges would have had the same labor market outcomes had they attended the same college.
Despite these criticisms, the Berg and Krueger model holds considerable promise as a
method to address selection into particular colleges, at least in some contexts. Even if all
students do not apply to college in a rational way, there certainly is valuable informational
content in students application behavior, and this information should help to mitigate some
concerns over selection. Moreover, to address concerns stemming from the irrational choice-stage behavior that the Berg and Krueger model requires, one could combine it with an IV approach. For example, within a matched applicant group, one could use distance to college to instrument for college choice.
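A minimal sketch of the matched-applicant logic (all data simulated; the two-college setup and names are our own) groups students by their exact application portfolio and compares earnings across enrollment choices only within a group:

```python
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(1)
n = 4000

# Unobserved ability drives both the application portfolio and wages
ability = rng.normal(size=n)
# Stylized portfolios: stronger students apply to college B as well as A
applied_B = ability + rng.normal(scale=0.5, size=n) > 0
portfolio = np.where(applied_B, "AB", "A")

# Enrollment: among B-applicants, half attend B (as-good-as-random within group)
attend_B = applied_B & (rng.random(n) < 0.5)

true_va_B = 0.10  # true value-added premium of college B (log points)
log_wage = true_va_B * attend_B + 0.3 * ability + rng.normal(scale=0.1, size=n)

# Naive comparison pools all portfolios and absorbs selection on ability
naive_gap = log_wage[attend_B].mean() - log_wage[~attend_B].mean()

# Matched-applicant comparison: contrast B vs. A only within the "AB" group
groups = defaultdict(lambda: ([], []))
for p, b, w in zip(portfolio, attend_B, log_wage):
    groups[p][int(b)].append(w)
a_wages, b_wages = groups["AB"]
matched_gap = np.mean(b_wages) - np.mean(a_wages)
print(naive_gap, matched_gap)  # matched gap is close to the true 0.10
```

The within-portfolio contrast differences out anything common to students with the same application behavior; the criticism discussed above is that enrollment within a portfolio group may itself be selective, which the IV extension is meant to address.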
One concern with using the Berg and Krueger model in the present context stems from the functional form assumptions imposed by OLS. We are particularly concerned about common support: there may be no students at risk of applying to both high- and low-quality schools. However, we could still implement the model in narrower contexts, comparing schools that compete for similar applicants. More damning for the Berg and Krueger model is the empirical fact that most students in Texas (over 70 percent) apply to just one college. This leads to a multicollinearity problem, which inflates our standard errors.
Nevertheless, we have data on college application decisions for all students in Texas, and it may be possible to use the Berg and Krueger model in some narrow contexts. For example, students who enroll in the Texas flagships apply to more colleges than those who apply to lower-tier schools. Thus, we might be able to look at the economic return to enrolling in the state's flagships, UT-Austin and Texas A&M. Also, our current data only allows us to observe applications to state schools within Texas, but we could potentially obtain data from the National Student Clearinghouse (NSC) allowing us to observe applications to all colleges participating in the NSC.[3] This could potentially increase the number of applications we observe per student, thereby making the model tractable. In future work, we will push on these fronts.

4. Data
Several secure, individual-level databases housed at the THECB offices in Austin, Texas are used in this analysis. Combined, they track the universe of Texas public high school graduates through Texas colleges and into the workforce, and can be linked to each other and across time using Social Security Numbers. For this empirical exercise, we construct a longitudinal panel of the 1998 and 1999 cohorts of Texas public high school graduates.[4]
While college-level data is reported directly to the Coordinating Board by Texas colleges and universities, high-school-level data, wage data, and testing data are reported first to other agencies and then shared with the THECB. The Student Report, the Graduation Report,
and the Application Report are reported by all Texas colleges directly to THECB for the
years 1998-2007. Along with several demographic variables, the Student Report includes
semester credit hours attempted and declared major for every institution a student attended.
The Application Report identifies which public institutions a student applied to and was accepted at during the report year, while the Graduation Report indicates any degrees earned during the report year.

[3] Another possibility is to use the College Board data on where students send their SAT scores.
[4] We use the 1998 and 1999 cohorts so that there is ample time (at least 8 years) for students to both complete college and enter into the workforce, at which time we believe their wages will be indicative of value-added in college. Data for earlier cohorts is not available.
The Texas Education Agency (TEA) requires Texas public high schools to report data on all their students. The TEA has shared some of this data with the THECB for the years 1991-2007, including a yearly report containing, for each graduate, demographic variables, an indicator for the high school of graduation, and a list of all academic courses taken during grades 9-12. The Texas Workforce Commission (TWC) collects earnings data on all employed persons in the state and has shared it with the THECB from the first quarter of 1998 through the second quarter of 2007. These reports include total quarterly nominal earnings for each job a person held.
SAT test scores are purchased annually by the THECB from the corresponding testing
agencies. The Board currently has scores from SAT tests taken between 1998 and 2004. In
addition to test scores, the SAT database includes demographic information self-reported by
the test taker on the day of the exam: household income, the educational attainment of each
of the test taker's parents, planned college major, and planned level of educational attainment.
Lastly, we use the 1997 County and City Data Book from the US Census to assign
several indicators of local labor market conditions to each student.[5] This data is not
housed at the THECB, but can be found at the Geospatial and Statistical Data Center at
the University of Virginia Library (http://fisher.lib.virginia.edu/).
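The record linkage described in this section can be sketched as a sequence of merges on a common student identifier (the actual THECB files are linked by Social Security Number; all names and values below are hypothetical):

```python
import pandas as pd

# Hypothetical extracts of the linked files: high school cohort, college
# enrollment, SAT scores, and wage records, keyed by a generic student ID.
hs      = pd.DataFrame({"sid": [1, 2, 3], "grad_year": [1998, 1998, 1999]})
college = pd.DataFrame({"sid": [1, 2], "college": ["Texas A&M", "UT Austin"]})
sat     = pd.DataFrame({"sid": [1, 2, 3], "sat": [1210, 1100, 980]})
wages   = pd.DataFrame({"sid": [1, 2], "earnings": [41000.0, 38500.0]})

# Left-merge each source onto the high school cohort file, then keep only
# students observed both enrolling in college and earning wages.
panel = (hs.merge(college, on="sid", how="left")
           .merge(sat, on="sid", how="left")
           .merge(wages, on="sid", how="left"))
analysis = panel.dropna(subset=["college", "earnings"])
print(len(analysis))  # 2
```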

4.2 Construction of Sample
We begin by identifying all 1998 and 1999 graduates of Texas public high schools who
enroll in the Texas Public University System in the year following graduation, approximately
82,000 students.[6] From this group, the sample is further restricted in several
ways in order to identify value-added differences across institutions. First, college enrollees
without a valid SAT score are excluded. Since most Texas public colleges require
SAT scores as a condition of enrollment, this restriction excludes only 5,773 students from the
original sample. Second, 1,501 graduates of high schools located in 13 largely rural counties are
excluded, as there is no data on local labor markets for these areas. Finally, the sample is
restricted to include only college enrollees who 1) have completed their schooling, 2) are in
the Texas labor force, and 3) earned at least $2,000 in the fiscal year beginning 8 years after
graduating high school (19,648 observations). With these restrictions, the sample used in our
analysis has 45,657 observations. Summary statistics are presented in Table 1.

[5] Students are assigned the labor market variables of the county in which he or she went to high school.
[6] As any student may be enrolled in more than one college in a given year, we assign each student to the
college or university at which he or she attempted the most credit hours during that year.
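The restrictions above amount to a sequence of row filters on the enrollee file. A minimal sketch with hypothetical column names and toy values:

```python
import pandas as pd

# Toy enrollee file illustrating the three sample restrictions
# (all column names and values are hypothetical).
df = pd.DataFrame({
    "sat":            [1050, None, 990, 1200],
    "county_in_data": [True, True, False, True],  # local labor market data exists
    "earnings_y8":    [35000.0, 42000.0, 30000.0, 1500.0],
})

sample = df[df["sat"].notna()]                   # 1) valid SAT score
sample = sample[sample["county_in_data"]]        # 2) county labor market data available
sample = sample[sample["earnings_y8"] >= 2000]   # 3) earned at least $2,000
print(len(sample))  # 1
```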
5. Econometric Specification and Results
While our model is derived in terms of the hourly wage rate, we observe only
annualized labor market earnings. We thus estimate models of the following form by OLS:

(7)   Y_it+8 = X_it B + S_it A + e_i,

where Y_it+8 is the log of student i's annual labor market earnings eight years after enrolling in
college, X_it is a vector of student-level observables, S_it is a vector of indicators for the
college at which student i enrolled, and e_i is an idiosyncratic error term. In
various specifications, X contains demographic variables, parental income and education,
SAT scores, high school GPA and class rank, planned educational attainment, high school
course-taking patterns, and college application decisions.
It is important to note that we are acutely aware of the selection issues with respect
to the matrix S. While our extensive list of controls likely goes a long way toward controlling
for selection into particular colleges, some selection likely remains. Nevertheless,
the results of these models provide valuable information about the labor market outcomes
associated with attending particular colleges in Texas, and about the selection patterns into those
institutions.
The coefficients of interest are those in the matrix A. In all models, the reference
school is Texas A&M University, the institution with the highest earnings of matriculants in
all specifications. The coefficient on school Z can thus be interpreted as the approximate
percent difference in earnings between observationally equivalent students attending school Z
and Texas A&M.
Table 2 presents the main results of our analysis. We run six specifications, adding
progressively more covariates in each model. The specification in column 1 is the naïve
model, which controls only for a quadratic in labor market experience. The estimates here
give the raw differences in earnings across colleges in Texas, and thus provide a basis for
comparison with models that control for other determinants of labor market earnings.
The results show that there are indeed large differences in earnings among students who


matriculate at different Texas public colleges. For example, conditioning on labor market
experience, the typical student who enrolls at Texas Southern University, one of the state's
historically black colleges, earns roughly 58% less eight years after enrolling in college than
the typical student who enrolls at Texas A&M.
Also, this method ranks schools similarly to the popular rankings published in
U.S. News and World Report and Barron's Magazine. This similarity should not be
surprising, as these popular rankings do not control for the selection problem either. The
highest-ranked schools in the unconditional ranking (UT Austin, Texas A&M, and UT
Dallas) are the three most selective public universities in the state. In contrast, the
bottom-ranked school, Texas Southern University, accepted every applicant in the sample.[7]
Aside from important selection issues, there are two concerns with the specification
in Column 1. First, it is unable to separate the effect of the
college from that of the local labor market in which the college resides. Thus, the estimates
in Column 1 are biased in favor of colleges located in areas with healthier local economies,
because part of the effect of the local economy is attributed to the impact of the college. To
correct for this source of bias, specification 2 adds controls for local labor market conditions
in the county in which the student attended high school. These variables include county-level
unemployment and poverty rates, crime rates, and population and business growth rates.
Another concern with specification 1 is that the model does nothing to account for the
specialization and course offerings of the college. Thus, this specification favors colleges
with a relatively large proportion of students majoring in high-paying fields like business and
engineering. To correct for this, specification 2 adds controls for the student's initial
major.[8]
As seen in Column 2, the above modifications tend to reduce the differences between
Texas A&M and other Texas colleges. For example, after accounting for local labor market
conditions and student major, the gap in earnings between Texas A&M and UT El Paso,
which is located in one of the poorest MSAs in the nation and has relatively fewer
engineering and business majors than Texas A&M, falls from 41% to 33%.

[7] This does not mean that Texas Southern University has an open admissions policy. In fact, since we
restrict the sample to college enrollees, it is entirely possible that some students applied to Texas Southern
University, were rejected, and subsequently chose not to enroll in a 4-year college.
[8] Specifically, we include 52 dummies for the first two digits of the federal XXX code for the student's
major.
Column 3 adds demographic controls, including race, gender, household income, and
parental education. As expected, adding these covariates decreases the gaps between Texas
A&M and most other schools, particularly those with large shares of minority or female
students. For example, adding demographic controls decreases the estimated earnings gap
between Texas A&M and Texas Southern, an historically black college, by more than 20
percentage points.
Column 4 attempts to correct for ability bias by adding a quadratic in the student's
SAT score. Since students at Texas A&M have higher SAT scores, on average, than those at
all but two Texas public colleges (UT Austin and UT Dallas), this correction once again
decreases the gaps between Texas A&M and other colleges. However, the estimates in
Column 4 differ only modestly from those in Column 3, suggesting that demographic
factors may account for a larger share of the differences in labor market outcomes across
colleges.
Column 5 adds further controls for the student's academic preparation for college
during high school. Specifically, we control for high school GPA and class rank, the student's
desired educational attainment, and high school course-taking patterns. As the addition of
these factors significantly alters our point estimates, they appear to have a larger influence on
selection into college and the resulting economic returns than do student SAT scores. As
expected, the estimates in Column 5 further reduce the gaps in labor market outcomes
between Texas A&M and other colleges.
Column 6 provides a first attempt at replicating the Dale and Krueger matched
applicant methodology in the present context. Specifically, we match students on their
application profiles and include over 1,400 matched applicant dummy variables. The key
issue is that most students in Texas applied to just one college, so if we were to include a
matched applicant dummy for these students, it would be highly collinear with the
dummy variable indicating enrollment at the school to which the student applied. We thus
only include matched applicant dummies for the thirty percent of students who apply to more
than one public college in Texas. Unfortunately, the addition of these variables does not
explain much of the variance in labor market outcomes of college students in Texas, and the
resulting point estimates remain relatively unchanged.
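One way to construct such matched applicant dummies is to key each student on the (order-insensitive) set of colleges applied to, and assign a group code only to multi-college portfolios. A minimal pandas sketch with hypothetical records:

```python
import pandas as pd

# Hypothetical application records: one row per student, with the set of
# Texas public colleges the student applied to.
apps = pd.DataFrame({
    "sid":       [1, 2, 3, 4, 5],
    "portfolio": [("A&M",), ("A&M", "UT"), ("UT", "A&M"), ("UT",), ("A&M", "UT")],
})

# Normalize each portfolio so the order of application does not matter, then
# assign a matched-applicant group code only to multi-college portfolios
# (-1 marks single-application students, who receive no group dummy).
key = apps["portfolio"].apply(lambda p: tuple(sorted(p)))
multi = key.apply(len) > 1
apps["match_group"] = key.where(multi).astype("category").cat.codes
print(apps["match_group"].tolist())
```

In the regression, the resulting group codes would enter as fixed effects for multi-application students only, mirroring the restriction described above.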
Overall, our results indicate that much of the difference in labor market outcomes
associated with attending particular colleges in Texas is due to the types of students
attending those colleges, and not to relative educational effectiveness or institutional
resources. Nevertheless, there remain statistically significant and economically meaningful
differences in earnings across colleges after accounting for our extensive set of control
variables. For example, the gap between Texas A&M and Texas Southern, the school
whose students earn the least in the labor market, drops from 58% to 26%.
Finally, to get a better sense of the differences in earnings of students across colleges
in Texas, Table 3 presents p-values for pairwise tests of differences in the estimated
coefficients on the matrix of college enrollment dummy variables in Specification 6. Both
the columns and the rows in Table 3 are ordered by the college's coefficient estimate, so that
schools with the highest conditional earnings are listed first. The (i, j) element in Table 3
is the p-value for a test of the difference between the coefficients from Column 6 of Table 2
for schools i and j. The results in Table 3 indicate that the point estimates in Column 6 of
Table 2 are reasonably precise, such that there are significant differences between many
colleges. For example, the point estimate for Texas A&M is statistically larger than that of
all but three relatively small colleges, and we can say with reasonable confidence that
students at Texas A&M earn more on average than observationally equivalent students
attending UT Austin, the state's other flagship institution. Interestingly, we cannot say with
any degree of confidence that students attending UT Permian Basin, a small regional college
located in West Texas, earn less on average than observationally equivalent students at Texas
A&M. However, this is due to large standard errors stemming from the relatively small
number of students attending UT Permian Basin.
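Each entry in such a table can be computed as a Wald test on the difference of two dummy coefficients. The sketch below uses made-up coefficient and covariance values and a normal approximation to the test statistic (the paper's own tests may instead use t statistics):

```python
import numpy as np
from math import erf, sqrt

def pairwise_pvalue(beta, cov, i, j):
    """Two-sided p-value for H0: beta[i] == beta[j], using a normal
    approximation to the Wald statistic."""
    diff = beta[i] - beta[j]
    se = sqrt(cov[i, i] + cov[j, j] - 2 * cov[i, j])
    z = diff / se
    return 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))

# Illustrative numbers: two school coefficients and their covariance block.
beta = np.array([-0.06, -0.26])
cov = np.array([[0.01**2, 0.00002],
                [0.00002, 0.04**2]])
p = pairwise_pvalue(beta, cov, 0, 1)
print(p < 0.05)  # True: a 0.20 gap is large relative to its standard error
```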

6. Conclusion
We have summarized the issues surrounding the measurement of the value-added by
individual higher education institutions, and offered cautious preliminary estimates of one
measure. The fact that college students specialize their instruction by choosing a college
major renders the use of test scores to measure educational attainment inappropriate for


higher education. Instead, we propose using the labor market returns to individual colleges
as a measure of value-added. Competitive markets weight the human capital gained in
college by its marginal productivity, and hence overcome the problem of comparing across
majors. However, for this method to produce unbiased value-added measures, we must fully
account for selection into college.
We discuss the literature addressing selection into college, noting strategies that hold
promise for the context of selection into particular higher education institutions. We find IV
and selection methods impractical in our case, as such methods would require
as many instruments as there are colleges in the market. Despite its criticisms, we feel that
Dale and Krueger's matched applicant method holds promise for addressing selection into
particular colleges, at least in some narrow contexts.
We provide some evidence on the labor market return to attending particular colleges
in Texas. The use of large administrative databases permits us to control for an extensive
array of covariates that might be correlated with college choice. Sequentially adding various
sets of covariates allows us to examine more closely the selection patterns into more selective
colleges. Overall, we find substantial variance in labor market outcomes across colleges in
Texas that persists even when conditioning upon demographics, parental education and
income, SAT scores, high school GPA and class rank, high school course-taking patterns,
and college application behavior.
It is important to keep in mind that our results can only be considered causal if we are
able to control for all factors jointly influencing college choice and labor market outcomes.
We are particularly concerned about bias due to the influence of unobservable factors such as
motivation, time preference, and ability. Despite these concerns, this exercise provides
valuable insight into selection patterns in the public college system, as well as the labor
market outcomes associated with attending those institutions. This is an important first step
towards producing an unbiased value-added methodology for higher education.
In future work, we will explore the identification problem in more depth. We intend
to segment the market for higher education in Texas and look at value-added differences
within groups of similar schools. Under such an approach, the Dale and Krueger assumptions
concerning college application decisions may be more plausible. Finally, we are in the
process of acquiring other data sources that would allow us to observe students' college
application decisions for public schools outside of Texas and for all private schools in the
country. The NSC has data on applications to all colleges participating in the Clearinghouse,
and the College Board has data on all colleges to which students send their SAT scores.


REFERENCES
Avery, Chris, and Caroline Hoxby (2004). Do and Should Financial Aid Packages
Affect Students College Choices? In Caroline Hoxby (ed.) College Choices: The
Economics of Where to Go, When to Go, and How to Pay for It. NBER.
Ballou, Dale (2001). Pay for Performance in Public and Private Schools. Economics
of Education Review. 20: 51-61.
Behrman, Jere et al (1996). The Impact of College Quality on Wages: Are There
Differences Among Demographic Groups? Williams Project on the Economics of
Higher Education. Working Paper No. 38.
Behrman, Jere, Mark Rosenzweig & Paul Taubman (1996). College Choice and Wages:
Estimates Using Data on Female Twins. Review of Economics and Statistics. 78:
672-685.
Black, Dan & Jeffrey Smith (2006). Estimating the Returns to College Quality with
Multiple Proxies for Quality. Journal of Labor Economics. 24(3): 701-728.
__________ (2004). How Robust is the Evidence on the Effects of College Quality:
Evidence from Matching. Journal of Econometrics. 121(1-2): 99-124.
Black, Dan et al. (2005). College Quality and Wages in the United States. German
Economic Review. 6(3): 415-443.
Brewer, Dominic, et al. (1999). Does it Pay to Attend an Elite Private College? Cross-
Cohort Evidence on the Effect of College Type on Earnings. Journal of Human
Resources. 34(1): 104-123.
Carnoy, Martin & Susanna Loeb (2002). Does External Accountability Affect Student
Outcomes? A Cross State Analysis. Educational Evaluation and Policy Analysis,
24(4): 305-331
Haksever, Cengiz & Yuki Muragishi (1998). Measuring Value in MBA Programs.
Education Economics. 6(1): 11-25.
Dale, Stacy Berg & Alan Krueger (2002). Estimating the Payoff to Attending a More
Selective College: An Application of Selection on Observables and Unobservables.
Quarterly Journal of Economics. 117(4): 1491-1527.
Dwyer, Carol, et al. (2006). Culture of Evidence: Postsecondary Assessment and
Learning Outcomes, Recommendations to Policymakers and the Higher Education
Community. ETS Policy Paper.
Eide, Eric, et al. (1998). Does it Pay to Attend an Elite Private College? Evidence on
the Effects of College Quality on Graduate School Attendance. Economics of
Education Review. 17(4): 371-376.
James, Estelle, et al. (1989). College Quality and Future Earnings: Where Should You
Send Your Child to College? American Economic Review. 79(2): 247-252.
Fisher, Dorothy, et al. (2007). A Value-Added Approach to Selecting the Best Master
of Business Administration (MBA) Program. Journal of Education for Business.
83(2): 72-76.
Hanushek, Eric (2003). The Failure of Input-Based Schooling Policies. The Economic
Journal. 113(Feb): 64-98.
Hanushek, Eric & Margaret Raymond (2005). Does School Accountability Lead to
Improved Student Performance? Journal of Policy Analysis and Management.
24(2): 297-327.


Harris, Douglas & Tim Sass (2006). The Effects of Teacher Training on Teacher Value-
Added. Working Paper.
Hoxby, Caroline (2000). The Effects of Class Size on Student Achievement: New
Evidence from Population Variation. Quarterly Journal of Economics. 115(4):
1239-1285.
Koedel, Cory & Julian Betts (2007). Re-Examining the Role of Teacher Quality in the
Educational Production Function. Working Paper.
Kreutzer, David & William Wood (2007). Value-Added Adjustment in Undergraduate
Business School Ranking. Journal of Education for Business. 82(6): 357-362.
Ladd, Helen (1999). The Dallas School Accountability and Incentive Program: An
Evaluation of its Impacts on Student Outcomes. Economics of Education Review.
18: 1-16.
Light, Audrey & Wayne Strayer (2000). Determinants of College Completion: School
Quality or Student Ability? Journal of Human Resources. 35(2): 299-332.
Oyer, Paul & Scott Schaefer (2009). The Returns to Attending a Prestigious Law
School. Working Paper.
Rivkin, Steven, Eric Hanushek, & John Kain (2005). Teachers, Schools, and Academic
Achievement. Econometrica. 73(2): 417-458.
Rodgers, Timothy (2007). Measuring Value-Added in Higher Education: A Proposed
Methodology for Developing a Performance Indicator Based on the Economic Value
Added to Graduates. Education Economics. 15(1): 55-74.
Tracy, Joseph & Joel Waldfogel (1997). The Best Business Schools: A Market-Based
Approach. Journal of Business. 70(1): 1-31.
Wales, Terrence (1973). The Effect of College Quality on Earnings: Results from the
NBER-Thorndike Data. Journal of Human Resources. 8(3): 306-317.
Yunker, James (2005). The Dubious Utility of the Value-Added Concept in Higher
Education: The Case of Accounting. Economics of Education Review. 24: 355-367.


Table 1 - Summary Statistics (Means) for Variables Used in Log Earnings Models

Outcome Variables
  Annual Earnings, $                          $38,327
  % Graduated within 8 years                  69.9%
  % Persisted into 2nd year of college        82.%

Demographic Variables
  SAT (or converted ACT) score                1034.0
  % Black                                     11.2%
  % Hispanic                                  17.6%
  % White                                     64.8%
  % Eligible for Free Lunch                   12.1%
  % At Risk of Not Graduating                 9.8%
  # of AP courses taken                       1.07
  GPA > A minus                               55.4%
  Top 10% of HS Class                         24.8%

H.S. Courses as Senior, % Enrolled
  English as a 2nd Language (ESL)             .1%
  Gifted and Talented Program                 27.%
  Calculus                                    27.%
  Pre-Calculus                                25.%
  Algebra 2                                   6.%
  Biology                                     7.%
  Chemistry                                   9.8%
  Physics                                     28.3%

Observations                                  43,735

Notes: Earnings are the sum of the 4 quarterly earnings 8 years after graduating high school.


Table 2 - Log earnings models, including dummy variables for 4-year Texas Public Colleges

Controls included in each specification:
  Column (1): experience controls only.
  Column (2): adds college major and local labor market controls.
  Column (3): adds race, gender, parental education and income.
  Column (4): adds SAT score (level, quadratic).
  Column (5): adds H.S. GPA, class rank, and courses taken.
  Column (6): adds matched application and acceptance groups.

Coefficients on college dummies (standard errors in parentheses):

                                   (1)          (2)          (3)          (4)          (5)          (6)
Texas A&M University               ---          ---          ---          ---          ---          ---
Texas Women's University      -0.23 (.04)  -0.18 (.04)  -0.09 (.04)  -0.07 (.04)  -0.01 (.04)   0.00 (.04)
Stephen F. Austin State Univ. -0.22 (.02)  -0.18 (.02)  -0.13 (.02)  -0.12 (.02)  -0.05 (.02)  -0.04 (.02)
TAMU - International          -0.30 (.05)  -0.15 (.05)  -0.09 (.05)  -0.07 (.05)  -0.07 (.05)  -0.05 (.05)
Texas State University        -0.23 (.02)  -0.14 (.02)  -0.11 (.02)  -0.10 (.02)  -0.06 (.02)  -0.06 (.02)
UT - Austin                   -0.08 (.01)  -0.05 (.01)  -0.06 (.01)  -0.06 (.01)  -0.06 (.01)  -0.06 (.01)
Texas Tech University         -0.15 (.02)  -0.12 (.02)  -0.11 (.02)  -0.11 (.02)  -0.06 (.02)  -0.07 (.02)
TAMU - Kingsville             -0.31 (.03)  -0.22 (.03)  -0.15 (.03)  -0.13 (.03)  -0.07 (.03)  -0.07 (.04)
Sam Houston State Univ.       -0.29 (.02)  -0.23 (.02)  -0.17 (.02)  -0.15 (.02)  -0.09 (.02)  -0.08 (.02)
TAMU - Commerce               -0.30 (.03)  -0.22 (.03)  -0.16 (.03)  -0.15 (.03)  -0.10 (.03)  -0.09 (.03)
University of North Texas     -0.28 (.02)  -0.22 (.02)  -0.18 (.02)  -0.17 (.02)  -0.12 (.02)  -0.10 (.02)
UT - Permian Basin            -0.32 (.08)  -0.22 (.07)  -0.15 (.07)  -0.15 (.07)  -0.11 (.07)  -0.11 (.08)
UT - Arlington                -0.23 (.02)  -0.23 (.02)  -0.19 (.02)  -0.18 (.02)  -0.14 (.02)  -0.11 (.02)
University of Houston         -0.23 (.02)  -0.22 (.02)  -0.16 (.02)  -0.16 (.02)  -0.11 (.02)  -0.11 (.02)
UT - Pan American             -0.33 (.03)  -0.25 (.03)  -0.20 (.03)  -0.19 (.03)  -0.11 (.03)  -0.11 (.03)
West Texas A&M Univ.          -0.34 (.03)  -0.23 (.04)  -0.19 (.04)  -0.18 (.04)  -0.13 (.04)  -0.12 (.04)
TAMU - Galveston              -0.26 (.05)  -0.22 (.05)  -0.20 (.05)  -0.19 (.05)  -0.14 (.05)  -0.13 (.05)
UT - Dallas                   -0.15 (.03)  -0.17 (.03)  -0.16 (.03)  -0.16 (.03)  -0.15 (.03)  -0.14 (.03)
Tarleton State University     -0.36 (.03)  -0.27 (.03)  -0.23 (.03)  -0.21 (.03)  -0.15 (.03)  -0.15 (.03)
UT - San Antonio              -0.39 (.02)  -0.27 (.02)  -0.22 (.02)  -0.21 (.02)  -0.15 (.02)  -0.15 (.02)
Lamar University              -0.33 (.02)  -0.30 (.03)  -0.24 (.03)  -0.23 (.03)  -0.16 (.03)  -0.16 (.03)
UT - Tyler                    -0.22 (.07)  -0.19 (.07)  -0.15 (.07)  -0.15 (.07)  -0.14 (.07)  -0.16 (.07)
TAMU - Corpus Christi         -0.36 (.03)  -0.26 (.03)  -0.21 (.03)  -0.20 (.03)  -0.15 (.03)  -0.17 (.03)
Univ. of Houston - Downtown   -0.46 (.03)  -0.41 (.03)  -0.28 (.03)  -0.25 (.04)  -0.17 (.04)  -0.17 (.04)
Angelo State University       -0.39 (.03)  -0.28 (.03)  -0.24 (.03)  -0.23 (.03)  -0.17 (.03)  -0.17 (.03)
Midwestern State University   -0.39 (.03)  -0.30 (.03)  -0.25 (.03)  -0.24 (.03)  -0.17 (.03)  -0.17 (.03)
UT - El Paso                  -0.41 (.02)  -0.33 (.02)  -0.25 (.03)  -0.23 (.03)  -0.18 (.03)  -0.17 (.03)
Prairie View A&M University   -0.49 (.03)  -0.46 (.03)  -0.27 (.03)  -0.24 (.03)  -0.17 (.03)  -0.18 (.03)
Sul Ross State University     -0.53 (.05)  -0.42 (.05)  -0.35 (.05)  -0.32 (.05)  -0.24 (.05)  -0.25 (.05)
Texas Southern University     -0.58 (.04)  -0.58 (.04)  -0.38 (.04)  -0.34 (.04)  -0.25 (.04)  -0.26 (.04)
Constant                       9.61 (.03)   9.42 (.08)   9.43 (.08)   8.98 (.12)   9.09 (.33)   9.09 (.33)

Observations                      43,735       43,735       42,962       42,962       42,962       42,962
Adjusted R-squared                  0.09         0.12         0.13         0.13         0.14         0.14

Notes: Includes all TX 1998 & 1999 HS graduates enrolled in a TX 4-yr public college who took the SAT and earned over $2,000 8 yrs after
graduating HS. Texas A&M is the omitted reference category. TAMU = Texas A&M University, UT = University of Texas.


Table 3 - Pairwise Tests for Significant Differences in Value-Added Scores

[The 30 x 30 matrix of pairwise p-values could not be recovered from the source layout;
only the table's caption, row ordering, and notes are reproduced here. Rows and columns,
ordered from highest to lowest conditional earnings under Model 6, are: Texas A&M, TWU,
SFA, Texas State, TAMU Intl, UT Austin, Texas Tech, TAMU Kingsville, SHSU, East Texas,
UNT, UTPB, UT Arlington, Houston, UT Pan Am, WTSU, TAMU Galveston, UT Dallas,
Tarleton, UTSA, Lamar, UT Tyler, TAMUCC, Angelo State, Midwestern, UH DT, UTEP,
PVAMU, SRSU, Texas Southern.]

Notes: The (i, j) element represents the p-value for the test of a significant difference in conditional log earnings between schools i and j.
Point estimates are from the preferred Model 6 in Table 2, which contains our full set of controls.
Schools are listed in rank order according to the point estimate from Model 6. Thus, by column, schools above (below) the diagonal are schools with point estimates higher (lower) than the school indicated by that column.
Point estimates that are significant at the 5% level are in bold.