ABSTRACT
Although the “new economy” once again resembles the old economy, the
drivers of success for many firms continue to be intangible or service-related
assets. These changes in the economic basis of business are leading to
changes in practice that are creating exciting new opportunities for
research. Management accounting is still concerned with internal uses of
and demands for operating and performance information by organizations,
their managers, and their employees. However, current demand for internal
information and analysis most likely reflects current decision-making needs,
which have changed rapidly to meet economic and environmental conditions.
Many management accounting research articles reflect traditional research
topics that might not conform to current practice concerns. Some accounting
academics may wish to pursue research topics that reflect current problems
of practice in order to inform, influence, or understand practice or to influence
accounting education.
This study analyzes attributes of nearly 2,000 research and professional
articles published during the years 1996–2000 and finds numerous, relatively
unexamined research questions that can expand the scope of current man-
agement accounting research. Analyses of theories, methods, and sources
of data used by published management accounting research also describe
publication opportunities in major research journals.
DATA AVAILABILITY
Raw data are readily available online, and coded data are available upon request
from the authors.
Research Objectives
Research Domain
Sampling
The study analyzes articles that appeared in print during the years 1996–2000. This
five-year period witnessed dramatic changes in technology, business conditions,
and the responsibilities of financial and accounting professionals. There is no
reason to believe that future years will be any less volatile. The study further defines
the domain of management accounting research as articles fitting the above topics
that were published in the following English-language research journals:
Academy of Management Journal (AMJ)
Academy of Management Review (AMR)
Accounting and Finance (A&F)
Accounting, Organizations and Society (AOS)
Advances in Management Accounting (AIMA)
Contemporary Accounting Research (CAR)
Journal of Accounting, Auditing, and Finance (JAAF)
Journal of Accounting and Economics (JAE)
Journal of Accounting Research (JAR)
Journal of Management Accounting Research (JMAR)
Management Accounting Research (MAR)
Review of Accounting Studies (RAS)
Strategic Management Journal (SMJ)
The Accounting Review (TAR)
We assume that the research literature in other languages either covers similar
topics or is not related to the practice literature aimed at English-speaking
professionals.3
6 FRANK H. SELTO AND SALLY K. WIDENER
Data Collection
The study uses the online, electronic contents of the abstracts of management
accounting articles from research and practice journals published during the
years 1996–2000 as its source of data. The study includes the entire contents of
explicitly named management accounting journals (e.g. Advances in Management
Accounting, Strategic Finance) and selected articles from other journals and
magazines if articles matched the topic domain. The database of management
accounting articles consists of information on:
373 research articles;
1,622 professional or practice articles.
Data Analysis
Qualitative Method
The study uses a qualitative method to label, categorize, and relate the management
accounting literature data (e.g. Miles & Huberman, 1994). The study uses Atlas.ti
software (www.atlasti.de), which is designed for coding and discovering relations
among qualitative data.4 The study began with predetermined codes based on the
researchers’ expectations of topics, methods, and theories. As normally happens
in this type of qualitative study, the database contains unanticipated qualitative
data that required creation of additional codes. This necessary blend of coding,
analysis, and interpretation means that the coding task usually cannot be outsourced
to disinterested parties. Thus, this method is unlike content analysis, which counts
pre-defined words, terms, or phrases.
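As a minimal illustration (not the authors' actual Atlas.ti workflow), coded abstracts can be tallied by topic with a few lines of Python; the article identifiers and code assignments below are hypothetical:

```python
from collections import Counter

# Hypothetical coded abstracts: each abstract carries one or more codes
# assigned during qualitative coding (the actual study used Atlas.ti).
coded_abstracts = [
    {"id": "R001", "codes": ["topic-BUDGETING", "method-SURVEY"]},
    {"id": "R002", "codes": ["topic-COST MANAGEMENT", "theory-AGENCY"]},
    {"id": "P101", "codes": ["topic-COST MANAGEMENT", "topic-SOFTWARE"]},
]

# Count how often each topic code appears across all abstracts,
# ignoring method and theory codes.
topic_counts = Counter(
    code
    for abstract in coded_abstracts
    for code in abstract["codes"]
    if code.startswith("topic-")
)

print(topic_counts["topic-COST MANAGEMENT"])  # 2
```

Unlike word-counting content analysis, the substantive work here is in assigning the codes; the tally itself is trivial once coding is done.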
New Directions in Management Accounting Research 7
Table 1. (Continued )
method-ARCHIVAL
method-EXPERIMENT
method-FIELD/CASE STUDY
method-LOGICAL ARGUMENT
method-SURVEY
THEORY
theory-AGENCY
theory-CONTINGENCY
theory-CRITICAL
theory-ECONOMIC CLASSIC
theory-INDIVID/TEAM JDM
theory-ORGANIZATION CHANGE
theory-POSITIVE ACCOUNTING
theory-SOCIAL JUSTICE/POWER/INFLUENCE
theory-SOCIAL/PSYCH
theory-SYSTEMS
theory-TRANSACTION COST
TOPIC
topic-BUDGETING
topic-budgeting-activity based
topic-budgeting-capital budgeting
topic-budgeting-general
topic-budgeting-participation
topic-budgeting-planning&forecasting
topic-budgeting-slack
topic-budgeting-variances
topic-BUSINESS INTELLIGENCE
topic-BUSINESS PROCESSES
topic-business processes-credit management
topic-business processes-fixed assets
topic-business processes-inventory management
topic-business processes-procurement
topic-business processes-production management
topic-business processes-reengineering
topic-business processes-travel expenditures
topic-CASH MANAGEMENT
topic-cash management-borrowings
topic-cash management-collections
topic-cash management-credit policies
topic-cash management-electronic banking
topic-cash management-electronic exchange
topic-cash management-foreign currency
topic-cash management-investing
topic-cash management-payments
topic-COMPENSATION
topic-compensation-accounting measures
topic-compensation-design/implementation
topic-compensation-executive
topic-compensation-pay for performance
topic-compensation-stock options
topic-CONTROL
topic-control-alliances/suppliers/supply chain
topic-control-complementarity/interdependency
topic-control-cost of capital
topic-control-customers/customer profitability
topic-control-environmental
topic-control-information/information technology
topic-control-intangibles
topic-control-international/culture
topic-control-JIT/flexibility/time
topic-control-org change
topic-control-quality
topic-control-R&D/new product develop
topic-control-risk
topic-control-smart cards/purchasing cards
topic-control-strategy
topic-control-structure
topic-control-system
topic-COST ACCOUNTING
topic-cost accounting-environmental
topic-cost accounting-general
topic-cost accounting-standards
topic-cost accounting-throughput
topic-COST MANAGEMENT
topic-cost management-ABC
topic-cost management-ABM
topic-cost management-benchmarking
topic-cost management-cost efficiency/reduction
topic-cost management-cost negotiation
topic-cost management-costing
topic-cost management-process mapping
topic-cost management-quality/productivity/tqm
topic-cost management-shared services
topic-cost management-strategy
topic-cost management-target costing
topic-cost management-theory of constraints/capacity
topic-ELECTRONIC
topic-electronic-business
topic-electronic-commerce
topic-electronic-intranet
topic-electronic-processing
topic-electronic-web sites
topic-electronic-xml/xbrl
topic-EXPERT SYSTEMS
topic-FINANCIAL ACCOUNTING
topic-financial reporting-accounting standards/SEC
topic-financial reporting-depreciation
topic-financial reporting-drill downs
topic-financial reporting-e reporting
topic-financial reporting-environmental
topic-financial reporting-general
topic-financial reporting-international
topic-financial reporting-open books
topic-financial reporting-realtime accounting
topic-INTERNAL CONTROL
topic-internal control-controls
topic-internal control-corporate sentencing guidelines
topic-internal control-data security/computer fraud
topic-internal control-ethics
topic-internal control-fraud awareness/detection
topic-internal control-internal audit
topic-internal control-operational audits
topic-MANAGEMENT ACCOUNTING-practices
topic-OTHER
topic-OUTSOURCING DECISION
topic-PERFORMANCE MEASUREMENT
topic-performance measurement-balanced scorecard
topic-performance measurement-business process
topic-performance measurement-EVA/RI
topic-performance measurement-evaluation/appraisal
topic-performance measurement-group
topic-performance measurement-incentives
topic-performance measurement-individ
topic-performance measurement-manipulation
topic-performance measurement-nonfinancial
topic-performance measurement-productivity
topic-performance measurement-strategic
topic-performance measurement-system
topic-PRICING
topic-PROFITABILITY
topic-PROJECT MANAGEMENT
topic-RESEARCH METHODS
topic-SHAREHOLDER VALUE
topic-SOCIAL RESPONSIBILITY
topic-SOFTWARE
topic-software-ABC/product costing
topic-software-accounting technology (general)
topic-software-budgeting
topic-software-costing
topic-software-credit analysis
topic-software-data conversion
topic-software-database
topic-software-decision support
topic-software-document management
topic-software-erp
topic-software-fixed assets
topic-software-graphical accounting
topic-software-groupware
topic-software-human resources/payroll
topic-software-internet
topic-software-mindmaps
topic-software-modules
topic-software-operating system
topic-software-project accounting
topic-software-purchasing
topic-software-reporting
topic-software-sales/C/M
topic-software-selection/accounting platforms/implementation
topic-software-spreadsheets
topic-software-t&e
topic-software-warehousing/datamarts/intelligent agents
topic-software-workflow
topic-software-year2000 compliant
topic-TRANSFER PRICING
topic-VALUATION
topic-VALUE BASED MANAGEMENT
topic-VALUE CHAIN
Measures of Correspondence
The study measures correspondence between research and practice to capture
different dynamics of information exchange between the realms of inquiry. The
study defines differences in changes and levels of topic frequency as measures of
correspondence. Research and practice topic frequencies are scaled by the total
number of research or practice topics to control for the relative sizes of the two
outlets. The study examines contemporaneous and lagged differences, as the data
permit, for evidence of topic correspondence. Furthermore, the study investigates
whether research topic frequency leads or lags practice.
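The scaling and differencing just described can be sketched as follows; the topic counts are invented for illustration, and the measure simply divides each topic's count by the total for its realm before comparing:

```python
# Hypothetical topic counts for research and practice articles.
research = {"cost management": 30, "budgeting": 20, "software": 5}
practice = {"cost management": 400, "budgeting": 150, "software": 600}

def scaled_frequencies(counts):
    """Scale raw topic counts by the realm's total to control for the
    relative sizes of the research and practice literatures."""
    total = sum(counts.values())
    return {topic: n / total for topic, n in counts.items()}

r = scaled_frequencies(research)
p = scaled_frequencies(practice)

# Contemporaneous difference in relative topic emphasis
# (research minus practice); lagged versions would compare across years.
differences = {topic: r[topic] - p[topic] for topic in r}
```

With these invented numbers, software shows a negative difference (practice emphasizes it more) and cost management a positive one, which is the kind of asymmetry the correspondence analysis is designed to surface.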
Validity Issues
One researcher coded all of the practice article abstracts in the database and a
5% random sample of the research abstracts. Another researcher coded all of
the research abstracts and a 5% random sample of the practice abstracts. Inter-
rater reliability of the overlapped coding was 95%, measured by the proportion of
coding agreements divided by the sum of agreements plus disagreements from the
5% random samples of articles in the research and practice databases.5 Because the
measured inter-rater reliability is well within the norms for this type of qualitative
research (i.e. greater than 80%) and because hypothesis testing or model building
is not the primary objective of the study, the researchers did not revise the database
to achieve consensus coding.
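The reliability statistic reported above is simple percentage agreement. A sketch with invented coding decisions, assuming each rater assigns one code per abstract in the overlap sample:

```python
def inter_rater_reliability(coder_a, coder_b):
    """Proportion of coding agreements: agreements divided by the sum of
    agreements plus disagreements. Inputs are parallel lists of code
    assignments for the same abstracts."""
    agreements = sum(1 for a, b in zip(coder_a, coder_b) if a == b)
    return agreements / len(coder_a)

# Hypothetical overlap sample: two researchers code the same 20 abstracts
# and disagree on exactly one.
coder_a = ["topic-BUDGETING"] * 19 + ["topic-CONTROL"]
coder_b = ["topic-BUDGETING"] * 19 + ["topic-PRICING"]

print(inter_rater_reliability(coder_a, coder_b))  # 0.95
```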
Aggregate Analysis
Figure 2 shows the most aggregated level of analysis used in this study, which
reflects the levels of research and practice frequencies of major topics. The three
most frequent practice topics in Fig. 2 are: (1) software; (2) management control;
and (3) cost management.6
The Institute of Management Accountants (IMA) analyzed the practice of man-
agement accounting (1997, 2000) in part by asking respondents to identify critical
work activities that are currently important and that are expected to increase in
the future. The IMA reports that 21% of respondents identified computer systems
and operations as one of the five most critical current work activities and 51%
believe that this work activity will increase in importance in the future. Eighteen
percent of respondents in the IMA practice analysis state that control of customer
and product profitability is one of the most critical work activities; however, 59%
of respondents believe that this is one of the work activities that will increase
(0.3 < R < 0.6). Note that these annual correlations do not reflect a monotonic
increase of correspondence over time. However, the data show that modest
contemporaneous correspondence of research and practice topics exists.
ANALYSIS OF CORRESPONDENCE
OF TOPIC LEVELS
One can observe many instances in Fig. 2 where topic frequency differences are
less than 5%, which indicate high correspondence between research and practice.
Most of these topics apparently are of relatively minor interest to both researchers
and professionals (i.e. total frequency of either practice or research is less than
5%). While these low frequency topics may represent emerging areas for both
realms, we focus here on topics that also have at least 5%7 of the total article
coverage in either practice or research. The only major topic meeting these criteria
is “cost management.”
Cost Management
Benchmarking Questions
Several benchmarking research questions seem obvious, including:
What are the costs and benefits of benchmarking at the process, service, or firm levels? One
should be able to measure costs of benchmarking activities, but, as is usually the case, benefits
may be more elusive. Attributing improvements in processes to benchmarking may be more
feasible than attempting to explain business unit or firm-level financial performance.
What are the attributes of successful or unsuccessful design and implementation of
benchmarking? Addressing this question perhaps should follow the first unless one wants to
proxy costs and benefits with user satisfaction measures.
Strategy Questions
Most research studies use measures of strategy as independent variables to
explain performance or other organizational outcomes. Practical concerns related
to strategy include:
What are appropriate ratios or indicators to measure whether an organization is meeting its
strategic goals, which may be heavily marketing and customer oriented? Are these indicators
financial, non-financial, or qualitative? Numerous practice articles argue that strategic manage-
ment is possible only with the “right indicators.” But what are they? How are they used? With
what impact?
Is the balanced scorecard an appropriate tool for performance evaluation, as well as for
strategic planning and communication? The BSC is offered as a superior strategic planning
and communication tool. Many organizations are inclined to also use the BSC (or similar,
complex performance measurement models) as the basis for performance evaluations. What
complications does this extension add? With what effects?
Are scenarios from financial planning models effective tools for strategic management?
Financial modeling is an important part of financial and cost management. Do strategic planners
need or use scenarios from these models? Why or why not? With what effects?
Relatively less research than practice exists in the area of cost reduction/efficiency
– a difference of 22%. Practice coverage of this topic is fairly uniform over the
5-year study period. Although research coverage peaked in 1998, some coverage
continued into 2000.
What are the effects of IT on total costs and productivity? The information systems literature
commonly focuses on measuring IT-user satisfaction, while larger issues of efficiency remain
under-researched. Application contexts include finance, human resources, procurement,
payables, travel, payroll, and customer service.
current, practical interest, but they continue to attract research efforts, perhaps
because of tradition and the interesting theoretical issues they present. It also is
possible that researchers’ long concern with budgetary slack still leads practice.
For example, excess budget slack conceivably might be included with other dys-
functional actions designed to manipulate reported performance and targeted for
elimination by financial reforms. Conversely, the data contain no research in topic
areas of activity-based budgeting (Difference = 10%) and planning & forecasting
(Difference = 65%). The latter area, planning and forecasting, has a large topic
difference and has grown in practice coverage each year of the study period.
What are the determinants of effective planning and forecasting? Effective planning and
forecasting can be defined as: (1) accurate, timely, and flexible problem identification; (2)
communication; and (3) leading to desired performance. Researchers may find environmental,
organizational, human capital, and technological antecedents of effective planning and
forecasting methods and practices. Whether these are situational or general conditions would
be of considerable interest.
What exogenous factors affect sales and cost forecasting? This includes consideration of
the related question, What is a parsimonious model? Nearly every management accounting
text states that sales forecasting is a difficult task. Likewise, cost forecasting can be difficult
because of the irrelevancy or incompleteness of historical data. Yet both types of forecasting
are critical to building useful financial models and making informed business decisions.
What are effects of merging BSC or ABC with planning & forecasting? ABC and the
balanced scorecard represent current recommendations for cost and performance measurement.
However, the research literature has not extensively considered the uses or impacts of these
tools, which may be particularly valuable for planning and forecasting.
What are the roles of IT & decision-support systems in improving planning & forecasting?
Most large organizations use sophisticated database systems, and accessing and using
information can be facilitated by intelligent interfaces and decision support systems. Yet we
know little about the theoretical and observed impacts of these tools in general and almost
nothing about their effects on planning and forecasting.
What are appropriate management controls, internal controls, and performance measures
for E-business ventures? This includes the related question, Do they differ from conventional
business? Doing business in the “New Economy” has altered the underlying business model
of most firms, which in turn affects the design of the firm’s management control system, internal
control environment, and performance measurement system.
What technologies drive enhanced productivity and efficiencies in the firm? Firms
must be able to perform cost/benefit analysis weighing the potential benefits to be gained
from employing new technologies against the cost of implementing that technology and
reengineering the business process. Two related questions are What is the optimal capital
budgeting model for electronic business? and Which business processes lend themselves to a
reengineering process that would result in increased efficiencies and reduced costs?
How does electronic data interchange affect the management control system? Electronic
commerce is changing traditional business practices, for example through increased bar
coding of transactions and inventory and the use of electronic procurement. How do these
new business practices impact the design of the MCS?
Electronic Processing
The processing of accounting transactions can be a tedious and time-consuming
task. Electronic processing of transactions can create efficiencies within organi-
zations. Questions of interest are primarily related to how electronic processing
can improve firm performance and efficiencies.
How should accounting workflows and transaction processing be reengineered to take
advantage of electronic processing? Firms need to know how to integrate an environment that
traditionally generates large volumes of paper and incorporates many formal controls with an
electronic processing environment that may generate no paper at all and dispenses with some
of the traditional controls.
What is the impact of electronic processing on the firm’s control environment? With the
potential for increased efficiencies arising from the reduction of traditional paper documents,
there may not be a paper trail left to substantiate and document transactions. What is the
impact on internal control? Is a paperless environment cost effective?
Topic Coverage
Figure 6 shows some evidence of journal specialization by topic, although all the
research journals have published at least some articles addressing these topics.
For example, management control topics have appeared most often in AOS
and MAR, both U.K.-based journals. Performance measurement issues have
appeared most often in the North American journals, JAR, CAR, TAR, and JAE,
and these journals also publish management control articles. This concentration
may reflect editorial policies or results of years of migration of topics. Apart
from concentration of performance measurement and management control, it
appears that all the surveyed journals are open to publishing various management
accounting topics.
Theories
using contingency theory are the U.K. journals, AOS and MAR. These journals
plus AIMA and JMAR appear to be the broadest in using alternative theories.
Methods of Analysis
As shown in Fig. 8, articles in JAR and similar journals tend to use either analytical or statistical
methods, but almost never use qualitative analyses. On the other hand, management
accounting articles in other journals rarely use analytical methods, though they
often use statistical methods. For example, articles in AIMA, AOS, and JMAR
most often use statistical methods. Qualitative analysis appears mostly in the U.K.
journals, AOS and MAR, followed by AIMA and JMAR.
Sources of Data
Authors want to place their work in the most prestigious journals (a designation
that varies across individuals and universities) and also want to receive competent
reviews of their work. Thus it seems sensible (or perhaps explicitly strategic) to
design research for publishability in desired outlets. As a practical matter, this
strategic design perspective may lead researchers, who themselves specialize
in theories and methods, to design practice-oriented management accounting
research for specific journals. Historical evidence indicates that all surveyed
journals may be open to new topics. Although several journals seem open to
alternative theories and methods (AIMA, JMAR, MAR, AOS), the major North
American journals have been more specialized. This may reflect normative values
and practical difficulty of building and maintaining competent editorial and
review boards. Thus, if one wants to pursue a new topic in research aimed at JAR and similar
journals, one perhaps should use a theory, source of data, and method that these journals have
customarily published.
CONCLUSION
There is no shortage of interesting, potentially influential management accounting
research questions. From an analysis of published research and practice articles,
this study has identified many more than could be reported here. Even where
research and practice topics appear to correspond, considerable divergence in
questions exists. Identified research questions offer opportunities for ALL per-
suasions of accounting researchers. Synergies between management accounting
and accounting information systems seem particularly obvious and should not be
ignored. Furthermore, research methods mastered by financial accountants and
auditors can be applied to management accounting research questions.
Even with efforts to design practice-oriented management accounting research
for publishability, challenges to broader participation and publication might
remain. One challenge to publishing this type of management accounting research is a
lack of institutional knowledge among authors, reviewers, and
editors. To be credible, authors must gain relevant knowledge to complement
their research method skills. For example, research on management control of
information technology and strategic planning should be preceded by knowledge
of the three domains, in theory and practice. Furthermore, editors and reviewers
who want to support publication of practice-oriented research should be both
knowledgeable of practice and open-minded, particularly with regard to less objective
sources of data. However, it does not seem necessary or desirable to lower
the bar on theory or methods of analysis to promote more innovative research. In
summary, we hope that this paper encourages management accounting researchers
to take on the challenges of investigating interesting, innovative questions oriented
to today’s business world and practice of management accounting.
NOTES
ACKNOWLEDGMENTS
We acknowledge and thank Shannon Anderson, Phil Shane, Naomi Soderstrom
and participants at the 2003 Advances in Management Accounting Conference,
2002 MAS mid-year meeting, a University of Colorado at Boulder workshop and
the AAANZ-2001 conference for their comments and suggestions for this paper.
REFERENCES
Anderson, P. F. (1983). Marketing, scientific progress, and scientific method. Journal of Marketing,
47(4, Fall), 18–31.
Atkinson, A. A., Balakrishnan, R., Booth, P., Cote, J., Groot, T., Malmi, T., Roberts, H., Uliana, E., &
Wu, A. (1997). New directions in management accounting research. Journal of Management
Accounting Research, 79–108.
Demski, J. S., & Sappington, D. E. M. (1999, March). Summarization with errors: A perspective on
empirical investigations of agency relationships. Management Accounting Research, 10(1), 21–37.
Elnathan, D., Lin, T., & Young, S. M. (1996). Benchmarking and management accounting: A
framework for research. Journal of Management Accounting Research, 37–54.
Institute of Management Accountants (2000). Counting more, counting less.
http://www.imanet.org/content/publications and research/IMAstudies/moreless.pdf.
Ittner, C. D., & Larcker, D. F. (1998). Innovations in performance measurement: Trends and research
implications. Journal of Management Accounting Research, 205–238.
Ittner, C. D., & Larcker, D. F. (2001). Assessing empirical research in managerial accounting: A
value-based management perspective. Journal of Accounting and Economics.
Malina, M. A., & Selto, F. H. (2001). Communicating and controlling strategy: An empirical study of
the effectiveness of the balanced scorecard. Journal of Management Accounting Research, 13,
47–90.
Miles, M., & Huberman, A. (1994). Qualitative data analysis: An expanded sourcebook. Thousand
Oaks, CA: Sage.
Shields, M. D. (1997). Research in management accounting by North Americans in the 1990s. Journal
of Management Accounting Research, 3–62.
APPENDIX: PRACTICE-ORIENTED
RESEARCH QUESTIONS
Major Topic Sub-Topic Selected Research Questions
ABSTRACT
An important management topic across a wide spectrum of firms is reconfiguring
the value delivery system – defining the boundaries of the firm.
Profit impact should be the way any value chain configuration is evaluated.
The managerial accounting literature refers to this topic as “make versus
buy” and typically addresses financial impact without much attention to
strategic issues. The strategic management literature refers to the topic
as “level of vertical integration” and typically sees financial impact in
broad “transaction cost economics” terms. Neither approach fully treats the
linkages along the causal chain from strategic actions to resulting profit
impact. In this paper we propose a theoretical approach to explicitly link
supply chain reconfiguration actions to their profit implications. We use the
introduction by Levi Strauss of Personal Pair™ jeans to illustrate the theory,
evaluating the management choices by comparing profitability for one pair
of jeans sold through three alternative value delivery systems. Our intent is
to propose a theoretical extension to the make/buy literature which bridges
the strategic management literature and the cost management literature,
using A-P-L and SCM, and to illustrate one application of the theory.
It has been reported that firms that have restructured their value delivery
system have experienced lower overhead costs, enhanced responsiveness and
flexibility, and greater efficiency of operations (Lorenzoni & Baden Fuller, 1995).
Furthermore, from the single-firm perspective, alliances and partnerships have
created new strategic options, induced new rules of the game, and enabled new
complementary resource combinations (Kogut, 1991).
From the initial emphasis on joint ventures, a wider spectrum of forms of network
alliances has emerged. The basic idea in “outsourcing strategies” is to transform the
firm’s value chain to reduce the assets required and the number of traditional
functional activities performed inside the organization, resulting in a much different
configuration of the corporate boundaries. This can challenge the firm to carefully
reconsider its core capabilities (Prahalad & Hamel, 1994). Alliances and
partnerships create leverage in strategic maneuvering, shaping the so-called “intelligent
firm” (Quinn, 1992) or the “strategic center” (Lorenzoni & Baden Fuller, 1995).
In “networked organization” design, the key choice is which activities to
perform internally and which to entrust to the network (Khanna, 1998). The choice
can free resources from traditional supply chains to focus on core competencies
that foster the firm’s competitive advantage. One example is Dell’s de-emphasis
of manufacturing in favor of web-enhanced direct distribution in business PCs in
the 1990s. Williamson (1975) frames this choice as how much “market” and how
much “hierarchy” to employ.
The Profit Impact of Value Chain Reconfiguration 39
THEORETICAL BACKGROUND
The Transaction Cost Approach
Transaction cost economics (TCE) defines the rational boundaries of the firm in
terms of the trade-off between the costs of internally producing resources and the
costs associated with acquiring resources in an external exchange. Williamson
(1975) developed this approach, drawing upon the institutional economics studies
of Coase (1937). A “transaction” is defined as the exchange of a good or a service
between two legally separate entities. Williamson (1985) holds that such
exchanges can increase costs, relative to internal production, because of delays, lack
of communication, transactor conflicts, malfunctions or other maladjustments.
40 JOHN K. SHANK ET AL.
To avoid such transaction costs, firms set up internal governance
structures with planning, coordination, and control functions. However,
these structures, themselves, also cause resource consumption and thus costs.
Transaction costs derive from managing supplier and buyer activities both ex
ante in searching, learning, and negotiating (or safeguarding) agreements, and ex
post in organizing, managing and monitoring the resulting relationship. The cost
of transacting may become higher than the production cost savings because of
increases in opportunistic behavior, bounded rationality, uncertainty, transactor
conflicts or asset specificity. Firms then tend to prefer the integrated organization
over the market transaction.
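The make-versus-buy trade-off at the heart of TCE can be summarized in a toy comparison; all cost figures below are invented, and real transaction costs are of course far harder to estimate than this sketch suggests:

```python
def prefer_make(internal_cost, purchase_price, transaction_costs):
    """TCE heuristic: integrate ('make') when internal production is cheaper
    than the market price plus the ex ante (search, negotiation) and ex post
    (monitoring, conflict) costs of transacting."""
    return internal_cost < purchase_price + transaction_costs

# Hypothetical component: internal production costs 110 per unit; a supplier
# quotes 95, but searching, negotiating, and monitoring add 25 per unit.
print(prefer_make(110, 95, 25))  # True -> integration is preferred
```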
The vertically integrated organization may provide numerous benefits:
controllability of the actors, better attenuation of conflicts, and more effective
communications (Williamson, 1986). But many now believe that markets provide
better incentives and limit bureaucratic distortions more
efficiently than vertical integration. The market may also better aggregate several
demands, yielding economies of scale or scope (Williamson, 1985). Vertical
integration, as a generalization, is in decline. For further theoretical literature on
vertical integration see D’Aveni and Ravenscraft (1994), D’Aveni and Ilinitch
(1992), Harrigan (1983), Hennart (1988), or Quinn et al. (1990).
could allow managers to better estimate the “cost of ownership” (Carr & Ittner,
1992; Kaplan & Atkinson, 1989; Kaplan & Cooper, 1998). Cost of ownership
would include not only purchase price, but also those costs related to purchasing
activities (ordering, receiving, incoming inspection), holding activities (storage,
cost of capital, obsolescence), poor quality issues (rejections, re-receiving, scrap,
rework, repackaging), or delivery failures (expediting, premium transportation,
lost sales owing to late deliveries). Cost of ownership is obviously dramatically
higher than purchase price alone.
For example, Carr and Ittner (1992) note that Texas Instruments increased its
estimate of the cost of ownership of an integrated circuit from $2.50 to $4.76
when considering poor system quality. In another survey, Ask and Laseter (1998)
found that in selected commodities such as office supplies, fabrication equipment
and copy machines, total cost of ownership was, respectively, 50, 100, and 200%
higher than purchase price alone. Clearly, ABC is a necessary augmentation to
the traditional management accounting conception of the make/buy decision, but
the result is still internally focused on the firm rather than the full supply chain.
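The cost-of-ownership idea can be sketched in Python. The $2.50 purchase price and $4.76 total for the Texas Instruments integrated circuit come from the Carr and Ittner example above, but the split across activity cost pools below is invented purely for illustration; only the structure (purchase price plus activity-based cost pools) follows the text.

```python
# Total cost of ownership (TCO) sketch: purchase price plus the
# ownership-related activity costs named in the text. The individual
# pool amounts are hypothetical; the endpoints echo the TI example.

def total_cost_of_ownership(purchase_price, activity_costs):
    """Sum the purchase price and all ownership-related activity costs."""
    return purchase_price + sum(activity_costs.values())

# Illustrative cost pools per unit, matching the categories in the text.
costs = {
    "purchasing": 0.40,    # ordering, receiving, incoming inspection
    "holding": 0.55,       # storage, cost of capital, obsolescence
    "poor_quality": 0.81,  # rejections, re-receiving, scrap, rework
    "delivery": 0.50,      # expediting, premium transport, lost sales
}

tco = total_cost_of_ownership(2.50, costs)
print(f"TCO: ${tco:.2f} vs. purchase price $2.50")
```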
Strategic Cost Management is the view that cost analysis and cost management
must be tackled broadly with explicit focus on the firm’s strategic positioning in
terms of the overall value supply chain of which it is a part.
Strategic Positioning
For sustained profitability, any firm must be explicit about how it will compete.
Competitive advantage in the marketplace (Porter, 1985) ultimately derives from
providing better customer value for equivalent cost (differentiation) or equivalent
customer value for lower cost (low cost). Occasionally, in a few market niches,
a company may achieve both cost leadership and superior value simultaneously,
for a while. Examples include IBM in PCs in 1986 or Intel in integrated circuits
in 1992. In general, shareholder value derives more clearly from differentiation,
since the benefits of low cost are ultimately passed more often to customers than
to shareholders.
about value creation and destruction at each stage (Shank et al., 1998). In carefully
analyzing its internal value chain and the industry chain of which it is a part, a firm
might discover that economic profits are earned in the downstream activities, such
as distribution or customer service or financing, but not upstream in basic manufac-
turing. For example, Shank and Govindarajan (1992) show that in the consumer liq-
uid packaging industry, much higher returns to investment are earned downstream
at the filling plant than upstream in package manufacturing. In such a case, any
incremental resource allocation upstream would require very strong justification.
Many businesses today are showing that value is moving downstream in the
chain (Slywotzky, 1996). General Electric and Coca-Cola have experienced the
benefits of moving downstream (Slywotzky & Morrison, 1997). In the U.S. auto
industry, a very high percentage of overall profit is in after-market services such
as leasing, insurance, rentals, and repairs. New car sales and auto manufacturing
show low profit (Gadiesh & Gilbert, 1998). At the deepest level, value chain
analysis allows managers to better understand their activities in relation to
their core competencies and to customer value. Many firms have discovered
that streamlining the chain can reduce costs and enhance the value provided to
customers (Hergert & Morris, 1989; Normann & Ramirez, 1994; Porter, 1985;
Rackham et al., 1996; Womack & Jones, 1996).
Epstein et al. (2000) argue that evaluating the impact of any strategic initiative
requires assessing the profit implications all along the linked set of causal steps
that make up the initiative. Too often, they say, the linkages from a decision to the
related action variables to the related intervening system variables to profit are not
clearly identified and quantified. They propose and illustrate a theoretical model
(APL) to make such linkages explicit. The APL model is intended to promote an
integrative and systemic approach to evaluating strategic choices and to propose
a new performance metric – full supply chain profitability – for use in monitoring
strategy implementation.
We believe that APL is a very appealing extension of the SCM framework for
evaluating strategic choices and we incorporate it here.
Lean Thinking
The work by Womack and Jones (1996) on supply chain reconfiguration provides
a context to apply SCM and APL to the decision by Levi Strauss to introduce
Personal Pair™ jeans. Although Levi’s has long been a dominant brand in apparel,
Levi Strauss is primarily a manufacturing company, selling its products to whole-
salers or retailers rather than end-use customers. As the apparel business continues
to evolve, firms all along the industry value chain are continually presented with
opportunities to create new ways to compete. This requires the firm to carefully
position itself within the industry structure, avoiding or mitigating the power of
competitors. Successful firms have the ability to differentiate an idea from an
opportunity and can quickly marshal physical resources, money, and people to take
advantage of windows of business opportunity. Competitive advantage is always a
dynamic concept, continually shifting as firms either reposition within industries
or position in such a manner that existing industry boundaries are redrawn
(D’Aveni, 1993).
Womack and Jones have studied this process since the mid-1980s, starting with
the auto industry. They later expanded their research base in an attempt to identify
“best-in-class” across industries (Womack & Jones, 1996). They termed their point
of view “lean thinking.” In their cross-industry study, they demonstrate that many
companies have been able to create substantial shareholder wealth by challenging
the way they implement their strategies through a five-step process. First, identify
the value criterion from the customer viewpoint at a disaggregated level – a specific
product to a specific customer at a specific price at a specific place and time.
Second, map the value chain’s three elements: the physical stream which originates
with the first entity that supplies any raw input to the system and ends with a satis-
fied customer, regardless of legal boundaries; the information stream that enables
the physical stream; and the problem solving/decision stream which develops
the logic for the physical stream. Third, focus on continuous flow and minimize
disruptions such as those in a typical “push-based, batch-and-wait” system. This is
accomplished by the fourth step – creating “pull,” such that the customer initiates
the value stream. And fifth, strive for continuous improvement (Kaizen) by
creating a “virtuous circle” where transparency allows all members to continually
improve the system.
In their study of world-class lean organizations, the authors cite results such
as 200% labor efficiency increases, 90% reductions in throughput time, 90%
reductions in inventory investment, 50% reductions in customer errors and 50%
reductions in time-to-market with wider product variety and modest capital
investment. An APL model is necessary to tie down the profit implications of
improvements in these leading performance indicators.
In the next section of the paper, we present the Levi’s Personal Pair™ business
initiative as one example of a management innovation demonstrating attention to
all five of these steps. In Section IV, we present a full value chain
profitability impact assessment for this example using APL.
In 1995, women’s jeans was a $2 billion fashion category in the U.S. and growing
fast. Levi’s was the market leader with more than 50% share of market, but their
traditional dominant position was under heavy attack. Standard Levi’s women’s
jeans, which were sold in only 51 size combinations (waist and inseam), had
been the industry-leading product for decades, but “fashion” was now much more
important in the category. Market research showed that only 24% of women were
“fully satisfied” with their purchase of standard Levi’s at a list price of about
$50 per pair.
“Fashion” in jeans meant more styles, more colors, and better fit. All of these
combined to create a level of product line complexity that was a nightmare for
manufacturing-oriented, push-based companies like Strauss, which depend on
independent retailers to reach consumers. Recognizing a need for better first-hand
market information, in the early 1990s Strauss opened a few retail outlets, Original
Levi’s stores. By 1994, Strauss operated 19 retail outlets across the country (2,000
to 3,000 square foot mall stores) to put them in closer touch with the ultimate
customers. But this channel was still a tiny part of their overall $6 billion
in sales, which went primarily to distributors and independent retailers.
Strauss was as aggressive as most apparel manufacturers and retailers in
investing in process improvements and information technology to improve
manufacturing and delivery cycle times and (pull-based) responsiveness to actual
buying patterns. But the overall supply chain from product design to retail sales
was still complex, expensive and slow. In spite of substantial improvements in
recent years, including extensive use of Electronic Data Interchange (“EDI”),
there was still an eight-month lag, on average, between receiving cotton fabric
and selling the final pair of Levi’s jeans (see Fig. 1). The industry average lag was
still well over twelve months in 1995.
Custom Clothing Technology Corp. (CCTC), a small Newton, MA-based
software firm, offered Levi’s a very innovative business proposal in 1994 based on
an alternative value chain concept. CCTC specialized in client/server applications
linking point-of-sale custom fitting software directly with single-ply fabric cutting
software for apparel factories. CCTC suggested a joint venture to introduce
women’s Personal Pair™ kiosks in 4 of the Original Levi’s stores. The management
of CCTC had solid technology backgrounds but little retail experience. They were,
however, convinced of the attractiveness of their new process, which operates
as follows:
(1) The Personal Pair™ kiosk is a separate booth in the retail store equipped with
a touch screen PC.
Fig. 1. The Conventional Supply Chain for Levi’s Jeans.
(2) A specially-trained sales clerk uses a tape to take three measurements from
the customer (waist, hips and rise) and record them on the touch screen. There
are 4,224 possible combinations of these three measurements. Inseam length
is not yet considered.
(3) The computer flashes a code corresponding to one of 400 prototype pairs
which are stocked at the kiosk. The sales clerk retrieves the prototype pair for
the customer to try on.
(4) With one or two tries, the customer is wearing the best available prototype.
Then the sales clerk uses the tape again to finalize the exact measurements
for the customer (4,224 possible combinations) and to note the inseam length
desired.
(5) The sales clerk enters the 4 final measurements on the touch screen and records
the order. The system was available only for the Levi’s 512 style, but 5 color
choices were offered in both tapered and boot-cut legs.
(6) The customer pays for the jeans and pays a $5 Fed Ex delivery charge (per
pair). Delivery is promised in not more than three weeks.
(7) Each completed customer order is transmitted by modem from the kiosk
to CCTC where it is logged and retransmitted daily to a Levi’s factory in
Tennessee.
(8) At the factory, each pair is individually cut, hand sewn, inspected and packed
for shipment. Each garment includes a sewn-in bar code unique to the customer
for easy re-ordering at the store where the bar code is on file in the kiosk.
(9) There is a money-back guarantee of full satisfaction on every order.
As is immediately obvious in Fig. 1, the Original Levi’s store system is the an-
tithesis of lean! Due to uncertainty in demand forecasting and inconsistent supply
chain lead times, large investments in inventories are necessary (raw, WIP and
finished). This, in turn, necessitates investments in logistics support assets such as
warehouses, IT systems, and vehicles.
For the business that CCTC targeted, the five elements of lean thinking can be
summarized as follows:
(1) Value. Although Levi Strauss was a very profitable and very large firm, only
24% of women were satisfied with the fit of their new jeans. This “opportunity”
in women’s jeans that Levi’s was missing illustrates the need to apply the lean
thinking methodology on a disaggregated level.
(2) Value stream. It is unclear how concerned Levi’s was about the cumbersome
value stream for this particular product. Their approach was typical in the
industry, and Levi’s corporate ROE averaged a very robust 38% for the three-
year period, 1993 to 1995.
(3) Continuous flow. Given the eight month denim-to-sale cycle, with frequent
inventory “stops,” there is clearly very little “flow.”
(4) Pull. Likewise, this is the classic push system. The customer initiates noth-
ing. All production and distribution activity is driven by sales forecasts and
production lot-sizing.
(5) Kaizen. Again, it seems that Levi’s satisfaction with a high overall ROE may
have led them to miss the lack of transparency in the women’s jeans chain
which blocks the opportunity for continuous improvement.
Although Levi’s does not publish financial results for women’s jeans sold through
the Original Levi’s channel, we were still able to analyze this channel with some
degree of confidence using field site visits, industry averages, benchmark company
comparisons, and interviews with industry participants.
A breakdown of the profitability impact along each stage of the chain for the
normal wholesale channel, using the APL model, is shown in Fig. 3. As noted
earlier, the retail list price for a pair of jeans is approximately $50. Assuming a
typical retail gross margin of 30%, the Levi’s wholesale price is close to $35. In
addition, historically, approximately 1/3 of Levi’s jeans are sold at markdowns
averaging approximately 30% off list. This equates to average price allowances of
about $5 per pair (1/3 × 30% × $50). About 60% of this, or $3 per pair, is made
good by Levi’s in some type of co-op agreement. The result is a net sales price for
Levi’s of $32 ($35 – $3).
The footprint gross margin in Fig. 2 for Levi’s as the manufacturer is about
40%. This implies that cost of goods sold for one pair of jeans is about $19
(60% × $32). From research, we know that denim costs about $5 per pair and
conversion another $5. This leaves approximately $9 for distribution logistics.
Overall S,G&A is 25% of sales per Fig. 2, which would be $8 per pair, based
on net sales of $32. We estimate S,G&A to be moderately higher ($9 per pair)
for women’s jeans because of the more complex supply chain for a fashion item.
Pre-tax profit per pair is thus $4 ($32 – $19 – $9). Note that a significant part of
cost is directly due to the “push” system in place ($3 in markdowns, the additional
$1 in S,G&A, and $9 in distribution costs).
The investment per pair can also be estimated from the financial footprint. Using
the average inventory turnover of 4.73, we estimate the inventory investment to
be about $4 ($19/4.73). Accounts payable (27 days) for this channel rounds to
$1, yielding a net inventory requirement of $3 for every pair sold. The collection
period for women’s jeans should not be that much different from the overall
Levi’s collection period of 51 days, which translates to $4 in accounts receivable
for each pair. In a like manner, the 5.33 fixed asset turn gives us a total of $6
per pair ($32/5.33) in property assets. Our field research indicates that this plant
investment for the normal channel is mostly in the factory, rather than distribution.
In total, we estimate that, for this channel, every pair sold requires capital of
approximately $13 ($4 – $1 + $4 + $6). With the above pre-tax operating profit
of $4, this is an overall very healthy ROIC of about 31%. The figure is marginally
less than the corporate average because of extra downstream costs for a women’s
fashion item.
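The per-pair arithmetic for the normal wholesale channel above can be restated as a short Python sketch. All dollar figures and turnover ratios are the paper's own estimates; intermediate results are rounded to whole dollars, as in the text.

```python
# Per-pair economics of the normal wholesale channel, restated from the
# text. Figures are the paper's estimates, rounded as in the text.

list_price = 50.00
wholesale_price = list_price * (1 - 0.30)          # 30% retail margin -> $35
markdown_allowance = (1 / 3) * 0.30 * list_price   # ~$5 average per pair
levis_share = 0.60 * markdown_allowance            # ~$3 made good by Levi's
net_sales = wholesale_price - levis_share          # $32 net sales price

cogs = round(0.60 * net_sales)   # 40% gross margin -> $19 cost of goods sold
sga = 9.00                       # 25% of sales (~$8) plus $1 fashion premium
pretax_profit = net_sales - cogs - sga             # $4 per pair

inventory = round(cogs / 4.73)          # 4.73 inventory turns -> $4
payables = 1.00                         # 27 days of payables, rounded
receivables = 4.00                      # 51-day collection period -> $4
fixed_assets = round(net_sales / 5.33)  # 5.33 fixed asset turns -> $6

capital = inventory - payables + receivables + fixed_assets  # $13 per pair
roic = pretax_profit / capital                               # ~31%
print(f"profit ${pretax_profit:.2f} on capital ${capital:.2f} "
      f"-> ROIC {roic:.0%}")
```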
We next make the adjustments necessary to convert this analysis to one pair of
women’s jeans sold through Original Levi’s stores. The profitability impact at
each stage along this chain is shown in the APL model in Fig. 4. Basically, the
financial consequences of adding a retail outlet can be derived from databases for
retail clothing companies. The only difficult element to estimate is the in-store
investment. A visit to a local Levi’s outlet revealed the following information:
Building investment – 3,000 square feet leased for about $240,000 per year. The
lease rate of $80 per foot per year is typical for high-end malls. We capitalized
Fig. 4. An APL Formulation of the Profitability of Women’s Jeans Sold Through the
Original Levi’s Channel.
Comparing the normal wholesale channel with the owned retail channel, prof-
itability (ROIC) for women’s jeans falls by about 50%, from 31% to 16%. Levi
Strauss is paying a high price to gain customer intimacy in this segment!
Our next step is to estimate how Personal Pair™ changes the profitability analysis.
This requires that we first understand the CCTC value chain, its impact on the
aggregate financial footprint, and its impact on each pair of jeans. Based on the
CCTC proposal outlined earlier, a reasonable estimate of the new value chain is
as shown below.
As is obvious in comparing the original Levi’s and Personal Pair™ value chains,
the CCTC system is indeed much more “lean.” It adds “fit” value for the customer.
It has a well-defined value stream, including not only the physical and information
flows, but also the decision-making. The flow is interrupted only at the transporta-
tion nodes and is initiated by “pull” from a customer order. Although perfection
can never be achieved, the areas for kaizen seem obvious given the transparency
and simplicity of the system. All five lean thinking criteria have been markedly
improved. But, how is profitability affected?
Specific financial information with respect to this system is difficult to estimate
because of the short history for CCTC and the vastly different structure of the
chain. However, our research enabled us to make the estimates summarized in
Fig. 5. Again, the APL framework is used to show profit impact all along the causal
chains. If, indeed, CCTC can deliver what it has promised, the results are dramatic.
More customer satisfaction implies an opportunity for higher selling prices.
Levi’s priced each Personal Pair™ $15 higher initially, with very little customer
resistance. One year later, the premium was cut to $10 based on the estimated
price elasticity. Custom fit also eliminates mark-downs, driving up the net price.
Distribution costs are transferred to Fed Ex for which the customer pays separately.
Operating costs per pair in the store are cut by half, assuming half the orders are
Fig. 5. An APL Formulation of the Profitability of Personal Pair™ Women’s Jeans. Note:
The normal $8 for Strauss plus the normal $10 for the store (increase in personal selling
offset by decrease in space costs) but divided by 2 for 50% repeat orders by mail, plus $3
for CCTC ($8 + $10/2 + $3 = $16).
repeat business with zero store contact. Selling costs for the first pair would increase
given the time spent on measuring and fitting each customer. But, this happens only
once (until body dimensions change). The inventory and retail store investment
decrease substantially with only an offset of CCTC investment in computers
and software.
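The Fig. 5 note's operating cost formula for Personal Pair™ can be made explicit as a small sketch. The dollar figures are from the note; the variable names are ours.

```python
# Store operating cost per pair under Personal Pair(TM), following the
# Fig. 5 note: the normal $8 for Strauss, plus the normal $10 for the
# store halved because ~50% of orders are mail repeats with zero store
# contact, plus $3 for CCTC.

strauss_sga = 8.00
store_cost = 10.00   # higher personal selling offset by lower space cost
cctc_fee = 3.00
repeat_share = 0.50  # half of orders bypass the store entirely

cost_per_pair = strauss_sga + store_cost * (1 - repeat_share) + cctc_fee
print(f"operating cost per pair: ${cost_per_pair:.0f}")  # $16
```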
Overall, the CCTC opportunity is very enticing with very high ROIC. Given
that this model did not have an established record in retail merchandising, a test
phase was probably the wise choice for Levi’s.
CONCLUSION
Clearly, value chain reconfiguration is a central topic for many firms today in many
industries. Although value chain analysis can be framed in accounting terms as
the classic make/buy problem, the traditional accounting literature with its focus
on relevant cost analysis (RCA) does not provide much help with the strategic
aspects of the dilemma. The management literature is rich in discussing those
strategic issues, but is very thin on the related financial analysis. This literature’s
focus on transaction cost economics (TCE) is very appealing conceptually, but not
of much pragmatic help.
In this paper, we propose a new theoretical approach which extends conven-
tional RCA and TCE analysis to a full cost ROIC basis spanning the entire value
chain. This approach disaggregates the level of analysis from the firm as a whole
to an individual product sold to a particular customer segment. It couples the
SCM framework with an APL model to explicitly address the profit impact of
the managerial actions at each stage along the supply chain. We apply this new
REFERENCES
Anthony, R. N., & Welsch, G. (1977). Fundamentals of management accounting. Homewood, IL: Irwin.
Ask, J. A., & Laseter, T. M. (1998). Cost modeling: A foundation purchasing skill. Strategy and
Business, 10. Booz Allen & Hamilton.
Atkinson, A. A., Banker, R. D., Kaplan, R. S., & Young, S. M. (1997). Management accounting.
Englewood Cliffs, NJ: Prentice-Hall.
Beamish, P. W., & Killing, P. J. (Eds) (1997). Cooperative strategies. European perspectives. San
Francisco: New Lexington Press.
Carr, L. P., & Ittner, C. D. (1992). Measuring the cost of ownership. Journal of Cost Management, Fall.
Clark, K. B., & Fujimoto, T. (1991). Product development performance. Boston, MA: Harvard Business
School Press.
Coase, R. H. (1937). The nature of the firm. Economica, 4.
D’Aveni, R. A. (1993). Hypercompetition. New York, NY: Free Press.
D’Aveni, R. A., & Ilinitch, A. V. (1992). Complex patterns of vertical integration in the forest products
industry: Systematic and bankruptcy risk. Academy of Management Journal, 35.
D’Aveni, R. A., & Ravenscraft, D. J. (1994). Economies of integration vs. bureaucracy costs: Does
vertical integration improve performance? Academy of Management Journal, 37.
Epstein, M. J., Kumar, P., & Westbrook, R. A. (2000). The drivers of customer and corporate profitabil-
ity: Modeling, measuring and managing the causal relationships. Advances in Management
Accounting, 9.
Gadiesh, O., & Gilbert, J. L. (1998). Profit pools: A fresh look at strategy. Harvard Business Review,
May–June.
Gulati, R. (1995). Does familiarity breed trust? The implications of repeated ties for contractual choice
in alliances. Academy of Management Journal, 38.
Hagedoorn, J. (1993). Understanding the rationale of strategic technology partnering: Interorganiza-
tional modes of cooperation and sectoral differences. Strategic Management Journal, 14(5).
Harrigan, K. R. (1983). Strategies for vertical integration. Lexington, MA: Heath & Lexington Books.
Hennart, J. F. (1988). Upstream vertical integration in the aluminum and tin industry. Journal of
Economic Behavior and Organization, 9.
Hergert, M., & Morris, D. (1989). Accounting data for value chain analysis. Strategic Management
Journal, 10.
Horngren, C. T., Foster, G., & Datar, S. (1997). Cost accounting: A managerial emphasis. Englewood
Cliffs, NJ: Prentice-Hall.
Johnson, T. H. (1992). Relevance regained. New York, NY: Free Press.
Johnson, T. H., & Kaplan, R. S. (1987). Relevance lost. The rise and fall of management accounting.
Cambridge, MA: Harvard Business School Press.
Kaplan, R. S., & Atkinson, A. A. (1989). Advanced management accounting. Englewood Cliffs, NJ:
Prentice-Hall.
Kaplan, R. S., & Cooper, R. (1998). Cost & effect. Using integrated cost systems to drive profitability
and performance. Boston, MA: Harvard Business School Press.
Khanna, T. (1998). The scope of alliances. Organization Science, May–June, Special Issue.
Kogut, B. (1988). Joint ventures: Theoretical and empirical perspectives. Strategic Management Jour-
nal, 9(4).
Kogut, B. (1991). Joint ventures and the option to expand and acquire. Management Science, 37.
Lorenzoni, G., & Baden Fuller, C. (1995). Creating a strategic center to manage a web of partners.
California Management Review, 37.
Mowery, D. C. (Ed.) (1988). International collaborative ventures in U.S. manufacturing. Cambridge,
MA: Ballinger.
Nohria, N., & Eccles, R. G. (Eds) (1992). Networks and organizations: Structure, form, and action.
Boston, MA: Harvard Business School Press.
Normann, R., & Ramirez, R. (1994). Designing interactive strategy: From value chain to value
constellation. Chichester, UK: Wiley.
Porter, M. E. (1985). Competitive advantage. New York, NY: Free Press.
Prahalad, C. K., & Hamel, G. (1994). Competing for the future. Boston, MA: Harvard Business School
Press.
Quinn, J. B. (1992). Intelligent enterprise. A knowledge and service based paradigm for industry.
New York, NY: Free Press.
Quinn, J. B., Doorley, T. L., & Paquette, P. C. (1990). Technologies in services: Rethinking strategic
focus. Sloan Management Review, 3(1).
Rackham, N., Friedman, L., & Ruff, R. (1996). Getting partnering right: How market leaders create
long-term competitive advantage. New York, NY: McGraw-Hill.
Shank, J. K., & Govindarajan, V. (1992). Strategic cost analysis: The value chain perspective. Journal
of Management Accounting Research.
Shank, J. K., & Govindarajan, V. (1993). Strategic cost management: The new tool for competitive
advantage. New York, NY: Free Press.
Shank, J. K., Spiegel, E. A., & Escher, A. (1998). Strategic value analysis for competitive advan-
tage. An illustration from the petroleum industry. Strategy and Business, 10. Booz Allen
& Hamilton.
Shillinglaw, G. (1982). Managerial cost accounting. Homewood, IL: Irwin.
Slywotzky, A. J. (1996). Value migration. How to think several moves ahead of the competition. Boston,
MA: Harvard Business School Press.
Slywotzky, A. J., & Morrison, D. J. (1997). The profit zone. How strategic business design will lead
you to tomorrow’s profits. New York, NY: Times Business.
Westney, D. E. (1988). Domestic and foreign learning curves in managing international cooperative
strategies. In: F. J. Contractor & P. Lorange (Eds), Cooperative Strategies in International
Business. Lexington, MA: Lexington Books.
Williamson, O. E. (1975). Markets and hierarchies. New York, NY: Free Press.
Williamson, O. E. (1985). The economic institutions of capitalism. New York, NY: Free
Press.
Williamson, O. E. (1986). Economic organization. Brighton: Wheatsheaf Books.
Womack, J. P., & Jones, D. T. (1996). Lean thinking. New York, NY: Simon & Schuster.
THE MEASUREMENT GAP IN PAYING
FOR PERFORMANCE: ACTUAL AND
PREFERRED MEASURES
ABSTRACT
What is measured gets managed – especially if rewards depend on it. For
this reason many companies (over 70% in this survey) have upgraded
their performance measurement systems so as to include a mix of financial
and non-financial metrics. This study compares how companies currently
measure performance for compensation purposes with how their managers
think performance should be measured. We find significant measurement
gaps between actual and preferred measures, and we find that larger
measurement gaps are related to lower overall performance. The choice
of performance measures for compensation purposes is also related to the
attitudes of managers towards manipulation of reported results.
INTRODUCTION
Performance measures are powerful means of conveying which aspects of perfor-
mance are important to a company and which areas a manager needs to focus on to
be evaluated as a top performer. Managers direct their attention to those measures
that most strongly influence their compensation. Recognizing the motivational
effects of performance measures, many companies have implemented major
changes to improve their performance measurement systems. According to the
LITERATURE REVIEW
Recent surveys on performance measurement have documented managers’
widespread dissatisfaction with current performance measures. For example, the
Institute of Management Accountants’ annual surveys on performance measure-
ment practices have consistently shown, since the 1990s, that more than half of
the respondents rate their companies’ performance measurement systems as poor
or, at best, adequate (see summary of the recent survey in Frigo, 2002). The IMA
survey in 2001 indicated that non-financial metrics related to customers, internal
processes and learning and growth (three perspectives proposed in the balanced
scorecard framework developed by Kaplan & Norton, 1992, 1996, 2001) received
lower ratings than financial metrics.
A large-scale study conducted by the American Institute of Certified Public
Accountants showed that only 35% of the respondents regarded their company’s
THEORETICAL DEVELOPMENT
The person-organization fit literature is well-established in the human resource
management field (e.g. Chatman, 1989; O’Reilly et al., 1991; Posner, 1992). Fit
is defined in that literature as “the congruence between the norms and values of
organizations and the values of persons” (Chatman, 1989). Going beyond simply
measuring fit, organizational researchers have studied how individual values
interact with situations (e.g. incentives) to affect the attitudes and behaviors of
people in the workplace (O’Reilly et al., 1991). Person-organization fit has been
shown to influence vocational and job choices, job satisfaction, work adjustment,
organizational commitment and climate, and employee turnover (Chatman,
1989, 1991; O’Reilly et al., 1991). The importance of person-organization fit
for the ethical work climate has also received much attention by researchers in
business ethics (e.g. Sims & Kroeck, 1994). Yet, in the accounting literature,
person-organization fit has received only limited attention.
Management accounting researchers have mainly focused on fit from a macro
perspective, using either contingency theory (e.g. Merchant, 1984) or national
culture (Chow et al., 1996, 1999) to examine the effectiveness of actual control
system characteristics. Those studies have contributed additional evidence to
suggest that lack of fit can potentially increase the costs of attracting and retaining
employees, and may induce behaviors contrary to the firm’s interests.
In this study we attempt to extend prior research on management control
system fit in three main ways. First, we take a more focused perspective and
study fit at the level of interactions between individual managers and the specific
performance measurement systems that govern their work. Second, instead of
assuming what management preferences might be, we actually asked managers
to state their preferences for seventeen performance metrics commonly used
in performance-based incentive plans. This allowed us to directly quantify the
measurement gap or lack of fit. Third, we address a common criticism of previous
performance measurement research by investigating the empirical relationships
between selection of performance measures and actual performance.
We expect that existing performance measurement systems vary in how closely
they match managers’ preferences for performance metrics. When organizations
make decisions about which control system practices they will adopt, their choices
reflect primarily the values and preferences of those in charge of designing control
systems. There is little guarantee that those choices will be similarly valued by
all managers subject to such control systems, as demonstrated in previous studies
of person-organization fit in the performance measurement and incentive
compensation areas (Bento & Ferreira, 1992; Bento & White, 1998). While
selection, socialization and turnover are all powerful mechanisms to improve
The Measurement Gap in Paying for Performance 63
While the purpose of this exploratory study is not to provide a direct and
exhaustive test of this proposition, the results from this survey do provide initial
empirical evidence on the relationships among performance metrics, fit, and
organizational performance.
METHOD
Sample
We drew a sample of 100 managers in the mid-Atlantic area for this study. We
conducted initial interviews with the managers to explain the nature of the
project, to encourage participation, and to verify that their positions in
their firms included budget responsibility.
The sample covered a wide range of industries (30% in manufacturing, 70%
in the service sector). To ensure confidentiality, we asked each participating
manager to complete the survey questionnaire and mail it to us anonymously. We
received a total of 64 completed questionnaires, yielding a 64% response rate.
We attribute this unusually high response rate (relative to other survey studies) to
64 JEFFREY F. SHIELDS AND LOURDES FERREIRA WHITE
Measures
Performance Measures
The survey questionnaire included nine financial measures (sales growth, volume
of orders, shipments, expense control, profitability, receivables management, in-
ventory management, efficiency gains, cash flow) and eight non-financial measures
(quality of products or services, customer satisfaction, new product introduction,
on-time delivery, accomplishment of project milestones, achievement of strategic
objectives, market share, and employee satisfaction) selected from the literature
on performance measurement.
Gap Measure
We asked the managers to rate, for each financial and non-financial performance
measure, the extent to which it actually affected their compensation (actual
measures). We also asked the managers to rate, for each measure, the extent
to which they would like that measure to affect their compensation
(preferred measures). Both types of ratings used a Likert-type scale ranging
from “1” (very little) to “7” (very much). Consistent with the literature on
person-organization fit (e.g. Chow et al., 1999), we computed the “measurement
gap” or lack of fit by subtracting actual use from preferred use of performance
measures for compensation purposes. A positive measurement gap indicates
that managers would prefer more emphasis on those measures. A negative gap
indicates that managers would prefer less emphasis on those measures.
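The gap computation just described can be sketched in a few lines of Python (a minimal illustration; the measure names and ratings below are hypothetical examples, not the survey data):

```python
# Measurement gap = preferred use minus actual use of each performance
# measure for compensation purposes, both rated on a 1-7 Likert scale.
# The ratings below are hypothetical, not drawn from the survey.
actual = {"profitability": 6, "receivables_management": 5, "employee_satisfaction": 2}
preferred = {"profitability": 6, "receivables_management": 2, "employee_satisfaction": 5}

gap = {m: preferred[m] - actual[m] for m in actual}

# Positive gap: the manager wants more emphasis on the measure;
# negative gap: the manager wants less emphasis.
for measure, g in sorted(gap.items(), key=lambda kv: kv[1]):
    print(f"{measure}: {g:+d}")
# receivables_management: -3
# profitability: +0
# employee_satisfaction: +3
```

Averaging such per-manager gaps across respondents yields measure-level gaps of the kind plotted in Figs 5 and 6.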
Performance
We measured performance using a nine-item instrument that has been widely
applied by management accounting researchers (e.g. Brownell & Hirst, 1986;
Kren, 1992; Nouri et al., 1995). This instrument, while relatively subjective,
was employed because more objective performance indicators are usually not
available for lower-level responsibility centers. Matching self-evaluations with
supervisor evaluations was not possible given that the scope of this study (which
included earnings management issues) required that the various responsibility
center managers could rely on absolute survey confidentiality within their firms.
Gaming
We described various scenarios in which managers decide to improve reported
profits in the short-term, and asked the respondents to rate the likelihood that they
would make such decisions. We adapted the scenarios from previous studies on
earnings management (Bruns & Merchant, 1989, 1990). For example, the scenarios
described decisions directed at either increasing sales (e.g. shipping earlier than
scheduled in order to meet a budget target), or reducing expenses (e.g. postponing
discretionary expenses to another budgeting period).
Actual Measures
Figures 1 and 2 show the financial and non-financial measures the respondents
reported as the most commonly used for compensation purposes. At least half
of the managers reported that their compensation is largely influenced by
performance measured in terms of sales, receivables management, volume of
orders, and profitability. Over 70% of the respondents reported that non-financial measures
play a major role in determining their compensation. The three most widely
used non-financial measures are new product introduction, customer satisfaction,
and achievement of strategic objectives. This result is similar to the findings of
the AICPA survey (AICPA & Maisel, 2001) and the Conference Board’s survey
(Gates, 1999).
Preferred Measures
When asked what performance factors they would like to see being used to affect
their own compensation, over half of the managers reported preferences for
efficiency gains and profitability for financial measures (Fig. 3), while employee
satisfaction, achievement of strategic objectives and quality are the three most
preferred non-financial measures (Fig. 4). Their preferences may be explained
by several different factors. Managers may feel that these are the aspects of
Table 1. Distributions of Actual and Preferred Performance Measures.
[Columns: Theoretical Range; Actual Measures — Actual Range, Mean, Standard Deviation; Preferred Measures — Actual Range, Mean, Standard Deviation. Individual values are not reproduced here.]
Note: Extent to which the performance measure affects the manager’s compensation (1 = very little, 7 = very much).
Fig. 1. Actual Use of Financial Performance Measures for Compensation Purposes.
performance that they can most directly control or they may consider that
these factors most closely reflect the decisions that they make on a day-to-day
basis. Preferences for particular performance measures may also be driven
by the managers’ ability in those areas, so that managers seek a performance
measurement system that closely matches their skill set. Interestingly enough,
two of the most commonly used measures are reported among the least preferred:
receivables management and new product introduction.
Measurement Gap
We performed some additional analysis to calculate the level of fit between actual
practices and managerial preferences with respect to performance measures
affecting compensation. As explained in the method section above, the gap mea-
sure was obtained by subtracting actual use from preferred use of performance
measures for compensation purposes. Figure 5 shows the measurement gap for
financial measures, while the gap for non-financial measures appears in Fig. 6.
Considering both financial and non-financial performance measures combined,
the measurement gap is smallest for inventory management, expense controls,
and on-time delivery; it is largest for receivables management, new product
introduction, and sales. On average, there is greater disagreement (in absolute
terms) with the use of non-financial than financial measurements when companies
evaluate how much managers will get paid. The relative signs of the gap measure
provide additional information: the respondents reported that they would rather
have less emphasis on receivables management, new product introduction, and
sales for compensation purposes. One possible interpretation is that managers
would prefer less emphasis on these measures because they do not believe that
these measures adequately capture their performance. Conversely, the managers
responded that they would rather have more emphasis on employee satisfaction and
efficiency gains.
In this survey we found that most companies are, in fact, trying to balance
the use of financial and non-financial performance measures. As the correlation
Fig. 5. Financial Performance Measurement Gap: Preferred Use Minus Actual Use.
matrix of Table 2 shows, there are some significant correlations among financial
and non-financial metrics.
For example, the respondents who reported that their compensation is largely
influenced by sales also reported that their pay is contingent on achieving strategic
objectives. Similarly, managers who are paid based on how well they perform with
respect to efficiency gains also tend to be the ones who reported that employee
satisfaction plays a major role in determining their rewards. Managers in those
situations would have incentives to cut costs, but not in ways that would hurt
employee morale so much that the short-term cost savings would end up hurting
long-term profits and growth. Likewise, we found a strong correlation between the
emphasis on cash flows and quality for compensation purposes. This combination
encourages managers to invest in quality without losing sight of the need to
Fig. 6. Non-Financial Performance Measurement Gap: Preferred Use Minus Actual Use.
generate cash flows. Those results are consistent with existing evidence that com-
panies are striving to get a more balanced picture of performance in order to avoid
distortions caused by a concern directed exclusively toward short-term, historic-
based financial measures (see evidence summarized by Malina & Selto, 2001).
This balance depends on the availability of non-financial indicators that can
deliver accurate, relevant and timely information on how well managers are doing
in those areas. But in many companies those measures are not readily available.
Besides measurement difficulties, companies also face serious constraints with
respect to what their information systems can deliver. According to a survey by
The Conference Board and the international consulting firm A. T. Kearney, Inc.,
57% of the respondents said that their companies’ information technology limited
their ability to implement the necessary changes in their current performance
Table 2. Correlation Matrix for Actual Financial and Non-Financial Performance Measures Used.

Measures                   1        2        3        4        5
1 Sales growth
2 Volume of orders         0.37***
3 Shipments                0.20     0.58***
4 Expense control          0.07     0.23*    0.55***
5 Profitability            0.10     0.25**   0.15     0.49***
6 Receivables management   0.28**   0.55***  0.27**   0.30**   0.44***
[Rows 7–17 of the matrix are not reproduced here.]
measurement systems (Gates, 1999). Similarly, more than half of the respondents
to the AICPA survey anticipated technology changes in their organizations in
the next year to 18 months, and 79% rated the quality of information in their
performance measurement systems as poor to adequate (AICPA & Maisel, 2001).
IMPACT ON PERFORMANCE
Firms invest large amounts of resources in designing, implementing and
maintaining performance measurement systems. It is thus critical to assess
whether the choice of performance measures has had any significant impact on
actual overall performance and on managerial decisions. In this study we used
Pearson correlation coefficients to evaluate the relationship between performance
measures and managerial performance.
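This correlation step can be illustrated with a short, standard-library Python sketch (the rating vectors are hypothetical; in practice a routine such as scipy.stats.pearsonr would also return the p-values reported in the tables):

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length vectors."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical data: each manager's 1-7 rating of how much one measure
# affects pay, paired with the manager's self-rated overall performance.
measure_use = [2, 4, 5, 3, 6, 7, 1, 5]
performance = [3, 4, 6, 3, 5, 7, 2, 4]
r = pearson_r(measure_use, performance)
print(round(r, 3))
```

Repeating this for each of the seventeen measures, against overall performance and against the gaming score, generates the coefficient columns of the tables that follow.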
Table 3, Panel A: Correlation between actual financial measures used for compensation purposes and
performance
Sales growth 0.13 (0.319)
Volume of orders 0.04 (0.782)
Shipments 0.17 (0.178)
Expense control 0.15 (0.224)
Profitability 0.17 (0.169)
Receivables management −0.03 (0.801)
Inventory management 0.00 (0.971)
Efficiency gains 0.26 (0.039)*
Cash flow −0.09 (0.483)
Panel B: Correlation between actual non-financial measures used for compensation purposes and
performance
Quality of products or services 0.16 (0.207)
Customer satisfaction 0.19 (0.128)
New product introduction 0.23 (0.063)*
On-time delivery 0.16 (0.198)
Accomplishment of project milestones 0.14 (0.258)
Achievement of strategic objectives 0.37 (0.003)*
Market share 0.33 (0.008)*
Employee satisfaction 0.04 (0.736)
∗ Significant at p < 0.10.
Panels A and B of Table 3 show that managers who have their compensation
primarily tied to efficiency gains, new product introduction, achievement of
strategic objectives, and market share tend to be the ones who perform the best
overall. This result is consistent with Kaplan and Norton’s central argument
that individual performance measures must be directly tied to strategy (Kaplan
& Norton, 2001). This evidence also confirms what many companies have
learned, albeit the hard way: there has to be a strong link between financial and
non-financial performance measurements, lest the two work against each
other. For example, a company that is doing well in terms of achieving
certain strategic objectives, introducing new products, and controlling a
significant market share may still face financial difficulties if efficiency
is not properly considered when evaluating and rewarding its managers.
In Panels A and B of Table 4 the correlations between managerial preferences
for performance measures and overall performance are reported. The three
Table 4, Panel A: Correlation between preferred financial measures used for compensation purposes
and performance
Sales growth 0.25 (0.049)*
Volume of orders 0.32 (0.009)*
Shipments 0.14 (0.283)
Expense control 0.27 (0.028)*
Profitability 0.15 (0.244)
Receivables management 0.08 (0.506)
Inventory management −0.05 (0.696)
Efficiency gains −0.04 (0.738)
Cash flow 0.31 (0.014)*
Panel B: Correlation between preferred non-financial measures used for compensation purposes and
performance
Quality of products or services 0.28 (0.023)*
Customer satisfaction 0.30 (0.018)*
New product introduction 0.14 (0.254)
On-time delivery 0.28 (0.024)*
Accomplishment of project milestones 0.47 (0.000)*
Achievement of strategic objectives 0.36 (0.003)*
Market share 0.11 (0.371)
Employee satisfaction −0.06 (0.654)
∗ Significant at p < 0.10.
Table 5, Panel A: Correlation between measurement gap in financial measures used for compensation
purposes and performance
Sales growth −0.25 (0.044)*
Volume of orders −0.42 (0.000)*
Shipments −0.03 (0.827)
Expense control −0.06 (0.662)
Profitability −0.14 (0.255)
Receivables management −0.11 (0.401)
Inventory management −0.15 (0.226)
Efficiency gains −0.21 (0.092)*
Cash flow −0.08 (0.506)
Panel B: Correlation between measurement gap in non-financial measures used for compensation
purposes and performance
Quality of products or services 0.02 (0.860)
Customer satisfaction −0.19 (0.123)
New product introduction −0.18 (0.148)
On-time delivery 0.05 (0.707)
Accomplishment of project milestones 0.06 (0.630)
Achievement of strategic objectives −0.35 (0.004)*
Market share −0.15 (0.233)
Employee satisfaction −0.09 (0.461)
∗ Significant at p < 0.10.
that managers want to see as factors influencing their compensation. Some experts
argue that striving too hard to achieve consensus may paralyze an organization’s
effort to implement a new performance measurement system. Moreover, an ex-
cessive concern about consensus may lead to an incoherent measurement system,
one that uses diverse measures to please groups with different interests but has no
clear link to overall strategy.
We explored the issue of consensus by conducting further correlation analysis
on the relationship between the level of disagreement with the performance
measurement system and performance. The results in Table 5 show that the
magnitude of the disagreements does matter. Disagreements with the use of
volume of customer orders, strategic objectives, sales, and efficiency gains for
compensation purposes are associated with lower performance. One possible
explanation is that managers who disagree strongly about having order volume,
strategic objectives, sales, and efficiency gains influence their compensation lack
the motivation necessary to achieve high performance levels. These measurement
Panel A: Correlation between actual financial measures used for compensation purposes and gaming
Sales growth 0.18 (0.145)
Volume of orders 0.02 (0.862)
Shipments 0.01 (0.929)
Expense control 0.06 (0.638)
Profitability −0.05 (0.666)
Receivables management 0.12 (0.356)
Inventory management −0.04 (0.738)
Efficiency gains 0.16 (0.220)
Cash flow −0.02 (0.875)
Panel B: Correlation between actual non-financial measures used for compensation purposes and
gaming
Quality of products or services 0.05 (0.694)
Customer satisfaction −0.10 (0.422)
New product introduction 0.01 (0.933)
On-time delivery −0.12 (0.329)
Accomplishment of project milestones 0.08 (0.526)
Achievement of strategic objectives 0.30 (0.018)*
Market share 0.21 (0.096)*
Employee satisfaction 0.11 (0.398)
∗ Significant at p < 0.10.
gaps and their relationship with performance suggest that companies need to
devote more attention to the preferences of participants in pay-for-performance
plans, and involve them in the process of selecting performance measures to
be used for compensation purposes. This result is in sharp contrast with one
finding in the Conference Board survey mentioned above: while recognizing the
potential resistance to change in performance measurement systems, only 9% of
the respondents said they would “consider identifying key stakeholder reasons
for resisting the strategic performance measurement effort” (Gates, 1999, p. 6).
EARNINGS MANAGEMENT
Critics of non-financial performance measures often point out that these measures
are more easily manipulated than financial measures, since they are neither audited
Panel A: Correlation between preferred financial measures used for compensation purposes and
gaming
Sales growth 0.13 (0.313)
Volume of orders 0.18 (0.158)
Shipments 0.07 (0.607)
Expense control −0.13 (0.299)
Profitability 0.10 (0.417)
Receivables management 0.03 (0.794)
Inventory management −0.04 (0.751)
Efficiency gains 0.06 (0.664)
Cash flow 0.21 (0.090)*
Panel B: Correlation between preferred non-financial measures used for compensation purposes and
gaming
Quality of products or services −0.00 (0.984)
Customer satisfaction 0.21 (0.089)*
New product introduction 0.38 (0.766)
On-time delivery 0.20 (0.108)
Accomplishment of project milestones 0.38 (0.002)*
Achievement of strategic objectives 0.28 (0.027)*
Market share 0.08 (0.516)
Employee satisfaction 0.06 (0.650)
∗ Significant at p < 0.10.
Panel A: Correlation between measurement gap in financial measures used for compensation
purposes and gaming
Sales growth −0.01 (0.931)
Volume of orders 0.15 (0.225)
Shipments 0.06 (0.665)
Expense control 0.15 (0.228)
Profitability 0.14 (0.269)
Receivables management −0.08 (0.535)
Inventory management 0.00 (0.983)
Efficiency gains −0.10 (0.425)
Cash flow 0.19 (0.138)
Panel B: Correlation between measurement gap in non-financial measures used for compensation
purposes and gaming
Quality of products or services −0.04 (0.759)
Customer satisfaction 0.30 (0.018)*
New product introduction 0.02 (0.854)
On-time delivery 0.27 (0.029)*
Accomplishment of project milestones 0.28 (0.025)*
Achievement of strategic objectives 0.06 (0.666)
Market share −0.13 (0.306)
Employee satisfaction −0.07 (0.589)
∗ Significant at p < 0.10.
the most with the degree of importance given to customer satisfaction, on-time
delivery, and project milestones are the ones most likely to manipulate reported
performance.
The results we obtained relating the measurement gap (the difference between
preferred and actual measures) to overall performance and gaming suggest that
the process of designing a measurement system may be at least as important as
the actual measures selected in the end. Companies need to involve those who will
participate in pay-for-performance plans in the process of selecting appropriate
performance measures. Teams in charge of designing measurement systems could
benefit from an awareness of managers’ preferences, as those preferences may
reveal the measures that most closely reflect what managers can control.
A well-managed design process should increase motivation and commitment,
promote improved performance, and reduce the incidence of a gaming attitude.
This is particularly true in companies that value participation and consensus. The
design process should encourage open, honest dialogue on strategic objectives, and
it should involve as many incentive plan participants as possible. When designers
and participants reach an agreement about strategic objectives, then they can
select specific measures based on those objectives without risking a loss of focus.
Managing this design process presents quite a challenge for an organization’s
executive team, for this team must strive to stimulate open discussion while
maintaining a coherent vision. The design process may be seriously constrained
by company politics. This process may become even more difficult when conflicts
arise concerning how non-financial measures should be defined. In our survey we
found that managers on average disagree more with their company’s use of non-
financial measures than with their use of financial measures. Such disagreement
may result at least in part from inexperience with non-financial measurements.
Our finding of a high use of non-financial measures in pay-for-performance
plans presents an important challenge to the management accounting profession.
While most management accountants have been trained to produce, analyze and
communicate financial information, few are ready for the technical and human
adjustments necessary to deal with non-financial information that is useful for
managers. Not surprisingly, the AICPA survey anticipated that the effort
finance professionals devote to performance measurement will increase
in the near future (AICPA & Maisel, 2001). Furthermore, the need to develop and
implement non-financial measures brings additional pressure to bear on informa-
tion systems to provide reliable information on how well managers are doing in
a wide range of areas. In this information age, it is increasingly important to have
information systems capable of supporting the complexities of cutting-edge perfor-
mance measurement systems. Current surveys show that such information systems
are in short supply.
ACKNOWLEDGMENT
Research support from a grant by the Merrick School of Business is gratefully
acknowledged.
REFERENCES
American Institute of Certified Public Accountants & Maisel, L. (2001). Performance measurement
practices survey results. Jersey City, NJ: AICPA.
Banker, R., Konstans, C., & Mashruwala, R. (2000). A contextual study of links between employee
satisfaction, employee turnover, customer satisfaction and financial performance. Working
Paper. University of Texas at Dallas.
Banker, R., Potter, G., & Srinivasan, D. (2000). An empirical investigation of an incentive plan that
includes nonfinancial performance measures. The Accounting Review (January), 65–92.
Bento, R., & Ferreira, L. (1992). Incentive pay and organizational culture. In: W. Bruns (Ed.),
Performance Measurement, Evaluation and Incentives (pp. 157–180). Boston: Harvard
Business School Press.
Bento, R., & White, L. (1998). Participant values and incentive plans. Human Resource Management
Journal (Spring), 47–59.
Brownell, P., & Hirst, M. (1986). Reliance on accounting information, budgetary participation,
and task uncertainty: Tests of a three-way interaction. Journal of Accounting Research,
241–249.
Bruns, W., & Merchant, K. (1989). Ethics test for everyday managers. Harvard Business Review
(March–April), 220–221.
Bruns, W., & Merchant, K. (1990). The dangerous morality of managing earnings. Management
Accounting (August), 22–25.
Chatman, J. (1989). Improving interactional organizational research: A model of person-organization
fit. Academy of Management Review, 333–349.
Chatman, J. (1991). Matching people and organizations: Selection and socialization in public
accounting firms. Administrative Science Quarterly, 36, 459–484.
Chow, C., Kato, Y., & Merchant, K. (1996). The use of organizational controls and their effects
on data manipulation and management myopia: A Japan vs. U.S. comparison. Accounting,
Organizations and Society, 21, 175–192.
Chow, C., Shields, M., & Wu, A. (1999). The importance of national culture in the design of management
controls for multi-national operations. Accounting, Organizations and Society, 24, 441–461.
Epstein, M. J., Kumar, P., & Westbrook, R. A. (2000). The drivers of customer and corporate
profitability: Modeling, measuring, and managing the causal relationships. Advances in
Management Accounting, 9, 43–72.
Frigo, M. (2001). 2001 Cost management group survey on performance measurement. Montvale, NJ:
Institute of Management Accountants.
Frigo, M. (2002). Nonfinancial performance measures and strategy execution. Strategic Finance
(August), 6–9.
Gates, S. (1999). Aligning strategic performance measures and results. New York: Conference Board.
Ittner, C., & Larcker, D. (1998a). Innovations in performance measurement: Trends and research
implications. Journal of Management Accounting Research, 10, 205–238.
Ittner, C., & Larcker, D. (1998b). Are nonfinancial measures leading indicators of financial per-
formance? An analysis of customer satisfaction. Journal of Accounting Research (Suppl.),
1–35.
Kaplan, R., & Norton, D. (1992). The balanced scorecard: Measures that drive performance. Harvard
Business Review (January–February), 71–79.
Kaplan, R., & Norton, D. (1996). The balanced scorecard. Boston: Harvard Business School Press.
Kaplan, R., & Norton, D. (2001). The strategy-focused organization. Boston: Harvard Business School Press.
Kren, L. (1992). Budgetary participation and managerial performance. The Accounting Review,
511–526.
Leahy, T. (2000). All the right moves. Business Finance (April), 27–32.
Malina, M., & Selto, F. (2001). Communicating and controlling strategy: An empirical study of the
effectiveness of the balanced scorecard. Journal of Management Accounting Research, 47–90.
Merchant, K. (1984). Influences on departmental budgeting: An empirical examination of a
contingency model. Accounting, Organizations and Society, 9, 291–307.
Nagar, V., & Rajan, M. (2001). The revenue implications of financial and operational measures of
product quality. The Accounting Review (October), 495–513.
Nouri, H., Blau, G., & Shahid, A. (1995). The effect of socially desirable responding (SDR) on
the relation between budgetary participation and self-reported job performance. Advances in
Management Accounting, 163–177.
O’Reilly, C., Chatman, J., & Caldwell, D. (1991). People and organizational culture: A profile compar-
ison approach to assessing person-organization fit. Academy of Management Journal, 487–516.
ABSTRACT
Despite arguments that traditional product costing and variance analysis
operate contrary to the strategic goals of advanced manufacturing practices
such as just in time (JIT), total quality management (TQM), and Six Sigma, lit-
tle empirical evidence exists that cost accounting practices (CAP) are chang-
ing in the era of continuous improvement and waste reduction. This research
supplies some of the first evidence of what CAP are employed to support
the information needs of a world-class manufacturing environment. Using
survey data obtained from executives of 121 U.S. manufacturing firms, the
study examines the relationship between the use of JIT, TQM, and Six Sigma
with various forms of traditional and non-traditional CAP. Analysis of variance
(ANOVA) tests indicate that most traditional CAP continue to be used
in all manufacturing environments, but a significant portion of world-class
manufacturers supplement their internal management accounting system with
non-traditional management accounting techniques.
INTRODUCTION
Firms competing in a global arena and adopting sophisticated manufacturing
technologies, such as total quality management (TQM) and just-in-time (JIT), re-
quire a complementary management accounting system (MAS) (Sillince & Sykes,
1995; Welfe & Keltyka, 2000). The MAS should support advanced manufacturing
technologies by providing integrated information to interpret and to assess
activities that have an impact on strategic priorities. The adoption of advanced
manufacturing practices suggests a shift away from a short-term, financially-
oriented product focus towards a modified, more non-financial, process-oriented
focus that fits operations strategies (Daniel & Reitsperger, 1991) and integrates
activities with strategic priorities (Chenhall & Langfield-Smith, 1998b).
Previous studies have reported that organizations using more efficient produc-
tion practices make greater use of non-traditional information and reward systems
(Banker et al., 1993a, b; Callen et al., 2002; Fullerton & McWatters, 2002; Ittner
& Larcker, 1995; Patell, 1987); yet, little empirical evidence exists that cost
accounting practices (CAP) of a firm’s MAS also are changing. The objectives
of traditional product costing and variance analysis seemingly operate contrary to
the strategic goals of continuous improvement and waste reduction embodied in
advanced manufacturing production processes. It is argued that the benefits from
JIT and TQM implementation would be captured and reflected more clearly by
the parallel adoption of more simplified, non-traditional CAP. However, minimal
evidence exists to support the assessment that current accounting practices are
harmful to the improvement of manufacturing technology. In fact, most studies
that have examined this issue find that companies continue to rely on conventional
accounting information, even in sophisticated manufacturing environments (e.g.
Baker et al., 1994; Cobb, 1992; McNair et al., 1989). Zimmerman (2003, p. 11)
suggests that managers must derive some hidden benefits from continuing to
use “presumably inferior accounting information” in their decision making. For
example, management control over operations provided by the existing MAS
may outweigh the benefits of other systems that are better for decision-making
purposes. The objective of this study is to explore whether specific CAP are
changing to meet the information needs of advanced manufacturing environments.
To examine the CAP used in advanced manufacturing environments, a survey
instrument was sent to executives representing 182 U.S. manufacturing firms.
Data from the 121 survey responses were analyzed to determine whether the
use of non-traditional CAP is linked to the implementation of JIT, TQM, and
Six Sigma. The results show that there have been minimal changes in the use
of traditional CAP. However, evidence exists that rather than replacing the
traditional, internal-accounting practices, supplementary measures have been
An Empirical Examination of Cost Accounting Practices 87
added to provide more timely and accurate information for internal planning and
control. Perhaps much of the criticism of CAP is unfounded, and the emergence of
supplemental financial and non-financial information, combined with traditional
accounting techniques, equips management with the appropriate decision-making
and control tools for an advanced manufacturing environment. This paper provides
some of the first empirical evidence of what CAP actually are being used in
conjunction with JIT, TQM, and Six Sigma.
The remainder of this paper is organized as follows: The following section exam-
ines the prior literature related to advanced manufacturing practices and CAP, and
identifies the research question. The next section describes the research method.
The following two sections present and discuss the empirical results. The final
section briefly summarizes the study and provides direction for future research.
Research Question
The benefits that firms reap from implementing advanced manufacturing
techniques appear to be enhanced by complementary changes in their internal
accounting measures (Ahmed et al., 1991; Ansari & Modarress, 1986; Barney,
1986; Ittner & Larcker, 1995; Milgrom & Roberts, 1995). “Quality improvement
advocates argue that the organizational changes needed for effective TQM
require new approaches to management accounting and control,” with more
comprehensive distribution of new types of information that measure quality
and team performance (Ittner & Larcker, 1995, p. 3). This study examines the
association between the adoption of advanced manufacturing techniques and
specific CAP in terms of the following research question:
Do firms that implement advanced manufacturing techniques such as JIT, TQM,
and Six Sigma use more non-traditional cost accounting practices?
RESEARCH METHOD
Data Collection
Survey Instrument
To explore the research question, a detailed survey instrument was used to
collect specific information about the manufacturing operations, product costing
methods, information and incentive systems, advanced manufacturing practices
employed, and characteristics of the respondent firms.1 The majority of the
questions on the survey instrument were either categorical or interval Likert
scales. Factor analysis combined the Likert-scaled questions into independent
measures to test the research question. To evaluate the survey instrument for
readability, completeness, and clarity, a limited pretest was conducted by soliciting
feedback from business professors and managers from four manufacturing firms
that were familiar with advanced manufacturing practices. Appropriate changes
were made to reflect their comments and suggestions.
Sample Firms
The initial sample for this study comprised 253 pre-identified executives from
manufacturing firms that had responded to a similar survey study in 1997
(see Fullerton & McWatters, 2001, 2002). Of the original 253 firms, 66 (26%)
were no longer viable, independent businesses. Over half of the initial respondents
90 ROSEMARY R. FULLERTON AND CHERYL S. McWATTERS
in the 187 remaining firms were no longer with the company. Replacements
were contacted in all but the five firms that declined to participate. Thus, 182
manufacturing executives were contacted a maximum of three times via e-mail,
fax, or mail.2 One hundred twenty-one usable responses were received, for an
overall response rate of 66%. The majority of the respondents had titles equivalent
to the Vice President of Operations, the Director of Manufacturing, or the Plant
Manager. They had an average of 19 years of management experience, including
12 years in management with their current firm.
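The sample-attrition figures above reduce to simple arithmetic; as a check, the short sketch below re-derives the reported 66% response rate using only figures taken directly from the text.

```python
# Sample-attrition arithmetic, using only figures reported in the text.
initial_firms = 253      # pre-identified executives from the 1997 survey
non_viable = 66          # firms no longer viable, independent businesses
declined = 5             # firms that declined to participate

remaining = initial_firms - non_viable   # 187 firms still in business
contacted = remaining - declined         # 182 executives contacted
usable_responses = 121

response_rate = usable_responses / contacted
print(f"{contacted} contacted, {usable_responses} usable "
      f"-> {response_rate:.0%} response rate")
```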
The respondent firms had a primary two-digit SIC code within the manufacturing
range of 20 to 39. As shown in Table 1, the majority (64%) of the
respondent firms are from three industries: industrial machinery (SIC-35, 16%),
electronics (SIC-36, 28%), and instrumentation (SIC-38, 20%).3
Table 1. Distribution of Respondent Firms by Industry and Production Process.a
20 – Food 0 0 1 0 0 2 3
22 – Textiles 0 0 0 0 0 1 1
25 – Furniture & fixtures 5 3 2 3 2 0 5
26 – Paper & allied products 0 1 1 0 0 0 2
27 – Printing/publishing 0 1 0 0 0 0 1
28 – Chemicals & allied products 0 4 0 0 0 5 9
30 – Rubber products 1 1 0 1 0 0 1
32 – Nonmetallic mineral products 0 1 1 0 0 0 1
33 – Primary metals 3 4 5 2 2 3 9
34 – Fabricated metals 3 3 1 2 1 2 6
35 – Industrial machinery 11 10 5 8 2 6 19
36 – Electronics 14 16 16 7 7 9 34
37 – Motor vehicles & accessories 2 3 2 2 2 1 4
38 – Instrumentation 12 11 7 7 2 7 24
39 – Other manufacture 1 1 1 1 1 1 2
Totals 52 59 42 33 19 37 121
Supplemental information: 15 firms implemented JIT exclusively; 13 firms implemented TQM exclusively; 5 firms implemented Six Sigma exclusively; 13 firms implemented only TQM and Six Sigma; 5 firms implemented only JIT and Six Sigma.
a Classification of production processes was self-identified by survey respondents.
ten consensus JIT elements identified in the work of established JIT authors (e.g.
Hall et al.). These consensus elements used by White (1993), White et al. (1999),
White and Prybutok (2001), Fullerton and McWatters (2001, 2002) and Fullerton
et al. (2003) as JIT indicators are designated as follows: focused factory, group
technology, reduced setup times, total productive maintenance, multi-function
employees, uniform workload, kanban, JIT purchasing, total quality control, and
quality circles.4
Factor Analysis
Using the above-noted JIT indicators, eleven survey questions asked respondents
to identify their firm’s level of JIT implementation on the basis of a six-point Likert
scale, ranging from “no intention” of implementing the identified JIT practice to its
being “fully implemented.”5 Using the principal components method, these items
were subjected to a factor analysis. Three components of JIT with eigenvalues
greater than 1.0 were extracted from the analysis, representing 61% of the total
variance in the data.6 The first factor is a manufacturing component that explains the
extent to which companies have implemented general manufacturing techniques
associated with JIT, such as focused factory, group technology, uniform workload,
and multi-function employees. The second JIT factor is a quality component that
examines the extent to which companies have implemented procedures for improv-
ing product and process quality. The third factor captures uniquely JIT practices,
describing the extent to which companies have implemented JIT
purchasing and kanban. For the results of the factor analysis, which are similar to
those of previous studies by Fullerton and McWatters (2001, 2002), see Table 2.
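As a numerical illustration of the extraction step described above (principal components on Likert items, retaining components with eigenvalues above 1.0), the sketch below runs the same procedure on synthetic 6-point responses. The seed, loadings, and three-construct structure are illustrative assumptions, not the study's data.

```python
import numpy as np

# Synthetic 6-point Likert responses for 121 "firms" on 10 JIT items,
# generated from three latent constructs (an assumption for illustration).
rng = np.random.default_rng(0)
n_firms, n_items = 121, 10
latent = rng.normal(size=(n_firms, 3))
loadings = rng.uniform(0.4, 0.9, size=(3, n_items))
raw = latent @ loadings + rng.normal(scale=0.8, size=(n_firms, n_items))
likert = np.clip(np.round(3.5 + raw), 1, 6)

# Principal components: eigen-decompose the item correlation matrix
# and keep components with eigenvalues > 1.0 (Kaiser criterion).
corr = np.corrcoef(likert, rowvar=False)
eigvals = np.linalg.eigvalsh(corr)[::-1]          # descending order
retained = eigvals[eigvals > 1.0]
explained = retained.sum() / n_items              # share of total variance
print(f"retained {retained.size} components explaining "
      f"{explained:.0%} of total variance")
```

The sum of all eigenvalues of a correlation matrix equals the number of items, so the retained eigenvalues divided by the item count give the share of total variance explained, mirroring the 61% figure reported in the text.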
McNair et al. (1989) identified three trends that differentiate traditional and
non-traditional MAS: (1) a preference for process costing; (2) the use of actual
costs instead of standard costs; and (3) a greater focus on traceability of costs.
Traditional variance reports support maximum capacity utilization, which
contradicts the JIT objective to produce only what is needed, when it is needed.
Traditional CAP encourage production of goods with a high contribution margin
and ignore constraints or bottlenecks. For performance evaluation, shop-floor
cost accounting measures emphasize efficiency, and encourage large batches and
unnecessary inventory (Thorne & Smith, 1997). Cooper (1995) suggests that the
missing piece to the puzzle of Japanese cost superiority in lean enterprises is
the role of cost management systems. Western enterprises use cost accounting
systems, rather than cost management systems, which have different objectives.
Cost accounting systems report distorted product costs without helping firms
manage costs. Cost management systems control costs by designing them out of
products through such techniques as target costing and value engineering.
Process costing (PROCESS), which simplifies inventory accounting procedures
by reducing the need to track inventory, is generally considered to be better suited
to the JIT environment than job-order costing. When Hewlett-Packard introduced
JIT, all products were produced and accounted for in batches. JIT gradually
transformed manufacturing into a continuous process. The MAS changed to
process costing, and CAP were simplified and streamlined (Patell, 1987). Over
half of the sample firms in Swenson and Cassidy (1993) switched from job-order
costing to process costing after JIT implementation.
Parallel to the physical flow of inventory through the system is the recording
required to account for it. In a process environment, traditional costing
methods have tracked work-in-process inventory in equivalent units. In an
advanced manufacturing environment where both raw materials and work-in-process
inventory are minimized, detailed tracking of inventory can be unnecessary
(Scarlett, 1996). The simplification of product costing and inventory valuation
in a JIT environment calls for backflush accounting (BCKFL), where inventory
costs are determined at the completion of the production cycle (Haldane, 1998).
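The backflush idea can be reduced to a small sketch: rather than tracking costs through work-in-process, cost is assigned once at the completion trigger point using standard unit costs. The figures and cost components below are hypothetical, invented only to illustrate the mechanics.

```python
# Hypothetical standard costs per unit (illustrative values only).
STANDARD_COST = {"materials": 12.50, "conversion": 7.25}

def backflush(units_completed: int) -> dict:
    """Backflush costing: assign inventory cost only at completion of
    the production cycle, skipping detailed work-in-process tracking."""
    return {component: round(units_completed * unit_cost, 2)
            for component, unit_cost in STANDARD_COST.items()}

entry = backflush(units_completed=400)
print(entry)  # {'materials': 5000.0, 'conversion': 2900.0}
```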
Although still widely used, standard costing (STDRD) was developed in a differ-
ent business environment than currently exists. Standards in a traditional standard
costing system can incorporate slack, waste, and downtime, without encouraging
improvement. This system also allows for manipulation and mediocrity and may
not be appropriate for advanced manufacturing environments (McNair et al., 1989).
Rolling averages of actual performance, as benchmarks to monitor performance,
are preferred to estimates of what is expected (Green et al., 1992; Hendricks, 1994).
Haldane (1998) claimed that some uses of standard costing were “pernicious” and
actually “enshrine waste.” However, standard costing may be a good tool if it is
used properly to monitor trends for continuous improvement (Drury, 1999). Case
studies have shown that lean-manufacturing Japanese firms use standard costs,
but often adapt them continually to include Kaizen improvements (Cooper, 1996).
Related to standard costing is the use of variance analysis (VARAN). Variances
identify results that differed from initial expectations, not what caused the
deviation to occur. Avoidance of negative variances actually can impede the
implementation of lean manufacturing practices (Najarian, 1993). The use of
the traditional labor and machine utilization as volume and efficiency criteria
encourages overproduction and excess inventories (Drury, 1999; Fisher, 1992;
Haldane, 1998; Hendricks, 1994; Johnson, 1992; McNair et al., 1989; Wisner
& Fawcett, 1991). Standard-costing data used in traditional variance analysis
lack relevancy and can lead to defects before variances are noted and problems
corrected (Hendricks, 1994). In addition, the actual collection of variance
information is a non-value-added activity that increases cycle time (Bragg, 1996).
It is natural to conclude from this literature that the relevance of cost accounting
information would increase if managers spent more time checking the accuracy
of product costs, rather than reading variance reports.
Use of the absorption method (ABSORP) for inventory costing is required in
many jurisdictions to meet GAAP financial reporting requirements. Absorption
costing encourages inventory building by attaching all manufacturing overhead
costs to inventory and postponing the recording of the expense until the product is
sold. Building and storing inventory is contrary to lean manufacturing objectives,
yet can enhance net income. Traditional overhead allocation focuses on “overhead
absorption,” rather than on “overhead minimization” (Gagne & Discenza, 1992).
A suggested alternative for restraining the motivation to build inventory is the
replacement of absorption costing with variable (direct) or actual costing for
internal reporting of inventories (McWatters et al., 2001).
Since the mid-1980s, activity-based-costing (ABC) has been cited as a remedy
for the deficiencies of traditional cost accounting in advanced manufacturing
environments (Anderson, 1995; Cooper, 1994; Shields & Young, 1989). ABC
promotes decisions that are consistent with a lean manufacturing environment.
Life-cycle costing (LCC) better matches revenues and expenses, because it defers
the initial costs of research, marketing, and start-up, allocating them to the periods
in which actual production of the units occurs and the benefits from these
prior activities are expected to be received. Activities are expensed based on the
number of units expected to be sold. Similar to management accounting in the
world-class Japanese manufacturing firms, the MAS should be integrated with
corporate strategy, and LCC should be integrated with the MAS (Ferrara, 1990).
LCC provides better decision-making information, because it is more reflective
of today’s advanced manufacturing environments (Holzer & Norreklit, 1991;
Peavey, 1990). In fact, prediction of life-cycle costs is a requirement of the
ISO 9000 quality-certification process (Rogers, 1998).
Life-cycle costing and value chain analysis (VCA) are related concepts. Value
chain analysis focuses on all business functions for a product, from research
and development to customer service, whether it is in the same firm or different
organizations (Horngren et al., 1997, p. 14). For example, TQM, business process
re-engineering, and JIT may not be as successful as anticipated, because the
necessary changes to support these processes have not been replicated for all of
the firms along the supply chain. It is not effective to try to optimize each piece
of the supply chain. All successive firms in the manufacturing process, beginning
with the customer order, back to the purchase of raw materials, must be evaluated
and integrated. Using “lean logistics,” activities should be organized to move in
an uninterrupted flow within a pull production system (Jones et al., 1997).
Contextual Variables
Four contextual variables are also examined. Firm size (SIZE) affects most aspects
of a firm’s strategy and success; therefore, each sample firm’s net sales figure,
as obtained from COMPUSTAT data, is used to examine firm size. Whether
a firm follows a more innovative strategy can affect its willingness to make
RESEARCH RESULTS
ANOVA Results for CAP and Advanced Manufacturing Practices
On the survey instrument, respondents indicated whether or not (yes or no) their
firm had formally implemented the advanced manufacturing techniques of JIT,
TQM and Six Sigma. In addition, they were asked to rate their manufacturing
operations on a scale from 1 to 5, with 1 being traditional and 5 world class.
The sample was separated into JIT and non-JIT, TQM and non-TQM, and Six
Sigma and non-Six Sigma users to compare the firm mean differences in the use
of specific CAP. The sample was further segregated into firms using
both JIT and TQM or neither, as well as firms that had implemented one of the
three WCM methods in comparison to firms that had none of the three methods in
place. The ANOVA results for the five different classifications are fairly similar,
which may indicate that the same measurement and information tools are used to
support most advanced manufacturing practices (Tables 3–7).
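The group comparisons behind Tables 3–7 rest on one-way ANOVA. The sketch below computes the F statistic from scratch for two fabricated groups (e.g. JIT vs. non-JIT firms scored on a Likert-scaled CAP-usage measure); the scores are invented for illustration, not drawn from the study's data.

```python
import numpy as np

# Fabricated Likert-scaled CAP-usage scores for two groups of firms.
jit = np.array([4, 5, 4, 3, 5, 4, 4, 5], dtype=float)
non_jit = np.array([2, 3, 2, 3, 2, 3, 3, 2], dtype=float)
groups = [jit, non_jit]

grand_mean = np.concatenate(groups).mean()
# Between-group variability: how far each group mean sits from the grand mean.
ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# Within-group variability: spread of scores around their own group mean.
ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)

df_between = len(groups) - 1
df_within = sum(len(g) for g in groups) - len(groups)
f_stat = (ss_between / df_between) / (ss_within / df_within)
print(f"F({df_between}, {df_within}) = {f_stat:.2f}")
```

A large F relative to the F(1, 14) critical value would indicate a significant difference in mean CAP usage between the two groups.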
Of the 11 CAP examined, three consistently show significantly more use in
advanced manufacturing environments than in traditional manufacturing
operations: EVA, VCA, and LCC. Mean differences among
the remaining eight CAP are limited and inconsistent.
Process costing, as opposed to job-order costing, is used more in JIT firms than in
non-JIT firms, along with backflush costing. TQM and Six Sigma environments
employ target costing. Firms using the Six Sigma approach demonstrate a strong
preference for the balanced scorecard measures relative to firms that do not use
Six Sigma. Little difference exists in the sample firms’ use of a standard costing
system, with over 90% of all the classifications indicating that standard costs
were used internally for estimating product costs. Also, around 90% of the three
difference in its use in TQM firms compared to non-TQM firms. In fact, although
not a significant difference, more non-JIT firms than JIT firms are using ABC. This
agrees with the study by Abernethy et al. (2001) that concluded firms adopting
advanced manufacturing technologies may not be well served by a sophisticated,
hierarchical ABC system.
The results support the argument that most firms identified as world-class
manufacturers adopt a combination of advanced manufacturing techniques. Each of
the advanced manufacturing processes (JIT, TQM, and Six Sigma) is implemented
significantly more in all of the advanced manufacturing environments examined.
In addition, the sample firms’ self-evaluated ratings as world-class manufacturers
are significantly higher in JIT, TQM, and Six Sigma environments.
The contextual environments for firms practicing advanced manufacturing
techniques are very similar. These firms are larger in size, and the respondents
perceive their firms to be leaders in process and product development. They
also report significantly more support from top management in initiating and
implementing change, as well as providing necessary training in new production
strategies. All of the firms appear to be “somewhat satisfied” with their MAS. No
evidence exists that user satisfaction is dependent upon the type of manufacturing
environment in which the MAS is operating.
variance analysis to monitor Kaizen objectives. Cooper also found that, despite
Japanese firms’ strong emphasis on cost management, their cost systems were
relatively traditional rather than technically advanced.
Traditional accounting techniques may be more inadequate than outdated, and
need to be supplemented with new planning and control tools, such as target
costing, life-cycle costing, and value chain analysis. The research results suggest
that advanced manufacturing environments require information beyond what is
provided by traditional production-costing methods. For proper planning and
control, firms need to understand the full gamut of costs from product inception
to disposal, including costs for research and development, change initiatives, and
marketing. Advanced manufacturing systems, such as JIT, must be highly flexible
to respond to customer demand. In order to effectively operate a lean, pull system
that focuses on continuous improvement, firms must integrate and coordinate
their operations with both their suppliers and customers and have assurance that
similar quality initiatives will be exercised and supported all along the value chain
(Kalagnanam & Lindsay, 1998). In evaluating their financial success, advanced
manufacturing firms want to examine the net added value to their firms using
measurement techniques such as EVA and make strategic decisions accordingly.
Life-cycle costing and value chain analysis perspectives are supported by
the tenets of target costing. According to this study’s results, target costing is
being used for new product development in TQM and Six Sigma environments.
Target-costing principles, which assist in the planning and control of costs during
the initial stages of product and process design, are considered an important
Japanese accounting tool (Ansari et al., 1997; Sakurai & Scarbrough, 1997), as
demonstrated by Cooper’s (1996) study of Japanese lean manufacturers.
An interesting result is the strong correlation between the use of Six Sigma and
the balanced scorecard. Six Sigma is a data-driven process that is highly tied to the
bottom line, but it is also much broader in its application than the measurement
of profitability alone. It supports continuous improvement through management,
financial, and methodological tools that improve both products and processes
(Voelkel, 2002). A balanced scorecard analysis would assist in evaluation of Six
Sigma efforts by providing information not only about profitability measures,
but also about internal manufacturing operations, customer satisfaction, and
employee contributions and retention.
As earlier studies have reported, top management support is key to making
changes and successfully implementing lean strategies (Ahmed et al., 1991;
Ansari & Modarress, 1986; Celley et al., 1986; Im, 1989; Willis & Suter, 1989).
The survey results support this idea, as those sample firms that have adopted
JIT, TQM, and/or Six Sigma have a significantly higher level of support from
top management for change initiatives and new strategies compared to firms that
have not implemented these techniques. In addition, larger firms that have more
resources to allocate to these practices are more successful in adopting them.
An innovative environment that would provide flexibility and empowerment also
appears to facilitate the adoption of advanced production processes.
The results indicate that world-class manufacturers integrate a combination of
advanced manufacturing techniques. Six Sigma is implemented to support the use
of TQM and JIT. The results support evidence argued by Vuppalapati et al. (1995)
that JIT and TQM should be viewed as integrated, rather than isolated strategies.
The two are complementary in their focus, “making production more efficient and
effective through continuous improvement and elimination of waste strategies.”
Research further shows that the integration of TQM, JIT, and TPM leads to
higher performance than does the exclusive implementation of each technique
(Cua et al., 2001).
Research Limitations
SUMMARY
Strategic use of information resources helps in customer service, product
differentiation, and cost competition (Narasimhan & Jayaram, 1998). Although the MAS
should support organizational operations and strategies, evidence in this study, as
in similar research (Banker et al., 2000; Clinton & Hsu, 1997; Durden et al., 1999;
Yasin et al., 1997), indicates that CAP are not changing substantially to support
lean practices. However, the results demonstrate that world-class manufacturing
firms are integrating additional, non-traditional information techniques into their
MAS, such as EVA, life-cycle costing, and value chain analysis.
In a survey of 670 UK firms, Bright et al. (1992) cited the following reasons
given for the lack of substantial change in cost accounting systems: (1) The
benefits do not appear to outweigh the costs. (2) New techniques have not proven
to provide better information. (3) There is already too much change and change
is difficult; they are comfortable with what they have. (4) The current system
is adequate; it just needs to be better utilized. (5) There is a lack of integration
between the factory and accounting information; thus, non-accountants do not use
accounting information for decision making. Also, there is a low expectation of
what accountants can offer. “If companies have simplistic cost systems, it may well
be that there is no need for a better system or that an existing need has not yet been
recognized. In most cases, systems will be improved before poor cost information
leads to consistently poor decisions” (Holzer & Norreklit, 1991, p. 12).
When surviving firms retain the same procedures over time, it is implicit that
the benefits derived therefrom exceed the costs. Moreover, the MAS has many
uses. It is plausible that control aspects of the system yield benefits that are
overlooked by those who decry the system’s inadequacy for decision making.
Further research is needed to determine the extent to which the implementation
of advanced manufacturing production processes motivates changes in a firm’s
internal accounting practices, as well as the impact of more extensive CAP on
firm profitability and competitiveness.
While this study provides evidence that some change is taking place in the
CAP of WCM firms, it appears to be from an expansion of traditional information
techniques, rather than from their replacement. Advanced manufacturing firms
must be experiencing benefits from the continued use of existing internal
accounting measures. The alleged limitations of these methods might stem more
from their application and not from the methods themselves. Rather than abandon
practices that have endured through decades, what is needed is greater acceptance
of management accounting’s role to support organizational strategy.
NOTES
1. The survey instrument was either available over the Internet, or hard copies were
faxed or mailed. The executives contacted were asked to choose which alternative they
preferred for responding to the questionnaire. Initially, 128 of the sample firms were
contacted via the Internet, of which 97 responded. Forty-two were faxed and 12 were
mailed the questionnaire initially, yielding 17 and 6 responses, respectively.
2. To check for non-response bias, the analyses were performed on the late responders.
No significant differences were found in the results.
3. The industry distribution for the respondent firms is similar to the total sample
industry distribution. Sixty-two percent of the firms sampled were from these same three
industries: industrial machinery, electronics, and instrumentation.
4. A definition of these terms was supplied with the questionnaire and can be found in
Fullerton et al. (2003).
5. Total quality control is represented by two questions on the survey: one is related to
process quality and the other to product quality.
6. All of the 11 elements loaded greater than 0.50 onto one of the three constructs except
for number 11, asking about the use of “quality circles.” Thus, this question was eliminated
from the measures representing JIT.
7. EVA is a registered trademark of Stern Stewart & Company.
8. The Cronbach’s alpha values (Cronbach, 1951) for the combined measure and the three
individual JIT factors exceed the 0.70 standard for established constructs (Nunnally, 1978).
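The reliability threshold invoked in Note 8 follows the standard Cronbach's alpha formula, alpha = k/(k-1) × (1 − Σ item variances / variance of total score). The sketch below applies it to fabricated item scores (the study's survey data are not reproduced here).

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for a respondents x items matrix of scores."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1).sum()
    total_variance = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_variances / total_variance)

# Fabricated 5-respondent, 3-item example for illustration only.
scores = np.array([[4, 5, 4],
                   [2, 3, 2],
                   [5, 5, 4],
                   [3, 3, 3],
                   [4, 4, 5]], dtype=float)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```

Values above the 0.70 benchmark (Nunnally, 1978) are conventionally taken to indicate an established construct.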
ACKNOWLEDGMENTS
This research was made possible through a Summer Research Grant provided
by Utah State University. We gratefully acknowledge comments received from
participants at the AIMA Conference on Management Accounting Research in
Monterey, California (May 2003).
REFERENCES
Abernethy, M. A., Lillis, A. M., Brownell, P., & Carter, P. (2001). Product diversity and costing system
design choice: Field study evidence. Management Accounting Research, 12(1), 1–20.
Ahmed, N. U., Tunc, E. A., & Montagno, R. V. (1991). A comparative study of U.S. manufacturing
firms at various stages of just-in-time implementation. International Journal of Production
Research, 29(4), 787–802.
Anderson, S. W. (1995). A framework for assessing cost management system changes: The case of
activity-based costing at General Motors, 1986–1993. Journal of Management Accounting
Research (Fall), 1–51.
Ansari, A., & Modarress, B. (1986). Just-in-time purchasing: Problems and solutions. Journal of
Purchasing and Materials Management (Summer), 11–15.
Ansari, S. L., Bell, J. E., & CAM-I Target Cost Core Group (1997). Target costing: The next frontier
in strategic cost management. Burr Ridge, IL: Irwin.
Atkinson, A. A., Balakrishnan, R., Booth, P., Cote, J. M., Groot, T., Malmi, T., Roberts, H., Uliana, E.,
& Wu, A. (1997). New directions in management accounting research. Journal of Management
Accounting Research, 9, 80–108.
Baker, W. M., Fry, T. D., & Karwan, K. (1994). The rise and fall of time-based manufacturing.
Management Accounting (June), 56–59.
Banker, R., Potter, G., & Schroeder, R. (1993a). Reporting manufacturing performance measures to
workers: An empirical study. Journal of Management Accounting Research, 5(Fall), 33–53.
Banker, R., Potter, G., & Schroeder, R. (1993b). Manufacturing performance reporting for continuous
quality improvement. Management International Review, 33(Special Issue), 69–85.
Banker, R., Potter, G., & Schroeder, R. (2000). New manufacturing practices and the design of control
systems. Proceedings of the Management Accounting Research Conference.
Barney, J. (1986). Strategic factor markets: Expectation, luck, and business strategy. Management
Science (October), 1231–1241.
Bragg, S. M. (1996). Just-in-time accounting: How to decrease costs and increase efficiency. New
York: Wiley.
Bright, J., Davies, R. E., Downes, C. A., & Sweeting, R. C. (1992). The deployment of costing
techniques and practices: A UK study. Management Accounting Research, 3, 201–211.
Callen, J. L., Fader, C., & Morel, M. (2002). The performance consequences of in-house productivity
measures: An empirical analysis. Working Paper, University of Toronto, Ontario.
Celley, A. F., Clegg, W. H., Smith, A. W., & Vonderembse, M. A. (1986). Implementation of JIT in
the United States. Journal of Purchasing and Materials Management (Winter), 9–15.
Chenhall, R. H., & Langfield-Smith, K. (1998a). Adoption and benefits of management accounting
practices: An Australian study. Management Accounting Research, 9, 1–19.
Chenhall, R. H., & Langfield-Smith, K. (1998b). Factors influencing the role of management ac-
counting in the development of performance measures within organizational change programs.
Management Accounting Research, 9, 361–386.
Clinton, B. D., & Hsu, K. C. (1997). JIT and the balanced scorecard: Linking manufacturing control
to management control. Management Accounting, 79(September), 18–24.
Cobb, I. (1992). JIT and the management accountant. Management Accounting, 70(February), 42–49.
Cooper, R. (1994). The role of activity-based systems in supporting the transition to the lean enterprise.
Advances in Management Accounting, 1–23.
Cooper, R. (1995). When lean enterprises collide. Boston, MA: Harvard Business School Press.
Cooper, R. (1996). Costing techniques to support corporate strategy: Evidence from Japan.
Management Accounting Research, 7, 219–246.
Cooper, R., & Kaplan, R. S. (1991). The design of cost management systems. Englewood Cliffs, NJ:
Prentice-Hall.
Cronbach, L. J. (1951). Coefficient alpha and the internal structure of tests. Psychometrika, 16(3), 297–334.
Cua, K. O., McKone, K. E., & Schroeder, R. G. (2001). Relationships between implementation of
TQM, JIT, and TPM and manufacturing performance. Journal of Operations Management, 19(6),
675–694.
Daniel, S. J., & Reitsperger, W. D. (1991). Linking quality strategy with management control systems:
Empirical evidence from Japanese industry. Accounting, Organizations and Society, 16(7),
601–618.
Daniel, S. J., & Reitsperger, W. D. (1996). Linking JIT strategies and control systems: A comparison
of the United States and Japan. The International Executive, 38(January/February), 95–121.
Davy, J., White, R., Merritt, N., & Gritzmacher, K. (1992). A derivation of the underlying constructs
of just-in-time management systems. Academy of Management Journal, 35(3), 653–670.
Dean, J. W., Jr., & Snell, S. A. (1996). The strategic use of integrated manufacturing: An empirical
examination. Strategic Management Journal, 17(June), 459–480.
Drury, C. (1999). Standard costing: A technique at variance with modern management? Management
Accounting, 77(10), 56–58.
Durden, C. H., Hassel, L. G., & Upton, D. R. (1999). Cost accounting and performance measurement
in a just-in-time production environment. Asia Pacific Journal of Management, 16, 111–125.
Ellis, K. (2001). Mastering six sigma. Training, 38(12), 30–35.
Ferrara, W. L. (1990). The new cost/management accounting: More questions than answers.
Management Accounting (October), 48–52.
Fisher, J. (1992). Use of non-financial performance measures. Cost Management (Spring), 31–38.
Flynn, B. B., Sakakibara, S., & Schroeder, R. G. (1995). Relationship between JIT and TQM: Practices
and performance. Academy of Management Journal, 38(October), 1325–1353.
Frigo, M. L., & Litman, J. (2001). What is strategic management? Strategic Finance, 83(6), 8–10.
An Empirical Examination of Cost Accounting Practices 111
Fullerton, R. R., & McWatters, C. S. (2001). The production performance benefits from JIT. Journal
of Operations Management, 19, 81–96.
Fullerton, R. R., & McWatters, C. S. (2002). The role of performance measures and incentive systems
in relation to the degree of JIT implementation. Accounting, Organizations and Society, 27(8),
711–735.
Fullerton, R. R., McWatters, C. S., & Fawson, C. (2003). An examination of the relationships between
JIT and financial performance. Journal of Operations Management, 21, 383–404.
Gagne, M. L., & Discenza, R. (1992). Accurate product costing in a JIT environment. International
Journal of Purchasing and Materials Management (Fall), 28–31.
Green, F. B., Amenkhienan, F., & Johnson, G. (1992). Performance measures and JIT. Management
Accounting (October), 32–36.
Haldane, G. (1998). Accounting for change. Accountancy, 122(December), 64–65.
Harris, E. (1990). The impact of JIT production on product costing information systems. Production
and Inventory Management Journal, 31(1), 44–48.
Hayes, R. H., & Wheelwright, S. C. (1984). Restoring our competitive edge. New York: Collier
Macmillan.
Hedin, S. R., & Russell, G. R. (1992). JIT implementation: Interaction between the production and
cost-accounting functions. Production and Inventory Management Journal (Third Quarter),
68–73.
Hendricks, C. A., & Kelbaugh, R. L. (1998). Implementing six sigma at GE. The Journal for Quality
& Participation (July/August), 48–53.
Hendricks, J. A. (1994). Performance measures for a JIT manufacturer. IIE Solutions, 26(January),
26–36.
Holzer, H. P., & Norreklit, H. (1991). Some thoughts on cost accounting developments in the United
States. Management Accounting Research, 2, 3–13.
Hoque, Z., & James, W. (2000). Linking balanced scorecard measures to size and market factors:
Impact on organizational performance. Journal of Management Accounting Research, 12,
1–17.
Horngren, C. T., Foster, G., Datar, S. M., & Teall, H. D. (1997). Cost accounting. Scarborough, Ont.:
Prentice-Hall.
Howell, R. A., & Soucy, S. R. (1987). Cost accounting in the new manufacturing environment.
Management Accounting (August), 42–48.
Im, J. H. (1989). How does kanban work in American companies? Production and Inventory
Management Journal (Fourth Quarter), 22–24.
Ittner, C. D., & Larcker, D. F. (1995). Total quality management and the choice of information and
reward systems. Journal of Accounting Research (Supplement), 1–34.
Ittner, C. D., & Larcker, D. F. (1997). The performance effects of process management techniques.
Management Science, 43(4), 522–534.
Johnson, H. T. (1992). Relevance regained: From top-down control to bottom-up empowerment. New
York: Free Press.
Jones, D. T., Hines, P., & Rich, N. (1997). Lean logistics. International Journal of Physical Distribution
and Logistics Management, 27(3/4), 153–163.
Kalagnanam, S. S., & Lindsay, R. M. (1998). The use of organic models of control in JIT firms:
Generalising Woodward's findings to modern manufacturing practices. Accounting, Organizations and Society, 24(1), 1–30.
Kaplan, R. S. (1983). Measuring manufacturing performance: A new challenge for managerial
accounting research. The Accounting Review (October), 686–705.
112 ROSEMARY R. FULLERTON AND CHERYL S. McWATTERS
Kaplan, R. S. (1984). Yesterday’s accounting undermines production. Harvard Business Review, 62,
96–102.
Koufteros, X. A., Vonderembse, M. A., & Doll, W. J. (1998). Developing measures of time-based
manufacturing. Journal of Operations Management, 16, 21–41.
Kristensen, K., Dahlgaard, J. J., Kanji, G. K., & Juhl, H. J. (1999). Some consequences of just-in-time:
Results from a comparison between the Nordic countries and east Asia. Total Quality
Management, 10(1), 61–71.
Lillis, A. M. (2002). Managing multiple dimensions of manufacturing performance – An exploratory
study. Accounting, Organizations and Society, 27(6), 497–529.
Linderman, K., Schroeder, R. G., Zaheer, A., & Choo, A. S. (2003). Six Sigma: A goal-theoretic
perspective. Journal of Operations Management, 21, 193–203.
Mackey, J., & Hughes, V. H. (1993). Decision-focused costing at Kenco. Management Accounting,
74(11), 22–32.
McNair, C. J., Lynch, R. L., & Cross, K. F. (1990). Do financial and non-financial performance
measures have to agree? Management Accounting (November), 28–36.
McNair, C. J., Mosconi, W., & Norris, T. (1989). Beyond the bottom line: Measuring world class
performance. Homewood, IL: Irwin.
McWatters, C. S., Morse, D. C., & Zimmerman, J. L. (2001). Management accounting: Analysis and
interpretation (2nd ed.). New York: McGraw-Hill.
Mehra, S., & Inman, R. A. (1992). Determining the critical elements of just-in-time implementation.
Decision Sciences, 23, 160–174.
Milgrom, P., & Roberts, J. (1995). Complementarities and fit: Strategy, structure and organizational
change in manufacturing. Journal of Accounting and Economics, 19, 179–208.
Moshavi, S. (1990). Well made in America: Lessons from Harley-Davidson on being the best. New
York: McGraw-Hill.
Najarian, G. (1993). Performance measurement: Measure the right things. Manufacturing Systems,
11(September), 54–58.
Narasimhan, R., & Jayaram, J. (1998). An empirical investigation of the antecedents and consequences
of manufacturing goal achievement in North American, European, and Pan Pacific firms.
Journal of Operations Management, 16, 159–179.
Nunnally, J. (1978). Psychometric theory. New York: McGraw-Hill.
Patell, J. M. (1987). Cost accounting, process control, and product design: A case study of the
Hewlett-Packard personal office computer division. The Accounting Review, 62(4), 808–839.
Peavey, D. E. (1990). Battle at the GAAP? It’s time for a change. Management Accounting (February),
31–35.
Pettit, J. (2000). EVA and production strategy. Industrial Management (November/December), 6–13.
Rogers, B. (1998). Paperwork with a payoff. Manufacturing Engineering, 121(3), 16.
Sakakibara, S., Flynn, B. B., & Schroeder, R. G. (1993). A framework and measurement instrument
for just-in-time manufacturing. Production and Operations Management, 2(3), 177–194.
Sakurai, M., & Scarbrough, D. P. (1997). Japanese cost management. Menlo Park, CA: Crisp
Publications.
Scarlett, B. (1996). In defence of management accounting applications. Management Accounting,
74(1), 46–52.
Schonberger, R. J. (1986). World class manufacturing casebook: The lesson of simplicity applied.
New York: Free Press.
Shields, M. D., & Young, S. M. (1989). A behavioral model for implementing cost management
systems. Journal of Cost Management (Winter), 17–27.
Sillince, J. A. A., & Sykes, G. M. H. (1995). The role of accountants in improving manufacturing
technology. Management Accounting Research, 6(June), 103–124.
Spencer, M. S., & Guide, V. D. (1995). An exploration of the components of JIT. International Journal
of Operations and Production Management, 15(5), 72–83.
Swenson, D. W., & Cassidy, J. (1993). The effect of JIT on management accounting. Cost Management
(Spring), 39–47.
Thorne, K., & Smith, M. (1997). Synchronous manufacturing: Back to basics. Management
Accounting, 75(11), 58–60.
Tully, S. (1993). The real key to creating wealth. Fortune (September 20), 38–50.
Voelkel, J. G. (2002). Something’s missing: An education in statistical methods will make employees
more valuable to Six Sigma corporations. Quality Progress (May), 98–101.
Voss, C. A. (1995). Alternative paradigms for manufacturing strategy. International Journal of
Operations & Production Management, 15(4), 5–16.
Vuppalapati, K., Ahire, S. L., & Gupta, R. (1995). JIT and TQM: A case for joint implementation.
International Journal of Operations and Production Management, 15(5), 84–94.
Welfe, B., & Keltyka, P. (2000). Global competition: The new challenge for management accountants.
The Ohio CPA Journal (January–March), 30–36.
White, R. E. (1993). An empirical assessment of JIT in U.S. manufacturers. Production and Inventory
Management Journal, 34(Second Quarter), 38–42.
White, R. E., Pearson, J. N., & Wilson, J. R. (1999). JIT manufacturing: A survey of implementations
in small and large U.S. manufacturers. Management Science, 45(January), 1–15.
White, R. E., & Prybutok, V. (2001). The relationship between JIT practices and type of production
system. Omega, 29, 113–124.
White, R. E., & Ruch, W. A. (1990). The composition and scope of JIT. Operations Management
Review, 7(3–4), 9–18.
Willis, T. H., & Suter, W. C., Jr. (1989). The five M’s of manufacturing: A JIT conversion life cycle.
Production and Inventory Management Journal (January), 53–56.
Wisner, J. D., & Fawcett, S. E. (1991). Linking firm strategy to operating decisions through performance
measurement. Production and Inventory Management Journal (Third Quarter), 5–11.
Witt, E. (2002). Achieving six sigma logistics. Material Handling Management, 57(5), 10–13.
Yasin, M. M., Small, M., & Wafa, M. (1997). An empirical investigation of JIT effectiveness: An
organizational perspective. Omega, 25(4), 461–471.
Zimmerman, J. L. (2003). Accounting for decision making and control (4th ed.). Burr Ridge, IL:
McGraw-Hill.
THE INTERACTION EFFECTS OF LEAN
PRODUCTION MANUFACTURING
PRACTICES, COMPENSATION, AND
INFORMATION SYSTEMS ON
PRODUCTION COSTS: A RECURSIVE
PARTITIONING MODEL
ABSTRACT
The study re-examines whether lean production manufacturing practices (i.e.
TQM and JIT) interact with the compensation system (incentive vs. fixed
compensation plans) and information system (i.e. attention directing goals
and performance feedback) to reduce production costs (in terms of manufac-
turing and warranty costs) using a recursive partitioning model. Decision
trees (i.e. recursive partitioning algorithm using Chi-square Automatic
Interaction Detection or CHAID) are constructed on data from 77 U.S.
manufacturing firms in the electronics industry. Overall, the “decision tree”
results show significant interaction effects. In particular, the study found
that better manufacturing performance (i.e. lower production costs) can be
achieved when lean production manufacturing practices such as TQM and
JIT are used along with incentive compensation plans. Also, synergies do
INTRODUCTION
In a US$5 million 5-year study on the future of the automobile, Womack et al.
(1990) made an excellent summary of manufacturing practices since the late
1800s. At the beginning of the industrial age (and in fact even before that), manu-
facturing was dominated by craft production. It had the following characteristics:
(1) highly skilled craftspeople; (2) very decentralised organisations; (3) general-
purpose machine tools; and (4) very low production volume. Although craft
production had worked very well then, it had several drawbacks. These included
high production costs (regardless of volume) and low consistency and reliability.
The first revolution in manufacturing practices came after World War I in
the early 1900s when Henry Ford introduced new manufacturing practices
that could reduce production costs drastically while increasing product quality.
This innovative system of manufacturing practices was called mass production
(vis-à-vis craft production). The defining feature of mass production was the
complete interchangeability of parts and the simplicity of attaching them to each
other. This led to the division of labour in the production process (to repetitive
single tasks) and the construction of moving assembly lines. Mass production
spurred a remarkable increase in productivity (and a corresponding remarkable
decrease in cost per unit output), a drastic improvement in product quality and
a significant reduction in capital requirements. Mass production was eventually
adopted in almost every industrial activity around the world.
Problems with mass production started to become prominent in the mid-1900s.
The minute division of labour removed the career path of workers and resulted
in dysfunctions (e.g. reduced job satisfaction). Also, mass production led to
standardised products that were not suited to all world markets. The production
system was inflexible, and it was time-consuming and expensive to change. Finally,
intense competition and unfavourable macro-economic developments further
eroded the advantages of mass production.
While mass production was declining in the mid-1900s, the second revolution in
manufacturing practices took root in Toyota in Japan. The Toyota Production Sys-
tem, also referred to by Womack et al. (1990) as lean production, was established
by the early 1960s and since then has been incorporated by many companies and
industries world-wide. Lean production brings together the advantages of craft
production and mass production by avoiding the former’s high cost and the latter’s
rigidity. It employs teams of multi-skilled workers at all levels of the organisation
and uses highly flexible and increasingly automated machines to produce outputs
in enormous variety as well as with high quality. Specifically, lean production
is characterised by the following focus: (1) cost reductions; (2) zero defects;
(3) zero inventories; (4) product variety; and (5) highly skilled, motivated and
empowered workers.
Lean production manufacturing practices have often been referred to by
other names, such as total quality management (TQM) and just-in-time (JIT).
In particular, these manufacturing practices (i.e. TQM and JIT) have been used
by manufacturing firms striving for continual improvement. To date, however,
mixed results have been reported – while some firms have excelled because of
their use of TQM and JIT (e.g. Xerox and Motorola), other firms do not seem
to have improved their manufacturing performance (see, for example, Harari,
1993; Ittner & Larcker, 1995). Although there is an expanding literature on lean
manufacturing such as TQM or JIT implementation, there is little empirical
evidence that provides reasons for these mixed results (Powell, 1995).
Since the mid-1980s, there has been growing interest in how management
control systems could be tailored to the needs of manufacturing
strategies (Buffa, 1984; Hayes et al., 1988; Kaplan, 1990; Schonberger, 1986).
Empirical evidence shows that higher organisational performance is often the
result of a match between an organisation’s environment, strategy and internal
structures or systems. By and large, most studies on management control focus
on senior management performance or overall performance at strategic business
unit levels (Govindarajan, 1988; Govindarajan & Gupta, 1985; Ittner & Larcker,
1997). Given that successes in manufacturing strategies are often influenced
by activities at the shop floor or operational level, empirical studies linking
management control policies to manufacturing performance at the operational
level may provide useful information on the mixed findings related to lean
manufacturing practices. Accordingly, the objective of this study is to examine
whether manufacturing firms using lean production manufacturing practices
such as TQM and/or JIT achieve higher manufacturing performance (i.e. lower
production costs) when they accompany these practices with contemporary
operational controls. More formally, the study investigates the performance effect
(i.e. reduction in production costs) of the match between lean production manu-
facturing practices and compensation and information systems using a recursive
partitioning model.
118 HIAN CHYE KOH ET AL.
RESEARCH FRAMEWORK
As mentioned earlier, TQM and JIT feature prominently in lean production. TQM
focuses on the continual improvement of manufacturing efficiency by eliminating
waste, scrap and rework while improving quality, developing skills and reducing
costs. Along similar lines, JIT emphasises manufacturing improvements via
reducing set-up and cycle times, lot sizes and inventories. These lean production
manufacturing practices require workers who are highly skilled, motivated
and empowered. In particular, workers are made responsible for improving
manufacturing capabilities and product and process quality (Siegel et al., 1997),
performing a variety of activities, and detecting non-conforming items. TQM
and JIT implementation are expected to improve manufacturing performance. In
addition, worker empowerment (which is an important part of TQM and JIT) is
expected to indirectly improve manufacturing performance via greater intrinsic
motivation (Hackman & Wageman, 1995). Among other things, improved
manufacturing performance translates to reduction in production costs.
Wruck and Jensen (1994) suggest that effective TQM implementation requires
major changes in organisational infrastructure such as the systems for allocating
decision rights, performance feedback and reward/punishment. Kalagnanam
and Lindsay (1998), on the other hand, suggest that a fully developed JIT
system represents a significant departure from the traditional mass production
systems. They advocate that manufacturing firms adopting JIT must abandon a
mechanistic management control system and adopt an organic model of control.
Taken together, the literature suggests that management control systems in lean
manufacturing practices should be different from those under the traditional mass
production system.
Management control systems have been described as processes or tools for
influencing behaviour towards attainment of organisational goals or objectives.
A control system performs its function by controlling the flow of information,
establishing criteria for evaluation, and designing appropriate rewards and punishment (Birnberg & Snodgrass, 1988; Flamholtz et al., 1985). As such, this study
focuses on the compensation system (incentive vs. fixed compensation plans)
and the information system (i.e. attention-directing goals and manufacturing
performance feedback).
Compensation System
H1. Lean production manufacturing practices (i.e. TQM/JIT) interact with the
use of compensation plans to enhance manufacturing performance (i.e. to lower
production costs).
Locke and Latham (1990) found that goals positively influence the attention,
effort and persistence of workers. This finding is consistent across many studies
(e.g. Latham & Lee, 1986; Locke et al., 1981). Thus, if an organisation wants its
workers to achieve particular goals, then prior research findings suggest that the
presence of such goals can motivate workers to accomplish them. In practice, to
help workers achieve better manufacturing performance, attention-directing goals
or targets are often provided via the firm’s information system. Examples of such
goals include customer satisfaction and complaints, on-time delivery, defect rate
and sales returns, and cycle-time performance.
From a learning standpoint, providing performance feedback helps workers
develop effective task strategies. Alternatively, feedback which shows that
performance is below the target can increase the motivation to work harder (Locke
& Latham, 1990). As a result, the combination of both goals and feedback often
leads to better performance (Erez, 1977). Using a framework of management
Sample Selection
The electronics industry (SIC Code 36) was chosen as the primary industry for
the study for the following reasons. Unlike the TQM concept, the JIT concept
Questionnaire
An effective management control system starts with defining the critical perfor-
mance measures. Firms using lean production manufacturing practices such as
TQM and JIT are expected to experience improved efficiencies such as lower
manufacturing costs. Also, they are expected to have improved product quality and
hence lower warranty costs. Thus, changes in manufacturing costs and warranty
costs (collectively termed as production costs here) were used as dependent
variables in the study. Respondents were requested to indicate the changes in
these production costs in the last three years, anchoring on a scale of 1–5, with 1
denoting “decrease tremendously,” 3 “no change” and 5 “increase tremendously.”
It is noted that this study focuses only on one key aspect of lean production,
namely, the reduction in production costs. This aspect, however, captures a
large and important part of lean production.
Just-in-Time (JIT)
The JIT scale used in the study was a modified scale from Snell and Dean (1992)
(see Sim & Killough, 1998). Snell and Dean (1992) developed a 10-item scale
anchored on a 7-point Likert scale to measure JIT adoption. In this study, only
eight of the ten original JIT items were used. The first omitted item relates to
the extent to which the accounting system reflects costs of manufacturing. This
item was loaded onto the TQM construct in the Snell and Dean (1992) study and
did not seem to reflect a JIT construct. The second omitted item asked whether
the plant was laid out by process or product. This item was also loaded onto the
TQM construct and was deleted from the final TQM scale in Snell and Dean
(1992). For this study, an item that represented “time spent to achieve a more
orderly engineering change by improving the stability of the production schedule”
was added.
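Studies that adapt multi-item Likert scales in this way conventionally report the scale's internal-consistency reliability, usually Cronbach's coefficient alpha. The sketch below is a pure-Python illustration only — the function name and the toy data are ours, not the study's:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item scale.

    `items` is a list of k lists, one per scale item, each holding the
    scores of the same n respondents. Alpha = k/(k-1) * (1 - sum of the
    item variances / variance of the respondents' total scores).
    """
    k = len(items)
    n = len(items[0])

    def sample_var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Each respondent's total score across all k items.
    totals = [sum(item[i] for item in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(sample_var(it) for it in items) / sample_var(totals))
```

Perfectly correlated items yield an alpha of 1.0, while items that do not covary with the rest pull alpha down; by convention (e.g. Nunnally, 1978), an alpha of roughly 0.7 or higher is taken as acceptable for scales of this kind.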
Compensation System
The independent variable “compensation system” consisted of two categories,
namely fixed compensation plans and incentive compensation plans. Specifically,
firms using fixed compensation plans were coded as “0,” while firms using incentive
compensation plans were coded as “1.” That is, compensation system was measured
as a dichotomous variable.
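The coding just described can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the function names and category labels are ours, and the cut point used to dichotomise the 1–5 cost-change score (scores below the "no change" midpoint of 3 counting as a decrease) is an assumption, since the dichotomisation rule is not spelled out here.

```python
def code_compensation(plan):
    """Dummy-code the compensation system: fixed plans -> 0, incentive plans -> 1."""
    return {"fixed": 0, "incentive": 1}[plan]

def cost_decreased(score):
    """Dichotomise the 1-5 cost-change scale (assumed cut: scores below
    the 'no change' midpoint of 3 count as a decrease in production costs)."""
    return score < 3

# Hypothetical survey rows: (compensation plan, production-cost change score).
responses = [("incentive", 1), ("fixed", 4), ("incentive", 2), ("fixed", 3)]
coded = [(code_compensation(plan), cost_decreased(score)) for plan, score in responses]
```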
The subgroups and sub-subgroups are usually referred to as nodes. The end
product can be graphically represented by a tree-like structure. (See also Breiman,
1984, pp. 59–92; Ittner et al., 1999; Lehmann et al., 1998.) In the Chi-square
Automatic Interaction Detection (CHAID) algorithm, all possible splits of each
node for each independent variable are examined. The split that leads to the
most significant Chi-square statistic is selected. For the purpose of the study, the
dependent variables are dichotomised into two groups.
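The split-selection step described above can be sketched in a few lines. This is a minimal, pure-Python illustration for a single ordinal predictor and a dichotomised outcome, so each candidate split yields a 2 × 2 table with one degree of freedom; full CHAID also merges predictor categories and applies multiplicity adjustments, and all names here are ours:

```python
import math

def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic and p-value for the 2x2 table
    [[a, b], [c, d]]; with 1 degree of freedom, p = erfc(sqrt(stat / 2))."""
    n = a + b + c + d
    row1, row2, col1, col2 = a + b, c + d, a + c, b + d
    if 0 in (row1, row2, col1, col2):
        return 0.0, 1.0          # degenerate table: no association testable
    stat = n * (a * d - b * c) ** 2 / (row1 * row2 * col1 * col2)
    return stat, math.erfc(math.sqrt(stat / 2.0))

def best_chaid_split(x, y):
    """Examine every threshold split of ordinal predictor x against the
    dichotomised outcome y and return the (threshold, p-value) pair of the
    split with the most significant chi-square, as CHAID does at each node."""
    best = None
    for t in sorted(set(x))[:-1]:                    # candidate cut points
        a = sum(1 for xi, yi in zip(x, y) if xi <= t and yi)
        b = sum(1 for xi, yi in zip(x, y) if xi <= t and not yi)
        c = sum(1 for xi, yi in zip(x, y) if xi > t and yi)
        d = sum(1 for xi, yi in zip(x, y) if xi > t and not yi)
        _, p = chi2_2x2(a, b, c, d)
        if best is None or p < best[1]:
            best = (t, p)
    return best
```

For instance, with hypothetical JIT-use levels [1, 1, 2, 2, 6, 6, 7, 7] and cost decreases observed only at the four highest levels, the procedure selects the cut at 2, the split that separates the two outcome groups perfectly.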
Results
Table 1 – Panel A provides the job title of the respondents. A review of the
respondents’ job titles shows that most respondents were closely associated with
manufacturing operations, suggesting that they are the appropriate persons for
providing shop floor information. Table 1 – Panel B shows descriptive statistics of
lean manufacturing practices and the type of reward systems. Except for the 24
plants without a formal TQM program and the 30 plants without a formal JIT
program, the remaining plants had implemented some form of lean manufacturing. Also, additional
analysis shows the majority of the sample plants had annual sales of between 10
and 15 million U.S. dollars. Further, manufacturing firms that made greater use of
lean production manufacturing practices such as TQM and JIT also made greater
use of incentive compensation plans, and more frequently set goals on operational
performance and more frequently provided performance feedback to their workers.
Results from the recursive partitioning algorithm are summarised in Fig. 1. As
shown, goal setting (p-value = 0.0003) is significantly associated with the change
in manufacturing costs. In particular, a higher level of goal setting is associated
with decreasing manufacturing costs. In addition, there is also a significant
interaction effect between goal setting and JIT (p-value = 0.0187). That is, a
higher level of goal setting coupled with a greater use of JIT is associated with
better manufacturing performance (in terms of decreases in manufacturing costs)
vis-à-vis a higher level of goal setting and a lower use of JIT.
The interaction effect of compensation plan can also be seen in Fig. 1. In partic-
ular, the combination of a fixed compensation plan with a high level of goal setting
and a moderately high use of JIT is associated with a lower probability of decreases
in manufacturing costs (p-value = 0.0362). Also, the combination of an incentive
compensation plan with a high level of goal setting but a low use of JIT and either
a very low use or a very high use of TQM (p-value = 0.0049) is associated with a
very low probability of decreases in manufacturing costs. It is noted that the latter
part of the findings is not consistent with conventional wisdom. Finally, there is
suggestive, though not statistically significant, evidence (p-value = 0.1216) that a high level of goal setting, a relatively
Table 1.
Panel A: Job Title of Respondents [entries not legible in this copy].
Years of TQM experience: 24, 20, 18, 15
Years of JIT experience: 30, 22, 12, 13
Compensation type: 34, 6, 12, 25
[Column headings not legible in this copy.]
low use of JIT and a moderately high use of TQM, coupled with both an incentive
compensation plan and a high level of feedback is associated with decreases in
manufacturing costs. This result does not come as a “surprise” – for example, the
hallmark of lean manufacturing is efficient use of resources (see Womack et al.,
1990). This means less space, fewer inventories, less production time (key aspects
of JIT), less waste, better quality, and continuously striving for manufacturing
excellence (key aspects of TQM). The literature suggests that JIT and TQM often
Fig. 1. Decision Tree Results for Manufacturing Costs.
complement each other; also, a low use of JIT can be complemented by a high use of TQM, or
vice versa.
Figure 2 summarises the decision tree results for warranty costs. As shown in
Fig. 2, a high level of TQM (p-value = 0.0139) is significantly associated with
the change in warranty costs. In particular, a high level of TQM is associated
with decreasing warranty costs. The results also show that a high use of JIT, even
in the presence of a low use of TQM, is associated with decreasing warranty
costs (p-value = 0.0237) (see earlier comments on the complementarity of
JIT and TQM).
Although not statistically significant, the effect of compensation plan can also
be seen in Fig. 2. In particular, with a moderately high use of TQM, an incentive
compensation plan seems to be more associated with decreasing warranty costs
than the case of a fixed compensation plan (p-value = 0.2023). Also, there is
a significant interaction effect between feedback and TQM and JIT practices
(p-value = 0.0156). That is, with low levels of TQM and JIT, the probability of de-
creases in warranty costs is higher for a low level of feedback than for a high level
of feedback. This finding illustrates that the best configurations of management
control systems are often contingent upon the type of production systems. For
example, for manufacturing plants which have not switched to lean manufacturing
practices, it is not necessary for them to reconfigure the accounting systems to provide more timely feedback. Finally, there is some suggestive evidence of an interaction
effect among goal setting, TQM and the compensation plan (p-value = 0.2069).
In particular, in the presence of a moderately high level of TQM, the use of an
incentive compensation plan coupled with a high level of goal setting appears to
be associated with decreasing warranty costs. (It can be argued that some results
are statistically insignificant primarily because of the relatively small sample
size in the relevant nodes.)
The above findings provide support for H1 and H2. Except for one unexpected
result, the findings are consistent with the underlying theory, i.e. the best
configurations of management control systems are contingent upon the type of
production systems.
Discussion
Several theoretical papers have motivated this study (see Hemmer, 1995; Ittner &
Kogut, 1995; Milgrom & Roberts, 1990, 1995). In particular, Milgrom and
Roberts (1990, 1995) provide a theoretical framework that attempts to address the
issue of how relationships among parts of a manufacturing system affect perfor-
mance. They suggest that organizations often experience a simultaneous shift in
Fig. 2. Decision Tree Results for Warranty Costs.
non-response bias by geographical region and 4-digit SIC code. Third, the present
research design precludes inferences with regard to the pattern of
changes in warranty costs or manufacturing costs.
In this concluding section, it is appropriate to discuss some caveats to
recursive partitioning models such as CHAID. For example, the splitting of the
subgroups (or nodes) is results-driven, i.e. it is not theory- or decision-driven.
Nevertheless, when interpreting the findings, a common approach is to use some
underlying theories to explain the results or patterns. Finally, since there is no
model-fit statistic, there exists a risk of over-fitting the model. Despite some
limitations, a recursive partitioning model such as CHAID is a potentially useful
tool when examining complex relationships in the real world and has proved to
be helpful in discovering meaningful patterns when large quantities of data are
available (see, for example, Berry & Linoff, 1997, p. 5).
REFERENCES
Balakrishnan, R., Linsmeier, T., & Venkatachalam, M. (1996). Financial benefits from JIT adoption:
Effects of customer concentration and cost structure. The Accounting Review, 71, 183–205.
Banker, R., Potter, G., & Schroeder, R. (1993). Reporting manufacturing performance measures to
workers: An empirical study. Journal of Management Accounting Research, 5, 33–55.
Berry, M., & Linoff, G. (1997). Data mining techniques – For marketing, sales, and customer support.
Wiley Computer Publishing.
Birnberg, J. G., & Snodgrass, C. (1988). Culture and control: A field study. Accounting, Organizations
and Society, 13, 447–464.
Breiman, L. (1984). Classification and regression trees. Belmont, CA: Wadsworth International Group.
Buffa, E. S. (1984). Meeting the competitive challenge. IL: Dow Jones-Irwin.
Chenhall, R. H. (1997). Reliance on manufacturing performance measures, total quality management
and organizational performance. Management Accounting Research, 8, 187–206.
Coopers & Lybrand (1992). Compensation planning for 1993. New York: Coopers & Lybrand.
Daniel, S., & Reitsperger, W. (1991). Linking quality strategy with management control systems:
Empirical evidence from Japanese industry. Accounting, Organizations and Society, 16,
601–618.
Daniel, S., & Reitsperger, W. (1992). Management control systems for quality: An empirical
comparison of the U.S. and Japanese electronic industry. Journal of Management Accounting
Research, 4, 64–78.
Erez, M. (1977). Feedback: A necessary condition for the goal setting-performance relationship.
Journal of Applied Psychology, 62, 624–627.
Fama, E. F., & Jensen, M. C. (1983). Separation of ownership and control. Journal of Law and
Economics, 26, 301–325.
Flamholtz, E. G., Das, T. K., & Tsui, A. (1985). Toward an integrative framework of organizational
control. Accounting, Organizations and Society, 10, 35–50.
Gage, G. H. (1982). On acceptance of strategic planning systems. In: P. Lorange (Ed.), Implementation
of Strategic Planning. NJ: Prentice-Hall.
Gomez-Mejia, L. R., & Balkin, D. B. (1992). Structure and process of diversification, compensation,
strategy and firm performance. Strategic Management Journal, 13, 381–387.
Govindarajan, V. (1988). A contingency approach to strategy implementation at the business-unit
level: Integrating administrative mechanisms with strategy. Academy of Management Journal,
31, 828–853.
Govindarajan, V., & Gupta, A. K. (1985). Linking control systems to business unit strategy: Impact
on performance. Accounting, Organizations and Society, 10, 51–66.
Hackman, J. R., & Wageman, R. (1995). Total quality management: Empirical, conceptual and
practical issues. Administrative Science Quarterly, 40, 309–342.
Harari, O. (1993). Ten reasons why TQM doesn’t work. Management Review, 82, 33–38.
Hayes, R. H., Wheelwright, S. C., & Clark, K. B. (1988). Dynamic manufacturing: Creating the
learning organization. New York: Free Press.
Hemmer, T. (1995). On the interrelation between production technology, job design, and incentives.
Journal of Accounting and Economics, 19, 209–245.
Holmstrom, B. (1979). Moral hazard and observability. Bell Journal of Economics, 10, 74–91.
Ichniowski, C., Shaw, K., & Prennushi, G. (1997). The effects of human resource management
practices on productivity: A study of steel finishing lines. The American Economic Review, 87,
291–314.
Ittner, C., & Kogut, B. (1995). How control systems can support organizational flexibility. In: E.
Bowman & B. Kogut (Eds), Redesigning the Firm. New York: Oxford University Press.
Ittner, C., & Larcker, D. F. (1995). Total quality management and the choice of information and reward
systems. Journal of Accounting Research, 33(Suppl.), 1–34.
Ittner, C., & Larcker, D. F. (1997). The performance effects of process management techniques.
Management Science, 43, 534–552.
Ittner, C., Larcker, D. F., Nagar, V., & Rajan, M. (1999). Supplier selection, monitoring practices, and
firm performance. Journal of Accounting and Public Policy, 18, 253–281.
Jensen, M., & Meckling, W. (1976). Theory of the firm: Managerial behavior, agency costs, and
ownership structure. Journal of Financial Economics, 3, 305–360.
Johnson, H., & Kaplan, R. (1987). Relevance lost: The rise and fall of management accounting. MA:
Harvard Business School Press.
Kalagnanam, S. S., & Lindsay, R. M. (1998). The use of organic models of control in JIT firms:
Generalising Woodward's findings to modern manufacturing practices. Accounting, Organizations
and Society, 24, 1–30.
Kaplan, R. S. (1983). Measuring manufacturing performance: A new challenge for managerial
accounting research. The Accounting Review, 58, 686–705.
Kaplan, R. S. (1990). Measures for manufacturing excellence. MA: Harvard Business School Press.
Latham, G. P., & Lee, T. W. (1986). Goal setting. In: E. A. Locke (Ed.), Generalizing from Laboratory
to Field Settings. MA: Lexington Books.
Lehmann, D. R., Gupta, S., & Steckel, J. H. (1998). Marketing research. MA: Addison-Wesley.
Locke, E., & Latham, G. (1990). A theory of goal setting and task performance. Englewood Cliffs, NJ: Prentice-Hall.
Locke, E. A., Shaw, K. M., Saari, L. M., & Latham, G. P. (1981). Goal setting and task performance:
1969–1980. Psychological Bulletin, 90, 121–152.
MacDuffie, J. P. (1995). Human resource bundles and manufacturing performance: Organizational
logic and flexible production systems in the world auto industry. Industrial and Labor Relations
Review, 48, 197–221.
Milgrom, P., & Roberts, J. (1990). The economics of modern manufacturing: Technology, strategy
and organization. American Economic Review, 80, 511–528.
132 HIAN CHYE KOH ET AL.
Milgrom, P., & Roberts, J. (1995). Complementarities and fit: Strategy, structure, and organizational
change in manufacturing. Journal of Accounting and Economics, 19, 179–208.
Milkovich, G. T. (1988). A strategic perspective to compensation management. Research in Personnel
and Human Resources, 9, 263–288.
Powell, T. C. (1995). Total quality management as competitive advantage: A review and empirical
study. Strategic Management Journal, 16, 15–37.
Sarkar, R. G. (1997). Modern manufacturing practices: Information, incentives and implementation.
Harvard Business School Working Paper.
Schonberger, R. J. (1986). World-class manufacturing: The lessons of simplicity applied. New York:
Free Press.
Siegel, D. S., Waldman, D. A., & Youngdahl, W. E. (1997). The adoption of advanced manufacturing
technologies: Human resource management implications. IEEE Transactions on Engineering
Management, 44, 288–298.
Sim, K. L., & Killough, L. N. (1998). The performance effects of complementarities between
manufacturing practices and management accounting systems. Journal of Management
Accounting Research, 10, 325–346.
Snell, S., & Dean, J. (1992). Integrated manufacturing and human resource management: A human
capital perspective. Academy of Management Journal, 35, 467–504.
Womack, J. P., Jones, D. T., & Roos, D. (1990). The machine that changed the world. New York:
Macmillan.
Wruck, K. H., & Jensen, M. C. (1994). Science, specific knowledge and total quality management.
Journal of Accounting and Economics, 18, 247–287.
APPENDIX: QUESTIONNAIRE
∗ Reverse Coding
Production Costs
In this section, we are interested in knowing the extent to which the following
performance attributes have changed during the past three years, using the 1–5
scale listed below (1 = Decrease Tremendously, 3 = No Change, 5 = Increase
Tremendously).
Manufacturing Cost
Warranty Cost
Just in Time
(1) Are products pulled through the plant by the final assembly schedule/master
production schedule?
(2) How much attention is devoted to minimizing set up time?
(3) How closely/consistently are predetermined preventive maintenance plans
adhered to?
(4) How much time is spent in achieving a more orderly engineering change by
improving the stability of the production schedule?
How much has each of the following changed in the past three years? (Anchored
by 1 = Large Decrease, 4 = Same, and 7 = Large Increase)
∗ (5) Number of your suppliers
∗ (6) Frequency of the deliveries
(7) Length of product runs
∗ (8) Amount of buffer stock
∗ (9) Number of total parts in Bill of Material
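Reverse-coded items (marked ∗ above) must be flipped before analysis so that higher scores consistently point in the same direction. A minimal sketch of the standard transformation for a scale anchored at 1 and 7 (the function name and its use here are ours, not the authors'):

```python
def reverse_code(score, low=1, high=7):
    """Flip a reverse-worded questionnaire item.

    For the 1-7 scale above, a raw 7 (e.g. a large increase in buffer stock)
    becomes a 1, so that higher values mean the same thing on every item.
    """
    if not low <= score <= high:
        raise ValueError("score outside the scale")
    return low + high - score
```

Note that the scale midpoint maps to itself (`reverse_code(4)` returns 4), so reverse coding leaves the neutral anchor intact.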
products?
1 2 3 4 5 6 7
(4) How much effort (both time and cost) is spent in preventive maintenance to
improve quality?
(5) How much effort (both time and cost) is spent in providing quality related
training to the plant’s employees?
(6)a What percentage of the plant’s manufacturing processes are under statistical
quality control? %
Performance Feedback
Customer Perception
Customer perceived quality
Customer complaints
Delivery
On-time delivery
Quality
Cost of scrap
Rework
Defect
Warranty cost
Sales return
Cycle Time
Product development time
Manufacturing lead time
Work station setup time
Goals
Does your firm set specific numeric targets for the following performance
measures? (Anchoring on “Yes” or “No”)
Customer Perception
Customer perceived quality
Customer complaints
Delivery
On-time delivery
Quality
Cost of scrap
Rework
Defect
Warranty cost
Sales return
Cycle Time
Product development time
Manufacturing lead time
Work station setup time
(1) How are plant workers currently being compensated? (please circle only one).
(a) Strictly individual fixed pay only
(b) Individual fixed pay + non-monetary reward
(c) Individual fixed pay + individual-based monetary incentive
(d) Individual fixed pay + group-based monetary incentive
COMPENSATION STRATEGY
AND ORGANIZATIONAL
PERFORMANCE: EVIDENCE
FROM THE BANKING INDUSTRY
IN AN EMERGING ECONOMY
ABSTRACT
To survive in the turbulent, global business environment, companies must
apply strategies to increase their competitiveness. Expectancy theory
indicates that salary rewards can motivate employees to achieve company
objectives (Vroom, 1964). First, we develop an analytical model to predict
that companies using a high-reward strategy could outperform those using
a low-reward strategy. Then, we obtain archival data from banking firms
in Taiwan to test the proposed model empirically. We control the effects of
operating scale (firm size) and asset utilization efficiency (asset utilization
ratio). Empirical results show that salary levels and asset utilization
efficiency significantly affect banks' profitability.
INTRODUCTION
The banking industry has played a critical role in many countries’ financial
operations. Since the early 1990s, many large banks in the world have been
In general, there are two types of rewards for employees: intrinsic and extrinsic
(Deci & Ryan, 1985; Kohn, 1992; Pittman et al., 1982). Intrinsic rewards
relate to the environment within which the employee operates. Employees often
have high job satisfaction in a collegial organizational culture with positive
management styles. Extrinsic motivators include salaries, bonuses and financial
incentives, such as stock options. A number of recent studies have examined
whether different compensation schemes for top executives relate to corporate
profitability and other measures of organizational performance. The results are
quite consistent with Deci and Ryan’s prediction: there are only slight or even
negative correlations between compensation and performance (e.g. Hadlock &
Lumer, 1997; Jensen & Murphy, 1990; Lanen & Larcker, 1992). Conversely,
other studies have shown that rewards should be based on individual performance
and that the effects of such rewards can reflect on the company’s performance.
In a study on making decisions about pay increases, Fossum and Fitch (1985)
used three groups of subjects: college students, line managers, and compensation
managers. All three groups agreed that the largest consideration should be given to
individual performance – even over other relevant criteria, such as cost of living,
difficulties in replacing someone who leaves, seniority, and budget constraints.
In addition, in management accounting contexts, Kaplan and Atkinson (1998)
argue that it is the responsibility of accounting professionals to evaluate whether
employees’ rewards are appropriately associated with firm performance.
Recently, Fey et al. (2000) conducted a survey using both managers and non-
managers from 395 foreign firms operating in Russia. Their results show a direct
positive relationship between merit pay for both managers and non-managers and
140 C. JANIE CHANG ET AL.
the firm’s performance. However, they used only non-financial measures, such
as job security, internal promotion, employee training, and career planning, to
evaluate firm performance. Schuster (1985) conducted a survey with 66 randomly
sampled Boston-area high-tech firms; that survey’s results reveal a greater reliance
on special incentives (e.g. bonuses, stock options, and profit-sharing plans) in
financially successful high-tech companies than in unsuccessful ones.
Many prior studies have reported that reward/incentive systems are positively
related to firm performance (e.g. Arthur, 1992; Fey et al., 2000; Gerhart &
Milkovich, 1990; Pfeffer, 1994; Schuster, 1985). However, these studies used
either non-financial measures of firm performance or survey questionnaires, and
did not investigate actual firm profitability or financial performance.
Neither did they control essential factors such as operating scale (e.g. firm size)
and operating efficiency (e.g. asset utilization).
Martocchio (1998) states that two aspects need to be examined to determine
whether a firm’s compensation strategies are effective: in the short term, a
compensation strategy is effective if it motivates employees to behave the way
the firm expects them to; in the long term, the strategy should be able to boost the
firm’s financial performance. Hence, we develop an analytical model to examine
the impact of reward/incentive systems on a firm’s long-term performance.
Similar to Ou and Lee (2001), we propose an analytical model to depict the asso-
ciation between a firm’s reward/salary strategy and its financial performance. We
assume that there are two types (t) of workers in the labor market: type h with high
skills/ability and type l with low skills/ability (i.e. t = l, h). The productivity of
type t workers is denoted x_t, with x_h > x_l > 0. The probabilities of finding workers
of types l and h are f and (1 − f), respectively. This information is available in the
market. We further use the following notation: a_t denotes the effort exerted by a type t
worker, Y_t = x_t + a_t denotes the worker's output, and p_t denotes the wage the firm pays.
The term a_t^2/2 means that the cost to a worker increases with effort at an increasing
rate. Accordingly, the firm's corresponding profit π_t is (Y_t − p_t), or (x_t + a_t − p_t).
The objective function is to maximize the overall profit of the firm, which can be
formulated as follows:

Max_{a_h, a_l, p_h, p_l}  f(x_l + a_l − p_l) + (1 − f)(x_h + a_h − p_h)    (1)
s.t.

p_l − a_l^2/2 ≥ 0    (2)

p_h − a_h^2/2 ≥ p_l − [a_l − (x_h − x_l)]^2/2    (3)
Equation (2) states the constraint that type l workers will take any job whose wage
is greater than or equal to the associated personal cost (a_l^2/2). Equation (3)
indicates that type h workers will take any job that pays them properly; that is,
the personal benefit earned by a highly skilled worker taking p_h is greater than
or equal to the benefit from taking p_l. Note that when a highly skilled worker
earns p_h, he/she must exert effort a_h with a personal cost of a_h^2/2. We know that
Y_l = x_l + a_l = x_h + a_l − (x_h − x_l). Therefore, when a highly skilled worker
takes a low-paying job to produce Y_l, the effort required of that worker is only
a_l − (x_h − x_l), at a personal cost of [a_l − (x_h − x_l)]^2/2. Hence, the personal
benefit a highly skilled worker obtains from taking a low-wage job is p_l − [a_l − (x_h −
x_l)]^2/2. Using this model, we would like to prove that firms using high rewards
to attract highly skilled workers can outperform those using low rewards to attract
less-skilled workers.
We use the Lagrange multiplier method to solve the above-mentioned objective
function (Eq. (1)). Let L represent the Lagrangian, and let λ and μ represent the
Lagrange multipliers on constraints (2) and (3), respectively. The Lagrangian can
be written as follows:

L = f(x_l + a_l − p_l) + (1 − f)(x_h + a_h − p_h) + λ(p_l − a_l^2/2)
    + μ(p_h − a_h^2/2 − p_l + [a_l − (x_h − x_l)]^2/2)    (4)
Taking the derivatives of L with respect to a_l, a_h, p_l, and p_h, respectively,
we get the following four equations:

f − λa_l + μ[a_l − (x_h − x_l)] = 0    (4a)

(1 − f) − μa_h = 0    (4b)

−f + λ − μ = 0    (4c)

−(1 − f) + μ = 0    (4d)

In addition, from Eq. (4) the complementary slackness conditions require that:

λ(p_l − a_l^2/2) = 0    (4e)

μ(p_h − a_h^2/2 − p_l + [a_l − (x_h − x_l)]^2/2) = 0    (4f)
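Conditions (4a)–(4d), with the binding constraints implied by (4e)–(4f), can be solved in closed form: μ = 1 − f, λ = 1, a_h = 1, and a_l = 1 − (1 − f)(x_h − x_l)/f. A small sketch evaluating this solution for illustrative parameter values (the function name and the numbers are ours, chosen so an interior solution with a_l > 0 exists):

```python
def solve_screening(f, x_l, x_h):
    """Closed-form optimum of the program in Eqs. (1)-(3).

    f        : probability of a low-skill (type l) worker
    x_l, x_h : productivities, with x_h > x_l > 0
    """
    d = x_h - x_l                       # productivity gap
    mu = 1.0 - f                        # from (4d)
    lam = f + mu                        # from (4c): lambda = f + mu = 1
    a_h = (1.0 - f) / mu                # from (4b): optimal high-type effort = 1
    a_l = 1.0 - (1.0 - f) * d / f       # from (4a) with lambda = 1, mu = 1 - f
    p_l = a_l ** 2 / 2.0                # constraint (2) binds since lambda > 0, via (4e)
    p_h = a_h ** 2 / 2.0 + p_l - (a_l - d) ** 2 / 2.0   # constraint (3) binds, via (4f)
    profit = f * (x_l + a_l - p_l) + (1 - f) * (x_h + a_h - p_h)
    return {"a_l": a_l, "a_h": a_h, "p_l": p_l, "p_h": p_h, "profit": profit}
```

With f = 0.5, x_l = 1 and x_h = 1.5, the high wage p_h = 0.625 strictly exceeds the low wage p_l = 0.125, illustrating the wage premium that supports the high-reward strategy the model compares.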
We empirically tested this hypothesis using banking firms in Taiwan. The following
section describes the sample and variables used in the empirical study.
The Sample
The sample consists of 232 observations of banking firms listed on the Tai-
wan Stock Exchange or in the Taiwan Over-the-Counter market from 1991
to 1999. Table 1 provides the descriptive statistics of the sample firms. On
average, each firm has 1,645 employees. The means (standard deviations) of
net income and salary expenses are NT$1,543,126,000 (NT$1,979,132,000) and
NT$1,660,523,000 (NT$1,923,147,000), respectively.
The purpose of this study is to examine the relationship between salary rewards
and firm performance, especially profitability. The independent variable is a firm’s
salary level (SL), which is the mean salary expense per employee (SE/NE). We
include two control variables in our empirical model. The well-known Du
Pont model decomposes a firm’s operating performance into two components:
profitability and efficiency. The efficiency component is the asset utilization
ratio (AUR), which measures a firm’s ability to generate sales from investment
in assets (Penman, 2001; Stickney & Brown, 1999). Bernstein and Wild (1998,
p. 30) state that “asset utilization ratios, relating sales to different asset categories,
are important determinants of return on investment.” One of the AURs suggested
by Bernstein and Wild (1998, p. 31) is net sales to total assets ratio (NS/TA).
Since our focus in the analytical model is firm profitability, we use this important
variable to control firm efficiency when analyzing our data. Table 2 defines all
the variables used in our empirical tests.
In addition, we include operating scale (Log(Total Assets)) as the control
variable in our model. Issues related to operating scale have continuously
generated much interest in the academic community (e.g. Altunbas, Evans &
Molyneux, 2001; Altunbas, Gardener, Molyneux & Moore, 2001; De Pinho,
2001). Theories of economies of scale suggest that the best efficiency and thus the
highest performance can be obtained when a firm operates at the optimal scale.
However, the operating scale (i.e. firm size) is difficult for individual employees
to influence, so we have decided to use it as a control variable for firm differences.
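The variable construction described above reduces to simple ratios per bank-year. A sketch under assumed field names and invented amounts in NT$1,000 (the record is hypothetical, not the study's data; the base-10 log is our assumption, since the paper writes Log(TA) without specifying a base):

```python
import math

# One hypothetical bank-year record; amounts in NT$1,000, field names are ours.
record = {
    "net_sales": 5_000_000,
    "total_assets": 120_000_000,
    "salary_expense": 1_600_000,
    "net_income": 1_500_000,
    "employees": 1_645,
}

def build_variables(rec):
    """Compute the independent and control variables defined in Table 2."""
    return {
        "SL":  rec["salary_expense"] / rec["employees"],   # salary level, SE/NE
        "AUR": rec["net_sales"] / rec["total_assets"],     # asset utilization, NS/TA
        "OS":  math.log10(rec["total_assets"]),            # operating scale, log(TA)
        "PM4": rec["net_income"] / rec["employees"],       # net income per employee
    }
```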
Table 1. Descriptive Statistics of the Sample.
[Columns: Variables, Mean, Std. Dev., Maximum, 3rd Quartile, Median, 1st Quartile, Minimum; the data rows are not reproduced here.]
Note: N = 232. Unit: NT$1,000. NS = Net Sales, OI = Operating Income, PTI = Pretax Income, NI = Net Income, SE = Salary Expense,
TA = Total Assets, NE = Number of Employees.
Table 2. Definitions of Variables.

Dependent variables
PM1 TL/TD (loan-to-deposit ratio)
PM2 OI/NE (operating income per employee)
PM3 PTI/NE (pretax income per employee)
PM4 NI/NE (net income per employee)
Independent variable
SL SE/NE (salaries expense per employee)
Control variables
OS Log (TA) (log for total assets)
AUR NS/TA (net sales to total assets ratio)
AUR Assets utilization ratio
LDR Loan-to-deposit ratio
NE Number of employees
NI Net income
NS Net sales
OI Operating income
OS Operating scale
PM1 Performance measure of a bank’s potential profitability
PM2 First accounting-based profitability measure (per employee)
PM3 Second accounting-based profitability measure (per employee)
PM4 Third accounting-based profitability measure (per employee)
PTI Pretax income
SE Salary expenses
SL Salary level
TL Total loans
TD Total deposits
TA Total assets
Dependent Variables
Fin and Frederick (1992) specify that "Banks that want a strategic earning advantage must strive for a strong
loan-to-deposit ratio. They must cultivate loan business, maintain it, and attract
new business. Increasing the loan-to-deposit ratio by one percentage point will
likely add four or five basis points to net interest margins.” Hence, we use this
important indicator as one of our performance measures.
In addition, we use three accounting-based profitability measures as our
dependent variables: operating income per employee (OI/NE), pretax income
per employee (PTI/NE), and net income per employee (NI/NE). These measures
are commonly used by financial analysts to evaluate a firm’s performance.
Although prior research has suggested including market-based measures to
evaluate firm performance, the focus was to examine the relationship between
firm performance and executive compensation (Gorenstein, 1995; Jensen &
Murphy, 1990; McCarthy, 1995; Stock, 1994). Since the purpose of this study is
to explore the effect of a firm's salary strategy for general employees, not for
executives, we focus on accounting-based performance measures.
EMPIRICAL RESULTS
Descriptive Statistics
Table 3 presents the descriptive statistics of all the variables, including the means,
standard deviations, maximum and minimum data values, and medians of the
sample’s loan-to-debt ratio, operating income per employee, pre-tax income per
employee, net income per employee, salary levels, assets utilization ratio, and op-
erating scales. To assess the collinearity among independent and control variables
in our regression models, we obtained the correlation matrix using Pearson and
Spearman correlation coefficients. According to the results, none of the correlations
is high (all are below 0.9). Thus, collinearity is not a serious concern.
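The collinearity screen described above can be reproduced with ordinary Pearson correlations and a rank-based Spearman variant. A self-contained sketch (tie handling in the rank step is omitted for brevity; the 0.9 threshold follows the text, while function names and sample values are ours):

```python
def pearson(x, y):
    """Pearson product-moment correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks (ties ignored)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        for rank, i in enumerate(order):
            r[i] = float(rank)
        return r
    return pearson(ranks(x), ranks(y))

def flag_collinear(series, threshold=0.9):
    """Return variable pairs whose |Pearson| or |Spearman| meets the threshold."""
    names = list(series)
    flagged = []
    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            a, b = series[names[i]], series[names[j]]
            if abs(pearson(a, b)) >= threshold or abs(spearman(a, b)) >= threshold:
                flagged.append((names[i], names[j]))
    return flagged
```

Any pair that survives this screen (as all pairs do in the study) can enter the regression without serious collinearity concerns.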
CONCLUSIONS
The global business environment has been extremely turbulent and competitive.
Companies must apply effective strategies to increase their competitiveness to
survive and prosper in such an environment. Expectancy theory indicates that salary
rewards can motivate employees to achieve company objectives. Accordingly, we
develop an analytical model to prove that companies using a high-reward strategy
could outperform those using a low-reward strategy. Then, we obtain archival data
from banking firms in Taiwan to empirically test the proposed model. Using four
different performance measures on profitability, we find that salary level and asset
utilization ratio significantly affect Taiwanese banks' performance.
In this study, we have examined the effect of the salary strategies used by banks in
Taiwan on their profitability. To generalize our findings, future
studies can look into this issue using firms in different industries and from
different countries. Also, does the relationship between employee compensation
strategies and firm performance depend on various firm characteristics, such as
Compensation Strategy and Organizational Performance 149
REFERENCES
Altunbas, Y., Evans, L., & Molyneux, P. (2001). Bank ownership and efficiency. Journal of Money,
Credit, and Banking, 33(4), 926–954.
Altunbas, Y., Gardener, E. P. M., Molyneux, P., & Moore, B. (2001). Efficiency in European banking.
European Economic Review, 45(10), 1931–1955.
Arthur, J. B. (1992). The link between business strategy and industrial relations systems in American
steel minimills. Industrial and Labor Relations Review, 45(3), 488–506.
Bernstein, L. A., & Wild, J. J. (1998). Financial statement analysis. Boston: McGraw-Hill.
Cheng, Y. R. (2000). The environmental change of money and banking system on the new development
of Taiwan banking. Co Today – Taiwan Cooperative Bank, 26(5), 44–54.
De Pinho, P. S. (2001). Using accounting data to measure efficiency in banking: An application to
Portugal. Applied Financial Economics, 11(5), 527–538.
Deci, E. L., & Ryan, R. M. (1985). Intrinsic motivation and self-determination in human behavior.
New York: Plenum Press.
Fey, C. F., Bjorkman, I., & Pavlovskaya, A. (2000). The effect of human resource management prac-
tices on firm performance in Russia. International Journal of Human Resources Management,
11(1), 1–18.
Fin, W. T., & Frederick, J. B. (1992). Managing the margin. ABA Banking Journal, 84(4), 50–53.
Fossum, J. A., & Fitch, M. K. (1985). The effects of individual and contextual attributes on the sizes
of recommended salary increases. Personnel Psychology, 38(Autumn), 587–602.
Gerhart, B., & Milkovich, G. T. (1990). Organizational differences in managerial compensation and
financial performance. Academy of Management Journal, 33(4), 663–691.
Gorenstein, J. (1995). How the executive bonus system works. The Philadelphia Inquirer (July 9).
Hadlock, C. J., & Lumer, G. B. (1997). Compensation, turnover, and top management incentives:
Historical evidence. Journal of Business, 70(2), 153–187.
Horngren, C. T., Foster, G., & Datar, S. M. (2000). Cost accounting: A managerial emphasis (10th
ed.). Englewood Cliffs, NJ: Prentice-Hall.
Jensen, M. C., & Murphy, K. J. (1990). Performance pay and top-management incentives. Journal of
Political Economy, 98(2), 225–264.
Kaplan, R. S., & Atkinson, A. A. (1998). Advanced management accounting. Englewood Cliffs, NJ:
Prentice-Hall.
Kaplan, R. S., & Norton, D. P. (1996). The balanced scorecard: Translating strategy into action.
Boston: Harvard Business School Press.
Kohn, A. (1992). No contest: The case against competition. Boston: Houghton Mifflin.
Lambert, R. A., Larcker, D. F., & Weigelt, K. (1993). The structure of organizational incentives.
Administrative Science Quarterly, 38, 438–461.
Lanen, W. N., & Larcker, D. F. (1992). Executive compensation contract adoption in the electric utility
industry. Journal of Accounting Research, 30(1), 70–93.
Martocchio, J. J. (1998). Strategic compensation – A human resource management approach.
Prentice-Hall.
McCarthy, M. J. (1995). Top 2 UAL officers got $17 million in 94 stock options. Wall Street Journal
(April 5).
Ou, C. S., & Lee, C. L. (2001). A study of the association between compensation strategy and labor
performance. Working Paper, National ChengChi University.
Penman, S. H. (2001). Financial statement analysis and security valuation. New York: McGraw-Hill/Irwin.
Pfeffer, J. (1994). Competitive advantage through people. Boston: Harvard Business School Press.
Pittman, T. S., Emery, J., & Boggiano, A. K. (1982). Intrinsic and extrinsic motivational orientations:
Reward-induced changes in preference for complexity. Journal of Personality and Social
Psychology (March).
Schuster, J. R. (1985). Compensation plan design: The power behind the best high-tech companies.
Management Review, 74(May), 21–25.
Steinborn, D. (1994). Earnings dip in first half. ABA Banking Journal, 86(9), 26–27.
Stickney, C. P., & Brown, P. R. (1999). Financial reporting and statement analysis. Fort Worth: Dryden.
Stock, C. (1994). Bottom lines: Did CEO earn his pay. The Philadelphia Inquirer (November 20).
Vroom, V. H. (1964). Work and motivation. New York: Wiley.
Warner, J., Watts, R. L., & Wruck, K. H. (1988). Stock prices and top management change. Journal
of Financial Economics, 20, 461–492.
ACCOUNTING FOR COST
INTERACTIONS IN DESIGNING
PRODUCTS
ABSTRACT
Since quality cannot be manufactured or tested into a product but must
be designed in, effective product design is a prerequisite for effective
manufacturing. However, the concept of effective product design involves
a number of complexities. First, product design often overlaps with such
design types as engineering design, industrial design and assembly design.
Second, while costs are key variables in product design, costing issues often
arise that add more complexities to this concept.
The management accounting literature provides activity-based costing
(ABC) and target costing techniques to assist product design teams. However,
when applied to product design these techniques are often flawed. First, the
product "user" and the "consumer" are not identical, as is often assumed in
target costing projects; moreover, instead of activities driving the costs,
managers may use budgeted costs to create activities, both to augment their
managerial power through bigger budgets and to protect their subordinates from
being laid off. Second, each of the two techniques has a limited costing focus:
activity-based costing (ABC) focuses on indirect costs, and target costing on
unit-level costs. Third, neither technique accounts for resource interactions
and cost associations.
INTRODUCTION
Since quality cannot be manufactured or tested into a product but must be
designed in, effective product design is a prerequisite for effective manufacturing
(Cooper & Chew, 1996, p. 88; National Research Council, 1991, p. 7). But
designing and developing new products is a complex and risky process that must
be tackled systematically (Roozenburg & Eekels, 1995, p. xi). Ignoring this issue
can adversely affect the nation’s competitiveness (National Research Council,
1991, p. 1). In this process, cost is a primary driver (National Research Council,
1991, p. 15; Ruffa & Perozziello, 2000, p. 1). Over 70% of a product’s life-cycle
cost is determined during the design stage (Ansari et al., 1997, p. 13; National
Research Council, 1991, p. 1). Michaels and Wood (1989, p. 1) elevate cost to
“the same level of concern as performance and schedule, from the moment the
new product or service is conceived through its useful lifetime.”
Yet costing issues often add to the complexity of product design. For example, in
the defense industry, Ruffa and Perozziello (2000, p. 161) report that aircraft manu-
facturers recently stressed the importance of adopting improved design approaches
as a means to control product costs. However, cost advantages are often hard to
discover, as they (p. 161) state: "Only, when we attempted to quantify the specific
cost savings to which these [improved design approaches] contributed, we were
often disappointed. While it intuitively seems that broader benefits should exist, we
found that they are not consistently visible.” How does the managerial accounting
literature help in reducing these product-design and costing complexities?
In managerial accounting, activity-based costing (ABC) and target costing are
often touted as valuable methods of accounting for product design. They help
management to develop cost strategies for product design programs to create
new products or improve existing ones. While value engineering, value analysis
and process analysis techniques identify, reduce or eliminate nonvalue-added
activities during the product’s lifecycle, when applied to product design programs,
these methods have serious shortcomings in theory and application. This paper
explains the limitations of activity-based costing (ABC) and target costing in
product design, and applies a more recent technique, i.e. associative costing
(Bayou & Reinstein, 2000), to overcome such limitations. The first section of the
paper explains the nature of product design since this concept is vaguely described
in the engineering and accounting literatures. The second section discusses
the shortcomings of ABC and target costing when applied to product design
programs. The associative costing model is then applied to a product design
scenario in the third section. Finally, a summary and conclusions are presented.
that “design” is an activity that: (a) recognizes the goals or purposes of products
or systems; (b) shapes its objects – creates their forms – in accordance with the
goals or purposes of these objects; and (c) evaluates and determines the forms of
its objects and makes their contents universally comprehensible. Both form and
content are important in product and service design.
“Appearance” is a concept closely related to form, to which Niebel and Draper
(1974, p. 21) assign a high value when they conclude: “Appearance must be
built into a product, not applied to a product.” Appearance then is an important
element for both product design (Niebel & Draper, 1974) and for industrial design
(Roozenburg & Eekels, 1995; Wood, 1993). These product design issues have im-
portant implications for target costing and ABC techniques, discussed as follows.
Limiting Assumptions
Target costing focuses on a target product. But the nature of this target product
from the manufacturer’s viewpoint differs from that of its customers. For example,
Morello (1995, p. 69) differentiates between the “user” and the “consumer.” He
(p. 70) argues that “[b]oth user and consumer have an explicit or implicit ‘project’
to use with efficacy and efficiency . . . But the project of the user is a microproject,
defined by many specific occasions, while the project of the consumer is, relatively,
a macroproject, for every possible occasion of use.” He (p. 70) adds: “the only way
to design properly is to have the user in mind; and the role of marketing . . . is to have
in mind the true project of the consumer, which paradoxically, is not to consume
but to be put in the condition to use properly.” Morello’s argument echoes the points
made in 1947 by Lawrence D. Miles, the founder of modern value engineering
(quoted in Akiyama, 1991, p. 9):
Customer-use values and customer-esteem values translate themselves into functions as far as
the designer is concerned. The functions of the product . . . cause it to perform its use and to
provide the esteem values wanted by the customer.
The use values require functions that cause the product to perform, while the esteem values
require functions that cause the product to sell.
and combinations of parts of an automobile and may use time and motion studies
to determine the standard direct labor time and cost allowed for assembling a ve-
hicle. However, batch-level costs, e.g. machine setups, and facility-level costs, e.g.
factory cafeteria, factory security, facility cleaning and maintenance costs, are
difficult to incorporate into the design of a unit of a concrete product.2 In short,
ABC's focus on indirect costs and engineered target costing's focus on unit-level costs
render them, individually, incomplete costing systems for product design purposes.
ABC and target costing do not account for strategic interactions among resources,
activities and their costs. For example, maintenance and testing activities frequently
interact, so much so that a reduction in maintenance activities can lead to more
defective output units, which in turn may necessitate increased testing activities. Yet,
for practical reasons, ABC models do not account explicitly for these interactions
among activities. With only four groups of different activities, each at two levels,
high and low, 11 interactions (2⁴ − 4 activity groups − 1) among these activities
would have to be accounted for, as explained in the following section. When considering
that the median number of activity-area-based cost pools in practice is 22 (Innes
et al., 2000, p. 352), accounting for the activity interaction effects becomes even
more impractical. ABC also is a cost traceability model where costs are traced to
the cost object. This leap of costing from input to output bypasses the manufactur-
ing process where resources interact and costs associate (Bayou & Reinstein, 2000,
p. 77). Similarly, value-engineering programs often do not account formally for
input interactions and cost associations. This weakness of target costing systems
when applied to product design arises, for example, with the type of metal (e.g.
steel vs. aluminum) that enters into the production of an automobile, which must
be associated with such other costs as fuel consumption, environmental problems,
safety and price fluctuations of the metal (de Korvin et al., 2001).
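The combinatorial burden the paragraph describes is easy to quantify. A short sketch (ours, not the chapter's) counts the interaction terms among n two-level factors as 2ⁿ − n − 1, i.e. all level combinations minus the main effects and the grand mean:

```python
def interaction_count(n: int) -> int:
    """Interaction terms among n two-level factors: 2^n - n - 1."""
    return 2 ** n - n - 1

# Four activity groups -> the 11 interactions cited in the text.
print(interaction_count(4))   # 11

# At the median of 22 cost pools reported by Innes et al. (2000),
# explicit modelling of every interaction is clearly impractical.
print(interaction_count(22))  # 4194281
```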
A method that does not contain these target-costing and ABC limitations is
shown below.
AN APPLICATION OF THE
ASSOCIATIVE COSTING MODEL
The associative costing model focuses on main factors and interaction effects
among these factors. This model “allows cost interactions to be designed, planned
and controlled in order to help apply the application of process-oriented thinking
and realize its continuous improvement goals” (Bayou & Reinstein, 2000, p. 75).

Accounting for Cost Interactions in Designing Products 157
The ultimate goal of this model when applied to product design is to guide design
engineers and management in determining the optimum product design on the
basis of associating the most important factors, the interactions among these
factors and the costs of their combinations. Following Bhote and Bhote (2000,
p. 93), we label the most important factors Red X, the second-order factors Pink X,
the third-order factors Pale X and the dependent variable, i.e. the output of each
combination, Green Y. Table 1 lists the basic steps of applying this model.
The associative costing model employs well-known statistical methods of
clustering, classification and analysis of variance as illustrated by the following
hypothetical scenario. The object of design is a new model of a laptop computer
targeted for purchase by college students. The product has many elements that
can have low and high values, e.g. the RAM size, computing capability, number
and kind of software packages installed on a unit, quality of material for the frame
and carrying case, and the electronic screen. The following discussion applies the
nine steps listed in Table 1 to the new laptop model.
Step 1: Listing the Components

This step develops an exhaustive list of factors or components. The list can contain
many components since many types of resources are needed to design, manu-
facture and deliver a product to customers. The product design team has several
methods to compile this list. As a starting point, the method of reverse engineer-
ing of competitors’ products can provide insights to differentiate and improve on
competitors’ models. Another method is the rapid automated prototyping (RAP),
which is a new field that creates three-dimensional objects directly from CAD files,
without human intervention. According to Wood (1993, p. 1), prototypes, which
are integral to the industrial design cycle, have three purposes:
(1) Aesthetic visualization – to see how the product appears, especially a consumer
item that must look appealing when printed or packaged.
(2) Form-fit-and-function testing – to ascertain that the part fits and interfaces well
with other parts.
(3) Casting models – to make a casting model around the part for full-scale pro-
duction of replicas of the part.
The prototypes can enhance the design team’s imagination and enliven their
brainstorming sessions. To illustrate, this step develops a list of components: A1,
A2, . . ., An, B1, B2, . . ., Bn, . . ., Z1, Z2, . . ., Zn (Table 1), as explained in Step 2.
Step 2: Clustering
Clustering means grouping of similar objects (Hartigan, 1975, p. 1). Its princi-
pal functions include naming, displaying, summarizing, predicting and seeking
explanation. Since “clustering” is almost synonymous with classification in that
all objects in the same cluster are given the same name, data are summarized
by referring to properties of clusters rather than properties of individual objects
(Hartigan, 1975, p. 6). The concept of “similarity” among members of a cluster
is crucial in a clustering approach (Kruskal, 1977, p. 17). Good (1977) provides
many dimensions to describe alternative clustering approaches.
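The chapter does not prescribe a particular clustering algorithm. As a minimal illustration of Step 2, the sketch below (hypothetical component similarity scores, and a deliberately tiny one-dimensional two-cluster k-means rather than a production routine) groups candidate components by similarity:

```python
from statistics import mean

def kmeans_1d(values, iters=20):
    """Tiny two-cluster, one-dimensional k-means on similarity scores."""
    centers = [min(values), max(values)]  # simple initialisation
    clusters = [[], []]
    for _ in range(iters):
        clusters = [[], []]
        for v in values:
            # Assign each value to the nearest cluster center.
            idx = 0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1
            clusters[idx].append(v)
        # Recompute centers; keep the old center if a cluster empties.
        centers = [mean(c) if c else centers[i] for i, c in enumerate(clusters)]
    return clusters

# Hypothetical similarity scores for six candidate components:
scores = [1.1, 1.3, 1.2, 7.8, 8.1, 7.9]
low_group, high_group = kmeans_1d(scores)
print(low_group, high_group)  # [1.1, 1.3, 1.2] [7.8, 8.1, 7.9]
```

A real study would cluster components on richer attribute data, but the grouping logic is the same.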
Step 3: Defining the Output (Green Y)
Green Y is the output whose selection and measurement depend on the design
team’s views and corporate goals. To illustrate, the design team decides that in
order to be consistent with the company’s goals to maximize sales revenues and
market share, the output (Green Y) should be defined as the potential customer’s
degree of willingness to buy the product, measured on a 1–9 Likert scale.
Respondents indicate their perception on the assumption that the price of the
product in question is affordable.3
Step 4: Identifying Critical Factors

This step identifies critical factors in the chosen cluster. There are several ways
to conduct this step. In each of the following methods, respondents receive a
questionnaire seeking their degree of willingness to buy the product:
(1) Descriptive method: The basis for respondents’ judgments is a description of
several versions of the product, by varying one element at a time. This is the
least expensive method; yet it is also the weakest since respondents do not
physically examine the different versions of the product.
(2) The CAD prototype method: Respondents examine several versions of a three-
dimensional CAD replica (Wood, 1993), on which they express their willing-
ness to buy or not buy. This method is useful when the appearance of the
product or its elements is a key factor in their purchasing decision.
(3) The actual prototype method: Responses are based on an actual version of
the product. This method is the most effective because perception is based
on a real product; yet it is the most expensive since it requires producing
several product versions and experiments, by varying one element at a time.
For example, respondents compare two laptops, one basic with RAM of only
32 MB and one advanced, with 64 MB, then with 128 MB, then with 256 MB
and so on. This is the method we apply in the following illustration since it is
usually the most effective in designing such a relatively expensive (for many
students) product as a laptop computer.
We consider three samples of 30 college students each, from a private school
(PS), a small state school (SS) and a large state school (LS). To test the degree
of importance of each component (factor), Ci , in the component list of Step 2, a
statistical inference for one mean with normal distribution and a 95% confidence
level is applied to test the following hypotheses for each component Ci:

H0: μ ≤ 5
Ha: μ > 5
While a mean response equal to or less than 5 for the Green Y indicates a low
degree of perceived importance, an average response greater than 5 denotes a high
degree of importance. Table 2 applies this statistical procedure for Component C1 .
A similar table can be developed for each of the C1 –C9 components of Step 2.
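The one-mean test applied in Table 2 can be sketched as follows. The responses below are hypothetical, and the 1.645 critical value assumes a right-tailed normal test at the 5% significance (95% confidence) level; the chapter's own samples and statistics appear in Table 2:

```python
import math

def one_mean_z_test(responses, mu0=5.0, z_crit=1.645):
    """Right-tailed test of H0: mu <= mu0 against Ha: mu > mu0,
    using the sample standard deviation and the one-tailed normal
    critical value for a 5% significance level."""
    n = len(responses)
    mean = sum(responses) / n
    var = sum((x - mean) ** 2 for x in responses) / (n - 1)
    z = (mean - mu0) / math.sqrt(var / n)
    return mean, z, z > z_crit

# Hypothetical 1-9 Likert responses from one sample of 30 students:
sample = [6, 7, 8, 9, 7, 8, 6, 9, 8, 7] * 3
mean, z, reject = one_mean_z_test(sample)
print(f"mean = {mean:.2f}, z = {z:.2f}, reject H0: {reject}")
```

Rejecting H0 marks the component as significantly important, exactly as Table 2 concludes for Component C1.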
Input Data

Respondent    PS    SS    LS
1 4 8 5
2 6 6 7
3 9 5 9
4 9 8 6
5 6 6 7
6 8 9 8
7 9 8 7
8 6 9 9
9 9 5 9
10 8 8 7
11 9 6 6
12 8 8 8
13 8 9 6
14 5 4 8
15 6 7 9
16 9 6 9
17 7 8 8
18 8 6 9
19 9 8 7
20 9 8 9
21 8 7 6
22 7 9 9
23 8 6 7
24 9 7 8
25 8 6 8
26 9 4 7
27 7 6 9
28 8 9 6
29 8 7 8
30 9 7 7
Statistical output: Sample statistics
Sample means 6.5 7.5 6.0
Sample Std. Dev. 3.5355 0.7071 1.414
Sample size 30 30 30
Point estimate 6.5 7.5 6.0
Confidence interval: Confidence level critical zone 0.012
Standard error of point estimate 0.645 0.129 0.258
162 MOHAMED E. BAYOU AND ALAN REINSTEIN
Table 2. (Continued )
Statistical Inference: One Mean with Normal Distribution
Note: Confidence level: 95%. PS = Private-school student sample; SS = Small state-school student
sample; LS = Large state-school student sample.
Responses: 1 = Definitely, Component C1 will NOT affect my decision to buy the product;
9 = Definitely, Component C1 will affect my decision to buy the product.
Table 2 shows that the hypothesis testing for Component C1 leads to rejecting the
null hypothesis, which means that this component is significantly important.
The results of the hypothesis testing of the nine components, C1 –C9 , in this
step are as follows (Table 3):
Most design of experiment (DOE) experts consider the Full Factorial the purest
formal DOE technique because “it can neatly and correctly separate the quantity
of each main effect, each second-order, third-order, fourth-order, and higher order
interaction effect” (Bhote & Bhote, 2000, p. 282). The Full Factorial requires
2ⁿ experiments for a randomized, replicated and balanced design, where n is the
number of factors. Thus, if n equals 3, 4, 5, 6 and 10, a Full Factorial design
would respectively require 8, 16, 32, 64 and 1,024 experiments. Accordingly, the
Full Factorial becomes impractical if the number of factors exceeds four (Bhote
& Bhote, 2000, p. 282).
The Full Factorial methodology requires selecting two levels for each factor, a
low and high level. For n = 3 factors, the number of experiments equals 2³ or 8
combinations, where each level of each factor is tested with each level of all the
other factors (Bhote & Bhote, 2000, p. 234). Applying the Full Factorial method
to the three Red X factors, C1 , C3 and C7 of Step 4, Table 4 shows the ANOVA
data and results. To save space, only one sample is considered. The procedure
should be repeated, however, for all samples of respondents.
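The 2³ design just described can be generated mechanically. The sketch below (ours; the low/high factor costs of $50/$250, $40/$180 and $10/$70 are taken from Table 4) builds the eight runs, derives each interaction column by multiplying coded levels, and prices each run:

```python
from itertools import product

# Low/high unit costs for the three Red X factors (Table 4).
LOW_HIGH_COST = {"C1": (50, 250), "C3": (40, 180), "C7": (10, 70)}
FACTORS = ("C1", "C3", "C7")

runs = []
for levels in product((-1, 1), repeat=3):
    c1, c3, c7 = levels
    runs.append({
        "C1": c1, "C3": c3, "C7": c7,
        # Interaction columns are the products of the coded levels.
        "C1xC3": c1 * c3, "C1xC7": c1 * c7, "C3xC7": c3 * c7,
        "C1xC3xC7": c1 * c3 * c7,
        # (level + 1) // 2 maps -1 -> low cost, +1 -> high cost.
        "cost": sum(LOW_HIGH_COST[f][(lvl + 1) // 2]
                    for f, lvl in zip(FACTORS, levels)),
    })

for run in runs:
    print(run)
# 2^3 = 8 runs; costs range from $100 (all factors low) to $500 (all high).
```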
To examine the content of Table 4 more closely, respondents are asked to
indicate their willingness to purchase each of the eight versions of a laptop on
a 1–9 Likert scale.
The average response for each experiment (or combination) is listed in the output
(Green Y) column. In experiment 1, the three Red X factors are held at low levels
(each with a –1). This is a laptop version with its Red X factors at the lowest level.
The student group’s examination derived an average response of 1, indicating no
substantial potential customer demand for this laptop. Demand is strongest for
the laptop version of experiment 6, with an average score of 8.
Table 4. Full Factorial Experimental Design.
ANOVA (Full Factorial) Table
Exp.   C1   C3   C7   C1×C3   C1×C7   C3×C7   C1×C3×C7   Green Y   Costᵃ
1      −1   −1   −1     1       1       1        −1          1      $100
2       1   −1   −1    −1      −1       1         1          1       300
3      −1    1   −1    −1       1      −1         1          4       240
4       1    1   −1     1      −1      −1        −1          5       440
5      −1   −1    1     1      −1      −1         1          7       160
6       1   −1    1    −1       1      −1        −1          8       360
7      −1    1    1    −1      −1       1        −1          6       300
8       1    1    1     1       1       1         1          7       500

Factor costs per level:

Factor   Low (−1)   High (+1)
C1         $50        $250
C3          40         180
C7          10          70
Total     $100        $500
ᵃ To illustrate, the cost column is computed as follows for the first three experiments:
Experiment 1 Experiment 2 Experiment 3
C1 $50 $250 $50
C3 40 40 180
C7 10 10 10
Total cost $100 $300 $240
The data in the interaction columns are developed as follows. For the C1 × C3
interaction for experiment 1, the −1 in the C1 column multiplied by the −1 in
the C3 column yields +1. For experiment 2, +1 multiplied by −1 under columns
C1 and C3, respectively, yields −1 for the C1 × C3 interaction column. Similar
calculations are made for the remaining cells of the interaction columns. The
bottom three rows of Table 4 are computed as follows for column C1 :
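The worked figures for these bottom rows are not reproduced in this excerpt. Under the standard DOE convention (which may differ in detail from the authors' layout), the main effect of C1 is the average Green Y at C1's high level minus the average at its low level, using Table 4's coded levels and Green Y column:

```python
# Coded C1 levels and average Green Y responses, experiments 1-8 (Table 4).
c1_levels = [-1, 1, -1, 1, -1, 1, -1, 1]
green_y   = [ 1, 1,  4, 5,  7, 8,  6, 7]

high = [y for lvl, y in zip(c1_levels, green_y) if lvl == 1]
low  = [y for lvl, y in zip(c1_levels, green_y) if lvl == -1]

# Main effect = mean response at high level minus mean at low level.
main_effect_c1 = sum(high) / len(high) - sum(low) / len(low)
print(main_effect_c1)  # 0.75
```

The same contrast, applied to an interaction column instead of a main-effect column, yields that interaction's effect.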
As indicated above, Table 4 shows the two costing levels low (−1) and high (+1)
for factors C1 , C3 , and C7 and the results for only one sample of respondents. The
last column of Table 4 shows the total cost of each experiment. To explain how
these costs are computed, let us consider only three experiments, experiments 1,
2, and 6, in Table 4. The average responses for the laptop versions of experiments
1, 2 and 6 are 1, 1 and 8, respectively, on the 1−9 Likert scale. The costs for these
experiments are computed as follows:
The ANOVA table (Table 4) shows that experiment 6 is the optimum in terms of
the Green Y result. The average response of 8 indicates a high degree of willingness
to purchase the laptop version. This experiment may be replicated with different
sample groups to develop more confidence in this level of customer perception. The
cost of this laptop version is $360 (Table 4). These computations have been simplified
to illustrate the application of the associative-costing method. In some situations,
the product design team may need to make tradeoff decisions where the product
version with a high Green Y value may be too costly to produce, while the low-cost
version yields too low a Green Y value.4 In other words, a continuum exists that
begins at the design-to-cost point and ends at the cost-to-design point. Determining
the optimum point on this continuum is a multivariate decision problem, which
we recommend for further research.
NOTES
1. According to Zaccai (1995, p. 6), during the first century of the industrial revolution,
many people’s most basic needs were met by products narrowly designed by technical
specialists. These products, although technically functional, often did not meet consumers’
requirements and “as a result could be undermined by more desirable alternatives.” This
problem is magnified today by the variety of products consumers can acquire and the
options and methods of financing or ownership (e.g. lease vs. buy) available to them.
2. Target costing is a concept mired in obscurity. First, an examination of Ansari et al.’s
(1997, pp. 45–48) delineation of costs for target costing purposes reveals a vague yet
incomplete set of cost perspectives, which includes:
Value chain (organizational perspective).
Life cycle (time) perspective.
Customers (value) perspective.
Engineered (design) perspective.
Accounting (cost type) perspective.
This list contains overlapping functions and illustrates the common problems of func-
tional arrangements. One can add other perspectives, including micro (e.g. competition,
demand elasticity, and substitutes) and macro (e.g. industry and the economy) perspectives.
Second, the cost in the most common target-costing model, called “the deductive method”
(Kato, 1993, p. 36) (where Target Cost = Price − Profit) is a difference, which does not
exist for empirical measurement (Deleuze, 1994 [1964]; Sterling, 1970). (For a detailed
explanation of the Deleuzian difference in accounting, see Bayou & Reinstein, 2001.) This
means that a manufacturer does not and cannot measure (an empirical process) the target
cost of a target product. It can only determine this cost. But cost determination is a result
of calculation (a rational process) based on the design-to-cost view where design should
converge to cost, rather than vice versa, as explained above. In short, target costing is a
vague concept.
3. This assumption helps separate the product design from pricing issues. A target price,
the ground for the target-costing deductive method, is often a vague notion of affordability,
for several reasons. First, in some cases, this price may depend on the design, rather than
the design depending on the price; in other cases, the price and design are locked into a
circular interdependent relationship. Second, an affordable price is often a fuzzy variable that
changes within a wide range (Bayou & Reinstein, 1998). Finally, positing an affordable price
is valid when the target group of customers is homogenous in terms of tastes, preferences,
income, demand elasticity and available means of financing in the short run and the long run.
In the West, this high degree of customer homogeneity is rarely found for many products
and services. In short, the target price, which is a key element in the target-cost deductive
method, is often a vague target. Separating design issues from pricing issues makes the
marketing function and component suppliers more crucial in producing and selling the
designed product at acceptable profits.
4. This tradeoff decision is consistent with Bayou and Reinstein’s (1997) discussion of
the circular price-cost interaction and their “tiger chasing his tail” metaphor.
REFERENCES
Akiyama, K. (1991). Functional analysis. Cambridge, MA: Productivity Press.
Ansari, S. L., Bell, J. E., & the CAM-I Target Cost Core Group (1997). Target costing, the next frontier
in strategic cost management. Chicago: Irwin.
Bayou, M. E., & Reinstein, A. (1997, September/October). Formula for success: Target costing for
cost-plus pricing companies. Journal of Cost Management, 11(5), 30–34.
Bayou, M. E., & Reinstein, A. (1998). Applying Fuzzy set theory to target costing in the automobile
industry. In: P. H. Siegel, K. Omer, A. de Korvin & A. Zebda (Eds), Applications of Fuzzy Sets
and the Theory of Evidence to Accounting, II (Vol. 7, pp. 31–47).
Bayou, M. E., & Reinstein, A. (2000). Process-driven cost associations for creating value. Advances
in Management Accounting (Vol. 9, pp. 73–90). New York: JAI Press/Elsevier.
Bayou, M. E., & Reinstein, A. (2001). A systemic view of fraud explaining its strategies, anatomy and
process. Critical Perspectives on Accounting (August), 383–403.
Bhote, K. R., & Bhote, A. K. (2000). World class quality (2nd ed.). New York: American Management
Association.
Borgmann, A. (1995). The depth of design. In: R. Buchanan & V. Margolin (Eds), Discovering Design:
Explanation in Design Studies (pp. 13–22). Chicago: University of Chicago Press.
Cooper, R., & Chew, W. B. (1996). Control tomorrow’s costs through today’s designs. Harvard Business
Review (January–February), 88–97.
de Korvin, A., Bayou, M. E., & Kleyle, R. (2001). A fuzzy-analytic-hierarchical-process model for the
metal decision in the automotive industry. Paper No. 01IBECB-2, Society of Automotive Engineers
(SAE) IBEC 2001 International Body Engineering Conference and Exhibition, Detroit,
Michigan, October 16–18.
Deleuze, G. (1994). Difference and repetition. P. Patton (Trans.). New York: Columbia University
Press.
Good, I. J. (1977). The botryology of botryology. In: J. Van Ryzin (Ed.), Classification and Clustering
(pp. 73–94). New York: Academic Press.
Hartigan, J. A. (1975). Clustering algorithms. New York: Wiley.
Innes, J., Mitchell, F., & Sinclair, D. (2000, September). Activity-based costing in the UK’s largest
companies: A comparison of 1994 and 1999 survey results. Management Accounting Research,
11(3), 349–362.
Kato, Y. (1993). Target costing support systems: Lessons from Leading Japanese companies. Manage-
ment Accounting Research, 4, 33–47.
Kruskal, J. (1977). The relationship between multidimensional scaling and clustering. In: J. Van Ryzin
(Ed.), Classification and Clustering (pp. 17–44). New York: Academic Press.
Michaels, J. V., & Wood, W. P. (1989). Design to cost. New York: Wiley.
Monden, Y., & Hamada, K. (1991). Target costing and kaizen costing in Japanese automobile compa-
nies. Journal of Management Accounting Research, 3(Fall), 16–34.
Morello, A. (1995). ‘Discovering design’ means [re-]discovering users and projects. In: R. Buchanan &
V. Margolin (Eds), Discovering Design: Explanation in Design Studies (pp. 69–76). Chicago:
University of Chicago Press.
Redford, A., & Chal, J. (1994). Design for assembly: Principles and practice. London: McGraw-Hill.
Roozenburg, N. F. M., & Eekels, J. (1995). Product design: Fundamentals and methods. Chichester:
Wiley.
Ruffa, S. A., & Perozziello, M. J. (2000). Breaking the cost barrier. New York: Wiley.
Sterling, R. (1970). A theory of measurement of enterprise income. Iowa Printing.
Wood, L. (1993). Rapid automated prototyping: An introduction. New York: Industrial Press.
Zaccai, G. (1995). Art and technology: Aesthetics redefined. In: R. Buchanan & V. Margolin (Eds),
Discovering Design: Explanation in Design Studies (pp. 3–12). Chicago: University of Chicago
Press.
RELATIONSHIP QUALITY: A CRITICAL
LINK IN MANAGEMENT ACCOUNTING
PERFORMANCE MEASUREMENT
SYSTEMS
ABSTRACT
Performance measurement has benefited from several management ac-
counting innovations over the past decade. Guiding these advances is the
explicit recognition that it is imperative to understand the causal linkage that
leads a firm to profitability. In this paper, we contend that the relationship
quality experienced between two organizations has a measurable impact
on performance. Guided by prior models developed in distribution channel
and relationship marketing research (Cannon et al., 2000; Morgan & Hunt,
1994) we build a causal model of relationship quality that identifies key
relationship qualities that drive a series of financial and non-financial
performance outcomes. Using the healthcare industry to illustrate its
applicability, the physician practice – insurance company relationship is
described within the context of the model’s constructs and causal linkages.
Our model offers managers employing a causal performance measurement
system, such as the balanced scorecard (Kaplan & Norton, 1996) or the
action-profit-linkage model (Epstein et al., 2000), a formal framework to
analyze observed outcome metrics by assessing the underlying dynamics in
their third party relationships. Many of these forces have subtle, but tangible
INTRODUCTION
Performance measurement has benefited from several management accounting
innovations over the past decade. Guiding these advances is the explicit recognition
that it is imperative to understand the causal linkage that leads a firm to profitability.
One of the most significant innovations is the integration of non-financial variables
into performance measurement systems. Non-financial performance measures help
firms recognize how specific actions or outcomes impact profitability. Some models
such as the Action-Profit-Linkage (Epstein et al., 2000) and the Service Profit
Chain (Heskett et al., 1997) develop quantitative causal models that demonstrate
how a unit change in non-financial variables impacts profitability and other financial
variables. Other models, such as the Balanced Scorecard (Kaplan & Norton, 1996),
represent the interrelationships among financial and non-financial performance
variables as a set of causal hypotheses that serve as guideposts for managerial
decision making. What all of these performance models have in common is the
need for organizations to clearly understand the factors that drive performance
outcomes. In this paper, we focus on one specific domain inherent in most of
these performance models, inter-organizational transactions, to introduce a causal
model that links the quality of inter-organizational relationships to financial and
non-financial outcomes.
Successful organizations develop exchange relationships with other organizations
that persist over time, building a network that provides reliable
sources of goods or services. These relationships often develop to accommodate
specific strategic goals such as when a manufacturer outsources portions of
its value chain. Alternatively, inter-organizational exchange relationships also
arise from unique interdependencies that compel organizations to cooperate to
produce a product or service. In healthcare, for example, insurance companies and
physician practices are dependent upon each other to provide healthcare services
to patients. These relationships are recognized as important performance drivers
by most causal performance measurement models. For instance, the Balanced
Scorecard (BSC), in the internal-business-process perspective, acknowledges
the critical role that vendors play in providing quality inputs efficiently. The
Action-Profit-Linkage Model (APL) is a framework that “links actions taken
by the firm to the profitability of the firm within its market environment”
(Epstein et al., 2000, p. 47). A subset of firm actions are the inter-organizational
CONCEPTUAL FOUNDATION
To build a model of relationship quality that augments current performance mea-
surement frameworks, several studies drawn from the marketing literature provide
direction. For instance, as a starting point, legal contracts are viewed as one of the
primary governance structures that safeguard an exchange while maximizing ben-
efits for the relationship partners. In their “plural form” thesis, however, Cannon
et al. (2000) argue that the contract is just one of a variety of mechanisms
that provide the building blocks for governance structures in relationships, and that
focusing on the legal contract alone is a deficient approach to governing modern
exchanges. Their research on purchasing professionals examines the interaction
of contracts and relationship norms in various contexts and demonstrates that
Morgan and Hunt (1994) posit that the key mediating variables in a relational
exchange are commitment and trust. Relationship commitment is defined as
“an exchange partner believing that an ongoing relationship with another is so
important as to warrant maximum efforts at maintaining it; that is, the committed
party believes the relationship is worth working on to ensure that it endures
indefinitely” (Morgan & Hunt, 1994, p. 22). Relationship trust exists when
one exchange partner “has confidence in an exchange partner’s reliability and
integrity” (Morgan & Hunt, 1994, p. 23).
Drawing from the definitions above, these two constructs are positioned as
mediating variables given their central role in influencing partners to: (1) preserve
JANE COTE AND CLAIRE LATHAM
Fig. 1. Trust and Commitment Model of Relationship Quality.
the relationship through cooperation; (2) favor a longer time horizon, working to
ensure the relationship endures; and (3) support potentially high-risk transactions in the exchange
given the partners’ beliefs that neither will act in an opportunistic fashion. The
authors further note that trust is a determinant of relationship commitment, that is,
trust is valued so highly that partners will commit to relationships which possess
trust. Thus, they theorize that the presence of both commitment and trust is what
separates the successful from the failed outcomes. Building commitment and
trust to reach relationship marketing success requires devoting energies to careful
contracting, specific cooperative behaviors and other efforts that both partners
invest. We now turn to our discussion of these antecedents.
Antecedents
Legal Bonds
Legal bonds or legal contracting refers to the extent to which formal contractual
agreements incorporate the expectations and obligations of the exchange partners.
A high degree of contract specificity, as it relates to roles and obligations, places
constraints on the actions of exchange partners. It is this specificity and attention
to detail that typically supports a willingness by partners to invest time in an
exchange relationship. Exchange partners who make the effort to work out details
in a contract have a greater dedication to the long-term success of the partnership
(Dwyer et al., 1987). Thus, a higher degree of contract specificity is expected to
have a positive influence on relationship commitment.
Relationship Benefits
Firms that receive superior benefits from their partnership relative to other
options will be committed to the relationship. As with relationship termination
costs, partnership benefits have been measured along many dimensions. For
example, Morgan and Hunt (1994) capture relationship benefits as an evaluation
of the supplier on gross profit, customer satisfaction and product performances.
Alternatively, Anderson and Narus (1990) discuss benefit as satisfaction from the
perspective of whether the company’s working relationship with the exchange
partner, relative to others, has been a happy one. Finally, Heide and John (1992)
refer to the “norm of flexibility” where parties expect to be able to make adjust-
ments in the ongoing relationship to cope with changing circumstances. Morgan
and Hunt (1994) propose that benefits which relate to satisfaction and/or global
satisfaction generally show a strong relationship with all forms of commitment.
It is then expected that as the benefits to the relationship increase, relationship
commitment will be stronger.
Shared Values
Shared values are “the extent to which partners have beliefs in common about
what behaviors, goals, and policies are important or unimportant, appropriate or
inappropriate, and right or wrong” (Morgan & Hunt, 1994, p. 25). Dwyer et al.
(1987) note that contractual mechanisms and/or shared value systems ensure sus-
tained interdependence. Shared values are shown to be a direct precursor to both
relationship commitment and trust, that is, exchange partners who share values are
more committed to their relationships.
Communication
Communication refers to the formal and informal sharing of “meaningful and
timely information between firms” (Anderson & Narus, 1990, p. 44). Mohr and
Nevin (1990) note that communication is the “glue” that holds a relationship to-
gether. Anderson and Narus (1990) see past communication as a precursor to trust
but also that the building of trust over time leads to better communication. Hence,
relationship trust is positively influenced by the quality of communication between
the organizations.
Opportunistic Behavior
Opportunistic behavior is “self-interest seeking with guile” (Williamson, 1975,
p. 6). Opportunistic behavior is problematic in long-term relationships, affecting
trust concerning future interactions. Where opportunistic behavior exists, partners
can no longer trust each other, which leads to decreased relationship commitment.
We therefore expect a negative relationship between opportunistic behavior
and trust.
Outcomes
Acquiescence
Acquiescence is the extent to which a partner adheres to another partner’s requests
(Morgan & Hunt, 1994). This is an important construct in relationship quality
because when organizations are committed to successful relationships, they
recognize that the demands made by each other are mutually beneficial. Where
requests are perceived as extraordinary, those in a committed relationship are
willing to acquiesce because they value the relationship. Morgan and Hunt (1994)
found support for higher levels of acquiescence in highly committed relationships.
Propensity to Leave
Commitment creates a motive to continue the relationship. The investments to
create the committed relationship, described as the antecedents in the model,
directly impact the perceptions that one or both partners will dissolve the rela-
tionship in the near future. Partners in relationships expected to terminate in the
near term behave differently than those that perceive that both are invested in the
relationship for the long term. Thus propensity to leave, resulting from the level of
relationship commitment, is an outcome variable with performance implications.
Cooperation
Cooperation refers to the exchange parties working together to reach mutual goals
(Anderson & Narus, 1990). Even if partners have ongoing disputes concerning
goals, they will continue to cooperate because both parties’ termination costs are
high. Cannon et al. (2000) use the term “solidarity” which encompasses “the extent
to which parties believe that success comes from working cooperatively together
vs. competing against one another” (Cannon et al., 2000, p. 183). Though both
are outcome variables, Morgan and Hunt (1994) point out that cooperation is
proactive in contrast to acquiescence which is reactive. Organizations committed to
relationships and trusting of their partners, cooperate to make the relationship work.
Once trust and commitment is established, exchange partners will be more likely
to undertake high-risk coordinated efforts (Anderson & Narus, 1990) because they
believe that the quality of the relationship mitigates the risks.
Table 1. Variables and Proposed Direction of Effect.

Antecedent Variable | Mediating Variable | Direction | Impact on Mediating
Legal bonds | Relationship commitment | + | Exchange partners having a higher degree of contract specificity have a greater commitment to the relationship.
Relationship termination cost | Relationship commitment | + | Exchange partners having a higher measure of relationship termination costs have a greater commitment to the relationship.
Relationship benefits | Relationship commitment | + | Exchange partners possessing a higher measure of relationship benefits have a greater commitment to the relationship.
Shared values | Relationship commitment | + | Exchange partners possessing a higher measure of shared values have a greater commitment to the relationship.
Shared values | Trust | + | Exchange partners with a higher measure of shared values have greater relationship trust.
Communication | Trust | + | Exchange partners with a higher degree of formal and informal communication have greater trust.
Opportunistic behavior | Trust | − | Exchange partnerships where a higher degree of opportunistic behavior exists have less trust.

Mediating Variable | Outcome Variable | Direction | Impact on Outcomes
Relationship commitment | Acquiescence | + | Exchange partners who have a higher measure of relationship commitment are more willing to make relationship-specific adaptations (higher measure of acquiescence).
Relationship commitment | Propensity to leave | − | Exchange partners who have a higher measure of relationship commitment are less likely to end the relationship.
Relationship commitment | Cooperation | + | Exchange partners who have a higher measure of relationship commitment are more likely to cooperate.
Trust | Cooperation | + | Exchange partners who have a higher measure of trust are more likely to cooperate.
Trust | Functional conflict | + | Exchange partners who have a higher measure of trust are more likely to resolve disputes in an amicable manner (functional conflict).
Functional Conflict
The resolution of disputes in a friendly or amicable manner is termed functional
conflict; such conflict is a necessary part of doing business (Anderson & Narus, 1990). Morgan
and Hunt (1994) show that trust leads an exchange partner to believe that future
conflicts will be functional, rather than destructive. When an organization is con-
fident that issues that arise during the conduct of their arrangement with the other
organization will be met with positive efforts to reach a mutual solution, they
perceive the relationship quality to be higher and expect tangible benefits to result.
Uncertainty
Decision-making uncertainty encompasses exchange partners’ perceptions
concerning relevant, reliable, and predictable information flows within the
relationship. The issue relates to whether the exchange partner is receiving
enough information, in a timely fashion, which can be then used to confidently
reach a decision (Achrol, 1991; Morgan & Hunt, 1994). Cannon et al. (2000)
conclude that uncertainty creates information problems in exchange. They further
divide uncertainty into external and internal, where external refers to the degree of
variability in a firm’s supply market and internal refers to task ambiguity. Morgan
and Hunt (1994) support a negative relationship between trust and uncertainty. The
trusting partner has more confidence that the exchange partner will act reliably and
consistently.
In summary, we have discussed the antecedent variables legal bonds, relation-
ship termination costs, relationship benefits, shared values and communication.
These antecedents have been shown to influence commitment and trust. Com-
mitment and trust, as mediators, are then shown to have an effect on relationship
quality outcomes acquiescence, propensity to leave, cooperation, financial state-
ment impact, functional conflict and uncertainty. Table 1 provides an overview of
this discussion of variables and direction of effect.
Are there environmental contexts where this model has greater relevance than others? The
model represents an initial step towards measuring the role that relationship
dynamics have in organizations by considering them as a part of performance
measurement systems. Further research can bring refinements that will build an
integrated and insightful perspective to performance measurement systems.
REFERENCES
Achrol, R. (1991). Evolution of the marketing organization: New forms for turbulent environments.
Journal of Marketing, 55(October), 77–94.
Anderson, J. C., & Narus, J. A. (1990). A model of distributor firm and manufacturer firm working
partnerships. Journal of Marketing, 54(January), 42–58.
Burns, L. R. (1999). Polarity management: The key challenge for integrated health systems. Journal
of Healthcare Management, 44(January–February), 14–34.
Cannon, J., Achrol, R., & Gundlach, G. (2000). Contracts, norms and plural form governance. Journal
of the Academy of Marketing Science, 28(Spring), 180–194.
Cote, J., & Latham, C. (2003). Exchanges between healthcare providers and insurers: A case study.
Journal of Managerial Issues, 15(Summer), 191–207.
Dwyer, F. R., Schurr, P. H., & Oh, S. (1987). Developing buyer-seller relationships. Journal of Marketing,
51(April), 11–27.
Eisenhardt, K. M. (1989). Building theories from case study research. Academy of Management
Review, 14(October), 532–550.
Epstein, M. A., Kumar, P., & Westbrook, R. A. (2000). The drivers of customer and corporate
profitability: Modeling, measuring, and managing the causal relationships. Advances in
Management Accounting, 9, 43–72.
Heide, J. B., & John, G. (1992). Do norms matter in marketing relationships? Journal of Marketing,
56(April), 32–44.
Heskett, J. L., Sasser, W. E. Jr., & Schlesinger, L. A. (1997). The service profit chain: How leading
companies link profit and growth to loyalty, satisfaction, and value. New York, NY: Free Press.
Kaplan, R. S. (1989). Kanthal (A). Harvard Business School Case #190–002. Harvard Business
School Press.
Kaplan, R. S., & Norton, D. P. (1996). The balanced scorecard. Boston, MA: Harvard Business School
Press.
Leone, A. J. (2002). The relation between efficient risk-sharing arrangements and firm characteristics:
Evidence from the managed care industry. Journal of Management Accounting Research, 14,
99–118.
Mohr, J., & Nevin, J. (1990). Communication strategies in marketing channels: A theoretical
perspective. Journal of Marketing, 54(October), 36–51.
Morgan, R. M., & Hunt, S. D. (1994). The commitment-trust theory of relationship marketing. Journal
of Marketing, 58(July), 20–38.
Morton, W. (2002). The unprofitable customer: How you can separate the wheat from the chaff. The
Wall Street Journal (October 28), A1.
Pascual, A. M. (2001). The doctor will really see you now. Business Week (July 9), 10.
Rindfleisch, A., & Heide, J. (1997). Transaction cost analysis: Past, present, and future applications.
Journal of Marketing, 61(October), 30–54.
190 JANE COTE AND CLAIRE LATHAM
Shapiro, B. P., Rangan, V. K., Moriarty, R. T., & Ross, E. B. (1987). Manage customers for profits
(not just sales). Harvard Business Review, 65(September–October), 101–107.
Sharpe, A. (1998a). Boutique medicine: For the right price, these doctors treat patients as precious.
Their consultancy signals rise of a system critics say favors the wealthy, practicing HMO
avoidance. The Wall Street Journal (August 12), A1.
Sharpe, A. (1998b). Health care: Discounted fees cure headaches, some doctors find. The Wall Street
Journal (September 15), B1.
Shute, N. (2002). That old time medicine. U.S. News and World Reports (April 22), 54–61.
Solomon, R. C. (1992). Ethics and excellence. Oxford: Oxford University Press.
Williamson, O. (1975). Markets and hierarchies: Analysis and antitrust implications. New York, NY:
Free Press.
Williamson, O. (1985). The economic institutions of capitalism: Firms, markets, and relational
contracting. New York, NY: Free Press.
MEASURING AND ACCOUNTING
FOR MARKET PRICE RISK
TRADEOFFS AS REAL OPTIONS IN
STOCK FOR STOCK EXCHANGES
ABSTRACT
The flexibility of managers to respond to risk and uncertainty inherent
in business decisions is clearly of value. This value has historically been
recognized in an ad hoc manner in the absence of a methodology for more
rigorous assessment of value. The application of real option methodology
represents a more objective mechanism that allows managers to hedge
against adverse effects and exploit upside potential. Of particular interest to
managers in the merger and acquisition (M&A) process is the value of such
flexibility related to the particular terms of a transaction. Typically, stock
for stock transactions take more time to complete as compared to cash transactions, given
the time lapse between announcement and completion. Over this period, if
stock prices are volatile, stock for stock exchanges may result in adverse
selection through the dilution of shareholder wealth of an acquiring firm or a
target firm.
The paper develops a real option collar model that may be employed by
managers to measure the market price risk involved to their shareholders
in offering or accepting stock. We further discuss accounting issues related
to this contingency pricing effect. Using an acquisition example from the U.S.
banking industry, we illustrate how the proposed collar hedges market price risk
and avoids dilution to both sets of shareholders.
INTRODUCTION
An important area of research in management accounting is the implementation
of strategic management decisions and managerial behavior that focus on ways
to improve management and corporate performance. With the increased reliance
of firms on financial instruments to manage business risk, the measurement and
disclosure of that risk have become increasingly important in accounting. This research looks
at one element of business risk, specifically the measurement of market price
risk to shareholders in a merger or acquisition transaction using an emerging
capital budgeting tool – real option methodology – and the related accounting issues of market
price risk to the acquiring firm.
Merger and acquisition activity has increased sharply since the 1990s. Strikingly
noticeable in the merger and acquisition activity of this decade is that companies
are increasingly paying for acquisitions with stock rather than cash. Stock for
stock offers typically take more time from announcement to completion than
cash offers. This is particularly noticeable in regulated mergers as in the banking
industry. Over this extended time from announcement to completion, the target
and acquiring firm stock prices can change dramatically due to various factors
even though they are in the same industry. In particular, if the acquiring firm's stock
is highly volatile, it can significantly affect the value of the deal at consummation
if there are no price protection conditions built into the deal.
Published literature dealing with price protection in mergers and acquisitions
is sparse. However, the contingent pricing effect on the value of a deal in a stock
for stock exchange due to stock price volatility has important risk management
implications for managers and both sets of shareholders. A practical way to
provide price protection to both acquiring and target firm shareholders is to set
conditions for active risk management by managers. For example, one possibility
is to provide managers the flexibility to renegotiate the deal and hedge the market
price risk by specifying a range within which the deal is allowed to fluctuate as in a
collar type arrangement.
This paper investigates how to better structure a stock for stock offer as a collar
using real option methodology when stock prices are volatile and when there
is considerable time lapse between announcement and final consummation. We
propose that managers use real option analysis to measure the price risk involved
to their shareholders. The main advantage of using real option analysis is that it can
capture and measure the value of intangibles, such as maintaining flexibility when
there is high uncertainty. We argue that explicit valuation of managerial flexibility,
included in the terms of the deal, may enhance deal value for both parties and
reinforce favorable managerial behaviors.
We also discuss accounting issues related to business combinations and in
particular accounting for these intangibles. We propose that these real
options should perhaps be accounted for as contingencies. Using a recent acquisition case
example from the U.S. banking industry, the paper illustrates how the proposed collar
is used to hedge the market price risk and how this acquisition structure avoids
earnings per share (EPS) dilution to both sets of shareholders.
The paper is organized as follows. First, we discuss stock price variability and its
valuation effects on stock for stock transactions. Second, we introduce real option
theory and managerial flexibility in M&A decisions. Third, we present well-known
formulas for optimal exchange ratios of a target and an acquiring firm. Fourth, we
discuss the proposed real options collar model for valuing managerial flexibility in
stock for stock transactions. Fifth, we discuss accounting issues related to business
combinations and in particular accounting for contingencies. Sixth, we apply the
real option collar model to value the recent acquisition decision of BankFirst
Corporation by BB&T. Finally, we discuss our findings and future research.
Fig. 1. Dependency of Acquiring and Target Firm Wealth on Exchange Ratio and Post
Merger P/E Ratio.
This dependency is shown in Fig. 1. Of particular interest is the region where shareholders of both firms will
benefit. An exchange ratio f* (F_min ≤ f* ≤ F_max) for some (P/E)_c that lies in the
optimal region will theoretically increase post acquisition shareholder wealth of
both parties. The minimum post completion price to earnings ratio (P/E)*_c is where
the two expressions equate (F_min = F_max). As shown in Fig. 1, an exchange ratio
(f) that is greater than the minimum exchange ratio (F_min) and less than the maximum
exchange ratio (F_max), for any post completion (P/E)_c ratio greater than the
minimum post completion price to earnings ratio (P/E)*_c, should be negotiated.
An acquiring firm could consider a call option or a cap, which ensures that the
holder would pay the minimum deal value. A target, on the other hand, could consider a put option or a floor,
which ensures that the holder would receive the maximum deal value. Consequently,
managerial flexibility to both parties can be structured as a collar arrangement,
going long on a cap and shorting a floor.
An acquiring firm should buy a real call option on theoretical value with a
strike price equal to a deal value that caps the exchange ratio at a maximum
exchange ratio that will not dilute post acquisition stock value for acquiring firm
shareholders. The underlying asset is the theoretical deal value. The cap would
guarantee that the deal value at any given time would be the minimum of the
theoretical value or the deal value based on the maximum exchange ratio. On
the other hand, to maximize its payment a target should hold a real put option on
the theoretical value with a strike price based on the minimum exchange ratio.
In this way the floor guarantees that a target would receive the maximum of the
theoretical value or the deal value using the minimum exchange ratio.
In order to price the cap and the floor, we use the binomial lattice framework
for pricing options on a stock. The real option prices are thus consistent with
risk-free arbitrage pricing. Let S_A and S_B denote the stock prices of an acquiring firm
and a target firm at announcement of the deal. The stock price of firm i, S_i,
follows a random walk. The time between the announcement (t_0) and the actual
closing of the deal (t_1) is denoted by Δt, where Δt = t_1 − t_0. Assume that there
are four decision points (T = 0, 1, 2, 3) pertaining to when a deal may be closed.
We divide the time period Δt into equal periods of length ΔT = Δt/3, which
may be measured in weeks or months. In the binomial option pricing model, the
formulas to compute the risk neutral probability p_i, the upward movement factor
u_i, and the downward movement factor d_i for stock price i are as follows:

u_i = e^{σ_i √ΔT}

d_i = e^{−σ_i √ΔT}

p_i = (e^{r_f ΔT} − d_i) / (u_i − d_i)

where σ_i is the stock price volatility of firm i. The short-term interest rate (r_f)
and the stock price volatilities of the acquiring firm (σ_A) and target firm (σ_B)
are assumed to remain constant in the model. Upward and downward movements in stock price
are represented by the state variable k and denoted by (+) and (−) respectively.
Using the above parameters we next develop three period binomial lattices for the
movement in stock prices of an acquiring firm and a target firm, as shown in Fig. 2.
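As a concrete illustration, the lattice parameters above can be computed directly. This is a minimal sketch; the numeric inputs (volatility, rate, and time horizon) are hypothetical and not taken from the paper.

```python
import math

def binomial_params(sigma_i, r_f, dT):
    """Risk-neutral binomial parameters for firm i's stock price lattice.

    sigma_i: stock price volatility of firm i (assumed constant)
    r_f:     short-term risk-free interest rate (assumed constant)
    dT:      length of one lattice period, Delta T = Delta t / 3
    """
    u = math.exp(sigma_i * math.sqrt(dT))    # upward movement factor
    d = math.exp(-sigma_i * math.sqrt(dT))   # downward movement factor
    p = (math.exp(r_f * dT) - d) / (u - d)   # risk-neutral probability
    return u, d, p

# Hypothetical inputs: 30% volatility, 5% rate, four months split into 3 periods
u_A, d_A, p_A = binomial_params(0.30, 0.05, (4 / 12) / 3)
```

Note that d = 1/u, so the lattice recombines: an up move followed by a down move returns the price to its starting level.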
198 HEMANTHA S. B. HERATH AND JOHN S. JAHERA JR.
For any exchange ratio f, the deal value P_T^k at any period T in state k
can be calculated as follows:

P_T^k = f N_B S_A^k

As per our notation, the state variable k will be either + or −. The deal
values based on the maximum and minimum exchange ratios are given by
U_T^k = N_B S_A^k F_max and L_T^k = N_B S_A^k F_min respectively. The theoretical deal value
based on a fixed exchange ratio is given by V_T^k = N_B S_A^k F. Notice that the values
of U_T^k and L_T^k are also a function of the acquiring
firm stock price. Alternatively, a target and acquiring firm may agree upon
minimum and maximum deal values based on the acquiring firm's stock price
at announcement. In this case, the maximum and minimum deal values will be
equal to the constant expressions U = N_B S_A F_max and L = N_B S_A F_min.
Notice that the stock prices of both the target and acquiring firms can vary from the
time a deal is announced to when it is completed. In addition, using the historic
price volatility, the ratio of the projected stock price of a target firm to an acquiring
firm under each state k and time T can be computed. We refer to this variable
exchange ratio as the critical exchange ratio, defined by

f^k = S_B^k / S_A^k

Notice that the critical exchange ratio f^k, which is based on projected stock prices, can
be used to determine the target's market value at any state k. The target's market
value W_T^k based on the variable exchange ratio at any period T in state k can be
obtained by substituting f^k in the expression for P_T^k. The target's market value is

W_T^k = f^k N_B S_A^k
A buying firm would want flexibility to minimize the theoretical deal value at
completion. Thus a cap would guarantee that a deal is valued as the minimum of
the theoretical deal value or the agreed upon value based on the maximum exchange
ratio, given by min[U, V_T^k] = min[N_B S_A F_max, N_B F S_A^k]. An acquiring firm which
holds the cap should pay a target a terminal payoff of max[N_B F S_A^k − N_B S_A F_max,
0] = max[N_B F S_A^k − U, 0] to receive the benefit of having the right to pay
min[N_B S_A F_max, N_B F S_A^k]. Notice that U is analogous to the exercise price of the call option
when the underlying asset, in option terminology, is the theoretical deal value V_T^k.
A target should hold a floor, which would guarantee the maximum of the theoretical
value or the agreed upon deal value based on the minimum exchange ratio, given
by max[L, V_T^k] = max[N_B S_A F_min, N_B F S_A^k]. The floor would provide a target
the managerial flexibility to maximize the value of the deal to its shareholders. A
target should pay an acquiring firm a floor premium of max[F_min N_B S_A − N_B F S_A^k,
0] = max[L − N_B F S_A^k, 0] to have the right to receive the benefit of max[N_B S_A F_min,
N_B F S_A^k]. Here L is analogous to the exercise price of the put option.
Given that the theoretical post acquisition market price of a target and acquiring
firm will depend on post acquisition earnings and the (P/E) ratio, the following
optimal terminal cap and floor values can be identified.
Proposition 1. If the exchange ratio is greater than the maximum exchange ratio,
then post acquisition shareholder wealth will be lower; that is, S_A > S_c. The call
option (cap) to an acquiring firm has value (X_T^k > 0) and the put option (floor)
to the target has zero value (Y_T^k = 0).

Proof: To prove this result let us assume that f > F_max, where

F_max ≤ (1/N_B) [ (E_A N_A + E_B N_B)(P/E)_c / S_A − N_A ]

then

f > (1/N_B) [ (E_A N_A + E_B N_B)(P/E)_c / S_A − N_A ]

f N_B S_A > (E_A N_A + E_B N_B)(P/E)_c − N_A S_A

(f N_B + N_A) S_A > (E_A N_A + E_B N_B)(P/E)_c

S_A > [(E_A N_A + E_B N_B) / (f N_B + N_A)] (P/E)_c

S_A > E_c (P/E)_c

S_A > S_c

where the post acquisition earnings per share of the acquiring firm is given by

E_c = (E_A N_A + E_B N_B) / (f N_B + N_A)

When f > F_max, an acquiring firm would prefer to cap the deal at a maximum deal
value of X_T^k = min[N_B S_A F_max, N_B f S_A^k]. Since we assumed that both parties
agreed on the fixed exchange ratio at announcement, by substituting f = F we obtain
X_T^k = min[U, V_T^k]. The terminal call option payoff is then equal to max[N_B F S_A^k
− N_B S_A F_max, 0] = F N_B S_A^k − N_B S_A F_max > 0. For a target, the terminal
payoff of its put option is Y_T^k = max[F_max N_B S_A − N_B F S_A^k, 0] = 0. Therefore
if f > F_max it is beneficial for the acquiring company to cap the deal since it
will otherwise dilute the post acquisition EPS of its shareholders.
Proposition 2. If the exchange ratio is less than the minimum exchange ratio, then
the deal does not preserve the post acquisition shareholder wealth of target
shareholders. The call option (cap) to the acquiring firm has no value (X_T^k = 0)
and the put option (floor) has value (Y_T^k > 0).

Proof: To prove this result let us assume that f < F_min, where

F_min ≥ S_B N_A / [(E_A N_A + E_B N_B)(P/E)_c − S_B N_B]

then

f < S_B N_A / [(E_A N_A + E_B N_B)(P/E)_c − S_B N_B]

(E_A N_A + E_B N_B)(P/E)_c f < S_B N_A + S_B N_B f

(E_A N_A + E_B N_B)(P/E)_c f < S_B (N_A + N_B f)

[(E_A N_A + E_B N_B) / (N_A + N_B f)] (P/E)_c f < S_B

E_c (P/E)_c f < S_B

Substituting for the exchange ratio f = F = N_0/N_B we obtain the
following:

E_c (P/E)_c N_0 < N_B S_B

P_c N_0 < N_B S_B

This completes the proof; the target shareholders' wealth after the acquisition
is less than the target shareholders' wealth prior to the acquisition. When f < F_min, a
target firm would prefer to receive a maximum deal value equal to Y_T^k = max[N_B
S_A F_min, N_B f S_A^k]. Again, by substituting f = F, we get Y_T^k = max[L, V_T^k]. The
terminal put option payoff is then equal to max[N_B S_A F_min − N_B F S_A^k, 0] =
N_B S_A F_min − N_B F S_A^k > 0. For an acquiring firm, the terminal payoff of its call
option is X_T^k = max[F N_B S_A^k − F_min N_B S_A, 0] = 0. Therefore if f < F_min it
is beneficial for the target company to hold a floor to maximize its deal value.
Otherwise the deal dilutes the post acquisition EPS of target shareholders.
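Under the bound formulas used in Propositions 1 and 2, the maximum and minimum exchange ratios can be computed directly from pre-deal fundamentals. The function below is a sketch; the share prices, EPS figures, share counts, and assumed post-completion P/E in the example are hypothetical, not from the paper.

```python
def exchange_ratio_bounds(S_A, S_B, E_A, E_B, N_A, N_B, pe_c):
    """Exchange-ratio bounds from the propositions' starting inequalities.

    F_max = (1/N_B) * [(E_A*N_A + E_B*N_B) * (P/E)_c / S_A - N_A]
    F_min = S_B*N_A / [(E_A*N_A + E_B*N_B) * (P/E)_c - S_B*N_B]
    """
    total_earnings = E_A * N_A + E_B * N_B
    # Largest ratio that avoids diluting acquiring firm shareholders
    f_max = (total_earnings * pe_c / S_A - N_A) / N_B
    # Smallest ratio that preserves target shareholder wealth
    f_min = S_B * N_A / (total_earnings * pe_c - S_B * N_B)
    return f_min, f_max

# Hypothetical example: any negotiated ratio f with f_min < f < f_max
# leaves both sets of shareholders undiluted (cf. Proposition 3).
f_min, f_max = exchange_ratio_bounds(S_A=40.0, S_B=15.0, E_A=4.0, E_B=1.5,
                                     N_A=100.0, N_B=50.0, pe_c=11.0)
```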
Proposition 3. If the critical exchange ratio lies between the minimum exchange
ratio and the maximum exchange ratio (F_min < f < F_max), then both acquiring firm
and target firm shareholders gain. There is no EPS dilution to either party and
the collar has no value.

Proof: To prove this result, first let us assume that f < F_max, where

F_max ≤ (1/N_B) [ (E_A N_A + E_B N_B)(P/E)_c / S_A − N_A ]

then

f N_B S_A < (E_A N_A + E_B N_B)(P/E)_c − N_A S_A

(f N_B + N_A) S_A < (E_A N_A + E_B N_B)(P/E)_c

S_A < [(E_A N_A + E_B N_B) / (f N_B + N_A)] (P/E)_c

S_A < E_c (P/E)_c

S_A < S_c

The post acquisition stock price of the acquiring firm is greater than its stock price
before the acquisition. Next, let us assume that f > F_min, where

F_min ≥ S_B N_A / [(E_A N_A + E_B N_B)(P/E)_c − S_B N_B]

then

(E_A N_A + E_B N_B)(P/E)_c f > S_B N_A + S_B N_B f

[(E_A N_A + E_B N_B) / (N_A + N_B f)] (P/E)_c f > S_B

E_c (P/E)_c f > S_B

Substituting for the exchange ratio f = F = N_0/N_B we obtain the following:

P_c N_0 > N_B S_B

The post acquisition shareholder wealth of a target firm is greater than its shareholders'
wealth before the acquisition. Therefore, if F_min < f < F_max then there is
no dilution in the post acquisition shareholder wealth of an acquiring firm or
a target firm. By substituting f = F we obtain the following. When F < F_max,
the value of the cap is given by X_T^k = max[F N_B S_A^k − N_B S_A F_max, 0] = 0. Similarly,
when F > F_min, the value of the floor is given by Y_T^k = max[N_B S_A F_min − N_B F S_A^k, 0]
= 0. The value of the collar can be found by holding a cap and shorting a floor;
collar = cap − floor, which is equal to zero. In Table 1 we summarize the terminal
payoffs of the cap and the floor.
Using the terminal payoff values in Table 1, we can now price the cap and the floor
based on risk-free arbitrage pricing. The payoff values of the call option (X_T^k) and
the put option (Y_T^k) in each state (k) and time (T) are

X_T^k = max[N_B S_A^k F − U, 0]

Y_T^k = max[L − N_B S_A^k F, 0]
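With the announcement-based constants U = N_B S_A F_max and L = N_B S_A F_min, the terminal payoffs just given translate directly into code. This is a sketch; all numeric inputs are hypothetical.

```python
def cap_payoff(S_A_k, F, U, N_B):
    """Acquirer's real call (cap) at a terminal node: max[N_B*S_A^k*F - U, 0]."""
    return max(N_B * S_A_k * F - U, 0.0)

def floor_payoff(S_A_k, F, L, N_B):
    """Target's real put (floor) at a terminal node: max[L - N_B*S_A^k*F, 0]."""
    return max(L - N_B * S_A_k * F, 0.0)

# Hypothetical setup: fixed ratio F, bounds F_max/F_min agreed at announcement
N_B, S_A, F, F_max, F_min = 50.0, 40.0, 0.5, 0.55, 0.45
U = N_B * S_A * F_max   # cap strike (maximum agreed deal value)
L = N_B * S_A * F_min   # floor strike (minimum agreed deal value)
```

Since U > L, at most one of the two payoffs is positive at any node; holding the cap and shorting the floor gives the collar payoff cap − floor.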
In Fig. 3, we present payoff values in each state (k) and time (T) for a real call
and a real put (as shown in box). Next, we employ a risk neutral approach to price
the real call and real put options. Since we have assumed that the time between
announcement and final consummation was divided into four decision points, we
need to find managerial flexibility available to both buyer and seller in each period
T = 1, 2 and 3. The value of managerial flexibility to the buyer can be priced
as three European calls with respective maturities at times T = 1, 2 and 3. Let X_T
denote the value of a T period call. For example, a two period call to offer the minimum
deal value in period T = 2 can be valued as follows.
The real call X_2 is valued by finding the terminal payoff values at T = 2,
namely X_2^{++}, X_2^{+−} and X_2^{−−}, and folding back two periods. For the
payoffs at T = 2 we obtain

X_2^{++} = max[N_B S_A^{++} F − U, 0]

X_2^{+−} = max[N_B S_A^{+−} F − U, 0]

X_2^{−−} = max[N_B S_A^{−−} F − U, 0]
Fig. 3. Payoff Values for the Real Call and Real Put (in Box).
X_2 = e^{−r_f ΔT} [p_A X_2^+ + (1 − p_A) X_2^−]
Similarly, using the risk neutral procedure we can calculate flexibility to minimize
deal values at T = 1 and 3, given by X1 and X3 respectively.
The value of managerial flexibility to a target can be priced as three European
puts with respective maturities at times T = 1, 2 and 3. Let Y_T denote the value of
a T period real put option. For example, the one period put to receive the maximum
deal value in period T = 1, Y_1, can be valued by finding the terminal payoff values
at T = 1, namely Y_1^+ and Y_1^−, and folding back one period. For the payoffs at T = 1 we obtain

Y_1^+ = max[L − N_B S_A^+ F, 0]

Y_1^− = max[L − N_B S_A^− F, 0]

Using risk neutral discounting, we next find the value of the one period put option as

Y_1 = e^{−r_f ΔT} [p_A Y_1^+ + (1 − p_A) Y_1^−]
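The risk-neutral folding-back procedure used to value X_T and Y_T can be sketched generically as a function of the terminal payoff. This is a sketch; the lattice parameters and payoff used in any particular run are assumptions supplied by the caller.

```python
import math

def price_real_option(payoff, S0, u, d, p, r_f, dT, T):
    """Time-0 value of a European real option maturing after T lattice periods.

    payoff: terminal payoff as a function of the acquirer's stock price,
            e.g. lambda s: max(N_B * s * F - U, 0.0) for the cap
    S0:     acquirer stock price at announcement; u, d, p: lattice parameters
    """
    # Terminal payoffs at the T+1 nodes of a recombining lattice
    values = [payoff(S0 * u ** j * d ** (T - j)) for j in range(T + 1)]
    disc = math.exp(-r_f * dT)
    # Fold back one period at a time under the risk-neutral probability p
    for _ in range(T):
        values = [disc * (p * values[j + 1] + (1 - p) * values[j])
                  for j in range(len(values) - 1)]
    return values[0]
```

Pricing the cap payoff for T = 1, 2, 3 gives X_1, X_2, X_3; substituting the floor payoff gives Y_1, Y_2, Y_3.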
In order to consider the value of managerial flexibility available to both the target
and acquiring firm to benefit by renegotiating the deal at a future decision point,
we can go long on the real call and short the real put. This would cap the theoretical
deal value to a maximum exchange ratio to benefit the buyer while guaranteeing
a deal value based on the minimum exchange ratio to benefit the seller. Since the cap
and floor have different strike prices, U_T^k = N_B S_A^k F_max and L_T^k = N_B S_A^k F_min, by
holding a cap and selling a floor we can effectively create a collar. Therefore, the
value of the collar at any period T can be computed as the difference between the value
of managerial flexibility to a buyer (call) and the value of flexibility to a seller (put),
given by Φ_T = X_T − Y_T. Thus the deal value including managerial flexibility to
both buyer and seller (V) can be calculated as:

V = V_0 + Σ_{T=1}^{3} Φ_T

where V_0 is the acquisition value without managerial flexibility agreed between
the two parties at time T = 0, given by V_0 = F N_B S_A, where F is the fixed exchange
ratio, and Φ_T is the net value of flexibility: (value of flexibility to buyer) − (value
of flexibility to seller). Notice that the value of flexibility to a buyer will increase the
purchase price of an acquisition while the value of flexibility to a seller will reduce
it. The value of an acquisition, if only the buyer's managerial flexibility to renegotiate
the deal by exercising the real call option is considered, is equal to

V_A = V_0 + Σ_{T=1}^{3} X_T
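The aggregation of the fixed deal value and the per-period net flexibility values can be sketched as follows. The cap and floor values passed in are assumed to have been priced already (e.g. by risk-neutral lattice discounting), and all numbers in the example are hypothetical.

```python
def deal_value_with_flexibility(F, N_B, S_A, cap_values, floor_values):
    """V = V0 + sum over T of (X_T - Y_T), with V0 = F * N_B * S_A.

    cap_values:   [X_1, X_2, X_3], value of flexibility to the buyer
    floor_values: [Y_1, Y_2, Y_3], value of flexibility to the seller
    """
    V0 = F * N_B * S_A                         # fixed deal value at announcement
    net_flex = [x - y for x, y in zip(cap_values, floor_values)]
    return V0 + sum(net_flex)

# Hypothetical values: buyer flexibility raises the price, seller flexibility lowers it
V = deal_value_with_flexibility(F=0.5, N_B=50.0, S_A=40.0,
                                cap_values=[12.0, 18.0, 25.0],
                                floor_values=[9.0, 14.0, 20.0])
```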
A real option collar can alternatively be valued instead of separately pricing each
cap and floor. In order to do so, one can find the value of the collar (Φ_T) at each time
period (T). Consider the terminal collar payoff at each state (k) and time (T), which
is equivalent to max[N_B S_A^k F − U, 0] − max[L − N_B S_A^k F, 0], and then price the
collar at each period (T) using risk neutral discounting. This method will
yield an equivalent value for the real option collar.
Mergers and acquisition transactions are treated for accounting purposes as busi-
ness combinations. In a business combination transaction one economic unit unites
with another or obtains control over another economic unit. Accordingly, there are
two forms of business combinations: purchase of net assets, where an enterprise
acquires net assets that constitute a business; and purchase of shares, where an
enterprise acquires a sufficient equity interest in one or more enterprises and obtains
control of that enterprise or enterprises. The enterprise(s) involved in a business
combination can be either incorporated or unincorporated. However, the purchase
of some (less than 100%) of an entity’s assets is not a business combination. The
form of consideration in a business combination could be cash or a future promise
to pay cash, other assets, common or preferred stock, a business or a subsidiary or
any combination of the above.
In a purchase of assets, an enterprise may buy only the assets, leaving the
seller with the cash or other consideration received and with its liabilities. Alternatively, a buyer may
purchase all the assets and assume all the liabilities. The more common form of
business combination, however, is the purchase of shares. In a purchase of shares
transaction, the acquiring firm's management makes a tender offer to target shareholders
to exchange their shares for cash or for acquiring firm shares. The target
continues to operate as a subsidiary. In both purchase of net assets and purchase of
shares the assets and liabilities of the target are combined with the assets and liabil-
ities of the acquiring firm. If the acquiring firm obtains control by purchasing net
assets the combining takes place in acquirer’s books. If acquirer achieves control
by purchasing shares combining takes place when the consolidated financial state-
ments are prepared. The types of business combinations are summarized in Fig. 4.
Prior to July 1, 2001, in the U.S., there were two alternative approaches to account
for business combinations. The pooling method was used when an acquirer could
not be identified. Stock deals were accounted for by the pooling method. The
pooling method was only possible in a share exchange since if cash was offered
then the company offering cash became the acquirer and the purchase method had
to be used. Under the pooling method, the price paid was ignored, fair market
values were not used, and the book values of the two companies were simply added
together. The pooling method avoided creating goodwill and ignored the value created
in a business combination transaction; reported earnings were also higher as a result.
Since July 1, 2001, revised accounting standards in the U.S. allow only the
purchase method to account for mergers and acquisitions. Regardless of the
purchase consideration, if one company can be identified as the acquirer, the
purchase method has to be used. Under this method, the acquiring company
records the net assets of the target at the purchase price paid. The purchase price
may include cash payments, the fair market value of shares issued, and the present
value of promises to pay cash in the future. Goodwill is the difference between the
purchase price paid and the fair market value of the target's net assets. Goodwill
is reviewed for impairment, and any impairment loss is charged against earnings.
Future reported earnings under the purchase method are lower.
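The goodwill and impairment mechanics described above reduce to simple arithmetic. The sketch below uses hypothetical deal figures (not numbers from this chapter) to illustrate the purchase-method treatment; the impairment function is a simplified version of the annual review.

```python
def goodwill(purchase_price: float, fmv_net_assets: float) -> float:
    """Goodwill under the purchase method: the excess of the purchase
    price paid over the fair market value of the target's net assets."""
    return purchase_price - fmv_net_assets

def impairment_loss(carrying_goodwill: float, implied_goodwill: float) -> float:
    """Any shortfall of implied goodwill below its carrying amount is
    charged against earnings as an impairment loss (simplified)."""
    return max(carrying_goodwill - implied_goodwill, 0.0)

# Hypothetical: pay $200M for a target whose net assets have an FMV of $160M.
g = goodwill(200_000_000, 160_000_000)   # 40,000,000 of goodwill recorded
loss = impairment_loss(g, 25_000_000)    # 15,000,000.0 charged to earnings
```

Because the price paid and fair values drive the accounting, the purchase method recognizes the value created (or overpaid) in the transaction, unlike pooling.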
208 HEMANTHA S. B. HERATH AND JOHN S. JAHERA JR.
Contingencies
post-merger stock price can be computed using post-merger (P/E) ratio and the
post-merger EPS relationship as $26.60.
The deal announced on August 23, 2000, closed on December 28, 2000 for
$216.2 million in stock. Accordingly, on a per share basis the deal was valued at
$17.42 per BKFR share based on the BB&T closing price of $38.25. Notice that over
the four-month period from announcement to closing, the deal value increased
by $66.6 million, a 44% increase. Theoretically, this increase is a hidden loss
to shareholders of the acquiring bank, since they are effectively paying more than
if the deal had closed at the original value of $149.7 million for essentially the
same net assets. The fundamental economics of the acquisition have not changed,
since the expected post-acquisition cash flows of the target remain unchanged. The
prices paid in a stock swap are real prices, and as such there is greater dilution
of the equity interest of the acquiring firm's shareholders. From a purely accounting
perspective, however, it would have made no difference, since the transaction would
have been recorded under the pooling method at the historic book values of net assets.
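The per-share and announcement-date arithmetic above can be reproduced with a short script (figures quoted from the text; note that the reported closing value of $216.2 million is not exactly N_B x $17.42, so the closing value is taken as reported rather than recomputed here):

```python
SHARES_BKFR = 12_260_500   # BKFR common shares outstanding
F = 0.4554                 # fixed exchange ratio: BBT shares per BKFR share

def deal_value(bbt_price: float) -> float:
    """Value of the fixed-ratio stock swap at a given BB&T share price."""
    return SHARES_BKFR * F * bbt_price

value_at_announcement = deal_value(26.81)   # ~ $149.7 million
per_share_at_close = F * 38.25              # ~ $17.42 per BKFR share
```

The fixed exchange ratio is what transmits BB&T's price appreciation into a larger deal value: the same 5,583,432 BBT shares are worth more at closing than at announcement.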
What would the deal value have been if both BB&T's and BKFR's managerial
flexibility to renegotiate the deal by exercising real call and put options had been
considered? How much would that flexibility be worth? The data for our model
were obtained from company annual reports. The volatilities of BB&T and Bank-
First stocks were estimated using stock price data from August 1998 to December
2000. BankFirst Corporation went public in August 1998. Although for illustration
purposes we used stock price data from August 1998 to December 2000, note
that stock price data beyond the announcement date should not be used.
Data for the model are summarized as follows:
BB&T Corporation annual stock price volatility estimated using historic data is
σ_A = 35.7%; BankFirst Corporation annual stock volatility estimated using historic
data is σ_B = 33.6%. The binomial lattices for BBT and BKFR, along with the actual
(high) stock prices, are presented in Appendix Exhibit 2. Volatility is measured
as the standard deviation of the log returns of the stock price over the period.
A constant risk-free rate, r_f = 6%, is assumed.
The fixed exchange ratio is F = 0.4554 BBT shares for each BKFR share.
The fair market value of BankFirst Corporation net assets is FSA_B = $149,500,000.
The number of BKFR common shares outstanding is N_B = 12,260,500 shares.
The time from announcement to closing is T = 4 months, since the deal was
announced at the end of August and closed at the end of December. Thus T is divided
into 4 periods of equal length (Δt = 1 month, or 0.0833 years).
The market price of BB&T stock at announcement is S_A = $26.81.
Stock price data of BB&T and BKFR used to estimate the volatilities, the resulting
binomial parameters, and the lattices for movements of the stock prices of the acquiring
bank and the target are shown in Appendix Exhibits 1 and 2. Once we develop the
binomial trees pertaining to each stock, we next compute the variable stock
exchange ratio (f_k) in each state (k). For example, the variable stock exchange
ratio at T = 0 is computed as f_k = 12/26.68 = 0.4498. The variable exchange
ratios in spreadsheet format are shown in Table 3.
In spreadsheet format, an upward movement is shown directly to the right and
a downward movement is shown directly to the right but one step down.
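The lattice construction can be sketched as follows, assuming the standard Cox-Ross-Rubinstein factors u = e^(σ√Δt) and d = 1/u; this parameterization reproduces the figures quoted in the text (the BBT up-up price of 32.79 and the variable exchange ratio of 0.4498 at T = 0):

```python
import math

S_BBT0, S_BKFR0 = 26.68, 12.00   # prices at the start of the lattice
SIGMA_A, SIGMA_B = 0.357, 0.336  # annual volatilities (appendix values)
DT = 1 / 12                      # one-month steps

def lattice(s0: float, sigma: float, steps: int) -> list[list[float]]:
    """Cox-Ross-Rubinstein lattice: node [t][k] has k up-moves out of t."""
    u = math.exp(sigma * math.sqrt(DT))
    d = 1 / u
    return [[s0 * u**k * d**(t - k) for k in range(t + 1)]
            for t in range(steps + 1)]

bbt = lattice(S_BBT0, SIGMA_A, 4)
bkfr = lattice(S_BKFR0, SIGMA_B, 4)

# Variable exchange ratio f_k = target price / acquirer price in each state.
f0 = bkfr[0][0] / bbt[0][0]   # 12/26.68 = 0.4498
```

Each column of the spreadsheet layout described above corresponds to one time step `t` of the nested list returned by `lattice`.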
In order to price each cap and floor, one needs to find the maximum and minimum
exchange ratios. The relationship between these two exchange ratios and the
post-merger (P/E) ratio is shown in Fig. 5. Notice that the fixed exchange
ratio of 0.4554 would benefit only BB&T. Since the minimum post-merger (P/E)
ratio is 13.75, for shareholders of both firms to benefit we selected a post-merger
(P/E) ratio of 13.8, which falls in the optimal region. The corresponding
maximum and minimum exchange ratios are 0.4591 and 0.3358. The optimal
exchange ratios and the resulting minimum and maximum agreed-upon deal values are
as follows:
Maximum exchange ratio F_max = 0.4591.
Minimum exchange ratio F_min = 0.3358.
Agreed-upon minimum deal value L = $110.4 million.
Agreed-upon maximum deal value U = $150.9 million.
In order to price the four real call options (caps) and four real put options (floors)
pertaining to each decision point T = 1, 2, 3 and 4, we use the formulas for X_kT and Y_kT
at each state (k) and time (T). The terminal payoff for the cap at T = 2, k = ++, is

X_2^++ = max{(12,260,500)(32.79)(0.4554) − 150,900,000, 0} = $32,155,266.

Similarly, we can compute the terminal payoff of the floor at T = 4, k = −−−−, as

Y_4^−−−− = max{110,400,000 − (12,260,500)(17.67)(0.4554), 0} = $11,737,537.

The terminal payoff values for pricing the cap and floor are presented
in Table 4.
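These terminal payoffs are straightforward to recompute from the deal parameters; the sketch below does so (results differ from the text's $32,155,266 and $11,737,537 only by small rounding in the lattice prices):

```python
N_B = 12_260_500   # BKFR shares outstanding
F = 0.4554         # fixed exchange ratio
U = 150_900_000    # agreed-upon maximum deal value (cap strike)
L = 110_400_000    # agreed-upon minimum deal value (floor strike)

def cap_payoff(s_bbt: float) -> float:
    """Buyer's cap: excess of the fixed-ratio deal value over U."""
    return max(N_B * s_bbt * F - U, 0.0)

def floor_payoff(s_bbt: float) -> float:
    """Seller's floor: shortfall of the fixed-ratio deal value below L."""
    return max(L - N_B * s_bbt * F, 0.0)

cap_2pp = cap_payoff(32.79)      # ~ $32.2 million (cap at T = 2, state ++)
floor_4dn = floor_payoff(17.67)  # ~ $11.7 million (floor at T = 4, state −−−−)
```

Note that both payoffs are driven entirely by the acquirer's share price, since the fixed exchange ratio converts every BBT price into a deal value.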
Using the formulas developed in the preceding section, we next calculate the values
of the four caps. This is done by finding the expected terminal payoff values using
risk-neutral probabilities and discounting at the risk-free rate. For the two-period cap,

X_1^+ = e^(−(0.06)(0.0833)) [0.499(32,155,266) + (0.501)(0)] = $15,950,563
X_1^− = e^(−(0.06)(0.0833)) [0.499(0) + (0.501)(0)] = $0,

and applying risk-neutral discounting one more time gives the cap value at T = 0 as
X_2 = $7,912,249. The resulting values of buyer and seller flexibility are summarized
below (X_T and Y_T denote the time-0 values of the T-period cap and floor):

                                    T = 1        T = 2        T = 3         T = 4         Total
Cap (X_T): value of buyer's        $7,058,217   $7,912,249   $11,591,413   $12,317,427   $38,879,306
  flexibility
Floor (Y_T): value of seller's     0            0            $127,898      $727,536      $855,435
  flexibility
Collar: combined value of buyer    $7,058,217   $7,912,249   $11,463,515   $11,589,891   $38,023,872
  and seller flexibility
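The backward-induction step is a one-liner. The sketch below uses the risk-neutral up probability of 0.499 quoted in the text and recovers values close to those reported (small differences reflect rounding of the probability and the lattice prices):

```python
import math

R_F = 0.06     # risk-free rate
DT = 1 / 12    # one-month step
P = 0.499      # risk-neutral up probability quoted in the text

def step_back(up: float, down: float) -> float:
    """One step of risk-neutral discounting on the binomial lattice."""
    return math.exp(-R_F * DT) * (P * up + (1 - P) * down)

# Two-period cap: terminal payoff 32,155,266 in state ++ and 0 elsewhere.
x1_up = step_back(32_155_266, 0)   # ~ $16.0 million at T = 1, up state
x1_dn = step_back(0, 0)            # $0 at T = 1, down state
x2 = step_back(x1_up, x1_dn)       # ~ $7.9 million cap value at T = 0
```

Repeating this recursion from each cap's and floor's terminal payoffs yields the four cap values, four floor values, and the collar values tabulated above as cap minus floor.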
DISCUSSION
The original deal, which was valued at $149.7 million, was closed on December
28, 2000 for $216.2 million in stock, a dilution of $66.6 million to BB&T share-
holders. If the acquisition had been structured to include only BB&T Corporation's
managerial flexibility to cap the deal value and hedge dilution to its shareholders,
the deal would be valued at $188.6 million. Alternatively, if the deal had been
structured with only BKFR's flexibility to benefit its shareholders, it would be worth
$148.9 million. Ideally, the acquisition could have been fairly valued by considering
the managerial flexibility available to both BB&T and BKFR. This would result in a
deal valued at $187.72 million. The fair deal value based on the real option collar
model is thus $28.5 million lower than the actual closing value.
Although we considered managerial flexibility to hedge market price risk avail-
able to both parties as a collar, at consummation of the deal only a cap or a floor
will be exercised since they are mutually exclusive managerial actions. Therefore,
when negotiating, the collar arrangement should clearly provide a contractual
provision that allows for the final purchase cost to be computed as the deal value
without flexibility plus the real call or put value that is relevant. For example, let
us assume that the deal was consummated four months later when BBT’s stock
price is $38.25. This would indicate that the acquiring firm’s management would
exercise a four period call to cap the deal. The value of the four period call is
$12 million and the purchase cost to the acquiring firm would be $161 million
(deal value of $149.7 million without flexibility plus cost of the call option of
$12 million). At the time of consummation the acquiring firm’s management will
exercise the call option and cap the deal to the agreed maximum value of $150.9
million. By exercising the call option the acquiring firm has hedged a dilution
of $65 million ($216 million less $150.9 million) at a cost of $12 million. The
corresponding managerial actions for the two firms at each decision state (k) are
shown in Table 6. The corresponding payoff diagrams of the collar are shown
in Fig. 6.
We have demonstrated that a better way to structure a stock-for-stock transaction
with stock price variability is to consider the value of managerial flexibility in the
acquisition structure. Conventional stock-for-stock transactions ignore the value
of managerial flexibility available to both parties, which may be significant.
Using real option methodology, we demonstrated how this managerial flexibility
can be valued using market-based data. In this paper we propose that these
real options should also be accounted for as contingencies in business combination
transactions so that managers are held accountable. In summary, we have argued
the case for real options as responses to anticipated managerial actions, which
provide a mechanism to commit managers to desirable behaviors that mitigate
EPS dilution over the period between the fixing of the exchange ratio and the
completion of the acquisition.
Real option methodology is a significant step forward in conceptualizing and
valuing managerial flexibility in strategic investment decisions. The real option
methodology is conceptually superior, especially when there is a high degree
of uncertainty associated with investment decisions, but it also has limitations.
While much of the academic research in real options to date has been done
by corporate finance academics, there is scope for extending and applying real
option methodology in management accounting areas such as capital budgeting
for IT investments, research and development and performance measurement
and evaluation.
ACKNOWLEDGMENTS
We thank Harjeet Bhabra, Graham Davis, Bruce McConomy and Peter Wilson
for their helpful suggestions. The paper also benefited from comments received
at the 2002 Northern Finance Association meeting presentation in Banff, Canada,
2003 AIMA Conference on Management Accounting Research presentation in
Monterey, California and 2003 CAAA Annual Conference presentation in Ottawa,
Canada.
APPENDIX
Exhibit 1. Monthly Stock Prices and Returns for BBT and BKFR (August 1998
to December 2000).
Month    BBT Stock (Acquirer)                 BKFR Stock (Target)
         Closing Price $    Return r_i        Closing Price $    Return r_i
1 26.23 – 12.00 –
2 28.05 0.0669 11.38 −0.0535
3 33.70 0.1835 11.00 −0.0335
4 34.82 0.0327 9.75 −0.1206
5 38.00 0.0874 8.94 −0.0870
6 35.75 −0.0611 10.31 0.1431
7 35.87 0.0033 10.81 0.0473
8 34.27 −0.0456 10.00 −0.0782
9 37.99 0.1032 10.00 0.0000
10 34.72 −0.0900 9.22 −0.0813
11 34.90 0.0051 9.31 0.0101
12 33.73 −0.0342 9.00 −0.0342
13 32.05 −0.0509 8.88 −0.0140
14 30.97 −0.0342 9.50 0.0681
15 35.02 0.1226 9.00 −0.0541
16 31.05 −0.1204 9.38 0.0408
17 26.35 −0.1639 8.63 −0.0834
18 27.29 0.0349 8.38 −0.0294
19 22.80 −0.1797 8.00 −0.0458
20 27.23 0.1774 7.31 −0.0898
21 26.02 −0.0454 8.63 0.1650
22 28.65 0.0962 7.75 −0.1070
23 23.33 −0.2052 8.25 0.0625
24 24.58 0.0519 9.13 0.1008
25 26.68 0.0824 12.00 0.2739
26 29.69 0.1066 13.75 0.1361
27 31.67 0.0646 14.00 0.0180
28 33.16 0.0460 15.00 0.0690
29 37.07 0.1115 17.13 0.1325
Mean return 0.0123 0.0127
Standard deviation 0.1030 0.0971
Annual standard deviation 35.7% 33.6%
Annual mean 14.8% 15.2%
ABSTRACT
As manufacturers continue to increase their level of automation, the issue of
how to allocate machinery costs to products becomes increasingly important
to product profitability. If machine costs are allocated to products on a
basis that is incongruent with the realities of machine use, then income and
product profitability will be distorted. Adding complexity to the dilemma of
identifying an appropriate method of allocating machine costs to products
is the changing nature of machinery itself. Depreciation concepts were
formulated in days when a machine typically automated a single operation on
a product. Today’s collections of computer numerically controlled machines
can perform a wide variety of operations on products. Different products
utilize different machine capabilities which, depending on the function used,
put greater or less wear and tear on the equipment. This paper presents
a mini-case that requires management accountants to consider alternative
machine cost allocation methods. The implementation of an activity-based
INTRODUCTION
This paper presents a mini-case that requires management accountants to consider
alternative methods of machine cost allocation and how activity-based logic may
assist modern businesses in connecting machine costs in fixed-cost intensive
environments to products based on the demands products place on the machine.
Better matching of machine costs to products enables better strategic decisions
about pricing, mix, customer retention, capacity utilization, and equipment
acquisition.
Pat is the first person in the neighborhood to buy a new car. Impressed by the vehicle,
all of Pat’s neighbors express an interest in renting the car to satisfy their various
travel needs. Being a friendly (and entrepreneurial) neighbor, Pat wants to rent the
car to each neighbor and establish “Pat’s Car Rentals.” However, while renting the
car to each neighbor at the same price may maintain harmony on the block, Pat
knows that each neighbor plans to use the car quite differently. For example:
Neighbor A is a teenager who wants to borrow the car for dating purposes. Pat
knows Neighbor A will drive at a high rate of speed about town, blast the CD
player, and accelerate and decelerate quickly while making many stops to pick
up friends.
Neighbor B wants to borrow the car for a short vacation of driving in the nearby
mountains. Neighbor B plans to attach his recreational camper by tow ball to
Pat’s car for the vacation.
Neighbor C plans to deliver telephone books throughout town. This driving will
entail many stops and starts as well as ignitions and re-ignitions of the engine.
Neighbor D wants to take the car on a trip to a large city near Pat’s town. In the
town, Neighbor D will primarily be engaged in start and stop city driving.
Neighbor E wants to use the car to take a vacation at the beach. This will entail
a long straight drive on the flat Interstate highway from Pat’s town to the coast.
Neighbor E will drive the car at the posted speed limits.
Connecting Concepts of Business Strategy and Competitive Advantage 221
Neighbor F wants to use the car for short “off-road” adventures. Neighbor F
will drive the car off of the main roads to various hunting and fishing locations
within the region.
Neighbor G is an elderly person who plans limited driving about town, primarily
to nearby church social events.
Pat estimates that the total automobile cost will equal $40,000. Pat believes
that the useful life of the auto depends upon the type of miles driven. In other
words, Pat recognizes that the diminution of the value and physical life of the
car will differ depending upon the lessee’s pattern of use. If lessees put “easy
miles” on the car, Pat estimates the car will go 200,000 miles before disposal;
if “hard miles” are put on the car, Pat estimates the car has a useful life of
100,000 miles.
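Pat's two bounding depreciation rates follow directly from these estimates:

```python
AUTO_COST = 40_000          # total automobile cost
EASY_LIFE_MILES = 200_000   # useful life if lessees put "easy miles" on the car
HARD_LIFE_MILES = 100_000   # useful life under "hard miles"

easy_rate = AUTO_COST / EASY_LIFE_MILES   # $0.20 per easy mile
hard_rate = AUTO_COST / HARD_LIFE_MILES   # $0.40 per hard mile
```

Every customer's per-mile depreciation charge should therefore fall between $0.20 and $0.40, depending on how hard the planned usage is.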
Pat has evaluated the local rental car agency pricing scheme as a guide to
determine how much should be charged per mile. The local rental agency charges
a flat rate per mile to all lessees. However, Pat feels that the agency’s “one size fits
all” mileage rate doesn’t make sense given the wide differences in vehicle usage
by customers. Pat believes that knowledge of the lessees’ use of the automobile
can be used to obtain a competitive advantage and that accounting records should
reflect the economic fact that different customers consume the car to different
degrees. For example, Pat could use knowledge about customer driving habits to
offer a lower rental rate to Neighbor G (the grandparent) than that neighbor could
obtain at the local rental agency. By doing so, Pat will attract more neighborhood
customers who put “easy miles” on the automobile like Neighbor G. Conversely,
Pat’s rental rates for Neighbor A (the teenager) and Neighbor B (heavy load puller)
will likely be higher than the local car rental agency, driving these customers to
the lower rates of the local car rental agency (which are, in effect, subsidized by
neighbor G-type customers). If the local rental agency does not understand and
use information about the automobile usage patterns of its customers for pricing
decisions, it will eventually be driven out of business by its “hard miles”
customer base. Pat reckons that variable costs to operate the vehicle are irrelevant,
since these costs are borne by the lessee.
Pat seeks your assistance in developing a reliable estimate of the relevant costs
associated with automobile use. First, Pat wants to develop a costing method that
measures the diminution in value associated with customers' particular auto usage patterns.
Second, Pat’s Car Rentals must develop a method for monitoring the actual
manner of car use by customers. Though presently confident of the neighbors’
planned uses for the automobile, Pat is not confident that potential customers will
accurately reveal their usage patterns when different rental rates are applied to
different types of auto usage.
222 RICHARD J. PALMER AND HENRY H. DAVIS
Required
(4) Compare and contrast the cost method developed in (3) with that traditionally
assigned to rental cars.
(5) Devise a method to accurately monitor actual auto usage patterns of renters.
(6) How does Pat’s alternative costing/pricing method provide Pat with a compet-
itive advantage in the rental car business?
Faced with the need to raise productivity to survive, especially against low cost
competitors in nations such as China, North American companies are pushing to-
ward fully automated production processes (known as “lights-out” or “unattended
operations” manufacturing) (see, for example, Aeppel, 2002). As manufacturers
continue their inexorable movement toward increased automation, the issue of how
to allocate machine costs to products becomes increasingly important to organiza-
tional understanding of profitability dynamics. Ignorance of profitability dynamics
is dangerous in highly competitive manufacturing contexts. If machine costs are
allocated (i.e. depreciated) to products on a basis that is incongruent with the re-
alities of the machine use, then income and product profitability will be distorted
over the life of the asset.
Pat’s Car Rentals presents a simple scenario about the increasingly important
business and accounting problem of allocating fixed costs to products produced
or services rendered. We have found that this case is easily comprehended by and
provides useful insights to both undergraduate and graduate students, especially
those enrolled in advanced cost or strategic cost management courses.
There are two major hurdles to negotiate in teaching the case. The first hurdle
is to get students to understand the analogy between the “car” and advanced
manufacturing equipment. Like advanced manufacturing equipment, the car
performs multiple operations based on the demands of different customers.
Further, customer demands place differing amounts of stress on machinery (autos)
– in ways not always commensurate with machine hours (driving time). Once
this analogy is comprehended, students tend to view depreciation of machinery
quite differently. The second hurdle relates to the topic itself. The significance of
depreciation is not adequately addressed in current accounting texts. Occasionally
our students have argued that depreciation represents an irrelevant sunk cost and
that any effort to allocate these charges is counterproductive.
POPULAR DEPRECIATION
FRAMEWORKS FOR MACHINERY
Understanding machine resource consumption and choosing the method of allocating
that consumption among products are decisions organizations make within accounting,
legal, and regulatory boundaries. There are numerous ways to calculate depreciation;
accounting rules require only that the method by which asset consumption is measured
be rational and systematic (i.e. consistently applied across time periods). Until
recently, the measurement and allocation of depreciation charges to products was
typically an immaterial issue. However, as noted above, increased levels of investment
in machinery now make depreciation methodology a key element of accounting policy
(see, for example, Brimson, 1989).
The most popular machine cost allocation schemes are time-based or volume-
based single factor models. However, both of these methods distort the cost of
products produced by the machine. Specific problems with the use of a single
factor method of depreciation, whether time or volume-based, are discussed below.
Depreciation methods that recover costs over a fixed period of time assume that
value added to the product is independent of individual products and the actual
utilization of technology during the recovery period. Time-based models of de-
preciation dominate modern corporate accounting practice primarily because they
guarantee (assuming that a conservative estimate of useful life is employed) that
machine cost will be assigned (or “recovered”) by the end of a specified deprecia-
ble life. In addition, time-based methods are simple to calculate, require minimal
on-going data collection, and are agreeable to virtually all regulatory reporting
agencies.4
However, there are several important and well-known problems with the use
of time-based depreciation models. First, time-based methods tend to increase
current period expense (and hence product costs) compared to volume-based or
activity-driven cost allocation schemes. This occurs because productivity often
declines significantly in the first year when compared with the prior technology.
The productivity drop in the first year is associated with the learning curve that
companies must scale as they familiarize themselves with new machinery (Hayes
& Clark, 1985; Kaplan, 1986). Hence, time-based depreciation charges are partic-
ularly onerous on individual product costs and return on investment calculations
in the early years.5
By extension, all machine hours are not equal. The case of Pat's Rentals, for
example, recognizes that some customers will put “hard miles” (or hours) on the
rented automobile while other customers will put “easy miles” (or hours) on the
car. Likewise, in manufacturing contexts, differences in the speed and feed rates of
CNC equipment subject the machinery to differing amounts of stress. Further,
depreciation amounts based on any one volume-related factor fail to address
situations where the machine is not in use. Typically, the machine loses service
potential without regard to actual use because of deterioration or obsolescence.6
Dysfunctional employee behavior may also be encouraged by the use of any sin-
gle volume-based metric for overhead allocation. Cooper and Kaplan (1999) argue,
for example, that the use of machine hours alone to allocate depreciation encour-
ages acceleration in speed and feed rates to increase production with a minimum
number of machine hours. This behavior can damage machinery, reduce product
quality, choke downstream bottleneck operations, and encourage outsourcing of
machined parts that may result in a “death spiral” for the manufacturer.
Staubus (1971) suggested that it may be possible to create a refined measure of
depreciation that adjusts for unwelcome variations, either by refining the
measurement unit itself or by arbitrarily varying the weights of measured service
units. An example of the former is John Deere Component Works, where
kilowatt-hours were multiplied by a calculated machine “load” factor to assign
utility costs to four different machines (Kaplan, 1987). An example of the latter
approach would be to use an “adjustment factor” that assigns higher costs to the
early miles in the life of transportation equipment.
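Both refinements amount to weighting measured service units before applying a cost-per-unit rate. The sketch below (all figures hypothetical, in the spirit of the John Deere load-factor example) shows one way to express this:

```python
def weighted_depreciation(cost: float, total_weighted_units: float,
                          units: float, load_factor: float) -> float:
    """Period depreciation charge: measured service units are scaled by
    a weight (e.g. a machine 'load' factor) before applying the
    cost-per-weighted-unit rate."""
    rate = cost / total_weighted_units
    return rate * units * load_factor

# Hypothetical: a $500,000 machine with 1,000,000 expected weighted kWh
# of life; a heavy-load month consumes 10,000 kWh at a load factor of 1.5.
charge = weighted_depreciation(500_000, 1_000_000, 10_000, 1.5)  # 7500.0
```

Setting `load_factor` above 1.0 for stressful usage (or for early miles) shifts cost toward the usage that actually consumes the asset.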
Milroy and Walden (1960) recognized multiple causal factors associated with
the consumption of capital resources and suggested “it may well be possible that
a [scientific] method could be devised in which consideration would be given to
variation in the contribution of service units to firm revenue” (p. 319). One means
by which an organization can recognize several significant causes of depreciation
of high technology machinery is to develop a multiple-factor model.
linear miles (Kaplan, 1985). More recently, Cummins Engines (Hall & Lambert,
1996) managers described their use of a modified units-of-production method
that assumes that depreciation of an asset is a function of both time and usage.
Consequently, depreciation in periods of extremely low production volume is a
fixed amount; yet, as production volume increases above low levels, depreciation
is increasingly attributable to volume.
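The Cummins-style method described above can be sketched as a floor on a units-of-production charge (the rate and minimum below are hypothetical, not Cummins figures):

```python
def modified_uop_depreciation(units: int, rate_per_unit: float,
                              minimum_charge: float) -> float:
    """Modified units-of-production: a fixed minimum charge applies in
    periods of very low volume; above that level, depreciation is
    driven by production volume."""
    return max(minimum_charge, units * rate_per_unit)

# Hypothetical: $2.50 per unit produced, $50,000 minimum period charge.
low = modified_uop_depreciation(5_000, 2.50, 50_000)    # 50000.0 (floor binds)
high = modified_uop_depreciation(40_000, 2.50, 50_000)  # 100000.0 (volume-driven)
```

The fixed minimum recognizes that the asset loses service potential through time and obsolescence even when it sits idle.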
Though previous multi-factor depreciation models are both important and
practical, they do not provide an overarching framework that can be applied to
a broader set of business contexts. The comprehensive multi-factor depreciation
model presented in Pat’s Rentals is built upon the activity-based costing logic
of Cooper and Kaplan (1999). According to Cooper and Kaplan, there are three
types of activity cost drivers: transaction, duration, and intensity. Transaction
drivers, such as the number of setups, number of receipts, and number of products
supported, count how often an activity is performed. Duration drivers, such as
setup hours, inspection hours, and machine hours, represent the amount of time
required to perform an activity. Intensity drivers directly charge for the resources
used each time an activity is performed, such as an engineering change notice or
creation of a pound of scrap. Intensity drivers are the most accurate activity cost
drivers but are the most expensive to implement because they, in effect, require a
direct charging via job order tracking of all resources used each time an activity is
performed.7
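A minimal sketch of Cooper and Kaplan's three driver types, costing the same setup activity three ways; all rates and quantities below are hypothetical.

```python
TRANSACTION_RATE = 120.0   # $ per setup (counts occurrences)
DURATION_RATE = 45.0       # $ per setup hour (measures time taken)

def transaction_cost(n_setups: int) -> float:
    """Transaction driver: charge per occurrence of the activity."""
    return n_setups * TRANSACTION_RATE

def duration_cost(setup_hours: float) -> float:
    """Duration driver: charge for the time the activity takes."""
    return setup_hours * DURATION_RATE

def intensity_cost(resources_used: dict[str, float],
                   prices: dict[str, float]) -> float:
    """Intensity driver: directly charge actual resources consumed."""
    return sum(qty * prices[r] for r, qty in resources_used.items())

t = transaction_cost(3)      # 360.0
d = duration_cost(8.5)       # 382.5
i = intensity_cost({"engineer_hours": 6, "scrap_lb": 20},
                   {"engineer_hours": 80.0, "scrap_lb": 1.5})  # 510.0
```

Moving down the list trades measurement cost for accuracy: counting setups is cheap, timing them costs more, and metering each resource consumed costs the most.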
Step Three: As shown in Table 3, apply the different rates calculated in Step
Two to assign costs to customers and measure rental profitability. Pat will now
show a profit on a rental to Customer A if the rental charge includes a depreciation
component above $0.40 per mile, while profitable rentals may be made to Customer G
for half that amount. Other customers fall somewhere in between, depending on
usage patterns.

Table 3. Customer Profiles of Resource Consumption and Depreciation Rates
(1 = hard-use factor present, 0 = absent).

Customer                         Factor Scores   Total Factors   Rate ($/Mile)
A (teenager)                     1 1 1           3               0.40
B (trailer tow in mountains)     0 1 1           2               0.30
C (delivery service)             0 1 1           2               0.30
D (city driving)                 0 0 1           1               0.24
E (high speed highway)           1 0 0           1               0.24
F (off-road)                     0 1 0           1               0.24
G (grandparent)                  0 0 0           0               0.20
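The rate assignment in Table 3 can be expressed as a simple lookup from the count of hard-use factors to a per-mile depreciation rate (the mapping and factor scores below are read off the table, not derived):

```python
RATE_BY_FACTOR_COUNT = {0: 0.20, 1: 0.24, 2: 0.30, 3: 0.40}

CUSTOMER_FACTORS = {
    "A (teenager)": (1, 1, 1),
    "B (trailer tow in mountains)": (0, 1, 1),
    "C (delivery service)": (0, 1, 1),
    "D (city driving)": (0, 0, 1),
    "E (high speed highway)": (1, 0, 0),
    "F (off-road)": (0, 1, 0),
    "G (grandparent)": (0, 0, 0),
}

def rate_per_mile(factors: tuple[int, int, int]) -> float:
    """Map a customer's hard-use factor scores to a depreciation rate."""
    return RATE_BY_FACTOR_COUNT[sum(factors)]

for name, factors in CUSTOMER_FACTORS.items():
    print(f"{name}: ${rate_per_mile(factors):.2f}/mile")
```

Note the rates are not linear in the factor count; they interpolate between the $0.20 easy-miles rate and the $0.40 hard-miles rate established in Step One.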
APPLICATION OF MODEL TO
MACHINE-RELATED COSTS
The concept that underlies the activity-based machine cost allocation model can be
applied to other significant machine-related costs such as repairs and maintenance,
tooling (and associated costs of purchasing, material handling, tool crib, etc.) and
utilities (including compressed air, water, and power where not more accurately
measured by separate metering).
Ultimately, the logic embodied in this case should be employed to further the
development of more refined models of machine cost allocation. Simulations of
machine use and maintenance by engineers, analogous to that presented in this
paper, could also provide a useful estimate of machine resource consumption. For
purposes of depreciation estimation, accountants may one day defer judgment to
these engineering simulations, much as they do now with geologists estimating
the oil and gas “reserves” still in the ground. Provided the depreciation method
is consistently applied on a uniform basis, such an accounting mechanism would
pass muster for financial statement purposes and create no more additional work
than is commonly found reconciling financial statement and tax income. Third,
it should be noted that not every feature of the multi-factor model described in
this paper needs to be employed. The model is intended to portray the need for a
more comprehensive consideration of machine resource consumption. Companies
would be best served to use the model as a general guide, customizing their
own depreciation methodology in a manner consistent with their manufacturing
realities.
NOTES
and income in those years. Unfortunately, when the technological (but not physical) life of
the investment is over, managers have a double incentive to hold on to the old equipment
– to avoid the loss on disposition and the higher depreciation charges associated with
new equipment. Further, time-based approaches to depreciation typically are inconsistent
with the assumptions used in the investment justification decision. For consistency with
external reports, most companies use the arbitrary depreciable life span provided in the tax
code, rather than depreciating equipment over the shorter of the technological or physical
life of the equipment. This incongruity can become more distorted when the company has
invested in a FMS that produces products with short product life cycles.
6. In addition, there is greater potential for denominator forecast error when a single
volume factor is employed (Staubus, 1971).
7. Cooper and Kaplan (1999) state that some ABC analysts, rather than actually tracking
the time and resources required for an individual product or customer, may simulate
a duration or intensity driver with a weighted-index approach that utilizes complexity
indexes. This technique might, for example, entail asking employees to estimate the
relative complexity of performing a task for different products or customers. Thus, a standard
product might get a weight of 1, a moderate complexity product or customer a weight of
3, and a very complex product or customer a weight of 5.
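The weighted-index shortcut in this note can be sketched as follows, using the 1/3/5 weights from the note; the transaction counts and activity-pool cost are invented for illustration.

```python
# Sketch of the weighted-index approach: employees assign relative complexity
# weights (1 / 3 / 5, as in the note) and the activity driver becomes a
# weighted transaction count rather than a timed measurement.

complexity_weight = {"standard": 1, "moderate": 3, "complex": 5}
orders_handled = {"standard": 400, "moderate": 100, "complex": 20}

weighted_units = sum(complexity_weight[k] * n for k, n in orders_handled.items())
activity_pool_cost = 160_000.0  # hypothetical cost of the activity pool
rate_per_weighted_unit = activity_pool_cost / weighted_units

print(weighted_units, rate_per_weighted_unit)
```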
8. In fact, these limits are designed in by manufacturers of advanced manufacturing
technologies. In most cases, information concerning these limits can be obtained from the
manufacturer.
9. An example of the shrinking cost of measurement technology (applicable to Pat’s
Rentals) is global positioning technology. Global positioning systems are now routinely
used in the trucking industry and have been used by one car rental agency to penalize
lessees for speeding violations (Wall Street Journal, 2001).
ACKNOWLEDGMENTS
Special thanks to Marvin Tucker, Professor Emeritus of Southern Illinois University,
and Mahendra Gupta of Washington University in St. Louis for their contributions
to the concepts in this paper.
REFERENCES
Aeppel, T. (2002, November 19). Machines still manufacture things even when people aren’t there.
Wall Street Journal, B1.
Brimson, J. A. (1989, March). Technology accounting. Management Accounting, 70(9), 47–53.
Cooper, R. (1996). Shionogi & Company, Ltd: Product and kaizen costing systems. Harvard Business
School Case.
Cooper, R., & Turney, P. B. B. (1988). Tektronix: Portable Instruments Division (A) and (B). Harvard
Business School Cases 188-142 and 188-143.
Cooper, R., & Kaplan, R. (1999). The design of cost management systems (2nd ed.). Prentice-Hall.
236 RICHARD J. PALMER AND HENRY H. DAVIS
Finney, H. A., & Miller, H. E. (1951). Principles of intermediate accounting. Englewood Cliffs, NJ:
Prentice-Hall.
Hall, L., & Lambert, J. (1996, July). Cummins engines changes its depreciation. Management Account-
ing, 30–36.
Hayes, R. H., & Clark, K. B. (1985). Exploring the sources of productivity differences at the factory
level. In: K. B. Clark, R. H. Hayes & C. Lorenz (Eds), The Uneasy Alliance: Managing the
Productivity Dilemma. Boston, MA: Harvard Business School Press.
Kaplan, R. S. (1985). Union Pacific (A). Harvard Business School Case 186–177.
Kaplan, R. S. (1986, March–April). Must CIM be justified by faith alone? Harvard Business Review,
64(2), 87–93.
Kaplan, R. S. (1987). John Deere Component Works (A) and (B). HBS Cases #9–187–107 and 108.
Kaplan, R. S., & Hutton, P. (1997). Romeo Engine Plant (Abridged). HBS Case 197–100.
Milroy, R. R., & Walden, R. E. (1960). Accounting theory and practice: Intermediate. Cambridge,
MA: Houghton Mifflin Company.
Porter, M. E. (1992). Capital choices: Changing the way America invests in industry. Research Report
co-sponsored by Harvard Business School and the Council on Competitiveness.
Staubus, G. (1971). Activity costing and input-output accounting. Homewood, IL: Irwin.
The Wall Street Journal (2001, August 28). Big brother knows you’re speeding – Rental-car companies
install devices that can monitor a customer’s whereabouts.
The Wall Street Journal (2002a, February 12). SEC accounting cop’s warning: Playing by rules may not
ward off fraud issues.
The Wall Street Journal (2002b, February 13). SEC still investigates whether Microsoft understated earnings.
CHOICE OF INVENTORY METHOD
AND THE SELF-SELECTION BIAS
ABSTRACT
We examine sample self-selection in the use of the LIFO or the FIFO inventory
method. For this purpose, we apply the Heckman-Lee two-stage regression
to 1973–1981 data, a period of relatively high inflation during which the
incentive to adopt the LIFO inventory valuation method was most pronounced.
The coefficients estimated from the reduced-form probit (the inventory choice
model) and the tax functions are used to derive predicted tax savings in the
structural probit. Specifically, the predicted tax savings are computed by
comparing actual LIFO (FIFO) taxes with predicted FIFO (LIFO) taxes.
Thereafter, we estimate the dollar amount of tax savings under different
regimes. The two-stage approach enables us to address not only the man-
agerial choice of the inventory method but also the tax effect of this decision.
Previous studies do not jointly consider the inventory choice decision and the
tax effect of that decision. Hence, the approach we use is a contribution to the
literature. Our results show that self-selection bias is present in our sample
of LIFO and FIFO firms and correcting for the self-selection bias shows that
the LIFO firms, on average, had $282 million of tax savings, which explains
why a large number of firms adopted the LIFO inventory method during
the seventies.
INTRODUCTION
Management accounting provides critical accounting information for day-to-day
managerial decision-making. The choice of the inventory method influences
managerial behavior for purchasing, cash flow management, and tax manage-
ment. For instance, based on managerial accounting information regarding
expected cash flows on various products and services, managers may decide
that it is optimal to forego LIFO (last-in, first-out) tax savings. Thus, managers
need to have a good understanding of the expected cash inflows and outflows
from various segments of the business. They should also have information on
available tax saving vehicles, including depreciation, interest, and tax losses, and
the effect of these variables on taxable earnings. Not only could management
accounting provide information on the initial selection of inventory method but
it could also assist in deciding to continue with the inventory accounting method
currently in use. One important role of managerial accounting in this area is in
monitoring inventory levels to prevent LIFO layer liquidation in the event LIFO
is used.
Over the past twenty years, researchers in accounting have examined various
issues arising from a firm’s choice of accounting methods. Much of this literature
has been on the choice of the inventory costing method. Early research on inventory
selection method estimated the tax effects of the LIFO vs. FIFO (first-in, first-out)
method under the assumption that operating, financing, and investing activities
remain unaffected as a result of a change in inventory method. Researchers have
previously recognized that this ceteris paribus assumption ignores endogeneity
and self-selection of LIFO and FIFO samples (see Ball, 1972; Hand, 1993;
Jennings et al., 1992; Maddala, 1991; Sunder, 1973). The endogeneity problem
in the choice of LIFO vs. FIFO method is particularly important because the tax
effects of the inventory method may affect firm valuation (see Biddle & Ricks,
1988; Hand, 1995; Jennings et al., 1992; Pincus & Wasley, 1996; Sunder, 1973).
However, it is not possible to observe what the managerial decision would have
been had they used an inventory method different from the method currently in
use. Hence, a number of studies have developed “as-if” calculations to estimate the
tax effects of LIFO vs. FIFO (see Biddle, 1980; Dopuch & Pincus, 1988; Morse
& Richardson, 1983).1 These studies estimate a LIFO firm’s taxes as if it were a
FIFO firm and LIFO taxes for a FIFO firm. The “as-if” approach assumes that a
firm’s managerial decisions would have remained unchanged with the use of an
alternative method.
The purpose of this study is to re-examine the choice of LIFO vs. FIFO by using
Heckman’s (1976, 1979) and Lee’s (1978) two-stage method, which incorporates
the self-selection of firms into the LIFO and FIFO groups.
PRIOR RESEARCH
A number of studies have attempted to explain why firms do not use the LIFO
method in periods of rising prices and thus forego the opportunity of potential
tax savings (see Abdel-Khalik, 1985; Hunt, 1985; Lee & Hsieh, 1985). One
explanation advanced is that FIFO firms are concerned about the drop in stock
prices upon the adoption of LIFO. Still another explanation is that the cost of
LIFO conversion may be more than the tax benefits of adoption. For instance,
using 1974–1976 LIFO data, Hand (1993) estimated that the cost of LIFO
adoption for his sample of firms was as high as 6% of firm value, a sizable cost for
most firms.
The question of whether LIFO tax savings are valued by investors has been
studied extensively. Many of these studies suffer from the problems of event date
specification, contaminating events, firm size, etc. (Lindahl et al., 1988). Kang’s
(1993) model predicts that positive price reaction to LIFO adoption occurs only
when the expected LIFO adoption costs are less than expected LIFO tax savings.
He argues that the positive stock price reaction will occur because the switch to
LIFO will recoup previously lost LIFO tax savings. Some studies demonstrate
positive stock price reaction surrounding LIFO adoption (Ball, 1972; Biddle &
Lindahl, 1992; Hand, 1995; Jennings et al., 1992; Sunder, 1973) while other
studies have reported negative market reaction to LIFO adoption (see Biddle &
Ricks, 1988; Ricks, 1982). Pincus and Wasley’s (1996) results show some degree
of market segmentation: they find positive market reactions for OTC firms and
negative market returns for NYSE/ASE firms. Finally, Hand’s (1993) results
indicate that the LIFO adoption or non-adoption decision resolves uncertainty
regarding LIFO tax savings.
Contracting cost theory also provides reasons why firms may not adopt the
LIFO inventory method. LIFO adoption decreases asset values and net income,
potentially causing some firms to be in violation of debt covenants. Furthermore,
managers on bonus contracts may not want lower LIFO earnings because the
use of LIFO may reduce their total compensation. Abdel-Khalik (1985), Hunt
(1985), and Lee and Hsieh (1985) provide some evidence that debt covenants
help to explain the choice of inventory method but compensation plans do not.3
Another reason why firms decide to use a particular method is that firms differ
systematically in the nature of the production-investment opportunity set available
to them. Therefore, the LIFO method is an optimal tax reporting choice for some
firms and not for others. A common empirical approach to estimate the amount of
tax savings firms could have obtained from an alternate method, other than their
observed choice of inventory accounting method, is the as-if method (see Biddle,
1980; Biddle & Lindahl, 1982; Dopuch & Pincus, 1988; Morse & Richardson,
CONCEPTUAL FRAMEWORK
This section presents the conceptual basis for the empirical analysis that follows.
Assume firms have only two inventory valuation methods available: LIFO or FIFO.
A typical firm’s decision to adopt LIFO depends on its own assessment of the ben-
efits to be gained and the costs that must be incurred. As previously stated, LIFO
costs are associated, among others, with implementation, negative market reaction,
and contracting costs. In addition, there may be LIFO layer liquidation costs
resulting from price decline (e.g. electronics industry). As a result, a firm would
rationally choose to use LIFO only if the expected benefits outweigh the expected
costs.4 Otherwise, it would remain a FIFO firm. With the LIFO or FIFO status
thus determined, the firm’s LIFO benefits depend on its operating and financial
characteristics, the nature of its industry, and the provisions of the tax code.
Let the benefit of adopting the LIFO method be measured by the tax savings
received by the firm. We assume that tax savings are a benefit to the firm because the
increased cash flow widens the set of feasible production-investment opportunities
and, thus, improves the firm’s long-term prospects. However, foregoing tax benefits
may be an optimal strategy for a firm in the framework of Scholes and Wolfson
(1992) or when the inventory valuation method is used to signal firm value.5
Let the total taxes T paid by each type of firm be written as:
T_L = aX_L + e_L (1)
T_F = bX_F + e_F (2)
where the subscripts L and F denote the LIFO and FIFO firms, X is a vector of
explanatory variables common to both groups of firms, a and b are vectors of
coefficients, and e is the random error term. Firms choose the type of inventory
valuation method that will maximize their overall tax benefits given other con-
straining factors. Thus, they choose the LIFO method if
TS = (T_F − T_L) > C (3)
where TS is the positive dollar tax savings (assuming T_F > T_L) and C is the
associated dollar cost of choosing LIFO. There is unlikely to be a single observed
number in the firm-level data set to represent the cost of choosing LIFO, although,
in practice, many reasons can be found to justify the assumption that choosing LIFO
is not cost-free. Prior literature suggests that the dollar cost of LIFO adoption may be written as
242 PERVAIZ ALAM AND ENG SENG LOH
C = cY + ν (4)
where Y is a vector of firm characteristics associated with the cost of adopting
LIFO, c is a vector of coefficients, and ν is a random error term. Substituting
Eq. (4) into Eq. (3), the firm chooses LIFO when
T_F − T_L > cY + ν (5)
This decision can be represented by the latent variable I*:
I = LIFO if I* > 0
I = FIFO if I* ≤ 0
where
I* = α_0 + α_1 (T_F − T_L) + α_2 Y − ε (6)
where ε is the error term of the probit function. In practice, we observe the tax
payments of either the LIFO or FIFO firm, but never both simultaneously, implying
that the probit Eq. (6) cannot be estimated directly. One way to proceed is to
estimate the tax Eqs (1) and (2) via ordinary least squares (OLS) and use the
predicted values in Eq. (6). However, because of the self-selected nature of the
LIFO and FIFO firms, the expected means of the error terms in the tax equations
are non-zero, i.e. E(e_L | I = LIFO) ≠ 0 and E(e_F | I = FIFO) ≠ 0. Thus, the OLS
estimation of the tax equations leads to inconsistent results. There is no guarantee
that the estimated coefficients will converge to the true population values even in
large samples. To avoid this bias, we proceed via the two-stage method suggested
by Heckman (1976, 1979) and Lee (1978).6 We begin by estimating the reduced
form probit equation found by substituting Eqs (1) and (2) into Eq. (6),
I = LIFO if I ∗ > 0
I = FIFO if I ∗ ≤ 0
where
I* = γ_0 + γ_1 X + γ_2 Y − (ε − e). (7)
Equation (7) is the form of the estimation equation commonly found in the LIFO
determinants literature. After estimating (7), we derive the inverse Mills’ ratios,
λ_L = −φ(u)/Φ(u), (8)
λ_F = φ(u)/(1 − Φ(u)) (9)
where u is the predicted value of the error term from the reduced-form probit, φ is
the standard normal probability density function (pdf) for u, and Φ is its cumulative
density function (cdf). In the second stage, the coefficients of the lambda terms in
the tax functions serve as sample covariances between the tax functions and the
criterion I* (i.e. a_2 = σ_L and b_2 = σ_F). Hence, the following tax functions
are obtained:
T_L = a_1 X_L + a_2 λ_L + η_L (10)
T_F = b_1 X_F + b_2 λ_F + η_F. (11)
Equations (10) and (11) are then estimated using the self-selectivity and the OLS
approaches to construct predicted FIFO tax payments for observed LIFO firms and
predicted LIFO taxes for observed FIFO firms.7 In essence, the Heckman-Lee two-
stage procedure treats the self-selection bias as arising from a specification error: a
relevant variable is omitted from each of the tax equations. Statistical significance
on a2 and b2 shows that these covariances are important and that management
selection of the LIFO or the FIFO inventory valuation method is not random. In
short, self-selection is present. Interpretation of the firms’ behavior depends on
the signs of σ_L and σ_F, which may be either positive or negative. For instance,
if σ_L < 0, then firms whose expected LIFO taxes are lower than average should
have a lower chance of being FIFO firms. Similarly, if σ_F < 0, then firms whose
expected FIFO taxes are lower than average should have a lower chance of being
LIFO firms. Although these covariances can bear any sign, model consistency
requires that σ_L > σ_F (see Trost, 1981). This condition on the covariance terms
(σ_L > σ_F) ensures that the expected FIFO taxes of FIFO firms will be less than
their expected taxes if they switched to LIFO status. Similarly, the expected tax
payments of LIFO firms will remain less than their expected tax payments if they
switched to FIFO.
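To make the two-stage logic concrete, here is a minimal numerical sketch on simulated data following the structure of Eqs (7)–(11). Everything below (sample size, coefficient values, error correlation, the “tax” variable) is invented for illustration and does not reproduce the authors’ dataset or estimates.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(0)
n = 2000
X = np.column_stack([np.ones(n), rng.normal(size=n)])  # tax-equation regressors
Y = rng.normal(size=n)                                 # cost-side regressor
Z = np.column_stack([X, Y])                            # reduced-form probit regressors

# Correlated errors make selection non-random, as in the self-selection story.
e, eps = rng.multivariate_normal([0, 0], [[1.0, 0.5], [0.5, 1.0]], size=n).T
lifo = (0.3 + 0.8 * X[:, 1] - 0.5 * Y - eps) > 0       # latent index I* > 0

# Stage 1: reduced-form probit, Eq. (7), by maximum likelihood.
def neg_loglik(g):
    p = np.clip(norm.cdf(Z @ g), 1e-10, 1 - 1e-10)
    return -np.where(lifo, np.log(p), np.log(1 - p)).sum()

g_hat = minimize(neg_loglik, np.zeros(Z.shape[1]), method="BFGS").x
u = Z @ g_hat                                          # predicted probit index

# Inverse Mills' ratios, Eqs (8) and (9).
lam_L = -norm.pdf(u) / norm.cdf(u)
lam_F = norm.pdf(u) / (1.0 - norm.cdf(u))

# Stage 2: OLS tax equations augmented with the lambda terms, Eqs (10)-(11).
tax = 1.0 + 0.7 * X[:, 1] + e                          # simulated tax variable
a_hat, *_ = np.linalg.lstsq(np.column_stack([X[lifo], lam_L[lifo]]),
                            tax[lifo], rcond=None)
b_hat, *_ = np.linalg.lstsq(np.column_stack([X[~lifo], lam_F[~lifo]]),
                            tax[~lifo], rcond=None)
print("lambda coefficients:", a_hat[-1], b_hat[-1])
```

The last elements of `a_hat` and `b_hat` play the role of a_2 and b_2: statistically significant lambda coefficients would indicate that selection into the two groups is non-random.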
An important assumption of the Heckman-Lee model is that the error terms in
the structural equations (η_L, η_F, and ε) are joint-normally distributed. Inconsistent
estimates result if the underlying population distribution is non-normal (although,
strictly speaking, this problem exists with any parametric model). Given the
perceived rigidity of the joint-normality assumption, researchers have suggested
alternative, less restrictive estimators.
We use Lee and Hsieh’s (1985) model for purposes of estimating the reduced-form
probit model described in Eq. (7). We select their model because of the comprehen-
siveness of the variables examined and the theoretical justification for the selection
of those variables. They test the joint effect of political cost, agency theory, and
Ricardian theory on the LIFO-FIFO decision, by using eight proxy variables and
an industry dummy to capture the features of the production-investment opportu-
nity set that are pertinent to the choice of the inventory accounting method. The
variables they use are: firm size, inventory variability, leverage, relative firm size,
capital intensity, inventory intensity, price variability, income variability, and
industry classification.9 Thus, in this study, the inventory choice model is expressed
as follows:

I* = β_0 + β_1 LGTASST + β_2 INVVAR + β_3 LEV + β_4 CI + β_5 RELASST + β_6 INVM + β_7 CPRICE + β_8 INCVAR + β_9 IDNUM − ε (12)
where:
I = 1 for LIFO firms and I = 0 for FIFO firms;
LGTASST = firm size computed as the log value of total assets;
INVVAR = inventory variability computed as the coefficient of variation (vari-
ance/mean) for year-end inventories;
LEV = agency variable derived as the ratio of long-term debt less capitalized
lease obligations to net tangible assets;
CI = capital intensity variable computed as the ratio of net fixed assets
to net sales;
RELASST = relative firm size derived as the ratio of a firm’s assets to the total
industry assets;
INVM = inventory intensity computed as the ratio of inventory to total assets;
CPRICE = price variability derived as the relative frequency of positive price
change for each four-digit SIC industry code;
INCVAR = accounting income variability as the coefficient of variation (vari-
ance/mean) of before tax accounting income; and
IDNUM = captures the industry effect by assigning a dummy variable to each
of the 19 two-digit industries; thus, IDNUM is a vector of 19
two-digit SIC industry dummies.
Table 1 provides further description of the variables used in Eq. (12). The table
lists the variables used, their description, and the Compustat data items used to
derive the variables.
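Note that INVVAR and INCVAR are defined as variance divided by the mean, not the more common standard deviation over the mean. A small sketch with invented year-end inventory figures:

```python
import numpy as np

# Coefficient of variation as defined in the variable list above:
# variance / mean, applied to a firm's year-end inventories.

def cv_var_over_mean(series):
    series = np.asarray(series, dtype=float)
    return series.var(ddof=0) / series.mean()

year_end_inventories = [20.0, 24.0, 22.0, 30.0, 24.0]  # $ millions, hypothetical
print(cv_var_over_mean(year_end_inventories))
```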
A brief description of the reasons for the selection of the regressors used in
the probit function (Eq. (12)) follows.10 Ceteris paribus, larger firms (proxied by
LGTASST) are likely to adopt LIFO because of their comparative advantage in
absorbing costs of LIFO conversion and related bookkeeping and tax-reporting
costs. High inventory variability (INVVAR) suggests that the cost of inventory
control may be higher because of the possibility of liquidation of LIFO layers or
of excess inventories. On the other hand, adopting LIFO may lead firms to lower
inventory variability in order to maintain LIFO layers. Hence, it is
difficult to predict the association of the INVVAR variable with LIFO use. The lever-
age variable (LEV) serves as an agency proxy. Firms with higher leverage are more
likely to default on debt covenant restrictions (Smith & Warner, 1979), driving them
to choose income-increasing accounting methods. Hence, the leverage variable is
likely to be negatively related to the use of the LIFO method. The relative firm size
(RELASST) is a measure of size with respect to industry. It is expected that rela-
tively larger firms in an industry will have a comparative advantage in using LIFO.
Firms with high values of capital intensity (CI) generally possess necessary
resources to engage in extensive financial and production planning needed to
I Dependent variable used in probit estimation where I = 1 if the firm uses LIFO and 0
if the firm uses FIFO (59).
LNTXPAY Dependent variable used in tax functions. Defined as the natural logarithm of total
income tax expenses (16).
LGTASST Firm size measured as log of total assets (6).
INVVAR Coefficient of variation (variance/mean ratio) for year-end inventories.
LEV Leverage ratio computed by dividing long-term debt less capitalized lease obligations
to net tangible assets (9−84)/(6−33).
RELASST Ratio of a firm’s assets to the total of industry assets based on the SIC four-digit
industry codes.
CI Capital intensity measured as net property, plant, and equipment divided by net sales
(8/12).
INVM Inventory materiality computed as a ratio of inventory to total assets (3/6).
CPRICE Relative frequency of positive price changes for each SIC four-digit industry code
over the sample period. Producer Price Index was obtained from the publications of
the U.S. Department of Commerce.
INCVAR Coefficient of variation (variance/mean ratio) of before tax accounting income.
FIXED Property, plant, and equipment, net (8) divided by the market value of equity at the
fiscal year-end (24 × 199).
NDTS Non-debt tax shield computed as the ratio of the sum of depreciation and investment
tax credits (103 + 51) to earnings before interest, taxes, and depreciation (172 + 15 +
16 + 103).
LATS Available tax savings measured as natural logarithm of tax loss carryforwards (52)
multiplied by cost of goods sold (41).
INVTS Inventory to sales measured as inventories (3) to net sales (12).
TLCF Net operating loss carryforward measured as the ratio of the net operating loss
carryforward to net income before interest, taxes, and depreciation expenses (52/(172
+ 15 + 16 + 103)).
use LIFO. Hagerman and Zmijewski (1979), Lee and Hsieh (1985), and Dopuch
and Pincus (1988) suggest that large capital-intensive firms have a comparative
advantage in adopting LIFO. The inventory to total assets ratio (INVM) serves as
a proxy for measuring how efficiently the inventory has been managed. Following
Lee and Hsieh (1985), INVM is expected to be negatively associated with the
use of the LIFO method. The price variability (CPRICE) variable is a proxy for
inflation. The higher the inflation rate, the higher the likelihood that firms would
adopt LIFO. Lee and Hsieh (1985) argue that production-investment opportunity
sets will vary from industry to industry. Therefore, a dummy variable is assigned
to each of the two-digit SIC industries.
The regressors for the tax functions were identified based on the review of the
relevant tax literature (see Biddle & Martin, 1985; Bowen et al., 1995; Dhaliwal
et al., 1992; Trezevant, 1992, 1996). The tax functions are listed below:

T_L = a_0 + a_1 FIXED + a_2 NDTS + a_3 LATS + a_4 INVTS + a_5 TLCF + e_L (13a)
T_F = b_0 + b_1 FIXED + b_2 NDTS + b_3 LATS + b_4 INVTS + b_5 TLCF + e_F (13b)
where:
TL or TF = the logarithmic value of total taxes for LIFO or FIFO firms,
respectively;
FIXED = net property, plant, and equipment divided by the market value of
equity;11
NDTS = non-debt tax shield derived as the sum of depreciation and investment
tax credits divided by earnings before interest, taxes, and depreciation;
LATS = available tax savings measured as the logarithmic value of tax loss
carryforwards times cost of goods sold;
INVTS = inventory turnover measured as inventories to net sales; and
TLCF = net operating loss carryforward to net income before interest, taxes,
and depreciation.
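To make the variable definitions concrete, a small sketch computing the five tax-function regressors from invented Compustat-style figures (all numbers hypothetical; item numbers omitted):

```python
import math

# Hypothetical inputs, $ millions.
net_ppe, mkt_equity = 400.0, 1_000.0
depreciation, itc = 35.0, 5.0
ebitd = 200.0            # earnings before interest, taxes, and depreciation
tax_loss_cf, cogs = 10.0, 600.0
inventories, net_sales = 120.0, 900.0

FIXED = net_ppe / mkt_equity          # debt-securability proxy
NDTS = (depreciation + itc) / ebitd   # non-debt tax shield
LATS = math.log(tax_loss_cf * cogs)   # log(tax loss carryforwards x COGS)
INVTS = inventories / net_sales       # inventory-to-sales ratio
TLCF = tax_loss_cf / ebitd            # loss carryforward scaled by EBITD

print(FIXED, NDTS, round(LATS, 3), round(INVTS, 3), TLCF)
```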
The variables used in tax functions (13a) and (13b) are based on the assumption that
a firm’s taxes depend upon net fixed assets, non-debt tax shield, tax savings avail-
able from adopting LIFO, efficiency of inventory management, and the amount of
the tax loss carryforward. It is important to note that firms trade off or substitute
various tax shields to minimize the marginal tax rate. The model used in this study
examines not only the effect of the individual coefficients in the tax function but
also the joint effects of these coefficients. Thus, when high tax shields increase the
possibility of tax exhaustion, the firm is likely to have a lower marginal tax rate
which may decrease the likelihood of LIFO use.
Ceteris paribus, we expect that firms with relatively high values of net property,
plant, and equipment scaled by the market value of equity (FIXED) are likely to
pay lower taxes. FIXED is a measure of debt securability (Dhaliwal et al., 1992;
Trezevant, 1992). Firms with a larger proportion of their assets represented by
fixed assets are likely to raise larger amounts of debt or lower the cost of financing
(Titman & Wessels, 1988). Therefore, the variable FIXED provides a tax shield by
enhancing the possibility of increased debt financing, which increases the level of
interest deductibility, and consequently the FIXED variable indirectly lowers taxes.
Assuming no available substitution of tax shields, a higher proportion of non-debt
tax shield (NDTS) would lower the marginal tax rate and, therefore, the taxes.12
The variable LATS is a proxy measure for tax savings. It is computed
by multiplying the cost of goods sold with tax loss carryforwards. This measure is
based on the argument that firms with relatively higher cost of goods sold and tax
loss carryforwards are likely to pay lower taxes (see Bowen et al., 1995). Therefore,
the expected sign of the coefficient for the LATS variable is negative.13
The variable INVTS represents efficiency in inventory management (Lee &
Hsieh, 1985). The INVTS coefficient is expected to be negatively associated with
taxes. This relationship is best explained with the following illustration.
Suppose net sales increase from $150,000 to $175,000, while cost of goods sold
and ending inventory remain unchanged at $80,000 and $20,000, respectively.
This would cause the inventory-to-sales ratio (INVTS) to decrease from 13.3% to
11.4% and gross margin to increase from $70,000 to $95,000, thereby increasing
taxes, assuming the marginal tax rate is the same as in the previous year.
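The illustration’s arithmetic can be verified directly:

```python
# Sales rise from $150,000 to $175,000 while cost of goods sold ($80,000)
# and ending inventory ($20,000) are held fixed.
inventory, cogs = 20_000, 80_000
for sales in (150_000, 175_000):
    print(f"INVTS = {inventory / sales:.1%}, gross margin = {sales - cogs}")
```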
Finally, the variable TLCF is expected to be negatively associated with taxes
for LIFO firms. In other words, firms are less likely to use LIFO if they have
tax loss carryforwards, which could be used to shield taxes.14 Auerbach and
Poterba (1987) indicate that firms expecting persistent loss carryforwards are
likely to experience lower marginal tax rates. On the other hand, firms are
more likely to use FIFO or other income-increasing methods even if they have
tax loss carryforwards in the event the alternative available tax shields are tax
exhaustive. Thus, the expected sign of the TLCF coefficient cannot be predicted.
Table 1 gives further description of the variables used in the tax functions. It also
gives the data item numbers used for extracting financial statement values from
Compustat tapes.
The data for the variables used in this study were obtained from the Compustat
back-data files. The sample covers the years 1973–1981, a period of historically
high inflation in the United States during which firms
adopting LIFO could obtain substantial tax savings. This yielded an initial sample
of 10,777 observations for firms using either the LIFO or FIFO inventory method.
Since firms use a combination of inventory methods for financial reporting, LIFO
firms are defined as those that use LIFO for most of their inventory accounting and FIFO
firms are those that use FIFO as their predominant inventory valuation method.
After eliminating missing observations, the total sample consists of 6,090
observations. Of this number, 1,050 observations (247 firms) are of LIFO firms and
5,040 observations (1,006 firms) are of FIFO firms. Table 2, Panel A lists the sample
selection procedure.15
Table 2, Panel B shows the two-digit SIC code industry composition of the LIFO
and FIFO samples. The LIFO group consists of 1,050 observations distributed over
five different two-digit industries, with manufacturing the largest, followed by
the wholesale and retail industries. The FIFO observations are distributed over
seven different two-digit industries; the largest concentration is also in
manufacturing, followed by the wholesale and retail industries. Overall, the
sample distribution for the two groups of firms appears to be
concentrated in the manufacturing, wholesale, and retail industries.
Table 3 presents descriptive statistics for the variables used for multivariate
analyses, classified by LIFO and FIFO groups. We corrected for price inflation
whenever a variable entered as a dollar value, using 1982 as the base year and
Note: TASST, sales, and inventories represent total assets, sales, and inventories in millions of dollars.
All other variables are defined in Table 1. The t-value tests differences in means between LIFO
and FIFO samples. ∗∗∗, ∗∗ significant at the 0.01 and 0.05 levels, respectively. Dollar values adjusted
for price inflation using 1982 as the base year.
CPI as the index. Table 3 shows the mean, median, and standard deviation
values for each of the two groups. Also given is the t-value for testing significant
differences in mean values for each of the variables. The t-test shows that, with
the exception of INVM, INCVAR, FIXED, and TLCF, the variables
used in the probit and tax functions are significantly different between the LIFO
and FIFO groups, suggesting that the two groups differ from each other
on several dimensions.16 Table 3 also provides statistics on selected financial
statement variables for each of the two groups of firms. The total assets (TASST)
of LIFO firms (median value of $94.6 million) are about four times the value
of total assets for FIFO firms (median value of $21.8 million). Similarly, the
size of LIFO inventory (median value of $26.5 million) is nearly five times the
size of inventory for FIFO firms (median value of $4.8 million). The results
of the Kolmogorov-Smirnov and Shapiro-Wilk tests show that the observed
distribution of individual variables is not significantly different from a normal
distribution.
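The normality screens mentioned here can be sketched with scipy on a simulated variable (the paper’s own data are not reproduced):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
x = rng.normal(size=500)  # stand-in for one of the study's variables

w_stat, w_p = stats.shapiro(x)                  # Shapiro-Wilk
# Kolmogorov-Smirnov against a normal fitted to the sample's mean and std:
ks_stat, ks_p = stats.kstest(x, "norm", args=(x.mean(), x.std(ddof=1)))

# p-values above the chosen significance level mean the hypothesis of
# normality is not rejected.
print(w_p, ks_p)
```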
EMPIRICAL RESULTS
Estimates of Reduced-Form Probit Equations
Table 4 shows the estimated coefficients for the reduced-form probit Eq. (12) in
which the dependent variable is a dichotomous dummy variable defined to be unity
if the firm is LIFO and zero if FIFO. We present the results without the coefficient
for the industry variable. Columns 1 and 2 provide estimated regression coefficients
and t-values.
Table 4, Columns 1 and 2, shows that sample firms are more likely to be
LIFO firms because of price-level increases: the inflation (CPRICE) variable is
statistically significant at the 1% level. Columns 1 and 2 also show that firms
may not use the LIFO inventory method because of high variability of inventories
(INVVAR), high leverage (LEV), high capital intensity (CI), and relatively high
inventory as a component of total assets (INVM). The table shows that with the
Note: Expected sign of the coefficients is in parentheses. CPRICE is the relative frequency of price
increases in each industry during the 1973–1981 period. See Table 1 for definition of variables.
∗ Significant at 0.10 level.
∗∗ Significant at 0.05 level.
∗∗∗ Significant at 0.01 level.
Note: See Table 1 for definition of variables. Lambda is the inverse Mills’ ratio derived as
λ = −φ(u)/Φ(u), where u is the predicted value of the reduced-form probit, φ is the standard
normal probability density function for u, and Φ is its cumulative density function. a Figures
in parentheses are t-values based on the heteroskedasticity-consistent variance-covariance matrix
derived in Heckman (1979) for Columns 1 and 2 and in White (1980) for Columns 3 and 4.
∗∗ Significant at 0.05 level.
∗∗∗ Significant at 0.01 level.
Table 1. As stated in Section 3, the LAMBDA terms are the inverse Mills ratio.19
The statistical significance of the LAMBDA terms in Columns 1 and 2 indicates a
non-zero covariance between the error terms in the inventory choice and the LIFO
(FIFO) tax equations. In short, the self-selection of firms into the LIFO (FIFO)
category is confirmed.20 The negative sign of the LAMBDA terms shows that firms
that expect to pay higher than average taxes in the LIFO (FIFO) category are
less likely to be LIFO (FIFO) firms. Thus, the average firm's choice of the LIFO or
the FIFO method for inventory valuation in this sample is consistent with rational
decision-making. The typical LIFO or FIFO firm in the sample would have been
worse off had it chosen the alternative method. Note also that the sign pattern on
the LAMBDA terms satisfies the condition for model consistency: λ_L > λ_F. The
lambda coefficient for LIFO firms is −3.612 and for FIFO firms it is −9.614, and
both values are statistically significant.
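The lambda term follows directly from the reduced-form probit index. A minimal sketch of its computation, using the sign convention λ = −φ(u)/Φ(u) stated in the table note (the helper names are ours):

```python
import math

def norm_pdf(u):
    """Standard normal probability density function."""
    return math.exp(-0.5 * u * u) / math.sqrt(2.0 * math.pi)

def norm_cdf(u):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))

def inverse_mills(u):
    """Lambda term per the paper's convention: -phi(u) / Phi(u),
    where u is the predicted value from the reduced-form probit."""
    return -norm_pdf(u) / norm_cdf(u)

print(round(inverse_mills(0.0), 4))  # -phi(0)/Phi(0) = -0.3989/0.5
```

Appending this term as a regressor in the second-stage tax equations is what corrects the OLS estimates for self-selection.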
Other results shown in Table 5, Column 1 are noteworthy. In Column 1, all
of the remaining regressors are significant at the 1% level, and the signs of the
regression coefficients are generally in the expected direction for LIFO firms, with
the exception of TLCF. We find that LIFO firms' taxes are inversely related to the tax
shield provided by fixed assets (FIXED), the non-debt tax shield (NDTS), available
tax savings (LATS), and inventory turnover (INVTS), and positively related to tax
loss carryforwards (TLCF). In Table 5, Column 2, the results for FIFO firms
show that the signs of the coefficients are in the expected direction for each of the
regressors, with the exception of TLCF. As for LIFO firms, the positive sign
of the TLCF coefficient suggests that firms paying higher taxes also have higher
values of TLCF on their books. Taken together, these reported sign patterns are
largely consistent with the theoretical predictions listed in Section 4.21
For comparison, Columns 3 and 4 in Table 5 show the corresponding estimates
for the specifications without correcting for self-selection. The results are
generally consistent with those of the selectivity-adjusted regressions reported in
Columns 1 and 2. The extent of self-selection bias is assessed by comparing the
corresponding coefficients in Columns 1 and 3 vs. Columns 2 and 4 of Table 5.
The implied elasticity of the firms' tax payments with respect to each of the
independent variables (measured by multiplying the estimated coefficients by their
respective means) is used to estimate the differences in the size of the coefficients
between the selectivity-adjusted and OLS estimates.22 Our calculations reveal that
for LIFO firms the implied elasticities of the FIXED and INVTS variables differ
sizably between the selectivity-adjusted and OLS regressions. For instance, a 10%
increase in inventory to sales (INVTS) is expected to decrease taxes by 7.64%
(−4.603 × 0.166) under the selectivity approach but by 15.36% (−9.255 × 0.166)
under OLS. Large differences in implied elasticity between the selectivity-
adjusted and OLS regressions are also found for FIFO firms. A 10% increase in
available tax savings (LATS) will decrease taxes by 13.02% (−0.233 × 5.59) under
the selectivity approach and by 10.96% (−0.196 × 5.59) under the OLS approach.
Also, a difference of about 24% in implied elasticity is found for the INVTS variable
between the selectivity-adjusted and OLS categories. In addition, we performed
the Wald test for the overall differences between the selectivity-adjusted and OLS
regressions in each of the LIFO and FIFO categories. Our results show significant
differences (p < 0.001) across the selectivity-adjusted and OLS estimates.23
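The implied-elasticity comparison above is simple arithmetic: each estimated coefficient is multiplied by the mean of its regressor. A sketch reproducing the paper's reported figures:

```python
def implied_elasticity(coef, mean):
    """Implied elasticity: estimated coefficient times regressor mean."""
    return coef * mean

# LIFO firms, INVTS (mean 0.166): selectivity-adjusted vs. OLS coefficients.
sel = implied_elasticity(-4.603, 0.166)       # -0.764, i.e. 7.64% per 10% rise
ols = implied_elasticity(-9.255, 0.166)       # -1.536, i.e. 15.36% per 10% rise

# FIFO firms, LATS (mean 5.59): selectivity-adjusted vs. OLS coefficients.
sel_lats = implied_elasticity(-0.233, 5.59)   # -1.302, i.e. 13.02% per 10% rise
ols_lats = implied_elasticity(-0.196, 5.59)   # -1.096, i.e. 10.96% per 10% rise

print(round(sel, 3), round(ols, 3), round(sel_lats, 3), round(ols_lats, 3))
```

The gap between each selectivity-adjusted product and its OLS counterpart is exactly the coefficient distortion the correction is designed to remove.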
The estimates of the structural probit model (not reported) are based on the results of
the reduced-form probit. Thus, using the reduced-form probit, we derive the estimated
value of predicted tax savings under the LIFO and FIFO approaches. These economet-
rically computed predicted tax savings are introduced as an added variable in the
structural probit equations. The results of the estimated structural probit equations
based on Eq. (12) show that the sign and significance of the coefficients are largely
similar to those for the reduced-form probit, except for the size variable (LGTASST),
which is significant at the 1% level, and the inflation variable (CPRICE), which is
negatively and significantly associated with LIFO use. As previously noted, in the
reduced-form probit LGTASST was not significant and the CPRICE variable
was significantly positive.
The results of the structural probit (not reported) also show the coefficient for the
predicted tax savings, PRTXSAV, calculated as the difference between predicted
and actual taxes under the two inventory valuation methods for each firm in the
sample. For LIFO firms, it is calculated as the difference between predicted FIFO
taxes and actual LIFO taxes. For FIFO firms, it is the difference between predicted
LIFO taxes and actual FIFO taxes. For identifiability reasons, the structural probit
equations exclude the INCVAR variable. The estimation results (not reported)
reveal that the predicted tax savings variable has the most significant, positive
coefficient in explaining the use of LIFO, indicating that increases in the tax
benefit of adopting LIFO increase the likelihood that firms will choose the LIFO
method.
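The PRTXSAV construction described above reduces to a one-line difference. The example figures below are invented placeholders, not values from the paper's sample:

```python
def predicted_tax_savings(predicted_alt_taxes, actual_taxes):
    """PRTXSAV: predicted taxes under the alternative inventory method
    minus actual taxes under the method the firm actually chose.

    For a LIFO firm: predicted FIFO taxes - actual LIFO taxes.
    For a FIFO firm: predicted LIFO taxes - actual FIFO taxes.
    A positive value means the chosen method reduced the tax bill.
    """
    return predicted_alt_taxes - actual_taxes

# Hypothetical LIFO firm: it would have paid 12.0 under FIFO but paid 9.5.
savings = predicted_tax_savings(12.0, 9.5)
print(savings)  # 2.5 -> the chosen (LIFO) method saved taxes
```

Entering this variable into the structural probit is what lets the tax benefit itself explain the inventory-method choice.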
Since the results of the structural probit may depend on the choice of the variable
excluded from the inventory choice equation, we conducted sensitivity tests by
re-estimating the structural probit model with a different variable excluded in
turn. The results of these tests are reported in Table 6. Each row in the table
represents a separate estimation of the structural model. The excluded variable
in each case is listed in the first column. Clearly, all the predicted tax savings
Note: Dependent variable equals 1 if the firm is LIFO, 0 if FIFO. Each coefficient reported above
represents predicted tax savings computed as the difference between predicted and actual
taxes in separate estimates of the probit equation. The variable removed from the structural
probit equation is named in the first column. For example, the coefficient on predicted taxes in
Table 6 is reported as the first entry, labeled INCVAR, when INCVAR is removed.
∗∗∗ Significant at the 0.01 level.
in Table 6 are positive and significant at less than the 1% level, and qualitatively
similar in magnitude in each case. Thus, the positive effect of the predicted
tax savings variable appears to be robust to differences in the
specification of the structural probit equations.
Table 7. Predicted Dollar Tax Savings Under Different Regimes for LIFO and
FIFO Firms.
(Columns 1 and 2: selectivity-adjusted estimates for LIFO and FIFO firms; Columns 3 and 4: OLS estimates for LIFO and FIFO firms.)
Note: Tax savings for the LIFO (FIFO) firms is the difference between the actual LIFO (FIFO) taxes
and predicted FIFO (LIFO) taxes. A negative sign indicates tax savings and a positive sign indicates
tax savings foregone. Selectivity-adjusted values are based on the Heckman (1979)–Lee (1978)
procedure. OLS estimates are ordinary least squares regression-based estimates. To eliminate
the effect of extreme values, 18 observations were dropped from the LIFO sample and 9 observations
from the FIFO sample. Our results remain qualitatively the same with or without these outliers.
using pre-selected LIFO and FIFO firms. Knowing the self-selection bias, we are
able to estimate the tax savings that would have been realized had the firms not been
pre-selected and had the sample been randomly drawn. The estimated Mills ratio
(lambda term) based on the two-stage method identifies the extent of self-selection
bias and then allows the best measure of the predicted tax savings to be computed
explicitly in a structural probit estimate.
Our analysis yields the following results. First, we find strong evidence that
self-selection is present in our sample of LIFO and FIFO firms, consistent with the
hypothesis that the managerial decision to choose the observed inventory account-
ing method is based on a rational cost-benefit calculation. Second, correcting for
self-selection leads to the inference that LIFO firms would, on average, pay more
taxes as FIFO firms, and FIFO firms could have had tax savings had they been
LIFO firms. Our selectivity-adjusted calculation shows that the mean tax savings
is $282.2 million for LIFO firms and that FIFO firms would, on the average, pay
$12.3 million less in taxes if they were LIFO firms. Without correction for selec-
tivity bias, the mean tax benefit foregone is $40.6 million for LIFO firms and $11.3
million for FIFO firms. Overall, the results suggest that the difference between the
LIFO/FIFO tax savings could partly be the function of firm size. LIFO firms in our
sample are on the average larger than the FIFO firms. In addition, the difference
in tax savings may be related to specific industries. Finally, we believe that the
inventory method (LIFO or FIFO) is reflective of the various economic constraints
confronting the firm. Hence, the inventory method used by a firm is a rational
economic decision.
Two weaknesses may affect our work. Despite controlling for firm size in
the accounting choice function, one may argue that our results are affected by the
size difference between the LIFO and FIFO firms. The LIFO firms in our sample are
on average larger than the FIFO firms. In addition, the difference in tax savings
may also be specific to selected industries. We do not compute tax savings at the
industry level because of insufficient data. Future studies could explore this issue
further. Future work may also investigate the presence of self-selection bias on
other management accounting issues. For instance, the effect of the selection of the
depreciation method on managerial performance could be studied. An additional
area of work could focus on the effect of the selection of the pooling vs. purchase
method of accounting on the post-merger performance of merged firms.
NOTES
1. The LIFO reserve reported by LIFO firms since 1975 is also an as-if number (see
Jennings et al., 1996).
2. Martin (1992) argues that the FIFO method may be a logical tax-minimizing strategy for
some firms in the event that sales and production grow faster than the inflation rate, idle
capacity exists, and fixed manufacturing costs are relatively large.
3. Sunder (1976a, b) develops models to estimate the differences between the net present
value of tax payments under the LIFO vs. FIFO inventory valuation methods. He shows that
the expected value of net cash flows depends on future marginal tax rates, the anticipated
change in the price of inventories, the firm's cost of capital, the pattern of changes in
year-end inventories, and the number of years for which the accounting change will remain
effective. Caster and Simon (1985) and Cushing and LeClere (1992) found, respectively, that
tax loss carryforwards and taxes are significant factors in the decision to use the LIFO
method.
4. The accounting literature has yet to develop a unified theory explaining managers’
choice of an accounting method. However, an emerging body of accounting literature has
advocated the concept of rational choice as a basis of managerial decision-making (see
Watts & Zimmerman, 1986, 1990).
5. We recognize that a principal-agent problem can arise when the self-interest of managers
does not coincide with the interest of the firm's shareholders (Jensen & Meckling, 1976). We
do not incorporate the agency issue into the theoretical framework for two reasons: (1) it
is another source of self-selection in the data in addition to the self-selection caused by
maximizing shareholders' wealth; and (2) some empirical studies have found that man-
agerial compensation and managerial ownership variables are not significant regressors in
explaining inventory valuation choice (Abdel-Khalik, 1985; Hunt, 1985).
6. The Heckman-Lee method has also been used in previous accounting studies. For
instance, Abdel-Khalik (1990a, b) applied the Heckman-Lee model to firms acquiring man-
agement advisory services from incumbent auditors vs. other sources, and to the endogenous
partitioning of samples into good-news and bad-news portfolios of quarterly earnings an-
nouncements; Shehata (1991) uses the Heckman-Lee model to examine the effect of
Statement of Financial Accounting Standards No. 2 on R&D expenditures; and Hogan (1997)
shows that the use of a Big 6 vs. non-Big 6 auditor in an initial public offering depends upon
a strategy that minimizes the total cost of underpricing and auditor compensation.
7. The error terms in Eqs (10) and (11) are heteroskedastic and a correction must be
made in calculating the correct standard errors of the estimates. In this study, this correction
is achieved by using LIMDEP software in implementing the Heckman-Lee model. See
Greene (1990) for a description of the correction process.
8. Following Dopuch and Pincus (1988), we also estimate using the “as-if” approach
but we do not report the results in this paper.
9. Lindahl et al. (1988) characterize Lee and Hsieh's (1985) probit model as compre-
hensive, since it includes many of the variables used in previous studies.
10. The rationale for the use of the regressors in the probit function is covered extensively
in Lee and Hsieh (1985).
11. Dhaliwal et al. (1992) compute the FIXED variable by including long-term debt in
addition to the market value of equity in the denominator.
12. DeAngelo and Masulis (1980) demonstrated that a firm’s effective marginal tax
rate on interest deductions is a function of the firm’s non-debt tax shields (e.g. tax loss
carryforwards, investment tax credits).
13. Trezevant (1996) investigates the association of debt tax shield to changes in a non-
investment tax shield (cost of good sold) in the post LIFO adoption period.
14. A similar argument is made by Mackie-Mason (1990) when considering tax car-
ryforwards and debt financing. He argues that tax carryforwards have a large effect on the
expected marginal tax rate on interest expenses, since each dollar of tax carryforward is
likely to reduce a dollar of interest deduction.
15. We included firms with as few as one LIFO (FIFO) year, as long as the firm has not been
a FIFO (LIFO) firm in any other year in this period. There are several reasons for our
choice: (1) specifying a minimum number of years for inclusion in the sample is arbitrary
and introduces potential bias in estimates; (2) the time element is unimportant in the pooled
cross-sectional analysis as each year of the data is treated as an independent observation; and
(3) since firms obtain the benefits from choosing LIFO (FIFO) in the year of adoption, firms
using LIFO (FIFO) for only one year will still provide us with as much relevant information
on the inventory method choice as those using LIFO or FIFO for more than one year.
16. Aside from differences in inventory methods and substantial economic differences,
the differences in t-values between the two groups may also be a function of possible
violation of the assumptions of the t-test.
17. Probit results with industry dummies included are similar to the results reported in
Table 4. In addition, the results indicate that the regression coefficients are positive and
significant for the textile, chemicals, petroleum and coal, rubber, leather, primary metal and
wholesale industries and are negative and significant for electronic and business services
industries.
18. We also used the logarithmic value of income taxes paid (income taxes-total [Compu-
stat item 16] minus deferred income taxes [Compustat item 126]) as the dependent variable.
The results are essentially similar to those reported in Table 5. In addition, we examined
the possibility of using effective corporate tax rate as the dependent variable but decided
against its use for lack of consistency in the definition of the effective tax rate measure in
the literature (Omer et al., 1990).
19. The lambda term is based on the predicted value of the error term derived from the
reduced form probit. Hence, the selectivity adjustment reported in Table 5 is based on the
probit reported in Table 4.
20. The t-statistics shown in Table 5 are based on the correct asymptotic variance-
covariance matrix derived in Heckman (1979). Also, the OLS regression reveals no
multicollinearity among the regressors (VIF values are no more than 2.0).
21. The results obtained using a different set of regressors for the tax function are
qualitatively similar. For instance, we developed a model containing the following variables:
NDTS, TLCF, FIXED, INVVAR, RELASST, and INCVAR. The corresponding coefficients
of the lambda terms are −1.815 (t = −5.292) for LIFO firms and −7.071 (t = −22.462) for
FIFO firms. The sign and significance of other regressors in the tax equations are generally
in the expected direction. Similar results are also found when we drop the INCVAR variable
from the above tax function.
22. Differentiating the dependent variable with respect to the variable FIXED, for ex-
ample, yields (∂T/∂F)/T, where T is dollars of tax payments and F represents the FIXED
variable. Multiplying by the mean value of FIXED gives (F/T)(∂T/∂F), the elasticity of
taxes associated with fixed assets. Note that for the LATS regressor, both the dependent
and independent variables are already in natural logarithmic form. Thus, the estimated
coefficient is itself the elasticity figure.
23. We tested the normality assumption for each of the variables prior to adjusting
for selectivity using either the Kolmogorov-Smirnov or the Shapiro-Wilk statistic. The
results show that the observed distribution is not significantly different from the normal
distribution. In addition, following Pagan and Vella (1989), we perform a moment-based
test for normality in selectivity models. In this test, the predictions from the probit model are
squared and cubed, weighted by the Mills ratio. Using the two-stage least squares results,
the null hypothesis that squared and cubed terms are zero cannot be rejected.
24. The difference between actual LIFO (FIFO) taxes and predicted FIFO (LIFO) taxes
for LIFO (FIFO) firms is either tax savings or tax savings foregone. The negative difference
suggests tax savings and the positive sign difference indicates tax savings foregone.
25. We also computed tax savings/tax forgone using the “as-if” method described in
Lindahl (1982), Morse and Richardson (1983), Dopuch and Pincus (1988), and Pincus and
Wasley (1996). Our results (not reported) show that the average “as-if” tax savings for LIFO
firms is $10.9 million. The FIFO firms’ average tax savings foregone under the “as-if”
approach is $60.0 million. Overall, our results show that the selectivity-adjusted approach
has the highest tax savings for LIFO firms. The selectivity approach takes into consideration
the joint decision of the inventory method choice and the tax effect of the decision.
ACKNOWLEDGMENTS
An earlier version of this paper was presented at the 1998 Annual Meeting of
the American Accounting Association and at the 2003 Management Accounting
Research Conference. We are thankful to C. Brown, D. Booth, J. Ohde, M.
Myring, S. Mastunaga, M. Pearson, M. Pincus, M. Qi, R. Rudesal, and S. Sunder
for comments and suggestions and to D. Lewis and J. Winchell for computer
programming assistance. The first author is thankful to the Kent State University’s
Research Council for providing partial financial support for this project. A
previous version of this paper is also available on ssrn.com.
REFERENCES
Abdel-Khalik, A. R. (1985). The effect of LIFO-switching and firm ownership on executives' pay.
Journal of Accounting Research (Autumn), 427–447.
Abdel-Khalik, A. R. (1990a). The jointness of audit fees and demand for MAS: A self-selection
analysis. Contemporary Accounting Research (Spring), 295–322.
Abdel-Khalik, A. R. (1990b). Specification problems with information content of earnings: Revisions
and rationality of expectations, and self-selection bias. Contemporary Accounting Research
(Fall), 142–172.
Auerbach, A. J., & Poterba, J. M. (1987). Tax-loss carryforwards and corporate tax incentives. In: M.
Feldstein (Ed.), The Effects of Taxation and Capital Accumulation (pp. 305–338). Chicago:
University of Chicago Press.
Ball, R. (1972). Changes in accounting techniques and stock prices. Journal of Accounting Research
(Suppl.), 1–38.
Biddle, G. C. (1980). Accounting methods and management decisions: The case of inventory costing
and inventory policy. Journal of Accounting Research (Suppl.), 235–280.
Biddle, G. C., & Lindahl, R. W. (1982). Stock price reactions to LIFO adoptions: The association
between excess returns and LIFO tax savings. Journal of Accounting Research (Autumn),
551–588.
Biddle, G. C., & Martin, R. K. (1985). Inflation, taxes, and optimal inventory policy. Journal of
Accounting Research (Spring), 57–83.
Bowen, R., Ducharme, L., & Shores, D. (1995). Stakeholders’ implied claims and accounting method
choice. Journal of Accounting and Economics (December), 255–295.
Cushing, B. E., & LeClere, M. J. (1992). Evidence on the determinants of inventory accounting policy
choice. Accounting Review (April), 355–366.
DeAngelo, H., & Masulis, R. (1980). Optimal capital structure under corporate and personal taxation.
Journal of Financial Economics, 8, 3–29.
Dhaliwal, D., Trezevant, R., & Wang, S. (1992). Taxes, investment-related tax shields and capital
structure. Journal of American Taxation Association, 1–21.
Dopuch, N., & Pincus, M. (1988). Evidence on the choice of inventory accounting methods: LIFO
versus FIFO. Journal of Accounting Research (Spring), 28–59.
Duncan, G. (1986). Continuous/discrete econometric models with unspecified error distribution.
Journal of Econometrics, 32, 139–153.
Greene, W. H. (1990). Econometric analysis. Englewood, NJ: Macmillan.
Hagerman, R. L., & Zmijewski, M. R. (1979). Some economic determinants of accounting policy
choice. Journal of Accounting and Economics (January), 141–161.
Hand, J. R. M. (1993). Resolving LIFO uncertainty: A theoretical and empirical reexamination of
1974–1975 LIFO adoptions and nonadoptions. Journal of Accounting Research (Spring),
21–49.
Heckman, J. J. (1976). The common structure of statistical models of truncation, sample selection,
and limited dependent variables and a simple estimation for such models. Annals of Social
and Economic Measurement (Fall), 475–492.
Heckman, J. J. (1979). Sample selection bias as a specification error. Econometrica (January), 153–161.
Hogan, C. E. (1997). Costs and benefits of audit quality in the IPO market: A self-selection analysis.
Accounting Review (January), 67–86.
Hunt, H. G., III (1985). Potential determinants of corporate inventory decisions. Journal of Accounting
Research (Autumn), 448–467.
Jennings, R., Mest, D., & Thompson, R., II. (1992). Investor reaction to disclosures of 1974–1975
LIFO adoption decision. Accounting Review (April), 337–354.
Jennings, R., Simko, P., & Thompson, R., II (1996). Does LIFO inventory accounting improve
the income statement at the expense of the balance sheet? Journal of Accounting Research
(Spring), 85–109.
Jensen, M. C., & Meckling, W. H. (1976). Theory of the firm: Managerial behavior, agency costs and
ownership structure. Journal of Financial Economics (September), 305–360.
Kang, S. K. (1993). A conceptual framework for the stock price effects of LIFO tax benefits. Journal
of Accounting Research (Spring), 50–61.
Lee, L. F. (1978). Unionism and wage rates: A simultaneous equations model with qualitative and
limited dependent variables. International Economic Review (June), 415–433.
Lee, J. C., & Hsieh, D. (1985). Choice of inventory accounting methods: Comparative analyses of
alternative hypotheses. Journal of Accounting Research (Autumn), 468–485.
Lindahl, F., Emby, C., & Ashton, R. (1988). Empirical research on LIFO: A review and analysis.
Journal of Accounting Literature, 7, 310–333.
CORPORATE ACQUISITION DECISIONS UNDER DIFFERENT STRATEGIC MOTIVATIONS

Kwang-Hyun Chung
ABSTRACT
Acquisition is one of the key corporate strategic decisions for firms' growth and
competitive advantage. Firms: (1) diversify through acquisition to balance
cash flows and spread the business risks; and (2) eliminate their competitors
through acquisition by acquiring new technology, new operating capabilities,
process innovations, specialized managerial expertise, and market position.
Thus, firms acquire either unrelated or related business based on their
strategic motivations, such as diversifying their business lines or improving
market power in the same business line. These different motivations may be
related to their assessment of market growth, firms’ competitive position,
and top management’s compensation. Thus, it is hypothesized that firms’
acquisition decisions may be related to their industry growth potential,
post-acquisition firm growth, market share change, and CEO’s compensation
composition between cash and equity. In addition, for the two alternative
acquisition accounting methods allowed until recently, a test is made of whether
the type of acquisition is related to the choice of accounting method. This study
classifies firms’ acquisitions as related or unrelated, based on the standard
industrial classification (SIC) codes for both acquiring and target firms. The
empirical tests are, first, based on all the acquisition cases regardless of
the firm membership, and then, deal with the firms acquiring only related
businesses or unrelated businesses exclusively.
The type of acquisition was related to industry growth opportunities:
unrelated acquisition cases were more likely to be followed by a higher
industry growth rate than related acquisition cases. While there was a
substantially larger number of acquisition cases using the purchase method,
the related acquisition cases used the pooling-of-interests method more
frequently than the unrelated acquisition cases did. The firm-level analysis
shows that the type of acquisition decision was still related to the acquiring
firm's industry growth rate. However, the post-acquisition performance
measures, using firm growth and change in market share, could support
prior studies in that the exclusively related acquisitions helped firms grow more
and gain more market share than the exclusively unrelated acquisitions. The
CEO's compensation composition ratio was not related to the type of acquisition.
1. INTRODUCTION
For the last three decades, mergers and acquisitions have been important corporate
strategies involving corporate and business development as the capital markets
rapidly expanded. Increased uncertainty about the economy has made it difficult
for firms to resort only to internal growth strategies. Firms diversified through
acquisition to balance cash flows and spread the business risks. They also tried to
eliminate their competitors through acquisition by acquiring new technology, new
operating capabilities, process innovations, specialized managerial expertise, and
market position.
Mergers and acquisitions (M&A), as an increasingly important part of corporate
strategy, enable firms to grow at a considerable pace. Also, firms can quickly
restructure themselves through M&A when they find it necessary to reposition.
M&A enables firms to create and identify competitive advantage.
Porter (1987) identified four concepts of corporate strategy from
the successful diversification records of 33 U.S. companies from 1950 through
1986: portfolio management, restructuring, transferring skills, and sharing activities.
Those concepts of corporate strategy explain the recent corporate takeovers as
either related diversification or conglomeration. Porter (1996) also stressed
that the essence of corporate strategy is choosing a unique and valuable position
rooted in systems of activities that are much more difficult to match, while
operational techniques such as TQM, benchmarking, and re-engineering are easy
to imitate. Thus, many companies take over target firms with such positions
in the context of operational-effectiveness competition, instead of repositioning
themselves based on products, services, and customers' needs. Firms' sustainable
Corporate Acquisition Decisions under Different Strategic Motivations 267
competitive advantage through buying out rivals could be another key motivation
for corporate takeovers.
According to Young (1989), it is necessary to identify how M&A will result in
added value to the group, how quickly these benefits can be obtained and how the
overall risk profile of the group will be affected when considering M&A as part of
an overall corporate and business development strategy. The study suggests two
key criteria for successful corporate takeovers: the level of business affinity and
the business attractiveness (i.e. market size, growth, profitability, etc.) of the target
company. In reality, firms acquire either unrelated or related businesses based on
company. In reality, firms acquire either unrelated or related business based on
their strategic motivations: diversifying their business lines or improving market
power in the same business line. These different M&A decisions are hypothesized
to be influenced by the firms’ strategic motivations explained in the prior literature.
This paper attempts to identify the different strategic motivations for acquisition
activities of firms. More specifically, this study tests whether firms’ acquisition
decisions are related to their industry growth potential, post-acquisition firm
growth, market share change, and the CEO's stock compensation composition. It also
tests whether the type of acquisition is related to the choice of accounting method.
In light of what prior studies (e.g. Scanlon, Trifts & Pettway, 1989) have
done, this study classifies firms’ acquisitions as related or unrelated, based on the
standard industrial classification (SIC) codes for both acquiring and target firms.
This study utilizes Securities Data Company (SDC)’s Worldwide Mergers &
Acquisition Database from 1996 to 1999. This database includes all transactions
involving at least 5% of the ownership of a company where a transaction was val-
ued at $1 million or more, and each firm may have many acquisition cases over the
test period. Thus, the empirical testing is based on each case as well as on each firm.
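The related/unrelated classification described above can be sketched as a SIC-prefix comparison. The study says only that the classification is based on both firms' SIC codes; the two-digit (major-group) matching rule below is an illustrative assumption, not the paper's stated procedure.

```python
def is_related(acquirer_sic: str, target_sic: str, digits: int = 2) -> bool:
    """Classify an acquisition as 'related' when the acquirer and target
    share the first `digits` of their SIC codes (2 digits = SIC major group).

    The 2-digit cutoff is an assumption for illustration; a stricter
    3- or 4-digit match could be used instead via the `digits` argument.
    """
    return acquirer_sic[:digits] == target_sic[:digits]

print(is_related("2834", "2836"))  # both in major group 28 (chemicals) -> related
print(is_related("2834", "7372"))  # drugs vs. prepackaged software -> unrelated
```

Applying such a rule to each transaction, and then to each firm's full set of transactions, yields the case-level and firm-level samples the study tests.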
The rest of this paper is structured as follows. Section 2 explains how the
hypotheses are developed with regard to the accounting methods and the motivations
for firms' acquisition decisions, including industry growth opportunities and CEO
compensation, as well as the consequences of the acquisition decision in terms of
and test variables are explained in Section 3. Section 4 summarizes empirical
results in both all-cases analysis and firm-level analysis. Finally, the concluding
remarks are provided in Section 5.
2. HYPOTHESES DEVELOPMENT
2.1. Acquisition Trends and Industry Growth Opportunities
If a firm’s motivation for acquisition is to balance cash flows and spread the business
risks, it is more likely to acquire a different line of business. Also, when the industry
is faced with limited growth potential, firms are less likely to enter new markets,
because it would be riskier for management to manage a new, unrelated business.
This reasoning is supported by product-market theory, which suggests that risk
increases as a firm moves into a new, unfamiliar area.
The conglomerate type of diversification was often used to fuel tremendous
corporate growth during the 1960s, as firms purchased many unrelated businesses
regardless of what goods or services they sold. In the 1970s, managers began to
emphasize diversification to balance the cash flows that individual businesses produced.
Acquisition was regarded as a means of diversification to balance businesses that
produced excess cash flows against those that needed additional cash flows; this
was known as portfolio management to reduce risk. During the 1980s there was a
broad-based effort to restructure firms, shedding unrelated businesses and
focusing on a narrower range of operations. Expansion through acquisition was
often limited to vertical integration. In the 1990s, there was an increasing
number of diversifications into related businesses that focused on building dynamic
capabilities as an enduring source of competitive advantage and growth. If firms'
motivation for M&A is the use of core competence in the acquired business and/or
an increase in market strength, the new business should be sufficiently similar
to the existing business, and this benefit can be augmented by acquiring the
same line of business. The use of core competence and/or the increase of market
strength should produce a competitive advantage that consequently increases
market share. Thus, the following hypothesis is made in an alternate form with
regard to each acquisition case:
The hypothesis above can be rephrased using the firm's own growth rate, instead
of the industry growth rate, for the firm-level analysis. However, this study uses the
ex-post growth rate after the acquisition, and thus the post-acquisition growth
rate should be interpreted as a firm's post-acquisition performance rather than its
own assessment of growth potential. In fact, many firms acquired both related
and unrelated businesses in each year. Therefore, this study selects two distinct
Corporate Acquisition Decisions under Different Strategic Motivations 269
groups: firms that acquired exclusively the same business lines in each test
period (exclusive-related acquisition) and firms that acquired only different
lines of business in each test period (exclusive-unrelated acquisition).1 Prior M&A
studies suggested that changes in the opportunity to share resources and activities
among business units have contributed to post-acquisition performance.2 Most
studies find improved performance in the 1980s acquisitions, compared to the earlier
conglomerate acquisition wave of the 1960s, because of increased opportunities
to share resources or activities in the acquired firm (i.e. more operating synergy
effect).3 The diversifying character of unrelated acquisitions could be the reason
for their poorer performance,4 especially in the short run, compared to related
acquisitions, where there is a high opportunity for shared activities or resources.
Thus, the following hypothesis is formed in an alternate form with regard to a
firm's post-acquisition growth rate:
H2. Firms' acquisition decisions, unrelated or related, lead to different
post-acquisition growth. More specifically, firms with exclusive-related acquisition
cases are more likely to have a higher growth rate after the acquisition than
firms with exclusive-unrelated acquisition cases.
In addition to firms' post-acquisition growth, I hypothesize that a firm's acquisition
type affects its competitive position in its industry, as measured by market share,
because one of the crucial motivations for acquisition may be to increase
market strength. In a more competitive market especially, firms will be more
likely to acquire competing firms to increase market power than they are
in a less competitive market environment. Thus, firms with exclusive-related
acquisition cases are more likely to be motivated by increasing market power
than firms with exclusive-unrelated acquisition cases. Accordingly, the following
hypothesis is made in an alternate form with regard to the firm's change in
market share:
H3. Firms that acquired the same line of business are more likely to
experience a greater increase in industry market share, because they gain core
competences and market power, than firms that acquired only different
lines of business.
Before June 2002, there were two generally accepted methods of accounting for
business combinations: the purchase method and the pooling of interests method.
These two methods are not alternative ways to account for the same business
combination; the actual situation and the attributes of the business combination
determine which of the two methods is applicable. The purchase method applies
where one company is buying out another. The pooling of interests method applies
where the shareholders of one company surrender their stock for the stock of
another of the combining companies.
Under the pooling of interests method, the combination is a uniting of ownership
interests and is not regarded as a purchase transaction; thus, firms can avoid
recognizing goodwill. For this reason, the pooling of interests method required
that certain criteria be met regarding the nature of the consideration given and
the circumstances of the exchange. A problem with the method, however, is
determining the equivalent number of common shares to be exchanged between
the combining companies. If the two companies in the business combination
are similar to each other, it is perhaps easier to determine the shares
to be exchanged than in heterogeneous combinations.5 Thus, the following
hypothesis is stated in an alternate form in terms of the two accounting
methods used in acquisitions.
H4. Firms with exclusive-related acquisition cases are more likely to use the
pooling of interests method than those with exclusive-unrelated acquisition
cases.
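To illustrate the share-exchange problem, one common approach in a stock-for-stock combination sets the exchange ratio from relative market prices. The figures and the market-price approach below are our illustration, not drawn from the paper.

```python
# Illustrative only: the acquirer shares issued per target share in a
# stock-for-stock combination can be set from relative market prices;
# the prices and share count below are invented.

def exchange_ratio(target_price: float, acquirer_price: float) -> float:
    """Acquirer shares issued per target share, from relative market prices."""
    return target_price / acquirer_price

ratio = exchange_ratio(30.0, 60.0)    # 0.5 acquirer share per target share
shares_issued = ratio * 1_000_000     # for 1,000,000 target shares outstanding
```

For heterogeneous combinations the appropriate relative valuation is harder to establish, which is the difficulty the paragraph above points to.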
Table 1.

Two-Digit Related Unrelated Total 1996 1997 1998 1999
SIC Acquisition Acquisition Cases
10 20 4 24 6 8 5 5
13 180 37 217 85 57 38 37
14 17 9 26 1 7 10 8
15 22 19 41 11 5 15 10
16 2 8 10 2 2 3 3
17 2 8 10 2 3 5 0
18 0 3 3 3 0 0 0
20 199 61 260 54 61 67 78
21 5 3 8 3 1 1 3
22 24 22 46 17 11 8 10
23 33 21 54 10 18 14 12
24 20 42 62 8 13 28 13
25 15 9 24 4 9 5 6
26 68 37 105 36 20 27 22
27 152 95 247 55 70 64 58
28 275 217 492 126 129 136 101
29 10 60 70 17 25 16 12
30 20 25 45 7 12 15 11
31 6 4 10 3 3 3 1
32 23 59 82 13 27 19 23
33 57 65 122 31 35 32 24
34 45 73 118 31 33 38 16
35 236 454 690 164 169 177 180
36 270 354 624 107 135 166 216
37 129 185 314 64 83 78 89
Table 2.

Two-Digit 1996 1997 1998 1999 Total Firms
SIC
Related Unrelated Related Unrelated Related Unrelated Related Unrelated Related Unrelated
10 3 1 5 0 3 0 2 0 13 1
13 29 4 22 3 17 5 20 1 88 13
14 1 0 1 2 0 2 1 1 3 5
15 1 2 3 2 4 1 1 2 9 7
16 1 1 0 0 0 1 0 2 1 4
17 0 0 0 1 0 1 0 0 0 2
18 0 2 0 0 0 0 0 0 0 2
20 13 1 20 1 16 2 15 4 64 8
21 1 0 0 1 1 0 2 1 4 2
22 3 4 5 1 4 0 3 2 15 7
23 6 2 4 5 6 2 5 2 21 11
24 2 3 1 1 1 1 2 4 6 9
25 0 1 3 1 2 1 2 0 7 3
26 10 6 8 3 7 7 3 5 28 21
27 10 5 9 9 11 5 8 6 38 25
28 31 16 26 22 33 16 25 7 115 61
29 1 8 1 7 2 8 0 5 4 28
30 1 2 4 4 3 4 1 3 9 13
31 2 1 0 1 2 1 1 0 5 3
32 4 4 4 3 2 3 1 4 11 14
33 7 10 6 9 11 4 8 6 32 29
34 6 4 4 9 4 10 3 4 17 27
35 13 18 18 28 17 27 18 28 66 101
36 15 24 32 28 25 23 25 22 97 97
37 11 8 12 15 13 18 6 14 42 55

Table 2. (Continued)

Two-Digit 1996 1997 1998 1999 Total Firms
SIC
Related Unrelated Related Unrelated Related Unrelated Related Unrelated Related Unrelated
38 23 25 19 16 16 14 20 13 78 68
39 0 1 4 2 2 3 3 4 9 10
40 7 0 1 0 1 1 1 0 10 1
41 0 0 1 0 1 0 0 1 2 1
42 1 1 2 1 2 1 3 2 8 5
44 2 3 1 1 1 0 1 2 5 6
45 1 0 3 1 3 2 2 1 9 4
46 0 0 0 1 0 1 1 0 1 2
47 1 0 1 0 2 1 1 0 5 1
48 17 3 17 9 14 3 12 5 60 20
49 12 11 20 10 24 10 19 15 75 46
50 8 11 10 11 5 14 4 8 27 44
51 10 9 7 9 4 7 3 8 24 33
52 0 2 0 2 1 0 1 2 2 6
53 4 5 6 5 5 6 4 7 19 23
54 5 1 2 0 3 1 2 1 12 3
55 0 1 4 0 2 0 4 0 10 1
56 4 0 1 3 5 0 4 1 14 4
57 1 0 0 1 3 0 3 0 7 1
58 11 0 9 0 7 1 9 1 36 2
59 7 3 5 5 5 2 4 3 21 13
60 46 13 42 8 30 12 23 12 141 45
61 4 2 3 2 5 1 2 3 14 8
62 2 3 1 8 4 5 4 2 11 18
63 17 9 19 17 17 11 14 9 67 46
64 1 2 3 2 0 1 4 0 8 5
67 1 7 3 11 1 7 0 10 5 35
70 2 1 2 2 7 0 2 0 13 3
72 2 2 5 1 3 2 5 1 15 6
73 47 17 59 7 65 16 63 10 234 50
75 0 0 2 0 2 0 1 0 5 0
76 0 1 0 0 0 0 1 0 1 1
78 3 1 2 2 1 1 0 1 6 5
79 2 1 3 2 3 1 3 3 11 7
80 23 3 17 3 21 5 12 3 73 14
81 0 1 0 0 0 1 1 1 1 3
82 1 1 1 0 0 0 2 0 4 1
86 0 1 0 1 0 0 0 0 0 2
87 6 7 8 5 7 10 6 4 27 26
Total 442 275 471 304 456 282 396 256 1765 1117
industry growth rates at least one year after the acquisition, which is the proxy for
the industry growth potential.
For further firm-level analysis, I sorted the 9,058 cases by acquiring firm
and identified two distinct firm groups to test the hypotheses developed in the
previous section. The first group comprises firms that acquired exclusively related
businesses (i.e. targets in industries with the same two-digit SIC codes), and the
comparison group comprises firms that acquired exclusively unrelated businesses
in each test period. Table 2 summarizes the 1,765 exclusive related-acquisition
firms and the 1,117 exclusive unrelated-acquisition firms by year and two-digit
SIC code. Every year saw more related-acquisition firms than unrelated-acquisition
firms. The firm-level analysis compares test variables such as the industry
growth rate (industry growth opportunity), the firm's growth rate (post-acquisition
operating performance), the firm's change in market share (the motivation of
increasing market power), the accounting method used in the merger or acquisition,
and the CEO's equity compensation ratio (a proxy for the firm's growth potential).
Both the all-cases analysis and the firm-level analysis employ a two-group parametric
difference test (univariate t-test) between the related acquisition decision and the
unrelated acquisition decision, as well as logistic regressions using the test variables as
independent variables. The all-cases analysis uses a logistic regression with
the industry growth rates and the accounting method for each acquisition as the
independent variables. The industry growth rate was measured from one year after
the acquisition to fiscal year 2001, the latest year for which data are currently available
on the Compustat tape, and the logistic regression includes growth rates measured
over at least two years. The industry growth rate was calculated as the annual change
in the average net sales of the firms belonging to the same two-digit SIC code. The
firm's growth rate and change in market share were also measured from one year after
the acquisition to fiscal year 2001, and the logistic regression models
include them over at least a two-year period. The CEO's compensation ratio is
measured from Standard & Poor's ExecuComp by dividing the sum of stock granted
and the Black-Scholes value of options granted by the CEO's total annual compensation;
the higher this ratio, the more dependent on equity the CEO's compensation is.
The accounting method is a dichotomous variable coded 0 for the purchase method
and 1 for the pooling-of-interests method.
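The variable construction above can be sketched as follows. This is a hedged sketch: the helper names and toy figures are ours, not actual Compustat or ExecuComp fields, and the growth rate is annualized here as one plausible reading of "annual change."

```python
# Sketch of the test-variable construction; names and numbers are illustrative.

def industry_growth_rate(avg_sales: dict, start: int, end: int) -> float:
    """Annualized growth in average net sales of a two-digit SIC group."""
    years = end - start
    return (avg_sales[end] / avg_sales[start]) ** (1.0 / years) - 1.0

def equity_compensation_ratio(stock_granted: float,
                              option_value_bs: float,
                              total_compensation: float) -> float:
    """Share of CEO pay granted as equity (stock plus Black-Scholes option value)."""
    return (stock_granted + option_value_bs) / total_compensation

def accounting_method_dummy(method: str) -> int:
    """0 for the purchase method, 1 for the pooling-of-interests method."""
    return 1 if method == "pooling" else 0

g = industry_growth_rate({1997: 100.0, 2001: 121.0}, 1997, 2001)    # about 4.9%/year
r = equity_compensation_ratio(500_000.0, 1_500_000.0, 5_000_000.0)  # 0.4
d = accounting_method_dummy("pooling")                              # 1
```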
4. EMPIRICAL RESULTS
4.1. All-Cases Analysis
A total of 9,058 M&A cases were analyzed by year, divided into
related acquisition cases and unrelated acquisition cases. Table 3 shows the
Table 3. Comparison Between Related Acquisition Cases and Unrelated Acquisition Cases (All Cases Analysis).
Year 1996 1997 1998 1999
Related Unrelated t-Stat. Related Unrelated t-Stat. Related Unrelated t-Stat. Related Unrelated t-Stat.
Acquisition Acquisition Acquisition Acquisition Acquisition Acquisition Acquisition Acquisition
Cases Cases Cases Cases Cases Cases Cases Cases
c Significance at the 10% level (two-tailed test).
comparison between the two groups from 1996 to 1999. Except in
1999, the growth rates of the industry to which each acquisition case belongs show
a significant difference between the two groups in the hypothesized direction. The
growth rates measured up to 2001 were even smaller in some industries than those
measured up to 2000, which may contribute to the weaker results for the industry
growth rates measured through 2001 compared to those measured through 2000.
This study cannot extend the industry growth rate beyond 2001 because of data
unavailability. Also, the industry growth rate is an ex-post rather than ex-ante
surrogate for industry growth potential. The accounting method supported the
alternative hypothesis, indicating that while most acquisitions are accounted for
by the purchase method, firms are more likely to adopt the pooling-of-interests
method in related acquisition cases than in unrelated acquisition cases.
The multivariate logistic regression results are provided by year in Table 4.
In year 1996, all industry growth rates except that from 1996 to 1998 are statistically
significant in differentiating related and unrelated acquisition cases in the hypothe-
sized direction. Consistent with the univariate comparison, the industry growth
rates in both 1997 and 1998 are all significant in explaining the firm's
acquisition decision. Unrelated acquisition decisions are made more often
when industry growth opportunities are foreseen. The accounting method is
statistically significant in all years.
Likelihood ratio 14.945 25.937 11.47 4.59 25.788 54.109 32.976 24.579 46.561 14.003
Intercept 0.4595 0.577 0.44 0.349 0.3661 0.5818 0.403 0.3065 0.5013 0.1185
(51.07a ) (61.8a ) (44.87a ) (30.05a ) (36.73a ) (64.61a ) (44.29a ) (31.37a ) (54.9a ) (4.77b )
Industry growth
5-years −0.2377
(10.94a )
4-years −0.4204 −0.323
(21.76a ) (12.12a )
3-years −0.332 −0.8002 −0.2001
(7.57a ) (39.36a ) (3.43c )
2-years −0.1514 −0.7437 −0.8254 −0.116
(0.7) (19.07a ) (24.36a ) (0.36)
Accounting method 0.2979 0.3008 0.3013 0.3098 0.5593 0.565 0.5383 0.6392 0.6261 0.567
(3.48c ) (3.54c ) (3.56c ) (3.78c ) (12.93a ) (13.04a ) (11.95a ) (19.27a ) (18.34a ) (13.19a )
% Concordant 45.5 52 50.5 47.2 51.1 56.4 54.6 45.6 54.5 48.8
Remarks: For year 1996
5-years: year 1996–2001
4-years: year 1996–2000
3-years: year 1996–1999
2-years: year 1996–1998
For year 1997
4-years: year 1997–2001
3-years: year 1997–2000
2-years: year 1997–1999
For year 1998
3-years: year 1998–2001
2-years: year 1998–2000
For year 1999
2 years: year 1999–2001
a Significance at the 1% level.
b Significance at the 5% level.
c Significance at the 10% level.
Table 5. Comparison Between Exclusive Related Acquisition Firms and Exclusive Unrelated Acquisition Firms
(Firm-Level Analysis).
Year 1996 1997 1998 1999
Test Variables Exclusive Exclusive t-Stat. Exclusive Exclusive t-Stat. Exclusive Exclusive t-Stat. Exclusive Exclusive t-Stat.
Related Unrelated Related Unrelated Related Unrelated Related Unrelated
Acquisition Acquisition Acquisition Acquisition Acquisition Acquisition Acquisition Acquisition
Firms Firms Firms Firms Firms Firms Firms Firms
Observations (firms) 297–413 175–243 310–426 215–272 323–388 189–225 312–335 192–211
5-years 0.788 0.6124 0.55
4-years 0.553 0.454 0.39 0.5748 0.2541 2.41b
3-years 0.4394 0.3613 0.45 0.4335 0.1805 2.56b 0.2479 0.0703 2.23b
2-years 0.3237 0.3098 0.11 0.295 0.1691 2.01b 0.1214 0.0476 1.41 0.1089 0.0865 0.5
1-year 0.1411 0.1692 −0.51 0.1715 0.1053 1.99b 0.0589 0.0345 0.95 0.0311 0.0494 −0.64
Accounting method 0.0973 0.08 0.78 0.1158 0.0757 1.9a 0.1269 0.066 2.77c 0.135 0.0703 2.8c
Observations (firms) 442 275 475 304 457 288 400 256
Equity compensation ratio 0.3652 0.3431 0.94 0.3921 0.3946 −0.1 0.4065 0.4441 −1.55 0.4393 0.4374 0.07
Observations (firms) 355 216 383 253 409 247 363 228
Table 6. Logistic Regression Results for Exclusive Related and Unrelated Acquisition Firms (Firm-Level Analysis).
Year 1996 1997 1998 1999
Observations 381 418 453 504 381 418 453 504 452 498 552 452 498 552 494 528 494 528 492 492
Likelihood 14.8395 14.5963 9.4373 5.0893 13.294 14.4448 10.432 5.2089 10.5679 11.6314 6.5834 14.212 14.0328 7.128 12.3096 11.9747 12.46 12.01 5.9521 6.3144
ratio
intercept 0.6431 0.7855 0.5715 0.4891 0.6579 0.7992 0.5671 0.4908 0.4043 0.729 0.4503 0.3922 0.7099 0.45 0.6168 0.7605 0.6037 0.7517 0.4099 0.4028
(8.81a ) (12.29a ) (8.37a ) (7.18) (9.39a ) (5.62a ) (8.23a ) (7.23a ) (4.61b ) (12.87a ) (6.69a ) (4.32b ) (12.11a ) (6.71a ) (10.77a ) (14.40a ) (10.38a ) (14.08) (5.07b ) (4.87b )
Industry growth
5-years −0.4779 −0.355
(7.86a ) (4.58b )
4-years −0.5957 −0.4297 −0.3052 −0.1216
(8.79a ) (4.89b ) (2.25) (0.35)
3-years −0.6316 −0.411 −0.7261 −0.581 −0.3418 −0.1626
(5.71b ) (2.47) (7.1a ) (4.45b ) (1.48) (0.35)
2-years −0.465 −0.1926 −0.229 −0.057 −0.9106 −0.7725 −0.2747 −0.1884
(1.41) (0.24) (0.44) (0.03) (5.26b ) (4.11b ) (0.61) (0.3)
Firm growth
5-years 0.174
(3.59c )
4-years 0.2199 0.1691
(4.32b ) (4.25b )
3-years 0.2422 0.1068 0.1664
(3.28c ) (2.03) (2.42)
2-years 0.3052 0.1884 0.1237 0.0677
(3.12c ) (2.25) (1.24) (0.18)
Change in market share 0.241 0.3358 0.3626 0.3612 0.3418 0.2399 0.242 0.2418 0.186 0.1492
(3.38c) (3.94b) (3.87b) (3.19c) (6.14b) (3.56c) (2.69c) (2.53) (1.25) (0.53)
CEO 0.239 0.0596 0.2415 0.0646 0.2445 0.0683 0.2419 0.0608 −0.3481 −0.4206 −0.375 −0.3984 −0.4567 −0.388 −0.2988 −0.1485 −0.297 −0.143 0.0844 0.0777
compensation
(0.34) (0.02) (0.44) (0.04) (0.36) (0.03) (0.44) (0.03) (1.09) (1.79) (1.59) (1.4) (2.09) (1.69) (0.86) (0.23) (0.85) (0.22) (0.07) (0.06)
Accounting method 0.0515 0.0271 0.0616 0.2475 0.0649 0.0245 0.0514 0.2568 0.5036 0.355 0.4724 0.4693 0.3395 0.468 0.906 0.7754 0.9136 0.7778 0.6667 0.6584
(0.01) (0.01) (0.03) (0.43) (0.02) (0.01) (0.02) (0.46) (2.3) (1.25) (2.32) (1.98) (1.13) (2.27) (6.02b ) (5.1b ) (6.12b ) (5.12b ) (4.45b ) (4.34b )
% Concordant 57 58.8 58.4 56 56.7 58.6 58.6 56.1 56.9 56.4 54.4 57.3 56.6 54.7 56.3 57 56.1 57.2 54.9 55.5
a Significance at the 1% level.
b Significance at the 5% level.
c Significance at the 10% level.
The multivariate logistic regression models are formed to explain the firm's
acquisition decision using the industry growth potential, the motivations of increasing
market power and improving operating performance, the CEO's wealth-increasing
motivation, and the firm's motivation of avoiding goodwill recognition by using the
pooling-of-interests method. Because of the high correlation between the firm's growth rate
and its change in market share, these variables are not included in the models
simultaneously, to avoid potential multicollinearity. Firms' industry growth rates, as
proxies for industry growth potential, are mostly significant in the hypothesized
direction in explaining the different types of acquisitions, except in 1999, possibly
because of the shorter measurement period. The firm's growth rate, as a post-acquisition
performance measure, was less significant than the industry growth rate.
Post-acquisition performance is a good indicator for explaining firms'
exclusive-related acquisitions in 1996 and 1997. Likewise, the change in market
share can explain the different types of acquisition in 1996 and 1997. By
contrast, the accounting method can explain the different types of acquisitions
only in the later test periods. This result may be explained by the fervent use of the
pooling-of-interests method in the late 1990s, in anticipation of the method's imminent
abolition. As found in Table 5, none of the CEO compensation ratios
can explain firms' acquisition types in the models. The multivariate models
explaining the firm's acquisition decision strategy fit poorly, especially in the late
1990s (Table 6).
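The two alternative specifications can be sketched as below. This is a toy, pure-Python illustration: the gradient-ascent estimator and the invented data are ours, not the paper's actual estimation; the firm's growth rate and its market-share change would each enter a separate model to avoid multicollinearity.

```python
import math

# Minimal gradient-ascent logit, sketching the separate specifications above.
# The estimator and the toy data are illustrative, not the paper's sample.

def fit_logit(X, y, lr=0.5, steps=3000):
    """Return [intercept, b1, b2, ...] for a Bernoulli logit fitted by gradient ascent."""
    n = len(X)
    b = [0.0] * (len(X[0]) + 1)
    for _ in range(steps):
        grad = [0.0] * len(b)
        for xi, yi in zip(X, y):
            z = b[0] + sum(bj * xj for bj, xj in zip(b[1:], xi))
            p = 1.0 / (1.0 + math.exp(-z))
            grad[0] += yi - p
            for j, xj in enumerate(xi):
                grad[j + 1] += (yi - p) * xj
        b = [bj + lr * gj / n for bj, gj in zip(b, grad)]
    return b

def linear_index(b, x):
    """Fitted log-odds for one observation."""
    return b[0] + sum(bj * xj for bj, xj in zip(b[1:], x))

# y = 1 for exclusive-related firms, 0 for exclusive-unrelated firms (toy data).
# Model A uses [industry growth, firm growth]; Model B would swap the second
# column for the change in market share.
X_a = [[0.10, 0.20], [0.05, 0.15], [0.30, 0.02], [0.25, 0.01]]
y = [1, 1, 0, 0]
b = fit_logit(X_a, y)
```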
5. CONCLUDING REMARKS
Through M&A, firms can diversify to balance cash flows and spread business
risks, improve efficiency or effectiveness by reducing competition,
or foster growth by creating more market power. Depending on
their corporate strategic motivations, firms may acquire unrelated businesses
and/or related businesses. A diversification motivation is more likely to lead to
unrelated acquisitions, while related acquisitions are more likely to result
from the motivation of reducing competition and/or creating market power. Thus,
this paper attempts to identify firms' different strategic motivations for their
M&A activities by relating corporate acquisition decisions to their assessment
of industry growth potential, post-acquisition firm growth, market share change,
choice of accounting method, and the composition of CEO stock compensation.
The empirical tests reveal that across all acquisition cases, industry growth
opportunities play a key role in the choice between unrelated and related
acquisitions, regardless of firm membership. Both the univariate comparison
test and the multivariate logit regression show that industry growth rates tend
to be higher for unrelated acquisition cases than for related acquisition cases,
which is consistent with product-market theory indicating higher risk
under unrelated diversification acquisitions. Also, the choice of accounting method
differed between the two types of acquisition cases: firms in related
acquisition cases tended to favor the pooling-of-interests method more than firms in
unrelated acquisition cases, although most acquisitions were accounted for using the
purchase method. Because this analysis includes all acquisition cases regardless of
firm membership, a firm can have both related and unrelated acquisitions in the same year.
In the firm-level analysis, however, I excluded firms that had both
related and unrelated acquisitions in the same year, so that only exclusive-related
acquisition firms and exclusive-unrelated acquisition firms appear in each
year. This test also shows that the type of acquisition decision was
related to the acquiring firm's industry growth rate. The post-acquisition performance
measures, using the firm's growth and change in market share, were consistent in
explaining the types of exclusive related and unrelated acquisitions in the earlier
test periods (i.e. 1996 and 1997). The accounting choice in the firm-based analysis
was compatible with the type of acquisition, as in the industry-wide analysis,
except in the earlier years. However, the composition of CEO compensation was not
related to the different types of acquisition, even though previous studies suggested
that firms' growth opportunities affect the composition of CEO stock compensation.
This study has some limitations. First, there is potential misclassification
between related and unrelated acquisitions, because the two-digit SIC
codes of both acquiring and acquired firms were applied mechanically. Second,
the ex-post industry growth rate was used in place of an ex-ante variable for
firms' assessment of industry growth potential; as a result, the test variables for
the recent test periods have a short measurement time span, given data availability.
Last, this study used compensation data confined to S&P 1,500 companies, while
the all-cases analysis covers most public firms in the SDC Worldwide
Mergers & Acquisitions Database.
NOTES
1. Thus, firms classified as exclusive related-acquisition firms in one test period could
be classified as exclusive unrelated-acquisition firms in another test period.
2. Dess, Ireland and Hitt (1990), Hoskisson and Hitt (1990), Davis and Thomas (1993),
and Brush (1996).
3. Walker (2000).
4. Berger and Ofek (1995).
5. In the late 1990s, the FASB indicated that the pooling-of-interests method was no
longer an appropriate accounting principle for business combinations. The prospective
accounting rule change seems to have pushed many firms involved in business
combinations to use the method in the late 1990s.
ACKNOWLEDGMENTS
This research was sponsored by a Lubin Summer Research Grant (2002). I appreciate
the comments and suggestions from J. Lee and M. Epstein (the editors), as well as
participants at the 2003 AIMA Conference. I also thank Ryan Shin for excellent
research assistance.
REFERENCES
Berger, P. G., & Ofek, E. (1995). Diversification’s effect on firm value. Journal of Financial Economics,
37(January), 39–65.
Brush, T. H. (1996). Predicted change in operational synergy and post-acquisition performance of
acquired businesses. Strategic Management Journal, 17(January), 1–23.
Davis, R., & Thomas, L. G. (1993). Direct estimation of synergy: A new approach to the diversity-
performance debate. Management Science, 39(November), 1334–1346.
Dess, G. G., Ireland, R. D., & Hitt, M. A. (1990). Industry effects and strategic management research.
Journal of Management, 16(March), 7–27.
Hoskisson, R. E., & Hitt, M. A. (1990). Antecedents and performance outcomes of diversification: A
review and critique of theoretical perspectives. Journal of Management, 16(June), 461–509.
Narayanan, M. P. (1996). Form of compensation and managerial decision horizon. The Journal of
Financial and Quantitative Analysis, 31(December), 467–491.
Porter, M. E. (1987). From competitive advantage to corporate strategy. Harvard Business Review,
65(May–June), 43–59.
Porter, M. E. (1996). What is strategy? Harvard Business Review, 74(November–December), 61–78.
Scanlon, K. P., Trifts, J. W., & Pettway, R. H. (1989). Impacts of relative size and industrial relatedness
on returns to shareholders of acquiring firms. The Journal of Financial Research, 12(Summer),
103–112.
Walker, M. M. (2000). Corporate takeovers, strategic objectives, and acquiring-firm shareholder
wealth. Financial Management, 78(Spring), 53–66.
Young, B. (1989). Acquisitions and corporate strategy. Financial Management, 67(September), 19–21.
THE BALANCED SCORECARD:
ADOPTION AND APPLICATION
ABSTRACT
Technological advances and increasing competition are forcing organ-
isations to monitor their performance ever more closely. The concept
of the balanced scorecard offers a systematic and coherent method of
performance measurement that in particular concentrates on assessing
present performance in the light of an organisation’s strategy and takes
into account the importance of the various policy aspects. In this paper we
study the extent to which the concept contributes to the desired improvement
of performance. To this end, we examine the motives for adopting the
concept and the decision-making process around this adoption. We study the
functioning of the balanced scorecard as a means to control performance,
assuming that its functioning is linked to an organisation's problems and
is influenced by the other control instruments used. For this reason, we have
conducted case research.
INTRODUCTION
performance pyramids (Judson, 1990; Lynch & Cross, 1991) and integrated perfor-
mance measurement systems (Nanni et al., 1992): these are only some examples
out of many. Some even seem to compete with one another, like the balanced
scorecard and the performance pyramid. In both theory and practice, the
balanced scorecard in particular has been at the centre of interest, possibly
in part because its authors are well known in the consulting profession.
The balanced scorecard is an instrument with which the performance of
organisations can be measured systematically and coherently. In recent years,
attention has shifted more and more from measuring performance towards
managing it; measuring is a means to eventual performance management
and control. Kaplan and Norton (1996a, b) claim that the aim is to discover
cause-and-effect relations between the various areas of organisational activity and
the organisational outcomes. The balanced scorecard concept therefore defines
critical success factors and performance indicators which reflect performance.
The thinking behind this is that critical success factors determine the realisation
of the organisation's strategic aims, and that the performance indicators are
a more detailed concretisation of these factors. The indicators show which activities
have to be carried out now and in the near future in order to realise the aims successfully.1
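By way of illustration only, the mapping from perspectives to critical success factors to indicators can be sketched as a small data structure. The factors and indicators below are invented for illustration; they are not drawn from the paper or from the NedTrain case.

```python
# Invented illustration of the scorecard structure: each perspective carries a
# small set of critical success factors, each concretised by an indicator.
scorecard = {
    "customer":           {"on-time delivery": "% of trains serviced on schedule"},
    "internal processes": {"maintenance throughput": "mean repair turnaround in days"},
    "innovation":         {"workforce skills": "training hours per employee"},
    "financial":          {"cost control": "maintenance cost per train-kilometre"},
}

def indicators_for(perspective: str) -> list:
    """List the performance indicators defined under one perspective."""
    return list(scorecard[perspective].values())
```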
This paper is about the adoption and application of the balanced scorecard
concept at the level of a specific organisation, for which the adoption,
implementation and use of the balanced scorecard have been examined. The paper
is theoretically informed by institutional theory as concretised by Abrahamson
(1991, 1996) and Abrahamson and Rosenkopf (1993). Using institutional theory,
we will examine the process of adopting, developing and using the balanced
scorecard at NedTrain, a Dutch organisation in the field of public transport.
Furthermore, the NedTrain study is informed by theoretical notions on the control
concepts underlying the use of the balanced scorecard, in particular the
control concepts distinguished by Simons (1994, 1995). By examining the
adoption as well as the implementation and use of the balanced scorecard,
we hope to gain insight into questions highly relevant to professional
practitioners confronted with decision-making connected with performance
and control systems. Is the adoption indeed a consequence of deliberate
decision-making by professional practitioners? If so, does the balanced scorecard
live up to the (either implicit or explicit) expectations of the adoption phase? How
does it affect firms' operations? And if not, what then drives the adoption? And
what happens with the balanced scorecard after the adoption?
The issues raised in this paper are relevant not only to academics but also
to practitioners. Our study in particular meets Lukka's (1998) criticism of much
current management accounting research: that it is insufficiently aimed at accounting
and control possibilities for it to be able to intervene in a firm's operations. We
would like to add that in current management accounting research the professional
controller (or management accountant) has a very low profile. In this study,
the controller, as the professional responsible for the economic rationality of all
business processes (including those around the adoption, development and application
of a balanced scorecard), is given a high profile.
The paper is organised as follows. In the following section we will briefly present
the balanced scorecard’s origin and, drawing on Simons (1994, 1995), go into
choices in the design of the control system around the scorecard, with particular
attention to the role of the controller (or management accountant). In the next
section, drawing on institutional theory, we will describe the motives underlying
the adoption of the balanced scorecard, after which we will justify the case research
method and procedures chosen. In the last section but one we will extensively report
on our case research and describe the adoption and application processes of the
balanced scorecard. Finally we will make some general remarks about the major
findings of the case study.
It is already about fifteen years since Johnson and Kaplan wrote the book
Relevance Lost: The Rise and Fall of Management Accounting. It proved to be an im-
portant milestone in what at present is often called the Relevance Lost Movement.
In reality, this movement had already started before 1987, in two papers by Kaplan
in the important journal The Accounting Review (1983, 1984). Kaplan asserts
that systems and procedures of cost accounting and management control were
originally developed for firms manufacturing mass-produced standard products:
simple, direct cost information and responsibility accounting systems
aimed mainly at minimising production costs. In his view, they are not suitable for
modern industry, which is especially characterised by client-specific production,
short life-cycles, CAD/CAM technology and much “overhead.” The solution
proposed consists of a number of elements, including refining costing and cost cal-
culation techniques, lengthening the time horizon of control techniques, and a shift
away from a firm-centred orientation to a value chain approach. Another important
element is the balanced scorecard, which widens the scope of accounting
reports with non-financial information from four perspectives: the customer, the
firm's internal processes, innovation, and financial aspects. From each perspective
a restricted number of critical success factors are formulated on the basis of
290 JELTJE VAN DER MEER-KOOISTRA AND ED G. J. VOSSELMAN
It is the controller’s core business to prepare and argue for particular
choices of control system. It may be assumed that, depending on the controller’s
position and conception of duty, balanced scorecard-based control concepts will
be designed differently. Jablonsky, Keating and Heian (1993) have done research
into the changing role of the “financial executive” in internationally operating
companies in the United States. In view of the job description of the position of
“financial executive” the conclusions are also relevant to the position of controller
in European companies. Interviews in six companies and a survey of over 800
companies, in which the opinions of financial as well as non-financial managers
were elicited, resulted in a distinction between two profiles: that of the “corporate
policeman” and that of the “business advocate.” The core values of the “corporate
policeman” profile are: “oversight and surveillance, administration of rules and
regulations, and impersonal procedures” (p. 2). The core values of the “business
advocate” profile are: “service and involvement, knowledge of the business, and
internal customer service” (p. 2).
The controller as “corporate policeman” is an extension of senior man-
agement and will introduce an instrument like the balanced scorecard into the
organisation top-down. The most important part of the balanced scorecard will
be an account of the delegated powers with an emphasis on realising performance
standards. These standards are derived from the strategic policy drafted by the
senior management.
The controller as “business advocate” is embedded in the businesses and
supports business management. This controller is expected to know the
activities of the businesses, so that he/she can contribute to the discussions
about developments and changes in the business. As a team member he/she
advances ideas about the financial organisation that best fit the changes
taking place in the business. The balanced scorecard will in the first place
be an instrument for jointly formulating a strategy for the businesses and
elaborating strategic policy in more detail. When the balanced scorecard is
planned and elaborated by mutual arrangement, a strategic learning process develops.
This brings about communication of the strategy and enables all participants
to check which contributions they are making to realising the chosen strategy
(also cf. de Haas & Kleingeld, 1999). The balanced scorecard contributes to the
growth of a generally shared vision of the organisation and of the way this vision
can be realised. By determining the standards, everyone’s contribution to it
becomes clear. Such a use of the balanced scorecard is aimed much less
at accountability.
The controller as an extension of the senior management fits into the traditional
concept of “responsibility accounting.” The controller in the capacity of “business
advocate” fits much more into the interactive control system discussed above.
Design and functioning of the balanced scorecard will run closely parallel with
the role of the controller.
Internal Incentives
certainty cannot be found. Put differently: owing to the uncertainty and complexity
the professional decision-maker will have to sail into uncharted waters.
The impossibility of making a reliable cost-benefit analysis beforehand
does not mean that in practice there are no demonstrable reasons of technical
efficiency to consider the adoption of a balanced scorecard. One such reason is
that the performance measurement systems in use are deemed insufficiently
effective. Traditional systems, for instance, are on the whole strongly financially oriented.
They assess the performance in the short term, and in doing so hardly link up with
strategic policy. The balanced scorecard does link up with long-term policies and
assesses performance in the light of these policies. By assessing the performance
from various perspectives the emphasis is not exclusively placed on financial
performance. Information about non-financial performance moreover provides an
insight into the causes of the financial results.
Moreover, an important reason for adopting the balanced scorecard may be
found in changes within the organisation and in its environment. It is these
changes that, internally, necessitate a reconsideration of existing control
systems. These changes may take place in market conditions, due to which, for
example existing products come under pressure. Changes may also be initiated
by technological developments, due to which existing products become obsolete
and new production methods become possible. Developments in information
technology can also bring about changes, because the data gathering and process-
ing possibilities increase and information becomes accessible at all levels and
workplaces in the organisation. The growth of an organisation, in size as well as ge-
ographical extent, can have consequences for the way activities are organised and
hence controlled.
It is therefore at least plausible that Kaplan and Norton developed the balanced
scorecard in reaction to changes in production and service organisations. It may
be assumed that, on the level of the organisation, the effectiveness criterion
completely supersedes the efficiency criterion when adoption is being considered.
Put differently: professional decision-makers will not need precise cost-benefit
analyses in connection with the adoption of the balanced scorecard. They are sim-
ply looking for effective systems. It is only gradually that the “costs” of the system
will appear.
External Incentives
particular governmental organisations can have so much (legal) power that they
can impose the use of new instruments on other organisations.
Mimetic behaviour can also lead to the adoption of new instruments. Sometimes
organisations belong to the same organisational collective, that is, to the same
group of competing organisations with respect to performance and/or raison
d’être. Imitating this group can lead to a so-called “fad.” Then the decision
to adopt the balanced scorecard is not based on effectiveness and efficiency
considerations concerning the instrument, but on the simple fact that (many)
other organisations have already done the same. In such processes two phases
have been distinguished (DiMaggio & Powell, 1983; Tolbert & Zucker, 1983). In
the first phase efficient choice behaviour is uppermost; especially the assessment
of the technical effectiveness and efficiency is important. In a second phase the
real fad starts to take off. As more and more organisations adopt the balanced
scorecard the attractiveness for a non-adopter increases. This may be connected
with the pressure from “stakeholders” like customers, suppliers and capital
providers. A high incidence of the balanced scorecard can make this instrument
rational for stakeholders: they associate adoption of the balanced scorecard with
rational decision-making. Inversely, non-adoption can make them suspect that
the organisation’s management is incapable of rational decision-making (also cf.
Meyer & Rowan, 1977). This may result in their discontinuing their contributions
to the organisation. Therefore many decision-makers may be expected to keep
on the safe side and decide on adoption after all. For as more organisations in a
collective choose to adopt the balanced scorecard, a specific organisation’s
continuity becomes bound up with adoption. Political factors that are difficult
or impossible to calculate will then make the decision to adopt rational after the event.
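The two-phase dynamic described above, efficient choice first and bandwagon pressure once the installed base grows, can be illustrated with a toy simulation. This is a purely illustrative sketch of the mechanism; the model and all parameter values are hypothetical and are not drawn from the cited studies.

```python
import random

def simulate_bandwagon(n_orgs=100, adopt_threshold=0.7, pressure_weight=0.6,
                       rounds=20, seed=42):
    """Toy two-phase adoption model. Phase 1: each organisation adopts only if
    its own (noisy) technical-efficiency assessment clears a threshold.
    Phase 2: non-adopters face bandwagon pressure that grows with the share
    of adopters in the collective. All parameters are hypothetical."""
    rng = random.Random(seed)
    assessments = [rng.random() for _ in range(n_orgs)]   # perceived net benefit
    adopted = [a > adopt_threshold for a in assessments]  # phase 1: efficient choice
    history = [sum(adopted)]
    for _ in range(rounds):
        share = sum(adopted) / n_orgs
        for i in range(n_orgs):
            if not adopted[i]:
                # Phase 2: adoption probability rises with the installed base,
                # largely regardless of the organisation's own assessment.
                p = min(1.0, 0.1 * assessments[i] + pressure_weight * share)
                if rng.random() < p:
                    adopted[i] = True
        history.append(sum(adopted))
    return history

history = simulate_bandwagon()  # adoption counts per round; never decreases
```

The sketch captures only the qualitative point made in the text: once enough organisations have adopted, staying out becomes ever less likely, whatever an individual organisation’s own efficiency assessment says.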
Decisions to adopt within a collective of organisations for that matter do not
always have to be fad- or fashion-driven. Quite possibly, information from early
adopters may enable late adopters to decide on grounds of technical efficiency.
For this to happen, such information has to be made available and must actively
influence decision-making by non-adopters.
Mimetic behaviour may also create a fashion in response to actions by trend-
setting organisations or networks. The latter include researchers at universities
and business schools, and also organisation consultants. They disseminate their
ideas by means of various media, like professional journals, books, seminars and
personal communication and may be considered to be suppliers in a market for
balanced scorecards (and other administrative innovations). They are economi-
cally active actors, whose self-interest is paramount. Processes of fashion setting,
accompanied by standardisation of products, assist the suppliers. This improves
the marketability of the products. The demand in the market is from professionals
from the business world and governmental organisations. They do wish to take
their decisions to adopt (purchase decisions) on grounds of efficiency and effec-
tiveness, but have to move into uncharted waters because there is uncertainty
about aims, aims-means relations and future environmental conditions.
In short, there is ambiguity (March & Olsen, 1976). Fashion-setting processes
can help the buyer, because they may give the decision to adopt a semblance of
rationality. According to Abrahamson (1996), fashions and fads only stand a chance
when not only the collective belief arises that adoption of the balanced score-
card is rational, but also a sense of progress: in the view of the stakeholders,
a clear improvement on the original situation must take place.
Fashions and fads are not necessarily good or bad for organisations. The adoption of
the balanced scorecard can for example enhance a company’s political image. The
attractiveness for (potential) stakeholders increases in such cases, with inherent
positive economic consequences for the organisation. Adoption is then mainly a
legitimising decision for the stakeholders. However, adoption can be more than an
act of legitimising. It can kick-start a learning process, the fruits of which may be
reaped by the decision-maker.
Decisions to adopt which are mainly externally inspired, and which the
participants hardly deem to contribute to an increase in the effectiveness and
efficiency of the activities, will lack broad internal support. As long as external
legitimisation has hardly any consequences for what is being done internally,
there will be “loose coupling” (DiMaggio & Powell, 1983; Meyer & Rowan,
1977). The balanced scorecard will hardly be significant internally, because the
concept will be elaborated superficially and the information it generates will play
no significant role in decision-making and in influencing behaviour.
If the internal participants are of the opinion that performance has to be improved
and are convinced that the balanced scorecard can make an important contribution,
adoption may be expected to be followed by the actual implementation and use of
the concept. The internal participants will be prepared to invest time in designing
scorecards. They will also use information from the cards when taking decisions.
We also aim to look at the role of the controller in the adoption process and the
functioning of this instrument. Such an insight can only be gained to a sufficient
degree if in-depth research is done into a real-life situation. Hence, the choice was
made for a single case-study. In the last subsection we indicated that, when external
legitimisation is the predominant adoption motive, the instrument will only have
a symbolic function. We emphatically want to study the internal functioning of
the instrument and want to know if applying it makes the expected contribution to
improving performance. Therefore, we opted for a situation from practice where
internal incentives also played a role in the decision to adopt. Moreover we are
only interested in a practical case in which the balanced scorecard has been tested
for some years.
This means that the decision to adopt the balanced scorecard took place
some years ago and that, for information on this ex-ante phase, we will have to
rely on the memories of people closely involved in the decision. Hence, an
a posteriori reinterpretation of the decision cannot be entirely excluded. Such a
distortion of information can be neutralised as much as possible by interviewing
several persons about this phase and by consulting written documents.
People play various roles in organisations and will therefore interpret processes,
activities and information differently. The balanced scorecard is a management
accounting instrument. Such instruments are regarded as being among the tools
of the controller. Nonetheless, the instrument is expected to be a means with
which members of an organisation can control performance. The position of
controller is a supportive one and differs from the line functions. Because
of these different roles acquiring an insight into how the balanced scorecard
functions requires the perspective of the controller as well as the perspective from
the line.
We did the case-study at the Dutch Railways. This company had a long tradition
as a public company. In the early nineties the government decided to change
the Dutch Railways into a private company. Within the Dutch Railways we did
research in the service unit NedTrain, which in the context of the privatisation
introduced the concept of the balanced scorecard. Here we conducted interviews
with the central controller and line managers and controllers of the separate units.
These interviews were conducted from October 1998 to February 2000. In addition,
written material was studied.
In the following section we will report on the NedTrain case-study. We will start
by describing the activities of NedTrain. Further, we will discuss the changing
positioning of NedTrain due to the privatisation process Dutch Railways has
been undergoing since 1995. Then we will describe the changes NedTrain has
made in order to function as a business unit responsible for its own results. In this
change process the Balanced Scorecard has played an important role.
commission’s remit was to study a hive-off of the Dutch Railways and to find out
how the Dutch Railways’ monopoly could be transformed into a market position
in which competition plays a role. According to its recommendations the Dutch
Railways should develop into an independent, enterprising and customer-oriented
company. It would eventually be listed on the stock exchange in order to cut the fi-
nancial links with the government as well. This recommendation had far-reaching
consequences for the position of the Dutch Railways and its internal functioning.
It implied a radical change with respect to its protected position in the past. The
proposal was to cut up the Dutch Railways into an operating part, to be privatised,
and an infrastructural part, which was to remain state-owned. This split, which was
also required by European regulations, has by now been put into effect. Afterwards,
additional agreements were made in order to curb Dutch Railways’ monopoly
position. The Dutch Railways were to concentrate on the core network. Regional
lines were opened up to independent transport companies. After the departure of
Goods Traffic by the end of 1999 the Dutch Railways has further restricted itself
to passenger transport only.
The Dutch Railways operating units were also confronted with these radical
changes. The service unit NedTrain became an independently operating unit,
which had to pay its own way. It had to become result-responsible, enterprising
and customer-oriented. It was not only to receive internal customers, but would
also quite clearly have to behave as a market party and try to attract customers
in competition with others. Before this, NedTrain had owned all rolling stock
and decided what maintenance had to be carried out. Public Transport and Goods
Traffic, the users of stock and equipment, had hardly any influence on all this. This
situation now changed. Public Transport now became the owner and was hence-
forth to be buyer of maintenance services. Thus, a customer-supplier relationship
arose which was unthinkable before. The owners of the stock and equipment can
now freely decide to buy external services. NedTrain now clearly has to take into
account the wishes of the customers and be able to offer its services in conformity
with the market. This turnaround had to be completed in a couple of years.
In 1993, on the eve of the hiving-off of the Dutch Railways, NedTrain started
to explore the changes to come. To this end a business plan was drafted,
defining new markets. However, quantified objectives were all but absent.
The process of creating a result-responsible unit was described and the necessity
of developing and implementing a market orientation was discussed. Result-responsibility
requires knowledge of costs and benefits, of the various buyers and their wishes,
and an insight into, inter alia, one’s market position and competitors. The starting
point was that NedTrain did not possess the accounts with which financial
results could be related to operational activities; the necessary basic
accounts were lacking. There were no asset and debtor accounts, nor was there
a profit and loss account. Nor was information available about buyers, markets
and competitors. In the past the focus had been exclusively on technical
aspects. This was also reflected in the composition of the company’s management,
in which financial and commercial expertise did not feature.
NedTrain’s management realised that without external assistance the changes
would not be successful. Therefore, an external consultant was hired, who made
an important contribution to the design of the changeover process. In addition, in
the course of time, new kinds of expertise have been taken on board. NedTrain’s
management was supplemented with a financial and commercial manager. In
NedTrain’s business units, too, a number of new people were appointed on the
managerial level.
The structure of the organisation was adjusted. NedTrain used to be an internal
service unit within the Dutch Railways. It had its own budget and was responsible
for the costs, which should stay within the budget. Now NedTrain became a
business unit with responsibility for its own results. Before the privatisation
NedTrain delivered its services only internally and, as owner of the rolling
stock, determined itself the quality and level of services. After the privatisation
NedTrain was no longer the owner of the rolling stock. Now it delivered its
services to both internal and external customers, who decided about the quality
and level of services. In order to make all the participants aware of NedTrain’s
changing position, its management introduced units responsible for their own results.
This was a radical change, as NedTrain used to have a functional structure with
many operating units. Thus, the business unit Overhaul and Service was created
for short-term maintenance, the business unit Refurbishment and Overhaul for
long-term maintenance and the business unit NedTrain Consulting for advice for
the purchase of new, and the conversion of existing, stock and equipment. Each
unit was given its own management, including a controller. Since a withdrawal to
the core network, and hence a decrease in service to the internal buyers, had been
anticipated, the new policy planned for an increase in service to third parties.
Moreover, by the end of 1999 Goods Traffic left the Dutch Railways and became an
external customer. Figure 1 gives an overview of the new organisational structure.
NedTrain had to become a profit-oriented unit with its own customers. In the
past performance was expressed in technical terms and, as the unit determined
the quality and level of services itself, it did not need to be aware of customers’
wishes. The new orientation required information about costs and revenues of
the various services and per customer, and information about customers’ wishes
and satisfaction. Because the level of internal services was expected to decrease
NedTrain was also expected to focus on external customers. Therefore, information
about market developments and competitors had to be collected.
The external consultant suggested the use of the balanced scorecard as one of
the vehicles of change for NedTrain. As discussed before, the entire complex of
changes was to reduce technical domination and place financial and commercial
considerations more at the centre. The aim was to make the engineers realise
as quickly as possible that NedTrain’s operations had to generate a profit and
contribute to Dutch Railways’ profitability. To achieve this, it is no longer
enough to look only at technical aspects and strive for technical perfection.
The wishes of customers and the financial consequences of technical proposals
must also be taken into consideration.
In addition, an important objective was to shift the internal orientation
towards a more external one: customers, competitors and market developments
should play an important role in decision-making. The balanced scorecard fits these
objectives excellently. The concept is based on the idea that performance is to
be examined from various perspectives, not only internal ones but also the
external ones mentioned above. Further, the concept
recognises the importance of learning and adjusting to new developments. This
was an important feature as NedTrain’s management knew that the change pro-
cesses would create an unstable situation for a longer period in which continuous
adaptations to new insights and environmental changes would be at the fore.
The eventual decision to adopt the balanced scorecard was taken jointly
by the NedTrain management team, which includes the central
management and the management of the business units, the central controller
and the consultant. At the moment of adoption there was no clear vision of the
design of the control system connected with the balanced scorecard. Nor had
the management team developed a clear strategic vision on NedTrain’s future
position in the market and within the Dutch Railways. It was thought necessary
for the organisation to become as quickly as possible fully aware of the fact that
technology is also expensive and that there should be an external orientation. The
balanced scorecard was considered a good instrument for putting these subjects
on the agenda. Design, implementation and usage costs of the instrument played
no role in the decision-making. According to the central controller: “the feeling
was to start tentatively; then you cannot go wrong over that.”
Implementation
The following process was followed for the implementation of the balanced score-
card. By the end of 1993 a “kick-off meeting” was organised, with the management
team and all managers and controllers of the various units being present. In the
spring of 1994 the concept of the balanced scorecard was presented unit by unit to
all managers in workshops. Per unit so-called tandems were used: a local controller
and an external consultant acted as pioneers of the balanced scorecard concept.
Next, working groups were formed in each of the several companies to formulate
critical success factors, which were drafted per perspective. So there was no inte-
grated approach for drafting the critical success factors for the various perspectives.
Occasionally, the sheets with critical success factors per perspective were simply
stapled together. Dozens of critical success factors and their indicators were not
unusual. The local management including the local controllers made a selection
from all these factors. NedTrain’s managing director and the central controller, in
cooperation with the managers of the various units, determined the critical success
factors and their indicators per business unit, there being about 15 such indicators
per business unit and two or three applying across the board. In these
discussions some important indicators were removed, namely those providing
insight into the capacity utilisation of staff, the amount of service and logistic
performance. NedTrain’s central controlling staff further elaborated the balanced
scorecards, that is, the layout of reports and systems for drawing them up.
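The card structure that resulted, a small set of critical success factors grouped per perspective and tracked by indicators, with a target attached where one exists, can be sketched as a simple data model. This is an illustrative sketch only; the class names and figures are hypothetical and do not describe NedTrain’s actual reporting system.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Indicator:
    name: str
    value: float
    target: Optional[float] = None  # many early indicators had no target yet

    def on_target(self) -> Optional[bool]:
        # Without a target, the card only describes the current situation.
        if self.target is None:
            return None
        return self.value >= self.target

@dataclass
class Perspective:
    name: str  # e.g. financial, customer, internal processes, innovation
    indicators: List[Indicator] = field(default_factory=list)

@dataclass
class Scorecard:
    unit: str
    perspectives: List[Perspective] = field(default_factory=list)

    def indicator_count(self) -> int:
        return sum(len(p.indicators) for p in self.perspectives)

# Hypothetical example of a business-unit card with one indicator per
# perspective; a real card would hold roughly 15 indicators in total.
card = Scorecard("Overhaul and Service", [
    Perspective("financial", [Indicator("cost recovery (%)", 101.0, 100.0)]),
    Perspective("customer", [Indicator("customer satisfaction", 7.2)]),
])
```

The optional target mirrors the situation described in the text: a scorecard can report the existing situation long before performance standards are agreed.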
The implementation process took place without any major problems. People
knew that the structure, working methods, management and control of NedTrain
would have to change. It was also known that the changes would have to take
place over a relatively short period. It was realised that henceforth financial
aspects would have to be paramount and that an external orientation is necessary.
Because the introduction process was rather rapid it turned out afterwards that the
design of the cards had not been clear on all points. Thus, there were problems
with the nature of some of the performance indicators, with the uniformity of the
concepts used and the accessibility of the reports. Some important performance in-
dicators turned out to be absent: the ones referring to customer satisfaction, logistic
performance and innovative capability. These problems have been addressed in the
course of time, starting by defining the concepts unambiguously and improving
the reports.
It was also rather tricky to define performance standards. In fact, the discussion
about NedTrain’s future and the objectives derived from this had not even started
yet, nor were people used to working with clear performance standards. Therefore,
initially attention was focussed exclusively on performance measurement, without
feasibility and desirability being considered. The scorecards comprised only
information about the existing situation, without any links to targets, because
no targets yet existed. The balanced scorecard was, at first, mainly used to acquire
an insight into the relation between technical and financial aspects and to make
people in the organisation ponder the critical success factors of their own units.
This also supported the strategic discussion about the internal and external
positioning of NedTrain as a whole, as well as the strategic discussion within
the business units themselves. Following this discussion the units’ management
developed targets, which over time were included in the scorecards.
After the appointment of the commercial manager the market perspective has
received much more attention. With the aid of so-called customer dashboards for
the various markets, the relationship with the customers is systematically charted.
This process is still going on. Insight into market position and customer
characteristics provides relevant information for the strategic discussion. In this
discussion the positioning of NedTrain and the development of strategic alliances
are addressed. Issues discussed included which activities NedTrain should carry
out itself, which activities could be outsourced to external parties, and the
positioning of the companies building rolling stock. The last-mentioned
companies are in the process of acquiring rolling stock maintenance companies,
so that they not only can sell the rolling stock but are also able to take care of its
maintenance. This enables them to conclude contracts with railway companies
during the lifetime of the rolling stock. NedTrain’s key asset is that it possesses
knowledge about the relation between the rolling stock and the rail infrastructure,
i.e. rail tracks and energy. The tendency is towards concluding long-term contracts
with customers through which NedTrain guarantees the functioning of the
customer’s rolling stock. This leads to another way of bidding and pricing, and
subsequently other cost information. The role of the controllers is to support their
line managers in developing these types of contracts and to deliver the relevant
cost information. Further, the NedTrain management discusses the deployment
of subcontractors for specific activities in a more structured way.
Meanwhile, the internal process perspective has also been taken up. The
various units have adopted the concept of the balanced scorecard and have
started to interpret it in their own way. The units argue that the central scorecard
is too financially oriented and hence less suitable for their own management.
Moreover, reporting to the top management requires underlying information. Starting
from their own critical success factors, the units scrutinise the operational activities.
In doing so, some units go down to the shop floor. Thus, the NedTrain Consulting
business unit has developed cards on two levels within their own organisation,
with which all processes and activities are assessed. The controller of this business
unit has played a prominent role in developing these cards. He involved all the
organisational units in the development of appropriate performance indicators and
targets. All echelons contributed to the strategic discussion and the thinking about
consequences for processes and activities. In this way everybody knows what is
going on and responsibilities are shared. A benchmark and a customer satisfaction
study have revealed more about the unit’s own functioning and have helped in
formulating the objectives and targets. There are discussions about further
elaborating the innovation perspective, but people find it hard to make this concrete.
Besides the balanced scorecard concept, the EFQM quality model is being used.
The advantage of the EFQM model is that it is more complete and has development
stages, allowing one to see where one stands at the moment and which steps still
have to be taken. The information derived from the scorecards and the quality model lays the
foundation for the central scorecards.
The central scorecards are the business unit management’s monthly means
of reporting to the central management and largely determine the topics of the
central management team’s meetings. Within the units the cards are viewed as an
important management tool. They make action-oriented management possible,
At that time the balanced scorecard concept was a hype. We needed a new instrument in order to
emphasise the financial consequences of our activities in the first place. As the consultant advised
the use of the balanced scorecard, we accepted this advice without deliberate discussions.
The design processes have encouraged people to ponder the critical success factors
and the contributions to this from the internal units. The adoption of the concept
by the several units and the drill-down processes to the shop floor have sparked
off across-the-board discussions in these units. Thanks to these discussions the
In consultation with the business units the central management has determined the central
scorecard. Each month we report on a brief set of indicators to the central management.
Quarterly we report on the whole set of indicators, followed by an in-depth discussion with
the central management. Using this information about the current situation we are able to
discuss the environmental developments and their implications for the business units’ activities.
I have advised using the concept also within our business unit. We were convinced that people
throughout the unit should be informed about the operational processes and their financial
consequences. Our scorecards are much more focussed on operational processes, in particular
the scorecards used within our units. The scorecards have helped us to discuss our internal posi-
tioning, both within NedTrain and the Dutch Railways, and our external positioning. NedTrain
Consulting used to be more externally focussed due to its advisory role about introducing new
technological developments and determining the technical requirements of new rolling stock.
As demand from internal customers is decreasing, we are widening our focus to the external
market. Therefore, we have paid a lot of attention to customer satisfaction and its influence
on the internal operations and processes. We have developed measurable service specifications
for each of the processes. Further, we have put much effort into measuring
the performance of our Research & Development unit as a means to manage its activities.
I have regular meetings with people of this unit in order to discuss the most appropriate
performance indicators.
We use the scorecards for discussing the current situation and whether we are on the right
track to realise our strategic goals. We do not have a culture of “settling accounts” but of
“talking to.” What is very important is that all the people are informed about what is going on
and that there is a feeling of shared responsibility.
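The reporting cadence the controller describes, a brief set of key indicators every month and the whole set every quarter, amounts to a simple selection rule. The indicator names and the key flags below are illustrative assumptions, not NedTrain's actual set.

```python
# Each indicator carries a flag saying whether it belongs to the brief "key"
# set that goes to central management every month (illustrative data only).
INDICATORS = [
    {"name": "operating result", "perspective": "financial", "key": True},
    {"name": "customer satisfaction", "perspective": "customer", "key": True},
    {"name": "process throughput", "perspective": "internal", "key": True},
    {"name": "service-spec compliance", "perspective": "internal", "key": False},
    {"name": "R&D milestones met", "perspective": "innovation", "key": False},
]

def report(indicators, month):
    """Monthly reporting uses only the key set; in quarter-end months
    (3, 6, 9, 12) the whole set is reported for the in-depth discussion."""
    if month % 3 == 0:
        return list(indicators)
    return [i for i in indicators if i["key"]]

print([i["name"] for i in report(INDICATORS, 2)])  # the three key indicators
print(len(report(INDICATORS, 3)))                  # 5, the whole set
```

The point of the two-tier cadence, as the case suggests, is that the monthly brief set keeps the central agenda focused while the quarterly full set supports the in-depth strategic discussion.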
The central and decentralised management are satisfied with the concept. The financial
perspective is claimed to have been put on the agenda and to have contributed to
a more external orientation, the two objectives deemed urgent at the beginning of
the change process. At present the concept functions as an important management
instrument. It determines the topics of discussion centrally as well as within the
several units. The success of the implementation is ascribed to a number of factors.
308 JELTJE VAN DER MEER-KOOISTRA AND ED G. J. VOSSELMAN
In the first place, the problems were widely acknowledged, as was the necessity
of carrying through changes. Moreover, the concept had – and still has – the
backing of the entire management, and it matched the technical orientation
of NedTrain's personnel very well, making the concept easily accessible and
widely supported. A lot of energy has also been lavished on the introduction and
implementation of the concept, and attempts have been made to involve as many
people as possible in organising this. In this process the central controller and
the business unit controllers played a pivotal role. In the implementation process
they functioned as intermediaries between the management and the people within
the units. They supported them in developing their scorecards. They also paid
attention to the linkages between the various performance perspectives and were
not only focussed on the financial aspects. They participated in the monthly and
quarterly discussions about the scorecards within the various management teams.
We can conclude that the role controllers play is that of a business advocate.
At the beginning the concept played a central role in changing the content of the
agenda: it introduced financial, customer and market aspects. The information of
the scorecards was very helpful for conducting the strategic discussion throughout
the organisation. At present the concept plays a central role in managing performance
and in the ongoing strategic discussion. This is because the information
from the cards determines to a large extent the agendas of the meetings of the
central and decentralised management teams. Nevertheless, now that the strategy
has become much clearer the concept is changing to a means of reporting and
discussing actions for the coming period. Although the scorecards are regularly
discussed, this does not alter the fact that there are clear differences between
the various units with respect to the significance of the concept for the way things
are done. In the NedTrain Consulting business unit the balanced scorecard has
been accepted by the entire organisation. In the Refurbishment and Overhaul
business unit this is less so. Here cards have been drafted on various levels, but they
have less meaning the lower one descends in the organisation. The difference in
acceptance is ascribed to the difference in knowledge, and perhaps the larger number
of personnel also plays a role.
We asked ourselves whether using the concept of the balanced scorecard
produces the effects on performance anticipated during the adoption phase. Some
remarks are called for here. In the specific case of NedTrain we are observing a
radical change. It turns out that initially the management did not have any clear
ideas about all the consequences of such a change for the organisation. There were ideas
about the direction of the changes, and more well-defined thoughts
about the changes that would in any case have to take place as quickly as possible.
Steps were taken without being able to survey their consequences beforehand.
Gradually, things became clearer, though without making the picture
complete. We can also conclude that in a radical change process it is not one
NOTE
1. Some of the basic assumptions underlying the balanced scorecard concept have been
criticized by Nørreklit (2000). Part of her criticism concerns the causality concept of the
balanced scorecard. She concludes that there is not a causal but rather a logical relationship
among the areas analysed (p. 82). Rather than viewing the relationship between non-financial
measures as causal, the focus should be on coherence between measurements. Coherence
focuses “on whether the relevant phenomena match or complement each other” (p. 83).
REFERENCES
Abrahamson, E. (1991). Managerial fads and fashions: The diffusion and rejection of innovations.
Academy of Management Review, 16, 586–612.
Abrahamson, E. (1996). Management fashion. Academy of Management Review, 21, 254–285.
Abrahamson, E., & Rosenkopf, L. (1993). Institutional and competitive bandwagons: Using mathe-
matical modelling as a tool to explore innovation diffusion. Academy of Management Review,
18, 487–517.
Anthony, R. N. (1989). The management control function. Boston: Harvard Business School Press.
Argyris, C. (1990). The dilemma of implementing controls: The case of managerial accounting.
Accounting, Organizations and Society, 15(6), 503–511.
Bjørnenak, T., & Olson, O. (1999). Unbundling management accounting innovations. Management
Accounting Research, 10, 325–338.
DiMaggio, P. J., & Powell, W. W. (1983). The iron cage revisited: Institutional isomorphism and
collective rationality in organizational fields. American Sociological Review, 48, 147–160.
de Haas, M., & Kleingeld, A. (1999). Multilevel design of performance measurement systems:
Enhancing strategic dialogue throughout the organization. Management Accounting Research,
10, 233–261.
Jablonsky, S. F., Keating, P. J., & Heian, J. B. (1993). Business advocate or corporate policeman?
Morristown, NJ: Financial Executives Research Foundation.
Judson, A. S. (1990). Making strategy happen, transforming plans into reality. London: Basil
Blackwell.
Kaplan, R. S. (1983). Measuring manufacturing performance: A new challenge for managerial
accounting research. The Accounting Review, 58(4), 686–705.
Kaplan, R. S. (1984). The evolution of management accounting. The Accounting Review, 59(3), 390–418.
Kaplan, R. S. (2001a). Transforming the balanced scorecard from performance measurement to
strategic management. Part 1. Accounting Horizons, 15(1), 87–105.
Kaplan, R. S. (2001b). Transforming the balanced scorecard from performance measurement to
strategic management. Part 2. Accounting Horizons, 15(2), 147–161.
Kaplan, R. S., & Norton, D. P. (1992). The balanced scorecard – Measures that drive performance.
Harvard Business Review (January–February), 71–79.
Kaplan, R. S., & Norton, D. P. (1993). Putting the balanced scorecard to work. Harvard Business
Review (September–October), 134–147.
Kaplan, R. S., & Norton, D. P. (1996a). Using the balanced scorecard as a strategic management
system. Harvard Business Review (January–February), 75–85.
Kaplan, R. S., & Norton, D. P. (1996b). Strategic learning & the balanced scorecard. Strategy &
Leadership (September–October), 18–24.
Lukka, K. (1998). Total accounting in action: Reflections on Sten Jönsson’s accounting for
improvement. Accounting, Organizations and Society, 23(3), 333–342.
Lynch, R. L., & Cross, K. F. (1991). Measure up! Yardsticks for continuous improvement. London:
Blackwell.
March, J. G., & Olsen, J. P. (1976). Ambiguity and choice in organizations. Bergen, Norway:
Universitetsforlaget.
March, J. G. (1978). Bounded rationality, ambiguity and the engineering of choice. Bell Journal of
Economics, 9(2), 587–608.
Meyer, J. W., & Rowan, B. (1977). Institutionalized organizations: Formal structure as myth and ceremony.
American Journal of Sociology, 83, 340–363.
Nanni, A. J., Dixon, J. R., & Vollmann, T. E. (1992). Integrated performance measurement: Man-
agement accounting to support the new manufacturing realities. Journal of Management
Accounting Research (Fall), 1–19.
Nørreklit, H. (2000). The balance on the balanced scorecard – A critical analysis of some of its
assumptions. Management Accounting Research, 11, 65–88.
Simons, R. (1994). Control in an age of empowerment. Harvard Business Review (March–April),
80–88.
Simons, R. (1995). Levers of control: How managers use innovative control systems to drive strategic
renewal. Boston: Harvard Business School Press.
Tolbert, P. S., & Zucker, L. (1983). Institutional sources of change in the formal structure of
organizations. Administrative Science Quarterly, 28, 22–39.