
Earnings Quality

Background

Over the years, researchers have devised various measures of earnings quality to represent
decision usefulness in specific decision contexts. These measures, however, have become
proxies for earnings quality in a generic sense, absent a decision context. The result is that
some papers use a proxy for earnings quality that does not match the hypothesized form of
decision usefulness in their study, but they nonetheless find results that are consistent with their
hypothesis. Other papers are intentionally agnostic and find robust results across multiple
proxies for earnings quality. The fact that researchers find consistent and robust results across
proxies suggests that there is common component to the various measures of quality, which is
the firms fundamental earnings process. Existing research does not clearly distinguish the
impact of a firms fundamental earnings process on the decision usefulness (quality) of its
earnings from the impact of the application of accounting measurement to that process.
Research attention has focused on earnings management that reduces the reliability of earnings
rather than on the ability of specific features of an accrual-based accounting system to provide a
more decision-useful measure, conditional on the firms fundamental earnings process.

We begin with a definition of earnings quality that sets the scope of our review. Higher
quality earnings more faithfully represent the features of the firm's fundamental earnings process
that are relevant to a specific decision made by a specific decision-maker. Our definition implies
that the term earnings quality is meaningless without specifying the decision context, because
the relevant features of the firm's fundamental earnings process differ across decisions and
decision-makers.
This broad scope is motivated by the varied and often imprecise use of the term earnings
quality by practitioners (including regulators, enforcement agencies, and courts), the financial
press, and academic researchers. Lev (1989) popularized the adjective "quality" as a descriptive
characteristic of earnings for academic researchers when he stated that one explanation for the low
R² in earnings/returns models is that "no serious attempt is being made to question the quality of the
reported earnings numbers prior to correlating them with returns." Lev's statement implicitly
suggests that he defines earnings quality as decision-usefulness in the context of equity valuation
decisions.
Accounting researchers continue to use the descriptor "quality" in reference to the
decision-usefulness of earnings in equity market valuation, but use of the term has been
extended to other contexts as well, likely because of our conversational understanding of the
term "quality" as an indication of superiority or excellence. This evolution of a term such as
earnings quality to its current state of ambiguity is not unique. Schelling (1978) describes the
phenomenon:
Each academic profession can study the development of its own language. Some
terms catch on and some don't. A hastily chosen term that helps meet a need gets
initiated into the language before anybody notices what an inappropriate term it is.
People who recognize that a term is a poor one use it anyway in a hurry to save
thinking of a better one, and in collective laziness we let inappropriate terminology
into our language by default. Terms that once had accurate meanings become
popular, become carelessly used, and cease to communicate with accuracy.

Beneish Score
General
The call for an effective method to identify earnings manipulation has increased with each
exposed accounting scandal. This study reviews the Beneish probit model and its ability to
discriminate between manipulators and non-manipulators. The model is among the most
cited and widely used tools for assessing the likelihood of earnings management, which can be
explained by its ease of use and holistic nature. Beneish probabilities were calculated for
firms accused by the SEC of fraudulent activities and for a large set of control firms.
The first finding of this assessment is a narrowing gap in Beneish probabilities between
control firms and SEC firms, which indicates the diminished power of the model.
Subsequent analysis of macro-economic trends suggests that the model identified
non-manipulating firms as riskier during the bullish period of the late nineties. A second
analysis reveals that, contrary to expectations, the Beneish probabilities correlate significantly
with stock returns. These two outcomes appear contradictory. They imply either that
the model is less able to detect the large fraud cases of the recent decade but still detects
more subtle cases of fraud, or that, owing to its widespread use, the model has become a
self-fulfilling prophecy, such that stock prices of firms with a high Beneish probability
automatically fall. Finally, a newly estimated model shows that some variables have gained
importance in detecting fraud.
"Never call an accountant a credit to his profession; a good accountant is a debit to his
profession." This old quote, attributed to Charles Lyell (1797-1875), portrays the ongoing
discussion concerning the value relevance of accounting information and the disputed
reputation of accountancy as a highly ethical profession. The last decade, especially after
the burst of the internet bubble, has produced a new wave of accounting scandals.
Remarkably, it was not solely the start-up internet and IT firms that caused widespread
upheaval related to fraud: a major share of the latest accounting scandals has taken place
in large public companies. One of the most notorious examples is the debacle surrounding
Enron Corp. The self-acclaimed "world's leading energy company" had to file for
bankruptcy in late 2001 as the result of an accounting fraud involving the use of special
purpose entities to conceal the losses it suffered. The succeeding crash of the stock had a
devastating effect, as thousands of Enron employees and investors lost their savings and
pensions. Securities law historian Joel S. Seligman described the social and legal impact of
the collapse as "The most important corporate scandal of our lifetimes. It was one of the
immediate causes of the Sarbanes-Oxley Act, the governance reforms of the New York Stock
Exchange and NASD, and the most consequential reorientation of corporate behaviour in
living memory" (www.rochester.edu). Only a few months later the news was divulged that
WorldCom, the largest United States telecom company after AT&T, was involved in an
accounting fraud with asset inflation estimated at $11 billion. The accountancy profession
suffered another blow as Arthur Andersen LLP, the external auditor involved in both the
Enron and WorldCom scandals, had to surrender its licences and its right to practise before the
SEC.
As the consequences of such hefty frauds are felt throughout society at large,
scepticism towards accountancy and audited financial reports increases with each exposed
scandal. The conventional notion that accounting is more an art than a science is based on the
manner in which accountants define and resolve problems. The complexity of corporate
financial information, such as postretirement benefits, lease contracts and performance-based
salaries, creates a need for some degree of subjective valuation as opposed to rigid scientific
rules. As a consequence, managers have some discretion over accounting methods in order to
obtain a financial picture as close to reality as possible. This subjectivity, combined with the
lack of inside information on the side of external auditors and government agencies, can lead
to intentional misstatements in financial reports. Given that managers can choose between a
set of accounting policies and have discretion over how to measure certain transactions, it is
natural to expect that they choose a method maximizing their own utility or the market value of
the firm. Earnings management can be thought of in two complementary ways. First, it can
be thought of as opportunistic behaviour on management's part to maximize their own
utility at the expense of other stakeholders. A second, more positive, approach is to think of
it as a method used by management to utilize the flexibility in accounting policies to lower
the interest rates on debt, or to prevent heavy stock price fluctuations (Scott, 1997). It is
common practice to label this latter example income smoothing. While accounting policies
allow for some discretion over earnings, it is plainly illegal to manage earnings outside
the boundaries of these policies. To assure that the financial statements of large firms comply
with the standards and policies of lawmakers and standard setters, external auditors are hired
to give an opinion on the legitimacy of the firm's financial reports.
When auditing a firm, a wide variety of simple analytical procedures can be chosen to
detect abnormalities in the bookkeeping system. An example would be to compare a certain
cost to prior periods, or to check for large changes in the gross profit to sales ratio.
Ultimately, the objective is to detect material misstatements which may have been a result of
a bookkeeping mistake, an erroneous valuation or even an intentional misstatement. To
discover intentional misstatements through simple analytical procedures might be difficult,
as management is aware that large fluctuations in key relationships and ratios draw the
attention of auditors.
As the economy drives on trust and reliable financial information, stakeholders call
for methods to detect earnings manipulation. For more than a decade academic researchers
have been attempting to systematically model the relationship between certain financial
characteristics and earnings management in particular. One popular method is to model
discretionary accruals, initiated by Jones (1991), as a proxy for the ability managers have to
manipulate earnings. Jones's study estimates unexpected accruals in order to measure
the degree of managers' use of accounting discretion. In other words, the portion of net
income that is seen as abnormal is assumed to be the portion of net income over which
management has discretion. Higher discretion then equals a higher likelihood of earnings
manipulation. Numerous researchers have attempted to enhance this model with more
variables or a different approach. Other researchers have questioned the reliability and
power of the Jones approach, with its inevitable degree of error.
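To make the discretionary-accrual approach concrete, the sketch below illustrates one common way the Jones (1991) model is estimated: total accruals scaled by lagged assets are regressed on the inverse of lagged assets, the change in revenues and gross PP&E, and the regression residuals serve as the discretionary-accrual proxy. The data and variable names are illustrative placeholders, not values from any study discussed here.

import numpy as np
import statsmodels.api as sm

# Illustrative firm-year data; in the Jones (1991) specification all flow
# variables are scaled by lagged total assets.
rng = np.random.default_rng(1)
n = 500
inv_assets = rng.uniform(1e-4, 1e-2, n)     # 1 / lagged total assets
delta_rev  = rng.normal(0.05, 0.10, n)      # change in revenues / lagged assets
ppe        = rng.uniform(0.2, 0.8, n)       # gross PP&E / lagged assets
total_accruals = 0.3 * delta_rev - 0.05 * ppe + rng.normal(0, 0.03, n)

# Regress total accruals on the three regressors (no separate intercept,
# as in the original specification).
X = np.column_stack([inv_assets, delta_rev, ppe])
fit = sm.OLS(total_accruals, X).fit()

# Fitted values proxy for "normal" (non-discretionary) accruals; the residuals
# are the discretionary accruals used as the earnings-management proxy.
discretionary_accruals = fit.resid
print(fit.params)
print(discretionary_accruals[:5])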

An Introductory History:

A totally different approach was introduced by Beneish (1997), who constructed a
more holistic model that included ratios based on incentives for earnings manipulation and
ratios that, according to other research, have a positive relationship with earnings
manipulation. The model specifically calculates a percentage which is a proxy for the chance
of earnings manipulation. It is unique in this respect and allegedly outperforms conventional
accrual models. As its acceptance has grown over recent years, the Beneish model is now
being taught at universities across the globe and has become an accepted tool at investment
banks and governmental organizations. While other methods, such as the abnormal accrual
models, have attracted substantial criticism and are known to have limitations, such a critical
view of the Beneish model seems to be lacking. One critique came from Beneish himself, as he
published a revised and simplified version of his model only two years after the publication
of the initial paper. Simplifying the model to fewer variables also meant significantly
different parameters. Another point of concern is that both versions of the model
are based on old data. Could it be that the economic reality has changed so that revisions are
needed? For that reason, the question arises:

Is the Beneish model still able to detect earnings manipulation when applied to a U.S. sample
of companies in the period 1996-2004?

To investigate the validity of the Beneish model, a set of companies accused of fraudulent
activities by the SEC is compared to a general set of control companies. The data
used in this research differ from the Beneish research in that both samples a) contain the
latest financial information available instead of roughly 20-year-old data, and b) contain more
companies than the initial research. By excluding a large amount of available data for the control
sample, Beneish attempted to enhance the comparability between the manipulating and
non-manipulating samples. The control sample was constructed to mirror the industry
classifications of the rather small sample of fraudulent firms. Additionally, control firms were
limited in number for undisclosed reasons. This increased the internal validity of the research,
but at the cost of lower external validity. The research at hand recognizes that government
agencies, portfolio managers and other stakeholders alike can only make better decisions when
such a tool is based on a realistic, externally valid model.

When applying a regression analysis in order to establish a relationship between a
dependent variable and one or more independent variables, it is common to use Ordinary
Least Squares (OLS) estimation. This is, however, not always the most sound method. A
dependent variable that is qualitative in nature might be difficult to interpret as a quantitative
variable. For example, the Beneish model needs to distinguish between manipulators and
non-manipulators, in which case the dependent variable is described as a dummy variable (0
for a non-manipulator, 1 for a manipulator) and the generated outcome of the model as an
earnings manipulation potential index (Wiedman, 1999). There are different methods to
ensure that predicted values fall in the 0-1 range. Beneish used a probit model to calculate
the probability of earnings manipulation for a given financial report, hence the often-cited
designation Beneish probit model.
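As an illustration of this estimation approach (not a reproduction of Beneish's data or variables), the sketch below fits a probit model to a simulated binary manipulator/non-manipulator indicator using the statsmodels library; all variable names and values are placeholders.

import numpy as np
import statsmodels.api as sm

# Simulated firm-year data: three placeholder ratios per firm and a dummy
# dependent variable (1 = manipulator, 0 = non-manipulator).
rng = np.random.default_rng(0)
ratios = rng.normal(1.0, 0.3, size=(200, 3))
latent = ratios @ np.array([1.5, 0.8, 0.5]) + rng.normal(size=200)
manipulator = (latent > 2.9).astype(int)

X = sm.add_constant(ratios)                 # add an intercept term
probit_fit = sm.Probit(manipulator, X).fit(disp=False)

print(probit_fit.params)                    # estimated coefficients
print(probit_fit.predict(X)[:5])            # fitted probabilities, bounded in (0, 1)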
Beneish identified 64 firms that, in the period 1987-1993, were known to be
earnings manipulators. The sample was assembled from firms subject to enforcement
actions by the SEC or firms that were discussed in the media as having manipulated earnings.
The control sample totalled 1,989 firms that resembled the violators in that they had large
positive discretionary accruals but had not been judged to have violated GAAP (classified as
aggressive accruers). In addition, firms with large positive discretionary accruals are likely
to suffer negative abnormal returns (Sloan, 1996), a trait shared by both the violators and the
aggressive accruers. It is because of these similarities that a model able to distinguish
between the two samples is stronger and more conservative.

Beneish Indicators:
The final Beneish (1997) probit model is based on variables that either tend to
capture the likelihood of detection of distorted financial data, or the incentives and
abilities of managers to violate GAAP. The first set of variables, assessing the likelihood of
detection, is presented below (a computational sketch follows the list):

1. Days Sales in Receivables Index (DSRI)- Measured as the ratio of days in receivables
in the first year that the manipulation is discovered (year t) to the same measure in
year t-1. It measures whether the receivables and revenues are in balance in two
successive years: A large increase could be the result of a change in credit policy, but
a disproportional increase might suggest revenue inflation. As a consequence, it is
expected that a large increase in receivables increases the likelihood of earnings
manipulation.

2. Gross Margin Index (GMI)- Measured as the ratio of gross margin (sales minus the cost
of goods sold, divided by sales) in year t-1 to the corresponding measure in year t.
A GMI above 1 indicates a decline in gross margins which in turn is related to poorer
business prospects and a higher probability of manipulation.

3. Asset Quality Index (AQI)- Calculated as the proportion of non-current assets other
than property, plant and equipment to total assets. Assessing this ratio over time allows
the user to identify changes in asset realization risk, and hence the probability that a
firm capitalizes expenses on the balance sheet in order to defer costs. An increase in
this measure is predicted to increase the probability of manipulation.

4. Depreciation Index (DEPI)- This variable is computed as the rate of depreciation in
year t-1 divided by the depreciation rate in year t, with the rationale that lower
depreciation expenses result in more discretion over income and thus a higher
probability of manipulation.

5. Sales General and Administration Index (SGAI)- Since the relationship between
SG&A and sales is known to be quite static, it is alarming when SG&A expenses
increase without a simultaneous increase in sales. Calculated as the ratio of SG&A to
sales in year t relative to the corresponding measure in year t-1, it is expected that a
higher SGAI increases the likelihood of manipulation.

6. Total Accruals to Total Assets (TATA)- A ratio commonly used to gauge the degree
to which earnings are cash-based. An increasing degree of accruals as part of total
assets would indicate a higher chance of manipulation.
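As a rough illustration of how these six detection-oriented indices can be computed from two consecutive years of financial statements, a minimal sketch follows. The field names and the operationalization of total accruals (income from continuing operations minus operating cash flow) are assumptions made for illustration, not definitions taken from Beneish's papers.

from dataclasses import dataclass

@dataclass
class Financials:
    # Illustrative statement items for one fiscal year (field names are assumptions).
    sales: float
    cogs: float
    receivables: float
    current_assets: float
    ppe: float
    total_assets: float
    depreciation: float
    sga: float
    income_cont_ops: float
    cash_from_ops: float

def detection_indices(cur: Financials, prev: Financials) -> dict:
    """Year-over-year indices for year t (cur) relative to year t-1 (prev)."""
    dsri = (cur.receivables / cur.sales) / (prev.receivables / prev.sales)
    gross_margin = lambda f: (f.sales - f.cogs) / f.sales
    gmi = gross_margin(prev) / gross_margin(cur)           # > 1: deteriorating margins
    asset_quality = lambda f: 1 - (f.current_assets + f.ppe) / f.total_assets
    aqi = asset_quality(cur) / asset_quality(prev)
    dep_rate = lambda f: f.depreciation / (f.depreciation + f.ppe)
    depi = dep_rate(prev) / dep_rate(cur)
    sgai = (cur.sga / cur.sales) / (prev.sga / prev.sales)
    # One common operationalization of total accruals (an assumption here):
    tata = (cur.income_cont_ops - cur.cash_from_ops) / cur.total_assets
    return {"DSRI": dsri, "GMI": gmi, "AQI": aqi,
            "DEPI": depi, "SGAI": sgai, "TATA": tata}

These functions only sketch the arithmetic; a real application would pull the statement items from a database and handle missing or zero denominators.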

The following variables are used as proxies for the ability and incentives to violate GAAP:
1. Capital Structure (CS)- Higher leverage is an explicit incentive for managers to
violate GAAP, as they seek to reduce the cost of capital and avoid debt covenant
violations.
2. Prior Market Performance (PMP)- Declining stock prices create incentives to
inflate earnings and violate GAAP.
3. Time Listed (TL)- It is noted that younger firms are more likely to experience
financial distress. For the same reason, the SEC imposes closer scrutiny on young
firms and perceives them as riskier. Also, earnings manipulation might be very
rewarding for a young firm in light of an IPO.


4. Sales Growth Index (SGI)- Growth does not necessarily lead to manipulation. It is,
however, hypothesised that high-growth firms inherently face more inducements for
earnings manipulation, since such firms encounter steep drops in stock price when
stagnating growth becomes public.
5. Positive Accrual Dummy (PAD)- A dummy variable denoting whether or not
accruals were positive in both the current and prior year. Accruals need to reverse
sooner or later, meaning that firms displaying positive accruals year after year could
be inflating earnings and attempting to avoid accrual reversals.
6. Declining Cash Sales Dummy (DCSD)- A dummy variable denoting whether or not
sales in the current year are lower than in the previous year.
The results of the probit estimation of the model are combined into an earnings manipulation
index (M-score), computed with the following linear relation:
(5) Manipulation index = -2.224 + 0.221(DSRI) + 0.102(GMI) + 0.07(AQI) + 0.062(DEPI)
                         + 0.198(SGAI) - 2.415(TATA) + 0.040(SGI) - 0.684(PMP) - 0.001(TL)
                         + 0.587(CS) + 0.421(PAD) - 0.413(DCSD)

The end users, such as auditors, regulators and investors, are able to calculate the M-score
fairly easily using publicly available information, the premise being that the higher the score,
the higher the probability of manipulative behaviour. The mean probability was 12.3 percent
for GAAP violators versus 2.6 percent for control firms. It should be noted that, while the
average probabilities estimated through the model are rather low given the aforementioned
conservative nature of the tests, the model does a good job of distinguishing between
violators and non-violators.
Two years after the publication of the initial article, Beneish published another article
containing a simplified version of the model. The new model excluded four variables and was
based on a sample of industry-matched firms instead of aggressive accruers. The framework
and the calculation of the remaining eight variables are exactly the same as in the older
model. The remainder of this study will focus on this newer version since it is supposedly
better, simpler and more widely used. The simplified model looks as follows:
(6) Manipulation index = -4.840 + 0.920(DSRI) + 0.528(GMI) + 0.404(AQI) + 0.892(SGI)
                         + 0.115(DEPI) - 0.172(SGAI) + 4.679(TATA) - 0.327(LVGI)
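For illustration, the eight-variable index of equation (6) can be evaluated directly once the ratios are available. SGI (sales in year t over sales in year t-1) and LVGI (the leverage index) are not computed in the earlier sketch and are supplied by the caller; this is a sketch of the arithmetic only.

def m_score(dsri: float, gmi: float, aqi: float, sgi: float,
            depi: float, sgai: float, tata: float, lvgi: float) -> float:
    """Eight-variable manipulation index of equation (6); higher values
    indicate a higher likelihood of earnings manipulation."""
    return (-4.840 + 0.920 * dsri + 0.528 * gmi + 0.404 * aqi + 0.892 * sgi
            + 0.115 * depi - 0.172 * sgai + 4.679 * tata - 0.327 * lvgi)

# Example with neutral year-over-year indices (all equal to 1) and modest accruals.
print(m_score(1.0, 1.0, 1.0, 1.0, 1.0, 1.0, 0.03, 1.0))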

The focus on the Beneish model over accrual models (most notably the modified Jones model,
Dechow et al., 1995) and time-series models is warranted for a number of reasons.
Accrual models are of limited use when assessing firms with extreme financial performance
and need a long range of data, creating a cost disadvantage. Also, the Beneish model
considers variables related to both the detection of and the incentives for fraud, and it allows
the user to assess the different aspects of a firm's performance simultaneously instead of
reviewing them in isolation.

So far, this section has provided an overview of the most important developments in the
business failure and fraud detection literature over the last decades. Beneish (1997)
recognised that a non-accrual model could lead to more robust results when attempting to
detect fraud. It remains to be seen, however, whether the Beneish probit model retains its
detective power in light of the numerous regulatory and economic changes since his research
came to print. This rationale will be discussed more extensively in the next section, leading to
the hypotheses of the study at hand.

Application of the Beneish Model


Case study of Comptronix Corporation
