Do managers manage earnings to ‘just meet or beat’ analyst forecasts? Evidence from Australia


Journal of International Accounting, Auditing and Taxation 17 (2008) 79–91



Ahsan Habib a, Mahmud Hossain b,∗

a Department of Accounting, Faculty of Business, Auckland University of Technology (AUT), Private Bag 92006, Auckland 1142, New Zealand
b Division of Accounting, Nanyang Business School, Nanyang Technological University, Nanyang Avenue, Singapore 639798, Singapore

Article info

JEL classification: M40, M41

Keywords: Analyst earnings forecasts; Earnings management; Australia

Abstract

This paper examines whether managers manage earnings to ‘just meet or beat’ analyst forecasts in Australia. Previous Australian studies on benchmark-beating have focused on loss avoidance and small earnings increases as benchmarks [Coulton, J., Taylor, S., & Taylor, S. (2005). Is ‘benchmark beating’ by Australian firms evidence of earnings management? Accounting and Finance, 45, 553–576; Holland, D., & Ramsay, A. (2003). Do Australian companies manage earnings to meet simple earnings benchmarks? Accounting and Finance, 43, 41–62]. This paper extends this earlier research on benchmark-beating in Australia by incorporating analyst forecasts as an important benchmark. Using three different models of unexpected accruals as proxies for earnings management, this study did not find any significant difference between the mean and median unexpected accruals of the “just meet or beat” group as against the “just miss” group. Furthermore, for a long period of time (1997–2002), the proportion of Australian firms ‘just meeting or beating’ the analyst forecasts benchmark increased, although such increase was not statistically significant.

© 2008 Elsevier Inc. All rights reserved.

1. Introduction

The purpose of this paper is to examine whether managers manage earnings to ‘just meet or beat’ analyst forecasts in Australia. Previous Australian research on ‘benchmark-beating’ used loss avoidance and small increases in earnings as the earnings benchmarks, but not analyst forecasts. For instance, Holland and Ramsay (2003) found that there were abnormally large (small) numbers of observations for the interval immediately above (below) zero for both scaled earnings levels and scaled earnings changes variables. But Holland and Ramsay (2003) did not examine whether managers manage earnings to achieve those benchmarks. Coulton, Taylor, and Taylor (2005) tested that premise but failed to discover any evidence of earnings management. They suggested, therefore, that discontinuity around zero does not necessarily indicate earnings management designed to meet or beat earnings targets. However, their use of the last year’s earnings benchmark is questionable, since analysts have been shown to incorporate a broader set of information and to be timelier in their forecasts than time-series models (Ramnath, Rock, & Shane, 2008). The present paper fills this gap in earlier research on benchmark-beating in Australia by incorporating analyst forecasts as an important benchmark.

Recent research suggests that meeting or beating analyst forecasts has become the most significant benchmark for managers (Brown & Caylor, 2005). This could result from the markets’ increasing focus on just meeting analysts’ expectations (because these forecasts are becoming more accurate and precise) or from increases in the number of analysts and in the media attention paid to their forecasts.

∗ Corresponding author. E-mail address: amhossain@ntu.edu.sg (M. Hossain).

1061-9518/$ – see front matter © 2008 Elsevier Inc. All rights reserved. doi:10.1016/j.intaccaudtax.2008.07.004


Bartov, Givoly, and Hayn (2002) found that firms that ‘just meet or beat’ current analyst earnings expectations enjoyed a higher stock return. Similar results were reported by both Kasznik and McNichols (2002) and Lopez and Rees (2002). Failure to meet or beat forecasts was associated with significant adverse consequences. Matsunaga and Park (2001) found a significant negative incremental effect on the CEO’s bonus where managers failed to meet quarterly earnings forecasts. Given the significant benefits (costs) associated with just meeting (failing to meet) the analyst forecasts benchmark, managers are no longer passive in the earnings game. Rather, they are actively trying to win the game by altering reported earnings and/or influencing analyst expectations. Meeting benchmarks boosts management’s credibility by meeting stakeholders’ expectations and avoiding costly litigation that could potentially be triggered by unfavorable earnings surprises (Bartov et al., 2002).

The evidence of earnings management to ‘just meet or beat’ analyst forecasts comes primarily from studies conducted in the United States of America (hereafter US). Whether this finding can also be generalized to the Australian reporting environment needs to be assessed in light of at least two questions, namely whether (a) investors value analyst forecasts as a gauge of managerial performance; and whether (b) just meeting or beating analyst forecasts results in a value premium for Australian companies. There is a dearth of evidence in the Australian context to answer these questions. The available evidence has investigated the factors associated with forecast accuracy and bias, but there is hardly any evidence on the information content of analyst forecast errors.

Brown, Clarke, How, and Lim (2002), using consensus forecast data from 1985 to 1998, found that analyst dividend forecasts were, on average, more accurate (and less biased) than their corresponding earnings forecasts. Brown, Taylor, and Walter (1999) reported that analyst forecast errors actually increased after the passage of the Corporations Law Reform Act 1994. Hope (2003), in an international study, found that strong enforcement regimes lead to higher forecast accuracy. He found that mean forecast accuracy was highest in Australia, although Australia ranked well below other common law countries in terms of enforcement mechanisms. Basu, Hwang, and Jan (1998), on the other hand, reported that the average forecast error is 1.53% higher in Australia than in the US. The findings from the present paper could provide indirect evidence of the value of analyst services in the Australian context. It is expected that managers will engage in costly earnings management to ‘meet or beat’ analyst forecasts only when they find it beneficial to do so.

Earnings management to ‘just meet or beat’ analyst forecasts is a phenomenon that is strongly shaped by the business reporting environment. Reporting environments in the US and Australia differ in some important respects, and the threat of litigation faced by corporate managers is an important attribute of such differences. In particular, consistent evidence of earnings management to ‘just meet or beat’ analyst forecasts in the US has been attributed to, among other factors, the excessive threat of litigation. High-tech stocks, in particular, have been extremely vulnerable when they fall below the analyst forecasts benchmark (Skinner & Sloan, 2002). However, the lack of legal class-action privileges, and the entitlement of successful defendants to cost recovery from the plaintiff (Taylor & Taylor, 2003), has discouraged investors from suing corporate managers in cases of poor firm performance in Australia.

Furthermore, analyst following of Australian companies has not been intensive. For the year 2004, forecast data for only 458 companies were available in the I/B/E/S, which represented a mere 30% of the total number of domestic listed companies.1

Additionally, US-based research on ‘benchmark-beating’ has strongly established that stock option-based executive compensation schemes were among the most important determinants of managerial incentives to ‘just meet or beat’ analyst forecasts (Cheng & Warfield, 2005). The extent to which these findings also hold in Australia is not clear because of the lower analyst coverage of Australian companies and a lack of evidence on the usefulness of long-term incentive packages.

In an international context, O’Brien (1988) raised concerns about whether analysts’ ability to forecast earnings is important outside the US. She argued that financial statements in some countries are prepared to satisfy legal requirements, rather than to inform investors. However, in Australia this consideration is less relevant because of the separation of financial from tax reporting. Australia belongs to a common law tradition where investors rely on publicly disclosed accounting information to make investment decisions (Ball, Kothari, & Robin, 2000). Common law countries provide analysts with an incentive to engage in private information acquisition, because their service is demanded by the market. Also, the high level of company-specific disclosure in common law countries makes forecasting easier. These factors suggest that investors might use analyst forecasts to evaluate managerial performance. Furthermore, recent class-action lawsuits against corporations such as AWB and Multiplex may encourage Australian investors to take legal action against managers who fail to meet or beat analyst forecasts. The contrasting perspectives on the value of analyst services in Australia outlined here, therefore, justify an empirical examination of whether managers manage earnings to ‘just meet or beat’ analyst forecasts.

Using analyst forecast data from the I/B/E/S for 1995 to 2004, and employing three different models of discretionary accruals (hereafter DACCR) measurement, this study failed to find any significant difference in DACCR levels between the “just meet or beat” and the “just miss” groups. Additional analysis revealed that there was no discernible increase in the managerial propensity to ‘just meet or beat’ analyst forecasts in Australia over the sample period. The paper proceeds as follows: the next section presents the background for the study and a brief description of the related literature. Section 3 explains the research design issues. The sample selection procedure is discussed in Section 4. Section 5 provides the substantive test results and Section 6 concludes.

1 A total of 1515 companies were listed on the Australian Stock Exchange at the end of 2004.


2. Related literature and background

Despite the prominence of cash flows in asset valuation models and in the Generally Accepted Accounting Principles (hereafter GAAP), investors and analysts use financial data to predict earnings. One of the reasons for the heavy emphasis on earnings, especially earnings per share (hereafter EPS), is that analysts evaluate a firm’s progress based on whether a company hits a consensus EPS (Graham, Harvey, & Rajgopal, 2005).

Because analysts have been shown to incorporate more relevant information and to be timelier in their forecasts than time-series models (Ramnath et al., 2008), investor focus on analyst forecasts to evaluate firm performance is appropriate. Academic research provides strong evidence of positive valuation consequences associated with meeting or just beating consensus analyst forecasts (Bartov et al., 2002; Kasznik & McNichols, 2002; Lopez & Rees, 2002). Furthermore, this trend has become more intense since the mid-90s. Brown and Caylor (2005) found that late in their study period (1996–2001) managers were more likely to avoid negative earnings surprises than losses and earnings decreases, which is consistent with Brown (2001). Brown and Caylor (2005) attributed this finding to the markets’ increasing focus on meeting analysts’ expectations in the later study period (1996–2001).

The theory behind investor reliance on simple heuristics, such as just meeting or beating analyst forecasts, to evaluate managerial performance is grounded in Kahneman and Tversky’s (1979) ‘prospect theory’. This theory postulated that decision-makers derive value from gains and losses with respect to a reference point, rather than from absolute levels of wealth. Thus, individuals derive the highest value when wealth moves from a loss to a gain relative to a reference point (consensus analyst forecasts in the present case). Drawing on this theory, Burgstahler and Dichev (hereafter BD) (1997) showed that managers manage earnings to meet or beat earnings benchmarks. In a related study, Burgstahler and Eames (2003) showed that analysts made far more zero forecasts relative to realizations, whereas, for small positive earnings values, realized earnings outnumbered forecast earnings. With respect to the incentives for earnings management to meet or beat forecasts, managers with high equity incentives were found to sell more after just meeting or beating consensus forecasts (McVay, Nagar, & Tang, 2006; Cheng & Warfield, 2005). Additionally, Matsumoto (2002) found that firms with a greater reliance on implicit claims with their stakeholders exhibited a greater propensity to avoid negative earnings surprises.

Extant research also documents a number of devices used by managers to ‘just meet or beat’ analyst forecasts. Dhaliwal, Gleason, and Mills (2004) argued that substantial discretion in estimating tax expenses could motivate managers to use that discretion to meet or beat forecasts, and documented that firms decreased their annual effective tax rate from the third quarter to the fourth quarter as earnings (less tax expenses) fell short of the consensus forecast.2 Moehrle (2002) showed that managers were more likely to reverse restructuring charges when earnings before such reversals fell short of analyst forecasts. As another example of earnings management devices, Hribar, Jenkins, and Johnson (2006) showed that managers engaged in EPS-increasing stock repurchases when consensus analyst forecasts would not have been met without the stock repurchase transaction. Management could use accruals management, forecast guidance, real activities manipulation or classification shifting (McVay, 2006)3 as well as a combination of these techniques to manage earnings in order to meet or beat analyst forecasts. Roychowdhury (2006) reported that managers engaged in price discounts to temporarily increase sales, overproduction to report lower cost of goods sold, and reduction of discretionary expenditures to improve earnings in order to meet forecasts. With regard to forecast guidance, Richardson, Teoh, and Wysocki (2004) showed that the earnings-guidance game became most pronounced when firms and insiders became net sellers after an earnings announcement.

Evidence of earnings management to ‘just meet or beat’ analyst forecasts, as discussed above, comes primarily from the US.4 However, the extent to which this finding applies in the Australian context is an empirical issue, since the institutional environments of the two countries differ in many ways. Firstly, the threat of litigation in cases of poor performance is much higher in the US than in Australia. The US legal environment is characterized by the American rule, which requires each party to bear its own legal costs, thereby encouraging litigation. In contrast, the English rule requires losers at trial to pay the winner’s legal fees, thereby reducing the frequency of frivolous lawsuits (Brown & Higgins, 2001). Australia follows the English rule and is “characterized by the absence of class-action privileges, the prohibition of contingency based litigation, the entitlement of successful defendants to cost recovery from the plaintiff, and the determination of such costs by the judiciary (rather than jury). . .” (Taylor & Taylor, 2003, 8). However, class-action suits have been brought recently against some companies such as AWB and Multiplex. For example, Multiplex Limited has been accused of breaching the continuous disclosure provisions of the Australian Stock Exchange’s (ASX) Listing Rules and the Corporations Act and/or engaging in misleading or deceptive conduct, by not properly disclosing to security-holders and the ASX the full story regarding material cost increases and delays in the construction of a project (Multiplex Class Actions, 2007).

2 Phillips et al. (2003) examined whether managers used deferred tax expense as an earnings management tool to meet or beat analyst forecasts. They, however, failed to find any such evidence.

3 This technique shifts expenses from core expenses to special items and thereby overstates core earnings. McVay (2006) finds that managers use the classification shifting technique to meet the analyst forecast earnings benchmark (although she did not directly test the just-meet versus just-miss phenomenon).

4 Brown and Higgins (2001) compared the distribution of earnings surprises in the US with those of 12 other countries (Australia is one of the sample countries) and found consistent evidence that US managers are more likely to meet or beat the analyst forecasts threshold. Increasing focus on stock and option compensation, as well as a concurrent increase in corporate litigation, was hypothesized as being responsible for this result.


Secondly, a significant difference between the US and Australian institutional environments that might affect the propensity of managers to ‘just meet or beat’ analyst forecasts is the dominance of equity-based compensation schemes in the US. For example, by 1994, options had become a major component of CEO compensation, with 70% of CEOs receiving new option grants (Core, Guay, & Larcker, 2003). The increasing use of equity-based compensation had created incentives for managers to inflate reported earnings to meet or beat analyst forecasts (Cheng & Warfield, 2005). In Australia, Coulton and Taylor (2002) revealed that awarding executive stock options was common but not very systematic. Matolcsy and Wright (2007) reported that around one third of the CEOs of Australia’s largest firms did not receive any equity-based compensation and, therefore, the relationship between equity-based compensation and earnings management in Australia might be weak.

Previous Australian research on ‘benchmark-beating’ used loss avoidance and small increases in earnings as the earnings benchmarks, but not analysts’ forecasts (Coulton et al., 2005; Davidson, Goodwin-Stewart, & Kent, 2005; Holland & Ramsay, 2003).5 This paper makes several refinements to prior Australian studies that generate new insights into the earnings management pattern of Australian firms. Firstly, this study uses consensus analyst earnings forecasts as the benchmark against which to compare the earnings management patterns of firms that “just meet or beat” with firms that “just miss” this benchmark. Secondly, by using such earnings forecasts as a beatable benchmark, this study also provides a bridge between (a) Holland and Ramsay (2003), who investigated the empirical relationship between earnings management and the propensity of firms to report positive profits and sustain previous performance, and (b) Coulton et al. (2005), who used time-series forecasts as benchmarks to investigate earnings management, rather than analyst earnings forecasts.

3. Research design issues

Empirical investigation into whether managers manage earnings to meet or beat analyst forecasts requires discussion of the following variables.

3.1. Bin width

The choice of an interval width must be balanced. If the width is too small, then spurious fine structure becomes visible; on the other hand, if it is too large, then essential detail is masked (Holland, 2003). There are statistical approaches to calculating interval width, as documented by Silverman (1986) and Scott (1992). According to the formula they proposed, interval width is positively correlated with the variability of the data and negatively correlated with the number of observations. An interval width of 0.01 was chosen for the present study following Scott’s (1979) formula of 3.5σn^(−1/3), where σ is the standard deviation of the sample and n is the total number of observations. However, sensitivity analysis used alternative bin widths of 0.005 as well as 0.02.
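As a minimal illustration of the bin-width rule just described, the sketch below computes Scott’s (1979) width 3.5σn^(−1/3). The data are simulated and the function name is illustrative rather than taken from the paper.

```python
import numpy as np

def scott_bin_width(forecast_errors):
    """Scott's (1979) rule of thumb: width = 3.5 * sigma * n**(-1/3)."""
    fe = np.asarray(forecast_errors, dtype=float)
    sigma = fe.std(ddof=1)           # sample standard deviation of forecast errors
    n = fe.size                      # number of observations
    return 3.5 * sigma * n ** (-1.0 / 3.0)

# Simulated per-share forecast errors (not the paper's data):
rng = np.random.default_rng(0)
simulated_fe = rng.normal(loc=0.0, scale=0.035, size=1947)
print(round(scott_bin_width(simulated_fe), 3))   # close to the 0.01 width used here
```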

3.2. Test statistics

BD (1997, 103) developed a test statistic for determining the statistical significance of the hypothesized avoidance of earnings decreases and losses. The only assumption of the test is that “under the null hypothesis of no earnings management, the cross-sectional distributions of earnings changes and earnings levels are relatively smooth. Operationally, [the] definition of smoothness is that the expected number of observations in any given interval of the distribution is the average of the number of observations in the two immediately adjacent intervals.”6

Notationally,

EM = (AQi − EQi)/SDi

where EM is earnings management, AQi and EQi are the actual and expected numbers of observations for interval i, where the interval is immediately to the right of zero; and SDi is the estimated standard deviation of the difference between the actual and expected numbers of observations around interval i. In particular,

EQi = (AQi−1 + AQi+1)/2 and SDi = [N·pi(1 − pi) + (1/4)·N·(pi−1 + pi+1)(1 − pi−1 − pi+1)]^(1/2)
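A small sketch of the standardized difference defined above, assuming the histogram counts are supplied as a plain list; the bin counts in the example are hypothetical.

```python
import numpy as np

def bd_test_statistic(counts, i):
    """BD (1997) standardized difference (AQ_i - EQ_i) / SD_i for bin i."""
    counts = np.asarray(counts, dtype=float)
    n_total = counts.sum()
    p = counts / n_total                              # proportion of observations per bin
    expected = (counts[i - 1] + counts[i + 1]) / 2.0  # EQ_i: average of the two neighbours
    variance = (n_total * p[i] * (1.0 - p[i])
                + 0.25 * n_total * (p[i - 1] + p[i + 1])
                * (1.0 - p[i - 1] - p[i + 1]))
    return (counts[i] - expected) / np.sqrt(variance)

# Hypothetical forecast-error bin counts; index 2 is the bin just right of zero:
print(round(bd_test_statistic([90, 110, 260, 150, 120], 2), 2))
```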

The BD (1997) test statistic, however, is likely to produce unreliable results when the peak of the distribution lies adjacent to the benchmark interval (Coulton et al., 2005, 558).

Degeorge, Patel, and Zeckhauser (1999) developed an alternative test statistic that calculates the mean and standard deviation by measuring the change in the slope of the near intervals, excluding the interval to be tested.

5 Davidson et al. (2005) found that the existence of an audit committee was negatively correlated with small changes in the earnings threshold. However, neither board nor audit committee independence was associated with small increases in earnings.

6 Besides calculating a test statistic for the two immediately adjacent intervals, they also considered test statistics for four adjacent intervals and the average of the two closest intervals apart from the two immediately adjacent ones. These alternatives produced similar results (footnote 5 of BD).


However, Holland and Ramsay (2003, 49) raised concerns with that approach, noting that the Degeorge et al. (1999) approach “…exacerbates the linear assumption by extending the number of intervals required to calculate a test statistic. . . By observing a normal curve, it is obvious that the wider the cross-section taken the less likely it is to represent a linear function”. Therefore, the present study used the BD (1997) test statistic to evaluate the significance of the discontinuity around the zero forecast error benchmark.

3.3. Scaling of the variables

Although BD (1997) showed that the discontinuity around zero is caused by managerial earnings management behavior, Durtschi and Easton (2005) argued that the earnings discontinuity around zero is not evidence of earnings management; rather, it is caused by the “deflation” of earnings levels and changes. They showed that market prices of profit firms differed markedly from those of loss firms, and hence placed a one cent per share loss in an earnings/price interval further from zero than a one cent per share profit. They also showed that the discontinuity in the frequency distribution of the I/B/E/S annual actual EPS, and changes in the EPS, was due to the fact that the proportion of firms with small losses that are followed by I/B/E/S was much smaller than the proportion of firms with small profits. With respect to analyst forecast error distributions, the authors showed that forecast errors tended to be much greater when they were optimistic than when they were pessimistic.

One way of circumventing this problem is to scale the analyst forecasts by the number of outstanding shares, as is done in the present paper. The share-scaling of forecasts avoids possible effects from the differential pricing of profit and loss firms caused by price-scaling or by any other scalar closely correlated with price, such as total assets (Coulton et al., 2005, 572).7

3.4. Discretionary accruals

Dechow, Richardson, and Tuna (2003) compared four different versions of DACCR models and found that the Modified Jones (MJ, 1995) model augmented with lagged total accruals (LTACC) and sales growth (GROWTH) performed the best in terms of explanatory power. Accordingly, the present study uses MJ (1995) as the base model, then a second model that augments MJ (1995) by including LTACC, and finally a new model of DACCR developed by Ball and Shivakumar (hereafter BS, 2006).8

Although the Jones (1991) and the MJ (1995) models have been used extensively in earnings management research, a major shortcoming of these models is the linearity assumption inherent in their development (BS, 2006). BS (2006) used a non-linear accruals model which provides the highest explanatory power for total accruals compared with the other extant models. The purpose of employing the BS (2006) model is twofold. Firstly, compared with the linear versions of accruals models, the non-linear accrual models are superior because of the asymmetric gain and loss recognition properties identified by Basu (1997). Secondly, extant research showed that reporting practices in Australia are conservative in the sense of timelier recognition of economic losses than gains (Ball et al., 2000; Ball, Robin, & Sadka, 2008; Bushman & Piotroski, 2006; Taylor & Taylor, 2003).

The regression specifications of the three DACCR models used in subsequent analysis are given below:

TACC = α0 + α1(ΔREV − ΔREC) + α2PPE + ε, (1)

TACC = β0 + β1(ΔREV − ΔREC) + β2PPE + β3LTACC + ε (2)

TACC = γ0 + γ1ΔREV + γ2PPE + γ3CFO + γ4DUM + γ5CFO*DUM + ε (3)

Eq. (1) is the MJ (1995) model, Eq. (2) is the MJ (1995) model augmented by LTACC and, finally, Eq. (3) is the BS (2006) non-linear accruals model. TACC is total accruals, calculated as the difference between net income before abnormal items in year t and cash flow from operations (CFO) in year t; ΔREV is the change in sales revenue from period t − 1 to period t; ΔREC is the change in accounts receivable from period t − 1 to period t; and PPE is the gross value of property, plant and equipment. All variables were scaled by the opening value of total assets. LTACC is the lagged value of total accruals.

Accruals are less persistent than cash flow as a result of the way they reverse, and so the inclusion of lagged total accruals should help capture the predictable component. DUM is a binary variable taking the value of 1 if CFOt < 0, and zero otherwise.

7 Recent evidence, however, questions the Durtschi and Easton (2005) findings. Jacob and Jorgensen (2007) tested the BD (1997) methodology using earnings measured over the close of the first three quarters of the fiscal year, on the assumption that earnings measured over these periods were less likely to suffer from the effects of managerial income manipulation. If the pattern of the BD (1997) findings were attributable to earnings management at fiscal year end, then this should be evident only for fiscal year-end earnings but not for the other three periods. Alternatively, if the patterns were spurious and induced by scaling, then similar patterns should be observable for the three alternative periods. Their empirical test supported the former conjecture. In another study, Ayers et al. (2006) found that the significant positive association between the DACCR proxies and firm profits existed for more than the expected number of comparisons of adjacent firm-year bins with segregation based on pseudo profit targets. However, for the actual analyst forecasts benchmark, earnings management seems to be the most plausible explanation for the unusually large number of observations lying immediately to the right of the zero forecast error.

8 The MJ (1995) model augmented with GROWTH, as well as the forward-looking version of DACCR proposed by Dechow et al. (2003), have not been used because implementation of these models significantly reduces the sample observations.


It is expected that γ3 < 0 due to the noise reduction role of accruals (Dechow, 1994; Dechow, Kothari, & Watts, 1998), and γ5 > 0 because of the asymmetric loss recognition property of accounting information.
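The following is a minimal sketch of how Eq. (1) can be estimated cross-sectionally, with the residual serving as the DACCR proxy. The column names, the helper function, and the grouping code are illustrative assumptions; they are not taken from the paper.

```python
import numpy as np
import pandas as pd

def modified_jones_daccr(group):
    """Estimate Eq. (1) by OLS for one industry-year; residuals are DACCR."""
    y = group["tacc"].to_numpy()                                 # total accruals, scaled
    X = np.column_stack([
        np.ones(len(group)),                                     # alpha_0
        group["d_rev"].to_numpy() - group["d_rec"].to_numpy(),   # alpha_1 term
        group["ppe"].to_numpy(),                                 # alpha_2 term
    ])
    coefs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return pd.Series(y - X @ coefs, index=group.index)           # unexpected accruals

# Sketch of the industry-year estimation with the paper's minimum of eight observations:
# daccr = (data.groupby(["industry", "year"])
#              .filter(lambda g: len(g) >= 8)
#              .groupby(["industry", "year"], group_keys=False)
#              .apply(modified_jones_daccr))
```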

4. Sample

Analyst forecast data were retrieved from the I/B/E/S for the period 1995–2004. Analyst forecast error was calculated as the difference between actual earnings as reported in the I/B/E/S and the mean consensus analyst forecast immediately before the earnings announcement. Financial statement data were derived from the Aspect database. A total sample of 4379 annual forecast observations was available from 1995 to 2004. However, the final sample reduced to 1947 firm-year observations, primarily because of (a) missing actual earnings numbers in the I/B/E/S database (n = 936) and (b) missing observations in cross-matching forecast data with DACCR values (n = 1117). A further 379 firm-year observations in financial institutions industries were excluded because of the unique regulatory environment of these industries. The sample selection process is outlined in Panel A of Table 1. For consistency with earlier studies, the regression specifications (1)–(3) were estimated for each ASX two-digit industry using at least eight observations on an annual basis from 1994 to 2004.
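To make the sample construction concrete, a sketch of how per-share forecast errors could be formed and firm-years assigned to the “just meet or beat”, “just miss” and “all others” groups follows. The exact interval boundaries and the names used are assumptions for illustration.

```python
import pandas as pd

def classify_benchmark_groups(actual_eps, forecast_eps, width=0.01):
    """Forecast error = actual EPS minus consensus EPS; classify by narrow bins."""
    fe = pd.Series(actual_eps, dtype=float) - pd.Series(forecast_eps, dtype=float)
    group = pd.Series("all others", index=fe.index)
    group[(fe >= 0) & (fe < width)] = "just meet or beat"   # assumed bin: [0, width)
    group[(fe >= -width) & (fe < 0)] = "just miss"          # assumed bin: [-width, 0)
    return fe, group

# Hypothetical EPS figures:
fe, grp = classify_benchmark_groups([0.105, 0.094, 0.20], [0.10, 0.10, 0.10])
print(grp.tolist())   # ['just meet or beat', 'just miss', 'all others']
```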

5. Results

Table 1, panel B, presents basic descriptive statistics for the discretionary accruals models used in our study. The BS (2006) model provided the highest explanatory power at 37%, compared with just 12% for the MJ (1995) model and nearly 20% for the LTACC model. Moreover, the percentage of positive [ΔSales − ΔRec] values was 59% for the BS (2006) model compared with 45% for the MJ (1995) model. The coefficient on PPE was expected to be negative and all the models support this conjecture. For the BS (2006) model, the coefficients for CFO and CFO*DUM were overwhelmingly negative and positive, respectively, as predicted.

Panel C presents descriptive statistics for analyst forecast errors in Australia. Over the whole sample period of 1995–2004, the mean (median) analyst forecast error was −2.17 (−0.0005), implying analyst optimism in Australia. The frequency of negative forecast error column shows that over the period 2000–2002 there was a monotonic increase in the frequency of negative forecast errors. However, the pattern started to reverse from 2003, with the frequency of negative forecast errors being lowest in 2004 (exactly one third of the sample observations in 2004 represented negative forecast error observations).

BD (1997) used histogram analysis to show the discontinuity around zero earnings targets and then tested for the difference in numbers of observations between the “just beat” and “just miss” groups. Following BD (1997), a histogram of forecast errors is reported in Fig. 1. Forecast errors are defined as [actual earnings per share − consensus analyst forecast per share]. As mentioned in the research design section, an interval width of 0.01 was chosen for the present study following Scott’s (1979) formula of 3.5σn^(−1/3), where σ is the standard deviation of the sample and n is the total number of observations.

Fig. 1 shows that there was a kink in the forecast error distribution immediately to the right of zero. To test whether the difference in the number of observations between the “just beat” and “just miss” groups is statistically significant, the test statistic developed by BD (1997) is reported. This statistic is the difference between the actual number of observations in an interval and the expected number of observations in the interval, divided by the standard deviation of the difference. The BD (1997) t-statistic was calculated to be 12.21 for the interval 0.00 to 0.01, which is statistically highly significant. The comparable statistic for the forecast error range of −0.01 to 0.00 is 1.64.

However, this is not prima facie evidence of earnings management. To establish that managers manage earnings to ‘just meet or beat’ analysts’ forecasts, it is necessary to document that there is a significant difference in the DACCR level between the “just beat” and “just miss” groups. If managers manage earnings to just beat analyst forecasts, then the level of income-increasing DACCR will be higher in the “just beat” group than in its “just miss” counterpart.
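The comparison described here amounts to two-sample tests on the DACCR distributions of the two groups. A minimal sketch is below; the Mann–Whitney rank-sum test is used as a stand-in for the paper’s z-test, which is an assumption, and the inputs are simulated.

```python
import numpy as np
from scipy import stats

def compare_daccr(just_beat, just_miss):
    """Mean (t-test) and distributional (rank-sum) comparisons of two DACCR samples."""
    t_stat, t_p = stats.ttest_ind(just_beat, just_miss, equal_var=False)
    u_stat, u_p = stats.mannwhitneyu(just_beat, just_miss, alternative="two-sided")
    return {"t": t_stat, "t_p": t_p, "u": u_stat, "u_p": u_p}

# Simulated DACCR values, scaled by lagged total assets (illustration only):
rng = np.random.default_rng(1)
print(compare_daccr(rng.normal(0.032, 0.10, 429), rng.normal(0.027, 0.09, 309)))
```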

Table 2, panel A, compares discretionary accruals for the “just meet or beat” group with those of the “just miss” group. This comparison provides a direct test of managerial use of income-increasing DACCR choices designed to avoid failure to attain analyst forecasts. Panel B compares DACCR for the “just meet or beat” group with those of the “all others” group. Finally, panel C compares DACCR for the “just miss” group with those of the “all others” group. Results for the three different models of DACCR are reported.

Panel A reports that none of the three DACCR models yielded a statistically significant difference between these groups. Only the BS (2006) DACCR model reported a higher mean and median for the “just meet or beat” group (3.2% and 2.2% of lagged total assets, respectively) compared with its “just miss” counterpart. Overall, the evidence from Panel A suggests that there was no significant difference in the mean and median DACCR values between the “just meet or beat” and the “just miss” groups, implying that the kink shown in the histogram analysis does not necessarily reflect earnings management practices. Of the other firm characteristics reported in Panel A, NPAT, CFO, and CL were significantly higher in the “just meet or beat” group compared with their “just miss” counterparts.

Panel B shows that the mean and median DACCR values calculated on the basis of BS (2006) were significantly higher for the “just meet or beat” group compared with the “all others” group. Mean DACCR was 3.2% (1.83%) of lagged total assets for the former (latter) group, respectively, and the difference in means was statistically significant at better than the 10% level. The mean and median DACCR values of the MJ (1995) and LTACC (2003) models were higher for the “just meet or beat” group than for the “all others” group.


Table 1
Panel A: Sample selection process.

Selection criteria Observations

Initial forecast observations from I/B/E/S for 1995–2004    4379
Less: missing actual earnings information from I/B/E/S    936
Less: firms belonging to financial industry    379
Less: observations lost due to available forecast but missing DACCR values    1117
Final sample    1947

Panel B: Descriptive statistics of the three DACCR models

MJ (1995) model: TACC = α0 + α1(ΔREV − ΔREC) + α2PPE + ε (1)
LTACC model: TACC = β0 + β1(ΔREV − ΔREC) + β2PPE + β3LTACC + ε (2)
BS (2006) model: TACC = γ0 + γ1ΔREV + γ2PPE + γ3CFO + γ4DUM + γ5CFO*DUM + ε (3)

Models Mean Median S.D. 25% 75% %Positive

(i) Modified Jones
Intercept    −0.0284    −0.0278    0.0724    −0.0676    0.0156
ΔSales − ΔRec    −0.0147    −0.0092    0.1851    −0.0844    0.0530    45.81%
PPE    −0.0371    −0.0281    0.1372    −0.0918    0.0303    34.07%
Adjusted R2    0.1209    0.0439    0.2404    −0.016    0.1915    64.00%
DACCR    0.0002    0.0123    0.2413    −0.0628    0.0815    54.97%
TACCR    −0.0606    −0.0375    0.2207    −0.1090    0.0119    30.00%

(ii) BS (2006)
Intercept    0.0026    0.0119    0.0910    −0.03129    0.0492
ΔSales − ΔRec    0.0113    0.0137    0.1353    −0.04525    0.0722    59.00%
PPE    −0.0247    −0.0204    0.1491    −0.0725    0.0241    39.00%
CFO    −0.5152    −0.5585    0.5974    −0.81252    −0.196    16.00%
DUMMY    −0.0160    −0.0126    0.3114    −0.08763    0.0469
CFODUM    0.6219    0.5001    2.5659    −0.01225    0.9731    73.96%
Adjusted R2    37%    33%    0.3258    10.00%    63.38%    89.90%
DACCR    0.0005    0.0126    0.1861    −0.0484    0.0666    56.84%
TACCR    −0.0613    −0.0538    0.1250    −0.10    −0.0156    18.74%

(iii) LTACC model
Intercept    −0.0389    −0.0382    0.0519    −0.0740    −0.0080
ΔSales − ΔRec    −0.0251    −0.0184    0.1352    −0.0874    0.0320    37.00%
PPE    −0.0309    −0.0171    0.1064    −0.0678    −0.0048    24.00%
TACCRt−1    0.1902    0.2076    0.3861    0.0565    0.3408    82.00%
Adjusted R2    19.17%    12.20%    0.2737    0.0138    0.2874    77.00%
DACCR    −0.0003    0.0123    0.1947    −0.0568    0.0763    55.17%
TACCR    −0.0643    −0.0394    0.2185    −0.111    0.0093    29.69%

Panel C: Yearly distribution of FE

Years N Mean FE Median FE Frequency of negative FE (in %) JMBE = 1 (in %)

All years    1988    −2.17    −0.0005    48.00    56.60
1995    100    1.45    −0.0012    56.00    62.50
1996    103    0.0081    0.0014    40.77    57.58
1997    109    −0.0356    −0.002    55.04    61.19
1998    151    −0.0013    0.00    47.02    60.00
1999    178    −0.0263    0.0017    46.06    55.22
2000    221    −0.2189    −0.001    52.94    51.65
2001    278    −12.626    −0.0071    58.38    49.00
2002    302    −3.0135    −0.008    63.58    63.21
2003    267    −0.0462    0.0017    45.32    65.00
2004    279    0.08207    0.006    33.33    56.60

Notes. Eq. (1) is the MJ (1995) model, Eq. (2) is the MJ (1995) model augmented by LTACC and, finally, Eq. (3) is the BS (2006) non-linear accruals model. TACC is total accruals, calculated as the difference between net income before abnormal items in year t and cash flow from operations (CFO) in year t; ΔREV is the change in sales revenue from period t − 1 to period t; ΔREC is the change in accounts receivable from period t − 1 to period t; and PPE is the gross value of property, plant and equipment. All variables were scaled by the opening value of total assets. LTACC is the lagged value of total accruals. DUM is a binary variable taking the value of 1 if CFOt < 0 and zero otherwise. The number of observations for estimating DACCR varied according to the estimation model. For the Modified Jones (1995) model, DACCR was estimated based on a regression of 180 industry-year observations from 1995 to 2004, yielding 8512 firm-year observations. The BS (2006) model was estimated based on a total sample of 8374 firm-year observations. Finally, the LACCR model as proposed by Dechow et al. (2003) used 7231 firm-year observations. FE is actual EPS minus the final consensus mean EPS before the earnings announcement from I/B/E/S. The frequency of negative forecast errors was calculated by dividing the number of negative forecast error observations (less than 0.00) by the total number of observations in each year. JMBE = 1 is the number of observations “just meeting or beating” the analyst forecasts benchmark divided by the number of observations “just missing” the benchmark.


Fig. 1. Histogram analysis of the forecast error distribution. The bin width was 0.01, based on Scott’s formula of 3.5σn^(−1/3). The number of sample observations was 1947. However, for the sake of brevity, observations lying beyond −0.06 and +0.06 were not graphed, resulting in 1511 firm-year histogram observations from 1995 to 2004. The BD (1997) t-statistic was 12.21 for the interval 0.00–0.01, which is statistically highly significant. The test statistic is the difference between the actual number of observations in an interval and the expected number of observations in the interval, divided by the standard deviation of the difference: t = (AQi − EQi)/SDi, where AQi and EQi are the actual and expected numbers of observations for interval i, and where the interval is immediately to the right of zero. SDi is the estimated standard deviation of the difference between the actual and expected numbers of observations around interval i. In particular, EQi = (AQi−1 + AQi+1)/2 and SDi = [N·pi(1 − pi) + (1/4)·N·(pi−1 + pi+1)(1 − pi−1 − pi+1)]^(1/2).

The difference was, however, not statistically significant. With respect to the other firm characteristics, NPAT and CFO were both positive and significantly higher in the “just meet or beat” group compared with the “all others” group. The use of income-increasing discretionary accruals could be responsible for the significant difference in NPAT between the two groups. However, the higher values of CFO for the “just meet or beat” group cast doubt on the earnings management explanation, because CFO is generally interpreted as being free of earnings management. Nevertheless, Roychowdhury (2006) found clear support for the manipulation of cash flow numbers to achieve the analyst forecasts benchmark.

Finally, panel C compares the mean and median DACCR between the “just miss” and “all others” groups. Contrary to the earnings management hypothesis, the mean and median DACCR values of all the models were higher in the “just miss” group than in their “all others” group counterpart. However, only the difference in mean DACCR values based on the MJ (1995) model was statistically significant, at the 10% level. Both the mean and median TACCR were significantly smaller, and the median NPAT was significantly larger, in the “just meet or beat” group versus the “just miss” group.

Overall, the evidence suggests that although histogram analysis of the forecast error distribution clearly showed a kink in the (0.00–0.01) forecast error bin, there was no evidence that managers use income-increasing DACCR in order to achieve the target. This could be due to a number of factors:

(a) Managers manage earnings to meet or beat forecasts, and DACCR is a good proxy for capturing such earnings management, but the histogram width of 0.01 is not appropriate: To alleviate the concern that 0.01 might not represent the true bin width used by investors, alternative widths of 0.005 and 0.02 were used. Given the absence of a theory underlying the appropriate bin width, any choice of a width is arbitrary. Unreported results showed findings similar to those based on the 0.01 bin width analysis. The kink was obvious, but again there was no significant difference in the mean and median DACCR values between the “just meet or beat” group and the “just miss” group.

(b) A histogram width of 0.01 was appropriate, but the models used to calculate DACCR failed to detect such earnings management because of measurement error.

Reliable estimation of discretionary accruals using existing models is fraught with measurement error (Fields, Lys, & Vincent, 2001; Kothari, Leone, & Wasley, 2005). To minimize the measurement error in calculating DACCR, this study used three different models for calculating DACCR. The use of multiple DACCR measures reduces the concern that the result is solely driven by measurement error. However, a concern with the DACCR models is that they fail to control for operating performance, which has been shown to affect the magnitude of DACCR.9


Table 2
Comparison of DACCR values.

Characteristics    “Just meet or beat” group    “Just miss” group    Test for differences    Observations

Mean Median S.D. Mean Median S.D. t-test z-test Just meet/Just miss

Panel A: Comparison of DACCR and other variables between “Just meet or beat” and “Just miss” groups
DACCR (MJ 95)    0.0148    0.0144    0.1354    0.0254    0.0156    0.1463    −1.05    −0.48    464/332
DACCR (BS 06)    0.0320    0.022    0.10    0.0267    0.0194    0.0910    0.755    1.48    429/309
DACCR (LACCR)    0.0200    0.0167    0.1107    0.0216    0.0183    0.1069    −0.17    −0.06    339/242
TACCR    −0.0344    −0.0383    0.1341    −0.0317    −0.0365    0.1214    −0.29    −0.21    796
NPAT    0.0798    0.0700    0.2001    0.0489    0.0591    0.1700    2.29**    3.34*    796
CFO    0.1081    0.1034    0.1757    0.0804    0.0898    0.1876    2.14**    2.37**    796
CA    0.5708    0.4404    0.9446    0.5568    0.4227    0.8210    0.22    −0.67    796
CL    0.3441    0.2638    0.4052    0.3131    0.2333    0.3533    1.22    1.98**    796
Leverage    0.2041    0.1566    0.2670    0.1975    0.1675    0.2282    0.37    −0.13    796
Size (log TA)    8.3737    8.3341    0.7414    8.4427    8.3852    0.8230    −1.24    −1.32    796

Characteristics “Just meet or beat” group “All others” group Test for differences Observations

Mean Median S.D. Mean Median S.D. t-test z-test

Panel B: Comparison of DACCR and other variables between “Just meet or beat” and “All others” groups
DACCR (MJ 95)    0.0148    0.0144    0.1354    0.0107    0.0123    0.1748    0.47    0.46    464/1483
DACCR (BS 06)    0.0320    0.022    0.10    0.0183    0.01669    0.10    2.56**    3.11*    429/1456
DACCR (LACCR)    0.0200    0.0167    0.1107    0.0129    0.0131    0.1190    1.01    1.11    339/1139
TACCR    −0.0344    −0.0383    0.1341    −0.051    −0.0432    0.1320    2.34**    3.01*    1947
NPAT    0.0798    0.0700    0.2001    0.0252    0.0488    0.2579    4.18*    7.11*    1947
CFO    0.1081    0.1034    0.1757    0.0755    0.0857    0.2824    2.35**    4.20*    1947
CA    0.5708    0.4404    0.9446    0.5487    0.4009    1.1971    0.36    0.99    1947
CL    0.3441    0.2638    0.4052    0.3421    0.2468    1.0336    0.04    1.56    1947
Leverage    0.2041    0.1566    0.2670    0.2115    0.1434    0.4965    −0.31    1.40    1947
Size (log TA)    8.3737    8.3341    0.7414    8.3975    8.3606    0.8401    −0.58    −0.65    1947

Characteristics “Just miss” group “All other” group Test for differences Observations

Mean Median S.D. Mean Median S.D. t-test z-test

Panel C: Comparison of DACCR and other variables between “Just miss” and “All others” groups
DACCR (MJ 95)    0.0254    0.011    0.1463    0.0090    0.0121    0.1693    1.81***    −1.04    332/1615
DACCR (BS 06)    0.0267    0.0194    0.0910    0.0204    0.0176    0.10    1.04    −0.49    309/1576
DACCR (LACCR)    0.0216    0.0183    0.1069    0.0132    0.0130    0.12    1.09    0.79    242/1236
TACCR    −0.0317    −0.0365    0.1214    −0.0504    −0.0435    0.1353    2.34**    2.27**    1947
NPAT    0.0489    0.0591    0.1700    0.0340    0.0519    0.3020    1.25    1.73***    1947
CFO    0.0804    0.0898    0.1876    0.0822    0.0883    0.3052    −0.10    −0.41    1947
CA    0.5568    0.4227    0.8210    0.5567    0.4048    1.2228    0.002    0.06    1947
CL    0.3131    0.2333    0.3533    0.3512    0.2545    1.0082    −1.21    −1.56    1947
Leverage    0.1975    0.1675    0.2282    0.2116    0.1404    0.4821    −0.52    0.94    1947
Size (log TA)    8.4427    8.3852    0.8230    8.3764    8.34    0.82    0.42    1.42    1947

Note. A total sample of 4379 annual forecast observations was available from 1995 to 2004. However, the final sample reduced to 1947 firm-year observations, primarily because of (a) missing actual earnings numbers in the I/B/E/S database (n = 936) and (b) missing observations in cross-matching forecast data with DACCR values (n = 1117). A further 379 firm-year observations in financial institutions industries were excluded because of the unique regulatory environment of these industries. The threshold level was 0.01 forecast error. *, ** and *** represent statistical significance at the 1%, 5% and 10% levels, respectively (two-tailed test).

In this setting, DACCR models may correctly detect such earnings management. On the other hand, such models can be mis-specified when applied to samples of firms with extreme performance, because performance and estimated DACCR may be mechanically related (Kothari et al., 2005).

To address this problem, the proportion of firms reporting positive DACCR values was calculated and is shown in Fig. 2. As is evident from the figure, there was a sharp increase in the proportion of firms reporting positive DACCR in the “just meet or beat” group (forecast error of 0.00–0.01) compared with its “just miss” counterpart. This might be expected if managers manage the accruals component of earnings to meet or beat forecast thresholds. There is, however, evidence of a positive correlation between the DACCR value and firm performance, as is evident from the fact that firms with positive forecast errors generally report a higher level of DACCR values. Specifically, firms beating the analyst forecasts by 5 cents reported the highest proportion of positive DACCR values. Caution is, therefore, needed in relying on DACCR measures to capture earnings management activities, as suggested by Kothari et al. (2005).
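A sketch of the construction behind Fig. 2, namely the share of firm-years with positive DACCR within each forecast-error bin; the bin boundaries and the simulated inputs are assumptions.

```python
import numpy as np
import pandas as pd

def positive_daccr_share_by_bin(forecast_error, daccr, width=0.01, lo=-0.09, hi=0.09):
    """Proportion of positive DACCR observations within each forecast-error bin."""
    fe = pd.Series(forecast_error, dtype=float)
    da = pd.Series(daccr, dtype=float)
    edges = np.arange(lo, hi + width, width)       # assumed left-closed bins of `width`
    bins = pd.cut(fe, edges, right=False)
    return (da > 0).groupby(bins, observed=True).mean()

# Simulated forecast errors and DACCR values for illustration:
rng = np.random.default_rng(2)
fe = rng.normal(0.0, 0.03, 1000)
da = rng.normal(0.01, 0.10, 1000)
print(positive_daccr_share_by_bin(fe, da).head())
```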

9 Dechow et al. (1995, 193) examined the specification and power of various discretionary accrual models and concluded that “all models reject the null hypothesis of no earnings management at rates exceeding the specified test levels when applied to samples of firms with extreme financial performance”.


Fig. 2. Proportion of positive unexpected accrual firms in each income class. The BS (2006) DACCR values were reported because this model had the highest explanatory power for TACCR in this study. Intervals beyond −0.09 and +0.09 were excluded because of the very small number of observations pertaining to those intervals.

To further validate that the DACCR models used in this study detect earnings management, an additional test was conducted to examine whether small profit firms with positive DACCR values have larger earnings and CFO declines than small loss firms in the following year. Unreported results revealed that there was no significant difference in future performance between the two groups (t-statistics of −0.41 and −0.46, respectively).

Overall, the evidence showed that there was a kink in the 0.00–0.01 forecast error interval, and the BD (1997) test statistic was significant in this interval. The mean and median DACCR values, however, did not differ significantly between the “just meet or beat” and the “just miss” groups.10

5.1. Temporal analysis of the just meet or beat analyst forecasts benchmark

Dechow et al. (2003), in the US context, provided strong evidence that the managerial propensity to “just meet or beat” analyst forecasts, compared with “just missing” the target, increased significantly over their sample period. This can be attributed to the significant penalties associated with failing to “just meet or beat” forecasts. However, the generalizability of this observation in the Australian context is an empirical question. As discussed earlier, the incentives for corporate managers in Australia to engage in earnings management to meet or beat analyst forecasts may be weak because of significant differences between the US and Australia regarding litigation threats as well as equity-based compensation schemes.

Table 3 seems to support this conjecture. Table 3 reports, for each year, the ratio of firms just meeting or beating forecasts to firms just missing those forecasts. A ratio greater than 1.00 implies that the number of observations in the “just meet or beat” group is higher than in its “just miss” counterpart. Except for 2002, the ratio was greater than 1.00 in each of the sample years. However, the regression of the ratio value against time provided a coefficient of 0.0096, which is statistically indistinguishable from zero. Fig. 3 presents the temporal analysis of the analyst forecast threshold.

10 A comparison of mean DACCR between high and low analyst-following groups regarding just meeting or beating forecasts revealed that mean DACCR was 0.0066 and 0.0254 for the two groups, respectively. The difference, however, was not statistically significant. For the “just miss” category, the comparative DACCR values were 0.0268 and 0.0238 for the two groups, respectively. This difference, too, was statistically indistinguishable from zero. The high (low) analyst group was defined as firm-year observations with a number of analysts following greater (less) than the median number of analysts following.


Fig. 3. Time-series behavior of the ratio of “just meet or beat” to “just miss” analyst forecasts observations. Firms reporting a forecast error in the range of 0.00–0.01 (both inclusive) were classified as the “just meet or beat” group. The value of the ratio was derived by dividing the number of “just meet or beat” firms by the number of “just miss” firms, i.e. those with a forecast error in the range −0.01 ≤ forecast error < 0.

Table 3
Temporal analysis of the analyst forecasts threshold.

1995 1996 1997 1998 1999 2000 2001 2002 2003 2004

Ratio    1.30    1.67    1.36    1.58    1.50    1.23    1.07    0.96    1.72    1.86
Trend    0    1    2    3    4    5    6    7    8    9

Ratio = β0 + β1 Trend + ε. Estimated trend coefficient: 0.0096 (0.28).
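Using the Table 3 ratios, the trend regression reported above can be reproduced with a simple ordinary least squares fit; the slope is approximately 0.0096, matching the coefficient in the text. The use of scipy’s linregress is an implementation choice, not the paper’s.

```python
import numpy as np
from scipy import stats

# Ratio of "just meet or beat" to "just miss" observations, 1995-2004 (Table 3)
ratios = [1.30, 1.67, 1.36, 1.58, 1.50, 1.23, 1.07, 0.96, 1.72, 1.86]
trend = np.arange(len(ratios))                  # 0, 1, ..., 9

# OLS regression of the ratio on a linear time trend
result = stats.linregress(trend, ratios)
print(round(result.slope, 4), round(result.pvalue, 2))   # slope ~ 0.0096, not significant
```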

5.2. Net operating assets (NOA), benchmark-beating and the DACCR

The results presented so far indicate that managerial use of DACCR to meet or beat analyst forecasts was not evident in the Australian context. As argued before, such a result could be due to an inappropriate bin width and/or to a lack of power in the DACCR models. The use of three different DACCR models should reduce concerns regarding their lack of power, and analysis using alternative interval widths also gave no indication of earnings management. Another possibility is that managers actually believed that the market rewarded them for meeting or beating thresholds but, as hypothesized by Barton and Simko (2002), their ability to bias earnings optimistically in order to meet a threshold decreased with the extent to which NOA were already overstated in the balance sheet (assuming NOA can act as a proxy for managerial discretion).

If managerial ability to meet or beat the analyst forecasts benchmark decreases with the magnitude of NOA, then it might be expected that the “just meet or beat” group would have a lower value of NOA compared with its “just miss” counterpart. To test this hypothesis, mean and median NOA values were compared between the two groups. NOA was calculated as [Shareholders’ equity − cash and marketable securities + total debt]/sales, as per Barton and Simko (2002), all measured at the beginning of the year.
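A minimal sketch of the NOA measure just defined; the argument names and figures are illustrative, and the deflator follows the beginning-of-year sales/operating revenue convention described in the paper (see the note to Table 4).

```python
def net_operating_assets(equity, cash_and_securities, total_debt, operating_revenue):
    """Barton and Simko (2002) style NOA, scaled by beginning-of-year revenue."""
    return (equity - cash_and_securities + total_debt) / operating_revenue

# Hypothetical beginning-of-year figures (in $ millions):
print(round(net_operating_assets(450.0, 60.0, 220.0, 390.0), 2))   # 1.56
```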

Table 4 reports the results. The mean NOA values reported by the “just miss” (“just meet or beat”) group were 1.75 (1.55), respectively. The difference, however, was not statistically significant. A similar result was obtained when median NOA values were compared. This result, however, should be interpreted with caution because of its univariate nature. Barton and Simko (2002) estimated a multivariate regression model with a number of control variables in order to isolate the negative effect of NOA on “just meet or beat” propensity. Unreported results for a multivariate regression of a “just meet or beat” dummy on firm size, firm profitability, analyst following, and NOA showed the hypothesized negative effect of NOA (a coefficient value of −0.02), but this was significant only at a p-value of 0.16.

Table 4
Comparison of NOA values among the “just meet or beat”, “just miss”, and “all others” groups.

Groups Mean Median Test of difference (mean) Test of difference (median)

Just beat    1.55    0.70
Just miss    1.75    0.81    0.59    1.34
All others    2.33    0.78    1.86***    2.38**

NOA (all) 2.15 0.76

Note. NOA was calculated as [Shareholders’ equity − cash and marketable securities + total debt]/sales, as per Barton and Simko (2002), all measured at the beginning of the year. Australian companies report operating revenue, other revenue and total revenue in the income statement. Because other revenue consisted primarily of non-operating revenue, operating revenue (OPREV) at the beginning of the year was used as the deflator. The sample was reduced by 81 observations which did not have the information necessary to calculate NOA. Furthermore, extreme observations made the distribution highly skewed and asymmetrical. To reduce the effect of outlier observations, the top and bottom 1% of the NOA observations were eliminated. The final sample consisted of 1884 firm-year observations from 1995 to 2004. *, ** and *** represent statistical significance at the 1%, 5% and 10% levels, respectively (two-tailed test).


An interesting result that emerged from the regression was a significantly positive correlation between the number of analysts following a firm and the “just meet or beat” dummy variable. The mean and median NOA reported for the “all others” group were significantly higher than those for the “just meet or beat” group.

6. Conclusion

This paper extends previous benchmark-beating research in Australia by incorporating analyst forecasts as another benchmark. Analysts have been shown to incorporate more relevant information, and to be timelier in their forecasts, than time-series models. This paper shows that there were a large number of observations just meeting or beating analyst forecasts, and an unexpectedly low number just missing the forecast. To investigate whether managers managed earnings to meet or beat this important benchmark, three different models of DACCR values were compared between the “just meet or beat” and “just miss” groups. However, no significant difference was found, providing no support for the presence of earnings management. The three different versions of DACCR models were used because the measurement of DACCR contains errors which could bias the reported results. The BS (2006) version of the non-linear accruals model provided the highest explanatory power compared with the MJ (1995) model and the MJ (1995) model augmented with the LTACC variable.

As argued in Section 2, the absence of earnings management by Australian firms to meet or beat analyst forecasts can be attributed to institutional differences between Australia and the US. However, the present study may fail to detect earnings management because managers have a number of options available to meet or beat forecasts. In addition to accruals manipulation, managers could use forecast guidance, classification shifting, and/or real activities manipulation to manage earnings. Analysts engaged in the forecast guidance game tend first to issue optimistic earnings forecasts and then to ‘walk down’ their estimates to a level that is beatable by the company (Richardson et al., 2004). Managers could also meet or beat forecast thresholds by engaging in real activities manipulation. Roychowdhury (2006) reported that managers discount prices to temporarily increase sales, over-produce to report a lower cost of goods sold, and cut down discretionary expenditure to improve earnings in order to meet earnings thresholds.

Lin, Radhakrishnan, and Su (2006) examined a broad set of mechanisms to meet or beat analyst forecasts. These are (i) forecast guidance; (ii) discretionary accruals; (iii) classification shifting; and (iv) real activities manipulation. Downward forecast guidance, upward discretionary accruals, and upward classification shifting increased the probability of meeting or beating analyst forecasts by 9%, 5% and 10%, respectively. The effect of real activities manipulation, however, was minimal. Future research needs to consider the effect of forecast guidance, real activities manipulation and any other mechanisms used by managers to meet or beat analyst forecasts in Australia.

Acknowledgements

We wish to thank two anonymous reviewers for many helpful suggestions. We also thank Alan Ramsay and seminar participants at the 2007 annual congress of the European Accounting Association for helpful comments.

References

Ayers, B. C., Jiang, J., & Yeung, P. E. (2006). Discretionary accruals and earnings management: An analysis of pseudo earnings targets. The Accounting Review, 81(3), 617–652.
Ball, R., Kothari, S. P., & Robin, A. (2000). The effect of international institutional factors on properties of accounting earnings. Journal of Accounting and Economics, 29, 1–51.
Ball, R., & Shivakumar, L. (2006). The role of accruals in asymmetrically timely gain and loss recognition. Journal of Accounting Research, 44(2), 1–36.
Ball, R., Robin, A., & Sadka, G. (2008). Is financial reporting shaped by equity markets or debt markets? An international study of timeliness and conservatism. Review of Accounting Studies, 13(2–3), 168–205.
Barton, J., & Simko, P. J. (2002). The balance sheet as an earnings management constraint. The Accounting Review, 77(Supplement), 1–27.
Bartov, E., Givoly, D., & Hayn, C. (2002). The rewards to meeting or beating earnings expectations. Journal of Accounting and Economics, 33, 173–204.
Basu, S. (1997). The conservatism principle and the asymmetric timeliness of earnings. Journal of Accounting and Economics, 24, 3–37.
Basu, S., Hwang, L. S., & Jan, C.-L. (1998). International variation in accounting measurement rules and analysts' earnings forecast errors. Journal of Business Finance & Accounting, 25(9–10), 1207–1247.
Brown, L. D. (2001). A temporal analysis of earnings surprises: Profits versus losses. Journal of Accounting Research, 39(2), 221–241.
Brown, L. D., & Caylor, M. L. (2005). A temporal analysis of quarterly earnings thresholds: Propensities and valuation consequences. The Accounting Review, 80(2), 423–440.
Brown, P., Clarke, A., How, J. C. Y., & Lim, K. J. P. (2002). Analysts' dividend forecasts. Pacific Basin Finance Journal, 10, 371–391.
Brown, L. D., & Higgins, H. N. (2001). Managing earnings surprises in the US versus 12 other countries. Journal of Accounting and Public Policy, 20, 373–398.
Brown, P., Taylor, S. L., & Walter, T. S. (1999). The impact of statutory sanctions on the level and information content of voluntary corporate disclosure. Abacus, 35(2), 138–162.
Burgstahler, D., & Dichev, I. (1997). Earnings management to avoid earnings decreases and losses. Journal of Accounting & Economics, 24, 99–126.
Burgstahler, D., & Eames, M. J. (2003). Earnings management to avoid losses and earnings decreases: Are analysts fooled? Contemporary Accounting Research, 20(2), 253–294.
Bushman, R. M., & Piotroski, J. D. (2006). Financial reporting incentives for conservative accounting: The influence of legal and political institutions. Journal of Accounting and Economics, 42(1–2), 107–148.
Cheng, Q., & Warfield, T. D. (2005). Equity incentives and earnings management. The Accounting Review, 80(2), 441–476.
Core, J. E., Guay, W. R., & Larcker, D. F. (2003). Executive equity compensation and incentives: A survey. Federal Reserve Bank of New York Economic Policy Review (April), 27–50.
Coulton, J., & Taylor, S. (2002). Option awards for Australian CEOs: The who, what and why. Australian Accounting Review, 12(1), 25–35.
Coulton, J., Taylor, S., & Taylor, S. (2005). Is 'benchmark beating' by Australian firms evidence of earnings management? Accounting and Finance, 45, 553–576.
Davidson, R., Goodwin-Stewart, J., & Kent, P. (2005). Internal governance structures and earnings management. Accounting and Finance, 45, 241–267.


Dechow, P. M. (1994). Accounting earnings and cash flows as measures of firm performance: The role of accounting accruals. Journal of Accounting and Economics, 18, 3–42.
Dechow, P. M., Kothari, S. P., & Watts, R. L. (1998). The relation between earnings and cash flows. Journal of Accounting and Economics, 25, 133–168.
Dechow, P. M., Richardson, S. A., & Tuna, I. (2003). Why are earnings kinky? An examination of the earnings management explanation. Review of Accounting Studies, 8, 355–384.
Dechow, P. M., Sloan, R. G., & Sweeney, A. P. (1995). Detecting earnings management. The Accounting Review, 70(2), 193–225.
Degeorge, F., Patel, J., & Zeckhauser, R. (1999). Earnings management to exceed thresholds. Journal of Business, 72(1), 1–33.
Dhaliwal, D. S., Gleason, C. A., & Mills, L. F. (2004). Last-chance earnings management: Using the tax expense to meet analysts' forecasts. Contemporary Accounting Research, 21(2), 431–459.
Durtschi, C., & Easton, P. (2005). Earnings management? The shapes of the frequency distributions of earnings metrics are not evidence ipso facto. Journal of Accounting Research, 43(4), 557–592.
Fields, T., Lys, T., & Vincent, L. (2001). Empirical research on accounting choice. Journal of Accounting & Economics, 31, 255–307.
Graham, J. R., Harvey, C. R., & Rajgopal, S. (2005). The economic implications of corporate financial reporting. Journal of Accounting and Economics, 40, 3–73.
Holland, D. (2003). Earnings management: A methodological review of the distribution of reported earnings approach (Working paper). Monash University.
Holland, D., & Ramsay, A. (2003). Do Australian companies manage earnings to meet simple earnings benchmarks? Accounting and Finance, 43, 41–62.
Hope, O.-K. (2003). Disclosure practices, enforcement of accounting standards, and analysts' forecast accuracy: An international study. Journal of Accounting Research, 41(2), 235–272.
Hribar, P., Jenkins, N. T., & Johnson, W. B. (2006). Stock repurchases as an earnings management device. Journal of Accounting and Economics, 41(1–2), 3–27.
Jacob, J., & Jorgensen, B. N. (2007). Earnings management and accounting income aggregation. Journal of Accounting and Economics, 43(2–3), 369–390.
Jones, J. (1991). Earnings management during import relief investigations. Journal of Accounting Research, 29(2), 193–228.
Kahneman, D., & Tversky, A. (1979). Prospect theory: An analysis of decision under risk. Econometrica, 47, 263–291.
Kasznik, R., & McNichols, M. (2002). Does meeting earnings expectations matter? Evidence from analyst forecast revisions and share prices. Journal of Accounting Research, 40(3), 727–759.
Kothari, S. P., Leone, A. J., & Wasley, C. E. (2005). Performance matched discretionary accrual measures. Journal of Accounting and Economics, 39, 163–197.
Lin, S., Radhakrishnan, S., & Su, L. (2006). Earnings management and guidance for meeting or beating analysts' earnings forecasts (Working paper). California State University, University of Texas and The Hong Kong Polytechnic University.
Lopez, T. J., & Rees, L. (2002). The effect of beating and missing analysts' forecasts in the information content of unexpected earnings. Journal of Accounting, Auditing and Finance, 17(2), 155–184.
Matolcsy, Z., & Wright, A. (2007). CEO compensation structure and firm performance (Working paper). Australia: University of Technology, Sydney.
Matsumoto, D. A. (2002). Managers' incentives to avoid negative earnings surprises. The Accounting Review, 77(3), 483–514.
Matsunaga, S. R., & Park, C. W. (2001). The effect of missing a quarterly earnings benchmark on the CEO's annual bonus. The Accounting Review, 76(3), 313–332.
McVay, S. L., Nagar, V., & Tang, V. W. (2006). Trading incentives to meet the analyst forecast. Review of Accounting Studies, 11(4), 575–598.
McVay, S. L. (2006). Earnings management using classification shifting: An examination of core earnings and special items. The Accounting Review, 81(3), 501–532.
Moehrle, S. R. (2002). Do firms use restructuring charge reversals to meet earnings targets? The Accounting Review, 77(2), 397–413.
Multiplex Class Actions. (2007). Retrieved from http://www.mauriceblackburncashman.com.au/areas/class actions/multiplex.asp
O'Brien, P. (1988). Analysts' forecasts as earnings expectations. Journal of Accounting and Economics, 10, 159–193.
Phillips, J., Pincus, M., & Rego, S. O. (2003). Earnings management: New evidence based on deferred tax expense. The Accounting Review, 78(2), 491–521.
Ramnath, S., Rock, S., & Shane, P. (2008). The financial analyst forecasting literature: A taxonomy with suggestions for further research. International Journal of Forecasting, 24(1), 34–75.
Richardson, S., Teoh, S. H., & Wysocki, P. D. (2004). The walk-down to beatable analyst forecasts: The role of equity issuance and insider trading incentives. Contemporary Accounting Research, 21(4), 885–924.
Roychowdhury, S. (2006). Earnings management through real activities manipulation. Journal of Accounting and Economics, 42, 335–370.
Scott, D. W. (1979). On optimal data-based histograms. Biometrika, 66, 605–610.
Scott, D. W. (1992). Multivariate density estimation: Theory, practice and visualization. New York: Wiley.
Silverman, B. W. (1986). Density estimation for statistics and data analysis. London: Chapman and Hall.
Skinner, D. J., & Sloan, R. (2002). Earnings surprises, growth expectations, and stock returns or don't let an earnings torpedo sink your portfolio. Review of Accounting Studies, 7, 289–312.
Taylor, S., & Taylor, S. (2003). Earnings conservatism in a continuous disclosure environment: Empirical evidence (Working paper). University of Melbourne and University of New South Wales.
