This article is intended primarily to air omissions and other flaws in generally accepted auditing standards, particularly regarding the use of materiality in auditing. It is also intended to clarify the relationship of materiality in an auditing context to its underlying statistical sampling concepts. The authors hope it will help auditors understand these relationships and concepts and use them better in their audits, and that it will help standards setters and others interested in strengthening auditing standards refocus their agendas on the weaknesses in the current standards. This article will not discuss the recent, highly controversial FASB proposals on accounting materiality.

Distinguishing between Accounting and Auditing Materiality Concepts

The distinction between auditing materiality (used in planning to determine audit scope) and accounting materiality (a limiting threshold for waiving adjustments, including those related to omitted or misstated disclosures) is important. It is muddied, however, by the use of accounting definitions for auditing purposes; the distinction is neither clearly set forth nor illustrated in the auditing literature, nor is it typically discussed in publicly available nonauthoritative sources. Simply described, auditing materiality provides a framework for how hard the auditor needs to look for misstatements, while accounting materiality helps the auditor decide what to do with the known and projected misstatements that are found. Although related, the accounting and auditing usages of “material” and “materiality” are quite different. The use of an accounting materiality is discussed in considerable, useful detail in the auditing standards that apply to evaluating findings at the end of an audit [the AICPA’s AU-C 450 and the PCAOB’s Auditing Standard (AS) 2810], but it is never identified there as such. Because the auditing literature never discusses this distinction, auditors are prone to confuse the two concepts. The hidden distinction is most easily illustrated and recognized in a statistical sampling context.

An authoritative discussion of materiality for audit planning (i.e., scope determination) purposes first appeared in an auditing standard in 1983, as part of the discussion of audit planning in the AICPA Auditing Standards Board’s (ASB) Statement on Auditing Standards (SAS) 47, Audit Risk and Materiality in Conducting an Audit. Among other things, SAS 47 provided that once planning materiality is determined, a smaller waived adjustment threshold is established. Although the standard did not say so, a waived adjustment threshold is an accounting materiality concept (i.e., the predetermined maximum aggregate value of unadjusted misstatements that an auditor will accept while still issuing a clean audit opinion). That aggregate value may be reduced based on qualitative considerations relative to individual proposed adjustments.

Materiality is defined in the auditing standards only in its accounting sense, first by reference to FASB’s Statement of Financial Accounting Concepts (SFAC) 2, Qualitative Characteristics of Accounting Information, and later by reference to SFAC 8, even though the term is used primarily in its auditing sense. In contrast, the PCAOB’s AS 2105, Consideration of Materiality in Planning and Performing an Audit, refers instead to the definition set forth by the Supreme Court in 1976. (FASB’s proposed change to SFAC 8 would likewise refer to the Supreme Court’s definition, a large part of the aforementioned controversy.) Unfortunately, both definitions speak only to a limit on unadjusted measurement and disclosure deficiencies, not to a determination of audit scope. To add to the confusion, the SEC’s rules present accounting definitions of “material” that are similar to, but different from, the ones referenced in either auditing standard. Rule 1-02(o) of Regulation S-X states that “material” refers to “those matters about which an average prudent investor ought reasonably to be informed,” and Rules 12b-2 and 405 of the 1934 and 1933 Securities Acts, respectively, say the term applies to “those matters to which there is a substantial likelihood a reasonable investor would attach importance in determining whether to buy or sell the securities registered.”

Accounting materiality is driven by the perceived probable decisions of a “reasonable investor” or other user. Audit planning materiality, however, is usually expressed as a multiple of what is described as an accounting materiality, because using accounting materiality for audit planning would generally produce significant audit inefficiencies, which in some cases could translate into audit ineffectiveness. For example, using a small materiality for a sampling procedure would likely yield a large sample size for a substantive test, say, several hundred selections. The auditor may adjust the sampling parameters to reduce the selection to a more workable number, say, 120 items, and may try to compensate for the reduction by substituting a less effective substantive procedure, which can inadvertently increase detection risk. Alternatively, some auditors may arbitrarily reduce sample size without applying compensating procedures, which is highly risky. [See “Thematic Audit Quality Review of the Financial Reporting Council—Materiality (UK),” 2013, and summaries of PCAOB inspection results dealing with similar matters.]
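The scope effect described above can be made concrete with a simplified monetary unit sampling size calculation. The sketch below (in Python) is illustrative only: it assumes a Poisson confidence factor of 3.0 (roughly 95% confidence with no expected misstatement), and the account balance and materiality amounts are hypothetical rather than drawn from any standard.

    # Illustrative sketch only; the balance, confidence factor, and materiality
    # amounts are assumptions, not figures from any standard or guide.
    def mus_sample_size(population_book_value, tolerable_misstatement,
                        confidence_factor=3.0):
        # Simplified monetary unit sampling size with no expected misstatement:
        # sample size = book value x confidence factor / tolerable misstatement.
        return round(population_book_value * confidence_factor / tolerable_misstatement)

    balance = 10_000_000  # hypothetical account balance
    mus_sample_size(balance, 100_000)   # accounting-level materiality -> 300 selections
    mus_sample_size(balance, 250_000)   # larger planning-level amount  -> 120 selections

Under this simplification, halving the materiality used for the test doubles the sample, which is why an accounting-level threshold can make substantive samples unworkably large.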

AU-C 320.10 and .A12–.A13 (AS 2105.06) discuss materiality determinations for particular classes of transactions, account balances, or disclosures, and AU-C 320.A2 and AU-C 530.05 and .A6 define and discuss “tolerable misstatement,” an auditor judgment representing an upper error limit for a given sampling application that includes a provision (whether measured or not) for the sampling precision concept (AS 2315.18 and .19). Per AU-C 320.10 and .A14 (AS 2105.06), one or more reduced amounts of materiality (called “performance materiality”) are to be applied to various classes of transactions, account balances, or disclosures as deemed necessary in the auditor’s judgment, similar to the way sampling precision is used in sampling. These matters are quite complex, and the authors think the literature should be substantially expanded to be useful in this area. The copious guidance that large firms generally provide may serve as a model in this respect.


Qualitative Materiality

Auditors commonly subscribe to the belief that the quantitative materiality threshold may be higher for disclosure matters than for measurement matters when the item does not affect any key measurement metric in the financial statements or raise significant qualitative materiality considerations for sensitive matters. Unfortunately, this notion is not mentioned in any auditing standard. Auditors and financial statement issuers would be better protected from litigation risk if the concept were legitimized in the authoritative literature. Instead, the literature [AU-C 320.10 and .A12–.A14 (AS 2105.06)] affords only brief guidance for determining a disclosure materiality, vaguely suggesting when, based on qualitative considerations, a special purpose value less than the accounting materiality used generally throughout the audit might be appropriate.

The authors also note the debate over the last few decades concerning qualitative materiality and its abuse, and the responses first from the SEC in Staff Accounting Bulletin 99 and then from private-sector standards setters in SAS 98, Omnibus Statement on Auditing Standards—2002. Auditors are now effectively cautioned by AU-C 320.06 and AU-C 450.10–.11, .A22, and .A23 (AS 2105.12 and 2810.17) that the waived adjustment threshold initially determined when planning the audit—

does not necessarily establish an amount below which uncorrected misstatements, individually or in the aggregate, will [or should] always be evaluated as immaterial. The circumstances related to some misstatements may cause the auditor to evaluate them as material even if they are below materiality. Although it is not practicable to design audit procedures to detect misstatements that could be material solely because of their nature (that is, qualitative considerations), the auditor considers not only the size but also the nature of uncorrected misstatements, and the particular circumstances of their occurrence, when evaluating their effect on the financial statements.

As noted above, the waived adjustment threshold initially determined in planning should often be reduced based on qualitative considerations regarding sensitivity to user needs on certain items, such as related party transactions or illegal acts. In some circumstances, the threshold for waiving adjustments should be near zero, such as when the entity is on the cusp of a debt covenant violation.

Relationship of Planning and Performance Materiality to Sampling Concepts

The concept of planning materiality was introduced in SAS 39, Audit Sampling, in 1981 and expanded upon in SAS 47. Neither SAS 39 nor SAS 47 discussed it in much depth, but it was covered further in Appendix L of the 2006 audit guide, Assessing and Responding to Audit Risk in a Financial Statement Audit, and that guidance has been carried forward to more recent editions.

Planning materiality is the expected maximum aggregate value of all identified and unidentified misstatements (akin to tolerable misstatement in a single sampling application) that an auditor can tolerate without affecting the audit opinion, given the maximum desired level of audit risk. In this context, the aggregate maximum tolerable misstatement comprises projected and known misstatements plus an allowance for estimated unknown or undetected misstatements (precision). Because an audit is invariably based on tests of less than 100% of the data, there is always some risk of unknown misstatements.
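A brief numeric illustration of this relationship follows; the amounts are hypothetical, not drawn from any standard.

    # Hypothetical amounts for illustration only.
    planning_materiality = 500_000
    known_and_projected_misstatements = 150_000   # found by, or projected from, audit tests
    allowance_for_undetected = planning_materiality - known_and_projected_misstatements  # 350,000
    # If the precision achieved by the audit tests exceeds this 350,000 allowance,
    # the risk that total misstatement exceeds planning materiality is greater than planned.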

Inherent and control risk considerations, as well as the acceptable level of detection risk, are affected by such factors as the entity’s size and the complexity of its transactions, estimates, and related disclosures, as well as by business risk factors. The standards discuss business risk factors only with respect to the reporting entity and define them as risks that result from “significant conditions, events, circumstances, actions, or inactions that could adversely affect an entity’s ability to achieve its objectives and execute its strategies or from the setting of inappropriate objectives and strategies” (AU-C section 315, “Understanding the Entity and Its Environment and Assessing the Risks of Material Misstatement,” and similarly AS 2110.A2). Business risk to the entity “is broader than the risk of material misstatement of the financial statements, though it includes the latter” (AU-C section 315.A37); the authors, however, use the term even more broadly. In their view, business risks also include risks that are not mentioned in the standards but that affect an auditor’s judgment as to the level of risk of material misstatement that would be acceptable under any circumstances. Examples of auditors’ business risks include the probability of adversarial action by users of the audit report, punitive action by regulators, or adverse publicity from either. Judgments used in setting the level of planning materiality are generally based on considerations of user needs, and they include business risk considerations relative to the reporting entity.

“Performance materiality” is defined in AU-C 320.09 as an “amount or amounts set by the auditor … to reduce to an appropriately low level the probability that the aggregate of uncorrected and undetected misstatements exceeds materiality for the financial statements as a whole.” This reduction is based on the auditor’s judgment (believed to be conservative) and is intended to be used repeatedly throughout the audit without making complex, time-consuming statistical calculations. It is derived by reducing planning materiality to a lower value and is used 1) to set the maximum tolerable misstatement for determining sample size in statistical tests (a further reduced value), 2) as the minimum value of a misstatement that a planned substantive analytical procedure is judged precise enough to be reasonably likely to detect, and 3) as a limit on the aggregate value of an untested population. The auditing standards do not permit analytical procedures to serve as the sole source of audit evidence for a significant assertion; such procedures must be supported by tests of details or of controls. Some auditors, however, are skeptical about the use of a substantive analytical procedure as a primary audit procedure (see N.B. Hitzig, “The Hidden Risk in Analytical Procedures,” The CPA Journal, February 2004).
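A minimal sketch of how these three uses might be parameterized in practice follows; the 75% haircut is a common rule of thumb in firm methodologies (often in the 50%–75% range), not a percentage prescribed by any standard, and all amounts are hypothetical.

    # Hypothetical haircut from planning to performance materiality.
    planning_materiality = 500_000
    performance_materiality = round(0.75 * planning_materiality)   # 375,000

    # 1) Ceiling for tolerable misstatement when sizing a sample (often reduced further).
    tolerable_misstatement_cap = performance_materiality
    # 2) Smallest misstatement a substantive analytical procedure must be precise enough to detect.
    analytical_detection_threshold = performance_materiality
    # 3) Limit on the aggregate book value deliberately left untested.
    untested_population_limit = performance_materiality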

Item 3 above is a nonissue when a well-designed statistical sampling procedure is applied to an entire population, because every item then has some chance of selection. If some items are excluded from the population being sampled, however, the possibility of a material misstatement among those excluded items must be considered. It is not uncommon to separate out large or unusual items, but those items must still be tested by other means.
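One common way to handle such items, sketched below with a hypothetical threshold, is to examine individually significant items 100% and sample only the remainder, so that no recorded amount is left with a zero probability of examination.

    # Illustrative stratification; the threshold (e.g., performance materiality
    # or the sampling interval) is an assumption, not a prescribed value.
    def stratify(book_amounts, threshold):
        examine_individually = [x for x in book_amounts if x >= threshold]  # test 100%
        subject_to_sampling = [x for x in book_amounts if x < threshold]    # sample the rest
        return examine_individually, subject_to_sampling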

Risk, Precision, and the Role of Audit Sampling

Sampling precision is not defined in the auditing standards, but it is defined in the ASB’s audit guide, Audit Sampling, as “a measure of the difference between a sample estimate and the corresponding population characteristic at a specified sampling risk.” For example, under monetary unit sampling (sometimes called “dollar unit sampling”), desired precision equals audit materiality less a statistical or nonstatistical estimate of the aggregate probable misstatement in the population tested. At the end of each statistical test, an auditor computes projected misstatement and the upper limit of misstatement; the difference between the two is called “sampling precision.” Measuring precision is not, however, required or suggested by any U.S. or international standard, nor by the ASB audit guide, and the authors think this is a problem.
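A simplified monetary unit sampling evaluation makes this arithmetic visible. In the sketch below, the Poisson confidence factor, the omission of the Stringer bound’s incremental allowances for each misstatement found, and all dollar amounts are assumptions for illustration, not prescriptions from any standard or the audit guide.

    # Simplified MUS evaluation: projects taintings and computes a crude upper limit.
    def mus_evaluation(population_book_value, sample_size, misstatements,
                       confidence_factor=3.0):
        # misstatements: list of (book_amount, audit_amount) pairs for sampled
        # items found in error, each assumed smaller than the sampling interval.
        interval = population_book_value / sample_size
        taintings = [(book - audit) / book for book, audit in misstatements]
        projected = sum(taintings) * interval           # projected misstatement
        basic_precision = confidence_factor * interval  # allowance even with zero errors
        # A full Stringer bound would add an incremental allowance per error;
        # omitting it keeps the sketch short but understates the true upper limit.
        upper_limit = projected + basic_precision
        precision = upper_limit - projected             # the guide's "sampling precision"
        return projected, upper_limit, precision

    # Hypothetical test: $5,000,000 population, 100 selections, two overstatements found.
    projected, upper_limit, precision = mus_evaluation(
        5_000_000, 100, [(12_000, 9_000), (4_000, 3_000)])
    # projected = 25,000; upper_limit = 175,000; precision = 150,000.
    # If upper_limit stays below tolerable misstatement, statistical theory supports
    # accepting the population at the planned level of sampling risk.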

AU-C 530.A24 and .A27 merely hint at the concept of sampling precision (calling it “sampling risk”): “due to sampling risk, this projection may not be sufficient to determine an amount to be recorded,” and “if the projected misstatement is greater than the auditor’s expectations of misstatement used to determine the sample size, the auditor may conclude that there is an unacceptable sampling risk that the actual misstatement in the population exceeds the tolerable misstatement.” The ASB audit guide suggests that statistical methods provide “numerical control of and evaluation of sampling risk” (par. 1.19) but affords no further guidance to auditors as to its measurement. Similarly, AS 2315.26 vaguely hints at the concept of sampling risk or precision, but without requiring its measurement or providing any useful guidance to auditors.

AU-C 530.A27 and AS 2315.26 also provide that when the projected misstatement amount approaches tolerable misstatement, the risk that the actual aggregate misstatement in the population exceeds tolerable misstatement increases. AU-C 530.A28 provides that if the auditor believes the sampling results do not provide a reasonable basis for a conclusion, the auditor should ask management to investigate. (There is no corresponding PCAOB provision in AS 2315.) If sampling precision were instead to be measured and determined to be acceptable (i.e., less than tolerable misstatement), the results of a properly designed and executed statistical sample could be deemed conclusive based on statistical theory.

The authors therefore believe that the omission of a requirement to quantify sampling precision, and of related application guidance, is a significant flaw in the auditing standards. Without measuring sampling precision, auditors cannot comfortably assess the reliability of a sampling result or whether a statistically calculated proposed adjustment, in fact, has the potential to create misstatement rather than reduce or eliminate it. Moreover, there is no reliable or practical way to measure the risk of undetected misstatement when not using statistical sampling or sampling designed to resemble it. Accordingly, the authors believe the sampling standards should be revised to require or strongly encourage the use of statistical sampling and to provide specific guidance on the measurement of precision.

Arguments for and against Rules- versus Principles-Based Auditing Standards

The debate over rules-based versus principles-based standards has gone on for years and persists today. Those who generally support rules-based standards have an additional argument when standards are highly technical. Although generally in favor of principles-based standards that allow for more auditor judgment, the authors lean toward the rules-based camp in the particular areas of materiality and sampling, because auditors need specific direction in these areas that would lead to sounder and more consistent practice.

Benchmarking

“Benchmark” is the term used in the ASB standards for the basis, chosen from among key financial statement or other metrics, for determining planning materiality. Such factors are discussed in AU-C 320.10 and .A5–.A9 (AS 2105.06). Materiality is clearly a judgment, and so is the selection of an appropriate benchmark; although user considerations are primary, business risk factors to both the entity and the auditor are also important determinants. Certain benchmarks (e.g., pretax income from continuing operations), however, may be too volatile to be practicable or comparable. Such measures may also create impractical audit sample sizes, much as when an accounting materiality measure is used for audit planning purposes. A common problem with such benchmarks is that they can vary significantly from year to year, creating an apples-to-oranges issue in scoping procedures as well as comparability issues when evaluating year-to-year waived adjustments. Auditors should therefore consider the stability of the client’s basic business model and audit unusual transactions differently. Unusual transactions, for example, generally should be audited 100%, removing materiality as a factor.

The authors recommend that auditors use relatively stable benchmarks for determining planning materiality, such as the larger of total assets or revenues or, for public entities, a measure of entity value (e.g., public float). These measures can be sensitized in two ways: by establishing ranges within the benchmarks so that more auditing is required for more risk-prone entities, and by building a sliding scale for materiality so that relatively less auditing is required for larger entities. Auditors will generally have different judgments about risk and materiality, but sensible boundaries should be used. These thoughts are only vaguely embodied in the auditing literature (AU-C 320.06, AS 2105.12).
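One way such a sliding scale might be expressed is sketched below; the breakpoints and percentages are invented for illustration, since neither the standards nor the audit guide prescribe any particular table.

    # Hypothetical sliding scale: planning materiality as a declining percentage of
    # the benchmark (here, the larger of total assets or revenues), tightened for
    # risk-prone entities. All breakpoints and rates are illustrative assumptions.
    def planning_materiality(benchmark, risk_prone=False):
        if benchmark <= 10_000_000:
            rate = 0.02
        elif benchmark <= 100_000_000:
            rate = 0.01
        else:
            rate = 0.005
        if risk_prone:
            rate *= 0.5   # more auditing for riskier entities
        return benchmark * rate

    planning_materiality(50_000_000)                    # 500,000 (1% of benchmark)
    planning_materiality(50_000_000, risk_prone=True)   # 250,000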



Where Are We Now?

The accounting and auditing definitions of materiality are different, and the auditing literature is deficient in its failure to focus on and explain the difference. That difference is most easily recognized and understood within the context of statistical sampling. Unfortunately, it appears that the use of statistical sampling in auditing may have diminished in recent years, which has probably contributed to the widespread misunderstanding of this distinction. Accounting materiality, which is based on the probable decisions of a reasonable investor or financial statement user, should be used for measuring quantitative accounting and disclosure misstatements. Too often, however, it is used for audit planning, a decision that has significant consequences for the auditor and the issuer.

The authors believe that revisions are needed to strengthen the auditing standards, enhance audit quality and efficiency, and provide benefits to all who rely on audited financial statements. Accordingly, the time for the ASB and the PCAOB to revisit some of these issues is now.

Julian Jacoby, CPA (retired) ended his 51-year career as accounting and assurance director at Crowe Horwath International, New York, N.Y.
Howard B. Levy, CPA is a principal and director of technical services at Piercy Bowler Taylor & Kern, Las Vegas, Nev. He is a former member of the AICPA’s Auditing Standards Board and its Accounting Standards Executive Committee, and a current member of its Center for Audit Quality’s Smaller Firms Task Force. Both are members of The CPA Journal Editorial Board.

The authors wish to thank their friend and former partner, Abraham D. Akresh, CPA, U.S. Government Accountability Office (retired), international audit sampling consultant, and member of the AICPA’s Audit Sampling Guide Subcommittee (Task Force for all editions issued from 1983 to 2008), for his valuable comments.