In Brief
Auditors can use technology to evaluate the risks presented by engagements. This article introduces a Natural Language Processing (NLP) approach for conducting textual analysis of unstructured data in financial reports. Using NLP, the authors generate corporate annual risk profiles from 10-K filings and validate them. The results indicate that auditors are more likely to issue modified audit opinions and to identify internal control material weaknesses for clients with higher self-identified risks.
Assessing the risk presented by clients is of fundamental interest to auditors seeking to mitigate litigation risk. Auditors evaluate client risks on the basis of entity-specific characteristics, such as financial condition, the effectiveness of corporate governance, and financial reporting quality. In addition to these traditional risk indicators, the authors introduce a Natural Language Processing (NLP) approach to extract risk information from unstructured data in corporate annual reports (i.e., 10-K filings).
Based on the Committee of Sponsoring Organizations (COSO) Enterprise Risk Management—Integrated Framework (2004), the authors constructed novel firm-specific risk measures in four categories—financial, strategic, operational, and hazardous risks—from companies’ 10-K documents. The authors validated these measures by examining their relationship with audit opinions and with internal control over financial reporting. Their research indicates that auditors are more likely to issue modified audit opinions and to identify material weaknesses in internal controls when a business’s self-disclosed risks are high.
Textual Analysis for Corporate Disclosures
Textual analysis encompasses a range of techniques that extract useful information by identifying and exploring patterns in the unstructured data of various types of documents (e.g., books, webpages, e-mails, reports, product descriptions). Market participants have a long history of drawing valuable information from corporate textual disclosures such as 10-Ks; it is extremely challenging, however, for people to comprehend such voluminous information on a large scale. In the early stages of financial textual analysis, researchers relied on keyword searches to quantify a variety of 10-K disclosure attributes, such as total word count, sentence count, and keyword matches, providing initial descriptive evidence on the relevance of these disclosures.
Beyond the traditional word-count methodology, a contemporary NLP approach empowers users to understand textual documents along different dimensions, tailored to their needs. The NLP approach combines humans’ linguistic capabilities with the speed and accuracy of a computer. In this article, the authors follow an NLP framework developed by Rong Yang, Yang Yu, Manlu Liu, and Kean Wu in “Corporate Risk Disclosure and Audit Fee: A Text Mining Approach” (European Accounting Review, vol. 27, no. 3, 2018, pp. 583–594) to create a corporate risk profile for each firm in each year. In brief, the process starts with a collection of documents. A customized text mining tool then retrieves each document and preprocesses it by checking its format and character sets. Finally, the documents go through textual analysis until the targeted information is extracted. The process is explained in detail below.
Technical Procedures Using NLP Techniques
The authors used four major functional modules to capture the risk profile of each 10-K. As shown in Exhibit 1, these modules are the preprocessor, the feature extractor, the concept mapper, and the aggregator. The core modules are written in Python, and each is composed of several core tasks. The authors provide a GitHub repository containing Python programs for the main steps in their textual analysis (https://github.com/oracle9193/relevantSentences).
In the preprocessor module, the authors downloaded the raw 10-K filings from the SEC EDGAR website, then gathered, cleaned, and reorganized these filings into a corpus. Through the gathering process, 85,729 10-K filings in HTML format were collected for the years 2004 through 2014. In the cleaning process, the textual content was extracted by removing HTML tags, attachments, and graphs from the original HTML files. Finally, each sentence was identified within the textual content and stored in a database. In this step, the Python libraries NLTK and Beautiful Soup were used to clean the HTML files and identify the individual sentences.
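As a rough illustration of this cleaning step, the following minimal Python sketch strips the markup from a single locally saved filing and splits the remaining text into sentences; the file path is hypothetical, and the authors' actual programs are available in the GitHub repository noted above.

```python
# Minimal sketch of the preprocessing step (illustrative only, not the authors' code).
# Requires: pip install beautifulsoup4 nltk, plus a one-time nltk.download("punkt").
from bs4 import BeautifulSoup
from nltk.tokenize import sent_tokenize

def extract_sentences(html_path):
    """Strip HTML markup from a 10-K filing and return its individual sentences."""
    with open(html_path, encoding="utf-8", errors="ignore") as f:
        soup = BeautifulSoup(f.read(), "html.parser")

    # Drop non-narrative elements before extracting the running text.
    for tag in soup(["script", "style", "table"]):
        tag.decompose()

    text = " ".join(soup.get_text(separator=" ").split())  # collapse whitespace
    return sent_tokenize(text)                              # NLTK sentence segmentation

# Hypothetical usage with a locally saved filing:
# sentences = extract_sentences("filings/example_10k.html")
```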
In the second step, the authors generated a list of core risk-related keywords in the feature extractor module. They started with a core list of keywords from the financial risk literature, such as “claim,” “legal,” “loss,” and “fraud.” The authors then selected 5,000 random 10-K filings and picked out the sentences containing at least one keyword from the core list. Next, high-frequency noun phrases from these sentences were used to extend the core list. Finally, three domain experts compiled the word list and tagged each item into one of four categories (financial, strategic, operational, and hazardous risks), according to the risk management standards issued by the Institute of Risk Management and the aforementioned COSO framework.
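The sketch below illustrates one way this harvesting of candidate phrases might be implemented; the seed keywords shown, the chunking grammar, and the cutoff are assumptions for illustration rather than the authors' actual specification.

```python
# Illustrative sketch of the feature extractor: collect high-frequency noun phrases from
# sentences that contain at least one seed keyword (not the authors' exact procedure).
# Requires one-time nltk.download("punkt") and nltk.download("averaged_perceptron_tagger").
from collections import Counter
import nltk

SEED_KEYWORDS = {"claim", "legal", "loss", "fraud"}  # core list from the risk literature

def candidate_noun_phrases(sentences, top_n=50):
    """Return the most frequent noun phrases found in keyword-bearing sentences."""
    chunker = nltk.RegexpParser("NP: {<JJ>*<NN.*>+}")  # simple adjective + noun chunks
    counts = Counter()

    for sent in sentences:
        tokens = nltk.word_tokenize(sent)
        if not SEED_KEYWORDS & {t.lower() for t in tokens}:
            continue                                    # keep only keyword-bearing sentences
        tree = chunker.parse(nltk.pos_tag(tokens))
        for subtree in tree.subtrees(lambda t: t.label() == "NP"):
            phrase = " ".join(word.lower() for word, _ in subtree.leaves())
            counts[phrase] += 1

    return counts.most_common(top_n)                    # candidates for expert review
```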
The third module, the concept mapper, is the most critical. The authors performed three tasks on all 85,729 10-K documents. First, the skeleton of each sentence is extracted by recognizing all of its noun phrases based on part-of-speech tagging in Python. The sentence is then mapped to one of four risk concepts by calculating the similarity between the extracted sentence skeleton and the risk word list in each of the four categories. Finally, the sentence is labeled as one of the four types of risk if the highest similarity score exceeds a predefined threshold. For example, consider the following sentence: “The valuation of the in-process research and development was determined using the income approach method, which includes an analysis of the markets, cash flows, and risks associated with achieving such cash flows.” To determine the risk type it describes, the authors first calculate the four similarity scores by comparing the sentence skeleton, “The valuation of the in-process research and development was determined using the income approach method,” with the four predefined word lists. The authors then assign the label “Financial Risk” to this sentence based on the highest similarity score.
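The article does not specify the similarity metric or the threshold; the sketch below uses a simple word-overlap (Jaccard-style) score, abbreviated word lists, and an illustrative cutoff, purely to show the shape of the labeling step.

```python
# Illustrative sketch of the concept mapper: assign a sentence skeleton to the risk
# category whose word list it overlaps most, if the best score clears a threshold.
# The word lists, scoring function, and threshold are assumptions for illustration.
RISK_WORD_LISTS = {
    "financial":   {"valuation", "cash", "income", "loss", "interest", "credit"},
    "strategic":   {"competition", "market", "acquisition", "demand", "reputation"},
    "operational": {"supply", "system", "employee", "production", "disruption"},
    "hazardous":   {"hurricane", "flood", "fire", "earthquake", "accident"},
}
THRESHOLD = 0.10  # illustrative cutoff

def label_sentence(skeleton_tokens):
    """Return a risk label for a sentence skeleton, or None if no score clears the threshold."""
    tokens = {t.lower() for t in skeleton_tokens}
    scores = {
        category: len(tokens & words) / len(tokens | words)  # Jaccard-style overlap
        for category, words in RISK_WORD_LISTS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] >= THRESHOLD else None
```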
Lastly, in the aggregator module, the authors aggregated the sentence-level results to generate the risk profile for each 10-K filing. Specifically, the number of risk sentences in each 10-K was counted for each type of risk; each 10-K profile therefore contains a measure for each of the four types of risk. In addition, to incorporate the four risk measures into one overall risk measure for each 10-K, an overall risk variable was generated using factor analysis on the four types of risk. For readers’ reference, the authors have created a GitHub webpage containing the Python programs for the main steps in this textual analysis (https://github.com/oracle9193/Natural_Language_Processing-10K_Fillings).
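A simplified sketch of this aggregation appears below; it assumes the sentence-level labels from the previous step are available per filing and uses scikit-learn's FactorAnalysis as one possible implementation of the factor analysis (the article does not name a specific tool).

```python
# Illustrative sketch of the aggregator: count labeled risk sentences per filing and
# derive a single overall risk factor. Input format and library choice are assumptions.
from collections import Counter
import pandas as pd
from sklearn.decomposition import FactorAnalysis

def build_risk_profiles(labeled_filings):
    """labeled_filings maps a filing ID to the list of sentence-level risk labels."""
    rows = []
    for filing_id, labels in labeled_filings.items():
        counts = Counter(label for label in labels if label is not None)
        rows.append({
            "filing_id": filing_id,
            "financial": counts["financial"],
            "strategic": counts["strategic"],
            "operational": counts["operational"],
            "hazardous": counts["hazardous"],
        })
    profiles = pd.DataFrame(rows).set_index("filing_id")

    # Extract one common factor from the four counts as the overall risk measure.
    fa = FactorAnalysis(n_components=1, random_state=0)
    counts_matrix = profiles[["financial", "strategic", "operational", "hazardous"]]
    profiles["overall_risk"] = fa.fit_transform(counts_matrix).ravel()
    return profiles
```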
Analysis Using Auditor’s Opinions as a Context
In this section, the authors attempt to validate the novel risk profiles generated by NLP technology from the contents of 10-K filings. Audit opinions and internal control over financial reporting are used as the context for the validity tests. Audit opinions reflect auditors’ assessments of whether a client’s financial statements are presented in accordance with GAAP and free from material misstatement. As the output of the assurance service, these opinions help financial statement users in their decision-making process.
Auditors issue a clean audit opinion when they do not have any significant reservations regarding matters contained in the financial statements. When auditors are unable to assess the appropriateness of audit evidence, they lower the threshold for issuing a modified audit opinion, which effectively protects them from potential litigation risk (Steven Kaplan and David Williams, “Do Going Concern Audit Reports Protect Auditors from Litigation? A Simultaneous Equations Approach,” The Accounting Review, vol. 88, no. 1, 2013, pp. 199–232). In addition, auditors are more likely to report an internal control material weakness when the perceived client-related risk is higher than the industry average. Following this logic, as auditees’ self-disclosed risk increases, auditors are more likely to issue modified audit opinions.
The authors used Compustat to obtain audit opinion data and merged it with the company-year risk measures. The final sample consists of 61,135 company-year observations for the analyses of audit opinions on financial statements. For data on internal control over financial reporting, the sample size shrinks to 37,361 observations [due to the elimination of non-accelerated filers, which are exempt from SOX section 404(b)]. Exhibit 2 presents descriptive statistics for the main variables. Consistent with previous studies, 30% of the final sample received modified opinions (including adverse opinions, qualified opinions, unqualified opinions with additional language, and no opinion), and 6% reported deficiencies in their internal control systems. On average, a company disclosed 254 risk-related sentences in its 10-K (strategic, 160; financial, 87; operational, 6; hazardous, 1). Exhibit 3 compares the means of the five risk measures for companies receiving a clean audit opinion versus a modified audit opinion. The last two columns report the mean differences and t-tests for statistical significance. For example, companies receiving a clean opinion have, on average, 65 sentences related to financial risk in their 10-Ks, whereas companies receiving a modified audit opinion report approximately 85 such sentences, roughly 30% more than their counterparts. The results indicate that companies with modified audit opinions disclose all four types of risk more frequently in their 10-Ks than companies with clean audit opinions. The last row shows that overall risk is likewise associated with the likelihood of receiving a modified audit opinion.
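For readers who wish to perform this kind of comparison on their own data, the sketch below shows how such mean differences might be tested, assuming the merged observations sit in a pandas DataFrame with hypothetical column names; it is not the authors' code.

```python
# Illustrative sketch of the mean-comparison tests: compare each risk measure across
# clean-opinion and modified-opinion company-years with two-sample t-tests.
# The DataFrame and its column names ("modified_opinion", the risk measures) are hypothetical.
import pandas as pd
from scipy import stats

def compare_risk_measures(df, group_col="modified_opinion"):
    """Return group means, mean differences, and t-tests for each risk measure."""
    measures = ["financial", "strategic", "operational", "hazardous", "overall_risk"]
    clean = df[df[group_col] == 0]
    modified = df[df[group_col] == 1]

    results = []
    for m in measures:
        t_stat, p_value = stats.ttest_ind(modified[m], clean[m], equal_var=False)
        results.append({
            "measure": m,
            "mean_clean": clean[m].mean(),
            "mean_modified": modified[m].mean(),
            "difference": modified[m].mean() - clean[m].mean(),
            "t_stat": t_stat,
            "p_value": p_value,
        })
    return pd.DataFrame(results)
```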
Exhibit 2
Descriptive Statistics for Main Variables (n=61,135)
Exhibit 3
Comparison of Risk Measures between Companies with Clean Audit Opinion and Modified Opinion
Besides the audit opinion tests, the authors also compared the risk measures between companies with internal control material weaknesses (ICMW) and those without. The Sarbanes-Oxley Act (SOX) of 2002 requires the management of a public company to assess the effectiveness of the company’s internal control over financial reporting. Furthermore, SOX section 404 mandates that external auditors attest to management’s assessment and provide an opinion regarding the company’s internal control effectiveness. Previous studies have examined the determinants of ICMW and found that companies with ICMW have more complex operations, rapid growth, more recent organizational changes, and greater accounting risk (Jeffrey Doyle, Weili Ge, and Sarah McVay, “Determinants of Weaknesses in Internal Control over Financial Reporting,” Journal of Accounting and Economics, vol. 44, no. 1, 2007, pp. 193–223). Therefore, as clients’ self-disclosed risk increases, auditors are more likely to report ICMW.
Exhibit 4 presents a comparison of the means of all five risk measures between companies with ICMW and those without. For example, on average, 87 sentences related to financial risk are disclosed in the 10-Ks of companies without ICMW. In contrast, the average number of financial risk-related sentences increases to 103 for companies with ICMW. A similar pattern holds for the other three risk measures, indicating that companies with ICMW disclose more sentences about strategic, operational, and hazardous risk. The results also show that overall risk is higher for companies with auditor-reported ICMW, supporting the authors’ expectation. Overall, the authors conclude that auditors are more likely to report ICMW for clients with higher financial, strategic, operational, and hazardous risks.
Exhibit 4
Comparison of Risk Measures between Companies with and without Internal Control Material Weaknesses (ICMW)
Practical Implications
This approach has several practical implications. As an important branch of artificial intelligence, NLP enables computers to understand human language on a large scale. NLP has broad application in accounting and auditing because financial reporting involves a large volume of unstructured textual documents. As described above, the authors used a four-module framework to decipher corporate 10-K filings. The evidence in the context of audit opinions offers some support that this methodology can extract valuable information effectively and efficiently for external users of financial statements.
Given the flexible nature of the authors’ four functional modules, this methodology has great potential to tackle various accounting and auditing issues. Accounting professionals work with a range of other financial documents, such as earnings call transcripts, press releases, and social media posts, to which the same approach could be applied. Textual analysis could also be targeted at specific sections of financial documents, such as Item 1A, “Risk Factors”; Item 7, “Management’s Discussion and Analysis (MD&A)”; or Item 1B, “Unresolved Staff Comments.”
Finally, this methodology sheds light on potential future analyses using an NLP approach. For example, an analyst interested in the uncertain tax positions of certain companies could apply the four-module methodology with certain modifications: in module 2, the expert tagging step could be adjusted to construct a specific tax risk-related word list; in module 3, the concept mapper could focus on analyzing income tax risks in the FIN 48 section of 10-K reports.