A firm of independent auditors has a responsibility to adopt a system of quality control in conducting an audit practice. Thus, a firm should establish quality control policies and procedures to provide it with reasonable assurance that its personnel comply with the standards of the PCAOB in its audit engagements. (AS 1110, “Relationship of Auditing Standards to Quality Control Standards”)
The AICPA’s Statement on Quality Control Standards (SQCS) 8, A Firm’s System of Quality Control, defines its objective as:
to establish and maintain a system of quality control to provide it with reasonable assurance that (a) the firm and its personnel comply with professional standards and applicable legal and regulatory requirements and (b) reports issued by the firm are appropriate in the circumstances.
The PCAOB requirement and the AICPA objective are complementary and intended to achieve public confidence in the audit process. Over time, the QC process has evolved with policies and procedures suited to the audit practice of the day. For example, nearly a decade ago, "QC-in-the-field" proved an efficiency boon to audits because concurring partners and QC partners gained a better sense of the issues that arose during fieldwork. As a result, the QC process became less of a "waterfall" and more of an agile process, in which succinct and rapid iterations of quality control replace a single review at the conclusion of fieldwork. (For a more comprehensive discussion of agile methodology, see https://digital.ai/resources/library.) Note that in February 2019, the International Auditing and Assurance Standards Board (IAASB) issued an exposure draft referring to quality control as "quality management"; for the purposes of this article, the terms are synonymous.
There is also the question of pervasiveness: how common is RPA in the audit process? Public accounting firms using RPA in audit engagements are mum about it. The Big Four and large regional firms do not publish results, nor do they share experiences. There is no data to quantify the extent of RPA in audit engagements, beyond the observation that RPA is being used in some of them. One can only hope it is being used responsibly, primarily in low-risk engagements. A discussion of business processes and RPA is beyond the scope of this article, as the risks and responsibilities in a business setting differ from those in an audit.
The challenge in the QC process, ascertaining audit quality, remains relatively consistent. QC partners see finished work product and must probe into the detail without re-performing the work. As RPA enters the audit, the routine procedures it supplants are rarely reperformed. This rarity stems from two factors: a purported understanding of the RPA process by the QC partners, and engagement budget constraints. Some probes are superficial, such as checking the identity and timeline of sign-offs on a work paper. Some are risk-based, analyzing risk-prone areas in greater detail than others. Still others simply seek overall compliance with applicable GAAS.
This article discusses the underpinnings of robotic process automation, followed by a description of a method for testing the quality control of audits that may also be useful to peer reviewers. Quality control professionals should be skeptical about the adaptation of this cutting-edge technology to traditional audit procedures. Auditors should also be interested in how to properly document and explain the basis of their audit procedures, both as a training ground for young auditors and as a client-facing demonstration of their competitive edge.
RPA software often generates a significant audit trail in the form of logs of its performance, inputs, processing, and results. In theory, this data set could be used for quality control purposes. From a CPA firm's perspective, however, this pervasive audit log is mostly a risk, not a benefit: quality control partners are unlikely to retrace the voluminous audit trail of RPA audit software in detail. Their re-performance focus is most often on complex and unique calculations, such as valuations or covenant compliance. As discussed below, a "condition precedent" methodology can address any gap, if present, in the quality control process. There may be some analysis of the audit log, but this too is bounded by the complexity of the audit trail and by budget constraints. And if there is an audit failure, the auditors will be loath to hand over the audit trail, because it would expose what went wrong in the RPA.
Robotic Process Automation
Robotic process automation (RPA) is a class of business processes that uses software, referred to as a "robot" (or colloquially a "bot"), employing variants of artificial intelligence to perform tasks that are either repetitive or learnable by a software algorithm. Historically, RPA started as a form of "data scraping," which obtained data from a fixed-format output (such as screens or paper printouts). Robotic software evolved to become both an enhanced graphic visualization tool and an underlying data gathering technique (see Exhibit 1). With enhanced data gathering came software whose logic primarily addressed repetitive activity such as tracing, matching, and vouching. With an added layer of software-learned intelligence, analysis of exceptions, anomalies, and implicit rules followed. (An example of applied matching can be found in J. Ciric, C. Seche, "Efficient canonical form for Boolean matching of complex functions in large libraries," IEEE Xplore, June 2003.)
Today, RPA has application in banking, mortgage application processing, sales, generic big-data applications such as optical character recognition (OCR) and data extraction, and, increasingly, audits (Michael Cohen, Andrea Rozario, and Chanyuan Zhang, "Exploring the Use of Robotic Process Automation (RPA) in Substantive Audit Procedures," The CPA Journal, July 2019). A 2017 PricewaterhouseCoopers article estimated that 45% of worldwide workforce tasks can be automated (https://pwc.to/3itWwnG), and in November 2019, Deloitte disclosed that it utilizes RPA extensively in its financial audit process ("Deloitte leverages RPA for audit," Accounting Today, Nov. 27, 2019, https://bit.ly/3BnKWTD). The rapid adoption from 2017 to 2019 indicates not only a near-perfect match between the technology and the need, but also that the return-on-investment argument has been substantially demonstrated.
The application of RPA to an audit is—like many smart software applications—a layered framework that begins with data gathering (a simple, repetitive task), and culminates with visualized deliverables (a changing, intelligent task):
- Data gathering underlies the RPA process. It is accomplished by various techniques, such as OCR, extracting data from human-readable reports, importing data from data sources, or connecting to an existing database using open database connectivity (ODBC) methods. Although all of these functions require human set-up and initial analysis, once the data gathering is configured, the robot software's learning, whether generic or engagement-specific, is most tellingly repetitive.
- Rule-based activity is the matching, vouching, and tracing that can occur once data has been gathered. For example, once a set of invoices and delivery reports is obtained, either by OCR or by some form of extraction from ledgers, rule-based software can create a matching set between the two, eliminating the need for a human to do so. The software can also be programmed to attempt to match multiple deliveries to a single invoice (or vice versa), either by obtaining details such as delivery dates from the underlying paper documents, or by assuming a certain date proximity. Of course, for entities that utilize extensible markup language (XML) or its business derivative, extensible business reporting language (XBRL), OCR is less relevant, and direct data extraction will result in fewer errors in the underlying process and in the rule-based application that follows.
- Implicit rule analysis is more abstract and less specific than the preceding explicit rule-based phase. (This is sometimes called "implicit deep learning"; a discussion of the underlying science is outside the scope of this article.) This mode of activity is most applicable to large sets of data elements of varying velocity (change over time) and volume (size of data elements), but it can also be applied to structured data sets, such as a sales journal or a banking activity ledger. In applying implicit rule analysis, the software attempts to find patterns in the data; larger data sets make for more discernible patterns. For example, a banking data set may yield an "implicit rule" that "every Wednesday there is a large deposit followed by a series of small withdrawals of sequentially close check numbers." This, of course, is a predictable rule that describes the funding, and then cashing, of payroll checks. Another less straightforward, and possibly more telling, example is an implicit rule for inventory that can be described as "Truck X always gets filled up with raw materials and returns empty." Although on its face this may not be telling, fraud examiners will note that an asset misappropriation scheme may be under way. Of course, there could be an innocent explanation as well: if the hypothetical "Truck X" can be associated with a specific customer, the pattern may simply reflect that customer's consumption of raw materials, and thus a source of revenue or increased profitability.
- Sample selection for human/auditor action is one of the top-level actionable phases that RPA can deliver: either by creating an explicit rule, or through exceptions to vouching, tracing, or matching, a set of risk-ranked exceptions can be delivered for human interaction. Vouching, tracing, and matching are simple repetitive RPA tasks that result either in a perfect result or in imperfect results with exceptions. These exceptions can be acted upon by a human in a manual business process or as otherwise applicable, for example by an auditor. This phase suits audits because implicit rules can double as audit procedures: the software ascertains the expected behavior of the data set and then seeks exceptions to it. Depending on their nature, exceptions can represent either a breakdown in internal controls or a basis for the substantive testing of specific transactions. Take, for example, the implicit rule that "adjusting journal entries are time-stamped within 20 days after the period they purport to adjust." Entries within that window make sense, because the month-end closing generally happens about three weeks after the end of the month. However, if adjusting journal entries appear significantly later than 20 days, the auditor may be skeptical about these exceptions to the implicit rule and select them for additional analysis by the audit team.
- Visual delivery for decision makers originated in data visualizations built on static, unmodified historical data, or on data that changes rapidly while being analyzed. Overlaid onto RPA, the visualized deliverable can provide, for example, risk-based visualization to assist in the audit process: colors, shapes, and volume can help the audit team ascertain areas of increased risk of misstatement, as well as areas for which such risk has been addressed by audit procedures. The RPA algorithm can learn the issues in, for example, auditing fixed assets, which differ from those in auditing revenues. Once the algorithm can tell which implied rules and exceptions are more susceptible to risk, it can provide a smarter, more focused deliverable for the auditor to review. For example, revenue overstatement and early recognition are a constant risk in most financial audits. If 10 implicit rules are derived and form a consistent class of rules, the RPA may be able to rank them based on past audit responses. If an implicit rule regarding exceptions in the late booking of revenues by adjusting journal entries is consistently attended to by the human auditor, the RPA will learn to maximize its effort in this area, as compared to, say, rounding differences arising from large invoice amounts, about which auditors are generally less skeptical.
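The implicit-rule and sample-selection layers described above can be sketched together. The following is a minimal illustration, not any vendor's RPA product: it "learns" a typical posting lag for adjusting journal entries from the data itself, then risk-ranks entries that break the learned expectation. All record layouts, field names, and the tolerance value are hypothetical.

```python
from datetime import date
from statistics import median

def learn_posting_lag_rule(entries, period_end, tolerance_days=10):
    # Implicit-rule layer: derive the typical lag, in days, between the
    # period end and the posting of adjusting journal entries, and allow
    # a tolerance above that typical lag.
    lags = [(e["posted"] - period_end).days for e in entries]
    return median(lags) + tolerance_days

def select_exceptions(entries, period_end, max_lag):
    # Sample-selection layer: flag entries posted beyond the learned lag,
    # risk-ranked with the latest (most suspicious) entries first.
    late = [e for e in entries if (e["posted"] - period_end).days > max_lag]
    return sorted(late, key=lambda e: e["posted"], reverse=True)
```

In this sketch, an entry posted months after period end would surface at the top of the list for the audit team's attention, while entries posted within the customary closing window would pass silently.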
A Brief Case Study of Tracing Accounts Payable
To perform tracing for accounts payable, some auditors trace subsequent payments to establish the reasonableness of the accounts payable balance. In an elevated-risk audit, the existence of documentation such as receipts of goods or services may be traced as well. To accomplish this document-to-schedule tracing task in an RPA paradigm, the process will include scanning the document, extracting relevant data (e.g., terms of payment, delivery date), matching the data with the schedule of accounts payable, and noting any exceptions to the matching. Generally, setting up an RPA for this audit procedure makes sense if the volume of evidence is high. It is during this type of audit engagement that the quality control of the RPA process is relevant.
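A minimal sketch of the matching step in this procedure follows. The record layouts and field names are hypothetical, and a real engagement would also have to compare dates, partial payments, and payment terms:

```python
def trace_subsequent_payments(ap_schedule, payments):
    # For each recorded payable, look for a subsequent payment to the same
    # vendor for the same amount; payables with no matching payment are
    # noted as exceptions for the auditor's follow-up.
    remaining = list(payments)
    exceptions = []
    for item in ap_schedule:
        paid = next((p for p in remaining
                     if p["vendor"] == item["vendor"]
                     and p["amount"] == item["amount"]), None)
        if paid is not None:
            remaining.remove(paid)
        else:
            exceptions.append(item["invoice"])
    return exceptions
```

The exceptions list is the RPA deliverable; whether each exception represents an unpaid (and therefore possibly overstated or disputed) payable is a judgment left to the human auditor.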
Other repetitive, high-volume RPA processes could be applied to similarly common audit procedures, as described in "Exploring the Use of Robotic Process Automation (RPA) in Substantive Audit Procedures" (Michael Cohen, Andrea M. Rozario, and Chanyuan (Abigail) Zhang, The CPA Journal, July 2019, https://bit.ly/37jSgCe).
Applying Quality Control to RPA
The policies of a firm, or the application of peer review, require a risk-based approach to every audit. Not specified as a standard, but discussed among quality control teams and peer reviewers, is the concept that each audit file should "stand on its own." To achieve this, along with the objectives of the financial audit's quality control, the person reviewing the audit procedures faces a challenge: how to establish that the financial audit addressed its objectives when the tools used to perform those procedures are regularly changing, because the RPA is a flexible algorithm. The robot software can perform repetitive tasks, such as obtaining data, which can then be analyzed; but reviewing a higher-level analysis, such as implicit rules or exception handling, can be difficult.
To complete this type of review, quality control professionals could attempt to re-perform or recreate the conditions under which, for example, an exception in tracing occurred. Although this is a reasonable and traditional approach, it may not always be possible, and, more to the point, it is highly inefficient. Instead, quality control professionals could augment their re-performance and re-tracing compliance effort with testing under a "condition precedent" method.
The condition precedent approach is essentially a truth table that allows for the testing of a large data set and the resulting exceptions. It does not begin with the identified exception and trace it back to the logic that identified it (and to the underlying business record or document). Rather, it identifies the conditions under which there could be an exception, finds such exceptions in the data set, and traces them forward to see whether they were picked up by the software robot and ultimately delivered to the auditors (see Exhibit 2).
Traditional tracing starts with the exception and “rationalizes” it by tracing it through the RPA deliverable to understand how it was created. Although this provides evidence that exceptions are correctly identified, it does not determine whether there were other unidentified exceptions.
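This forward-tracing idea can be sketched briefly. In the following illustration (a simplification, with hypothetical record layouts), the quality control professional supplies a set of conditions known to cause exceptions; the routine finds every record in the full data set that meets each condition and checks whether the RPA's exception report actually contains it:

```python
def condition_precedent_test(records, conditions, rpa_exception_report):
    # For each condition known to cause an exception, find the records in
    # the full data set that meet it, then trace them *forward* to confirm
    # that the RPA actually reported them. Records the robot should have
    # flagged but did not are quality control findings.
    findings = {}
    for name, predicate in conditions.items():
        expected = {r["id"] for r in records if predicate(r)}
        missed = expected - set(rpa_exception_report)
        if missed:
            findings[name] = sorted(missed)
    return findings
```

An empty result supports the conclusion that the robot's exception logic is working as designed for the tested conditions; a non-empty result identifies precisely which records the robot missed.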
The condition precedent approach first identifies the type of condition that would have caused an exception, then traces these specific exceptions from the underlying data to the report of unmatched exceptions, and finally to the auditor's action. The following is a hypothetical weight table for the RPA to act on when exceptions or matches are noted; it gives each exception a score, and each match a reversing score:
Generically, the condition precedent approach looks at conditions and ranks them according to a learned pattern of acceptance by the auditors. Using the example above, the conditions remain the same, but the scoring becomes a function of past auditor selections and responses:
As illustrated, f(condition) returns the result of the function of all "past acceptance" of the implicit rule as a weighted score (on a scale of –1.0 to 1.0). Similarly, g(condition) returns the complement: the result of all "past acceptance" of exceptions to an implicit rule as a weighted score (on the reversed scale of 1.0 to –1.0).
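One simple way such scores could be computed, assuming "past acceptance" is tracked as a boolean history per condition (a simplification of the hypothetical weight table above):

```python
def f(past_acceptances):
    # Weighted "past acceptance" score for an implicit rule, scaled to
    # [-1.0, 1.0]: +1.0 if auditors always accepted the rule's output,
    # -1.0 if they never did. past_acceptances is a list of booleans.
    if not past_acceptances:
        return 0.0
    rate = sum(past_acceptances) / len(past_acceptances)
    return 2.0 * rate - 1.0

def g(past_acceptances):
    # Complement of f on the reversed scale, scoring exceptions to a rule.
    return -f(past_acceptances)
```

With a history of four acceptances and no rejections, f returns 1.0; with an even split, it returns 0.0; g mirrors these values on the reversed scale.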
By adding up the total "score" of the match, the RPA delivers a risk-based exception to the auditor. A quality control professional can identify transactions in the universe of "all records" and proceed by determining whether the condition under which the RPA ought to identify the exception is working as intended (see Exhibit 3).
A New Paradigm
The RPA paradigm is being incorporated into business generally and the audit process specifically. Controlling the quality of the audit process through traditional quality control procedures may not be sufficient for an audit firm to comply with professional quality control standards (both AICPA and PCAOB). Addressing this requires a revision to the supervision and quality control process, which in turn requires a deep understanding of RPA methodologies, as well as of how to address the inherent risks they entail. By applying RPA, an audit firm can achieve efficiencies and effectiveness that surpass the constraints of the traditional audit. Understanding how to do this is in the best interests of the profession.