Pen Test Considerations
The concept of an outside party simulating an adversary’s attack on the organization’s resources to test the effectiveness of its security risk management practices appeals to many observers, and a pen test can be a welcome cybersecurity oversight tool for an audit committee member. For stakeholders without significant cybersecurity skills, however, a pen test can create a false sense of assurance by providing a simplistic pass-fail barometer that can be misleading if not correctly interpreted and reviewed.
Preferably, before engaging a penetration tester, the audit committee should communicate its needs or, at a minimum, review management’s planned scope and engagement letter. Optimally, if the independence of the penetration tester is critical, the audit committee should contract directly with the tester and specify the requisite scope, as it would with an external audit. The audit committee should consider the following factors when engaging a penetration tester or reviewing the results of a pen test presented by management.
Incorrect understanding of a penetration test.
Unlike auditing, where professionals can rely on GAAS, penetration testing has no universally accepted reference. Although recognized organizations (e.g., NIST, OSSTMM, OWASP, PTES) publish testing recommendations, the information security community does not consistently use and apply them. The NIST glossary, a well-recognized and respected cybersecurity reference issued by the U.S. government, provides seven different definitions for the term “penetration testing” (https://csrc.nist.gov/glossary/term/penetration_testing). Specific industries may have standards or regulatory requirements defining a pen test’s minimum scope (e.g., FedRAMP, PCI-DSS). From the audit committee’s perspective, ensuring that all parties share a common understanding of what has been tested, and ascertaining compliance with minimum regulatory or industry expectations, is critical to performing governance oversight activities and meeting regulatory reporting obligations.
Unclear penetration test scope.
A report without a properly defined and communicated scope may give its readers inappropriate assurance and enable business executives to justify allocating funds away from information security initiatives. For unsophisticated executive management or audit committee members, limited-purpose penetration test results can be used to project positive results for the entire organization. For example, a PCI-related penetration test limited to assets that store or process payment cardholder data would not typically include testing all the organization’s assets. Sometimes these tests are limited to finding the first exploitation opportunity; then the test ends, and remediation begins. To avoid misunderstandings, audit committee members should clearly understand the scope of the test results presented and inquire about management’s intention and strategy for testing assets outside the scope of the current assessment.
Rules of engagement not specified.
These rules reflect how the penetration test will be conducted. For example, they set the role the penetration tester will assume: an outsider with minimal information, a low-level insider, or a higher-level insider. The position assumed determines the amount of information and access provided to the tester. The rules would include the names and IP addresses of in-scope targets, as well as the responsibilities for coordinating with, and adhering to the requirements of, third-party service providers (e.g., AWS, Azure). The rules would also typically define what constitutes a successful penetration, and they can contain specific requirements the tester must adhere to. For example, security professionals continuously debate whether the tester should exploit identified vulnerabilities, which might harm the tested system, or merely identify them and assume that a hacker would exploit them. From the penetration tester’s perspective, the rules can also be used to request accommodations, including potentially ignoring controls used to identify a hacker. Just as when confirming the project scope above, audit committee members should appreciate the rules’ impact on the test results, especially any accommodations made and assumptions used; misunderstanding these rules can distort the results the committee relies upon.
Confusing and technical reporting.
Reports describing penetration testing results can be confusing for many audit committee members. Although some federal government agencies have tried to require a standard format, each penetration tester uses its own design to communicate results, especially in the private sector. Sometimes the tester provides a judgmentally determined overall “letter grade” (A–F) or a simple pass/fail for the penetration. Although such grades are helpful, without substantiation of how they were determined, confusion arises as to how many, and which types of, resources are needed to remedy the identified problems. Another example occurs when the penetration tester includes only highly technical output from a vulnerability assessment scan, or worse, performs only vulnerability scanning instead of the agreed-upon pen test.
Audit committee members should insist on reports that present results in a manner that businesspeople can understand. One recommended approach is a four-part report. The first part is an executive summary that indicates what was performed and accomplished, significant recommendations (including severity) from a governance perspective, and an overall conclusion. The second section describes how the test was performed and how vulnerabilities were exploited; appropriate screenshots are included to illustrate the process and explain the technology involved. The third part is an inventory of recommendations for improvement. The last part provides technical details to support all critical findings. Although management typically summarizes the report, by reading its essential parts directly, audit committee members can reach their own conclusions regarding the adequacy of the testing and the resulting findings.
Insufficient remediation monitoring.
One of the most significant penetration testing challenges facing audit committees is monitoring whether the issues raised in the penetration test report are appropriately remediated within the organization’s time and risk tolerances. Unfortunately, some organizations restrict their remediation to the findings of a limited penetration test rather than investigating whether similar exploits exist for other technology assets. As with traditional audit reports, audit committee members should continue to monitor management’s commitment and the execution of remediation strategies.
Lack of security program reconciliation.
Management should present a security program reconciliation to the audit committee after remediation efforts are complete. The reconciliation should summarize the remedial actions taken, the results of any follow-up testing, the gaps in testing relative to the security program, and the intended future testing with anticipated dates. Some testing may include traditional audit tests, especially procedures performed by the internal audit function. The reconciliation will enable committee members to assess the completeness and relevance of testing.
Whether driven by regulation or heightened customer demands, ensuring the confidentiality, integrity, and availability of systems and related information is paramount to an organization achieving its desired objectives. Penetration testing, when used and assessed correctly, can help the audit committee fulfill its governance obligations. Although the subject is inherently technical, the committee can leverage its business acumen and interviewing skills to determine management’s ability to manage cybersecurity risk within the organization’s tolerances.