Gartner’s Strategic Roadmap for Managing Threat Exposure | Bitsight
Key Findings
- Having a place to record and report the potential impact of breaches, based on a value-add assessment of the output of a continuous threat exposure management (CTEM) process, enables tangible risk reduction and adds value to the organization.
- Security risks can be contained by a variety of methods, including simulation, configuration assessment and formal testing, meaning unknown vulnerabilities can be detected and analyzed at different points in the workflow.
- Scheduled remediations should be communicated to the management team promptly; consulting on the adoption of mobilization processes enables a positive feedback loop on the success rate of proposed patches.
Security and risk management leaders, especially CISOs, establishing or enhancing exposure management (EM) programs should:
- Build exposure assessment scopes based on key business priorities and risks, taking into consideration the potential business impact of a compromise rather than primarily focusing on the severity of the threat alone.
- Initiate a project to build cybersecurity validation techniques into EM processes by evaluating tools such as breach and attack simulation, attack path mapping and penetration testing automation products or services.
- Engage with senior leadership to understand how exposure should be reported in a meaningful way by using existing risk assessments as an anchor for these discussions, and by creating consistent categorization for discoveries that are agreed with other departments in the organization.
- Agree on effective routes to resolution and prioritization characteristics before beginning to report newly discovered exposures, by working with leaders of adjacent departments across the business in areas such as IT management, network operations, application development and human resources.
Strategic Planning Assumptions
Through 2028, validating threat exposures via simulation or assessment of deployed security controls will be an accepted alternative to penetration testing requirements in regulatory frameworks.
Through 2026, more than 40% of organizations, including two-thirds of midsize enterprises, will rely on consolidated platforms or managed service providers to run cybersecurity validation assessments.
The report emphasized the importance of a comprehensive internal policy in which decision-makers are held accountable and the management team cooperates on strategic campaigns consistent with the business’s key objectives for managing the threat of exploitation of internal penetration points.
It insisted, “security must ensure that controls are aligned with the organization’s overall strategy and objectives, and provide clear rationale and prioritization for its objectives and activities.”
“Without impact context, the exposures may be addressed in isolation, leading to uncoordinated fixes relegated to individual departments, exacerbating the current problems.”
A CTEM program runs multiple scopes concurrently; scoping is a focus for reporting rather than a limit on the program’s reach (see Figure 2). Any number of scopes can run in parallel under a ‘master scope’, which categorizes threats in business-level language translated from code-debugging jargon, with sub-scopes providing a higher degree of technical detail.
Breaches can originate from a variety of points, specifically:
- Third-party applications and services — such as SaaS, supply chain dependencies and code repositories.
- Authentication — applications, third-party services and adjacent authentication solutions such as authentication keys for API-driven systems.
- Consumer-grade services — social media/brand-impacting communications.
- Leaked data — covering both data stored in deep/dark web forums and self-leaked data via employee actions, password reuse or poor information hygiene.
Risks can be assessed based on external stakeholders’ level of access to data; modern identity management, i.e., one that uses MFA within a dynamically readjusting framework; and operational technology (OT) and Internet of Things (IoT) systems, ensuring that potential penetration via exploitable access pathways is contained and that reputational damage and business disruption are minimized.
An illustrative approach maps known and unknown threats within the business infrastructure by isolating assets that sit outside core security controls wherever they overlap both with business-critical applications and with exploitable vulnerabilities, producing a heat map of high-priority risks.
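This overlap can be sketched as a simple set intersection. The asset names below are hypothetical, purely for illustration; a real inventory would come from the organization’s CMDB or discovery tooling.

```python
# Hypothetical asset inventories; names are illustrative, not from the report.
assets_outside_controls = {"legacy-crm", "shadow-wiki", "test-db"}
business_critical = {"legacy-crm", "payments-api", "test-db"}
exploitable = {"legacy-crm", "shadow-wiki"}

# High-priority: assets outside core controls that are both
# business-critical and carry an exploitable vulnerability.
high_priority = assets_outside_controls & business_critical & exploitable
print(sorted(high_priority))  # ['legacy-crm']
```

In practice each set would be weighted rather than binary, but the intersection is what surfaces the hot spots on the heat map.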
Application scanning takes the form of a penetration test, in which researchers attempt to exploit known vulnerabilities using either authenticated or unauthenticated logins to gain access.
Assets discoverable within the IP address range, or subnet, are often layered, and the task comprises categorizing core available services (those actively promoted by the company) as well as system components whose updates may be corrupted or out of date.
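A minimal sketch of this step, assuming discovery results are already available as a host-to-open-ports mapping (the port-to-service table is a hypothetical example, not from the report):

```python
import ipaddress

# Hypothetical mapping of ports to the company's core, promoted services.
CORE_SERVICES = {80: "http", 443: "https"}

def enumerate_hosts(cidr: str) -> list[str]:
    """List the usable host addresses in a subnet."""
    return [str(h) for h in ipaddress.ip_network(cidr).hosts()]

def categorize(open_ports: dict[str, list[int]]) -> dict[str, list[str]]:
    """Split each host's open ports into core services vs. other exposure."""
    return {
        host: [CORE_SERVICES.get(p, f"uncategorized:{p}") for p in ports]
        for host, ports in open_ports.items()
    }

hosts = enumerate_hosts("192.0.2.0/29")              # 6 usable addresses
report = categorize({"192.0.2.1": [443, 8080]})      # {'192.0.2.1': ['https', 'uncategorized:8080']}
```

Anything tagged `uncategorized` would then be checked against the update/patch inventory rather than the service catalog.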
The report acknowledges that
the scope of such scans is limited only to infrastructure that can be discovered in a closed or targeted business-managed environment
External access to the software or platform is therefore out of scope, as it falls outside the range of discoverable assets needing protection.
While internal benchmarking scoreboards used to identify threat levels are an essential component of threat mapping, the report emphasized that threat actor motivation and the commercial or ‘public interest’ availability of the compromised patch or platform version should also be accounted for. This enables a solution to be negotiated when the exploit is published on common security breach platforms.
The report’s authors stress that while determining the accessibility of discovered issues is necessary to limit exposure to fresh exploits, the impact on the business’s normal operations should also be considered in the context of the cost of disruption.
Attack path mapping builds on risk-based vulnerability management (RBVM), for which an Exploit Prediction Scoring System (EPSS) score provides a benchmark of how likely a given vulnerability is to be exploited, and against which the success of subsequent controls can be measured in retrospect. Whether or not those controls are automated, this ensures dynamic adaptation of security patches within the system’s pre-existing schema for data storage and brokerage, where third-party stakeholders have privileged access.
The default mode of the Common Vulnerability Scoring System (CVSS) enables an attack surface assessment (ASA) that does map impact onto core internal and external stakeholders. However, even with well-designed security configuration management (SecCM), without dynamically readjusting system controls the problem of unauthorized access is contained only with regard to known vulnerabilities, and legacy infrastructure remains open to new exploits yet to be developed and deployed.
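One common way to combine these signals, sketched here with a hypothetical business-impact weight (the blending formula is illustrative, not Gartner’s), is to rank findings by CVSS severity multiplied by EPSS exploitation likelihood:

```python
def priority_score(cvss: float, epss: float, business_impact: float = 1.0) -> float:
    """Blend severity (CVSS, 0-10), exploitation likelihood (EPSS, 0-1),
    and a hypothetical business-impact weight into one ranking score."""
    return round(cvss * epss * business_impact, 2)

findings = [
    ("CVE-A", 9.8, 0.02),  # critical severity, but rarely exploited
    ("CVE-B", 7.5, 0.90),  # lower severity, actively exploited
]
ranked = sorted(findings, key=lambda f: priority_score(f[1], f[2]), reverse=True)
# CVE-B outranks CVE-A: likelihood of exploitation reorders raw severity.
```

The point of the example is the reordering: a purely CVSS-driven queue would patch CVE-A first, while the blended score surfaces the vulnerability attackers are actually using.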
The chief information security officer must develop a forward-looking process of data collection and analysis of the extent of exposure; this is essential to containment and continuous monitoring of risks. Response plans should be prepared in advance, aligned with key performance indicators for the business as a whole, and have a reasonable probability of successful uptake.
To avoid the deployment of remedial measures being lost in translation to strategic decision-makers within the organization, the report emphasized that
reporting and communicating with senior leadership is a key element to the success of any exposure management process; such reporting needs to be nontechnical, actionable and regularly updated.
In creating a ‘single picture of risk’ that is mapped onto vulnerable system components, security researchers must work toward an effective solution benchmarking method that keeps the workload within manageable parameters, that is to say:
“Limit the scope of a target set to ensure its manageability and applicability for the long term, ensuring that the scope is broad enough to highlight a business-linked problem and not an individual system issue.”
The report emphasized that known security issues should be categorized on a cascading scale of potential consequences, with descriptive labels that are informative rather than alarmist, such as “ransomware”. Security researchers can take ownership of high-impact problems, ensuring the threat is actively monitored and that software additions adjust dynamically to both the nature of the threat and the potential impact of “business interruption.”
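A cascading scale of this kind can be expressed as an ordered threshold table. The thresholds and labels below are hypothetical examples of descriptive, consequence-oriented wording, not categories from the report:

```python
# Hypothetical cascading scale: (minimum consequence score, descriptive label).
SCALE = [
    (9.0, "business interruption"),
    (7.0, "service degradation"),
    (4.0, "limited operational impact"),
    (0.0, "informational"),
]

def label(consequence_score: float) -> str:
    """Map a consequence score (0-10) to the first matching descriptive label."""
    for threshold, name in SCALE:
        if consequence_score >= threshold:
            return name
    return "informational"
```

Because the labels describe consequences rather than attack techniques, the same scale reads identically to the board and to the operations team.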
The report concludes that
“Communicating demonstrable risk reduction benefits through a single management platform is more achievable than attempting to deliver identification and resolution of discovered issues in silos. Armed with a place to measure benefits from risk reduction activities, CISOs can surface the greater value of the security operations team and justify why it should remain a key part of the operational fabric of the business.”