Tag: security

  • Palo Alto Networks Unit 42 Incident Response Report

    The group stated there were indications that “threat actors are finding leak site extortion less effective in compelling payments… threat actors are piling on additional tactics to ensure they get their payments.”

    In 2024, 86% of incidents to which Unit 42 responded involved losses damaging to reputation or business processes, with attackers going beyond encryption and data theft: locking users out of collectively managed files, deleting VMware virtual infrastructure, and corrupting data entries through tampering or deletion.

    A popular tactic was to target “deep partner networks”, forcing costly, time-consuming containment operations: once the patch had been applied, each partner connection had to be re-authenticated.

    Clients operating in industries such as healthcare, hospitality, manufacturing and critical infrastructure have had to “grapple with extended downtime, strain on partner and customer relationships and bottom-line impacts”. The median extortion demand increased nearly 80%, to $1.25mn in 2024 from $695,000 in 2023.

    However, in cases where a payment was negotiated with the hackers, Palo Alto found that the median ransom payment rose just $30,000, to $267,500 in 2024, with settlements representing a decline of more than 50% from the original demand.

    The median initial demand in 2024 was 2% of an organisation’s perceived annual revenue, with over half of ransom demands falling between 0.5% and 5% of the victim’s perceived revenue, although outliers existed where more than half of annual turnover was demanded.
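
    As a quick back-of-the-envelope illustration of those ratios (the revenue figure below is hypothetical, not from the report):

    ```python
    # Hypothetical worked example of the reported demand ratios.
    revenue = 100_000_000  # assumed: a $100mn organisation

    median_demand = 0.02 * revenue   # 2% median   -> $2,000,000
    typical_low = 0.005 * revenue    # 0.5% floor  -> $500,000
    typical_high = 0.05 * revenue    # 5% ceiling  -> $5,000,000
    outlier = 0.5 * revenue          # >50% outlier -> $50,000,000+

    print(f"median: ${median_demand:,.0f}; "
          f"typical: ${typical_low:,.0f}-${typical_high:,.0f}; "
          f"outlier: >${outlier:,.0f}")
    ```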

    In terms of the nature of attacks, just over one-third of incidents involved cloud-based data, with dangling logins left stranded as virtual infrastructure (SaaS) was exploited via connection re-routing. Lack of multi-factor authentication featured in just a quarter of reported attacks, versus a third in 2023.

    On numerous occasions, Unit 42 reported threat actors as having used “leaked API/access keys for initial access. This often gives threat actors leverage for further compromise….

    In 45% of cases when we observed exfiltration, attackers sent the data to cloud storage… a technique that can mask the attacker’s activity within legitimate organisational traffic.”
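
    A minimal sketch of how a defender might surface that pattern, assuming flow records with a destination domain and byte count are already collected (the field names, domain list and threshold below are illustrative assumptions, not from the report):

    ```python
    # Flag unusually large outbound transfers to cloud-storage domains.
    # Field names, domain list and threshold are illustrative assumptions.
    CLOUD_STORAGE_DOMAINS = {"s3.amazonaws.com", "storage.googleapis.com",
                             "blob.core.windows.net"}
    BYTES_THRESHOLD = 500 * 1024 * 1024  # 500 MB per source/destination pair

    def suspicious_uploads(flow_records):
        """flow_records: iterable of dicts like
        {'src': '10.0.0.5', 'dst_domain': 'x.s3.amazonaws.com', 'bytes_out': 1024}"""
        totals = {}
        for rec in flow_records:
            if any(rec["dst_domain"].endswith(d) for d in CLOUD_STORAGE_DOMAINS):
                key = (rec["src"], rec["dst_domain"])
                totals[key] = totals.get(key, 0) + rec["bytes_out"]
        return {k: v for k, v in totals.items() if v > BYTES_THRESHOLD}
    ```

    Summing per source/destination pair rather than per flow is what lets a slow, steady exfiltration eventually cross the threshold even when each individual transfer blends into legitimate traffic.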

    Inactive personal accounts can be leveraged to launch internal attacks on an organisation’s software configurations (T1484 – Domain or Tenant Policy Modification); harvesting of privileged account logins can be masked as the attacker abuses admin-level access; or attackers can cloak their plugin’s activity by hijacking cloud resources, taking snapshots of storage parameters to identify the data the organisation considers valuable.

    Palo Alto said that although attackers have the capacity to disable or modify security tools, system firewalls and Windows Event logging, even exploiting a bottlenecked workflow pipeline for privilege escalation, it is worth noting that in 75% of incidents investigated, “critical evidence of the initial intrusion was present in the logs. Yet, due to complex, disjointed systems, that information wasn’t readily accessible or effectively operationalised.”

    The group suggests applying a “zero-trust” policy which is able to pivot quickly around a breach to contain it, and prioritising the security of valuable data by accurately monitoring access levels and data flows, to “stop unauthorised transfers, shielding your organisation from IP theft, compliance violations and financial repercussions.”

    An emerging threat is the proliferation of AI-assisted attacks, against which it recommends the following precautions:

    • Deploy AI-driven detection to spot malicious patterns at machine speed, correlating data from multiple sources (a minimal sketch follows this list).
    • Train staff to recognise AI-generated phishing, deepfakes and social engineering attempts.
    • Incorporate adversarial simulation exercises in tabletop exercises to prepare for rapid, large-scale attacks.
    • Develop automated workflows so your SOC can contain threats before they pivot or exfiltrate data.
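
    As an illustration of the first recommendation, the sketch below applies an unsupervised anomaly detector (scikit-learn’s IsolationForest) to features correlated from multiple log sources; the feature set, numbers and contamination rate are made-up assumptions, not the report’s prescription:

    ```python
    # Minimal anomaly-detection sketch over correlated log features.
    # Feature choice, values and contamination rate are illustrative assumptions.
    import numpy as np
    from sklearn.ensemble import IsolationForest

    # One row per account/hour: login count, bytes uploaded, distinct hosts touched.
    features = np.array([
        [4, 1.2e6, 2],
        [5, 0.9e6, 3],
        [3, 1.1e6, 2],
        [60, 4.8e9, 41],   # a burst that should stand out
    ])

    model = IsolationForest(contamination=0.1, random_state=0).fit(features)
    flags = model.predict(features)  # -1 marks outliers
    print([i for i, f in enumerate(flags) if f == -1])  # -> [3]
    ```
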
  • Palo Alto Networks SecOps white paper – executive summary

    Utilising GenAI and machine learning helps with operational deployment at scale; where this was previously among the top five KPIs, SecOps teams now report “more efficient threat detection and response” in key areas:

    • Extended detection and response (XDR) 
    • Security information and event management (SIEM) 
    • GenAI engineering added to these platforms, improving operational efficiency.

    To gain further insights into these mega-trends and other developments in the security operations space, TechTarget’s Enterprise Strategy Group surveyed 366 IT and cybersecurity professionals at large midmarket and enterprise organizations in North America (US and Canada) involved with security operations technology and processes.

    The top 6 SecOps challenges were: 

    1. Monitoring security across a growing and changing attack surface (42%)
    2. Managing too many disconnected point tools for security analytics and operations, making it difficult to piece together a holistic strategy and investigate complex threats (33%).

    However, more than half (55%) of organizations report that consolidation efforts are streamlining the management and operations of the many security tools and processes in use.

    3. Operationalising cyberthreat intelligence (33%)
    4. Spending too much time on high-priority or emergency issues and not enough time on strategy and process improvement (32%)
    5. Detecting and/or responding to security incidents in a timely manner (31%)
    6. Gaining the appropriate level of security with cloud-based workloads, applications, and SaaS (31%)

    Areas for improvement include detecting or hunting for unknown threats (32%) and being able to visualise the threat landscape well enough to target a response to changes that bad actors embed in integrated systems (36%).

    Another core performance indicator was “keeping up with” a changing infrastructural service offering (27%), along with ensuring a proportionate and targeted response based on threat-priority analysis (27%). This was seen as an essential precursor to complying with regulatory or corporate governance requirements (26%) on data brokerage and disclosure of known systemic threats. The timing of the response was also deemed important, with 25% stating it could be improved.

    Maintaining a database of known threats is de rigueur for the majority of participants: 77% say managing a growing security data set is not something they struggle with. Engineering automation was likewise an area only 18% of respondents would label as needing improvement, while 24% were concerned about the efficacy of stress-testing patches and system updates deployed in the cloud in a reactive, SaaS-managed offering.

    An estimated 80% of respondents were happy with their ability to triage threats before escalating them. 

    Know your toolset 

    At the moment, around 91% of organisations report using a minimum of 10 SecOps tools, though 30% have recently consolidated their toolset to ensure systemic integration of existing and pipeline data protection solutions.

    Nearly nine in ten respondents already using an XDR solution (64% of the sample) expect it to supplement rather than replace SIEM and other SecOps tools; a further 21% of the sample reported XDR deployments still in development.

    Drawbacks of SIEM solutions were cited as: exorbitant software licensing costs as the threat catalogue expands and requires constant updating (32%); the expertise required to perform more advanced analytics than the out-of-the-box offering (32%); the context linking threat intelligence to business processes often being overlooked (23%); and a process that hinges on detection rule creation in dynamic response to events (25%), rules which must be constantly redefined as the threat evolves.
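
    To illustrate the rule-maintenance burden: a detection rule is essentially a predicate over an event stream whose thresholds must be re-tuned as attacker behaviour shifts. A toy sketch (the event shape, threshold and window are assumptions, not any vendor’s actual rule syntax):

    ```python
    # Toy SIEM-style detection rule: N failed logins from one source in a window.
    # Event shape, threshold and window are illustrative assumptions.
    from collections import defaultdict, deque

    FAILED_LOGIN_THRESHOLD = 5   # re-tuned as attacker behaviour changes
    WINDOW_SECONDS = 60

    recent = defaultdict(deque)  # src -> deque of event timestamps

    def check_event(event):
        """event: dict like {'type': 'failed_login', 'src': '10.0.0.5', 'ts': 1700000000}"""
        if event["type"] != "failed_login":
            return None
        q = recent[event["src"]]
        q.append(event["ts"])
        while q and event["ts"] - q[0] > WINDOW_SECONDS:
            q.popleft()
        if len(q) >= FAILED_LOGIN_THRESHOLD:
            return f"ALERT: {len(q)} failed logins from {event['src']} in {WINDOW_SECONDS}s"
        return None
    ```
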

    Continuous threat monitoring and management were seen as a key component of gaining appropriate levels of security oversight. Securing cloud-based workloads, applications and SaaS moved up in terms of the number of organizations prioritizing it as an issue, reflecting continuing growth and change in cloud infrastructure and applications.

    Key drivers of these consolidation campaigns were cited as: cost optimisation (39%); reducing tools-management overhead by simplifying and streamlining the toolset (35%); and the desire for more advanced threat detection capability (34%).

    The context of a threat, say respondents, can be lost in the weight of the response, with the security operations stack generating an “unmanageable” load of alerts (33%). In parallel was the desire to “reduce overhead associated with point tools integration, development and maintenance” (32%), so that once threats are ranked by their potential damage to the system, permanent threat-management plug-ins can be worked in that are reactive, cost-effective, proportional to the degree of the threat and dynamically re-adjustable.

    In terms of data governance in repositories, 

    • 43% are in centralised silos
    • 47% are “more centralized, but with some distributed or federated data”
    • just 7% use distributed ledger technology
    • and 3% hold the majority of data distributed or federated, but with some centralised data.

    In relation to XDR response tools, the survey found that 39% of respondents felt current tools were not appropriately integrated, meaning threat detection was “more cumbersome” than it should have been, while 35% noted specific “gaps” in cloud detection and response.

  • Gartner’s Strategic Roadmap for Managing Threat Exposure | Bitsight


    Key Findings 

    • Recording and reporting the potential impact of breaches, based on a value-add assessment of the output of a continuous threat exposure management (CTEM) process, enables tangible risk reduction that adds value to the organisation.
    • Security risks can be contained by a variety of methods, comprising simulation, configuration assessment and formal testing, meaning unknown vulnerabilities can be detected and analysed at different points in the workflow.
    • Timetabled solutions should be communicated to the management team promptly; consulting on the adoption of mobilisation processes creates a positive feedback loop on proposed patches’ success rate.

    Security and risk management leaders, especially CISOs, establishing or enhancing EM programs should: 

    • Build exposure assessment scopes based on key business priorities and risks, taking into consideration the potential business impact of a compromise rather than primarily focusing on the severity of the threat alone. 
    • Initiate a project to build cybersecurity validation techniques into EM processes by evaluating tools such as breach and attack simulation, attack path mapping and penetration testing automation products or services. 
    • Engage with senior leadership to understand how exposure should be reported in a meaningful way by using existing risk assessments as an anchor for these discussions, and by creating consistent categorization for discoveries that are agreed with other departments in the organization. 
    • Agree effective routes to resolution and prioritization characteristics before beginning to report new discovered exposures by working with leaders of adjacent departments across the business in areas such as IT management, network operations, application development and human resources.

    Strategic Planning Assumptions 

    Through 2028, validation of threat exposures through simulation or assessment of deployed security controls will be an accepted alternative to penetration testing requirements in regulatory frameworks.

    Through 2026, more than 40% of organizations, including two-thirds of midsize enterprises, will rely on consolidated platforms or managed service providers to run cybersecurity validation assessments.

    The report emphasized the importance of a comprehensive internal policy in which decision makers are held accountable and the management team co-operates with strategic campaigns consistent with the business’s key objectives for managing the threat of professional attackers exploiting internal penetration points.

    It insisted, “security must ensure that controls are aligned with the organization’s overall strategy and objectives, and provide clear rationale and prioritization for its objectives and activities.”

    “Without impact context, the exposures may be addressed in isolation, leading to uncoordinated fixes relegated to individual departments, exacerbating the current problems.”

    A CTEM program runs multiple scopes concurrently; scoping is a focus for reporting rather than a limit on the program’s reach (see Figure 2). Any number of scopes can run under the ‘master scope’, which categorises threats in a translation of code-debugging jargon, with sub-scopes carrying a higher degree of technical explanation.
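
    One way to picture the master scope/sub-scope arrangement is as a simple hierarchy in which each sub-scope keeps its technical detail while the master scope rolls findings up into business-facing categories. A minimal sketch with hypothetical scope names and findings:

    ```python
    # Hypothetical master scope rolling up technically detailed sub-scopes
    # into business-facing categories; names and findings are invented.
    sub_scopes = {
        "external-web": [
            {"finding": "TLS 1.0 enabled on legacy load balancer",
             "category": "customer data exposure"},
            {"finding": "Outdated CMS plugin with known RCE",
             "category": "service disruption"},
        ],
        "saas-identity": [
            {"finding": "12 dormant admin accounts without MFA",
             "category": "account takeover"},
        ],
    }

    # The master scope reports counts per business category, not the jargon.
    master_scope = {}
    for findings in sub_scopes.values():
        for f in findings:
            master_scope[f["category"]] = master_scope.get(f["category"], 0) + 1

    print(master_scope)
    # {'customer data exposure': 1, 'service disruption': 1, 'account takeover': 1}
    ```
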

    Breaches can occur from a variety of points, specifically:

    • Third-party applications and services — such as SaaS, supply chain dependencies and code repositories. 
    • Authentication — applications, third-party services and adjacent authentication solutions, such as authentication keys for API-driven systems.
    • Consumer-grade services — social media/brand-impacting communications. 
    • Leaked data — covering both data stored in deep/dark web forums and self-leaked data via employee actions, password reuse or poor information hygiene. 

    Risks can be assessed based on external stakeholders’ access level to data; modern identity management, i.e. one that uses MFA in a dynamically readjusting framework; and operational technology (OT) and Internet of Things (IoT) systems; ensuring that potential penetration via exploitable access pathways is contained and that reputational damage as well as business disruption is minimised.

    An illustrative example of how to map known and unknown threats co-locates them within the business infrastructure: assets sitting outside core security controls are flagged where they overlap both with assets hosting business-critical apps and with assets carrying exploitable vulnerabilities, providing a heat-map of high-priority risks.
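
    Read as set logic, the heat-map’s hottest cells are the intersection of three asset groups. A minimal sketch with hypothetical asset names:

    ```python
    # Intersect asset groups to surface high-priority risks.
    # Asset names are hypothetical.
    outside_core_controls = {"legacy-ftp", "dev-vm-07", "branch-nas"}
    business_critical = {"erp-db", "branch-nas", "payments-api", "dev-vm-07"}
    exploitable = {"legacy-ftp", "dev-vm-07", "branch-nas", "print-srv"}

    high_priority = outside_core_controls & business_critical & exploitable
    print(high_priority)  # {'dev-vm-07', 'branch-nas'} -> hottest heat-map cells
    ```
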

    Application scanning is performed in the form of test penetration, with researchers attempting to exploit known vulnerabilities using either authenticated or unauthenticated logins to gain access.

    Assets which are discoverable within the IP address range, or subnet, are often layered, and the task comprises categorising core available services – those actively promoted by the company – as well as system updates which may be corrupted or out of date.
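
    A minimal sketch of such discovery, using a plain TCP connect sweep over a placeholder subnet (the subnet and port list are illustrative; run only against networks you are authorised to test):

    ```python
    # Minimal TCP connect sweep over a subnet to inventory reachable services.
    import ipaddress
    import socket

    SUBNET = "192.0.2.0/28"       # TEST-NET-1, a placeholder range
    PORTS = [22, 80, 443, 3389]   # a few common services

    def discover(subnet, ports, timeout=0.5):
        found = {}
        for host in ipaddress.ip_network(subnet).hosts():
            open_ports = []
            for port in ports:
                with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
                    s.settimeout(timeout)
                    if s.connect_ex((str(host), port)) == 0:
                        open_ports.append(port)
            if open_ports:
                found[str(host)] = open_ports
        return found

    print(discover(SUBNET, PORTS))
    ```
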

    The report acknowledges that the scope of such scans is limited to infrastructure that can be discovered in a closed or targeted business-managed environment, so external access to the software or platform is out of scope, as it is not within the range of discoverable assets needing protection.

    Whilst internal benchmarking scoreboards used to identify the threat level are an essential component of threat-mapping, the report emphasized that threat actor motivation, and the commercial or ‘public interest’ availability of the corrupted patch or platform version, should also be accounted for. This enables a solution to be prepared for cases where the exploit is published on common security breach platforms.

    The report’s authors stress that while determining the accessibility of discovered issues is necessary to limit exposure to fresh exploits, the effect on the business’s normal operations should also be considered in the context of the cost of disruption.

    Attack-path mapping is predicated on risk-based vulnerability management (RBVM), for which the Exploit Prediction Scoring System (EPSS) provides a benchmark quantifying, in retrograde, the success of subsequent controls; whether or not these are automated, this still ensures dynamic adaptation of security patches working within the system’s pre-existing schema for data storage and brokerage where third-party stakeholders have privileged access.

    The default mode of the Common Vulnerability Scoring System (CVSS) enables an Attack Surface Assessment (ASA), which does involve mapping impact onto core internal and external stakeholders; but even with the intelligent design of Security Configuration Management (SecCM), without dynamically re-adjusting system controls the problem of unauthorised access will only be contained with regard to known vulnerabilities, leaving legacy infrastructure still open to new exploits yet to be developed and deployed.
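
    A common RBVM pattern, consistent with the report’s framing though not prescribed by it, is to rank findings by exploitation likelihood (EPSS) weighted by severity (CVSS) and asset criticality. A minimal sketch with invented scores:

    ```python
    # Rank vulnerabilities by EPSS probability x normalised CVSS x asset criticality.
    # CVE identifiers, scores and criticality weights are made up for illustration.
    findings = [
        {"cve": "CVE-0000-0001", "cvss": 9.8, "epss": 0.02, "asset_criticality": 1.0},
        {"cve": "CVE-0000-0002", "cvss": 7.5, "epss": 0.90, "asset_criticality": 1.0},
        {"cve": "CVE-0000-0003", "cvss": 9.1, "epss": 0.40, "asset_criticality": 0.3},
    ]

    for f in findings:
        f["priority"] = f["epss"] * (f["cvss"] / 10.0) * f["asset_criticality"]

    for f in sorted(findings, key=lambda f: f["priority"], reverse=True):
        print(f"{f['cve']}: {f['priority']:.3f}")
    ```

    Note how the high-EPSS, medium-CVSS finding outranks the low-EPSS critical one; layering likelihood over severity is what makes the prioritisation risk-based rather than severity-based.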

    The Chief Information Security Officer must develop a forward-looking process of data collection and analysis: understanding the extent of exposure is essential to containment and continuous monitoring of risks. Response plans should be prepared in advance and aligned with key performance indicators for the business as a whole, as well as having a reasonable probability of successful uptake.

    To avoid remedial measures deployment being lost in translation to strategic decision-makers within the organisation, the report emphasized that 

    reporting and communicating with senior leadership is a key element of the success of any exposure management process; such reporting needs to be nontechnical, actionable and regularly updated.

    In creating a ‘single picture of risk’ which maps onto vulnerable system components, security researchers are required to work towards an effective solution-benchmarking method which keeps workload within manageable parameters, that is to say, to

    “Limit the scope of a target set to ensure its manageability and applicability for the long term, ensuring that the scope is broad enough to highlight a business-linked problem and not an individual system issue.”

    The report emphasised that known security issues should be categorised on a cascading scale of potential consequences, with descriptive, information-relevant labels rather than alarmist ones like “ransomware”. Security researchers can take ownership of high-impact problems, ensuring the threat is actively monitored and software additions dynamically readjust to both the nature of the threat and the potential impact of “business interruption.”

    The report concludes that 

    “Communicating demonstrable risk reduction benefits through a single management platform is more achievable than attempting to deliver identification and resolution of discovered issues in silos. Armed with a place to measure benefits from risk reduction activities, CISOs can surface the greater value of the security operations team and justify why it should remain a key part of the operational fabric of the business.”