Tag: ai

  • Confluent’s Predictions for GenAI in 2026

    According to a recent survey cited in the report, 68% of IT leaders identified data silos as a major impediment to AI success. To adapt, companies will need to take these steps: 

    • Exposing agent-safe APIs 
    • Adopting tokenised payment protocols 
    • Making real-time product data available, so that transaction bottlenecks further down the pipeline are avoided and an “always-synchronised commerce layer” becomes possible (a minimal sketch of such an agent-facing endpoint follows this list). 
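
    As a rough illustration only (not from Confluent’s report), the sketch below shows what a minimal agent-facing endpoint for real-time product data might look like; the FastAPI app, SKU fields and in-memory store are all assumptions made for the example.

    ```python
    # Illustrative sketch only: an "agent-safe" product-availability endpoint.
    # The FastAPI app, SKU names, and the in-memory store are assumptions for the example.
    from fastapi import FastAPI, HTTPException
    from pydantic import BaseModel

    app = FastAPI(title="agent-facing product API (sketch)")

    # Stand-in for a real-time inventory view kept fresh by a streaming pipeline.
    INVENTORY = {"SKU-123": {"price": 19.99, "in_stock": 42, "currency": "USD"}}

    class ProductStatus(BaseModel):
        sku: str
        price: float
        currency: str
        in_stock: int

    @app.get("/v1/products/{sku}", response_model=ProductStatus)
    def product_status(sku: str) -> ProductStatus:
        """Machine-readable, low-latency product state for agent callers."""
        item = INVENTORY.get(sku)
        if item is None:
            raise HTTPException(status_code=404, detail="unknown SKU")
        return ProductStatus(sku=sku, **item)
    ```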

    Confluent’s 2026 forecast explained, 

    “Companies will need to figure out how to optimize sales and marketing for the machines that will increasingly do the decision making, and in some cases, ultimately the buying. If you think human customers are fickle, machine customers can be ruthless: they have zero patience for latency, no brand loyalty, and can switch vendors mid-transaction whenever a better offer appears.” 

    Despite security concerns about an open-source protocol being shared with participants who may not have been appropriately vetted, the report argues that “the gravitational pull towards a single, easy open protocol that reduces friction and developer overhead will prove irresistible in 2026.” 

    Although the report concedes that “other competing standards like Agent2Agent (A2A) and Agent Communication Protocol (ACP) continue to vie for relevance in agent-to-agent communication,” it argues that adopting the Model Context Protocol (MCP) keeps the choice of contributing LLMs flexible: you can check that a given model or data vendor is fit for purpose and swap in an alternative vendor without rebuilding the integration. 
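
    For readers who want to see what “adopting MCP” looks like in practice, here is a minimal sketch assuming the official MCP Python SDK (the mcp package) and its FastMCP helper; the server name, tool, and stubbed inventory are illustrative, not from the report.

    ```python
    # Minimal sketch of exposing data over MCP so any MCP-capable LLM client can use it.
    # Assumes the official MCP Python SDK ("mcp" package); the tool and inventory
    # lookup are illustrative, not from the report.
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("inventory-context")

    @mcp.tool()
    def product_stock(sku: str) -> int:
        """Return the current stock level for a SKU (stubbed here)."""
        fake_inventory = {"SKU-123": 42}
        return fake_inventory.get(sku, 0)

    if __name__ == "__main__":
        # Any MCP-compatible client or LLM can discover and call product_stock,
        # which is what makes swapping model vendors cheaper.
        mcp.run()
    ```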

    The report asserts that context engineering will go mainstream in 2026, just as 2025 saw the roll-out of agentic AI. Context engineering means continuously curating what the model sees: data is iterated on with priority schemas at the forefront of the analytic process, while the LLM is not overburdened with material that slows it down or degrades its accuracy, in a continuous evaluation process guided by context. Using pre-fitted models, teams are able “to refine logic, and adjust live through prompts, rules, and context.” 
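
    A minimal sketch of the context-engineering idea described above, in plain Python: facts are tagged with a schema and a priority, and only the highest-priority facts that fit a token budget are assembled, together with standing rules, into the model’s context. The schema names, priorities, and budget are assumptions for illustration.

    ```python
    # Sketch of "context engineering": assemble only the highest-priority, schema-tagged
    # facts plus standing rules into the prompt, instead of dumping everything on the LLM.
    # The schema names, priorities, and token budget are illustrative assumptions.
    from dataclasses import dataclass

    @dataclass
    class Fact:
        schema: str      # e.g. "orders", "pricing", "support_history"
        text: str
        priority: int    # lower number = more important

    RULES = ["Answer only from the facts provided.", "Cite the schema each fact came from."]

    def build_context(facts: list[Fact], token_budget: int = 800) -> str:
        picked, used = [], 0
        for fact in sorted(facts, key=lambda f: f.priority):
            cost = len(fact.text.split())          # crude token estimate
            if used + cost > token_budget:
                break                              # stop before overloading the model
            picked.append(f"[{fact.schema}] {fact.text}")
            used += cost
        return "\n".join(RULES + picked)

    prompt_context = build_context([
        Fact("pricing", "SKU-123 lists at $19.99 as of today.", priority=1),
        Fact("support_history", "Customer reported a late delivery last month.", priority=2),
    ])
    print(prompt_context)
    ```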

    More queries will have to be offloaded to caches or additional databases if systems are to cope with 2026 data-usage levels. “Now is the time to implement change data capture (CDC) pipelines and ensure data is flowing in near real time,” because agentic queries run at a far larger scale than those overseen by human operators, and day-to-day data hygiene tasks are increasingly automated. 
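
    A hedged sketch of what offloading reads via CDC might look like, assuming the confluent-kafka Python client and a Debezium-style change envelope (the topic name and field layout are assumptions): a consumer keeps an in-memory cache warm so agent queries avoid hitting the primary database.

    ```python
    # Sketch of a CDC consumer that keeps a read cache warm, so agent queries hit the
    # cache instead of the primary database. The topic name and the Debezium-style
    # {"after": {...}} envelope are assumptions; requires the confluent-kafka package.
    import json
    from confluent_kafka import Consumer

    consumer = Consumer({
        "bootstrap.servers": "localhost:9092",
        "group.id": "cache-refresher",
        "auto.offset.reset": "earliest",
    })
    consumer.subscribe(["dbserver1.public.products"])

    cache: dict[str, dict] = {}   # stand-in for Redis or another read-optimised store

    try:
        while True:
            msg = consumer.poll(1.0)
            if msg is None or msg.error():
                continue
            change = json.loads(msg.value())
            row = change.get("after")             # None for deletes
            key = msg.key().decode() if msg.key() else None
            if row is None:
                cache.pop(key, None)
            else:
                cache[key] = row                  # near-real-time view for agent reads
    finally:
        consumer.close()
    ```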

    Confluent also stressed the vital role of cyber security in 2026, with many technical leaders citing it as a core priority. Current forecasts put global cyber crime losses at about $12 trillion, although the uptake of AI by threat actors could push the figure as high as $18 trillion a year. Attack volume may actually double, making scenario modelling and penetration testing essential for containing registered threats, prioritising responses according to business impact, and isolating compromised business units tier by tier. Cyber security managers need to be able to pivot around a threat, using segregated “high volume, low-latency data infrastructure for instant analysis and automated response.” 

    On data governance, the report argues that upholding safeguarding standards and ensuring data trust, quality and lineage requires data protection officers to be proactive: they should monitor reuse of sensitive data, and watch linked queries run against tables where access is granted not only on shared values but also on overlapping fields in segregated tables, since bulk-processing requirements can override usage rules there. The report stated that 84% of technical leaders recently called data management and governance a top-tier technology priority. 

    Confluent is pushing Apache Iceberg, an open table format well suited to cold data, as a way of meeting requirements for data reuse. 

    “Iceberg is poised to lead this transformation, with continued maturity in its Puffin metadata format, advancements in data compaction and sorting, and emerging row-level lineage capabilities. Compared to other open table formats, like Databricks Delta Lake (which is optimized for ‘hot’ read performance), Iceberg’s merge-on-read and partitioning design make it particularly well suited for cold data workloads.” 
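
    As a sketch of the cold-data points made in the quote, the snippet below creates a partitioned Iceberg table with merge-on-read write modes via Spark SQL. It assumes a Spark session with the Iceberg Spark runtime available; the catalog name, warehouse path and schema are illustrative.

    ```python
    # Sketch of creating a partitioned Iceberg table tuned for cold data with
    # merge-on-read. Assumes the Iceberg Spark runtime is on the classpath;
    # the catalog name, warehouse path, and schema are illustrative.
    from pyspark.sql import SparkSession

    spark = (
        SparkSession.builder
        .config("spark.sql.catalog.lake", "org.apache.iceberg.spark.SparkCatalog")
        .config("spark.sql.catalog.lake.type", "hadoop")
        .config("spark.sql.catalog.lake.warehouse", "/tmp/iceberg-warehouse")
        .getOrCreate()
    )

    spark.sql("""
        CREATE TABLE IF NOT EXISTS lake.db.events (
            event_id BIGINT,
            payload  STRING,
            ts       TIMESTAMP
        )
        USING iceberg
        PARTITIONED BY (days(ts))            -- coarse partitions suit cold reads
        TBLPROPERTIES (
            'format-version'    = '2',
            'write.delete.mode' = 'merge-on-read',
            'write.update.mode' = 'merge-on-read'
        )
    """)
    ```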

    Durable execution engines will be the smart investment choice, handling retries, timeouts and compensations as events. LangGraph and Pydantic AI have pioneered the adoption of fault-tolerant frameworks that allow problems to be resolved through live diagnostics. This lets complex patterns such as Saga and CQRS move from hand-written microservice code into workflow-as-code. 
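
    A plain-Python sketch of the workflow-as-code idea: each step has an action and a compensation, failed steps are retried with backoff, and completed steps are compensated in reverse order if the workflow ultimately fails. Durable execution engines persist and resume this kind of state automatically; the step names and retry policy here are illustrative assumptions.

    ```python
    # Sketch of workflow-as-code with retries and compensations (a simple saga).
    # Step names and the retry policy are illustrative assumptions.
    import time

    def run_saga(steps, max_retries: int = 3) -> bool:
        """steps: list of (action, compensation) callables."""
        completed = []
        for action, compensate in steps:
            for attempt in range(1, max_retries + 1):
                try:
                    action()
                    completed.append(compensate)
                    break
                except Exception:
                    if attempt == max_retries:
                        # Roll back everything that already succeeded, newest first.
                        for undo in reversed(completed):
                            undo()
                        return False
                    time.sleep(2 ** attempt)   # backoff before retrying
        return True

    ok = run_saga([
        (lambda: print("reserve inventory"), lambda: print("release inventory")),
        (lambda: print("charge payment"),    lambda: print("refund payment")),
    ])
    print("workflow succeeded:", ok)
    ```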

    “At the same time, Kafka will remain the scalable event backbone and Flink a high-throughput, low-latency processing and real-time analytics engine.” 

    The report concluded with a call to CIOs not to overwrite or override the existing infrastructure, but to work with GenAI to adapt the system to changing usage needs. 

    “Executives should begin to re-analyze the cost-risk for legacy system modernization with GenAI, leveraging specialized integrators for generative code translation and understanding. Developers should focus on migrating legacy messaging systems (like JMS) to modern event-driven architectures, using AI tools to accelerate the process and transform hard-to-maintain legacy code into new, valuable capabilities.” 

  • Palo Alto Networks SecOps white paper – executive summary

    Utilising GenAI and machine learning helps with operational deployment at scale. Where this previously ranked only among the top five KPIs, SecOps teams are now reporting “more efficient threat detection and response” in key areas: 

    • Extended detection and response (XDR) 
    • Security information and event management (SIEM) 
    • Platform-level GenAI engineering, which improves operational efficiency 

    To gain further insights into these mega-trends and other developments in the security operations space, TechTarget’s Enterprise Strategy Group surveyed 366 IT and cybersecurity professionals at large midmarket and enterprise organizations in North America (US and Canada) involved with security operations technology and processes. 

    The top 6 SecOps challenges were: 

    1. Monitoring security across a growing and changing attack surface (42%) 
    2. Managing too many disconnected point tools for security analytics and operations, making it difficult to piece together a holistic strategy and investigate complex threats (33%). However, more than half (55%) of organizations report that consolidation efforts are streamlining the management and operations of the many security tools and processes in use. 
    3. Operationalising cyberthreat intelligence (33%) 
    4. Spending too much time on high-priority or emergency issues and not enough time on strategy and process improvement (32%) 
    5. Detecting and/or responding to security incidents in a timely manner (31%) 
    6. Gaining the appropriate level of security with cloud-based workloads, applications, and SaaS (31%) 

    Areas for improvement include: 

    Detecting or hunting for unknown threats (32%), and being able to visualise the threat landscape well enough to target a response when bad actors embed changes in integrated systems (36%). 

    Other core performance indicators were “keeping up with” a changing infrastructure and service offering (27%) and ensuring a proportionate, targeted response based on threat-priority analysis (27%). These were seen as essential precursors to meeting regulatory or corporate governance requirements (26%) on data brokerage and the disclosure of known systemic threats. The timing of the response was also deemed important, with 25% saying it could be improved. 

    Maintaining a database of known threats is routine for most participants: 77% say managing a growing security data set is not something they struggle with. Engineering automation was likewise an area only 18% of respondents would flag for improvement, while 24% were concerned about the efficacy of stress testing patches and system updates deployed in the cloud under a reactive, SaaS-managed offering. 

    An estimated 80% of respondents were happy with their ability to triage threats before escalating them. 

    Know your toolset 

    Around 91% of organisations report using at least 10 SecOps tools, though 30% have recently consolidated their tooling to ensure systemic integration across existing and pipeline data protection solutions. 

    Nearly 9 in 10 of the respondents already using an XDR solution (64% of the sample) expect it to supplement rather than replace SIEM and other SecOps tools; a further 21% of the sample report that their XDR deployment is still in development. 

    Drawbacks cited for SIEM solutions were: exorbitant software-licensing costs as the threat catalogue expands and requires constant patching (32%); the expertise needed to perform more advanced analytics than the product provides off the shelf (32%); the business-process context of threat intelligence often being overlooked (23%); and a dependence on detection-rule creation in dynamic response to events (25%), rules which must be constantly redefined as the threat evolves. 
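
    To make the point about detection rules concrete, here is a toy sketch of the kind of rule that has to be constantly redefined as threats evolve: flag an account whose login succeeds after a burst of failures. The event fields, threshold and window are assumptions for illustration.

    ```python
    # Toy sketch of a SIEM-style detection rule evaluated over an event stream:
    # flag an account with several failed logins followed by a success.
    # Event fields, the threshold, and the window are illustrative assumptions.
    from collections import defaultdict, deque

    FAIL_THRESHOLD = 5
    WINDOW_SECONDS = 300

    recent_failures = defaultdict(deque)   # user -> timestamps of recent failures

    def evaluate(event: dict) -> bool:
        """Return True if the event should raise an alert."""
        user, ts = event["user"], event["ts"]
        fails = recent_failures[user]
        while fails and ts - fails[0] > WINDOW_SECONDS:
            fails.popleft()                # drop failures outside the window
        if event["action"] == "login_failed":
            fails.append(ts)
            return False
        if event["action"] == "login_success" and len(fails) >= FAIL_THRESHOLD:
            fails.clear()
            return True                    # likely brute force followed by success
        return False
    ```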

    Continuous threat monitoring and management were seen as a key component of gaining appropriate levels of security oversight. Securing cloud-based workloads, applications, and SaaS moved up in terms of the number of organizations prioritizing it as an issue, reflecting continuing growth and change in cloud infrastructure and applications. 

    Key drivers of these consolidation campaigns were cited as cost optimisation (39%); reducing tools-management overhead by simplifying and streamlining the toolset (35%); and the desire for more advanced threat-detection capability (34%). 

    Respondents say the context of a threat can be lost in the weight of the response, with the security operations stack generating an “unmanageable” load of alerts (33%). In parallel, they want to “reduce overhead associated with point tools integration, development and maintenance” (32%), so that once threats are ranked by their potential damage to the system, permanent threat-management plug-ins can be worked in that are reactive, cost-effective, proportional to the degree of the threat, and able to be dynamically readjusted. 

    In terms of data governance in repositories: 

    • 43% keep data in centralised silos 
    • 47% describe it as “more centralized, but some distributed or federated data” 
    • 7% use distributed ledger technology 
    • 3% have the majority of data distributed or federated, with some centralised data. 

    In relation to XDR tools, the survey found that 39% of respondents felt current tools were not well integrated, making threat detection “more cumbersome” than it should be, while 35% noted specific “gaps” in cloud detection and response. 

  • Enjoy Gartner’s Strategic Roadmap for Managing Threat Exposure | Bitsight 


    Key Findings 

    • Having a place to record and report the potential impact of breaches, based on an assessment of the output of a continuous threat exposure management (CTEM) process, enables tangible risk reduction that adds value to the organisation. 
    • Security risks can be contained through a variety of methods, including simulation, configuration assessment and formal testing, meaning unknown vulnerabilities can be detected and analysed at different points in the workflow. 
    • Timetabled solutions should be communicated to the management team promptly; consulting on the adoption of mobilisation processes creates a positive feedback loop on the success rate of proposed patches. 

    Security and risk management leaders, especially CISOs, establishing or enhancing EM programs should: 

    • Build exposure assessment scopes based on key business priorities and risks, taking into consideration the potential business impact of a compromise rather than primarily focusing on the severity of the threat alone. 
    • Initiate a project to build cybersecurity validation techniques into EM processes by evaluating tools such as breach and attack simulation, attack path mapping and penetration testing automation products or services. 
    • Engage with senior leadership to understand how exposure should be reported in a meaningful way by using existing risk assessments as an anchor for these discussions, and by creating consistent categorization for discoveries that are agreed with other departments in the organization. 
    • Agree effective routes to resolution and prioritization characteristics before beginning to report new discovered exposures by working with leaders of adjacent departments across the business in areas such as IT management, network operations, application development and human resources 

    Strategic Planning Assumptions 

    Through 2028, validation of threat exposures via assessments run against deployed security controls will be an accepted alternative to penetration testing requirements in regulatory frameworks. 

    Through 2026, more than 40% of organizations, including two-thirds of midsize enterprises, will rely on consolidated platforms or managed service providers to run cybersecurity validation assessments. 

    The report emphasized the importance of a comprehensive internal policy in which decision makers are held accountable and the management team cooperates with strategic campaigns consistent with the business’s key objectives when it comes to managing the threat of professional attackers exploiting internal penetration points.

    It insisted, “security must ensure that controls are aligned with the organization’s overall strategy and objectives, and provide clear rationale and prioritization for its objectives and activities.”

    “Without impact context, the exposures may be addressed in isolation, leading to uncoordinated fixes relegated to individual departments exacerbating the current problems.”

    A CTEM program runs multiple scopes simultaneously; scoping defines a focus for reporting rather than the extent of the program’s reach (see Figure 2). Any number of scopes can run concurrently under a ‘master scope’, which describes threats with the code-debugging jargon translated out, and sub-scopes, which carry a higher degree of technical explanation.

    Breaches can occur from a variety of points, specifically:

    • Third-party applications and services — such as SaaS, supply chain dependencies and code repositories. 
    • Authentication — spanning applications, third-party services and adjacent authentication solutions such as authentication keys for API-driven systems. 
    • Consumer-grade services — social media/brand-impacting communications. 
    • Leaked data — covering both data stored in deep/dark web forums and self-leaked data via employee actions, password reuse or poor information hygiene. 

    Risks can be assessed based on external stakeholders’ level of access to data; on modern identity management, i.e. MFA used within a dynamically readjusting framework; and on operational technology (OT) and Internet of Things (IoT) systems, ensuring that potential penetration via exploitable access pathways is contained and that reputational damage and business disruption are minimised. 

    An illustrative example of mapping known and unknown threats co-locates them within the business infrastructure: assets sitting outside core security controls are flagged wherever they overlap with assets running business-critical apps and with assets carrying exploitable vulnerabilities, producing a heat map of high-priority risks. 
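
    A toy scoring sketch of that heat map, not taken from the report: each asset is scored by exploitability, business criticality and whether it sits outside core controls, and the highest scores rise to the top. The asset records and weights are illustrative assumptions.

    ```python
    # Sketch of the heat-map idea: score each asset by combining exploitability,
    # business criticality, and whether it sits outside core security controls.
    # The asset records and weighting are illustrative assumptions.
    assets = [
        {"name": "billing-api", "exploitable": True,  "business_critical": True,  "covered_by_controls": False},
        {"name": "wiki",        "exploitable": True,  "business_critical": False, "covered_by_controls": True},
        {"name": "hr-portal",   "exploitable": False, "business_critical": True,  "covered_by_controls": True},
    ]

    def risk_score(asset: dict) -> int:
        score = 0
        score += 3 if asset["exploitable"] else 0
        score += 3 if asset["business_critical"] else 0
        score += 2 if not asset["covered_by_controls"] else 0
        return score   # 8 = top of the heat map

    for asset in sorted(assets, key=risk_score, reverse=True):
        print(f"{asset['name']:12} risk={risk_score(asset)}")
    ```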

    Application scanning is performed as test penetration by researchers attempting to exploit known vulnerabilities, using either authenticated or unauthenticated logins to gain access. 

    Assets discoverable within an IP address range, or subnet, are often layered, and the task comprises categorising the core available services – those actively promoted by the company – as well as system components whose updates may be corrupted or out of date. 
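
    As an illustration of categorising discoverable assets, the sketch below checks a handful of well-known service ports across a subnet. The subnet and port list are assumptions, and it should only ever be run against infrastructure you are authorised to scan.

    ```python
    # Sketch of categorising discoverable assets in a subnet by checking a few
    # well-known service ports. Run only against infrastructure you are authorised
    # to scan; the subnet and port list are illustrative assumptions.
    import ipaddress
    import socket

    SUBNET = "192.0.2.0/29"                 # documentation range; replace as needed
    PORTS = {22: "ssh", 80: "http", 443: "https", 3389: "rdp"}

    def open_services(host: str, timeout: float = 0.5) -> list[str]:
        found = []
        for port, name in PORTS.items():
            with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
                sock.settimeout(timeout)
                if sock.connect_ex((host, port)) == 0:
                    found.append(name)
        return found

    for ip in ipaddress.ip_network(SUBNET).hosts():
        services = open_services(str(ip))
        if services:
            print(ip, "->", ", ".join(services))   # candidate assets to categorise
    ```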

    The report acknowledges that the scope of such scans is limited to infrastructure that can be discovered in a closed or targeted business-managed environment, so external access to the software or platform is not scoped, as it falls outside the range of discoverable assets needing protection. 

    Whilst internal benchmarking scoreboards used to gauge threat level are an essential component of threat mapping, the report emphasized that threat-actor motivation and the commercial or ‘public interest’ availability of the corrupted patch or platform version should also be accounted for. This allows a solution to be negotiated even where the exploit has already been published on common security-breach platforms. 

    The report’s authors stress that while determining the accessibility of discovered issues is necessary to limit exposure to fresh exploits, the effect on the business’s normal operations should also be weighed against the cost of disruption. 

    Attack-path mapping builds on risk-based vulnerability management (RBVM), for which the Exploit Prediction Scoring System (EPSS) provides a benchmark for retrospectively quantifying how well subsequent controls performed. Whether or not these controls are automated, the aim is dynamic adaptation of security patches within the system’s pre-existing schema for data storage and brokerage, where third-party stakeholders have privileged access. 

    The default mode of the Common Vulnerability Scoring System (CVSS) enables an attack surface assessment (ASA) that does map impact onto core internal and external stakeholders. But even with well-designed security configuration management (SecCM), unless system controls are dynamically readjusted, unauthorised access is contained only for known vulnerabilities, and legacy infrastructure remains open to new exploits yet to be developed and deployed. 
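
    A small sketch of how CVSS, EPSS and business context can be blended for prioritisation; the weighting scheme and the findings themselves are illustrative assumptions, not Gartner’s method.

    ```python
    # Sketch of risk-based prioritisation that blends CVSS severity (impact if
    # exploited) with EPSS (estimated probability of exploitation) and a simple
    # business-impact weight. The weighting scheme is an illustrative assumption.
    findings = [
        {"cve": "CVE-2024-0001", "cvss": 9.8, "epss": 0.92, "business_critical": True},
        {"cve": "CVE-2024-0002", "cvss": 9.8, "epss": 0.02, "business_critical": False},
        {"cve": "CVE-2024-0003", "cvss": 6.5, "epss": 0.60, "business_critical": True},
    ]

    def priority(f: dict) -> float:
        weight = 1.5 if f["business_critical"] else 1.0
        return f["cvss"] * f["epss"] * weight      # likelihood x impact x context

    for f in sorted(findings, key=priority, reverse=True):
        print(f["cve"], round(priority(f), 2))
    # A critical-but-unlikely CVE drops below a moderate one that is both likely
    # to be exploited and attached to a business-critical asset.
    ```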

    The Chief Information Security Officer must develop a forward-looking process of data collection and analysis of the extent of exposure, which is essential for containment and continuous monitoring of risks. Response plans should be prepared in advance, aligned with key performance indicators for the business as a whole, and have a reasonable probability of successful uptake. 

    To avoid the deployment of remedial measures being lost in translation for strategic decision-makers within the organisation, the report emphasized that 

    reporting and communicating with senior leadership is a key element in the success of any exposure management process; such reporting needs to be nontechnical, actionable and regularly updated. 

    In creating a ‘single picture of risk’ that maps onto vulnerable system components, security researchers are required to work towards an effective solution-benchmarking method that keeps workload within manageable parameters, that is to say to 

    “Limit the scope of a target set to ensure its manageability and applicability for the long term, ensuring that the scope is broad enough to highlight a business-linked problem and not an individual system issue.”

    The report emphasised that known security issues should be categorised on a cascading scale of potential consequences, with descriptive labels that convey relevant information rather than alarmist ones like “ransomware”. Security researchers can take ownership of high-impact problems to ensure the threat is actively monitored and software additions dynamically readjust to both the nature of the threat and the potential impact of “business interruption.” 

    The report concludes that 

    “Communicating demonstrable risk reduction benefits through a single management platform is more achievable than attempting to deliver identification and resolution of discovered issues in silos. Armed with a place to measure benefits from risk reduction activities, CISOs can surface the greater value of the security operations team and justify why it should remain a key part of the operational fabric of the business.”

  • Microsoft’s annual report demonstrates continued AI innovation available across the income spectrum, and its commitment to diversity and inclusion and to cyber security

    Microsoft announced record annual revenue of more than $245 billion, a 16 percent year-on-year increase, with operating income up 24 percent at more than $109 billion. 

    As of June 30, 2024, $10.3 billion remained of the $60.0 billion share repurchase program which commenced in November 2021.  

    The last reported dividend, on 14 December 2023, was $0.75 per share, with the total payout amounting to $5,574. Whilst Microsoft’s share performance compared with the S&P 500 and the NASDAQ Computer Index shows it consistently beat both benchmarks, shareholders await the declaration of the dividend for Q1 2024. 

    Fair market value (FMV) of actively traded shares was put at $349.91, correct as of June 2024. A comparison of five-year cumulative total return puts the NASDAQ Computer Index at $331.2 and the S&P 500 Index at $201.5 over the same period. These figures represent the return on $100 invested on 6/30/19 in the stock or index, with dividends reinvested. 
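
    Working through the figures quoted above (treating the “$5,574” dividend total as millions of dollars, which is an assumption about the report’s units):

    ```python
    # Worked arithmetic from the figures quoted above; the "$5,574" total is read
    # as millions of dollars, which is an assumption about the report's units.
    dividend_per_share = 0.75
    total_payout_musd = 5_574
    implied_shares_m = total_payout_musd / dividend_per_share
    print(f"implied shares outstanding: {implied_shares_m:,.0f} million")   # ~7,432 million

    # Five-year cumulative total return of $100 invested on 6/30/19, as annualised growth.
    for name, end_value in [("MSFT", 349.91), ("NASDAQ Computer", 331.2), ("S&P 500", 201.5)]:
        cagr = (end_value / 100) ** (1 / 5) - 1
        print(f"{name:16} {cagr:.1%} per year")    # MSFT ~28.5%, S&P 500 ~15.0%
    ```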

    Its Diversity and Inclusion Report (hyper-link) highlighted its healthy workplace culture, whereby “Just as our culture has been critical in getting us to this point, it will be critical to our success going forward. At Microsoft, we think of our culture as being both input and output… For us, that means constantly exercising our growth mindset and confronting our fixed mindset—each one of us, every day. It is the only way we will succeed.” 

    Matched donations by 106,000 employees and the company amounted to a total of $250 million to almost 35,000 nonprofits across 111 countries, with employee volunteering for charitable causes put at over 1 million hours. In a CEO statement prepared in October 2024, Chairman and Chief Executive Officer Satya Nadella praised this nonprofit-oriented stakeholder engagement: 

    “I am deeply grateful for my colleagues’ dedication to making a difference. Together, we can continue to empower everyone around the world.” 

    In the context of AI developments, Microsoft was pleased to announce the roll-out of Copilot as an add-on for both the Business and the Home and Personal versions of Microsoft Office 365…. 

    Copilot for professionals is underpinned by secure GitHub repositories. In one case study, Brazil’s largest bank, Itaú, has seen a 68% increase in deployment speed and a 75% rate of code reuse since the application was rolled out across its terminals, demonstrating continued internal use cases. The organisation recorded a 93% increase in deployment speed after linking to the new GitHub repositories. This, says the case-study write-up, helps it allocate more time to developing new systems, with server connectivity assured. 

    In Kenya, where much of the population has no easy access to a bank account and no way to demonstrate a credit score, street vendors have used M-Kopa, a social enterprise that uses Azure ML for its forecasting and large language models for lead generation in financially inclusive loan issuance. 

    Microsoft’s annual report said, 

     “We offer leading frontier models, thanks to our strategic partnership with OpenAI. With Phi-3, which we announced in April, we offer a family of powerful, small language models. And, with Models as a service, we provide API access to third-party models, including the latest from Cohere, Meta, and Mistral. In total, we have over 60,000 Azure AI customers, up nearly 60 percent year-over-year… 

    This year, we also introduced Copilot Workspace, a Copilot-native developer environment, which helps any developer go from idea, to code, to software—all in natural language.” 

    Its Power Platform offering makes LLM capabilities accessible to all users, whether their use case is developing a website or automating workflows. Year-on-year there was a net 40% increase in the Power Platform user base, to a monthly figure of 48 million users. 

    Data processing depends on large, secured data lakes and effective connectivity during data warehousing. Microsoft said its Microsoft Intelligent Data Platform enables business intelligence spanning storage silos, with vector embeddings driving access to AI capabilities. Its new AI-powered, next-generation data platform, Microsoft Fabric, has a paid base of 14,000 customers who can leverage and act on their data insights within a unified SaaS solution. 
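
    As a toy illustration of the vector-embedding idea, not Microsoft’s implementation: records are turned into vectors and a query is answered by cosine similarity. The embed() function below is a deterministic stand-in for a real embedding model, so it shows only the mechanics, not real semantic matching.

    ```python
    # Toy sketch of vector-embedding search: represent records as unit vectors and
    # answer a query by cosine similarity. embed() is a deterministic stand-in for
    # a real embedding model, so the "similarity" here is not semantic.
    import hashlib
    import numpy as np

    def embed(text: str, dim: int = 64) -> np.ndarray:
        """Deterministic toy vector; a real embedding model would go here."""
        seed = int.from_bytes(hashlib.md5(text.encode()).digest()[:4], "little")
        rng = np.random.default_rng(seed)
        v = rng.standard_normal(dim)
        return v / np.linalg.norm(v)

    documents = ["Q4 revenue by region", "employee onboarding checklist", "Azure spend forecast"]
    doc_vectors = np.stack([embed(d) for d in documents])

    query = embed("cloud cost projections")
    scores = doc_vectors @ query          # cosine similarity, since all vectors are unit length
    print("closest under the toy embeddings:", documents[int(np.argmax(scores))])
    ```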

    It said that even its Microsoft Teams platform was seeing a huge uptick in popularity, enabling encrypted communications for a secure workplace environment – Teams Premium surpassed 3 million seats, up nearly 400 percent year-over-year. 

    Professionalizing its GitHub Copilot offering, which is used by 60% of Fortune 500 companies to streamline workflows and increase velocity, has resulted in, for example, the Dynamics 365 Contact Center being able to integrate the existing legacy infrastructure of CRM systems with advanced AI capability. 

    New use cases of targeted business applications have been found in the healthcare arena – with DAX Copilot, more than 400 healthcare organizations are increasing physician productivity and reducing burnout. On average, clinicians save more than five minutes per patient encounter, and 77 percent say it also improves documentation quality. 

    Its commitment to cyber security is evidenced by collaboration across systemically important IT service providers. “At the Munich Security Conference in February, we came together with others across the tech sector and pledged to help prevent deceptive AI content from interfering with global elections. As part of this pledge, we have worked to empower campaigns, candidates, election officials, and voters to understand the risks of deceptive AI in elections and to take steps to protect themselves and democracies. To date, we’ve conducted deepfake trainings in over 20 countries. And our corresponding public awareness campaign has reached over 355 million people.”