More businesses assess AI’s risks than last year

The World Economic Forum (WEF) has identified a positive trend: companies are finally taking action to address the security risks of AI, with nearly two in three (64%) now assessing those risks before deploying tools, up from 37% last year.
Looking at cybersecurity strategy as a whole, almost all respondents (94%) agree that AI tools will be the biggest driver of change in 2026.
The findings come from the 2026 edition of the Global Cybersecurity Outlook, published in collaboration with Accenture.
AI and cybersecurity strategies are finally being developed hand-in-hand

The reported shift in attitude is likely prompted by the 87% who believe AI-related vulnerabilities have increased. Data leaks (34%) are CEOs' biggest concern; worry about the technical security of AI systems saw the biggest increase (13% in 2026 vs 5% in 2025); and the advancement of adversarial capabilities saw the biggest drop (29% in 2026 vs 47% in 2025), despite remaining the second-biggest concern.
Today, around two-thirds (64%) of organizations factor geopolitically motivated attacks into their planning, and many are moving towards sovereign cloud options. Still, perceptions of the threat landscape differ across the C-suite. CEOs now cite fraud and AI vulnerabilities as their biggest concerns, while CISOs are most concerned about ransomware and supply chain disruptions. Both groups ranked software vulnerability exploitation as their third-highest concern.
Despite widespread agreement that AI-enabled threats have risen, companies are nevertheless turning to AI to respond. Three-quarters (77%) now use AI for cybersecurity, with the most common applications being phishing detection (52%), intrusion detection (46%), and automating security operations (43%).
On the flip side, a lack of skills (54%), the need for human validation (41%), and uncertainty about risks (39%) are the key barriers to using AI in cybersecurity.