By Limor Kessem, X-Force Cyber Crisis Management Global Lead, IBM —
The cyber landscape has changed dramatically with the rapid adoption of artificial intelligence. In the frenzied race to harness the potential of AI, organizations often find themselves up against the clock, eager to deploy AI without first assessing their foundational cybersecurity measures. This creates a dangerous parallel: while businesses scramble to adopt AI for competitive advantage, cybercriminals are just as rapidly incorporating these technologies into their attack arsenals.
It’s not all bad news. For the first time in five years, global data breach costs have declined. IBM’s newly released 2025 Cost of a Data Breach Report found that the average global cost dropped 9% to USD 4.44 million, down from USD 4.88 million the year prior. The catalyst? Faster breach containment driven by AI-powered defenses. According to the report, organizations identified and contained a breach in a mean time of 241 days, the lowest it’s been in nine years.
Yet this progress comes with a caveat: the very speed of AI and automation deployment that’s helping organizations defend better is also creating new risks. When AI adoption outpaces oversight, it can generate significant security debt, posing risk for enterprises determined to maintain a competitive edge. This debt, the cumulative consequence of delayed or inadequate cybersecurity practices, can lead to severe vulnerabilities over time. With AI, the warning signs are already flashing.
The AI oversight gap
Consider this: a staggering 97% of breached organizations that experienced an AI-related security incident say they lacked proper AI access controls, according to findings from the Cost of a Data Breach Report. Additionally, among the 600 organizations researched by the independent Ponemon Institute, 63% revealed they have no AI governance policies in place to manage AI or prevent workers from using shadow AI.
This AI oversight gap is carrying heavy financial and operational costs. The report shows that having a high level of shadow AI—where workers download or use unapproved internet-based AI tools—added an extra USD 670,000 to the global average breach cost. AI-related breaches also had a ripple effect: they led to broad data compromise and operational disruption. That disruption can stop organizations from processing sales orders, providing customer service and keeping supply chains running.
By neglecting foundational cybersecurity practices when adopting AI, companies leave themselves vulnerable to operational disruption of AI-based workloads, large-scale data breaches that span multi-cloud and on-premise environments, and the potential exposure of intellectual property used to train or tune their AI implementations.
As business leaders continue to dive into, and drive, the AI hype, they must confront the accumulated risk that persists across their infrastructure. This is especially true when it comes to cloud security, where AI workloads and data spend most of their time. To keep these within organizational risk appetite, security leaders need to help their businesses win at AI by reassessing their cybersecurity frameworks and ensuring their companies can adapt to the evolving risks that accompany AI technologies.
This includes regular audits of security and data protection policies, adapting controls, evolving response plans and investing in employee training. As newly appointed Chief AI Officers (CAIOs) gradually join the C-suite ranks, security leaders need to be right there next to them. They should strengthen their ties with the governance, risk and compliance (GRC) teams to help break down current or emerging silos with the department overseeing regulatory compliance. This will go a long way toward ensuring alignment and creating a strong crisis-response bond in case of a data breach involving AI assets.
How risky is this situation? Cybercriminals are acutely aware of this weakness and are positioning AI workloads as high-value targets ripe for compromise. Is the risk materializing in the real world? The answer, as the data above shows, is yes. The report reveals that 13% of surveyed organizations have experienced an attack that impacted their AI models or applications. That percentage is small, for now. We are likely to see many more such attacks in the coming 12 months unless security leaders and their business counterparts recognize the risk and pivot to focus more intently on AI security.
Essential measures to reduce security risk
Read the full blog on IBM.com for details on how organizations can mitigate these risks and strengthen their security posture, which includes reinforcing cloud security, strengthening AI governance, and providing continuous education and training.
About IBM
IBM is a leading provider of global hybrid cloud and AI, and consulting expertise. We help clients in more than 175 countries capitalize on insights from their data, streamline business processes, reduce costs and gain the competitive edge in their industries. More than 4,000 government and corporate entities in critical infrastructure areas such as financial services, telecommunications and healthcare rely on IBM’s hybrid cloud platform and Red Hat OpenShift to effect their digital transformations quickly, efficiently and securely. IBM’s breakthrough innovations in AI, quantum computing, industry-specific cloud solutions and consulting deliver open and flexible options to our clients. All of this is backed by IBM’s legendary commitment to trust, transparency, responsibility, inclusivity and service. Visit www.ibm.com for more information.
Source: IBM
Tags: Artificial Intelligence (AI), cyber attacks, cyber risk, cyber security, IBM