
A new study reveals a stark gap in enterprise security: 67% of CISOs report limited visibility into AI usage across their organizations, leaving critical systems vulnerable. Despite AI's widespread adoption, security leaders largely depend on outdated tools and face a severe shortage of specialized expertise to defend against emerging AI-specific threats. This challenge is not budget-driven but stems from foundational skill and tooling deficiencies, as The Hacker News reports from Pentera's 2026 AI and Adversarial Testing Benchmark Report.
This severe lack of insight means basic security questions often remain unanswered. Security teams struggle to identify which identities AI systems use, what data they access, or how they behave during control failures. Such foundational gaps make effective risk assessment nearly impossible. The expanding use of AI in enterprises is prompting CISOs to rethink their data protection strategies, as field CISO Chris Cochran from the SANS Institute notes.
Organizations must proactively evaluate how new technologies use company data and continuously monitor traffic flows. This approach helps identify whether new controls are needed for AI integration and ensures systems can benefit from evolving vendor solutions.
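As a concrete illustration of continuous traffic monitoring, the sketch below flags outbound requests to AI endpoints that are not on an approved list. The domain names, log field names, and approval list are hypothetical placeholders, not any specific vendor's schema.

```python
# Illustrative sketch: flag outbound traffic to AI endpoints that are not
# on an organization's approved list. All hostnames and field names here
# ("host", "is_ai_endpoint") are assumptions for illustration only.
APPROVED_AI_DOMAINS = {"api.approved-llm.example"}

def flag_unapproved(requests):
    """Return destination hosts that are AI endpoints but not approved."""
    return sorted(
        r["host"] for r in requests
        if r.get("is_ai_endpoint") and r["host"] not in APPROVED_AI_DOMAINS
    )

traffic = [
    {"host": "api.approved-llm.example", "is_ai_endpoint": True},
    {"host": "api.shadow-ai.example", "is_ai_endpoint": True},
    {"host": "cdn.example.com", "is_ai_endpoint": False},
]
print(flag_unapproved(traffic))  # ['api.shadow-ai.example']
```

In practice the request records would come from proxy or DNS logs rather than a hard-coded list, but the shape of the check, comparing observed destinations against a sanctioned inventory, is the same.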
AI introduces new behaviors like autonomous decision-making, indirect access paths, and privileged interactions between systems. Without the right expertise and active testing, it becomes difficult to assess whether existing controls are effective. Most companies extend existing security controls to cover AI infrastructure, with a striking 75% of CISOs relying on legacy security tools like endpoint or application security. Only 11% reported having security tools designed specifically for AI, a pattern reminiscent of past technology shifts where organizations adapt existing defenses before tailored practices emerge.
This reliance on legacy controls creates inherent vulnerabilities because they were not built for AI's unique access patterns and expanded attack surfaces. The RSA Conference 2026 highlighted these AI infrastructure security gaps, urging CISOs to address knowledge deficiencies as AI initiatives move from pilot to production, according to CSOonline.
CrowdStrike CTO Elia Zaitsev notes that existing endpoint detection and response (EDR) tools can capture necessary behavioral data. He explains that activities benign for humans might require different policies when performed by AI agents. This shift emphasizes needing visibility into how AI agents behave once operational, not just how they are built.
The case of the Anthropic ban from government systems underscores the emerging supply chain risks in AI. It forces CISOs to identify and potentially remove specific AI technologies without a clear understanding of how deeply those technologies are embedded, CSOonline also reported. Meanwhile, strategic thinking, communication, initiative, and relationship building remain vital human skills that AI cannot replace, as Forbes emphasizes, underscoring the importance of human expertise in an AI-driven economy.
Prioritize AI Security Skill Development
With 50% of CISOs identifying lack of internal expertise as a top barrier, immediately invest in specialized training for your security teams on AI-specific risks and attack vectors to close the skill gap.
Enhance AI Visibility Tools
Given 67% of CISOs report limited visibility, deploy solutions that effectively map AI system interactions, data access, and identities across your enterprise to gain essential oversight.
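One way to start building that oversight is a simple inventory of which AI identities touch which data stores, derived from access logs. The sketch below assumes structured log records with "identity", "resource", and "is_ai_agent" fields; these names are illustrative, not a real product's schema.

```python
# Illustrative sketch: build an inventory mapping each AI service identity
# to the resources it has accessed, from structured access-log records.
# Field names ("identity", "resource", "is_ai_agent") are assumptions.
from collections import defaultdict

def map_ai_access(records):
    """Group the resources accessed by each AI identity."""
    inventory = defaultdict(set)
    for rec in records:
        if rec.get("is_ai_agent"):
            inventory[rec["identity"]].add(rec["resource"])
    return {ident: sorted(res) for ident, res in inventory.items()}

logs = [
    {"identity": "svc-copilot", "resource": "crm-db", "is_ai_agent": True},
    {"identity": "jdoe", "resource": "crm-db", "is_ai_agent": False},
    {"identity": "svc-copilot", "resource": "s3://hr-exports", "is_ai_agent": True},
]
print(map_ai_access(logs))
# {'svc-copilot': ['crm-db', 's3://hr-exports']}
```

Even a coarse mapping like this answers the basic questions the survey says go unanswered: which identities AI systems use and what data they reach.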
Rethink Legacy Control Reliance
Since 75% of CISOs rely on legacy tools not designed for AI, proactively evaluate and implement AI-specific security controls that account for new access patterns and expanded attack surfaces introduced by AI systems.
Monitor AI Agent Behavior
Focus on runtime security by tracking how AI agents behave within your network using existing EDR capabilities, applying different policies for AI versus human actions to mitigate new and evolving risks.
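The idea that the same action may warrant different policies for AI agents versus humans can be sketched in a few lines. The thresholds and action name below are hypothetical examples, not any EDR product's actual policy API.

```python
# Illustrative sketch: enforce a stricter limit when the same action is
# performed by an autonomous AI agent rather than a human operator.
# Threshold values and the actor-type flag are assumptions for illustration.
BULK_THRESHOLD_HUMAN = 10_000  # rows per export allowed for a human user
BULK_THRESHOLD_AI = 500        # tighter limit for an autonomous agent

def allow_export(actor_is_ai: bool, row_count: int) -> bool:
    """Decide whether a bulk data export is permitted for this actor type."""
    limit = BULK_THRESHOLD_AI if actor_is_ai else BULK_THRESHOLD_HUMAN
    return row_count <= limit

print(allow_export(actor_is_ai=False, row_count=5_000))  # True
print(allow_export(actor_is_ai=True, row_count=5_000))   # False
```

The point, echoing Zaitsev's observation, is that the behavioral telemetry may already exist; what changes is the policy applied once the actor is identified as an AI agent.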
The biggest challenge is a lack of internal expertise. A recent study found that 50% of CISOs identify the shortage of specialized skills as the top barrier to AI security, outweighing concerns about budget constraints or insufficient tools. This expertise gap makes it difficult to properly assess and mitigate AI-related risks.
Most CISOs have limited visibility into AI usage. According to a recent study, 67% of CISOs report limited visibility into how AI is being used across their organizations, meaning they struggle to identify which identities AI systems use, what data they access, or how they behave during control failures. No respondents indicated they have full visibility.
Existing security tools are generally not adequate for protecting AI systems. The majority of CISOs (75%) rely on legacy security tools like endpoint or application security, which were not designed for AI's unique access patterns and expanded attack surfaces. Only a small percentage (11%) have security tools specifically designed for AI.
Legacy security tools are insufficient because they were not built for AI's unique characteristics. AI introduces new behaviors like autonomous decision-making, indirect access paths, and privileged interactions between systems. These tools often lack the ability to monitor and control these new behaviors, creating inherent vulnerabilities.
Organizations should proactively evaluate how new technologies use company data and continuously monitor traffic flow. This helps identify if new controls are needed for AI integration, ensuring systems can benefit from evolving vendor solutions. Addressing the lack of internal expertise is also crucial, as is investing in security tools designed specifically for AI systems.