A new study reveals a stark gap in enterprise security: 67% of CISOs report limited visibility into AI usage across their organizations, leaving critical systems vulnerable. Despite AI's widespread adoption, security leaders remain dependent on outdated tools and face a severe shortage of the specialized expertise needed to defend against emerging AI-specific threats. This challenge is not budget-driven but stems from foundational skill and tooling deficiencies, as The Hacker News reports from Pentera's 2026 AI and Adversarial Testing Benchmark Report.
AI Adoption Outpaces Security Readiness
AI systems are now deeply integrated across corporate technology, from cloud platforms and identity systems to applications and data pipelines. This widespread deployment, however, comes with fragmented ownership across disparate teams, eroding centralized oversight. As a direct result, 67% of CISOs reported limited visibility into how AI is being used across their organization; no respondents indicated they have full visibility.

This severe lack of insight means basic security questions often go unanswered. Security teams struggle to identify which identities AI systems use, what data they access, or how they behave when controls fail. Such foundational gaps make effective risk assessment nearly impossible. The expanding use of AI in enterprises is prompting CISOs to rethink their data protection strategies, as field CISO Chris Cochran of the SANS Institute notes.
Organizations must proactively evaluate how new technologies use company data and continuously monitor traffic flow. This stance helps identify if new controls are needed for AI integration, ensuring systems can benefit from evolving vendor solutions.
The Core Challenge: Expertise, Not Funding
While AI security is a frequent topic in boardrooms, the main obstacles are not financial. CISOs identified a lack of internal expertise (50%) as their top barrier, closely followed by limited visibility into AI usage (48%). Insufficient security tools designed specifically for AI systems (36%) also pose a significant challenge. Only 17% cited budget constraints. This indicates a willingness to invest, but a critical shortage of the specialized skills needed to evaluate AI-related risks in real environments.

AI introduces new behaviors such as autonomous decision-making, indirect access paths, and privileged interactions between systems. Without the right expertise and active testing, it is difficult to assess whether existing controls are effective. Most companies extend existing security controls to cover AI infrastructure, with a striking 75% of CISOs relying on legacy security tools such as endpoint or application security. Only 11% reported having security tools designed specifically for AI, a pattern reminiscent of past technology shifts in which organizations adapt existing defenses before tailored practices emerge.
This reliance on legacy controls creates inherent vulnerabilities because they were not built for AI's unique access patterns and expanded attack surfaces. The RSA Conference 2026 highlighted these AI infrastructure security gaps, urging CISOs to address knowledge deficiencies as AI initiatives move from pilot to production, according to CSOonline.
Adapting Security for the AI Frontier
The findings highlight that AI security issues stem from fundamental gaps in expertise and tooling, rather than a lack of awareness. As AI becomes integral to enterprise operations, organizations must focus on building specialized knowledge and improving how they validate security controls across environments where AI already operates. Security leaders suggest that monitoring AI agent behavior inside enterprise systems is a crucial new frontier for CISOs.

CrowdStrike CTO Elia Zaitsev notes that existing endpoint detection and response (EDR) tools can capture the necessary behavioral data. He explains that activities benign for humans might require different policies when performed by AI agents. This shift emphasizes the need for visibility into how AI agents behave once operational, not just how they are built.
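Zaitsev's point, that identical activity may warrant different policies depending on who performs it, can be sketched as a simple actor-aware policy check. Everything below (names, thresholds, the `flag_for_review` verdict) is an illustrative assumption, not drawn from any vendor's tooling:

```python
# Hypothetical sketch: the same action can be evaluated against different
# limits depending on whether a human or an AI agent performed it.
from dataclasses import dataclass

@dataclass
class ActivityEvent:
    actor_id: str
    actor_type: str      # "human" or "ai_agent" (assumed classification)
    action: str          # e.g. "bulk_export"
    records_touched: int

# Assumed per-actor-type thresholds: an export volume that is routine for a
# human analyst may be anomalous when driven by an autonomous agent.
POLICY_LIMITS = {
    "human":    {"bulk_export": 10_000},
    "ai_agent": {"bulk_export": 500},
}

def evaluate(event: ActivityEvent) -> str:
    """Return a verdict for the event under actor-type-specific limits."""
    limit = POLICY_LIMITS.get(event.actor_type, {}).get(event.action)
    if limit is not None and event.records_touched > limit:
        return "flag_for_review"
    return "allow"
```

With these assumed thresholds, a 5,000-record export is allowed for a human but flagged when an AI agent performs it; the behavioral data itself would come from existing telemetry such as EDR, per the article.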
The Anthropic ban from government systems underscores the emerging supply chain risks in AI: it forces CISOs to identify and potentially remove specific AI technologies without a clear picture of how deeply they are embedded, CSOonline also reported. Meanwhile, strategic thinking, communication, initiative, and relationship building remain vital human skills that AI cannot replace, as Forbes emphasizes, underscoring the importance of human expertise in an AI-driven economy.