Despite widespread use of AI, only 6% of organizations have implemented a comprehensive, AI-native security strategy
SandboxAQ launched its inaugural AI Security Benchmark Report, revealing a significant disconnect between enterprise AI adoption and cybersecurity readiness. While 79% of organizations are already using AI in production environments, only 6% have implemented a comprehensive, AI-native security strategy, leaving the vast majority of enterprises vulnerable to threats they are not yet equipped to detect or mitigate.
Based on a survey of more than 100 senior security leaders across the US and EU, the report highlights widespread concern about the risks AI introduces, from model manipulation and data leakage to adversarial attacks and the misuse of non-human identities (NHIs). Yet despite growing anxiety among CISOs, only 28% of organizations have conducted a full AI-specific security assessment, and most are still relying on traditional, rule-based tools that were never designed to handle dynamic, machine-speed systems.
Key findings include:
- Only 6% of organizations have implemented AI-native security protections across both IT and AI systems.
- 74% of security leaders are highly concerned about AI-enhanced cyberattacks, and 69% are highly concerned about AI uncovering new vulnerabilities in their environments.
- Just 10% of companies have a dedicated AI security team; in most organizations, responsibility falls to traditional IT or security teams.
The rise of NHIs, which include autonomous AI agents, services, and machine accounts, has further complicated the security landscape. These systems often operate independently, holding and exchanging cryptographic credentials, accessing sensitive resources, and interacting with other software without human oversight. Most security teams lack visibility into these entities or control over their behaviors, undermining core principles of Zero Trust and exposing gaps in identity governance and cryptographic hygiene.
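To make that visibility gap concrete, the sketch below shows what a minimal inventory-and-hygiene check over non-human identities might look like. The record fields, policy rules, and names are illustrative assumptions for this article, not the report's methodology or any vendor's implementation.

```python
# Hypothetical sketch (Python 3.10+): inventorying non-human identities
# and flagging basic governance and cryptographic-hygiene gaps.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class NonHumanIdentity:
    name: str
    kind: str                    # e.g. "ai-agent", "service", "machine-account"
    key_algorithm: str           # e.g. "RSA-2048", "Ed25519"
    credential_expiry: datetime  # when its credential lapses
    owner: str | None            # accountable human or team, if any

# Illustrative policy: algorithms this sketch treats as deprecated.
DEPRECATED_ALGORITHMS = {"RSA-1024", "ECDSA-P192"}

def hygiene_findings(inventory: list[NonHumanIdentity]) -> list[str]:
    """Flag unowned identities, expired credentials, and weak algorithms."""
    findings = []
    now = datetime.now(timezone.utc)
    for nhi in inventory:
        if nhi.owner is None:
            findings.append(f"{nhi.name}: no accountable owner (governance gap)")
        if nhi.credential_expiry < now:
            findings.append(f"{nhi.name}: credential expired but still inventoried")
        if nhi.key_algorithm in DEPRECATED_ALGORITHMS:
            findings.append(f"{nhi.name}: deprecated algorithm {nhi.key_algorithm}")
    return findings

if __name__ == "__main__":
    # One fabricated NHI that trips all three checks.
    inventory = [
        NonHumanIdentity("report-agent", "ai-agent", "RSA-1024",
                         datetime(2024, 1, 1, tzinfo=timezone.utc), None),
    ]
    for finding in hygiene_findings(inventory):
        print(finding)
```

Even a toy check like this illustrates the point: without an automated inventory of which agents hold which credentials, none of these questions can be answered at machine speed.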
The report’s findings mirror what SandboxAQ has seen across large-scale cryptographic environments and AI deployments: enterprises are struggling to extend core security practices like automated inventory, visibility, and policy enforcement to the identities and assets that AI systems rely on. Through solutions like AQtive Guard, enterprises are able to modernize cryptographic and identity governance in this new layer of infrastructure with the same urgency they once applied to traditional IT.
“This isn’t just a solution gap, it’s a conceptual one,” said Marc Manzano, General Manager of the Cybersecurity Group at SandboxAQ. “AI is radically changing the cybersecurity paradigm at an unprecedented pace. This report highlights a growing recognition among security leaders that defending against evolving threats requires new assumptions and approaches, not just new layers or patches to existing tooling.”
Despite these gaps, investment is accelerating. Eighty-five percent of organizations plan to increase AI security spending in the next 12 to 24 months, with a quarter planning significant increases. Areas of focus include protecting training data and inference pipelines, securing non-human identities, and deploying automated incident response capabilities tailored to AI-driven infrastructure.