AI Security Experts Convene in D.C. to Tackle Mythos-Driven Threats
Top cybersecurity experts and policy leaders gather in Washington, D.C., to address the growing security challenges posed by advanced AI systems like Anthropic Mythos.

Experts Debate AI Security Standards

Shoshana Cox, global AI policy lead at OWASP AI Exchange, organizes the AI Security Policy Forum. The event brings together a cross-sector group of AI security practitioners, standards-setters, and policy experts to define what securing AI should look like.

Rob van der Veer, chief AI officer at Software Improvement Group and a founder of the OWASP AI Exchange, leads the session. He highlights how systems like Mythos are accelerating the discovery of vulnerabilities, often before developers are aware of them. “This shifts the balance toward attackers and reduces the margin for error,” he says.

Current Challenges and Fragmentation

Concerns about Mythos primarily focus on its ability to find zero-day vulnerabilities in traditional software. However, it can also discover vulnerabilities in AI models and systems that enterprises are increasingly deploying. Most organizations are not yet ready to handle these emerging threats.

The field remains fragmented, with overlapping frameworks and competing recommendations. Gary McGraw, cofounder of the Berryville Institute of Machine Learning, points out a core gap: today’s benchmarks measure how well AI systems can perform security tasks, not how secure the systems themselves are. “These meetings are a way to remind ourselves of the fundamentals, as we try to define what machine learning security actually is,” he says.

Dynamic Nature of AI Security

Apostol Vassilev, a research team supervisor at NIST, emphasizes that no finite set of guardrails is universally robust against adversarial prompts. “The security of AI systems is not a static problem—one that can be solved once and done,” he explains. Unlike traditional software, securing AI demands an ongoing, dynamic approach: continuous updates and internal red teaming to surface new adversarial prompts before attackers do.
