Hacker Group Warns AI Security Needs an Overhaul

ODSC - Open Data Science
2 min read · Feb 12, 2025


A leading hacker collective is calling for a complete rework of AI security practices, arguing that current strategies are failing to address critical vulnerabilities. The warning comes from organizers of the DEF CON hacker conference, who released their first Hackers’ Almanack last week highlighting major security gaps in artificial intelligence systems, as reported by Axios.

What’s going on?

Ethical hackers emphasize that AI models remain alarmingly easy to infiltrate, raising concerns about what malicious actors could achieve. As AI technology advances and integrates deeper into society, these vulnerabilities pose growing risks, from data privacy breaches and misinformation to threats against national security.

The Hackers’ Almanack, published in collaboration with the Cyber Policy Initiative at the University of Chicago, arrives as global leaders, AI executives, and policymakers gather in Paris to discuss AI safety and regulation. The report underscores the need for a systematic approach to tracking and mitigating AI security threats, warning that current methods remain insufficient.

Zooming In

Governments worldwide have urged AI companies to adopt red teaming — a process where ethical hackers test a system’s defenses to expose weaknesses. However, according to Sven Cattell, an organizer of DEF CON’s AI Village, this method falls short when addressing “unknown unknowns” — unpredictable vulnerabilities unique to AI models.

Unlike traditional cybersecurity, where flaws can be cataloged and patched systematically, AI vulnerabilities often emerge in ways that cannot be anticipated. Cattell suggests that AI security should adopt an approach modeled on the Common Vulnerabilities and Exposures (CVE) system, which categorizes and rates software security flaws. “The goal of AI security is not to make it impossible to break a system, but to make any such break expensive and short-lived,” he wrote in the report.

The Bigger Picture

DEF CON’s push for a more structured approach comes at a time when both tech companies and U.S. policy appear to be shifting away from prioritizing AI security. Google recently removed language from its AI policy that previously prohibited the development of technologies likely to cause harm.

Additionally, one of Donald Trump’s first actions after returning to office was to revoke former President Joe Biden’s AI executive order, which had established guidelines for AI safety.

As AI systems become increasingly embedded in critical infrastructure, the debate over security measures will only intensify. Hacker groups and cybersecurity experts stress that without a standardized framework for AI security, companies and governments will remain vulnerable to emerging threats.
