Secure AI 2025: Lessons from Google Cloud’s Dr. Anton Chuvakin

Security frameworks for GenAI are taking shape

Read our takeaways from Dr. Anton Chuvakin's talk at the iSMG Cybersecurity Summit in February 2025. Key point: securing GenAI requires a multi-layered, multi-stakeholder approach; it's complex, but not impossible.

March 19, 2025


Based on remarks delivered at the iSMG Cybersecurity Summit, February 2025

As artificial intelligence (AI) becomes embedded in the infrastructure of modern enterprises, securing these systems is no longer a future problem—it’s today’s urgent priority. At the February 2025 iSMG Cybersecurity Summit, Dr. Anton Chuvakin, Senior Staff Security Consultant in the Office of the CISO at Google Cloud, shared his insights on the evolving threats and best practices for securing generative AI (GenAI) systems.

Here are the key takeaways—lessons that matter for cybersecurity students, educators, and professionals preparing to defend digital systems in an AI-driven world.

From Pilot to Production: GenAI Goes Critical

Over the past couple of years, Dr. Chuvakin has observed many organizations building pilot projects to experiment with GenAI. Some of those pilots have since moved into production and been classified as business critical, so securing GenAI in 2025 has a much larger scope than it did in 2023.

Yes, Attackers Use GenAI Too—But Not Like in the Movies

A recent report from Google, Adversarial Misuse of Generative AI, highlights how malicious actors are already experimenting with GenAI attacks. Interestingly, many of their use cases mirror those of legitimate users—such as using GenAI to conduct research or craft more convincing phishing messages.

However, Dr. Chuvakin emphasized that while these tactics are real, they’re not yet revolutionary. “This is not a game-changer today,” he said. But the potential for abuse is growing—and defenders need to be ready for what's coming.

No Single Fix: Securing AI Is a Team Sport

Unlike traditional software security, securing AI systems is a multi-stakeholder, multi-dimensional challenge. IT security teams must work hand-in-hand with HR, legal, cloud operations, software engineering, and data governance groups. A committee-based approach is often essential.

Furthermore, as GenAI systems evolve, so too must the security models we use to protect them.

The MAID Model: A New Framework for AI Security

Data governance was once treated as a chore to be deferred. GenAI has suddenly pushed it to the fore and made it a top priority: organizations must carefully and thoughtfully decide which data goes into training and retrieval-augmented generation (RAG), and that decision requires governance grounded in best practices.

Dr. Chuvakin added, “In addition to securing the data, you must also secure the software.” To frame a strategy, he recommends this useful acronym: MAID.

M – Model Security
A – Application Security
I – Infrastructure Security
D – Data Security

“You must cover all four layers,” he stressed. “If you secure only three of the four, you’re still insecure.” 

For example, securing a chatbot against prompt injection won’t help if the backend application is vulnerable to SQL injection.
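That backend example can be made concrete. The minimal sketch below (not from the talk; the table, function names, and payload are illustrative assumptions) shows the classic contrast: splicing user input into a SQL string lets a crafted value rewrite the query, while a parameterized query treats the same input strictly as data.

```python
import sqlite3

# In-memory demo database with a single privileged record.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # Vulnerable: attacker-controlled input is spliced into the SQL text,
    # so a crafted "name" value changes the query's logic.
    query = f"SELECT role FROM users WHERE name = '{name}'"
    return conn.execute(query).fetchall()

def find_user_safe(name: str):
    # Parameterized: the driver binds the input as a value, never as SQL.
    return conn.execute(
        "SELECT role FROM users WHERE name = ?", (name,)
    ).fetchall()

payload = "nobody' OR '1'='1"        # classic injection string
print(find_user_unsafe(payload))      # leaks the admin row
print(find_user_safe(payload))        # returns no rows
```

A chatbot hardened against prompt injection but wired to the unsafe variant is exactly the "three of four layers" failure Dr. Chuvakin warns about: the application layer, not the model, is the weak link.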

This layered model can help students visualize and plan their future efforts to secure AI-driven systems.

How GenAI Helps Defenders—Today and Tomorrow

GenAI isn’t just a target for attackers—it’s also a valuable tool for defenders. Dr. Chuvakin sorts the use cases into two buckets:

1. Auxiliary Use Cases (The Big Bucket)

This includes time-saving, efficiency-boosting tools that help analysts do their work faster. Examples:

  • Drafting incident reports
  • Organizing and correlating alerts
  • Heuristic analysis

These tools aren’t game-changers, but they are already practical and useful.

2. Advanced Use Cases (The “Wow” Bucket)

One standout application: malware reverse engineering. Skilled analysts can use tools like Google’s Gemini to break down malicious code—much faster than traditional methods. But this isn’t for beginners. 

“It speeds the reverse engineering process so much,” said Chuvakin, “but it still requires an experienced analyst.”

Agentic AI: The Promise—and the Peril

Agentic AI, or systems that can take autonomous action on your behalf, holds promise for automating routine security tasks. But we’re not there yet.

Dr. Chuvakin flagged authentication as a particularly thorny challenge. If an AI agent performs an action, who’s responsible? “Did you buy the tickets—or did your agent buy the tickets?” These questions raise accountability and auditability issues that security teams must resolve before relying too heavily on autonomous AI.

Resilience Through AI: Not a Silver Bullet—Yet

In the long term, AI has the potential to make defenders more resilient. But in the short term, success depends on learning from past technology rollouts rather than repeating old mistakes.

As Dr. Chuvakin put it, 

“Are you learning the lessons or repeating the mistakes?”

The message to students: be curious, be cautious, and be ready to adapt as this space continues to evolve.

Why Governance Matters More Than Ever

AI governance is fundamentally different from traditional IT governance: as noted above, GenAI forces organizations to scrutinize what data goes into training and RAG systems.

But data protection alone isn’t enough. Governance must extend across all elements of the MAID framework—model, app, infrastructure, and data.

For those building AI curricula or studying security governance, this is a crucial lesson: governance must evolve alongside the technology.

Where to Learn More

Dr. Chuvakin encourages educators and students to explore SAIF.Google, Google’s public-facing resource on AI risks, controls, and best practices. It includes a detailed risk taxonomy and case studies that can enrich classroom discussions and cybersecurity capstone projects.

Final Thoughts

The AI revolution is not a distant future—it’s already here. For students preparing to enter the cybersecurity workforce, understanding the risks and realities of securing GenAI systems will be essential.

Dr. Chuvakin’s insights offer a roadmap: stay grounded in best practices, approach new tools with clarity and caution, and commit to continuous learning. The next generation of cyber defenders won’t just protect networks—they’ll secure the future of intelligent systems.
