GeminiJack: How a Zero-Click Flaw Took Down Google Workspace AI (2026)

A Wake-Up Call for AI Security: Unveiling the GeminiJack Flaw

In a recent revelation, Noma Labs has exposed a critical zero-click vulnerability, GeminiJack, within Google's Workspace AI, Gemini Enterprise. This flaw, which has impacted millions, highlights a new breed of AI-driven threats.

The Unseen Trust Flaw: A Recipe for Disaster

Noma Labs' investigation revealed that Gemini Enterprise's trust in Workspace content led to its downfall. Whenever an employee searched, Gemini automatically gathered and trusted all content, creating an opportunity for attackers to hide commands within seemingly harmless files.

A Stealthy Attack: No Clicks, No Warnings

GeminiJack exploited this trust, activating during routine queries. A single poisoned file, be it a Doc, email, or calendar invite, was enough to trigger the attack. The AI executed the hidden instructions without any prompts or warnings, making it invisible to traditional security measures.

But here's where it gets controversial: the attack bypassed data loss prevention tools, email scanners, and endpoint defenses. Even the exfiltration, disguised as an image request, went unnoticed.

The Power of a Single Activation

Once triggered, Gemini's model followed the attacker's cues, assembling a wealth of information beyond the user's intent. It accessed correspondence, project timelines, contracts, financial notes, and more, all with just a simple search. The attacker didn't need insider knowledge; general terms were enough to guide Gemini to sensitive data.

In one fell swoop, an attacker could gain a comprehensive understanding of an organization's inner workings.

Google's Swift Action: A Step Towards Security

Google responded swiftly, reworking Gemini Enterprise's content handling and separating Vertex AI Search to prevent future issues. However, Noma Labs emphasizes that this is just the beginning. As AI autonomy grows, so do the risks, challenging traditional detection models.

The case raises important questions: How can organizations ensure AI tools respect boundaries? How can we prevent routine access from becoming a security breach?

And this is the part most people miss: Google Chrome's new AI security update offers a $20,000 bounty for anyone who can break its safeguards. A bold move, indeed.

So, what's your take on this? Are we doing enough to secure AI systems? Let's discuss in the comments!

GeminiJack: How a Zero-Click Flaw Took Down Google Workspace AI (2026)

References

Top Articles
Latest Posts
Recommended Articles
Article information

Author: Kerri Lueilwitz

Last Updated:

Views: 6390

Rating: 4.7 / 5 (67 voted)

Reviews: 90% of readers found this page helpful

Author information

Name: Kerri Lueilwitz

Birthday: 1992-10-31

Address: Suite 878 3699 Chantelle Roads, Colebury, NC 68599

Phone: +6111989609516

Job: Chief Farming Manager

Hobby: Mycology, Stone skipping, Dowsing, Whittling, Taxidermy, Sand art, Roller skating

Introduction: My name is Kerri Lueilwitz, I am a courageous, gentle, quaint, thankful, outstanding, brave, vast person who loves writing and wants to share my knowledge and understanding with you.