Researchers bypassed Google Gemini’s defenses using natural language instructions, creating misleading events to leak private Calendar data. This method allows sensitive data exfiltration to an attacker via a Calendar event description. Gemini, Google’s LLM assistant, integrates across Google web services and Workspace apps like Gmail and Calendar.
The Gemini-based Calendar invite attack begins with sending a target an event invite containing a prompt-injection payload in its description. Exfiltration activities trigger when the victim queries Gemini about their schedule. This causes the assistant to load and parse all relevant events, including the one with the attacker’s payload.
Researchers at Miggo Security, an Application Detection & Response (ADR) platform, found they could deceive Gemini into leaking Calendar data by providing natural language instructions. These included: summarizing all meetings on a specific day, including private ones; creating a new calendar event with that summary; and responding to the user with a harmless message.
The researchers explained, “Because Gemini automatically ingests and interprets event data to be helpful, an attacker who can influence event fields can plant natural language instructions that the model may later execute.” They discovered controlling an event’s description field allowed embedding a prompt that Google Gemini would obey, even with a harmful outcome.
The malicious invite’s payload remains dormant until the victim asks Gemini a routine question about their schedule. Upon execution of the embedded instructions from the malicious Calendar invite, Gemini creates a new event. It writes the private meeting summary into this new event’s description. In many enterprise settings, the updated description becomes visible to event participants, potentially leaking sensitive information to the attacker.
Miggo noted that Google employs a separate, isolated model for detecting malicious prompts in the primary Gemini assistant. However, their attack bypassed this failsafe because the instructions appeared safe. Prompt injection attacks via malicious Calendar event titles are not new. In August 2025, SafeBreach demonstrated a malicious Google Calendar invite could exploit Gemini agents to leak sensitive user data.
Liad Eliyahu, Miggo’s head of research, informed BleepingComputer that the new attack demonstrates Gemini’s reasoning capabilities remain vulnerable to manipulation despite Google implementing additional defenses after SafeBreach’s report. Miggo shared its findings with Google, which has since added new mitigations. Miggo’s attack concept highlights the complexities of anticipating new exploitation models in AI systems driven by natural language with ambiguous intent.
The researchers suggest application security must evolve from syntactic detection to context-aware defenses to address these vulnerabilities.




