Gemini Trifecta exposes AI flaws in Google Gemini
Tenable has uncovered critical vulnerabilities in Google’s Gemini suite, calling them the Gemini Trifecta. These flaws, now fixed, showed how attackers could exploit Gemini itself to steal sensitive data from millions of users.
The Gemini Trifecta impacted three core features of the platform. In Gemini Cloud Assist, attackers could plant poisoned log entries that triggered hidden malicious instructions. In the Gemini Search Personalization Model, queries could be injected into a victim’s browser history, giving attackers access to location details and saved user memories. In the Gemini Browsing Tool, Gemini could be tricked into making hidden outbound requests that leaked private data to attacker-controlled servers.
These flaws revealed how Gemini could shift from being a target to becoming an active attack vehicle. Unlike traditional cyberattacks, no malware or phishing was needed. Gemini’s trusted integrations became the attack surface.
According to Tenable Research, the problem stemmed from Gemini failing to distinguish safe user input from attacker-supplied content. Poisoned logs, search history entries, or hidden web content were all treated as trusted context.
Liv Matan, Senior Security Researcher at Tenable, explained that the flaws show the unique risks of AI platforms. “Gemini draws its strength from pulling context across logs, searches, and browsing. That same capability can become a liability if attackers poison those inputs.”
The potential impact was significant. Attackers could have silently manipulated logs, stolen sensitive user data, abused cloud resources, and redirected private data to malicious servers.
Google has remediated all three vulnerabilities. No user action is required, but Tenable urges security teams to take proactive steps:
- Treat AI-driven features as active attack surfaces.
- Audit logs, search histories, and integrations regularly.
- Monitor for unusual tool executions and outbound requests.
- Test AI-enabled services against prompt injection attacks.
Matan emphasized that this is a wake-up call. “Securing AI isn’t just about patching flaws. It’s about anticipating how attackers could exploit the unique mechanics of AI systems and building layered defenses that prevent small cracks from becoming systemic exposures.”
