KEMBAR78
Google issues 'red alert' to Gmail users over new AI scam that steals passwords - The Mirror US


Skip to main content
The Mirror US

Google issues 'red alert' to Gmail users over new AI scam that steals passwords

Google has issued a warning to its 1.8billion account users over a new AI scam being used by cyber criminals

Google warned its 1.8 billion account holders of a new artificial intelligence scam that cyber criminals are reportedly exploiting. Tech guru Scott Polderman shed light on the data-snatching scheme, which leverages another Google innovation, Gemini, an AI chatbot.


"So hackers have figured out a way to use Gemini - Google's own AI - against itself," he said. "Essentially, hackers are sending an email with a hidden message to Gemini to reveal your passwords without you even realizing."

Article continues below

Scott pointed out that this scam is different from past ones because it pits "AI against AI" and might pave the way for similar future cyber threats.

Article continues below
READ MORE: Mom accused of stomping girl, 7, to death 'like an ant' for spilling cereal may face ultimate penaltyREAD MORE: Trump interrupts press conference to reveal 'what caused Howard Stern's axing'

He added: "These hidden instructions are getting AI to work against itself and have you reveal your login and password information."

Scott went on to describe why this particular scam is ensnaring so many users. "There is no link that you have to click [to activate the scam]," he noted, reports the Daily Record.

"It's Gemini popping up and letting you know you are at risk."


He also reminded users that Google has made it clear it will "never ask" for your login details or "never alert" you about fraud via Gemini.

Tech aficionado Marco Figueroa said explaining that the scam involves sending emails with prompts that Gemini recognizes, using a font size of zero and text color of white to make them invisible to the recipient.

A TikTok user added in with additional tips to fend off the scam, advising: "To disable Google Gemini's features within your Gmail account, you need to adjust your Google Workspace settings," they wrote.


They elaborated on the steps necessary, saying, "This involves turning off 'SMART FEATURES' and potentially disabling the Gemini app and its integration within other Google products."

Another person commented: "I never use Gemini, still I might change my password just in case."


One frustrated user said: "I'm sick of all of this already. I'm going back to pen and paper!".

Echoing the sentiment, another added, "I quit using Gmail a long time ago! Thank you for the alert! I'll go check my old accounts."

Google issued a warning on its security blog last month, stating, "With the rapid adoption of generative AI, a new wave of threats is emerging across the industry with the aim of manipulating the AI systems themselves. One such emerging attack vector is indirect prompt injections."


Click here to follow the Mirror US on Google News to stay up to date with all the latest news, sport and entertainment stories.

The tech giant explained the subtlety of the threat: "Unlike direct prompt injections, where an attacker directly inputs malicious commands into a prompt, indirect prompt injections involve hidden malicious instructions within external data sources. These may include emails, documents, or calendar invites that instruct AI to exfiltrate user data or execute other rogue actions.

"As more governments, businesses, and individuals adopt generative AI to get more done, this subtle yet potentially potent attack becomes increasingly pertinent across the industry, demanding immediate attention and robust security measures."

Article continues below

However, the tech behemoth aimed to calm user concerns, declaring: "Google has taken a layered security approach introducing security measures designed for each stage of the prompt lifecycle. From Gemini 2.5 model hardening, to purpose-built machine learning (ML) models detecting malicious instructions, to system-level safeguards, we are meaningfully elevating the difficulty, expense, and complexity faced by an attacker.

"This approach compels adversaries to resort to methods that are either more easily identified or demand greater resources."

Follow The Mirror US:


reach logo

At Reach and across our entities we and our partners use information collected through cookies and other identifiers from your device to improve experience on our site, analyse how it is used and to show personalised advertising. You can opt out of the sale or sharing of your data, at any time clicking the "Do Not Sell or Share my Data" button at the bottom of the webpage. Please note that your preferences are browser specific. Use of our website and any of our services represents your acceptance of the use of cookies and consent to the practices described in our Privacy Notice and Terms and Conditions.