Google is exploring the use of artificial intelligence (AI) to strengthen its defenses against phishing attacks. In a recent experiment, the company used generative AI to explain why specific messages were flagged as potential threats, with the goal of teaching users to recognize dangerous content.
At the RSA Conference in San Francisco, Elie Bursztein, the research lead at Google DeepMind, presented the trial as a demonstration of how AI chatbot technology could help combat cyber threats. He explained that around 70% of the malicious documents currently blocked by Gmail combine text and images, including falsified company logos designed to deceive users.
One notable example highlighted by Bursztein involved a malicious PDF document masquerading as an email from PayPal. Google’s AI was able to identify discrepancies such as a phone number that did not match PayPal’s official support line and language designed to instill a sense of urgency—common tactics used by scammers to pressure victims.
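The kinds of checks Bursztein described can be illustrated with a short sketch. The snippet below is purely hypothetical, not Google's implementation: the helper names, the placeholder support number, and the urgency keyword list are all assumptions. It scans text extracted from a suspicious document for phone numbers that differ from a brand's published support line and for pressure tactics in the wording.

```python
import re

# Hypothetical reference data; a real system would pull verified contact
# details for each impersonated brand from a curated source. The number
# below is a placeholder, not PayPal's actual support line.
OFFICIAL_SUPPORT_NUMBERS = {"PayPal": "1-888-000-0000"}
URGENCY_PHRASES = ["act now", "within 24 hours", "account suspended",
                   "immediately", "final notice"]

# Loose pattern for North American phone numbers in free-form text.
PHONE_PATTERN = re.compile(r"\+?1?[-.\s(]*\d{3}[-.\s)]*\d{3}[-.\s]*\d{4}")

def flag_indicators(text: str, claimed_brand: str) -> list[str]:
    """Return human-readable reasons a document looks like phishing."""
    reasons = []
    official = OFFICIAL_SUPPORT_NUMBERS.get(claimed_brand)
    official_digits = re.sub(r"\D", "", official) if official else None
    for raw in PHONE_PATTERN.findall(text):
        digits = re.sub(r"\D", "", raw)
        if official_digits and digits != official_digits:
            reasons.append(f"Phone number {raw.strip()} does not match "
                           f"{claimed_brand}'s official support line.")
    lowered = text.lower()
    for phrase in URGENCY_PHRASES:
        if phrase in lowered:
            reasons.append(f"Urgent language detected: '{phrase}'.")
    return reasons

if __name__ == "__main__":
    sample = ("Your PayPal account will be suspended within 24 hours. "
              "Call 1-805-555-0126 immediately to verify your identity.")
    for reason in flag_indicators(sample, "PayPal"):
        print("-", reason)
```

Run against the sample text, this flags the mismatched callback number and the urgency phrasing, the same two signals Bursztein's PayPal example hinged on.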
This AI-driven approach promises to enhance users’ ability to identify phishing attempts by providing detailed, contextual explanations of the threats, similar to the analysis an experienced cybersecurity analyst would offer.
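As a rough illustration of how such analyst-style explanations might be generated, the sketch below assembles a prompt from detected indicators. The model call itself is omitted, since the article does not say which model or API Google used; only the prompt structure is shown.

```python
def build_explanation_prompt(claimed_brand: str, indicators: list[str]) -> str:
    """Compose a prompt asking a generative model to explain, in plain
    language, why a message was flagged. Purely illustrative."""
    bullet_list = "\n".join(f"- {i}" for i in indicators)
    return (
        f"A message claiming to be from {claimed_brand} was blocked.\n"
        f"Detected indicators:\n{bullet_list}\n\n"
        "Explain to a non-technical user, in two or three sentences, "
        "why these signs suggest a phishing attempt and what to do instead."
    )

# A real system would send this prompt to a generative model; printing it
# here simply shows the structure an explanation request might take.
prompt = build_explanation_prompt(
    "PayPal",
    ["Phone number does not match the official support line.",
     "Urgent language pressuring immediate action."],
)
print(prompt)
```

The design choice worth noting is that the model is asked to explain specific, already-detected evidence rather than to judge the message from scratch, which keeps the output grounded in the same signals a human analyst would cite.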