Google’s VirusTotal, a platform for analyzing and cataloging malware, has found no definitive evidence that artificial intelligence is being used to create malware, according to Vicente Diaz, a threat intelligence researcher at the company. Speaking at the RSA Conference in San Francisco, Diaz addressed concerns about the potential misuse of generative AI technologies such as OpenAI’s ChatGPT in cybersecurity attacks.
Recent findings from security firm Proofpoint hinted at possible AI involvement in refining malware, pointing in particular to the distinctive language used in the code of one discovered attack. However, Diaz emphasized how hard it is to conclusively attribute malware to AI: distinguishing AI-generated code from human-written code remains complex.
Diaz suggested that only exceptionally advanced malware, beyond what a human could plausibly create, would clearly indicate AI involvement, and to date Google has identified no such threats. He also questioned whether attackers even need AI to create malware, given how low the barrier to committing cybercrime already is without it.
On the defensive front, Diaz highlighted how VirusTotal is leveraging AI to enhance its own malware scanning. The service introduced Code Insight last year, a feature that analyzes how a scanned file operates and generates plain-language explanations of its behavior for users.
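To illustrate how such AI-generated summaries might be consumed programmatically, here is a minimal Python sketch against the VirusTotal v3 REST API (the file-report endpoint and the `x-apikey` header are documented parts of that API). The `crowdsourced_ai_results` attribute name and its `analysis` field are assumptions about where Code Insight output surfaces in the file object, and the `VT_API_KEY` environment variable is illustrative; consult the current API documentation before relying on either.

```python
import os
import requests

API_URL = "https://www.virustotal.com/api/v3/files/{}"

def fetch_ai_summaries(sha256: str, api_key: str) -> list[str]:
    """Fetch a VirusTotal file report and extract any AI-generated analysis text."""
    resp = requests.get(
        API_URL.format(sha256),
        headers={"x-apikey": api_key},  # standard VT v3 authentication header
        timeout=30,
    )
    resp.raise_for_status()
    attrs = resp.json()["data"]["attributes"]
    # ASSUMPTION: Code Insight summaries appear under "crowdsourced_ai_results"
    # as a list of entries with an "analysis" text field. Verify against the
    # current VirusTotal API reference.
    results = attrs.get("crowdsourced_ai_results", [])
    return [entry.get("analysis", "") for entry in results]

if __name__ == "__main__":
    key = os.environ["VT_API_KEY"]  # illustrative env var holding your API key
    # SHA-256 of the public EICAR test file, used here as a harmless example.
    sample = "275a021bbfb6489e54d471899f7db9d1663fc695ec2fe2a2c4538aabf651fd0f"
    for summary in fetch_ai_summaries(sample, key):
        print(summary)
```

In practice, not every file report carries an AI summary, so code consuming this field should treat it as optional, as the sketch does by defaulting to an empty list.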
Separately, Microsoft and OpenAI have reported state-sponsored groups employing AI for tasks such as refining phishing schemes and researching potential targets. These activities show that AI’s implications for cybersecurity extend beyond direct malware creation.