AI company Anthropic has confirmed that its LLM Claude has been used by hackers in a series of cyber attacks. The hackers used the AI to generate malicious code, infiltrate networks, and make strategic decisions about which data to steal and how to extort their victims.
In one case, the hackers used Claude to breach at least 17 organisations, including government bodies. The chatbot advised on which files to exfiltrate, how to word threats, and how much ransom to demand.
The company also revealed attempts by North Korean operatives to use Claude in employment fraud schemes targeting US Fortune 500 tech firms.
Fraudsters reportedly leaned on AI to create fake profiles, draft job applications, and assist with coding tasks once hired remotely. By doing so, sanctioned workers were able to bypass cultural and technical barriers that would normally expose them.
Geoff White, co-presenter of The Lazarus Heist, says that by hiring these fraudsters, a company ends up in breach of international sanctions by unknowingly paying a North Korean worker.
This style of cyber attack and fraud occurred before AI. But by lowering technical barriers, AI has allowed cybercriminals to move faster and more efficiently than before. Without appropriate cyber defence systems, AI in the wrong hands will continue to cause serious damage to companies.