North Korea-Linked Hackers Exploit AI to Launch Sophisticated Phishing Attack
A North Korea-backed cyber-espionage group, known as Kimsuky, has reportedly harnessed ChatGPT to create a fake South Korean military ID, deploying it in a targeted phishing campaign against journalists, human rights activists, and defence researchers.
South Korean cybersecurity firm Genians revealed that the attackers bypassed the AI’s safeguards, generating a convincing draft of the military ID that was used to lend credibility to malware-laden emails.
Overview of how the attack works (Source: Genians)
How AI Became a Tool for Espionage
Instead of including an actual ID image, the emails contained a hidden payload designed to steal sensitive data from recipients’ devices.
Mun Chong-hyun, director at Genians, said,
“Attackers can use AI to map scenarios, write malware, and even pretend to be recruiters.”
The campaign aligns with a broader pattern of North Korean cyber operations leveraging artificial intelligence at multiple stages—from planning and tool development to impersonation and phishing.
Earlier this year, the AI firm Anthropic discovered North Korean hackers exploiting its Claude Code coding tool to infiltrate U.S. Fortune 500 companies.
The attackers reportedly used AI to create complete résumés, pass coding interviews, and even perform technical assignments after being hired.
These operations offered the regime direct access to corporate systems without breaching conventional cybersecurity defences.
Fooling Victims with Fake Military Emails
The phishing emails were carefully crafted to appear as though they originated from legitimate South Korean military accounts, ending in .mil.kr.
A phishing email sent from a South Korean military domain. (Source: Genians)
While the exact number of compromised devices remains unknown, the campaign demonstrates the growing sophistication of AI-assisted cyberattacks.
Genians’ attempts to replicate the technique showed that minor modifications to ChatGPT prompts could bypass content restrictions, producing templates capable of tricking unsuspecting recipients.
Attackers used deepfake technology to create a counterfeit South Korean military ID by exploiting a loophole that allows AI models to generate mock-ups of protected documents. (Source: Genians)
A Long-Standing Threat Intensifies
Kimsuky has been active since 2012, primarily targeting foreign policy experts, think tanks, and government agencies in South Korea, Japan, and the United States.
In 2020, the U.S. Department of Homeland Security described the group as “most likely tasked by the North Korean regime with a global intelligence-gathering mission.”
Source: SOCRadar
Their typical tactics centre on spear-phishing emails designed to extract sensitive information, monitor discussions on nuclear strategy, sanctions, and regional security, and gain insight into high-level decision-making.
Authorities Urge Heightened Vigilance
U.S. and South Korean authorities have warned that such AI-assisted cyber operations are escalating.
Agencies including CISA, the FBI, and CNMF have urged individuals in sensitive fields related to North Korea to strengthen security measures, such as enabling multi-factor authentication, increasing phishing awareness training, and implementing stronger email filters.
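To illustrate the kind of email-filter hardening these agencies recommend, here is a minimal, hypothetical sketch (not the agencies' or Genians' actual tooling) of a sender-domain check that accepts only genuine subdomains of a trusted suffix, rejecting lookalike domains that merely contain or resemble ".mil.kr". The allowlist and function names are assumptions for this sketch.

```python
# Hypothetical illustration: flag emails whose sender domain is not an
# exact match for (or true subdomain of) a trusted domain, catching
# lookalikes such as "fakemil.kr" or "mil.kr.attacker.com".
from email.utils import parseaddr

TRUSTED_SUFFIXES = (".mil.kr",)  # assumed allowlist for this sketch


def is_trusted_sender(from_header: str) -> bool:
    """Return True only if the sender's domain ends in a trusted
    suffix as a whole label, not merely as a substring."""
    _, addr = parseaddr(from_header)
    if "@" not in addr:
        return False
    domain = addr.rsplit("@", 1)[1].lower()
    # "army.mil.kr" passes; "fakemil.kr" and "mil.kr.attacker.com" fail.
    return any(
        domain == suffix.lstrip(".") or domain.endswith(suffix)
        for suffix in TRUSTED_SUFFIXES
    )
```

A real deployment would combine such domain checks with SPF, DKIM, and DMARC validation, since a spoofed display name or header alone proves nothing about the true sender.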
American officials note that North Korea’s use of cyberattacks, cryptocurrency theft, and covert IT contracting is part of a broader strategy to evade sanctions, gather intelligence, and fund its nuclear weapons programme.
Are Companies Ready to Face AI-Powered Threats?
Coinlive considers the rise of AI-assisted attacks a pressing concern for businesses and individuals alike.
The ability of malicious actors to manipulate tools like ChatGPT or Claude to fabricate identities, pass professional assessments, or deploy malware highlights a new dimension of risk.
As AI becomes increasingly integrated into everyday workflows, the lines between legitimate digital activity and sophisticated deception blur, leaving companies vulnerable to schemes that exploit trust and automation.
Organizations must rethink not only cybersecurity infrastructure but also how reliance on AI could inadvertently open doors to fraud, espionage, and system infiltration.