North Korean Hackers Exploit ChatGPT to Forge Military IDs in Phishing Attack
Sep 17, 2025
Key Takeaways
- North Korean hacking group Kimsuky used ChatGPT to create fake South Korean military ID cards for use in phishing campaigns.
- Targets included journalists, researchers, human-rights activists, and defence-related personnel in South Korea.
- The forged IDs were distributed via phishing emails impersonating defence institutions, designed to trick victims into clicking malware-laden links.
What Happened
In mid-2025, researchers discovered a phishing operation linked to the North Korean hacking group Kimsuky. The attackers leveraged ChatGPT to generate realistic mock-ups of South Korean military IDs, which were embedded into phishing emails.
These emails impersonated a South Korean defence institution and asked recipients to review the ID drafts. The real objective was to lure targets into clicking malicious links or downloading malware hidden in the attachments.
Although ChatGPT includes safeguards against generating official identification documents, Kimsuky bypassed them by framing the request as a "sample" or "mock-up" design, obtaining convincing visuals anyway.
Who’s Involved
| Actor | Role |
|---|---|
| Kimsuky | North Korean APT group behind the campaign, known for cyber-espionage against South Korea, the U.S., and their allies. |
| Genians | Cybersecurity firm that detected and analyzed the phishing campaign. |
| Targets | Journalists, researchers, human-rights activists, and defence-sector personnel in South Korea. |
| ChatGPT | AI tool exploited through prompt manipulation to generate the fake military IDs. |
Exposed Capabilities & Techniques
- AI-generated visuals: Realistic ID cards with logos, photos, and formatting mimicking South Korean military credentials.
- Prompt manipulation: Attackers disguised their requests as mock-ups, circumventing AI safety measures.
- Social engineering: Emails impersonated defence institutions to build trust and prompt engagement.
- Malware delivery: Malicious links and attachments were embedded in the phishing emails.
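Campaigns like this typically rely on sender domains that closely resemble those of the impersonated institution. A minimal defender-side sketch of lookalike-domain screening is shown below; the trusted domain names and the similarity threshold are hypothetical illustrations, not details from the reported incident:

```python
# Minimal sketch: flag sender domains that nearly match a trusted domain.
# The allow-list entries below are hypothetical examples only.
from difflib import SequenceMatcher

TRUSTED_DOMAINS = ["mnd.go.kr", "defence-institute.example"]

def similarity(a: str, b: str) -> float:
    """String similarity ratio in [0, 1]; 1.0 means identical."""
    return SequenceMatcher(None, a, b).ratio()

def is_lookalike(sender_domain: str, threshold: float = 0.8) -> bool:
    """Flag domains that are near, but not exact, matches to a trusted domain."""
    domain = sender_domain.lower()
    for trusted in TRUSTED_DOMAINS:
        if domain != trusted and similarity(domain, trusted) >= threshold:
            return True
    return False
```

For example, `is_lookalike("mnd-go.kr")` flags the near-miss spelling, while the exact trusted domain and unrelated domains pass. Real mail-filtering stacks combine checks like this with SPF/DKIM/DMARC validation rather than relying on string similarity alone.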
Broader Impact
This incident underscores a new phase in AI-driven cyber threats. Generative AI is no longer limited to producing phishing text or fake resumes; it is now being exploited to fabricate synthetic identity documents.
- Escalation of AI misuse: Guardrails can be bypassed through prompt engineering.
- High-value targets: Civil society and defence-related personnel are increasingly exposed.
- Policy pressure: Calls may intensify for stricter AI safeguards to prevent misuse for document forgery.
What’s Next
- AI detection tools: Development of systems to spot AI-generated fake IDs and documents.
- Awareness programs: Training for defence, research, and media personnel to identify phishing lures that use forged documents.
- Global replication: Other state-sponsored groups may adopt similar techniques in future espionage operations.
Conclusion
The Kimsuky campaign illustrates how AI-driven document forgery is raising the stakes in social engineering attacks. By exploiting ChatGPT to create convincing military IDs, North Korean hackers have shown that the boundary between genuine and synthetic identity documents is rapidly dissolving.
Defence institutions, civil society, and global policymakers must adapt quickly, ensuring both technical safeguards and human awareness remain ahead of adversaries who are increasingly turning to AI as a weapon.
Disclaimer: ClearPhish maintains a strict policy of not participating in the theft, distribution, or handling of stolen data or files. The platform does not engage in exfiltration, downloading, hosting, or reposting any illegally obtained information. Any responsibility or legal inquiries regarding the data should be directed solely at the responsible cybercriminals or attackers, as ClearPhish is not involved in these activities. We encourage parties affected by any breach to seek resolution through legal channels directly with the attackers responsible for such incidents.