Kimsuky, a group believed to be state-sponsored and linked to North Korea, has been using OpenAI’s ChatGPT to generate deepfake South Korean military IDs as part of phishing attacks, according to a new report. Cybersecurity researchers, including those from CrowdStrike, found that the group prompted ChatGPT to help design visually convincing forgeries, which were then used as bait in email-based social engineering campaigns aimed at sensitive military and governmental targets. The pivot toward generative AI for document forgery marks a worrying escalation, as it reduces both the time and the expertise required to produce convincing fake credentials. In response, South Korea and its allies are increasing scrutiny of AI tools and advocating for tighter regulations and more robust verification systems to counter this type of threat.
Sources: WebProNews, Bloomberg
Key Takeaways
– AI as a force-multiplier in cyber espionage: The use of ChatGPT to create forged military IDs shows how generative AI is being leveraged to automate tasks that previously required specialized graphic-design or forgery skills.
– State actors accelerating capabilities: North Korea-linked groups such as Kimsuky are increasingly integrating advanced tools into attacks on military and strategic infrastructure, signaling an evolution in both technique and ambition.
– Need for stronger defenses and governance: The incident underscores the urgent need for tighter AI oversight, improved document verification and identity authentication, and international cooperation to detect and deter such forgeries.
In-Depth
In recent developments, North Korea has demonstrated an alarming leap in its cyber capabilities by using AI tools like ChatGPT to support phishing operations built around deepfake military IDs. The Kimsuky operation, reportedly run with state backing, crafted convincing fake South Korean military identification documents using generative AI, then deployed them via phishing emails in an attempt to access sensitive networks and information. The forgery process, once laborious and prone to detection, is now increasingly rapid, scalable, and harder to distinguish from legitimate documentation.
What makes this particularly concerning is that it reflects an era in which traditional checks and human skepticism can be bypassed with AI-aided design. By supplying ChatGPT with prompts geared toward replicating layout, typography, official seals, and other visual cues, hackers can produce forgeries that pass cursory inspection. This lowers the barrier to entry for high-stakes social engineering, giving malicious actors tools once available only to well-resourced forgery operations. The democratization of AI tools, while bringing many benefits, is thus also accelerating risk.
From a policy and defensive standpoint, the exposure of these tactics is pushing governments and tech companies to reconsider document authentication protocols. Measures under consideration include embedding harder-to-forge security features in ID cards, using multifactor identity verification, thoroughly validating credentials before granting access, and deploying AI tools of their own to detect anomalies. On the regulatory side, there is growing support for international frameworks governing the responsible release and monitoring of AI tools, especially those with potential for misuse in espionage, forgery, or disinformation.
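The report does not describe any specific verification scheme, but one common building block behind "harder-to-forge" credentials is a digitally signed payload (for example, data embedded in an ID card's chip or QR code) that a verifier checks against the issuer's public key. The sketch below is purely illustrative: it assumes Python with the third-party `cryptography` package and an Ed25519 key pair, and the credential fields and names are hypothetical, not drawn from any real ID system.

```python
# Illustrative sketch only: verifying a digitally signed ID credential.
# Assumes the `cryptography` package; key handling is simplified for the demo.
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# In a real deployment the issuer's private key would live in an HSM;
# here a key pair is generated on the spot for demonstration.
issuer_private = ed25519.Ed25519PrivateKey.generate()
issuer_public = issuer_private.public_key()

# Hypothetical credential payload, e.g. the data encoded in a card's chip or QR code.
credential = b"name=HONG GIL-DONG;service_no=00-000000;expires=2027-12-31"
signature = issuer_private.sign(credential)

def verify_credential(payload: bytes, sig: bytes,
                      public_key: ed25519.Ed25519PublicKey) -> bool:
    """Accept the credential only if the signature matches the trusted issuer's key."""
    try:
        public_key.verify(sig, payload)
        return True
    except InvalidSignature:
        return False

print(verify_credential(credential, signature, issuer_public))               # True
print(verify_credential(credential + b"tamper", signature, issuer_public))   # False
```

The point of the sketch is that a visually perfect forgery still fails this check unless the attacker also compromises the issuer's signing key, which is why signed credentials are typically paired with the multifactor and procedural checks mentioned above rather than relied on alone.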
As North Korea continues to build out its AI toolset, the challenge for defenders will be staying ahead, not just by blocking individual attacks but by designing systems and protocols that anticipate how AI-driven forgery will evolve. Without robust international cooperation and sustained investment in detection and authentication, deepfake IDs and similar tools risk becoming standard in state-level espionage efforts.

