
AI-generated military IDs used in North Korean cyber attack

U.S. Marine Corps Lance Cpl. Joseph Hunt utilizes his computer system during Cyber Blue Zone 25-1 at Camp Courtney, Japan, May 21, 2025. CBZ aims to train Marines and DoD Defensive Cyber Operations personnel by creating a cyber environment that gives teams the opportunity to practice new and current DCO techniques. Hunt, a native of Louisiana, is a cyber warfare operator with 3d Cyber Warfare Company, III Marine Expeditionary Force Information Group. (Photo by Cpl. Bridgette Rodriguez)
September 16, 2025

A new report from a cybersecurity firm shows how a state-affiliated North Korean hacking group used artificial intelligence to generate South Korean military identification card images for use in a “spear-phishing attack.”

In the new report published by Genians, researchers confirmed that Kimsuky, a North Korean hacking group, used ChatGPT to generate fake military identification card images to trick victims into clicking on a malware link. The report noted that the hackers posed as officials from a South Korean defense institution who claimed to be handling the issuance of identification cards for military-affiliated personnel.

In the report, Genians researchers confirmed that the Genians Security Center detected a “spear-phishing attack” from the Kimsuky group on July 17, 2025.

“This was classified as an APT attack impersonating a South Korean defense-related institution, disguised as if it were handling ID issuance tasks for military-affiliated officials,” researchers stated. “The threat actor used ChatGPT, a generative AI, to produce sample ID card images, which were then leveraged in the attack. This is a real case demonstrating the Kimsuky group’s application of deepfake technology.”


Researchers noted that the July spear-phishing attack was discovered after North Korean hackers exploited OpenAI’s ChatGPT for “deepfake activity.”

“The threat actor generated fake images of military government employee ID cards with a generative AI service and launched a spear-phishing attack disguised as a draft review request,” Genians researchers added. “The sender’s email address was designed to closely mimic the official domain of a South Korean military institution.”

The report by Genians explained that while ChatGPT does not allow users to generate military identification cards, the artificial intelligence platform can fulfill requests to generate sample identification cards.

“For example, it may respond to requests framed as creating a mock-up or sample design for legitimate purposes rather than reproducing an actual military ID,” researchers stated. “The deepfake image used in this attack fell into this category. Because creating counterfeit IDs with AI services is technically straightforward, extra caution is required.”

According to Business Insider, Kimsuky has previously been linked to multiple espionage efforts against a variety of organizations and individuals in the United States, South Korea, and Japan. The outlet noted that the U.S. Department of Homeland Security previously claimed that Kimsuky is “most likely tasked by the North Korean regime with a global intelligence-gathering mission.”