Overview
In June 2025, cybersecurity experts at Huntress uncovered a highly targeted social engineering operation linked to the North Korean hacking group BlueNoroff, a known affiliate of the Lazarus Group. The attackers used AI-generated deepfake videos of corporate executives during fraudulent Zoom meetings to deceive an employee at a cryptocurrency foundation into installing malware on their macOS system. Security analysts are warning that similar tactics could be used against other organizations relying on remote collaboration tools.
In this case, the attack starts with a Telegram message from someone posing as a legitimate business associate, including a Calendly link to schedule a meeting. That link leads to a fake Zoom site controlled by the attackers. During the video call, the victim is shown AI-generated deepfake footage of their company’s leadership and is prompted to install a Zoom plugin to resolve a supposed microphone issue. In reality, the "plugin" is a malicious AppleScript that downloads and runs a series of harmful files, including a backdoor known as Root Troy V4. The malware grants the attacker remote access, enabling data theft and the deployment of additional payloads; notably, on Apple Silicon Macs it relied on Rosetta 2 to run x86-64 binaries. The Huntress writeup contains a detailed analysis of the intrusion.
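Because the lure hinged on a lookalike Zoom domain, one simple defensive layer is to validate meeting links before they ever reach a browser. Below is a minimal, illustrative Python sketch of that idea; the allowlist, function name, and example URLs are assumptions for demonstration and are not drawn from the Huntress report:

```python
from urllib.parse import urlparse

# Illustrative allowlist of legitimate Zoom hostname suffixes.
# Assumption: a real deployment would source this from vendor
# documentation and organizational policy, not hardcode it here.
LEGITIMATE_ZOOM_SUFFIXES = ("zoom.us", "zoom.com")

def is_plausible_zoom_link(url: str) -> bool:
    """Return True only if the URL's hostname is an allowlisted Zoom
    domain or a subdomain of one. Lookalike hosts such as
    'zoom.us.fake-meet.example' fail the check because matching is
    done on full hostname labels, not substrings."""
    host = (urlparse(url).hostname or "").lower()
    return any(
        host == suffix or host.endswith("." + suffix)
        for suffix in LEGITIMATE_ZOOM_SUFFIXES
    )

print(is_plausible_zoom_link("https://us02web.zoom.us/j/123456789"))   # True
print(is_plausible_zoom_link("https://zoom.us.fake-meet.example/j/1")) # False
```

A check like this catches only the crudest lookalike domains; it is a complement to, not a substitute for, endpoint protection and user training.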
Why it matters:
This incident demonstrates the growing danger of AI-driven impersonation in cyberattacks, especially as remote work and video conferencing remain common. The targeting of a cryptocurrency foundation aligns with North Korea’s history of cyber theft in the crypto space, but deepfakes make phishing and malware campaigns more convincing, and harder to detect, for every organization. The attack also highlights the rise in targeted campaigns against macOS, a platform many users trust for its security reputation. Combating these well-crafted threats requires not only technical safeguards, such as endpoint protection and domain monitoring, but also employee training to recognize sophisticated social engineering tactics. As deepfake tools become more accessible, organizations must evolve their security practices to defend against increasingly realistic and deceptive attacks.