Published: January 28th, 2026
A North Korea-linked hacking group is using artificial-intelligence-generated video calls to impersonate trusted contacts and trick cryptocurrency workers into installing malware, according to cybersecurity researchers.
The campaign, which relies on staged Zoom or Microsoft Teams calls and deepfake video, is part of a broader surge in AI-enabled impersonation scams that have driven crypto-related losses to record levels this year.
Western governments say the stolen coins are being funnelled back to Pyongyang, helping finance its weapons programmes.
The attacks came to light this week after Martin Kuchař, a co-founder of the BTC Prague conference, revealed that hackers had compromised his Telegram account and used it to lure contacts into a fake video call.
During the call, the hackers posed as a familiar colleague and persuaded the victim to install what was presented as a routine fix for an audio problem. The file was, in fact, malware that granted full access to the victim's computer.
The way the attacks work is disarmingly simple. An initial approach is made through Telegram or another messaging app, often from an account that appears legitimate or has already been compromised. The attacker proposes a quick call to discuss a work-related matter. Once the video link opens, the victim is greeted by a face they recognise, or believe they do.
Midway through the conversation, a “problem” with the audio crops up. A plugin must be installed. A file must be opened. The fix is urgent, routine and familiar. Once the victim complies, the attacker no longer needs persuasion. They have control.
According to Huntress, a cybersecurity firm that documented the technique last year, the malware used in such attacks is tailored to Apple computers, which are common among developers. Disguised as a Zoom-related utility, it launches a malicious AppleScript that initiates a multi-stage infection. The script disables shell history to cover its tracks, checks for or installs Rosetta 2 on newer Apple Silicon machines, and repeatedly prompts the user for their system password until elevated privileges are granted.
What follows is a looting operation. Persistent backdoors are installed. Keystrokes and clipboard contents are logged. Cryptocurrency wallets are searched for and drained. Messaging accounts are taken over and repurposed to lure the next victim. In Mr Kuchař's case, his hijacked Telegram account was soon sending out the same meeting requests that had ensnared him.
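For defenders, the persistence step is one of the easier parts of that chain to look for. The short script below is a hypothetical triage sketch, not a detection rule published by Huntress: it assumes the backdoor registers itself as a macOS LaunchAgent, and the keyword heuristics are made up for illustration.

```python
#!/usr/bin/env python3
"""Hypothetical macOS triage sketch. Assumes persistence via a user
LaunchAgent and uses assumed keyword heuristics; illustrative only,
not an indicator list from Huntress or SlowMist."""
import plistlib
import time
from pathlib import Path

SUSPECT_TERMS = ("zoom", "osascript", "audio")  # assumed keywords, not published IOCs
RECENT_DAYS = 14                                # only inspect agents added recently

def flag_recent_agents(agent_dir: Path) -> None:
    cutoff = time.time() - RECENT_DAYS * 86400
    for plist_path in agent_dir.glob("*.plist"):
        if plist_path.stat().st_mtime < cutoff:
            continue  # ignore long-established agents
        try:
            with plist_path.open("rb") as fh:
                agent = plistlib.load(fh)
        except Exception:
            continue  # unreadable plist: skip rather than guess
        args = " ".join(map(str, agent.get("ProgramArguments", []))).lower()
        if any(term in args for term in SUSPECT_TERMS):
            print(f"review manually: {plist_path}\n  -> {args}")

if __name__ == "__main__":
    flag_recent_agents(Path.home() / "Library" / "LaunchAgents")
```

A check like this would only surface candidates for human review; a real infection would also warrant inspecting system-level daemons and network activity.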
Huntress and other security researchers attribute the campaign with “high confidence” to a North Korea-linked advanced persistent threat tracked as TA444, also known as BlueNoroff, one of several aliases used by subgroups within the Lazarus umbrella.
Since at least 2017, Lazarus has focused heavily on cryptocurrency exchanges, developers and infrastructure providers, viewing the sector as both lucrative and comparatively defenceless.
A recent analysis by SlowMist, a blockchain-security firm, suggests that the attack disclosed by Mr Kuchař is consistent with broader Lazarus operations. No single clue is enough to raise the alarm on its own. Newly created video-conferencing accounts, look-alike Zoom or Teams domains, highly scripted conversations and a rapid push to install a so-called fix combine to paint a fuller picture of criminal intent.
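One of those clues, the look-alike meeting domain, is easy to illustrate. The sketch below is an assumption-laden toy, not SlowMist's methodology: the allow-list of legitimate hosts is invented for the example, and the point is simply that an exact-hostname check rejects domains that merely resemble the real services.

```python
"""Toy check for one weak signal: meeting links on look-alike domains.
The allow-list below is an assumption for illustration only."""
from urllib.parse import urlparse

ALLOWED_SUFFIXES = ("zoom.us", "teams.microsoft.com")  # assumed legitimate hosts

def on_official_domain(link: str) -> bool:
    host = (urlparse(link).hostname or "").lower()
    # Accept the bare domain or a genuine subdomain, never a host that just contains it.
    return any(host == s or host.endswith("." + s) for s in ALLOWED_SUFFIXES)

print(on_official_domain("https://us02web.zoom.us/j/12345"))    # True
print(on_official_domain("https://zoom-audio-fix.com/j/12345")) # False: look-alike
```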
The malware itself shows signs of reuse across campaigns. Install scripts, wallet-targeting logic and persistence mechanisms recur with minor variations. The aim is not espionage but theft: clean, fast and scalable. In that sense, deepfake video is an enabling tool, lowering the psychological defences of the target just long enough for the familiar social-engineering playbook to work.
For years, cybersecurity advice has rested on a handful of sturdy assumptions: verify the sender, check the domain, distrust unexpected attachments. Video calls were often treated as a higher bar of authenticity. It is harder to fake a face than an email address.
That assumption no longer holds. Generative AI has made real-time video impersonation cheap enough and convincing enough to deploy at scale. According to SlowMist, images and video can no longer be treated as reliable proof of identity. The firm argues that digital content should instead be cryptographically signed by its creator, with signatures protected by multi-factor authentication.
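What that recommendation might look like in practice is sketched below: a minimal signing-and-verification example, assuming Python's cryptography package and Ed25519 keys. The creator signs the content and a recipient verifies it against a public key obtained out of band; the multi-factor protection SlowMist describes would sit around the private key and is not shown.

```python
"""Minimal content-signing sketch, assuming the 'cryptography' package.
Key distribution and the MFA protecting the private key are out of scope."""
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

# The creator generates a keypair once and publishes the public key out of band.
private_key = Ed25519PrivateKey.generate()
public_key = private_key.public_key()

content = b"example content to be attested"  # placeholder payload
signature = private_key.sign(content)

# A recipient verifies before trusting the content; verify() raises if it was tampered with.
try:
    public_key.verify(signature, content)
    print("signature valid")
except InvalidSignature:
    print("signature invalid: do not trust this content")
```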
Even that may not be sufficient. These attacks rely as much on narrative as on technology. They follow familiar social patterns: urgency, routine troubleshooting, professional politeness. The setting is deliberately bland. Suspicion is disarmed by the sense of normality.
The approach is working. Chainalysis, a blockchain-analytics firm, estimates that AI-driven impersonation scams pushed crypto-related losses to a record $17 billion in 2025. Deepfake video, voice cloning and fabricated online identities now feature prominently in major thefts, particularly those targeting individuals with privileged access to code, wallets or internal systems.
Governments aren't ignoring the problem. In December, South Korea said it was considering revisiting its sanctions framework on North Korea, citing concerns that cryptocurrency theft is helping to bankroll Pyongyang's nuclear and missile programmes. The remarks followed a new round of American sanctions linking North Korean cyber operations directly to weapons financing.
North Korea's hackers are not uniquely inventive, and criminal groups elsewhere are experimenting with the same tools. What sets Lazarus apart is discipline and persistence. Campaigns are reused, refined and redeployed until they work. When defences improve, tactics shift.
Artificial intelligence tilts the balance further. It automates persuasion, personalisation and scale, three elements that social engineering once struggled to combine. Defenders can respond with better detection and authentication, but the underlying contest remains asymmetric. Attackers need only succeed once.
For the crypto industry, the implications are sobering. Trust, long mediated by code and cryptography, is once again being negotiated at the human level. The face on the screen may look familiar.
The voice may sound right. But in an era of synthetic reality, even that may be part of the con. One takeaway from the incident is that technology has not removed the problem, only moved it. Security is still a matter of wallets and keys, but it is now also a matter of stories, habits and assumptions, and of how easily they can be exploited. Seeing, it turns out, is no longer believing.