When the CEO Isn’t Real: The Rise of Deepfake Scams Across Work Platforms

Deepfake scams are no longer theoretical. They are targeting real companies and real platforms, and they have already been used against teams, individual professionals, and C-level executives across industries. And they’re not slowing down.

With AI video and voice tools becoming more realistic, attackers now use highly believable impersonations to mislead people and gain access to money, credentials, or internal systems. The threat is ongoing, and recent examples show that even well-known public figures are being used as entry points for large-scale deception.

Whether you’re a CEO, a team lead, or a working professional, one thing’s clear: it’s time to stop treating faces and voices as proof of identity.

Confirmed AI Deepfake Scam Incidents

  • Deepfake of YouTube CEO Neal Mohan (March 2025): Scammers circulated an AI-generated deepfake video of YouTube CEO Neal Mohan to mislead content creators. The fake video falsely announced changes to YouTube’s monetization policy as part of a phishing scheme. It was shared through private messages and emails that mimicked official YouTube communication, directing creators to a fake login page where they unknowingly entered their credentials. YouTube later confirmed it had not published any such video and warned users to be cautious. The deepfake video has since been removed from the internet.

  • Deepfake Impersonation of WPP CEO Mark Read (2024): Another incident involved fraudsters using deepfake audio/video to impersonate Mark Read, the CEO of advertising giant WPP. The attackers first contacted a senior WPP employee via WhatsApp, using a fake profile with Mark Read’s public image. They then invited the employee to a Microsoft Teams meeting, where a voice clone and video manipulated from real footage created the illusion of a live call with the CEO and another executive. During the call, they attempted to convince the employee to route money for a fake business venture. The fraud was stopped before funds were transferred.

  • Deepfake Scam Targeting Arup – ~$25 Million Loss (Early 2024): One of the costliest confirmed deepfake scams hit Arup, a UK-based engineering firm, costing it about $25 million. An Arup employee received an email invitation, apparently from the company’s CFO, to join a confidential video call. The meeting featured AI-generated faces and voices of Arup executives, staged to look like an internal briefing. Believing it was legitimate, the employee transferred HK$200 million (≈£20 million) to accounts controlled by the attackers before the fraud was discovered. Arup confirmed the incident and reported it to Hong Kong police.

What This Means for Teams and Professionals

These aren’t isolated events. They mark a shift in how social engineering is done: from clumsy phishing emails to polished, AI-generated impersonations that unfold in real time on trusted platforms.

The impersonated target could be anyone your team is inclined to trust: a CEO, a senior executive, or even a platform’s official representative. What used to be easy to dismiss is now much harder to question.

Whether you run a business or work as part of a team, deepfakes undermine the most basic layer of communication: knowing who you’re talking to.

And the risks are already playing out: creators have been tricked into handing over account credentials, and employees have transferred millions during fake internal meetings. These scams are costing companies both money and control over sensitive information.

6 Tips to Stay EXTRA SAFE from Deepfake Impersonation

Deepfake scams don’t break in through systems — they use trust as the entry point. If a request comes through a video call, voice note, or platform DM, don’t rely on faces, titles, or tone of voice. Rely on verification. Here’s what to do now, before the next deepfake targets your team:

  • Don’t treat video as proof: Whether it’s a CEO or a colleague, verify identity through a separate, known channel before acting on urgent requests.

  • Always double-check on another platform: If something feels off — even slightly — confirm it via a different app, number, or method you’ve previously used with that person.

  • Check message history: If the person reaching out is someone you’ve spoken to before, make sure the account still contains your past messages. A new or empty chat can be a red flag.

  • Pause before action: Money transfers, login prompts, “quick syncs” — take a moment to assess whether anything feels out of the ordinary.

  • Secure internal workflows: Limit who can request payments or access sensitive information, and add verification steps and approval gates that create friction for attackers (see the approval-gate sketch after this list).

  • Use privacy-first tools for sensitive communication: When the topic matters, choose a platform designed to leave no trace. With tools like EXTRA SAFE, your conversations happen directly between devices: no central server in between, no personal data collected, and no information stored after the call ends. Encryption keys are generated on your device and never leave it, keeping your meeting fully under your control (a simplified key-exchange sketch also follows this list).
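
To make the approval-gate tip concrete, here is a minimal sketch in Python. Everything in it is hypothetical (the names, the two-approver threshold, the helper functions); it simply illustrates the rule that no single request, however convincing the face or voice behind it, should be able to move money on its own.

    from dataclasses import dataclass, field

    @dataclass
    class TransferRequest:
        requester: str      # who asked for the transfer (e.g. a "CEO" on a video call)
        amount: int         # requested amount
        destination: str    # target account
        approvals: set = field(default_factory=set)

    REQUIRED_APPROVERS = 2  # illustrative policy: two independent people must sign off

    def approve(request: TransferRequest, approver: str) -> None:
        # Each approver is expected to verify the request out of band first,
        # e.g. by calling the requester on a known phone number.
        if approver == request.requester:
            raise ValueError("A requester cannot approve their own transfer")
        request.approvals.add(approver)

    def execute(request: TransferRequest) -> None:
        # The gate: nothing moves until enough independent approvals exist.
        if len(request.approvals) < REQUIRED_APPROVERS:
            raise PermissionError("Transfer blocked: not enough approvals")
        print(f"Transferring {request.amount} to {request.destination}")

    # An "urgent" request from a convincing deepfake call cannot move money alone:
    req = TransferRequest("ceo@example.com", 25_000, "ACC-123")
    approve(req, "finance.lead@example.com")
    approve(req, "controller@example.com")
    execute(req)

The value of the friction is exactly what it costs the attacker: a deepfake can fool one person on one call, but it struggles to fool two people on two separate, pre-agreed channels.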
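
And for readers curious what "keys are generated on your device and never leave it" looks like in practice, below is a minimal sketch of standard X25519 key agreement using Python's cryptography package. This is not EXTRA SAFE's actual implementation, just the general technique: each device creates its private key locally, shares only the public half, and both sides derive the same call secret without any server ever seeing it.

    from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey

    # Each device generates its own private key locally; the private half
    # never leaves the device that created it.
    alice_private = X25519PrivateKey.generate()
    bob_private = X25519PrivateKey.generate()

    # Only the public halves are exchanged over the network.
    alice_public = alice_private.public_key()
    bob_public = bob_private.public_key()

    # Both sides compute the same shared secret independently; no server
    # in the middle ever holds the key material.
    alice_secret = alice_private.exchange(bob_public)
    bob_secret = bob_private.exchange(alice_public)

    assert alice_secret == bob_secret  # identical keys, derived device-side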

About #EXTRASAFEcheck

New security risks pop up every day, spreading faster than ever. From AI flaws to data leaks, even the most popular apps can pose hidden threats, affecting both teams and individual users. That’s why our monthly review brings you the most important updates to keep you informed and protected. Follow #EXTRASAFEcheck to spot risks early and make safer online choices.