
AI, deepfakes and phishing

How AI is making phishing easier across email, video and social media.


How generative AI makes scammers alarmingly convincing

“Taylor Swift” giving away cookware. “Tom Hanks” shilling for a dental plan. “Grandchildren” asking for emergency funds. The rise of generative AI has been a boon for many people, and, unfortunately, that includes scammers and hackers. Now victims are making their losses public, celebrities are speaking out about their digital impersonators and government agencies are weighing harsher measures to fight the rising tide of AI-powered security hazards.

Moving pictures, faking conversations

Celebrity video deepfakes have taken the media spotlight, of course. Most recently, an AI clone of Taylor Swift appeared to give away Le Creuset cookware — a ploy to get customers to take surveys that collected their data for an unknown party. Le Creuset issued an apology and a warning as it worked to take down the ads. Tom Hanks had to speak out last October, when a fake dental plan used AI to create a video double of him.

The problem goes beyond consumers and celebrities. In February, a multinational firm’s Hong Kong office lost $25 million after scammers staged a video call using deepfakes of company employees, including the chief financial officer. AI is adding a whole new dimension to phishing: while many people are alert to suspicious phrasing or email addresses, fewer think to doubt the evidence of their own eyes.

Voice, text and images

AI scams don’t have to involve video. Voice cloning can convince people that a request for money or other help comes from someone they know, or that a caller really represents a government agency with the authority to ask for personal information. AI-generated voices can even defeat voice-based authentication, giving hackers access to accounts where they can harvest data, reroute direct deposits and otherwise profit at their victims’ expense.

Con artists on dating sites have started using AI to improve their catfishing schemes as well, creating convincing images, text messages and videos to lure people into “relationships” designed to extract money or personal information. Others use the technology to churn out far more fake job postings than ever before, then harvest the data applicants submit or demand payment for “recruitment” services.

The classic phishing email has gotten an AI boost as well: generative AI can produce text without the spelling and grammatical errors that so often give away human-written phishing attempts.

Scope and solutions

While the number of AI-based social media scams the FTC tracks remains relatively low, it grew sevenfold between February 2023 and February 2024, and experts say many scams never reach the federal complaint stage because victims contact the platforms instead. In one case, someone lost $7,000 to a fake Elon Musk ad: the idea of needing a fake Elon Musk to defraud people may strike many as ironic, but it illustrates the problem’s scope.

The FTC has begun tackling AI in its anti-fraud efforts with a contest aimed at helping people recognize voice clones, the power to sue entities that impersonate government agencies and a proposed rule that would make companies liable when their AI tools are used in deepfake scams. AI-based fraud probably isn’t going anywhere, though, even if that proposal takes effect: businesses and consumers alike must stay alert and double-check anything that seems suspicious, even if it ostensibly comes from their grandchildren or Taylor Swift.