How to protect yourself from AI-created deepfake audio scams involving your children

Against a deep blue background, two similar-looking faces represent the practice of using AI to create deepfakes.
Europol's European Serious Organized Crime Threat Assessment warns that AI is helping to accelerate the rate of crime. The organization writes, "AI is fundamentally reshaping the organized crime landscape. These technologies automate and expand criminal operations, making them more scalable and harder to detect." With that in mind, a different form of deepfake attack is gearing up to scam and rip off more consumers. First, some background.

AI is used by scammers to manipulate you through "social engineering"


AI is increasingly being used in online fraud schemes that manipulate people into taking certain actions through persuasion, deception, and intimidation. This is called social engineering. For example, if you receive a bogus text or email that appears to come from your carrier, the scammers make it sound as though your service will be cut off unless you pay them immediately. They know you will do almost anything to keep your service, and they have manipulated you into making a payment you never owed. In the process, you may also reveal personal data that can be used to access your financial accounts and wipe you out.

Former NSA cybersecurity expert Evan Dornbush told Forbes that AI can help scammers create believable messages faster than ever. With AI, the cost of creating these attacks is much lower. Dornbush said, "AI is decreasing the costs for criminals, and the community needs novel ways to either decrease their payouts, increase their operating budgets, or both."

While most people think of fake videos when they hear the word "deepfakes," the latest battle revolves around faked audio and its use to scam people. Imagine getting a phone call from your daughter saying that she is being held against her will and will only be released if you pay a ransom. Scams like this have already succeeded without AI; now imagine hearing the shaky, scared voice of your daughter, wife, or son, or even one of your parents, describing a terrifying ordeal and begging you to pay whatever the bad guys demand. With AI, these calls can be created even if the relative you thought you just heard pleading for their life is safe at the movies or at home.

How to prevent your family from getting scammed by a deepfake audio attack


Imagine that you are certain the caller was one of your kids or another relative. You'd be on your way to the bank in no time. The FBI has already issued a public service announcement (alert number I-120324-PSA) warning people about these attacks, and the G-men have a great idea to counter them: the next time you speak with a loved one, agree on a secret code word that only the two of you know. Each family member should have their own code word, shared with no one else. This way, if you get a call from your daughter, you can confirm whether that's really her voice coming through your iPhone's speaker or an AI-created deepfake.

Don't put this off, because you never know when you might be the next victim of a deepfake audio attack. If this all sounds familiar, we first reported on the FBI warning back in December. But now, with Europol's warning and the continued development of audio deepfakes, the level of concern is even higher.

That December article drew a comment from a reader who called the creation of a secret word "the most idiotic and cynical advice ever." The commenter worried that, in a moment of stress, your relative might not remember the secret word, leading you to dismiss what might actually be a real ransom call. While that is a legitimate concern, the answer is to choose a code word that is easy to remember even under stress.

If you don't like the idea of creating a code word, listen closely to the call: if your relative's voice sounds robotic, or if certain phrases are repeated out of context, chances are you are listening to a deepfake.

The Honor Magic 7 Pro smartphone, released in January, includes a special deepfake detection feature. About this feature, Honor writes, "The AI Deepfake Detection has been trained through a large dataset of videos and images related to online scams, enabling the AI to perform identification, screening, and comparison within three seconds. If synthetic or altered content is detected by the feature, a risk warning is immediately issued to the user, deterring them from continued engagement with the potential scammers for their safety." Check out our review of the Magic 7 Pro.