
On 19 March 2025, the United States Commodity Futures Trading Commission (US CFTC) published an advisory titled “Criminals Increasing Use of Generative AI to Commit Fraud.” The advisory, released by the US CFTC’s Office of Customer Education and Outreach (OCEO), discusses how criminals are leveraging advanced artificial intelligence tools to create highly deceptive scams. From deepfake videos and manipulated live-stream calls to forged financial documents and fake trading platforms, fraudsters are using AI to make their scams more realistic and convincing than ever before.
The advisory details how AI-generated images, voices, videos, and live-streamed video chats are being used to scam individuals and businesses. Fraudsters are also deploying AI-powered chatbots and social media profiles to gain victims’ trust, solicit investments, and facilitate financial fraud. The advisory cites an FBI public service announcement that warns of AI’s increasing use in relationship investment scams and identity fraud.
According to the US CFTC, criminals are using AI tools to improve language translations, correct grammatical errors, and enhance website functionality to make their schemes appear more convincing. These tactics have enabled fraudsters outside the United States to target US residents more effectively, eliminating many of the common red flags, such as awkward language and poorly built websites, that previously helped expose fraudulent activity.
The advisory outlines several deceptive techniques criminals are using: AI-generated fake identities with realistic photos and videos used in dating scams and social media fraud; forged government and financial documents created with AI to deceive victims; manipulated real-time video calls in which scammers alter their facial features and voices using smartphone apps or open-source software; and AI-powered trading scams in which fraudulent platforms mimic legitimate financial services to steal investors’ money.
The US CFTC advises the public to take proactive steps to protect themselves from AI-generated fraud: examine AI-generated images and videos for inconsistencies such as distorted hands or unnatural facial movements; listen for anomalies in AI-generated voices, including unnatural tone or inconsistent speech patterns; tighten social media privacy settings to limit exposure to potential scammers; avoid unsolicited messages, phone calls, or social media invitations from unknown individuals; never send cryptocurrency or financial assets to individuals met only online or over the phone; and refrain from sharing sensitive personal or financial information with online contacts.
The advisory reinforces the US CFTC’s ongoing efforts to combat financial fraud in an evolving digital landscape, providing practical guidance to help individuals and investors identify AI-generated scams and mitigate the risks associated with fraudulent online activities.
The US CFTC’s warning aligns with broader regulatory efforts to address AI-enabled financial crimes. As AI fraud becomes more sophisticated, federal agencies, including the FBI and US CFTC, are intensifying surveillance and enforcement actions to protect consumers and market participants. The advisory urges victims of fraud to report their cases to the US CFTC at cftc.gov/complaint or the FBI at ic3.gov, reinforcing the importance of collective action against AI-driven scams.
(Sources: https://www.cftc.gov/PressRoom/PressReleases/9056-25, https://www.cftc.gov/sites/default/files/2025/03/AI_fraud.pdf)