Gary Gensler, US SEC Chair, Discusses AI, Fraud, and Investor Protection in Securities Law

On 10 October 2024, Gary Gensler, the Chair of the United States Securities and Exchange Commission (US SEC), delivered a statement regarding the risks of fraud and deception in the context of artificial intelligence and its application in finance. His remarks discussed the timeless nature of fraud under US securities law and how new tools, such as AI, present both opportunities and risks. Gensler focused on the evolving challenges AI poses for investor protection, with a specific emphasis on programmable, predictable, and unpredictable harm.

Gensler began by referencing a key historical figure in computer science, Alan Turing, and his famous 1950 question, “Can machines think?” He used this reference to highlight the relevance of Turing’s question in today’s world of securities law, particularly regarding fraud and manipulation. Gensler noted that although AI represents an advanced tool for today’s market participants, fraud remains fraud under US securities law, regardless of the tools used to commit it.

The first category of AI-related harm Gensler discussed was Programmable Harm. He explained that this type of harm occurs when someone uses an algorithm specifically to manipulate or defraud the public. Gensler was clear in stating that if AI models are intentionally programmed to deceive, this constitutes fraud under US securities law, just as it would if a human actor engaged in such behavior. The risks associated with programmable harm are relatively straightforward, according to Gensler, because they involve deliberate acts of fraud executed through AI.

The discussion became more nuanced as Gensler moved to the second category: Predictable Harm. This category addresses situations where the harm is not explicitly programmed, but where someone deploying an AI model recklessly or knowingly disregards foreseeable risks. Gensler stressed the importance of acting reasonably when using AI in finance. He likened predictable harm to traditional violations such as front-running (trading ahead of customers) and spoofing (placing fake orders), both of which are illegal under US securities law. The core idea is that even if an AI model is not intentionally designed to defraud, firms deploying such models must ensure that adequate guardrails are in place to mitigate foreseeable risks.

As AI models become more complex, self-learning, and adaptive, Gensler acknowledged that the risks associated with AI may evolve. He pointed out the issue of AI hallucination, where AI models generate inaccurate or misleading results. Despite these technological complexities, Gensler reiterated that firms must still be held accountable for ensuring their AI models operate within the bounds of US securities law and do not expose investors to undue risks.

The third category Gensler discussed was Unpredictable Harm. This category involves risks arising from the deployment of AI models where the harm is not foreseeable, owing to the self-learning nature of the AI system. Gensler recognized the challenges of holding firms accountable for harms that fall outside the realm of predictability. He suggested that while courts may ultimately address these cases, current US securities law already covers much of the programmable and predictable harm that AI models can cause.

Gensler stressed that regardless of how AI evolves, companies deploying these models need to put in place appropriate safeguards. Whether the harm is predictable or not, investor protection remains paramount. He highlighted that companies should not rely on the evolving nature of AI as an excuse for negligence.

Historical Perspective and SEC’s Role

Gensler concluded his remarks by invoking Joseph Kennedy, the first Chair of the SEC and the father of President John F. Kennedy. He quoted Joseph Kennedy’s famous statement from 90 years ago: “The Commission will make war without quarter on any who sell securities by fraud or misrepresentation.” Gensler noted that this principle remains relevant today, extending even to those who deploy AI models in finance. The message was clear: fraud, whether carried out by humans or through AI models, will not be tolerated by the US SEC.

(Source: https://www.sec.gov/newsroom/speeches-statements/gensler-transcript-fraud-deception-artificial-intelligence-101024, https://youtu.be/Tym3pO261Gc)