US CFTC Commissioner Kristin Johnson’s Remarks on “Exploring AI Risks and Opportunities Across the Digital and Cyber Landscape” Signal a Compliance Shift in Financial Markets Around GenAI, Agentic AI and Cyber Risk Management

On 29 May 2025, the United States Commodity Futures Trading Commission (US CFTC) Commissioner Kristin Johnson delivered keynote remarks at the Federal Reserve Bank of Dallas Symposium on “Exploring AI Risks and Opportunities Across the Digital and Cyber Landscape.” Commissioner Johnson outlined the US CFTC’s evolving supervisory priorities regarding the rise of generative AI (GenAI) and agentic AI, placing particular emphasis on the dual nature of AI as both a regulatory opportunity and systemic risk. The address offered detailed insight into how the US CFTC interprets AI’s growing footprint in financial supervision, cybersecurity resilience, third-party risk governance, and operational compliance—signalling a proactive but cautious regulatory direction.

The US CFTC has made clear that AI is no longer speculative; it is operational. Commissioner Johnson categorised AI development into three strategic epochs: traditional machine learning, GenAI, and now agentic AI. While recognising the automation potential in compliance and market monitoring, she also signalled the agency’s preference for use-case-specific assessments. Importantly, the US CFTC is not proposing blanket regulation but a tiered and dynamic supervisory lens, guided by whether an application serves compliance objectives or exposes market integrity to automation risk. Regulated entities should prepare for differentiated US CFTC oversight based on the type of AI model deployed and its functional integration into regulatory or trading operations. This implies readiness to provide AI governance documentation, model explainability, and error-escalation protocols in case of autonomy failures.

GenAI’s Regulatory Promise Is Overshadowed by Its Weaknesses in High-Stakes, Data-Driven Environments

The US CFTC acknowledges the value of GenAI in simplifying compliance operations, particularly in regulatory reporting, policy review, and customer-interaction frameworks. However, Commissioner Johnson directly identified hallucinations, model opacity, and non-deterministic outputs as structural flaws that limit regulatory reliance. She cited real-world failures of GenAI models under dynamic environmental conditions, such as rerouting traffic in changing cityscapes, drawing a parallel to financial compliance under evolving regulations. Compliance teams should note that the US CFTC is unlikely to view GenAI outputs as reliable without robust guardrails. Institutions using GenAI for AML/KYC, stress testing, or disclosures should employ independent validation layers, deterministic safeguards, and audit trails for all outputs.

Agentic AI Is a Compliance Force Multiplier, But Only When Architected for Oversight

Unlike GenAI, agentic AI can act independently, adapt to data, and execute tasks without prompts, making it a potent tool for fraud prevention, continuous monitoring, and autonomous transaction flagging. Commissioner Johnson spotlighted agentic AI’s ability to enhance real-time credit scoring, automate regulatory filings, and improve compliance interaction with customers. She welcomed these capabilities but underscored the non-negotiable need for input integrity and governance transparency. Financial firms must classify agentic AI implementations as “regulated systems” under internal compliance architectures. Risk and compliance functions should document: (a) model training data provenance, (b) override protocols, (c) error learning loops, and (d) AI output supervision parameters, in line with the US CFTC’s expectation of proactive governance.
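The four documentation items above can be operationalised as a structured record that compliance functions maintain per deployed agent. The sketch below is purely illustrative, with hypothetical field names, not a US CFTC-prescribed schema:

```python
from dataclasses import dataclass, field

# Hypothetical governance record mirroring items (a)-(d) above.
# Field names are illustrative, not drawn from any regulatory text.
@dataclass
class AgenticAIGovernanceRecord:
    model_id: str
    training_data_provenance: list                    # (a) data sources and lineage
    override_protocol: str                            # (b) who can halt/override, and how
    error_learning_loop: str                          # (c) how flagged errors feed retraining
    output_supervision_parameters: dict = field(default_factory=dict)  # (d) thresholds, review cadence

    def is_complete(self) -> bool:
        """A record is audit-ready only when every element is documented."""
        return all([
            self.training_data_provenance,
            self.override_protocol.strip(),
            self.error_learning_loop.strip(),
            self.output_supervision_parameters,
        ])

record = AgenticAIGovernanceRecord(
    model_id="txn-flagging-agent-v2",
    training_data_provenance=["internal-trades-2020-2024", "sanctions-list-feed"],
    override_protocol="Compliance desk may suspend agent via two-person approval",
    error_learning_loop="False positives reviewed weekly; labels fed to retraining queue",
    output_supervision_parameters={"max_autonomous_actions_per_hour": 100},
)
print(record.is_complete())  # True only when (a)-(d) are all populated
```

A completeness check of this kind gives compliance officers a simple, auditable gate before an agent is promoted into a regulated workflow.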

US CFTC Flags the Autonomy-Risk Feedback Loop: Poor Inputs Will Poison AI Outputs at Scale

Commissioner Johnson warned that agentic AI, if trained on corrupted or biased data, can perpetuate false assumptions across autonomous actions. In the absence of “humans-in-the-loop,” these models can enter self-reinforcing feedback loops of faulty compliance or trading actions. Additionally, vulnerabilities such as data leakage, privacy breaches, model inference attacks, and hallucination propagation present amplified systemic threats. The US CFTC is likely to evaluate AI systems through the lens of operational resilience and model integrity. Firms should adopt layered governance covering input validation, periodic adversarial testing, model risk management, and kill-switch design for AI agents in mission-critical functions.
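Two of the controls listed above, input validation and a kill switch, can be combined so that corrupted inputs cannot drive an unattended feedback loop: repeated validation failures trip the switch and force human escalation. The following is a minimal sketch under assumed names (`AgentGuard`, `validate_input` are hypothetical), not an implementation of any US CFTC requirement:

```python
# Illustrative layered-governance sketch for an autonomous agent:
# input validation plus a kill switch that halts the agent after
# repeated faults, preventing a self-reinforcing error loop.
class AgentGuard:
    def __init__(self, max_consecutive_faults: int = 3):
        self.max_consecutive_faults = max_consecutive_faults
        self.consecutive_faults = 0
        self.halted = False  # kill-switch state

    def validate_input(self, record: dict) -> bool:
        # Minimal input-integrity check: required fields present and typed.
        return isinstance(record.get("amount"), (int, float)) and bool(record.get("counterparty"))

    def act(self, record: dict) -> str:
        if self.halted:
            return "HALTED: escalate to human reviewer"
        if not self.validate_input(record):
            self.consecutive_faults += 1
            if self.consecutive_faults >= self.max_consecutive_faults:
                self.halted = True  # trip the kill switch
            return "REJECTED: failed input validation"
        self.consecutive_faults = 0
        # Toy autonomous decision: flag large transactions for review.
        return "FLAG" if record["amount"] > 10_000 else "PASS"

guard = AgentGuard()
print(guard.act({"amount": 25_000, "counterparty": "ACME"}))  # FLAG
for _ in range(3):
    guard.act({"amount": None})  # corrupted inputs trip the switch
print(guard.act({"amount": 100, "counterparty": "ACME"}))  # HALTED: escalate to human reviewer
```

The design point is that the halt condition is structural, so the agent cannot keep acting on poisoned inputs while awaiting periodic review.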

AI-Powered Cyber Threats Are Rising: US CFTC Signals Supervisory Convergence with SupTech Adoption

Commissioner Johnson detailed how AI is now central to both cybersecurity threats and their mitigation. She cited rising AI-enhanced attacks such as deepfake-based fraud, synthetic identity generation, and AI-assisted phishing and malware. At the same time, the US CFTC views AI as a foundational element in SupTech architecture, capable of detecting fraud patterns, real-time market manipulation, and system anomalies. Regulated firms should prepare for dual obligations: (i) integrating AI into their cybersecurity frameworks, and (ii) documenting how they defend against adversarial AI exploitation. The US CFTC is likely to favour institutions that demonstrate “AI-aware cyber hygiene,” combining threat detection with pre-emptive architecture.

SupTech Is No Longer Optional: US CFTC May Soon Mandate Supervisory Technology Readiness

The US CFTC has positioned itself as both a user and regulator of AI. Commissioner Johnson noted that 59% of global supervisory authorities now use SupTech tools. The implication is clear: the US CFTC is aligning itself with a global supervisory transformation and is likely to expand expectations around regulated firms’ internal RegTech capabilities. Firms should align internal compliance transformation efforts with the US CFTC’s SupTech trajectory by investing in data lakes, automated reporting pipelines, fraud detection AI, and RegTech integration for document review and transaction monitoring. Capability disclosures may soon become routine.

Third-Party Risk Management Becomes Structural: US CFTC Proposes Embedded TPRM for DCOs

Commissioner Johnson announced the recommendation of the CFTC’s Market Risk Advisory Committee (MRAC) to enhance Rule 39.18 with formal Third-Party Risk Management (TPRM) requirements. This would compel Derivatives Clearing Organizations (DCOs) to implement TPRM programmes with full lifecycle oversight, including onboarding, performance monitoring, and exit strategies. The goal is to regulate not only the systems within firms, but also the AI ecosystems surrounding them. Expect regulatory expansion of TPRM across swap dealers, FCMs, and critical infrastructure providers. Compliance officers should pre-emptively map their third-party AI service dependencies (such as cloud providers, LLM vendors, and APIs) and introduce risk-weighted controls, SLA-based accountability, and contingency planning aligned with proposed US CFTC frameworks.

Commissioner Johnson’s remarks shed light on the US CFTC’s forward-leaning yet prudential posture toward AI. The Commission is neither resistant to innovation nor blind to its risks. Instead, it is building a supervisory framework that aligns AI usage with financial stability, fairness, and technological accountability. Markets, in her words, must be “fit-for-purpose”, a mandate that applies equally to infrastructure, innovation, and investor protection. Regulated firms should treat this speech as regulatory signalling: AI is now a formal vector of compliance scrutiny. Governance documentation, SupTech integration, adversarial risk testing, and agentic AI lifecycle policies should be embedded into firmwide controls. The US CFTC is laying the foundation for AI-inclusive financial supervision, and it will reward foresight but penalise negligence.

(Source: https://www.cftc.gov/PressRoom/SpeechesTestimony/opajohnson19)