Executive Summary
The securities industry is rapidly adopting artificial intelligence (“AI”) and algorithmic technologies for a wide range of functions. Among other applications, broker-dealers and investment advisers are utilizing AI for compliance, risk management, data analytics, algorithmic trading, marketing, and customer service. While regulatory attention to date has primarily focused on AI washing and entity disclosure obligations, regulators have also issued guidance on testing and supervision requirements for applications and business processes incorporating AI. It is that guidance on which this client alert focuses.
As this regulatory landscape continues to evolve, firms must ensure they implement appropriate governance, testing, and oversight protocols relating to their utilization of AI. To this end, the Securities and Exchange Commission (the “SEC”) and the Financial Industry Regulatory Authority (“FINRA”) have provided guidance for navigating these requirements. We explore these topics in more detail below.
The Regulatory Web
As a general premise, the SEC and FINRA apply existing regulatory requirements to AI and algorithmic technologies without implementing requirements addressing such technologies specifically. Regulators have emphasized that their rules are “technologically neutral,” meaning that fundamental obligations for supervision, recordkeeping, and risk management apply regardless of whether firms use manual processes or sophisticated AI systems. This regulatory approach places the burden on firms to ensure their AI implementations comply with existing standards while adapting traditional compliance frameworks to address technology-specific risks, including issues of explainability, bias, and algorithmic drift.
Although the SEC first proposed specific requirements relating to AI and algorithmic technologies in July 2023,1 the agency withdrew this proposal in June 2025, declaring that it does not intend to issue a final rule with respect to the proposal.2 It further noted that if the Commission “decides to pursue future regulatory action” in this area, “it will issue a new proposed rule.”3 Significantly, the SEC's March 2025 roundtable and request for comments on the risks, benefits, and governance of AI in the financial industry signal that new proposed rules could be imminent.4
Additionally, in its Fiscal Year 2025 Examination Priorities,5 the SEC's Division of Examinations specifically noted that if “advisers integrate artificial intelligence (AI) into advisory operations, including portfolio management, trading, marketing, and compliance, an examination may look in-depth at compliance policies and procedures as well as disclosures to investors related to these areas.” This underscores the SEC staff's focus on this topic even under the existing regulatory framework and the many ways in which AI and algorithmic technologies are permeating financial services businesses.
FINRA, for its part, has provided ample guidance in multiple releases with respect to the wide-ranging requirements applicable to a broker-dealer's utilization of AI:
- FINRA Notice 15-09 (March 2015)6 advised that “testing of algorithmic strategies prior to being put into production is an essential component of effective policies and procedures.” Effective algorithmic trading policies should include comprehensive testing protocols that validate code functionality under various market conditions, maintain independent quality assurance processes, document all testing procedures and results, and ensure development environments remain separate from live production systems (see the first sketch following this list).
- FINRA RegTech Report (September 2018)7 defines RegTech as “new and innovative technologies designed to facilitate market participants' ability to meet their regulatory compliance obligations” and clarifies that firms must maintain “reasonable supervisory policies and procedures related to supervisory control systems” in connection with such technologies under FINRA Rules 3110 and 3120. The guidance emphasizes that outsourcing compliance functions to RegTech vendors and technology “does not relieve [firms] of their ultimate responsibility for compliance,” and recommends establishing cross-functional technology governance structures, conducting vendor due diligence, ensuring data privacy compliance, and integrating security risk management into RegTech implementation.
- FINRA AI Report (June 2020)8 clarifies that, to comply with FINRA Rules 3110 and 3120, firms should “test[] [AI technologies] across various stages of their lifecycles,” establish a “cross-disciplinary technology governance group” with representation from business, technology, information security, compliance, legal, and risk management functions, and maintain “a detailed inventory of all AI models, along with any assigned risk ratings” (see the second sketch following this list). The report also confirms that existing business continuity requirements apply to AI systems, suggesting that firms “establish back-up plans in the event an AI-based application fails (e.g., due to a technical failure or an unexpected disruption).”
- FINRA Regulatory Notice 24-09 (June 2024)9 calls for “reasonably designed supervisory system[s]” for AI tools, with specific focus on “technology governance, reliability and accuracy of the AI model,” and notes that the “rules applicable to Gen AI use will depend on how a member firm deploys the technology.”
- FINRA 2025 Annual Regulatory Oversight Report (January 2025)10 states that because FINRA rules are technologically neutral, they apply to the utilization of Gen AI tools. The report recommends that firms supervise the use of Gen AI at both the individual and enterprise levels, identify and mitigate associated risks such as those related to accuracy or bias, and ensure that their cybersecurity programs are robust enough to identify and address cybersecurity risks associated with Gen AI use, including strategies to anticipate how threat actors may themselves utilize AI or Gen AI.
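To make the testing expectations in FINRA Notice 15-09 concrete, the following is a minimal sketch of a pre-production validation harness that exercises an algorithmic strategy under several simulated market conditions and documents the results. The strategy logic, scenario definitions, and risk limit are hypothetical illustrations, not prescribed by FINRA.

```python
# Hypothetical pre-production test harness illustrating scenario-based
# validation of an algorithmic strategy (cf. FINRA Notice 15-09). The
# strategy, scenarios, and risk limit below are invented for illustration.
import statistics

MAX_ORDER_SIZE = 10_000  # assumed firm-level risk limit, in shares


def target_order_size(prices: list[float], capital: float) -> int:
    """Toy strategy: scale the order down as realized volatility rises."""
    vol = statistics.pstdev(prices) / statistics.mean(prices)
    return int(min(capital / prices[-1] * 0.01 / max(vol, 1e-4), MAX_ORDER_SIZE))


# Simulated market conditions; each must yield an order within the risk limit.
SCENARIOS = {
    "normal_market": [100 + 0.1 * i for i in range(100)],
    "high_volatility": [100 + (5 if i % 2 else -5) for i in range(100)],
    "price_gap_down": [100.0] * 50 + [60.0] * 50,
}


def run_preproduction_tests(capital: float = 1_000_000) -> list[str]:
    """Run every scenario and return a log documenting procedures and results."""
    log = []
    for name, prices in SCENARIOS.items():
        size = target_order_size(prices, capital)
        assert 0 <= size <= MAX_ORDER_SIZE, f"{name}: size {size} breaches limit"
        log.append(f"{name}: PASS (order size {size})")
    return log


if __name__ == "__main__":
    for line in run_preproduction_tests():
        print(line)
```

In practice, a harness like this would run in a development environment segregated from production, with results retained and reviewed by an independent quality assurance function, consistent with the Notice's guidance.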
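Similarly, the FINRA AI Report's recommendation to maintain “a detailed inventory of all AI models, along with any assigned risk ratings” implies a structured, queryable record for each model. Below is a minimal sketch of such a record; the field names and risk taxonomy are assumptions for illustration, not a regulatory schema.

```python
# Minimal sketch of an AI model inventory record of the kind the FINRA AI
# Report (June 2020) recommends. Field names and the risk-rating taxonomy
# are illustrative assumptions, not a regulatory schema.
from dataclasses import dataclass
from datetime import date
from enum import Enum


class RiskRating(Enum):
    LOW = "low"
    MEDIUM = "medium"
    HIGH = "high"


@dataclass
class AIModelRecord:
    model_id: str
    business_use: str        # e.g., trade surveillance, marketing content
    owner: str               # accountable business/technology owner
    risk_rating: RiskRating
    last_validated: date     # supports testing across lifecycle stages
    backup_plan: str         # contingency if the AI application fails


inventory = [
    AIModelRecord(
        model_id="surv-001",
        business_use="trade surveillance",
        owner="Compliance Technology",
        risk_rating=RiskRating.HIGH,
        last_validated=date(2025, 1, 15),
        backup_plan="revert to rules-based alerts with manual review",
    ),
]

# The governance group can slice the inventory by risk to set review cadence.
high_risk_models = [m for m in inventory if m.risk_rating is RiskRating.HIGH]
```

A cross-disciplinary governance group of the kind the report describes could use such an inventory to set review cadence by risk rating and to confirm that each model has a documented back-up plan.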
Selected Enforcement Actions
Earlier this year, SEC-registered investment advisers Two Sigma Investments, LP and Two Sigma Advisors, LP paid a combined $90 million in penalties to settle the SEC's charges that they breached their fiduciary duties by failing to reasonably address known vulnerabilities in their algorithmic investment models, and to resolve related alleged compliance and supervisory failures.11 The case involved an employee who made unauthorized changes to model trading parameters while bypassing required approval processes. The settlement highlights regulatory expectations that firms using algorithmic trading models implement proper supervisory oversight of the technical personnel who typically handle algorithmic system modifications, along with robust access controls and approval processes to prevent unauthorized modifications.
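One way to operationalize the access controls and approval processes the Two Sigma settlement highlights is to gate every model parameter change behind a recorded approval from someone other than the requester. The sketch below illustrates that control pattern under assumed names and roles; it is not drawn from the settlement order.

```python
# Simplified illustration of approval-gated model parameter changes, the
# control pattern highlighted by the Two Sigma settlement. Function names,
# roles, and parameters are hypothetical, not drawn from the order.
from datetime import datetime, timezone


class UnauthorizedChangeError(Exception):
    """Raised when a parameter change lacks independent approval."""


AUDIT_LOG: list[dict] = []  # preserved for supervisory and regulatory review


def apply_parameter_change(params, key, new_value, requested_by, approved_by):
    """Apply a model parameter change only with a distinct approver on record."""
    if not approved_by or approved_by == requested_by:
        raise UnauthorizedChangeError(
            f"change to {key!r} by {requested_by} lacks independent approval"
        )
    AUDIT_LOG.append({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "key": key,
        "old_value": params.get(key),
        "new_value": new_value,
        "requested_by": requested_by,
        "approved_by": approved_by,
    })
    return {**params, key: new_value}


params = {"max_position_weight": 0.02}
params = apply_parameter_change(
    params, "max_position_weight", 0.03,
    requested_by="researcher_a", approved_by="model_risk_officer",
)
```

The key design point is that a requester can never approve their own change, and every modification leaves an audit trail available for supervisory review.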
In addition, at the end of 2024, FINRA fined Interactive Brokers $475,000 for segregation deficits totaling $30 million caused by a faulty algorithm in its securities lending program, compounded by insufficient supervisory oversight, including a lack of direct monitoring of the creation, launch, and testing of the algorithm.12 In August 2024, FINRA fined Brex Treasury $900,000 for anti-money laundering program failures, including reliance on an automated identity-verification algorithm that was not reasonably designed to verify customer identities, which resulted in the approval of hundreds of deficiently vetted accounts that attempted over $15 million in transactions using funds that failed to settle.13
Thematically, these cases involve sophisticated firms with otherwise mature compliance programs failing to appropriately control for issues stemming from utilization of rapidly evolving technology, highlighting the growing regulatory risks associated with these technologies and the need to implement robust governance, testing, and oversight frameworks.
Business Activities and Functions Vulnerable to Enforcement Risk
The Brex Treasury, Interactive Brokers, and Two Sigma cases demonstrate key areas in financial services firms where AI and algorithmic technology testing failures are likely to trigger enforcement action. These areas are particularly significant to the SEC because related failures can directly undermine the SEC's core mission of protecting investors, maintaining fair and orderly markets, and facilitating capital formation. Areas warranting particular attention include:
Business Lines:
- Algorithmic Trading: Automated pricing decisions based on flawed information, combined with failures of systems controls, can immediately impact the market and can result in regulatory violations for the firm, including potential market manipulation.
- Research, Investment Advisory Services, and Portfolio Management: AI-driven analysis affects investment recommendations and client outcomes and, if based on inaccurate information produced by flawed models, can violate a range of regulatory requirements, including obligations to appropriately manage conflicts of interest and to transact in the best interest of a client.
Control Functions:
- Compliance: AI compliance tools used to monitor trade activity could, if they fail, mask or enable violations across multiple regulatory areas. Additionally, the SEC relies on accurate trade data reporting; if AI systems report this data inaccurately, they would create a distorted picture of market activity and could diminish the SEC's effectiveness as a regulator.
- Risk Management: Faulty AI risk models could lead to inaccurate or biased assessments of market risk, credit risk, and other financial exposures, leading to a range of issues, including potential losses to the firm and potentially inaccurate regulatory reporting.
- Surveillance and Monitoring: AI systems that fail to detect suspicious activity could create situations in which inappropriate employee trading activity, anti-money laundering violations, fraud, or other noncompliant activity could occur.
Conclusion
The explosive growth of AI in financial services, against a backdrop of explicit regulatory requirements and enforcement precedents, creates a clear compliance imperative. The Brex Treasury, Interactive Brokers, and Two Sigma cases demonstrate that regulators, particularly the SEC and FINRA, expect firms to specifically focus on their utilization and supervision of AI and algorithmic technologies, and may treat a failure to do so as a violation of regulatory obligations.
Footnotes
1. Conflicts of Interest Associated with the Use of Predictive Data Analytics by Broker-Dealers and Investment Advisers, Exchange Act Release No. 97990 (July 26, 2023), https://www.sec.gov/files/rules/proposed/2023/34-97990.pdf
2. Withdrawal of Proposed Regulatory Actions, Exchange Act Release No. 103247 (June 12, 2025), https://www.sec.gov/files/rules/final/2025/33-11377.pdf
3. Id.
4. SEC Roundtable on Artificial Intelligence in the Financial Industry (Mar. 27, 2025), https://www.sec.gov/newsroom/meetings-events/sec-roundtable-artificial-intelligence-financial-industry
5. SEC, Examination Priorities: Fiscal Year 2025 (Oct. 21, 2024), https://www.sec.gov/files/2025-exam-priorities.pdf
6. FINRA Regulatory Notice 15-09, https://www.finra.org/rules-guidance/notices/15-09
7. FINRA Technology Based Innovations for Regulatory Compliance (“RegTech”) in the Securities Industry, https://www.finra.org/sites/default/files/2018_RegTech_Report.pdf
8. FINRA Artificial Intelligence (AI) in the Securities Industry, https://www.finra.org/rules-guidance/key-topics/fintech/report/artificial-intelligence-in-the-securities-industry/ai-apps-in-the-industry
9. FINRA Regulatory Notice 24-09, https://www.finra.org/rules-guidance/notices/24-09
10. 2025 FINRA Annual Regulatory Oversight Report, https://www.finra.org/rules-guidance/guidance/reports/2025-finra-annual-regulatory-oversight-report
11. SEC Administrative Proceeding File No. 3-22418.
12. FINRA Case #2021069316201.
13. FINRA Case #2021071100401.