FINRA Flags Gaps in Oversight as Gen AI Expands Across Financial Operations
The Financial Industry Regulatory Authority (FINRA) reported that member firms’ use of generative artificial intelligence (AI) is outpacing the controls, documentation and supervisory frameworks needed to manage the technology’s risks, based on examination findings detailed in its Annual Regulatory Oversight Report.
The findings, drawn from FINRA’s regulatory programs, show uneven governance, limited transparency into AI outputs and growing exposure tied to vendor-provided tools.
FINRA said it observed firms deploying large language models and similar tools across a widening set of business functions, including customer service, internal research, compliance support and content generation. While firms cited efficiency gains, regulators noted that many deployments lacked formal risk assessments, clear ownership or documentation sufficient to support supervision and accountability. Meanwhile, PYMNTS Intelligence has found that 9 in 10 CFOs now see a very positive ROI from using gen AI, a higher share than in previous research.
Limited Documentation and Model Visibility
One recurring finding was incomplete documentation around how generative AI tools are selected, configured and used. FINRA reported that some firms could not clearly explain which models were in use, how outputs were generated or whether prompts and responses were retained. In several cases, firms relied on vendor assurances without maintaining internal records necessary to demonstrate compliance with books-and-records and supervisory obligations.
Examiners also observed gaps in version control and change management. As vendors updated models or introduced new capabilities, firms did not consistently reassess risk or update supervisory procedures, increasing the likelihood that material changes went unnoticed. FINRA noted that this lack of visibility made it difficult for firms to evaluate accuracy, bias or drift in AI-generated outputs.
Inadequate Supervision of AI Content
FINRA found that some firms allowed generative AI tools to assist in drafting client-facing communications, marketing materials or internal guidance without adequate review processes. In examinations, regulators identified instances where firms lacked documented procedures describing when human review was required or how AI-generated content was approved prior to use.
The report highlighted concerns that AI-assisted communications could produce misleading or incomplete information if not properly supervised. FINRA emphasized that communications rules apply regardless of whether content is produced by a person or a model and noted that some firms underestimated the supervisory burden created by AI-enabled tools.
Overreliance on Third-Party Providers
Another finding centered on third-party risk. FINRA observed that many firms used generative AI capabilities embedded in vendor platforms rather than deploying models directly. In those cases, firms often lacked detailed understanding of how the tools handled data, trained on prompts or stored outputs. Relatedly, PYMNTS Intelligence found in the August edition of the 2025 Certainty Project report that attackers frequently compromise a vendor first, then exploit the trust relationship to infiltrate their target firm.
Some firms did not conduct sufficient due diligence on vendors’ AI controls or monitor changes to vendor systems over time. The report noted that reliance on contractual representations alone did not substitute for ongoing oversight, particularly where customer data or nonpublic information could be exposed.
Data Handling and Information Security
FINRA reported that firms varied widely in how they controlled the data shared with generative AI systems. Some firms permitted employees to input sensitive information into external tools without clear restrictions, increasing the risk of data leakage or unauthorized retention by third parties.
The report also linked generative AI use to broader cybersecurity concerns. FINRA said firms have begun to encounter more sophisticated phishing and social engineering attempts enabled by AI, while internal controls to detect or respond to AI-generated fraud signals were often underdeveloped. As reported by PYMNTS, Anthropic has found that only a few hundred malicious data points can introduce hidden vulnerabilities into large language models.
Governance Structures Still Maturing
Across examinations, FINRA found that governance frameworks for generative AI were often informal or fragmented. Responsibility for AI oversight was sometimes split across technology, compliance and business units without clear accountability. In other cases, firms lacked escalation processes for identifying and addressing AI-related incidents.
While some firms had begun developing AI-specific policies, FINRA observed that many frameworks remained in early stages, with limited testing or enforcement.