Key questions to ask in navigating the evolving US AI regulatory landscape

By Brad O'Brien, Cindra Maharaj, Lisa Toth, and Merry Spears

This September, the US Department of Justice's (DOJ) Criminal Division added a paragraph to its Evaluation of Corporate Compliance Programs guidance that we expect will significantly change the way companies operating in the US think about their use of AI. While the guidance affects all US firms, the implications are especially significant for financial services firms, which will need to take immediate, proactive steps to align their AI programs with established ethical standards, prevent misuse of the technology, and protect their customers.

Are you positioned to meet these new compliance standards?

In the new section, “Evolving Updates,” the updated federal guidance addresses companies that use AI at any point, or in any function, whether in their external products and services or in their corporate operations. The guidance asks “whether the company has conducted a risk assessment of the use of that technology [AI], and whether the company has taken appropriate steps to mitigate any risk associated with the use of that technology [AI].”(1) If a company decides to use AI in any capacity, it must routinely track and test its use of the technology to evaluate whether it is working as intended and aligned with the company's code of conduct.

The other main consideration in the DOJ guidance is proactive monitoring and response time: companies that use AI should be able to quickly identify and address actions by the technology that conflict with company values. (2) For a financial services firm, this could mean an additional compliance layer in operations to actively manage and report on AI technology used across products and services.
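
What might such a compliance layer look like in practice? The sketch below is a minimal, hypothetical illustration in Python: a monitor screens AI outputs against codified policy rules and queues any conflict as an incident for compliance review. The rule, data structures, and logging approach are all assumptions for illustration, not a prescribed design.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class Incident:
    timestamp: str
    model_id: str
    rule: str
    detail: str

@dataclass
class ComplianceMonitor:
    # Each rule is a (name, predicate) pair; the predicate returns True
    # when an output violates that rule.
    rules: list = field(default_factory=list)
    incidents: list = field(default_factory=list)

    def review(self, model_id: str, output: str) -> bool:
        """Return True if the output passes all policy rules."""
        passed = True
        for name, violates in self.rules:
            if violates(output):
                self.incidents.append(Incident(
                    timestamp=datetime.now(timezone.utc).isoformat(),
                    model_id=model_id,
                    rule=name,
                    detail=output[:200],  # truncate for the audit log
                ))
                passed = False
        return passed

# Hypothetical rule: flag outputs that promise guaranteed returns.
monitor = ComplianceMonitor(rules=[
    ("no_guaranteed_returns", lambda text: "guaranteed return" in text.lower()),
])
if not monitor.review("chatbot-v2", "This fund has a guaranteed return of 12%."):
    print(f"{len(monitor.incidents)} incident(s) queued for compliance review")
```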

While the new addition to the federal Corporate Compliance Program guidance may be brief relative to the document as a whole, its scope is significantly broader. The definition of AI in the guidance encompasses tools, systems, models, and even the techniques used to approximate cognitive tasks.

Furthermore, the guidance clarifies: “No system should be considered too simple to qualify as a covered AI system due to a lack of technical complexity (e.g., the smaller number of parameters in a model, the type of model, or the amount of data used for training purposes).”

With this updated guidance, financial services leadership must now take a targeted look at current or planned AI usage, through the lens of the following questions:

  • Do we as a firm have proper governance, policies, procedures, and guidelines to manage the day-to-day use of AI in a controlled and compliant manner?
  • Are the AI tools, systems, models and techniques that are planned or in place in all areas of our enterprise working the way they should be? How do we define what ‘good’ looks like?
  • Are we monitoring and tracking AI usage across our operations? How are we doing this through compliance?
  • Is our usage of AI mapped to our code of conduct? If not, how do we begin to map and address this?
  • Are we identifying AI value conflicts today? If so, how are we capturing and correcting these incidents?

For some firms, these questions may be expensive to answer, but doing so is a worthwhile preventative investment: extending existing compliance procedures, or building new ones, helps ensure risk is managed satisfactorily across the organization.

Is consumer protection an integral part of your AI usage?

Financial services leaders in the US should also be aware of potential regulatory movement from other federal agencies, such as the Consumer Financial Protection Bureau (CFPB).

This August, in a comment responding to the US Treasury's request for information on AI in financial services, the CFPB made definitive statements on current and future regulation of AI, restating its mandate to protect consumers and to hold innovative technology to the same standard as legacy tooling, without exception. (3)

If, for example, a bank denies a credit line request based on a decision made by an AI tool, the firm should be able to explain the dataset and logic behind its decisioning to show the absence of bias in its algorithm and tooling design. In line with the more recent DOJ guidance, the firm in this example would need to show prior testing and tracking of its tool against codified values (anti-bias, etc.) to achieve compliance. In addition, per the CFPB, the firm would still be responsible for complying with the Equal Credit Opportunity Act and the Consumer Financial Protection Act, which prohibit discrimination in lending and unfair lending practices, respectively.
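
To make that testing evidence concrete, here is a minimal, hypothetical sketch of one such check in Python: comparing approval rates across a protected attribute using the “four-fifths” adverse-impact heuristic. The data is synthetic, and the 0.8 threshold is a common screening heuristic rather than a legal standard.

```python
import pandas as pd

# Synthetic credit decisions; in practice this would be real decisioning
# output joined with a protected attribute under appropriate controls.
decisions = pd.DataFrame({
    "group":    ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved": [1,   1,   0,   1,   1,   0,   0,   0],
})

rates = decisions.groupby("group")["approved"].mean()
impact_ratio = rates.min() / rates.max()  # lowest vs. highest approval rate

print(rates)
print(f"Adverse impact ratio: {impact_ratio:.2f}")
if impact_ratio < 0.8:  # common four-fifths screening threshold
    print("Ratio below 0.8: flag for deeper review and documentation")
```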

The CFPB has previously raised concerns around specific AI use cases in financial services, such as large language models for customer service, fraud screening, underwriting, and lending. Its comment states: “Firms must comply with consumer financial protection laws when adopting emerging technology. If firms cannot manage using a new technology in a lawful way, then they should not use the technology.”

The 5 components of successful AI governance

If you are a financial services leader seeking to harness the benefits of innovation within the boundaries of federal regulation, start by asking the hard questions above as you build an understanding of your company's risk profile. To address the tactical problem of what to do next, and how to do it, Baringa has built a flexible framework for establishing actionable AI governance with 5 key components:

1. Clear AI Accountability & Ownership – Establish well-defined roles, responsibilities, and governance structures, and ensure people understand both the technical and ethical dimensions of AI use. Regulators are looking for clear lines of responsibility, so documenting accountability frameworks is vital.

2. AI Risk Management & Model Governance – Define and embed clear AI policies and standards. Successful AI governance tends to require a risk-based approach to AI model management, aligning with traditional model risk management (MRM) frameworks seen in the banking sector (e.g., SR 11-7 in the US). This includes rigorous model inventory management, transparent documentation of AI model lifecycles, and thorough testing and validation processes. Regulatory scrutiny of opaque AI models is rising, making it essential to demonstrate robust risk assessments, scenario testing, and monitoring for biases or unintended outcomes (see the model inventory sketch after this list).

3. Transparency & Explainability Standards – Establish standards for transparency: both the DOJ and the CFPB emphasize the need for transparency, pushing for AI systems that are interpretable by design. It's not just about making AI explainable for regulators; it's about ensuring that business stakeholders can understand and trust model outcomes. Standards for explainability, such as SHAP (SHapley Additive exPlanations) or LIME (Local Interpretable Model-agnostic Explanations), should be built into the model development process to ensure all stakeholders, including customers and regulators, are on board (see the explainability sketch after this list).

4. Regulatory Alignment & Compliance – Implement regulatory horizon scanning and integrate compliance checks into the AI lifecycle, from design to deployment. As AI regulations evolve across jurisdictions (e.g., the EU AI Act and the proposed US Algorithmic Accountability Act), aligning AI systems to global regulatory frameworks is a moving target. Successful AI governance involves a proactive compliance approach, where frameworks are mapped against emerging regulations and models are assessed for compliance with privacy, fairness, and ethical standards.

5. AI Ethics, Bias & Fairness Assurance – Establish a dedicated AI Ethics Board or AI Ethics Guidelines to provide an overarching ethical framework. Addressing bias and ensuring fairness is a regulatory imperative, and failing to do so is also a reputational risk. Techniques such as fairness-aware machine learning, bias audits, and scenario testing for discriminatory outcomes should be standardized practices (see the fairness-metric sketch after this list). Moreover, aligning AI development with principles like the DOJ's recent guidance on responsible use, which advocates fairness and non-discrimination, should be a cornerstone of your governance strategy.
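
On component 2, a model inventory can start simply. The sketch below is a minimal, hypothetical Python illustration of an inventory record in the spirit of SR 11-7-style model risk management; the field names, the example model, and the staleness query are assumptions for illustration only.

```python
from dataclasses import dataclass, field

@dataclass
class ModelRecord:
    model_id: str
    owner: str                    # accountable team or individual
    purpose: str                  # business use case
    risk_tier: str                # e.g. "high", "medium", "low"
    last_validated: str           # date of last independent validation
    monitoring_metrics: list = field(default_factory=list)

inventory: dict[str, ModelRecord] = {}

def register(record: ModelRecord) -> None:
    inventory[record.model_id] = record

register(ModelRecord(
    model_id="credit-scoring-v3",          # hypothetical model
    owner="Retail Credit Risk",
    purpose="Consumer credit line decisioning",
    risk_tier="high",
    last_validated="2024-06-30",
    monitoring_metrics=["approval_rate_by_group", "population_stability_index"],
))

# A simple compliance query: which high-risk models lack recent validation?
stale = [r.model_id for r in inventory.values()
         if r.risk_tier == "high" and r.last_validated < "2024-01-01"]
print(stale)  # -> [] for this example
```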
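
On component 3, explainability tooling can be attached directly to model development. Below is a hedged sketch using the open-source shap library (pip install shap scikit-learn) on a synthetic stand-in model: the features, labels, and classifier are assumptions, not a real decisioning system.

```python
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for a decisioning model and its training data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                  # e.g. income, utilization, ...
y = (X[:, 0] - 0.5 * X[:, 1] > 0).astype(int)  # synthetic approval label

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Per-feature contributions for one applicant's decision: the kind of
# evidence a firm could document when explaining an individual outcome.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:1])
print(shap_values)
```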
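
And on component 5, bias audits can be standardized around open-source fairness metrics. The sketch below uses Fairlearn's demographic parity difference (pip install fairlearn) on synthetic decisions; in practice, a firm might run a check like this against real outcome data on every model release.

```python
import numpy as np
from fairlearn.metrics import demographic_parity_difference

# Synthetic outcomes, model decisions, and a protected attribute.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

# Difference in selection (approval) rates between groups; 0.0 means parity.
dpd = demographic_parity_difference(y_true, y_pred, sensitive_features=groups)
print(f"Demographic parity difference: {dpd:.2f}")
```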

If you would like to learn more about AI regulation, or would like assistance in building risk-averse solutions, please reach out to Brad O'Brien, Cindra Maharaj, Anna-Lisa Toth, or Merry Spears.

Sources:

1. Principal Deputy Assistant Attorney General Nicole M. Argentieri Delivers Remarks at the Society of Corporate Compliance and Ethics 23rd Annual Compliance & Ethics Institute - https://www.justice.gov/opa/speech/principal-deputy-assistant-attorney-general-nicole-m-argentieri-delivers-remarks-society

2. DOJ Evaluation of Corporate Compliance Programs - https://www.justice.gov/criminal/criminal-fraud/page/file/937501/dl

3. CFPB Comment on Request for Information on Uses, Opportunities, and Risks of Artificial Intelligence in the Financial Services Sector - https://www.consumerfinance.gov/about-us/newsroom/cfpb-comment-on-request-for-information-on-uses-opportunities-and-risks-of-artificial-intelligence-in-the-financial-services-sector/
