Aug 13 2025
Management

What Is AI’s Role in Financial Compliance?

Four experts discuss how the industry can successfully manage regulations and tech transformations.

Financial services institutions must contend with a never-ending barrage of regulatory compliance requirements, which can vary by location. Technology, and especially artificial intelligence, can help the financial services industry more easily traverse an increasingly complicated compliance landscape. However, some firms are being cautious with their use of AI for compliance, at least for now.

To better understand how financial services institutions are using AI and working through regulatory compliance challenges, BizTech convened a roundtable of experts: Alla Valente, principal analyst at Forrester; Alan Carlisle, chief compliance officer at payment services firm Marqeta; Avi Gesser, a partner at Debevoise & Plimpton and co-chair of the law firm’s data strategy and security group; and Gabe Rosenberg, a partner at Davis Polk focusing on financial regulation.


BIZTECH: What are the most significant regulatory compliance challenges financial services institutions are facing in 2025?

VALENTE: There is a widening gap between different regulatory perspectives and regulatory regimes across different nations. It’s not just a lack of harmonization; it’s vastly different approaches to regulations.

Throw into the mix the speed of innovation, which really changes what the technology landscape looks like across all financial services organizations. And financial institutions have to ask themselves, “OK, we have this amazing technology moving really quickly, bringing lots of innovation, but is regulation keeping up with this speed of innovation?”

GESSER: I think that whatever regulatory risks institutions face, their adoption of AI is making those harder in ways that they sometimes aren’t anticipating or don’t have a lot of visibility into. So, there are the known unknowns and then there are the unknown unknowns for them.

And I think they are not necessarily staffed with the appropriate resources to address those additional risks, and therefore they’re moving slowly. And now, they’re under increased pressure to move more quickly and are prioritizing which use cases and which tools to approve, and who does that approval.

CARLISLE: I think the challenge is regulatory change management. When you have regulation coming down that has been somewhat unpredictable from a federal perspective, it does make it difficult to respond and adapt to ensure that you’re staying ahead, whether it’s proposed legislation or proposed rulemaking, because things are being reset, things are being withdrawn.

ROSENBERG: The previous administration put the brakes on from a legal and regulatory standpoint for new technologies in the financial sector — digital assets and crypto, for example. If and when the restrictions ease somewhat, and those technologies make their way into the more traditional financial sector, compliance departments and professionals are going to need to develop a regime around these new technologies, because they’re new.

BIZTECH: While AI can help with compliance, many firms are cautious. What are the primary reasons for that?

VALENTE: There are a lot of unknowns around AI. Regulations haven’t necessarily kept up. But also, what if that AI gets misused? Maybe employees didn’t fully understand either that they shouldn’t be using it this way or that they’re using it when companies don’t have good data use policies.

And then of course, there’s the whole trust issue. Can we explain aspects of AI? How do we know that it’s free of bias? How can we have insight into the AI supply chain? Those are questions that require maturity of risk management programs and processes.


GESSER: People are understandably very concerned that if the chatbot gets things wrong, then people are going to be relying on it and they’re going to be behaving contrary to policy, thinking that they’re following policy.

But it’s also a workflow issue. Imagine AI gives you an answer to a question: “Can I fly business class to London for a conference and have the firm pay for it?” Instead of saying yes or no, if all it does is take you to the relevant section of the relevant policy, then that’s a lot less risky and it probably gives you all of the value. And so, in some cases, the question is not whether you trust AI. The question is, can you manage the workflow so that you can trust the AI? Because what it’s doing is verifiable and repeatable; it’s playing to its strengths.

ROSENBERG: Anything that has the “AI” label on it is going to face scrutiny at a large traditional financial institution. If you instead called this something like an “advanced new compliance tool” and didn’t use the word “AI,” I think people might think about it slightly differently.

I think we all know that AI is still a work in progress and will continue to be a work in progress. I think compliance is a great place to start with the use of AI because you don’t have to rely solely on the AI. I see less reluctance when the idea is to add AI to a toolkit, with the rest of the toolkit remaining intact.

 

BIZTECH: What are some of the most promising use cases for AI in financial services compliance?

VALENTE: We’re seeing it being used to detect risk in real time. So, things like going through transaction systems and payment systems to be able to flag something and stop a transaction in real time, that’s some terrific value.

GESSER: I think one of the things that’s underrated is training. There are a lot of folks in compliance who are expected to be experts in 20 different areas of the law, and they understandably don’t want to admit they don’t really understand the difference between domestic bribery and foreign bribery, or they don’t really understand how the gift system works from a Foreign Corrupt Practices Act point of view.

AI is very good at taking complex legal issues, breaking them down into pieces and walking you through what the issues are, where people have gotten into trouble and what the line is. You can ask follow-up questions, and you can use it as a tutor, but you’re not going to necessarily rely on it for making any final decisions.


CARLISLE: A lot of financial services companies build massive repositories of information that end up being very fragmented and stored with different naming conventions over different time periods in different mediums and forms. You want to expeditiously identify artifacts that may be useful or problematic. Being able to have some sort of AI model that can identify responsive information across a very large data set could prove valuable and save some time as well.

BIZTECH: What are the key considerations and potential pitfalls that financial institutions should be aware of when implementing AI for compliance?

VALENTE: You don’t need to have specific AI regulations for the bank to run afoul of existing requirements. There are so many ways AI is only as good as the humans who drive it. We can look at AI being used and accidentally exposing confidential information.


GESSER: You may have clients who have placed restrictions on what you can do with their data. And even if you have an enterprise tool that is a closed loop and you’re not sharing any information with humans as your model provider, not training on the data and so forth, it may still be that your clients say, you can’t use AI with my confidential data without talking to me, or you may not use AI in connection with any service to me unless it’s solely for my benefit, and your compliance project is not solely for my benefit. You have to think about whose data you’re using, whether you have the right to use it, are there contractual use limitations and so forth.

CARLISLE: Putting thoughtful governance around processes that have outsized regulatory risk is critical. You need some sort of governance so that people understand the playbook. If you’re going to use AI, you need to say, “This is what we do to ensure that we’re all aligned on the risks that we are taking, the information that we are sourcing, where we are sourcing it from and potential applicable regulatory considerations.”

You’re doing so in a centralized manner so that there’s an audit trail that you talked about it, you thought about it, so that if you’re under scrutiny or need to respond to an inquiry, you can have evidence.

ROSENBERG: A big problem in compliance is false positives. People could start using these AI programs, and they might just throw out a bunch of false positives. That’s a potential pitfall.

That takes a lot of resources away from things that could be done better, but it also might lead people to think that future iterations of AI are useless. They get sort of sour on AI. And that’s not a good thing for anyone.


Brian Stauffer/Theispot