Jul 23 2024
Software

How Can Banks Implement AI Ethically?

Follow these best practices to fold artificial intelligence into financial services operations.

Artificial intelligence is driving operational enhancements in financial services. From capital market firms to credit unions, AI is helping teams parse massive amounts of data, fight fraud and improve the customer experience.

But with all of the security and compliance regulations in the financial sector, implementing AI is not without its challenges. That’s why banks must be aware of ethical standards when it comes to customer privacy and user data. In fact, a recent KPMG report reveals that “ethical challenges are the most cited obstacle to successfully implementing generative AI — along with cost and technical skill.”

Here are some best practices banks should consider to keep their AI efforts ethical.


Establish an Ethical AI Framework

Financial services firms that establish ethical AI frameworks fare better because they hold themselves to a defined standard. Once that standard is set, it should be shared widely across the company so that every employee knows to follow it.

But not all banks are there yet. In fact, a recent report from Evident, an independent intelligence platform, notes that just 16 of 50 banks studied “have alluded to the existence of RAI principles at the bank in the last few years,” referring to responsible artificial intelligence. For those that are just beginning the process, setting up an ethical framework is a good place to start.

“Things like explainable AI, responsible AI and ethical AI, which defend against events like unplanned bias, are no longer being seen as optional but required for companies that leverage ML/AI, and specifically where they host customers’ personal data,” says Brian Maher, head of product for firmwide AI and machine learning platforms at JPMorgan Chase.

By following clear guidelines for the development, deployment and monitoring of AI systems, such as those outlined by the IEEE Global Initiative and the White House’s executive order on AI, banks have a better chance at protecting their customers’ privacy and data.

Retain Human Oversight and Governance Mechanisms

Data governance, bias detection and human oversight are also key to keeping this technology well managed. Having all three mechanisms in place improves AI use throughout the organization.

These controls also limit the degree to which AI systems produce biased results or make technical errors.
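Bias detection in particular can be made concrete. As a minimal sketch (the data, group labels and threshold below are illustrative assumptions, not a regulatory standard or any bank's actual control), one common check compares a model's approval rates across customer groups, sometimes called the demographic parity difference:

```python
# Minimal sketch of one common bias check: demographic parity difference.
# The decisions, group names and 0.05 threshold are illustrative
# assumptions, not any bank's actual control or a regulatory standard.

def approval_rate(decisions):
    """Fraction of applications approved (decisions are 0 or 1)."""
    return sum(decisions) / len(decisions)

def demographic_parity_difference(decisions_by_group):
    """Largest gap in approval rates between any two groups."""
    rates = [approval_rate(d) for d in decisions_by_group.values()]
    return max(rates) - min(rates)

# Hypothetical model decisions for two applicant groups.
decisions = {
    "group_a": [1, 1, 0, 1, 0, 1, 1, 0],
    "group_b": [1, 0, 0, 0, 1, 0, 0, 0],
}

gap = demographic_parity_difference(decisions)
if gap > 0.05:  # escalate to human review when the gap exceeds a chosen threshold
    print(f"Potential bias flagged: approval-rate gap = {gap:.2f}")
```

A check like this is only a starting point; the human-oversight step is deciding what an acceptable gap is and who reviews the model when it is exceeded.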

Some banks have established ethics committees to lead the initiative, appointing “dedicated employees with clear accountability and specialist expertise in AI ethics and AI risks to spearhead company-wide Responsible AI programmes,” Evident reports. Appointing “ethics owners” is one immediate action financial services firms can take, according to Deloitte’s “State of Ethics and Trust in Technology” report.

An ethics committee can set a regular cadence for auditing, training and evaluating AI models so that the insights these models produce are trustworthy and valuable to the business.

Next steps include organizing a technology review of the AI models and how they’re working, offering ethics training and developing a method for “sharing and reporting ethical concerns” within the company.

To Combat AI Skepticism, Prioritize Customer Trust

IT leaders should also be aware that users are both skeptical of and excited by AI. Change can make people nervous, and customer resistance is one of the most common challenges that banks must consider when deploying AI technologies, according to Deloitte’s “Digital Ethics and Banking” report.

To offset this resistance, experts say, banks should focus on data transparency, customer autonomy and brand trustworthiness. This means giving customers a say in how their data is handled and being more open and honest about the bank’s AI plans, intent and data strategy. The more IT leaders communicate about how AI is being used and why, the smoother changes will be.

“Customers are more likely to share their data when they understand how that data will be used, why sharing it is important and how it will ultimately benefit them,” notes the Deloitte report.

Give Users the Choice to Opt In or Out

Building customer trust also means giving users a choice concerning how their data will be used and shared with AI models. To comply with privacy laws, banks should let users opt in or out of data sharing. Inside banking platforms, customers should also be given a chance to select the degree of personalization and frequency of notifications they receive.

This allows consumers to consent to participation in AI data collection and reduces the chance that a user will feel caught off-guard. According to Deloitte’s report on ethics and banking, “by taking this route, and earning customer trust, institutions will have more of a license to use customer data to build new digital solutions.”
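As a minimal sketch of how such choices might be recorded (the field names, defaults and privacy-by-default design are illustrative assumptions, not any bank's actual schema), a consent record can gate whether a customer's data ever reaches an AI pipeline:

```python
# Minimal sketch of an opt-in consent record gating AI data use.
# Field names, defaults and the opted-out-by-default design are
# illustrative assumptions, not any bank's actual schema.
from dataclasses import dataclass

@dataclass
class ConsentPreferences:
    ai_data_sharing: bool = False       # opt-in: off until the customer agrees
    personalization_level: str = "low"  # e.g. "low", "medium", "high"
    notification_frequency: str = "weekly"

def can_use_for_ai(prefs: ConsentPreferences) -> bool:
    """Only customers who explicitly opted in are included in AI data collection."""
    return prefs.ai_data_sharing

# A new customer defaults to opted out; nothing is shared until they opt in.
customer = ConsentPreferences()
assert not can_use_for_ai(customer)

customer.ai_data_sharing = True  # the customer explicitly opts in
assert can_use_for_ai(customer)
```

The design choice worth noting is the default: starting customers opted out and asking them to opt in is what keeps users from feeling caught off-guard.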

