Large Language Models in Banking: What Are They Good For?

John McCambridge

Large language models (LLMs) are already improving efficiency in client-facing operations and risk management environments. I will summarize the proven and plausible impact of LLMs and Generative AI in the banking and financial services industry as of September 2024. To make this as relevant as possible to a broad audience, I will limit myself to a specific set of use cases that:

  1. Are live and being actively used by at least one client.
  2. Can be built and automated without code and without data science expertise, if the relevant teams have access to Dataiku.
  3. Do not require anything more technically advanced than a current-generation ‘off-the-shelf’ model, run via secure cloud or on premise: no model fine-tuning or retraining needed.

Put another way, the use cases below can all be achieved by a business team without dedicated data scientists, if they have three things: 

  1. Access to the relevant datasets
  2. Access to a firm-approved LLM
  3. Dataiku: the best platform to build and automate GenAI projects with no code    

Client Engagement

Personalized Marketing Communication

LLMs can analyze customer data, including transaction history, financial data, demographics, and preferences, to generate personalized communication. This can include tailored product recommendations, targeted marketing campaigns, and proactive alerts. As of today:

LLMs currently perform this work effectively. Since LLMs are able to generate highly tailored material much faster than humans, there is clear potential for efficiency improvement. However, human-in-the-loop approval of messaging remains expected for both quality assurance and regulatory compliance. 

As a result, instead of generating truly unique messaging per client (which is technically feasible), these systems are currently used to generate a large but still human-reviewable set of targeted messages. 
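As a sketch of that pattern, the snippet below renders drafts from a small set of segment-level templates, so the output stays human-reviewable, and tags every draft as pending review rather than sending it. The `Customer` fields, template text, and status values are all illustrative assumptions, not a real schema; in practice the rendered prompt would feed an approved LLM rather than a static template.

```python
from dataclasses import dataclass

# Hypothetical customer record; field names are illustrative, not a real schema.
@dataclass
class Customer:
    name: str
    segment: str
    recent_product: str

# A small set of segment-level templates keeps the output reviewable:
# one draft variant per segment rather than one unique message per client.
TEMPLATES = {
    "retail": "Hi {name}, based on your recent use of {product}, you may be eligible for our savings bundle.",
    "premium": "Dear {name}, as a premium client using {product}, our advisory team has a tailored offer for you.",
}

def draft_message(customer: Customer) -> dict:
    """Render a draft and tag it for mandatory human review before sending."""
    text = TEMPLATES[customer.segment].format(
        name=customer.name, product=customer.recent_product
    )
    return {"to": customer.name, "draft": text, "status": "pending_review"}

drafts = [draft_message(Customer("Ana", "retail", "a travel card"))]
```

The key design choice is that nothing in this flow has a "send" path: the reviewable batch is the output, and release is a separate, human step.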

Chatbots and Virtual Assistants

LLMs could power chatbots and virtual assistants capable of handling a wide range of customer queries, from basic account inquiries to general financial advice. These AI-powered assistants would provide 24/7 customer support, ensuring that customers receive responses regardless of the time or day. As of today:

LLMs do not currently perform this task with sufficient quality, or with a low enough rate of incorrect or non-compliant responses, to be appropriate for direct communication with clients. This is primarily driven by the reputational and regulatory risk of inappropriate responses being sent directly from a model to a client, rather than concerns that the average LLM response would be of lower quality or accuracy than one created by a human. 

Additionally, many of the most time-consuming tasks that clients want to address cannot be solved by a simple ‘call and response’ design but instead require action by the agent (e.g., making modifications within multiple internal systems to resolve a conflict). As of today, these ‘independent LLM agents’ are not deployed live within systems which contain core client data. 

Given these constraints, LLM chatbots and virtual assistants are instead being leveraged as internal support for client-facing personnel: the material generated is always reviewed by a person before being sent or acted upon.


Operational Efficiency

Document Processing

LLMs can extract key information from various documents, such as loan applications, KYC forms, and financial statements. This allows for unstructured data sources such as paper forms or written communications to be converted efficiently into structured data for analysis. 

LLMs currently perform this work effectively. The latest generation of LLMs is significantly more effective in extracting keywords and concepts, and in summarizing and searching for topics and sentiment, than prior models. 

Used without supervision for lower-risk topics (e.g., allocating client communications to an appropriate human agent) or with supervision for higher-risk topics (e.g., extracting loan terms from human-readable loan documents), firms can significantly improve the efficiency of back-office teams and the speed at which unstructured data sources are reliably structured. 
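A common safeguard for the higher-risk extraction path is to validate the model's output before it enters structured storage, routing anything malformed or incomplete to a human reviewer. The sketch below assumes the LLM has been prompted to return JSON; the field names and statuses are hypothetical, and the model call itself is simulated by a string.

```python
import json

# Fields we require before accepting an extraction into structured storage;
# names are illustrative, not a real loan or KYC schema.
REQUIRED_FIELDS = {"borrower", "principal", "rate"}

def validate_extraction(llm_output: str) -> dict:
    """Parse the model's JSON reply; reject it if it is malformed or missing
    required fields, routing the document to a human reviewer instead."""
    try:
        record = json.loads(llm_output)
    except json.JSONDecodeError:
        return {"status": "needs_review", "record": None}
    if REQUIRED_FIELDS - record.keys():
        return {"status": "needs_review", "record": None}
    return {"status": "accepted", "record": record}

# Simulated model output; in practice this comes from the firm-approved LLM.
result = validate_extraction(
    '{"borrower": "Acme Ltd", "principal": 250000, "rate": 0.045}'
)
```

The point of the gate is asymmetry: an automated "accept" requires every field to be present and parseable, while any doubt falls through to review, matching the supervised/unsupervised split described above.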

Internal and External Knowledge Management 

LLMs can help banks navigate complex or large-scale documentation, including internal policy and external regulatory guidance.

LLMs currently perform this work effectively. Their ability to assess topic relevance and summarize larger bodies of work is one of the most significant improvements in the technology compared to prior models.

By leveraging a setup that includes direct citation of the original source, such as Dataiku Answers, teams can identify relevant information and immediately check its accuracy before taking concrete action. This speeds time to insight and reduces the cost of asking questions across large sets of documentation. 
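The retrieval half of such a setup can be illustrated with a toy example: rank policy passages against a question and always return the source identifier alongside the text, so a reviewer can open the original document before acting. The keyword-overlap scoring below is a deliberately simple stand-in for the semantic retrieval a real deployment would use, and the document IDs are invented.

```python
import re

# Toy corpus: source ID -> passage. Real systems index far larger documents.
DOCS = {
    "policy-12": "Wire transfers above the daily limit require dual approval.",
    "policy-34": "Dormant accounts are reviewed annually for KYC refresh.",
}

def tokens(text: str) -> set:
    """Lowercase word set, ignoring punctuation."""
    return set(re.findall(r"[a-z]+", text.lower()))

def retrieve_with_citation(question: str) -> dict:
    """Return the best-matching passage together with its source ID,
    so the answer can be checked against the original document."""
    q = tokens(question)
    source_id, passage = max(
        DOCS.items(), key=lambda item: len(q & tokens(item[1]))
    )
    return {"source": source_id, "passage": passage}

hit = retrieve_with_citation("What approval is needed for large wire transfers?")
```

Returning the `source` field with every answer is what makes the "immediately check its accuracy" step possible: the citation travels with the text rather than being reconstructed afterward.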

Takeaways

LLMs are viable sources of efficiency improvement for business and operations teams today, without the need for complex data science work or cutting-edge technology. These viable use cases all have one thing in common: a human in the loop. This constraint means that rather than processes becoming unrecognizably transformed, the current generation of use cases adds significant improvements to speed, accuracy, and accessibility while leaving the fundamental structures unchanged.
