How Banks Can Up Their Data Game

Use Cases & Projects, Scaling AI | Benjamin Libman

Data has always been the foundation of the banking industry, but market volatility combined with technological advancements, new risks, and an exponential influx of data continue to bring new and complex challenges.

To survive and thrive in today’s environment, banks must leverage data massively across all parts of the organization and in all day-to-day business processes. At the same time, it’s critical to find efficiencies and reduce risk by maintaining the right level of visibility and transparency around data work.

Here are the three key things that banks need to do:

• Reduce Risk With Governance & MLOps
• Maximize ROI by Controlling AI Costs
• Empower People With the Right Tools

In this blog, we'll briefly run through each of these categories. For the full breakdown, download the "Top 3 Ways Banks Can Up Their Data Game" ebook below!

Reduce Risk With Governance & MLOps

The nature of the regulatory environment for banks means any work with data — especially the use of machine learning models — must be visible, transparent, and auditable. Banks looking to expand their use of data at a massive scale must have a thorough understanding of the relationship between Responsible AI, AI Governance, and MLOps.

With applied expertise, education, well-defined processes, and the right tools (for example, an AI platform with strong governance and MLOps capabilities), banks can navigate current and future regulatory challenges around data, including the use of more advanced technologies like machine learning and AI.

Unify the Workspace

What’s wrong with working across a bunch of different tools? Cobbling together a host of different tools throughout the data pipeline is a quick path to non-compliance, especially as the number of day-to-day data users at banks grows.

Using one tool for analysts' data preparation, another for data scientists' model building, and yet another for validating and deploying those models into production is not only inefficient in terms of time spent building and integrating data pipelines, it also leaves ample room for error from an IT perspective: an increased risk of cost overruns, project delays, data loss, security issues, and more.

Essentially, messy governance across lots of different tools is a recipe for disaster and quality concerns. Choosing the fewest possible technology vendors to get the job done makes governance easier overall.

Maximize ROI by Controlling AI Costs

It would be naive to ignore the fact that data and AI initiatives represent a cost in and of themselves. And, even if you know which use cases you need to tackle to cut costs, you won’t be able to benefit from them if you don’t have the right systems in place to move through data processes quickly and efficiently.

The reality is that the data and AI project lifecycle is rarely linear, and there are different people involved at every stage, which means lots of potential rework and inefficiencies along the way. You need the tools and the strategy to combat these challenges at their earliest stages.

Here’s what you need to know and what you need to do to start down a cost-smart path:

Reevaluate Your Data & Analytics Stack Now, Not Later

For many banks, the data and analytics stack was built up over an extended period of time, dictated more by existing investments than by explicit choices about new ones. In other words: “We already have tools for x, y, and z; what can we add to complete the stack, and how can we tie it all together?” Unfortunately, this can lead to disparate data, missed opportunities, wasted resources, and, in the end, a massive hit to ROI.

The system for accessing and cleaning data should not be locked to the underlying architecture. By separating the two, you ensure that no matter how many different places data is currently stored, teams won’t have to constantly change their day-to-day processes or tools. Ideally, data access and preparation isn’t happening in a one-off ETL (extract, transform, and load) tool either; it should be incorporated into downstream systems so that, when relevant and appropriate, technical teams can take over the work of analysts and easily apply machine learning techniques.
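To make the idea of decoupling data access from storage concrete, here is a minimal Python sketch. The connection registry, source names, and the `load_table` helper are all hypothetical illustrations, not a reference to any specific product or bank's setup; the point is that day-to-day code calls one logical interface while storage details stay behind it.

```python
import pandas as pd

# Hypothetical registry of logical sources: analysts and data scientists call
# load_table() the same way no matter where or how the data is physically stored.
CONNECTIONS = {
    "core_banking": {"type": "csv", "path": "data/core_banking/{table}.csv"},
    "risk_warehouse": {"type": "parquet", "path": "data/risk/{table}.parquet"},
}

def load_table(source: str, table: str) -> pd.DataFrame:
    """Load a table by logical name; the storage format stays an implementation detail."""
    conn = CONNECTIONS[source]
    path = conn["path"].format(table=table)
    if conn["type"] == "csv":
        return pd.read_csv(path)
    if conn["type"] == "parquet":
        return pd.read_parquet(path)
    raise ValueError(f"Unsupported connection type: {conn['type']}")

# If the risk warehouse later moves to a database or the cloud, only CONNECTIONS
# changes, not the day-to-day processes of analysts or data scientists.
transactions = load_table("core_banking", "transactions")
```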

Examining these choices more closely with costs in mind from the start can provide the necessary spark to rethink the technology stack, including the challenges or inefficiencies it brings.

In fact, there are likely areas that can be automated between steps in the life cycle with new tools to boost efficiency and reduce costs. Just because a system is familiar does not mean it is financially friendly; sticking to what you know might get you stuck with costs you can’t afford.

High AI Maintenance Costs May Be Holding You Back

Putting a model into production is an important milestone, but it’s far from the end of the journey. Depending on the use case, in the best-case scenario the model simply becomes less and less effective over time; in the worst case, it becomes actively harmful. Paying attention to data and AI project maintenance, and more importantly being efficient at it, is paramount to containing costs.

MLOps has emerged as a way of controlling not only risk but also the cost of maintenance, shifting model upkeep from a one-off task handled by a different person for each model (usually the original data scientist who worked on the project) into a systematized, centralized task.
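As a minimal sketch of what that systematized maintenance can look like, the snippet below compares a production model's recent performance against the benchmark it was approved at and flags it for review when it drifts too far. The model names, benchmark values, and margin are illustrative assumptions, not figures from any real deployment.

```python
from sklearn.metrics import roc_auc_score

# Illustrative benchmarks: the AUC each model achieved when it was approved for
# production, plus how far live performance may drop before maintenance is triggered.
APPROVAL_AUC = {"credit_default_v3": 0.78, "fraud_alerts_v1": 0.83}
RETRAIN_MARGIN = 0.05

def needs_maintenance(model_name: str, y_true, y_scored) -> bool:
    """Flag a model whose live AUC has drifted too far below its approval benchmark."""
    live_auc = roc_auc_score(y_true, y_scored)
    drifted = live_auc < APPROVAL_AUC[model_name] - RETRAIN_MARGIN
    if drifted:
        print(f"{model_name}: live AUC {live_auc:.3f} is below benchmark, schedule review/retraining")
    return drifted

# Toy labels and scores standing in for a hypothetical scoring log
needs_maintenance("credit_default_v3", [0, 1, 1, 0, 1], [0.2, 0.9, 0.4, 0.1, 0.7])
```

Run on a schedule across every production model, a check like this turns maintenance from an ad hoc, per-model chore into a centralized, repeatable process.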

Automate Away Manual Work & Inefficiencies

Data project lifecycles can be a roller coaster with lots of different stakeholders on board, which translates into manual work, rework, and inefficiencies. Automating away these inefficiencies can save valuable time (and money) while freeing up people to work on higher-value tasks.

Here's a concrete, real-world example from our friends at Royal Bank of Canada (RBC). By moving from a periodic, manual process to an automated Control Test Framework in Dataiku, the CAE group saves 20-25% of the time for a given audit.
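To illustrate the general pattern (not RBC's or Dataiku's actual framework), here is a small Python sketch of what turning a periodic, manual audit check into automated control rules can look like. The rule names, columns, and data are hypothetical; the point is that the same checks can run on a schedule against the full population instead of being re-done by hand on a sample.

```python
import pandas as pd

def control_no_missing_approvals(loans: pd.DataFrame) -> pd.DataFrame:
    """Every disbursed loan must carry an approver ID."""
    return loans[(loans["status"] == "disbursed") & (loans["approver_id"].isna())]

def control_limit_respected(loans: pd.DataFrame) -> pd.DataFrame:
    """No loan amount may exceed its approved limit."""
    return loans[loans["amount"] > loans["approved_limit"]]

def run_controls(loans: pd.DataFrame) -> dict:
    """Run all control rules and return the exceptions found for each one."""
    controls = {
        "missing_approvals": control_no_missing_approvals,
        "limit_exceeded": control_limit_respected,
    }
    return {name: check(loans) for name, check in controls.items()}

# Toy data standing in for a loans extract
loans = pd.DataFrame({
    "status": ["disbursed", "disbursed", "pending"],
    "approver_id": ["A12", None, None],
    "amount": [100_000, 250_000, 50_000],
    "approved_limit": [150_000, 200_000, 60_000],
})
for name, exceptions in run_controls(loans).items():
    print(name, "->", len(exceptions), "exception(s)")
```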

Empower People With the Right Tools

It doesn’t make sense (financially or in terms of risk) to search for unicorn data scientists and build teams from scratch. Instead, today’s successful banks leverage the business knowledge of their existing staff, immersing diverse talent across different facets of the organization in data processes.

The ultimate hiring challenge is less about adding more staff and more about tooling, enablement, and introducing efficiency for the current staff. Moreover, empowering individuals to work with more reliable and responsible tools will support the governance initiatives that are also critical to the success of AI integration.

Here’s what this looks like in practice:

Bring Math Minds Into the Fold

When it comes to leveraging and incorporating new techniques (like machine learning and AI) or technologies (like centralized, governed data platforms), don’t leave staff with strong statistics and math skills out of the equation. From actuaries to quants, think about what they bring to the table on data and AI projects as overall organizational maturity increases, and, inversely, what data science skills and methodologies can bring to their roles.

Provide the centralized tools that allow these math minds to participate in the data science process seamlessly. This way, everyone contributes where and how they are most impactful, in a way suited to their particular skills.

Upskill People Away From Spreadsheets

Many non-technical individuals have deep knowledge of business systems, needs, and the regulatory environment. Empowering these individuals to work with data directly and collaborate with data science teams will engender enriched decision making for the business teams as well as better informed model creation on the technical side — a win-win!

And, let’s not sugarcoat it: working only in spreadsheets contributes to significant security concerns, inaccuracies, and inefficiencies (versioning issues, no logs, no rollback), and is generally an outdated approach. If complying with regulations and reducing risk are top concerns (and they are), then spreadsheets and other forms of End User Computing (EUC) are a reservoir of problems that will eventually need to be drained.
