Getting to Responsible AI with Credit Scoring

Use Cases & Projects, Dataiku Product | Triveni Gandhi, David Behar

For the financial services industry, Responsible AI considerations are of the utmost importance in a number of key domains — particularly when it comes to access to services and goods like credit lines, loans, and insurance pricing. While not all banking and insurance processes require complex AI models, vigilance on data use is critical for guaranteeing that outcomes are fair and aligned with company policy and ethics.

At Dataiku, we define Responsible Analytics according to a framework of reliability, accountability, fairness, and transparency. This means that the data, models, and reporting tools (e.g., dashboards) must pass certain checks and measures to ensure that the outputs of any project or analysis can be trusted by its builders, business users, and consumers. 

What does this look like in practice? Let's take the example of credit scoring. Credit scoring is a key element of risk assessment in any financial institution, serving as a starting point for credit-related decisions such as granting loans and mortgages or allowing overdrafts. Not only is credit the core business of financial institutions, but it is also an essential part of the economy and the welfare of companies and individuals. Therefore, great care is put into building robust scorecards that can reliably model risk and comply with existing regulations, all with the aim of giving people fair access to credit.

However, credit scoring is not without risk of bias and the potential to incorrectly deny otherwise qualified candidates a service. The data used to train credit scoring models is often biased by social context: the wage gap between men and women, for example, can seep into the data and be reflected in downstream models.

There is also a delicate balance to strike between trust, transparency, and security. Think about how a credit score is presented to the end user — do they get a sense of how their score was determined? Are specific recommendations or actionable recourses provided to improve their score according to the model? These considerations are a part of a larger approach to responsible development, and require a deep dive into the data, models, and reporting features of any credit scoring solution. 

Key Considerations for Responsible Credit Scoring

What are the most important things to consider when building a robust credit risk model? As discussed above, borrowing is a key service for people in society, and access to it can significantly affect one's economic opportunities. So data teams should take great care when designing such a decision mechanism, as it can have a significant impact on the well-being of those who interact with it.

Below, we run through several aspects of credit risk model-building that should be taken into consideration.

Data Quality

The data used to build credit risk models traditionally comes from three sources:

  • The applicant's form information, which contains declarative fields filled in by the applicant, as well as the characteristics of the loan requested.
  • The applicant's historical data from the bank's system (if the applicant is a current customer). This includes behavioral data about the customer's balances on previous credit products and other information.
  • The applicant's scores and historical data from credit bureaux, which includes the applicant's credit balances with other banks.

The analyst should perform thorough data quality checks before modeling begins. Because the data usually comes from multiple systems and contains some declarative fields that might be filled incorrectly, it is crucial to run some sanity checks to avoid including erroneous data points in the analysis. 

Analysts should also undertake outlier and fraud detection before the analysis takes place to remove the most abnormal or atypical observations. Once the data has reached an acceptable quality threshold and is reliable enough, other considerations, listed below, come into play.
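
As a minimal illustration of what such checks might look like, the sketch below uses pandas to validate a hypothetical applications file. The file name, column names, and thresholds are assumptions to adapt to your own schema.

```python
import numpy as np
import pandas as pd

# Hypothetical applications file and column names; adapt to the actual schema.
apps = pd.read_csv("applications.csv")

# Sanity checks on declarative fields that applicants may fill in incorrectly.
checks = {
    "age_in_range": apps["age"].between(18, 100),
    "income_non_negative": apps["declared_income"] >= 0,
    "loan_amount_positive": apps["loan_amount"] > 0,
}
for name, passed in checks.items():
    print(f"{name}: {(~passed).sum()} failing rows")

# Keep only rows passing every check, then drop exact duplicates.
mask = np.logical_and.reduce(list(checks.values()))
apps = apps[mask].drop_duplicates()

# Remove extreme income outliers with a wide interquartile-range fence.
q1, q3 = apps["declared_income"].quantile([0.25, 0.75])
fence = 3 * (q3 - q1)
apps = apps[apps["declared_income"].between(q1 - fence, q3 + fence)]
```

In practice, the failing rows would typically be routed to a review queue rather than silently dropped, so the checks also serve as a data quality report.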

Data Privacy

Privacy is the bedrock of the financial services industry. Analysts therefore need to build their models from a privacy-first perspective. There are multiple privacy laws, varying according to location, that state how sensitive client or customer information should be handled: the Fair Credit Reporting Act in the US; the General Data Protection Regulation (commonly known as GDPR) in the EU; the Personal Information Protection and Electronic Documents Act (PIPEDA) in Canada; and the Data Protection Act in the UK.

A common and effective approach to handling sensitive data is, first, to flag the sensitive dataset, and then to remove or anonymize the identifying columns to create a dataset with less risk of privacy leakage.
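
One way this could look in code is the sketch below, which drops a hypothetical list of direct identifiers and replaces the join key with a salted hash. This is pseudonymization rather than full anonymization; the column names and salt handling are assumptions, and in a real system the salt would live in a secrets manager.

```python
import hashlib
import pandas as pd

# Hypothetical direct identifiers; adapt to the actual schema.
DIRECT_IDENTIFIERS = ["full_name", "email", "phone", "street_address"]

def pseudonymize(df: pd.DataFrame, key_column: str = "customer_id") -> pd.DataFrame:
    """Drop direct identifiers and replace the join key with a salted hash."""
    salt = "replace-with-a-secret-salt"  # store securely, e.g., in a vault
    out = df.drop(columns=[c for c in DIRECT_IDENTIFIERS if c in df.columns])
    out[key_column] = out[key_column].astype(str).map(
        lambda v: hashlib.sha256((salt + v).encode()).hexdigest()
    )
    return out
```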

Managing Bias Risk

Regulators have similarly focused their attention on reducing model and algorithmic bias, especially bias that produces unfair outcomes mirroring discriminatory practices in society. Bias in the way credit risk is modeled is specifically addressed in the US by the Equal Credit Opportunity Act, the Federal Trade Commission Act, and the Fair Credit Reporting Act, and in the EU by the Charter of Fundamental Rights.

More specifically, these acts spotlight the usage of personal data. To avoid bias, they prohibit the following kinds of variables from being incorporated into models:

  •  Race
  •  Religion
  •  Sex or gender
  •  Marital status
  •  Disability

An applicant's historic banking data may also be the product of social discrimination, so it is important to measure imbalances across attributes in the dataset. This means looking at whether there is significant variation in the target variable across sensitive groups, for example with a chi-square test comparing observed and expected frequencies. In the case of a severe imbalance, row-level weights can be applied based on the methodology available in packages like Fairlearn or AI Fairness 360 (AIF360).
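
As a rough sketch, the snippet below runs a chi-square test with SciPy on a hypothetical DataFrame with "gender" and "default" columns, then computes row-level weights by hand in the spirit of the reweighing method implemented in packages like AIF360. The column names and encodings are assumptions.

```python
import pandas as pd
from scipy.stats import chi2_contingency

def check_and_reweigh(df: pd.DataFrame, sensitive: str = "gender", target: str = "default"):
    """Test for imbalance across a sensitive attribute and compute row-level weights.

    Hypothetical column names; weights follow the reweighing idea (expected vs.
    observed group/label frequencies) used by packages like AIF360.
    """
    # Chi-square test of independence between the sensitive attribute and the target.
    table = pd.crosstab(df[sensitive], df[target])
    chi2, p_value, dof, expected = chi2_contingency(table)
    print(f"chi2={chi2:.2f}, p-value={p_value:.4f}")

    # Weight = P(group) * P(label) / P(group, label): up-weights under-represented
    # group/label combinations and down-weights over-represented ones.
    p_group = df[sensitive].value_counts(normalize=True)
    p_label = df[target].value_counts(normalize=True)
    p_joint = df.groupby([sensitive, target]).size() / len(df)
    weights = df.apply(
        lambda r: p_group[r[sensitive]] * p_label[r[target]] / p_joint[(r[sensitive], r[target])],
        axis=1,
    )
    return weights  # pass as sample_weight when training the scorecard
```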

Proxy Analysis for Bias Management

To complicate matters, other variables (sometimes non-obvious ones) can act as proxies for these bias-inducing variables. Statistical tools can help find these proxies and take action to reduce the bias from the data. 

Information value statistics, for example, can be computed against gender to check whether any variables are closely related to this sensitive attribute. One of the top variables included in a credit scoring model (“occupation_type,” which records an applicant’s employment) is closely related to gender. This variable should therefore be treated as a proxy and handled carefully to check that its use does not introduce bias into the model.
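
A minimal sketch of such a proxy check is shown below: it computes the information value of a categorical feature against a binary sensitive attribute using weight of evidence, with a small smoothing term to avoid division by zero. The function name, the commented rule-of-thumb threshold, and the example columns are illustrative assumptions.

```python
import numpy as np
import pandas as pd

def information_value(feature: pd.Series, sensitive: pd.Series) -> float:
    """Information value of a categorical feature against a binary sensitive attribute.

    By a common rule of thumb, an IV above roughly 0.3 suggests the feature is a
    strong proxy for the sensitive attribute and deserves scrutiny.
    """
    df = pd.DataFrame({"x": feature, "s": sensitive})
    counts = pd.crosstab(df["x"], df["s"]) + 0.5  # smoothing to avoid zero cells
    dist = counts / counts.sum()                  # category distribution within each group
    woe = np.log(dist.iloc[:, 1] / dist.iloc[:, 0])
    return float(((dist.iloc[:, 1] - dist.iloc[:, 0]) * woe).sum())

# Hypothetical usage: flag potential proxies for gender.
# iv = information_value(df["occupation_type"], df["gender"] == "F")
# print(f"IV of occupation_type vs. gender: {iv:.3f}")
```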

Accounting for History With Reject Inference

Reject inference acknowledges that the data used to estimate credit risk consists only of previously accepted applications, because these are the only ones for which credit performance can be measured. Groups of applicants who have never been granted credit therefore remain absent from the analysis, and bias against them may persist.

Furthermore, some of these groups might have good credit performance but were never tested for it, so treating them as rejected for the purposes of the model could exclude additional credit-worthy customers.
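
One common way to account for this, sketched below, is fuzzy augmentation: score the rejected applications with a model trained on accepted ones, then add each rejected applicant back as a pair of weighted "good"/"bad" pseudo-observations before refitting. This is a generic textbook approach, not a description of any particular product's implementation; the logistic regression, array names, and label encoding (1 = default) are assumptions.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def fuzzy_augmentation(X_accepted, y_accepted, X_rejected):
    """Simple reject inference via fuzzy augmentation.

    A model trained on accepted applications scores the rejected ones; each
    rejected applicant is then added twice, as a weighted "bad" and "good"
    pseudo-observation, before refitting the final model.
    """
    base = LogisticRegression(max_iter=1000).fit(X_accepted, y_accepted)
    p_bad = base.predict_proba(X_rejected)[:, 1]  # assumes class 1 means default

    X_aug = np.vstack([X_accepted, X_rejected, X_rejected])
    y_aug = np.concatenate([y_accepted, np.ones(len(X_rejected)), np.zeros(len(X_rejected))])
    w_aug = np.concatenate([np.ones(len(X_accepted)), p_bad, 1.0 - p_bad])

    return LogisticRegression(max_iter=1000).fit(X_aug, y_aug, sample_weight=w_aug)
```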

Reporting With Transparency: Model Explanations

Where a decision is made not to provide credit, business users or customers may wish to understand how that outcome was produced. On the one hand, providing no explanation, or even a poorly conceived one, has potential consequences for reputation and trust; on the other, it may leave customers or potential customers without agency in their goal of accessing credit.
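
For a linear scorecard, one simple way to produce such an explanation is to rank each feature's contribution to an applicant's score relative to the average applicant, as in the sketch below. This is an illustrative approach assuming a fitted scikit-learn style linear model where a higher score means higher risk; it is not a description of any specific product's explanation feature, and the names are hypothetical.

```python
import pandas as pd

def reason_codes(model, X_train: pd.DataFrame, applicant: pd.Series, top_n: int = 3):
    """Top feature contributions pushing a linear scorecard toward rejection.

    Contribution = coefficient * (applicant value - training mean), so positive
    values are the features that most increased the applicant's risk score.
    """
    contrib = model.coef_[0] * (applicant.values - X_train.mean().values)
    ranked = pd.Series(contrib, index=X_train.columns).sort_values(ascending=False)
    return ranked.head(top_n)

# Hypothetical usage:
# print(reason_codes(scorecard_model, X_train, X_test.iloc[0]))
```

Returned this way, the top contributions can be translated into plain-language reason codes and, where appropriate, into actionable recommendations for the applicant.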

Putting Theory Into Practice: Responsible Credit Scoring With Dataiku

There are many steps to be taken to attain the right balance between Responsible AI principles and credit scoring business outcomes, with each organization needing to structure its own processes based on its risk appetite, its legal obligations, and its convictions. 

Financial institutions need to continuously revisit their credit scoring models to re-calibrate them and explore new approaches as markets evolve. As they do so, the need to combine speed and safety becomes critical — a principle that is at the core of Dataiku’s value proposition.

Within the Dataiku platform, financial institutions can leverage Dataiku’s credit scoring solution to create their credit-worthiness models and scorecards in a user-friendly, fully customizable, and safe manner, taking into account all Responsible AI principles. With compliance simplified through easier reviews and audits, and with the capacity to clearly articulate business outcomes to all audiences, credit teams have an opportunity to accelerate their impact and support the rethinking of strategic credit strategies.

 
