
Should Your Web History Impact Your Credit Score? The IMF Thinks So

A group of researchers has published a blog post on the International Monetary Fund website calling for a significant change in the way credit scores are assessed. Instead of relying solely on traditional metrics, the group thinks banks should start integrating additional information, including your browser history.

The rise of fintech services and cryptocurrencies has changed the modern banking system in several ways, and banks face a growing number of challenges as various third-party payment processors come between financial institutions and their traditional customers. The credit scoring systems widely used in the United States and Europe are based on so-called “hard” information: bill payments, payslips, and how much of your current credit limit you are using.

The researchers argue that credit ratings based on “hard” information present two significant problems. First, banks tend to reduce the availability of credit during downturns, which is exactly when people need help the most. Second, it can be difficult for businesses and individuals with no credit history to start building one. There’s a catch-22 in the system: what you need to persuade an institution to lend you money is a credit history, which you don’t have because nobody will lend you money.

After identifying these two flaws in the existing system, the authors write:

The rise of the Internet enables the use of new types of non-financial customer data, such as browsing histories and online shopping behavior of individuals, or customer ratings for online sellers.

The literature suggests that this non-financial data is valuable for financial decision-making. Berg et al. (2019) show that easy-to-collect information such as a “digital footprint” (email provider, mobile operator, operating system, etc.) performs as well as traditional credit scores in assessing borrower credit risk. Moreover, there are complementarities between financial and non-financial data: combining credit scores and digital footprints further improves default predictions. Accordingly, the incorporation of non-financial data can lead to significant efficiency gains in financial intermediation.

In the blog post published on the IMF’s website, the authors also write: “Recent research papers show that, when powered by artificial intelligence and machine learning, these alternative data sources are often superior to traditional credit reporting methods.”
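The claim is straightforward to state in modeling terms: a classifier trained on “hard” data plus digital-footprint features should predict default better than one trained on either alone. Here’s a minimal sketch of how one might test that claim; the dataset, every column name, and the feature split below are hypothetical placeholders, and the actual Berg et al. methodology is considerably more involved:

```python
# Toy comparison of default-prediction power for "hard" credit data,
# digital-footprint data, and the two combined. The CSV file and all
# column names are hypothetical placeholders, not real data.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

df = pd.read_csv("loans.csv")   # hypothetical loan dataset
y = df["defaulted"]             # hypothetical label: 1 if the borrower defaulted

feature_sets = {
    "hard data only":   ["credit_score", "utilization", "late_payments"],
    "footprint only":   ["email_provider", "device_os", "checkout_hour"],
    "hard + footprint": ["credit_score", "utilization", "late_payments",
                         "email_provider", "device_os", "checkout_hour"],
}

for name, cols in feature_sets.items():
    X = pd.get_dummies(df[cols])  # one-hot encode the categorical footprint fields
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
    model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, model.predict_proba(X_te)[:, 1])
    print(f"{name}: AUC = {auc:.3f}")
```

If the paper’s thesis holds, the combined model would post the highest AUC. Note that everything contentious about the proposal, from where the footprint data comes from to who consented to its collection, happens before the first line of a script like this ever runs.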

However well the paper’s authors may know banking systems and finance, they are clearly unaware of the latest research on AI. This is a bad idea in general, and it’s a really bad idea right now.

The first major problem with this proposal is that there is no evidence that AI is capable of this task, or soon will be. In an interview with The Guardian earlier this summer, Kate Crawford, an AI researcher at Microsoft, had some harsh words about the current reality of artificial intelligence, despite working for one of the leaders in the field: “AI is neither artificial nor intelligent. It is made from natural resources and it is people who are performing the tasks to make the systems appear autonomous.”

Asked about the specific problem of bias in AI, Crawford said:

Time and again, we see these systems producing errors – women offered less credit by creditworthiness algorithms, Black faces mislabeled – and the response has been, “We just need more data.” But I’ve tried to look at these deeper logics of classification, and you start to see forms of discrimination, not just when systems are applied, but in how they’re built and trained to see the world. Take training datasets used for machine learning software that classify people into just one of two genders; that label people according to their skin color into one of five racial categories; and which attempt, based on how people look, to assign moral or ethical character. The idea that you can make these determinations based on appearance has a dark past, and unfortunately the politics of classification has become baked into the substrates of AI.

It’s not just one person’s opinion. Gartner has projected that 85 percent of AI projects through 2022 “will deliver erroneous outcomes due to bias in data, algorithms, or the teams responsible for managing them.” A recent Twitter hackathon found evidence that the site’s photo-cropping algorithm was implicitly biased against the elderly, the disabled, Black people, and Muslims, frequently cropping them out of photographs. Twitter has since stopped using the algorithm, because these kinds of bias problems aren’t in anyone’s interest.

Although my own work is far removed from fintech, I have spent the past 18 months experimenting with AI-powered upscaling tools, as regular ExtremeTech readers know. I’ve used Topaz Video Enhance AI extensively and experimented with other neural networks as well. While these tools are capable of delivering remarkable improvements, it’s a rare video that can simply be thrown into TVEAI with gold coming out the other side.

This shot isn’t fabulous, but it’s not too bad compared with the original source material, either. If you weren’t paying attention, you might not notice how badly rendered Dax is in the background (she’s the woman sitting at the console in the back).

Here is frame 8829 of the Star Trek: Deep Space Nine episode “Defiant.” The quality of the upscale is reasonable given the source material, but there’s a glaring error involving Jadzia Dax. This is single-model output; I’ve been mixing the output of multiple models to improve the early seasons of DS9, and in this case, every model I tried broke this scene in one way or another. The output shown here is from Artemis Medium Quality.

Here’s what happens when you zoom in. Dax is neither a Na’vi nor traditionally rendered in the art style of ancient Egypt.

This specific distortion occurs just once in the entire episode. Most Topaz models (and every non-Topaz model I tested) had this problem, and it proved resistant to repair. There aren’t many pixels representing her face, and the quality of the original MPEG-2 source is low. No AI model I’ve tried yet correctly processes an entire episode from Seasons 1–3, but this is by far the worst distortion in this one. It’s also only on screen for a few seconds before she moves and things improve.

The best repair result I’ve achieved looks like this, using TVEAI’s Proteus model:

By using a different model, we can partially repair the damage, but not completely. Too much of today’s AI is like this: capable, but limited, and dependent on human supervision.
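Mechanically, a splice like this is simple once each model’s output is exported as a numbered image sequence: copy the better model’s frames over the broken ones for the affected range, then re-encode. Here’s a minimal sketch of the idea; every path, frame number, and naming scheme below is an illustrative assumption, not TVEAI’s actual output format:

```python
# Splice one model's frames over another's for a single broken scene.
# Assumes both models exported matching, numbered PNG sequences.
# All paths, the frame range, and the naming scheme are illustrative.
import shutil
from pathlib import Path

base = Path("ds9_defiant_artemis_mq")  # primary output (Artemis Medium Quality)
patch = Path("ds9_defiant_proteus")    # replacement output (Proteus)
first, last = 8790, 8910               # hypothetical frames covering the broken scene

for n in range(first, last + 1):
    name = f"frame_{n:06d}.png"        # assumed frame-naming scheme
    shutil.copyfile(patch / name, base / name)

print(f"Replaced frames {first}-{last}; re-encode the sequence to finish.")
```

The script is trivial; the hard part is the human in the loop who has to notice the broken scene, test model after model, and decide which output is least wrong.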

There’s a reason I’m using video editing to talk about fintech issues: AI is still far from perfect in any field of study. The “fix” above is flawed, yet it still required hours of careful testing to achieve. Behind the scenes of what various companies smugly call “AI,” there are a lot of humans doing a lot of work. That’s not to say there isn’t real progress, but these systems aren’t nearly as foolproof as the hype cycle has made them out to be.

Right now, we’re at a point where AI applications can produce amazing results, even to the point of making real scientific discoveries. Humans, however, are still deeply involved in every step of the process, and even then there are errors. To fix this particular error, I had to swap in the output of an entirely different model for the duration of the scene. If I hadn’t watched the episode carefully, I might have missed the problem entirely. AI has a similar problem in general: the companies that have battled bias in their AI networks had no intention of putting it there. It crept in through biases in the underlying datasets themselves. And the trouble with those datasets is that if you don’t examine them carefully, you can wind up believing your output consists entirely of images like the one below, as opposed to the damaged scene above:

This is more typical of the final output; absolute quality is limited by the original source, but there are no glaring distortions or other issues. Human oversight of these processes is necessary because AI tools aren’t yet good enough to get things right 100 percent of the time. Neither are fintech tools.

Even if the AI component of this equation were ready to bear that weight, the privacy issues are another major concern. Companies may already be experimenting with tracking various aspects of “soft” consumer behavior, but tying your credit score to your web history looks a great deal like the social credit score China currently assigns to every citizen. There, saying the wrong things or visiting the wrong websites can result in family members being denied loans or access to certain social events. Even if the system envisioned here is less draconian, it is still a step in the wrong direction.

The United States lacks the legal framework that would be needed to deploy a credit-monitoring system like this. Any bank or financial institution that wants to use AI to judge applicants’ creditworthiness based on their browsing and purchase history should be regularly audited for bias against any group. The researchers who wrote this paper for the IMF talk about hoovering up people’s purchase histories without considering that many people use the internet to buy things they’re too embarrassed to walk into a store and purchase. Who decides which stores and vendors matter and which don’t? Who monitors the data to ensure that deeply embarrassing information doesn’t leak, whether deliberately or through hacking?
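It’s worth spelling out what such an audit might even check. Below is a toy sketch of one basic test, comparing approval rates across groups against the “four-fifths” threshold borrowed from US employment-discrimination guidance; the file and every column name are hypothetical:

```python
# Toy illustration of one check a bias audit might run: comparing
# loan approval rates across demographic groups (demographic parity).
# The CSV file and all column names are hypothetical placeholders.
import pandas as pd

decisions = pd.read_csv("credit_decisions.csv")  # hypothetical decision log
rates = decisions.groupby("group")["approved"].mean()  # approval rate per group
print(rates)

# Flag any group whose approval rate falls below 80% of the highest rate,
# the "four-fifths rule" used in US employment-discrimination guidance.
threshold = 0.8 * rates.max()
for group, rate in rates.items():
    if rate < threshold:
        print(f"Potential disparate impact: {group} approved at {rate:.1%}")
```

Even this crude check requires the auditor to have the decision logs and the demographic labels in hand, which is exactly the kind of access and oversight no current US regulation guarantees.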

The fact that non-bank financial institutions plan to use some of this data (or already do) is not a reason to allow the practice; it’s a reason to stay as far away from those organizations as possible. AI is not ready for this. Our privacy laws are not ready for this. The constant message from reputable, sober researchers working in the field is that we are far from ready to entrust these vital considerations to a black box. The authors of this paper may be absolute banking wizards, but their optimism about the near-term state of AI networks is misplaced.

Few things are more important in modern life than one’s credit and financial history, and that’s reason enough to move exceptionally slowly where AI is concerned. Give it a decade or two and check back, or we’ll spend the next few decades cleaning up injustices inflicted on individuals literally through no fault of their own.
