By Lakshmi S Menon
Technological innovation has given rise to data mining algorithms that predict credit risk more effectively than traditional methods. In earlier times, credit scoring was based on human calculation. Today, artificial intelligence coupled with advanced machine learning methods offers an innovative way of analyzing the creditworthiness of borrowers. In the aftermath of the financial crisis of 2007-2008, central banks in various countries adopted intensive data mining algorithms and statistical financial modeling to improve their supervisory and monetary responsibilities. Information about borrowers, such as missed payments and defaults on previous loans, is used to create credit reports and generate credit scores, for example the FICO score in the US. Pricing algorithms depend on statistical information about potential buyers. Many online shopping sites, including Amazon, use dynamic pricing algorithms to set the prices of various goods.
Larson et al. (2016), in their paper titled “Unintended Consequences of Geographic Targeting”, studied the behavior of the pricing algorithm used by The Princeton Review. They found that the incidence ratio of Asians affected by higher pricing relative to non-Asians was 1.8; that is, Asians were nearly twice as likely as non-Asians to be offered one of the highest prices for The Princeton Review's materials. The authors argued that the prices of The Princeton Review's online tutoring packages differ substantially depending on the customer's geographic location. The Princeton Review responded that it sets prices according to the “costs of running our business and the competitive attributes of a given market”, and that prices also depend on local tutors.
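The incidence ratio reported by Larson et al. can be illustrated with a small sketch. All counts below are invented for illustration; only the resulting ratio of 1.8 mirrors the study's finding:

```python
# Incidence ratio: how much more often one group is shown the highest
# price tier than another. The counts here are hypothetical.
def incidence_ratio(high_a, total_a, high_b, total_b):
    """Rate of high pricing in group A divided by the rate in group B."""
    return (high_a / total_a) / (high_b / total_b)

# Hypothetical figures: 900 of 10,000 customers in Asian-majority areas
# saw the top price, versus 500 of 10,000 in non-Asian-majority areas.
ratio = incidence_ratio(900, 10_000, 500, 10_000)
print(round(ratio, 1))  # 1.8
```

A ratio of 1.0 would mean both groups are shown the highest price at the same rate; 1.8 means the first group sees it nearly twice as often.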
Very often these pricing algorithms lead to unintended and biased outcomes: there is a high chance of preferential treatment for certain classes of people and the exclusion of others from the process. An independent report commissioned by the UK government, “Interim report: Review into bias in algorithmic decision making” (2019), found that data-driven algorithmic pricing raises ethical challenges. Credit scoring algorithms take customer behavior and patterns into account, so the data may be unrepresentative of reality and may reflect existing prejudices. For example, a credit scoring algorithm may mark customers who shop in certain specific shops as not creditworthy because such customers have historically been less likely to pay back their loans.
Artificial intelligence aims to make the system more equitable, but financial institutions and credit companies often fail to remove the ambiguity and bias in the system that discriminate against people on the basis of race, gender and other characteristics. The data collected for prediction can depend on your orientation, your political beliefs and even the college where you studied. The model learns the customer's patterns and generates a credit score. The result may be that a white American receives a higher credit score than a black American, which is a case of ethical and racial bias.
It is sometimes found that the same person has different credit scores with different agencies, because the criteria each lender uses to identify creditworthy borrowers differ. Although algorithms help banks and financial institutions find potential customers, they often lead to biased outcomes and exclude certain classes of people from access to credit. Writing the instructions to be fed into the program is challenging: deciding which variables to include in, or exclude from, the predictive model when determining credit risk is a herculean task. Pricing algorithms also use historical data to make predictions, and if the historical data are biased against a section of the population, the results will be inaccurate. Amazon's recruiting algorithm was found to be biased against female candidates some years ago because it used historical hiring data as a variable in deciding the worthiness of applicants.
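How bias in historical data propagates into predictions can be shown with a deliberately naive toy model. Everything below is hypothetical; the point is that a score built purely from past decisions reproduces those decisions:

```python
# Toy illustration: a "model" that scores applicants by the historical
# approval rate of people sharing their group label. If the history is
# biased, otherwise-identical applicants receive different scores.
historical = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def naive_score(group):
    """Approval rate of the applicant's group in the historical data."""
    outcomes = [approved for g, approved in historical if g == group]
    return sum(outcomes) / len(outcomes)

# Two applicants identical in every respect except group label:
print(naive_score("group_a"))  # 0.75
print(naive_score("group_b"))  # 0.25
```

No individual merit enters this score at all; it simply echoes whatever pattern, fair or unfair, is present in the training history.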
The effectiveness of an algorithm depends on the variables fed into the program to determine a person's creditworthiness. The predictions can be biased towards certain ethnic groups or genders because of inefficiencies that have crept into the system, so these algorithms pose a dilemma. In a study, Dwoskin (2015) found that credit scoring algorithms have a higher probability of error for groups that are less represented in the credit market. This calls for both technical and non-technical approaches to rectifying these artificial agents.
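One statistical intuition behind this finding, offered here as an illustrative assumption rather than Dwoskin's own analysis, is that a rate estimated from a small group carries a larger standard error. A minimal sketch with hypothetical sample sizes:

```python
import math

def standard_error(p, n):
    """Standard error of an estimated default rate p from n observations."""
    return math.sqrt(p * (1 - p) / n)

# The same true default rate, estimated from very different sample sizes.
well_represented = standard_error(0.10, 100_000)  # large group in the data
under_represented = standard_error(0.10, 500)     # thin-file group

print(round(well_represented, 4))   # 0.0009
print(round(under_represented, 4))  # 0.0134
```

The model's estimates for the smaller group are roughly fourteen times noisier, so individual scoring errors are more likely for borrowers from groups with little presence in the credit market.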
Artificial intelligence and machine learning algorithms for predicting credit risk are still at an early stage of development, and many inadvertencies have crept into the models. Many of the engineers and data scientists who build artificial intelligence models have little or no exposure to public policy design, so a different kind of wisdom is demanded of those who work in this field. One way to mitigate the further risks of credit scoring algorithms is through regulatory mechanisms. In their working paper “The Risks of Bias and Errors in Artificial Intelligence”, Osoba and Welser argue that “The error and bias risk in algorithms and AI will continue as long as artificial agents play increasingly prominent roles in our lives and remain unregulated”.
Avoiding these algorithms completely is not possible. There is a need for transparency in algorithmic models: the public must be aware of the criteria and procedures an algorithm uses. According to Sandvig et al. (2014), the algorithmic audit can be a way forward for analyzing the efficiency of financial algorithms: artificial agents are tested on how well they deliver results, without regard to the procedures or code they use. There is a grave need to push for transparency, especially in the algorithmic models involved in financial lending and credit operations. Yet these predictive models are often so complex that even the people who create them do not fully understand how well the system works.
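An outcome-only audit of the kind Sandvig et al. describe can be sketched as follows. The data are invented, and the four-fifths threshold is borrowed from US employment guidelines purely as an illustrative rule of thumb:

```python
# Sketch of an outcome audit: treat the model as a black box and compare
# its approval rates across groups, without inspecting its code.
def approval_rate(decisions):
    return sum(decisions) / len(decisions)

def disparate_impact(decisions_a, decisions_b):
    """Ratio of the lower approval rate to the higher one (at most 1.0)."""
    ra, rb = approval_rate(decisions_a), approval_rate(decisions_b)
    return min(ra, rb) / max(ra, rb)

# Hypothetical black-box decisions (True = loan approved).
group_a = [True, True, True, True, False]    # 80% approved
group_b = [True, True, False, False, False]  # 40% approved

ratio = disparate_impact(group_a, group_b)
print(ratio)         # 0.5
print(ratio >= 0.8)  # False: fails the four-fifths rule of thumb
```

The auditor needs only the inputs and outputs, which is exactly why this approach suits proprietary models whose code the lender will not disclose.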
Algorithmic accountability can be improved by analyzing whether the outputs are biased towards certain groups or populations. Algorithms have huge potential to arrive at more efficient decisions than humans, and they were invented to reduce unconscious human bias, but they bring problems of their own. These problems should be identified and rectified in the years to come so that there is less discrimination and more accountability in the outputs of the process. Algorithms are great tools and should not become black boxes posing a threat. So there is a need for transparency in the system, to find out where the bias comes from and to formulate ways to correct it.