Integrating Model-Agnostic Explainability into Supervised Learning for Credit Scoring using SHAP and LIME
Abstract
Advanced machine learning models offer superior accuracy in credit scoring, but their "black box" nature hinders regulatory compliance and erodes trust. This paper addresses that challenge by presenting a hybrid framework, developed using a Design Science Research (DSR) methodology, that integrates model-agnostic Explainable AI (XAI) into the credit scoring pipeline. The framework applies two leading XAI techniques, SHAP and LIME, to a range of supervised learning models. A functional, interactive prototype was developed and tested on credit data from the Zambian market. Experimental results revealed a stark "Accuracy Paradox": the models with the highest accuracy (84.6%) achieved perfect specificity (1.000) simply by never predicting the minority class, yielding an F1-score of only 0.458 and an ROC AUC below random chance (as low as 0.432). The XAI techniques proved crucial for diagnosing these failures and for providing clear, feature-based explanations of individual loan decisions. This research contributes a practical, integrated artifact that systematically compares multiple models and explanation methods, bridging the gap between complex ML implementation and the pressing need for fair, transparent, and accountable financial decision-making.
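To make the Accuracy Paradox concrete, the sketch below is a minimal illustration on synthetic data (all class proportions, sample sizes, and names are assumptions for demonstration, not the paper's Zambian dataset or pipeline): a classifier that never predicts the minority class earns high accuracy and perfect specificity while its F1-score for the default class collapses.

```python
# Minimal synthetic illustration of the "Accuracy Paradox" (assumed data,
# not the study's Zambian credit dataset): a majority-class predictor on
# imbalanced data looks accurate while never flagging a single default.
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.metrics import accuracy_score, confusion_matrix, f1_score
from sklearn.model_selection import train_test_split

# Imbalanced stand-in for credit data: ~15% of applicants default (class 1).
X, y = make_classification(n_samples=2000, n_features=10,
                           weights=[0.85, 0.15], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, stratify=y, random_state=0)

# Baseline that always predicts the majority class ("no default").
clf = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
y_pred = clf.predict(X_test)

tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()
print(f"Accuracy:    {accuracy_score(y_test, y_pred):.3f}")  # high, ~0.85
print(f"Specificity: {tn / (tn + fp):.3f}")                  # perfect, 1.000
print(f"F1 (defaults): {f1_score(y_test, y_pred, zero_division=0):.3f}")  # 0.000
```

Since a model that never predicts the minority class can only be as accurate as the majority class is frequent, the reported 84.6% accuracy simply mirrors the class balance, which is why accuracy alone is a misleading selection criterion in credit scoring.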
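The following is a hedged sketch of the kind of model-agnostic diagnosis the abstract describes, using the public SHAP and LIME APIs; the model choice, feature names, and class labels are illustrative assumptions, not the paper's prototype.

```python
# Illustrative SHAP and LIME diagnosis (assumed model and data; reuses
# X_train / X_test from the sketch above, not the study's prototype).
import shap
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# SHAP: additive feature attributions; for tree ensembles, TreeExplainer
# computes them exactly, showing which features drive each prediction.
shap_values = shap.TreeExplainer(model).shap_values(X_test)

# LIME: fits a local surrogate model around one applicant's record to
# explain that single loan decision via its top contributing features.
feature_names = [f"feature_{i}" for i in range(X_train.shape[1])]  # placeholders
lime_explainer = LimeTabularExplainer(
    X_train, mode="classification", feature_names=feature_names,
    class_names=["repaid", "default"])
explanation = lime_explainer.explain_instance(
    X_test[0], model.predict_proba, num_features=5)
print(explanation.as_list())  # top (feature condition, weight) pairs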