In the world of financial services, machine learning applications can offer soaring profits. With an abundance of techniques available, a common mistake companies make is failing to ask, “Are these techniques giving me the right answer?” Choosing the wrong explainability method can give you the wrong answer, and in the business of financial services, this can be highly problematic.

On Episode 26 of AI at Work, we sit down with Jay Budzik, CTO at ZestFinance, to talk about the importance of explainable AI. Explainability methods help us understand how AI models arrive at a specific decision. As Jay outlines, they help address the questions: “Can I trust them? Are they accurate? Is that really what's happening with the model?” A set of studies by ZestFinance found that not all explainability methods actually give you accurate answers about what the model is doing, so the caveat is to choose them carefully.
How Explainability Creates Fairness in Lending
ZestFinance was founded with the mission of making fair and transparent credit available to everyone. Jay shares that the founder, Douglas Merrill (formerly the CIO and VP of engineering at Google), “wanted to equalize the playing field for some participants that had been neglected by the financial system by using better math to make more inclusive, fair, and profitable credit decisions.”
“What enables that is the ability to explain how machine learning (ML) models work,” Jay explains, “because if you're going to use ML to run a billion dollar lending business, you probably want to know that it's doing the right thing. We focused a lot on making AI transparent and explainable, so that people can get comfortable trusting it.”
ZestFinance helps eradicate bias from lending decisions using advanced machine learning techniques. Jay explains that they give their clients options, where businesses can try out different alternatives and see what’s going to happen, “because generally, when you're talking about making fair decisions, you might give up some accuracy by being more fair, or some profit. And so there's a trade-off there. What's needed is the ability to see what happens in that trade-off and whether or not you want to live with the consequences.”
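The trade-off Jay describes can be made concrete with a small, hypothetical sketch: two toy lending models compared on accuracy and on a simple fairness metric, the adverse impact ratio (the ratio of approval rates between applicant groups). The applicants, the two models, and all numbers here are invented for illustration; they are not ZestFinance's data or methodology.

```python
# Hypothetical sketch of an accuracy-vs-fairness comparison between
# two candidate lending models. All data and models are illustrative.

# Each applicant: ((credit_score, debt_to_income), group, repaid_loan)
applicants = [
    ((700, 0.2), "A", True),
    ((650, 0.4), "A", True),
    ((600, 0.6), "A", False),
    ((640, 0.5), "B", True),
    ((580, 0.3), "B", False),
    ((590, 0.7), "B", False),
]

def strict_model(score, dti):
    # Approves only high credit scores: accurate here, but less inclusive.
    return score >= 640

def inclusive_model(score, dti):
    # Approves more broadly: fairer approval rates, some accuracy given up.
    return score >= 600 or dti <= 0.35

def evaluate(model):
    """Return (accuracy, adverse impact ratio) for a model."""
    approved, totals, correct = {}, {}, 0
    for (score, dti), group, repaid in applicants:
        decision = model(score, dti)
        totals[group] = totals.get(group, 0) + 1
        approved[group] = approved.get(group, 0) + (1 if decision else 0)
        correct += decision == repaid
    rates = {g: approved[g] / totals[g] for g in totals}
    # Adverse impact ratio: lowest group approval rate over the highest.
    air = min(rates.values()) / max(rates.values())
    return correct / len(applicants), air

for name, model in [("strict", strict_model), ("inclusive", inclusive_model)]:
    accuracy, air = evaluate(model)
    print(f"{name}: accuracy={accuracy:.2f}, adverse impact ratio={air:.2f}")
```

On this toy data the strict model scores higher on accuracy while the inclusive model narrows the gap in approval rates between groups, which is exactly the kind of trade-off a lender would want to see quantified before deciding what it can live with.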
Jay outlines the explainability method ZestFinance uses: Shapley additive explanations. They’ve built upon the work of Scott Lundberg at the University of Washington, extending it to explain heterogeneous ensembles of models.
“These are models that sort of take an answer from a gradient boosted tree, take an answer from a neural network, take an answer from a logistic regression and combine them to get a better answer, because you're using multiple perspectives on the data and different mathematical techniques,” he elaborates. “What we've done is extended that work to allow us to use a broader palette of model types, which then allows us to be more accurate, allows the models to be more stable over time, but still provides accurate explanations, so that you can trust them.”
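To make the idea concrete, here is a minimal, self-contained sketch of Shapley additive explanations applied to a toy heterogeneous ensemble. The two hand-rolled scoring functions merely stand in for the tree, neural-network, and regression components Jay mentions, and the brute-force Shapley computation is for illustration only; production tooling such as the SHAP library uses far more efficient algorithms.

```python
from itertools import combinations
from math import factorial

# Two toy scoring functions standing in for heterogeneous sub-models.
def model_a(x):  # stand-in for, say, a gradient boosted tree
    return 2.0 * x[0] + (1.0 if x[1] > 0 else 0.0)

def model_b(x):  # stand-in for a logistic-regression-style score
    return 0.5 * x[0] + 1.5 * x[1] + 0.25 * x[2]

def ensemble(x):
    # A heterogeneous ensemble: average the two sub-models' answers.
    return 0.5 * (model_a(x) + model_b(x))

def shapley_values(f, x, background):
    """Exact Shapley values for a single instance x.

    Features absent from a coalition are filled in from `background`
    (a single-reference baseline). Exponential in the number of
    features, so only suitable for tiny illustrative models.
    """
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else background[j]
                          for j in range(n)]
                without_i = [x[j] if j in S else background[j]
                             for j in range(n)]
                phi[i] += weight * (f(with_i) - f(without_i))
    return phi

x = [1.0, 2.0, 3.0]
background = [0.0, 0.0, 0.0]
phi = shapley_values(ensemble, x, background)
print("per-feature attributions:", phi)
# The "additive" property: baseline prediction plus the attributions
# recovers the ensemble's prediction for x.
print(abs(ensemble(background) + sum(phi) - ensemble(x)) < 1e-9)
```

The additivity check at the end is what makes these explanations trustworthy in the sense Jay describes: the per-feature attributions account exactly for the difference between the model's baseline output and its output for this applicant, regardless of which sub-models produced the score.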
Tech Giants and Startups: The Peanut Butter and Jelly Effect
The field of machine learning is so vast that you simply can’t do it all. Here, complementarity is key. “As a data science company,” Jay describes, “we’ve decided to focus on a particular area and get really good at that.”
This type of focus and specialization helps create synergy when working with other companies that have a different or broader focus.
Recognizing the interoperability between their platform and Microsoft’s infrastructure from both a technical and business perspective, ZestFinance approached Microsoft about a partnership last year. Microsoft’s elastic cloud-based Azure infrastructure supplied computational power for training models while also offering renowned data security and user management, allowing ZestFinance to focus on what they do best: machine learning. Jay described the dynamic as a “peanut butter and jelly effect” - each company providing something that went really well with what the other offered.
Jay has an optimistic perspective on a problem that’s frequently raised in the AI world - that a lot of research coming from academia and large R&D shops is often challenging to apply to real-world applications. One of the key functions at ZestFinance is exactly this type of translational work - taking academic results and evaluating them for practical application on the data sets in their business problems.
“Oftentimes we find that those techniques just fail because they were designed to solve a different problem. In other cases, we find that they're very successful,” Jay describes. “It's impossible for the guys who invent the core algorithms to consider all the use cases and to do that work. The folks who are trying to use this stuff are really making a map of what works for what problems,” which is a really important function that businesses provide, according to Jay.
From bringing fairness to the world of financial services to helping companies choose machine learning models that will give them the right results, explainable AI is incredibly important.
Jay shares some final wisdom with us - a few pieces of advice for companies looking to bring on their first data science teams. First, hire the smartest people: those with a mindset oriented toward constantly questioning results and driving the organization to a better place. Second, before hiring anyone, ensure that the organization’s data is set up in an infrastructure that will enable data scientists to leverage their skills and succeed. Lastly, be clear about the objectives and how they are going to be measured. Jay says, “As soon as you establish the right set of metrics and you have the right data set, that's a field day for a data scientist.”