Responsible AI: injecting ethics into algorithms

06.29.2022 | Duration: 21:15

On this episode we explore how AI bias can perpetuate “redlining”, and how adopting better governance can help build more responsible AI.

Podcast overview

If you’ve applied for credit recently, it’s likely an AI system provided the approval. But how is the algorithm reaching its conclusion? And in a world where thousands of data points are being fed into AI models, how can businesses ensure their AI systems are making fair decisions — free from bias? In this episode, host Tori Weldon explores how AI bias can perpetuate racism and “redlining”, and how adopting better AI governance and oversight can help build more responsible AI. 

You’ll hear from Crystal Marie McDaniels, a first-time home buyer looking for answers about why her mortgage application was initially rejected, and Allison Bethel, Director of the Fair Housing Legal Clinic and Professor at the University of Illinois Chicago School of Law, who represents clients who face discriminatory bias. You’ll also hear from the technologists at KPMG, Swami Chandrasekaran and Kelly Combs, who developed a framework and governance approach to ensure AI is used responsibly.

Podcast Transcript

Reporter: This just in --- an online real estate search brokerage has agreed to pay $4 million to settle a lawsuit brought against it by the National Fair Housing Alliance...

The director of the US Consumer Financial Protection Bureau issued an urgent warning today about the potential misuse of artificial intelligence in lending decisions….

Five Democratic senators sent a letter to the CEO of a digital lending platform demanding proof that it tests its credit model for disparate impact....

HOST: The news is full of stories about AI-driven decisions gone awry. And in the financial services industry, the stakes of getting it right have never been higher.

Companies have had to pay millions to settle legal claims about unfair decisions made by AI software.

And on top of that there's the human cost -- and the loss of trust from the customers they serve.

Crystal Marie McDaniels: This was about a week before closing, he's like, we have an issue, and is there someone that could co-sign the house for you? And they didn't say there was an algorithm. They just said, this is not going to get approved.

Kelly Combs: Do you really understand how you're using the data and how the model used those attributes? And if a consumer was aware of that, would you have a good explanation for why that data was needed for the purposes of evaluating credit risk?

Swami Chandrasekaran: In essence, what we tried to get to is how do I create a framework that has all of these checks and balances that we can put in place?

HOST: This is Speed to Modern Tech, an original podcast from KPMG. I’m Tori Weldon. Each episode, we'll bring you a problem many businesses are facing and the story of how technology was used to tackle it.

Today, making sure people are treated fairly when AI systems calculate credit approvals -- and how technology can help find and fix biased data.

Crystal Marie McDaniels remembers the first time she stepped foot in the house that she now calls home.

Crystal Marie McDaniels: And as soon as we saw the kitchen, I said, I want this house. This is my kitchen. I said, I don't care about the rest of the house. I want this house because I want this kitchen.

HOST: Owning a home had been a long-term goal for Crystal and her husband, ever since they first met in Los Angeles.

Crystal Marie McDaniels: We met at a kickball game and I scored a home run without wearing shoes and he was very impressed and that was it from there.

HOST: They fell in love, got married, and had their first child. And it was during her maternity leave that Crystal and her husband started thinking about moving -- so they could own their own home.

Crystal Marie McDaniels: We lived in a one-bedroom apartment, really close to everything. We loved it. Our rent was about $3,000 a month, which is fine when it's just two adults and you can afford it, and you're thinking about buying a house one day down the line. But once you have a kid and you're on leave and you're looking at childcare options and thinking, how am I going to pay my rent, save for a home and have decent childcare? And you're like, one of these three has to go. The math is not mathing, as they say.

HOST: The cost of buying a home in LA was really high -- even for a couple who both made six-figure salaries. And Crystal’s husband had been watching a lot of home shows on TV.

Crystal Marie McDaniels: And he would just every day, come home and say, can you believe in Oklahoma they bought a house for this amount? And what are we doing here? And we decided we would take a leap and we would move to Charlotte, North Carolina, which is not exactly where my family lives, but it's within driving distance of my family. Specifically for the purpose of being able to buy a home.

HOST: It was a big decision for the couple. Crystal had to find a new job, and her husband had grown up in LA. Breaking the news about the move to her mother-in-law was tough.

Crystal Marie McDaniels: She would not speak to us. She took our son, who was two months old, and sat outside and just held him. Oh, the funniest part from that day was - she consistently laughs about this - and she says, are you guys taking the baby too?

Yes, we are taking our son with us to Charlotte. So it was tough for her, but I think - she is someone who moved here from Ethiopia. So she left Ethiopia at 19 for a better life. And so I think she came around and understood that we were doing the same thing, just not quite changing our passports.

HOST: The move to Charlotte was easier than Crystal had thought. She got a new marketing job and they found a great new apartment in South Park -- paying about half the rent they paid in LA. They were able to save quickly for a down payment.

Crystal Marie McDaniels: We saved more than what we needed actually for the down payment, by the time it was time to purchase the house.

HOST: Crystal and her husband were pre-approved for a mortgage, so when they found their dream house -- the one with the kitchen -- they were able to make an offer. It was a milestone that had real meaning for Crystal.

Crystal Marie McDaniels: I come from a family where we were wealthy in everything but money. And I had just seen from my peers, what it meant to have wealth passed down. And I didn't have that. I knew a lot about how purchasing a home adds to your own net worth and your own value. And it makes it just so much easier for you to do things for your kids.

And then also, just thinking to myself about someone whose ancestors on my mom's side were enslaved in North and South Carolina. And thinking about what it meant to be on land that may have been where in previous generations I was enslaved. And then to imagine we're able to own that land meant so much to me - and so it was something that was really important to us.

HOST: Over the next few weeks, Crystal and her husband were busy with all the things that go along with a house purchase. They arranged movers, canceled their lease, and started finalizing all the legal documents.

Crystal Marie McDaniels: The house ended up being $375,000. And we were pre-approved for $600,000, so we were thinking like, this is easy peasy. And at the beginning there was a lot of consistent communication. Can you send this tax form? Can you send that? And I was managing all of that. And then there was like radio silence for maybe three or four weeks and I'm reaching out like, is everything okay? And I didn't hear anything.

HOST: Just a week before they were due to move into their new home, the lender called Crystal with bad news.

Crystal Marie McDaniels: He's like, we have an issue, and is there someone that could co-sign the house for you? And we were like, no, there's no one. And he's like, yeah, because there's concern about you being a contractor versus an employee.

So, I remember I was sitting in my office, kind of spun my office chair around, and I was like, hey, when you guys bought your house were you contractors or employees? And some of them were like, we were employees, some were contractors. And I was like, was that an issue for you? And I'll never forget this guy Bobby said no, like why would that be an issue? Like everyone starts as a contractor and turns into an employee. And I was like, huh.

HOST: Crystal's boss offered to talk to the lender, to give them the verification they needed. But the lender said his system just kept rejecting the mortgage.

Crystal Marie McDaniels: Then we started talking to our realtor who was now abreast of what's going on. And he's just like, I've never heard of this before. I've had lots of contractors buy homes. And for the record, I'm a contractor, but my spouse is not, he is a full-time employee at his job, making more than six figures. He's making more than what's needed to buy the home. So I wasn't sure why they were so concerned about my income.

And they didn't say there was an algorithm. They just said this is not going to get approved, they require certain things, and we're going to sell this loan back to them, but we can't sell this back to them because it's not getting approved by their software.

And we were just like, what? I don't know what that means.

HOST: After many frantic phone calls with their real estate agent and the lender, Crystal says they got a call the night before the house was supposed to close. The funding had finally gone through.

Although their lender's system was still rejecting their mortgage, a supervisor had stepped in to do a manual override.

Crystal was relieved, but also angry. She was never able to get to the bottom of why the lender's system had kept rejecting their mortgage -- but she had her suspicions.

Crystal Marie McDaniels: So I listen to a lot of public radio, there was a story about some other banks had been caught up in a lawsuit around housing discrimination or loan discrimination.

I remember thinking like, I wonder if that was us? Because I just remember thinking like, we did everything right, but it just didn't seem to work. So there was always like an inkling, but I never was sure if that's what it was.

Whenever things aren't quite right, there's always a little siren in my mind that's like: I wonder if this is because I'm a woman, or I wonder if this is because I'm black?

But I never really know cause you know, this is not the 1960s where someone's going to be like in a Klan robe. You just kind of … hmm, I wonder. But I wasn't sure, I certainly suspected it, but I wasn't sure. And I still can't be a hundred percent sure.

HOST: Under the Fair Housing Act, discrimination based on race, color, religion and a number of other factors has been illegal in the US for more than fifty years.

So when AI is used to make mortgage decisions, for example, the lender needs to be certain the algorithms it is using are free from discriminatory bias.

Allison Bethel helps people fight this type of discrimination in the courts.

As a Law Professor at the University of Illinois Chicago, and the head of the Fair Housing Legal Clinic, she's seen many cases of bad AI decisions.

Allison Bethel: Basically if the data that you're putting in and who's putting it in is flawed or biased in any way, then it's going to yield a flawed or biased result.

HOST: Allison says under the Fair Housing Act, people can pursue a legal claim if the data being used to make decisions is considered discriminatory.

For example, while it's illegal to use race as a data point in mortgage approvals, past court cases have revealed the use of other data points, like postal code, can have the same effect.
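A rough illustration of the kind of screen fair-lending teams run is the widely used "four-fifths rule," which compares approval rates across groups -- here, groups derived from ZIP code rather than race itself. This sketch is not from the episode, and the column names are hypothetical.

```python
# Illustrative only: a "four-fifths rule" screen for disparate impact.
# Column names ("zip_segment", "approved") are hypothetical.
import pandas as pd

def disparate_impact_ratios(df: pd.DataFrame, group_col: str, approved_col: str) -> pd.Series:
    """Each group's approval rate divided by the best-performing group's rate."""
    rates = df.groupby(group_col)[approved_col].mean()
    return rates / rates.max()

applications = pd.DataFrame({
    "zip_segment": ["A", "A", "A", "A", "B", "B", "B", "B"],
    "approved":    [1,   1,   1,   1,   1,   0,   0,   0],
})

ratios = disparate_impact_ratios(applications, "zip_segment", "approved")
print(ratios)                 # segment B's approval rate is 25% of segment A's
print((ratios < 0.8).any())   # True: below the common 80% threshold, a red flag
```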

With AI software being used more and more widely, Allison says lenders need to be able to explain how the AI is making decisions.

Allison Bethel: Many of the providers' practice is not to give the applicant an opportunity to say, hey, this is right, this is wrong. You know, to sort of do what we call an individualized assessment, they don't do that. They just rule up or down based on that report. And what is happening now is lawsuits are coming down finding that the screening company itself perhaps was negligent in running the report, should have not just blindly relied on that report, but should have looked at the whole thing more holistically before reaching a decision.

HOST: Using AI and machine learning to process credit applications has become commonplace for many financial institutions.

AI systems can process huge amounts of personal data. In many cases, this leads to more nuanced decisions about people's ability to pay.

But using all this data also creates a risk. Companies can face legal action if the data is used incorrectly, or if the AI is reinforcing bias.

So how can companies determine whether the AI technologies making credit decisions are using the data correctly?

This was one of the challenges Kelly Combs and Swami Chandrasekaran set out to tackle.

They both work in Digital Solutions and Architecture for KPMG. As Kelly explains it, solving for bias in AI is more complicated than simply looking for biased data attributes - like race.

They need to explore all the different connections that AI might make between different data attributes -- and how those connections could be introducing bias.

Kelly Combs: And so it's not so much about, are we using race or gender, for example, because those are already, in most cases, prohibited attributes regardless if it's machine learning or not.

But what we've come to find is we're broadening the set of data attributes that are available for the use of machine learning. And in some cases you can use a proxy set of data to uncover gender or to uncover race. So we haven't accounted for the net new areas of risk that AI can introduce.

HOST: Since AI is able to process hundreds of thousands of data sources -- and new data is being added all the time -- flaws in the modeling are sometimes hidden.

If a human is approving a mortgage, you can ask them to explain how they made their decision.

But you can't ask software to explain its decision.

Because AI uses so many different data sets and models, it can be hard to track all the connection points the AI is processing.

For businesses using AI to make decisions that impact people's lives -- things like mortgages and health insurance -- the stakes are pretty high. Decisions being made by machines need to stand up to scrutiny -- from regulators, from lawyers, and from consumers.

Kelly Combs: So what we're seeing is these organizations saying we need to evolve our method and we need a way to create a solid framework that we can address those new risks.

HOST: Swami says he saw this need for checks and balances early on -- back when AI was taking off as a tool in the financial and insurance sector.

Swami Chandrasekaran: You can't be going and using pen and paper to evaluate AI, right? It's going to be such an oxymoron. You’ve got this super advanced technology and you have pen and paper to validate it. So where I came in was with my AI experience, having been on the other side, building AI platforms, but also building these different AI models at scale, I kind of had this experience and knowledge on things that could possibly go bad if you don't do it the right way.

HOST: Swami is a pioneer in AI technology, and worked as a Chief Engineer on IBM's Watson project. And he says for years, AI specialists built systems without any real oversight.

Swami Chandrasekaran: So historically what has happened is the knowledge was confined to certain people who had studied the field of AI.

So they trained the model, they chose the data, they deployed it and they self-certified everything. So what do you do? I mean, you could have independent overseeing bodies - AI just did not have any.

HOST: Swami saw the need to create a system to oversee AI models -- a series of checks and balances that could be used to test the system, and find the flaws.

He compares it to a home inspection --- a way to identify what's broken, and what might need to be fixed in the future.

Kelly, Swami and other team members began building out a framework for this inspection process -- a project they dubbed responsible AI.

With her background in risk analysis, Kelly and her team got the ball rolling by developing a series of tests to uncover risks with the AI models.

Kelly Combs: We reproduced or re-performed the work that the data scientists did and said, you know, we're seeing risk, for example, in these specific areas.

So, do you really understand how you're using the data and how the model used those attributes? And if a consumer was aware of that, would you have a good explanation for why that data was needed for the purposes of evaluating credit risk? So through a series of, I would say, re-performing or replicating the data science activities we are able to uncover things like bias.

HOST: Kelly has this example of how bias can creep into the decision making -- because of the way the AI is connecting data.

Kelly Combs: For example, if we know that more men go to a four year university than women, we may not use gender as an attribute in the model. But we might be using education as a proxy to uncover, you know, this is more likely to be a male than a female. And we know historically let's just say for illustrative purposes, that men have better credit than women. So we're more likely to skew our decisioning based on that data.
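One simple way to probe for the proxy effect Kelly describes is to test whether the "allowed" features can predict the protected attribute the model is not supposed to see. The sketch below is illustrative only; the attribute names and data are made up.

```python
# Illustrative sketch with made-up data: if the "allowed" features can predict a
# protected attribute well above chance, they are acting as a proxy for it.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n = 2_000

# Protected attribute the model must not use directly (e.g. gender, encoded 0/1).
protected = rng.integers(0, 2, size=n)

# "Allowed" attributes; years of education is deliberately correlated with the
# protected attribute, mirroring Kelly's four-year-university example.
education_years = 12 + 4 * protected + rng.normal(0, 1.5, size=n)
income = rng.normal(60_000, 15_000, size=n)
X = np.column_stack([education_years, income])

clf = make_pipeline(StandardScaler(), LogisticRegression())
proxy_score = cross_val_score(clf, X, protected, cv=5).mean()
print(f"Protected attribute recovered with accuracy {proxy_score:.2f} (chance is 0.50)")
```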

HOST: Swami says the other risk with AI modeling is data drift.

AI models are built around the best data available at the time.

For example, an AI model built five years ago may have learned that full-time employment is a strong predictor of loan repayment. But what if this is no longer true today – because of the rise of contract work?

This is data drift – and it erodes the accuracy of AI models.

Swami Chandrasekaran: A very simple definition of drift is you train the model for a reason, you used data to train the model - and is that data still good? So let's take the loan example. I trained the model based on data from the last 10 years. Okay. Are people who are coming and applying do they exhibit that same characteristic? So the question you have to ask is, is my model still making valid decisions? Maybe, maybe not.
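A minimal sketch of one common drift check is the population stability index, which compares the distribution a model was trained on with the distribution it sees in today's applications. This is illustrative only, not the KPMG framework, and the feature is invented.

```python
# Illustrative sketch, not the KPMG framework: the population stability index
# (PSI) compares the feature distribution a model was trained on with the
# distribution in today's applications.
import numpy as np

def population_stability_index(train: np.ndarray, current: np.ndarray, bins: int = 10) -> float:
    """PSI between two samples of one feature; values above ~0.25 are often read as major drift."""
    edges = np.histogram_bin_edges(train, bins=bins)
    expected = np.histogram(train, bins=edges)[0].astype(float)
    actual = np.histogram(current, bins=edges)[0].astype(float)
    expected = np.clip(expected / expected.sum(), 1e-6, None)
    actual = np.clip(actual / actual.sum(), 1e-6, None)
    return float(np.sum((actual - expected) * np.log(actual / expected)))

rng = np.random.default_rng(1)
weeks_worked_train = rng.normal(48, 3, size=5_000)   # training era: mostly full-time work
weeks_worked_today = rng.normal(38, 8, size=5_000)   # today: far more contract work

print(f"PSI = {population_stability_index(weeks_worked_train, weeks_worked_today):.2f}")
```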

HOST: Once the team has identified these risks in the AI system, the next step is prevention. A large part of this process involves workflows and documentation -- coming up with a governance structure that puts guardrails on the AI and the people building the models.

Kelly Combs: Let's understand what's happening in the black box and put parameters and checks and a workflow behind what the data scientists do, that's repeatable. So we can evidence how did the AI come to a decision?

Swami Chandrasekaran: So in those dev ops, can I have these checks and balances in place?

Can you teach or enable my teams to do these kinds of things? So it gets institutionalized. It gets operationalized.
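A rough sketch of the kind of repeatable, documented check that could sit in a model release pipeline appears below; the gate names and thresholds are hypothetical, not KPMG's.

```python
# Illustrative sketch, with hypothetical gate names and thresholds: a repeatable
# release check that fails deployment unless documented fairness, accuracy, and
# drift thresholds are met, so every go/no-go decision leaves evidence behind.
from dataclasses import dataclass

@dataclass
class ReleaseGate:
    min_disparate_impact_ratio: float = 0.80  # four-fifths rule
    min_accuracy: float = 0.70
    max_psi: float = 0.25                     # drift threshold

def can_deploy(metrics: dict, gate: ReleaseGate = ReleaseGate()) -> tuple[bool, list[str]]:
    """Return (approved, reasons) so the decision itself is documented."""
    reasons = []
    if metrics["disparate_impact_ratio"] < gate.min_disparate_impact_ratio:
        reasons.append("disparate impact ratio below threshold")
    if metrics["accuracy"] < gate.min_accuracy:
        reasons.append("accuracy below threshold")
    if metrics["psi"] > gate.max_psi:
        reasons.append("feature drift above threshold")
    return (not reasons, reasons)

approved, reasons = can_deploy({"disparate_impact_ratio": 0.72, "accuracy": 0.81, "psi": 0.10})
print(approved, reasons)   # False ['disparate impact ratio below threshold']
```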

HOST: Swami says once they are able to help create some rules to document a company's AI process, the final step is to develop a way to automate this governance.

Ironically, the complexity and scale of AI make human monitoring next to impossible. AI systems need AI oversight -- and this is where Swami and Kelly see AI heading.

Kelly Combs: The path to the future is how do we automate that governance methodology?

Really the goal is we shouldn't be monitoring and assessing AI using manual methods.

We should be applying automation and the concept of machine learning ops to the data science life cycle and how we build these solutions.

Swami Chandrasekaran: In essence, what we tried to get to is how do I create a framework that has all of these checks and balances that we can put in place? How do we automate many of these, if not all? How do I quantify all of these and how do I continuously monitor all of these so that when something goes wrong, I can come and tell, hey, this loan prediction model, by the way, which you have deployed, is seeming to reject a lot of loans for this particular attribute. It could be salary levels, it could be a region and it is a flag that you want to go and look at, okay, what is going on?
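An illustrative sketch of the kind of continuous monitor Swami describes is shown below: flag any segment, such as a salary band or region, whose rejection rate runs well above the overall rate. The column names and threshold are hypothetical.

```python
# Illustrative sketch of a continuous fairness monitor: flag any segment whose
# rejection rate exceeds the overall rate by a set margin. Column names and the
# margin are hypothetical.
import pandas as pd

def flag_rejection_outliers(decisions: pd.DataFrame, segment_col: str, margin: float = 0.15) -> pd.Series:
    """Segments whose rejection rate exceeds the overall rejection rate by `margin`."""
    overall = 1 - decisions["approved"].mean()
    by_segment = 1 - decisions.groupby(segment_col)["approved"].mean()
    return by_segment[by_segment > overall + margin].rename("rejection_rate")

decisions = pd.DataFrame({
    "salary_band": ["low", "low", "low", "mid", "mid", "high", "high", "high"],
    "approved":    [0,     0,     1,     1,     1,     1,      1,      0],
})
print(flag_rejection_outliers(decisions, "salary_band"))   # flags the "low" band
```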

HOST: Kelly admits that even with all these systems in place, developing responsible AI is not a one-and-done.

Kelly Combs: Consumers want to believe in brands and work with brands they align to, and they want to believe that they have their best interests in mind.

And so as we continue to use AI and machine learning, this responsibility side of it will continue to increase. And there will be onus on organizations to prove how are you doing what you said you are going to do? And how are you ensuring fair outcomes?

HOST: Looking back on the process they went through to buy their house, Crystal says she has no regrets. Owning a home has been everything they hoped for. But she remains suspicious of how lenders are making decisions.

Crystal Marie McDaniels: I knew there's a racial gap. I knew that there was a gap in homeownership, and I know there were a lot of people like me who are able to purchase homes, but for whatever reason, didn't.

So I went into it with a lot of skepticism. And I was disappointed and I felt like we had done everything to be responsible. We hadn't purchased a ridiculously huge home or something outside of our means. We tried our best and this is just what happens.

HOST: For his part, Swami says AI systems need consistent oversight. And the pressure on companies to create this oversight will only continue to grow.

Swami Chandrasekaran: Many clients are acknowledging and recognizing that they've got to instill these practices into their everyday way of working. This is not an after effect. This is not like, oh, I'll build everything and hope and pray everything was good. You have to be able to state and specify how you're protecting the data, how you're not biasing individuals, how you're using AI in a responsible way.

HOST: You've been listening to Speed to Modern Tech, an original podcast from KPMG. I'm Tori Weldon.

Todd Lohr: And I’m Todd Lohr, the head of technology enablement at KPMG.

If you want to know more about the technologies and the people you heard about in this story, click on the link in the show notes.

HOST: And don’t forget to subscribe and leave a review in your favorite podcasting app. We'll be back in two weeks with more stories.

Speed to Modern Tech Podcast Series

Explore other episodes in this original podcast series by KPMG dedicated to world-leading insights in technology.
