Financial algorithms have come under fire recently for apparently discriminating against female borrowers. Iona weighs up whether financial AI is really A-OK…
The subject of artificial intelligence (AI) and whether it’s destined to destroy humanity has fascinated the world for decades. Professors and sci-fi writers have envisaged a frightening future where robots will be equipped to make ruthlessly efficient decisions that will ultimately harm their human masters.
But is that future already here? Automation is radically disrupting industries that were once heavily reliant on human labour, from manufacturing to retail. Now, it seems, the financial industry is in the grip of its very own AI crisis.
This week, I went to see the latest instalment in the Terminator franchise (which I actually enjoyed a lot!) and it got me thinking about a recent chapter in the AI (and identity) wars. Last week, Apple was accused of issuing different credit card limits to customers based on gender. Even Steve Wozniak, one of Apple’s co-founders, took to social media to complain that his wife was given a tenth of his borrowing power, even though they share all their assets and enjoy the same credit limits on other cards. Others highlighted how Apple was granting women less credit than their partners, even when the women’s credit scores were higher.
The scandal, which emerged in the US, has prompted regulators over there to investigate, with Goldman Sachs (the issuer behind the Apple Card) coming in for some serious flak over its response to the claims.
“There’s no gender bias in our process for extending credit,” Goldman Sachs boss David Solomon said in an interview with Bloomberg TV late Thursday. He added:
“We don’t ask whether — when someone applies — if they’re a man or a woman. We don’t ask if they’re married. There’s no question that different applicants can get different results, and that can be for a variety of reasons.”
So what’s the deal? How do lenders and insurers assess applicants? Is there any truth in the idea that financial AI is discriminating against certain groups in society? Are we already in a scary dystopia where we’ve handed over too much control to automated systems that aren’t getting things right?
I’m not so sure. Anyone looking to uncover widespread, conclusive examples of systemic bias based on protected characteristics within financial AI, at least within the UK, will have a hard time. Maybe because they don’t exist…
Giving credit where credit’s due?
The first obvious point to make is that Apple Card is only available in the US. As things stand, a launch date in the UK isn’t on the horizon. And there is no obvious point of comparison in the UK either: Goldman Sachs doesn’t issue any credit cards here, and its only presence in the British retail market is through the minor savings provider Marcus. So that’s red herring no.1.
Red herring no.2…even if we accept that the US credit system is not that different to ours, do these recent claims stand up to scrutiny? The original complainant, a web guru and businessman called David Heinemeier Hansson, believed Apple gave him a higher credit limit than his wife for “f****** sexist” reasons.
As FT Alphaville’s Jemima Kelly pointed out, however, we have no idea why Mr Hansson’s wife only got one-twentieth of his credit. And neither did the thousands who piled in and retweeted, liked or made political capital out of his fulmination (like Elizabeth Warren, one of the Democratic candidates for president).
What was the (seemingly invisible) Mrs Hansson’s credit history? What were her earnings? What does she work as (if she works)? All are far more likely to affect the credit she gets offered than the mere fact of her gender. In fact, much research (and the recent history of lower insurance premiums for women, before ‘equality’ regulations kicked in) demonstrates that men are viewed as riskier in their financial behaviour than women. That might explain why, on average, women are likely to have BETTER credit scores than men. Perhaps Mrs Hansson bucked this trend. We don’t know. That’s why selective honesty on Twitter rarely sheds light on, or solves, any real problems.
Credit works in mysterious ways
Indeed, we can never FULLY know what factors are considered when we’re being assessed for credit or insurance. The specific processes used by companies are closely guarded commercial secrets, and that inevitably breeds mistrust.
The closest we might have to an explanation of these mysterious systems came from the BBC this week, when personal finance reporter Kevin Peachey admirably tried to explain what might really be happening:
In the Apple Card case, we do not know how the algorithm makes its decisions or which data it uses, but this could include historic data on which sorts of people are considered more financially risky, or who have traditionally made applications for credit.
And indeed, that might lead to decisions which could be seen as discriminatory, as Kevin goes on to explain:
For example, an algorithm will not know someone’s gender, but it may know you are a primary school teacher – a female-dominated industry. Historic data, most controversially in crime and justice, may be drawn from a time when human decisions by police or judges were affected by somebody’s race.
But this could well be confusing correlation and causation. Some of the data, and the assumptions built on it, may well need to be updated for today’s more progressive society. But it seems unlikely that we’ll ever be able to stop lenders preferring those who earn more in their own right, who work in traditionally more “steady and reliable” professions, and who convey through certain life choices that they ‘deserve’ a higher credit limit.
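To make that proxy effect concrete, here’s a deliberately toy sketch in Python. Every number, weight and occupation below is invented for illustration and has nothing to do with any real lender’s model. The scoring rule never sees gender, yet it still produces gender-skewed limits, because occupation quietly does the work of gender for it:

```python
# A toy credit-scoring rule. Gender is never an input -- but occupation,
# which is correlated with gender in this synthetic population, acts as
# a proxy for it. All weights and figures here are invented.
import random

random.seed(42)

OCCUPATION_WEIGHT = {"engineer": 1.2, "teacher": 0.8}  # hypothetical weights

def credit_limit(income, years_of_history, occupation):
    # Inputs: income, length of credit history, occupation. Note: no gender.
    base = income / 10
    return base * OCCUPATION_WEIGHT[occupation] * (1 + 0.05 * years_of_history)

# Build a synthetic population: identical incomes and credit histories,
# but occupation is skewed by gender, as it often is in real labour markets.
limits = {"woman": [], "man": []}
for _ in range(10_000):
    gender = random.choice(["woman", "man"])
    occupation = random.choices(
        ["teacher", "engineer"],
        weights=[0.8, 0.2] if gender == "woman" else [0.2, 0.8],
    )[0]
    limits[gender].append(credit_limit(40_000, 5, occupation))

for gender, values in limits.items():
    print(f"{gender}: average limit £{sum(values) / len(values):,.0f}")

# Typical output: women average around £4,400 and men around £5,600 --
# a gap the rule produces without ever being told anyone's gender.
```

The point isn’t that any lender does exactly this; it’s that removing a protected characteristic from the inputs doesn’t remove its statistical fingerprint from the outputs.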
A credit counter-story
And here is where I’d like to offer a personal story to counterbalance the prevailing narrative on financial AI (as part of my effort to be more honest on Young Money Blog). My brother Matt and I recently applied for credit cards for the first time. We did this to help Matt, who is a professional musician, purchase a new high-quality violin, and we needed our combined borrowing power to make it happen.
We were fortunate enough to have decent credit scores, partly because we have bought property together and paid off mortgages. We have similar earnings and an identical borrowing history. Yet I got more than twice the credit limit of Matt, who is two years older than me.
Again, I’ll never know for sure why this happened, but I suspect it’s because our lender made a judgement about the stability and security of his career (musician) versus mine (financial journalist and broadcaster). When I listed my regular employers – the FT and the BBC – that might have superficially come across better than Matt’s (various national orchestras).
Is this fair? I don’t think so. There’s an argument to be made that our lender discriminated against Matt on the basis of his work and employers. Did it have anything to do with gender or age? No.
Personal anecdotes, of course, do not provide the full picture of financial AI today. And that’s why we should take reactionary tweets about credit limits with a decent pinch of salt too.
That doesn’t mean we should let lenders and the credit rating system off the hook. When I checked my credit score recently, I was dumbfounded to see that it fell within the “good” range. Now, that’s still a decent position to be in, but seriously…what do you have to do to get an excellent credit score, may I ask? I have paid off two mortgages and managed my credit card impeccably. Is it because I haven’t borrowed enough? I suspect so. The system is compelling young people, regardless of gender and ethnicity, to borrow more than they really should.
And if you make a mistake? That will dramatically slash your credit score and – yes – well and truly terminate your borrowing prospects. Now, THAT’S scary.

Tell us your experience. Do you think your lender has discriminated against you? Are you scared about financial AI? Leave a comment below or tweet us – @ionayoungmoney.