Artificial Intelligence Bias: On Trust, Complexity, and Diversity

November 7, 2019 | Artificial Intelligence, News

Just as in computers “Garbage In = Garbage Out,” so also in Machine Learning: “Bias In = Bias Out.”

Microsoft had to hastily withdraw its Twitter chatbot Tay after it reacted to trolls with racist and misogynist tweets. Presumably, the bot picked up this artificial intelligence bias from the data it learned from.

What is machine learning bias? It’s when an algorithm produces results that are prejudiced due to erroneous assumptions in the machine learning process.
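To make "Bias In = Bias Out" concrete, here is a minimal sketch in plain Python with hypothetical data: a naive model that simply learns historical approval rates per group will faithfully reproduce whatever bias is baked into its training records.

```python
# Minimal illustration of "Bias In = Bias Out" (hypothetical data).
# The "model" learns the historical approval rate for each group and
# uses it to score new applicants -- reproducing any bias present in
# the training records.

from collections import defaultdict

# Hypothetical historical lending records: (group, approved)
training_data = [
    ("group_a", True), ("group_a", True), ("group_a", True), ("group_a", False),
    ("group_b", True), ("group_b", False), ("group_b", False), ("group_b", False),
]

def fit_approval_rates(records):
    """Learn P(approved | group) from historical data."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, approved in records:
        counts[group][0] += int(approved)
        counts[group][1] += 1
    return {g: approved / total for g, (approved, total) in counts.items()}

model = fit_approval_rates(training_data)
# group_a scores 0.75, group_b scores 0.25: the historical bias carries over
```

Nothing in the fitting step distinguishes a legitimate signal from a prejudiced one; the model is only as fair as the records it was trained on.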

“Algorithms can have built-in biases because they are created by individuals who have conscious or unconscious preferences that may go undiscovered until the algorithms are used, and potentially amplified, publicly” – SearchEnterpriseAI

Artificial Intelligence Bias: the tradeoff

There is an implicit assumption of trust that training data will be scanned and verified to be free of bias before it is fed into a machine learning model.

But what if the bias creeps in due to the sheer complexity of the model, or the number of variables? Bias is, therefore, also possible due to the structure of the algorithm itself. It may become so complex (a “black box”) that humans flounder in their understanding of it.

Melissa Koide, the CEO of FinRegLab, colorfully describes it as “an onion we have to unpeel.”

So, there’s a trade-off here: the power of the algorithm versus its explainability.

Therefore, human judgment has to be a part of the process, somewhere.

The risks of artificial intelligence bias in financial lending

According to Kenneth Edwards, associate general counsel for regulatory affairs at the lender Upstart, Amazon also faced a bias problem. Automated delivery to ZIP codes with a high African-American population apparently got lower priority.

But what if AI models used in financial lending developed a color bias among loan applications?

It’s cold comfort that traditional, human-driven lending models also suffered from unfair bias.

If lending has to be “fair,” how do you get your model to evaluate all the avatars of “fairness?”

There could be hundreds of descriptions of fairness.
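The difficulty is real even with just two common formal definitions. The sketch below (hypothetical numbers, plain Python) computes demographic parity (equal overall approval rates across groups) and equal opportunity (equal approval rates among creditworthy applicants) for the same set of decisions; the decisions satisfy the first while violating the second.

```python
# Two of the many formal definitions of "fairness", computed on the same
# hypothetical lending decisions. Each row is (group, creditworthy, approved).
decisions = [
    ("group_a", True,  True), ("group_a", True,  True),
    ("group_a", False, False), ("group_a", False, False),
    ("group_b", True,  True), ("group_b", True,  False),
    ("group_b", False, True), ("group_b", False, False),
]

def approval_rate(rows):
    return sum(approved for _, _, approved in rows) / len(rows)

def in_group(rows, g):
    return [r for r in rows if r[0] == g]

def creditworthy(rows):
    return [r for r in rows if r[1]]

# Demographic parity: compare overall approval rates per group.
dp_a = approval_rate(in_group(decisions, "group_a"))  # 0.5
dp_b = approval_rate(in_group(decisions, "group_b"))  # 0.5 -> parity holds

# Equal opportunity: compare approval rates among creditworthy applicants.
eo_a = approval_rate(creditworthy(in_group(decisions, "group_a")))  # 1.0
eo_b = approval_rate(creditworthy(in_group(decisions, "group_b")))  # 0.5 -> violated
```

Both metrics are reasonable readings of "fair," yet a lender cannot always satisfy both at once, which is why the choice of definition is itself a human judgment call.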

One solution could be to introduce diversity among the humans who curate machine learning inputs and who create the algorithms.

[Related Story: Auriga Launches WWS AI For Advanced Banking Insights ]
