Artificial Intelligence Bias: On Trust, Complexity, and Diversity

November 7, 2019 | Artificial Intelligence, News

Just as in computing “Garbage In = Garbage Out,” so too in machine learning: “Bias In = Bias Out.”

Microsoft had to hastily withdraw its Twitter chatbot Tay after it responded to trolls with racist and misogynistic tweets. Presumably, the bot picked up this artificial intelligence bias from the data it learned from.

What is machine learning bias? It’s when an algorithm produces results that are systematically prejudiced because of erroneous assumptions in the machine learning process.

“Algorithms can have built-in biases because they are created by individuals who have conscious or unconscious preferences that may go undiscovered until the algorithms are used, and potentially amplified, publicly” – SearchEnterpriseAI
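To make the idea concrete, here is a minimal sketch of “Bias In = Bias Out” (a hypothetical example, not anything described in the article): if the historical labels are skewed against one group, a model that simply learns from those labels reproduces the skew.

```python
# Hypothetical illustration: biased training labels resurface in the model's output.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, size=n)              # 0 = group A, 1 = group B (assumed groups)
score = rng.normal(loc=600, scale=50, size=n)   # hypothetical credit score

# Biased historical labels: group B was approved less often at the same score.
approve_prob = 1 / (1 + np.exp(-(score - 620) / 25)) - 0.15 * group
labels = rng.random(n) < approve_prob

# A "model" that simply memorises historical approval rates per group.
rate_a = labels[group == 0].mean()
rate_b = labels[group == 1].mean()
print(f"Approval rate learned for group A: {rate_a:.2f}")
print(f"Approval rate learned for group B: {rate_b:.2f}")
# The gap baked into the training data reappears, unchanged, in the model's behaviour.
```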

Artificial Intelligence Bias: the trade-off

There is an assumption of trust that machine learning data will be fed into the model only after it has been scanned and verified as free of bias.

But what if bias creeps in through the sheer complexity of the model, or the number of variables? Bias can also arise from the structure of the algorithm itself, which may become so complex (a “black box”) that humans struggle to understand it.

Melissa Koide, the CEO of FinRegLab, colorfully describes it as “an onion we have to unpeel.”

So, there’s a trade-off here: the power of the algorithm versus its explainability.

Therefore, human judgment has to be a part of the process, somewhere.
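As a hedged illustration of that trade-off (a sketch using scikit-learn, which the article does not mention), compare a model whose reasoning can be read off directly with one whose reasoning cannot:

```python
# Illustrative sketch only: simple-and-explainable versus powerful-and-opaque.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import GradientBoostingClassifier

X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

# Simple, explainable model: each coefficient can be read and challenged.
simple = LogisticRegression(max_iter=1000).fit(X, y)
print("Logistic coefficients:", np.round(simple.coef_[0], 2))

# More powerful model: hundreds of trees, no single number to point to.
complex_model = GradientBoostingClassifier(n_estimators=300).fit(X, y)
print("Boosted model training accuracy:", complex_model.score(X, y))
# Explaining *why* one applicant was declined now requires extra tooling
# (feature attribution, surrogate models) -- the "onion we have to unpeel".
```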

The risks of artificial intelligence bias in financial lending

According to Kenneth Edwards, associate general counsel for regulatory affairs at the lender Upstart, Amazon also faced a bias problem. Automated delivery to ZIP codes with predominantly African-American populations apparently received lower priority.

But what if AI models used in financial lending developed a color bias when evaluating loan applications?

It’s cold comfort that traditional, human-driven lending models also suffered from (unfair) bias.

If lending has to be “fair,” how do you get your model to evaluate all the avatars of “fairness”?

There could be hundreds of descriptions of fairness.
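For instance, two of the most common definitions, demographic parity and equal opportunity, can disagree on the same set of decisions. The sketch below (entirely hypothetical data, not from the article) shows one being satisfied while the other is violated:

```python
# Hypothetical loan decisions: group membership, model decision, true repayment outcome.
import numpy as np

group    = np.array([0, 0, 0, 0, 1, 1, 1, 1])
approved = np.array([1, 1, 0, 0, 1, 0, 0, 0])
repaid   = np.array([1, 0, 1, 0, 1, 1, 0, 0])

def approval_rate(g):
    # Demographic parity compares overall approval rates across groups.
    return approved[group == g].mean()

def true_positive_rate(g):
    # Equal opportunity compares approval rates among applicants who would repay.
    mask = (group == g) & (repaid == 1)
    return approved[mask].mean()

print("Approval rates:", approval_rate(0), approval_rate(1))   # 0.5 vs 0.25
print("TPRs:          ", true_positive_rate(0), true_positive_rate(1))  # 0.5 vs 0.5
# Here demographic parity fails while equal opportunity holds, so a lender
# must choose which notion of "fair" its model is actually optimising for.
```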

One solution could be to introduce diversity among the humans who control machine learning inputs and who create the algorithms.

