Artificial Intelligence: A Mathematical Watcher Over AI
Researchers devise an “ethical eye” to watch over artificial intelligence.
Researchers from the University of Warwick, Imperial College London, EPFL (Lausanne), and Sciteb Ltd have devised a mathematical method to watch over AI and the possibility that it could adopt unethical strategies. (Science Daily)
Unethical or biased behavior that could prove dangerous
Increasingly, AI is being deployed in commercial settings where it makes decisions or adopts strategies without human supervision. In such settings an algorithm may well adopt unethical or biased behavior, exposing the organization to legal and financial penalties.
“If an AI aims to maximize risk-adjusted return, then under mild conditions it is disproportionately likely to pick an unethical strategy unless the objective function allows sufficiently for this risk,” says the paper. Titled “An unethical optimization principle,” it was published in Royal Society Open Science on Wednesday, 1 July 2020.
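The paper’s claim can be made concrete with a small simulation. The sketch below is our own illustration, not the authors’ model: it assumes a toy strategy space in which 2% of strategies are “unethical” but carry a one-sigma advantage in raw return, and it estimates how often a naive optimizer that maximizes raw return alone ends up picking one of them. All names, fractions, and parameters here are hypothetical.

```python
import random

# Illustrative toy experiment (our construction, not the paper's model):
# a strategy space where a small fraction of strategies are "unethical" but
# enjoy an assumed edge in raw risk-adjusted return. Repeating the search
# many times shows how often a naive optimizer picks an unethical strategy.

random.seed(0)

N_STRATEGIES = 2_000        # size of the toy strategy space per run
UNETHICAL_FRACTION = 0.02   # assumed share of unethical strategies
UNETHICAL_EDGE = 1.0        # assumed one-sigma mean advantage in raw return
TRIALS = 500                # number of repeated searches

def naive_optimum_is_unethical():
    """Run one search that maximizes raw return only; report if the winner is unethical."""
    best_return, best_unethical = float("-inf"), False
    for _ in range(N_STRATEGIES):
        unethical = random.random() < UNETHICAL_FRACTION
        ret = random.gauss(UNETHICAL_EDGE if unethical else 0.0, 1.0)
        if ret > best_return:
            best_return, best_unethical = ret, unethical
    return best_unethical

hits = sum(naive_optimum_is_unethical() for _ in range(TRIALS))
print(f"Share of runs where the naive optimum is unethical: {hits / TRIALS:.1%}")
print(f"Share of unethical strategies in the space:         {UNETHICAL_FRACTION:.1%}")
```

With these assumed parameters the naive optimum tends to be unethical well above the 2% base rate, which is the sense in which optimization is “disproportionately likely” to pick such strategies unless the objective function accounts for the associated risk.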
The following researchers wrote the paper:
- Nicholas Beale of Sciteb Ltd;
- Heather Battey of the Department of Mathematics, Imperial College London;
- Anthony C. Davison of the Institute of Mathematics, Ecole Polytechnique Fédérale de Lausanne; and
- Professor Robert MacKay of the Mathematics Institute of the University of Warwick.
How can regulators and businesses reduce the risk of such actions by AI?
“An unethical optimization principle”
Mathematicians and statisticians from the University of Warwick, Imperial, EPFL, and Sciteb Ltd have created a new “Unethical Optimization Principle” and provided a simple formula to estimate its impact.
Professor Robert MacKay of the Mathematics Institute of the University of Warwick said: “Our suggested ‘Unethical Optimization Principle’ can be used to help regulators, compliance staff, and others to find problematic strategies that might be hidden in a large strategy space. Optimization can be expected to choose disproportionately many unethical strategies, an inspection of which should show where problems are likely to arise and thus suggest how the AI search algorithm should be modified to avoid them in future.”
“The Principle also suggests that it may be necessary to re-think the way AI operates in very large strategy spaces so that unethical outcomes are explicitly rejected in the optimization/learning process.”
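To illustrate what explicitly rejecting unethical outcomes in the optimization process might look like, here is a minimal sketch, again our own construction rather than the authors’ method: strategies flagged as unethical are filtered out before the objective is maximized, and the highest-scoring rejected candidates are set aside for human inspection, echoing the suggestion that reviewing them shows where problems are likely to arise. The strategy space, the 2% flag rate, and the helper `optimise_with_rejection` are all hypothetical.

```python
import random

# A second illustrative sketch (an assumption on our part, not the authors'
# algorithm): rather than hoping the objective penalises unethical behaviour,
# reject flagged strategies outright during optimization and surface the
# highest-scoring rejected candidates for compliance review.

random.seed(1)

# Hypothetical toy strategy space: (strategy_id, raw_return, unethical_flag).
strategies = [
    (i, random.gauss(1.0, 0.3), random.random() < 0.02)
    for i in range(10_000)
]

def optimise_with_rejection(space, top_k=5):
    admissible = [s for s in space if not s[2]]   # only ethical strategies
    flagged = [s for s in space if s[2]]          # rejected strategies
    best = max(admissible, key=lambda s: s[1])    # optimize over the rest
    # Highest-scoring rejected strategies, kept for human inspection.
    to_review = sorted(flagged, key=lambda s: s[1], reverse=True)[:top_k]
    return best, to_review

best, to_review = optimise_with_rejection(strategies)
print("Chosen (ethical) strategy id:", best[0])
print("Flagged for review:", [s[0] for s in to_review])
```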