Artificial Intelligence: OpenAI Says Superintelligence Inevitable, But Global Surveillance Needed

“Superintelligence will be more powerful than other technologies humanity has had to contend with in the past.”

An article by Sam Altman, Greg Brockman, and Ilya Sutskever published by OpenAI discusses the potential development of superintelligence in the next ten years and the need to manage the risks associated with it.

First, the authors argue that superintelligence could lead to a dramatically better world by addressing societal problems and enhancing human creativity, driving astonishing economic growth and an improved quality of life.

Secondly, they argue that stopping the creation of superintelligence would be difficult, given decreasing costs, a growing number of actors, and the intrinsic momentum of technological progress.

At the same time, superintelligence would surpass expert-level skill and carry out productive activity on a vast scale, bringing significant upsides as well as downsides.

Accordingly, the authors emphasize the importance of proactive risk management, drawing parallels with technologies like nuclear energy and synthetic biology that require special treatment.

They propose three key ideas for navigating the development of superintelligence.

Firstly, they advocate coordination among leading development efforts to ensure safety and smooth integration with society. This could involve governments setting up a joint project or developers collectively agreeing to limit the rate of capability growth. They also stress the need for responsible action by individual companies.

Secondly, they suggest establishing an international authority, similar to the International Atomic Energy Agency (IAEA), to regulate superintelligence efforts above a certain capability threshold. This authority would inspect systems, enforce safety standards, restrict degrees of deployment, and require appropriate levels of security. They suggest starting with voluntary implementation by companies and later extending it to individual countries.

Thirdly, the authors highlight the necessity of developing technical capabilities to ensure the safety of superintelligence.

“But the governance of the most powerful systems, as well as decisions regarding their deployment, must have strong public oversight,” write the authors. “We believe people around the world should democratically decide on the bounds and defaults for AI systems. We don’t yet know how to design such a mechanism, but we plan to experiment with its development.”


