Artificial Intelligence: OpenAI Says Superintelligence Inevitable, But Global Surveillance Needed

“Superintelligence will be more powerful than other technologies humanity has had to contend with in the past.”

An article by Sam Altman, Greg Brockman, and Ilya Sutskever published by OpenAI discusses the potential development of superintelligence in the next ten years and the need to manage the risks associated with it.

Firstly, the authors argue that superintelligence could lead to a better world by addressing societal problems and enhancing human creativity, producing astonishing economic growth and an improved quality of life.

Secondly, they argue that stopping the creation of superintelligence would be difficult, given the decreasing cost of development, the growing number of actors pursuing it, and the intrinsic momentum of technological progress.

At the same time, superintelligence would surpass expert-level skills and carry out extensive productive activity, bringing both significant upsides and serious downsides.

Accordingly, the authors emphasize the importance of proactive risk management, drawing parallels with technologies like nuclear energy and synthetic biology that require special treatment.

They propose three key ideas for navigating the development of superintelligence.

Firstly, they advocate for coordination among leading development efforts to ensure safety and integration with society. This could involve governments setting up projects or agreeing on growth rate limitations. They also stress the need for responsible actions by individual companies.

Secondly, they suggest the establishment of an international authority, similar to the International Atomic Energy Agency (IAEA), to regulate superintelligence efforts above a certain capability threshold. This authority would inspect systems, enforce safety standards, and place restrictions on degrees of deployment and levels of security. They suggest starting with voluntary implementation by companies and later extending the scheme to individual countries.

Thirdly, the authors highlight the necessity of developing technical capabilities to ensure the safety of superintelligence.

“But the governance of the most powerful systems, as well as decisions regarding their deployment, must have strong public oversight,” write the authors. “We believe people around the world should democratically decide on the bounds and defaults for AI systems. We don’t yet know how to design such a mechanism, but we plan to experiment with its development.”

Related Story: What Sam Altman Said To The Senate Panel

Image by Julius H. from Pixabay
