Artificial Intelligence: Massive AI Workloads Get A Lift With Nvidia’s New H200 GPU

“With NVIDIA H200, the industry’s leading end-to-end AI supercomputing platform just got faster to solve some of the world’s most important challenges.” – Ian Buck, vice president of hyperscale and HPC at NVIDIA.

NVIDIA (NASDAQ: NVDA) has unveiled the HGX™ H200, a powerful addition to its AI computing platform, based on the advanced NVIDIA Hopper™ architecture. Equipped with the groundbreaking H200 Tensor Core GPU, featuring HBM3e, the platform excels in handling vast data volumes for generative AI and high-performance computing tasks.

The H200 GPU introduces HBM3e, offering 141GB of memory at 4.8 terabytes per second — nearly double the capacity and 2.4x the bandwidth of the NVIDIA A100. Paired with NVIDIA NVLink™ and NVSwitch™, the new GPU delivers over 32 petaflops of FP8 deep learning compute and 1.1TB of aggregate high-bandwidth memory in an eight-way configuration.
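The headline ratios can be sanity-checked with quick arithmetic. A minimal sketch — assuming the comparison baseline is the 80GB A100 with roughly 2.0 TB/s of HBM2e bandwidth, figures not stated in the announcement itself:

```python
# Verify the quoted ratios. Baseline figures for the A100 80GB
# (memory size and ~2.0 TB/s bandwidth) are assumptions, not from
# the NVIDIA announcement.

H200_MEMORY_GB = 141
H200_BANDWIDTH_TBS = 4.8
A100_MEMORY_GB = 80        # assumed baseline: A100 80GB
A100_BANDWIDTH_TBS = 2.0   # assumed: ~2,039 GB/s HBM2e

capacity_ratio = H200_MEMORY_GB / A100_MEMORY_GB            # ~1.76x, "nearly double"
bandwidth_ratio = H200_BANDWIDTH_TBS / A100_BANDWIDTH_TBS   # 2.4x

# Aggregate HBM across an eight-way HGX configuration:
aggregate_tb = 8 * H200_MEMORY_GB / 1000                    # ~1.13 TB, the "1.1TB" quoted

print(f"capacity: {capacity_ratio:.2f}x, bandwidth: {bandwidth_ratio:.1f}x, "
      f"aggregate: {aggregate_tb:.2f} TB")
```

The 1.76x capacity jump and 2.4x bandwidth jump line up with the "nearly double" and "2.4x" claims, and eight 141GB GPUs give the ~1.1TB aggregate figure.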

The versatile H200 will be available in various form factors, including NVIDIA HGX H200 server boards and the NVIDIA GH200 Grace Hopper™ Superchip with HBM3e.

Expected to ship from the second quarter of 2024, H200-powered systems will be produced by leading server manufacturers and cloud service providers. NVIDIA's Hopper architecture has already delivered remarkable performance gains through ongoing software enhancements, including the recent release of open-source libraries such as NVIDIA TensorRT™-LLM. The H200 is set to build on this, promising nearly double the inference speed of the H100 on Llama 2, a 70 billion-parameter LLM.

The deployment options for H200 cover every data center type, from on-premises to cloud, hybrid-cloud, and edge. Renowned global partners such as ASRock Rack, ASUS, Dell Technologies, and others will integrate the H200 into their existing systems.

Notable cloud service providers like Amazon Web Services, Google Cloud, Microsoft Azure, and Oracle Cloud Infrastructure are set to offer H200-based instances, with CoreWeave, Lambda, and Vultr joining the lineup.

NVIDIA introduced the HGX H200 on Monday in a special presentation at SC23, a conference focusing on supercomputing, networks, and storage held in Denver.
