Chipmaker Nvidia Corp. has unveiled Blackwell, its next-generation artificial intelligence (AI) graphics processing unit (GPU), amid significant growth in worldwide demand for AI chips. The new chip is said to perform some tasks 30 times faster than its predecessor.

In pre-market activity on Nasdaq, Nvidia shares were down around 3 percent at $858.20.

At the GTC conference in San Jose, California, NVIDIA founder and CEO Jensen Huang announced the company's next-generation AI supercomputer, the NVIDIA DGX SuperPOD powered by NVIDIA GB200 Grace Blackwell Superchips, designed to process trillion-parameter models with constant uptime for superscale generative AI training and inference workloads.

Huang, while talking to more than 11,000 GTC attendees, said that the company created a processor for the generative AI era.

Huang introduced NVIDIA Blackwell as a massive upgrade to the world’s AI infrastructure. According to the company, the Blackwell platform will unleash real-time generative AI on trillion-parameter large language models.

The Blackwell platform, named for University of California, Berkeley mathematician David Harold Blackwell, succeeds the NVIDIA Hopper architecture, launched two years ago.

The NVIDIA GB200 Grace Blackwell Superchip connects two Blackwell NVIDIA B200 Tensor Core GPUs to the NVIDIA Grace CPU over a 900GB/s ultra-low-power NVLink chip-to-chip interconnect.

NVIDIA noted that Blackwell has already been endorsed by Alphabet and Google, Amazon, Dell, Google DeepMind, Meta, Microsoft, OpenAI, Oracle, Tesla, and xAI. Blackwell is being adopted by every major global cloud services provider, pioneering AI companies, system and server vendors, and regional cloud service providers and telcos around the world, it said.

Huang said, “Accelerated computing has reached the tipping point — general purpose computing has run out of steam. We need another way of doing computing — so that we can continue to scale so that we can continue to drive down the cost of computing, so that we can continue to consume more and more computing while being sustainable.

“Accelerated computing is a dramatic speedup over general-purpose computing, in every single industry.”

He also detailed a new set of software tools including NIM microservices, Omniverse Cloud APIs and more.

Huang presented NVIDIA NIM, short for NVIDIA inference microservices, a new way of packaging and delivering software that connects developers with hundreds of millions of GPUs to deploy custom AI of all kinds.
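NIM containers are generally described as exposing industry-standard, OpenAI-compatible inference endpoints. As an illustration only, a request to a hypothetical locally deployed NIM service might be assembled like this; the URL, port, and model name below are assumptions for the sketch, not details from the announcement:

```python
import json
import urllib.request

# Assumption: a NIM container is running locally and serves an
# OpenAI-compatible chat-completions endpoint. The host, port, and
# model name are illustrative placeholders, not from the article.
NIM_URL = "http://localhost:8000/v1/chat/completions"

def build_request(prompt: str, model: str = "meta/llama3-8b-instruct") -> dict:
    """Build an OpenAI-style chat-completion payload for a NIM endpoint."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 64,
    }

def send_request(payload: dict) -> dict:
    """POST the payload to the (assumed) local NIM endpoint."""
    req = urllib.request.Request(
        NIM_URL,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

# Build (but do not send) a sample request payload.
payload = build_request("Summarize the Blackwell announcement in one sentence.")
print(json.dumps(payload, indent=2))
```

Because the microservices follow a standard API shape, existing OpenAI-client tooling can typically be pointed at a NIM deployment by changing only the base URL.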

He also introduced Omniverse Cloud APIs to deliver advanced simulation capabilities.

“In the future, data centers are going to be thought of … as AI factories. Their goal in life is to generate revenues, in this case, intelligence,” Huang added.

In telecom, Huang announced the NVIDIA 6G Research Cloud, a generative AI and Omniverse-powered platform to advance the next communications era.

In semiconductor design and manufacturing, the company is bringing its breakthrough computational lithography platform, cuLitho, to production, in collaboration with TSMC and Synopsys.

Further, the NVIDIA Earth Climate Digital Twin cloud platform is now available, enabling interactive, high-resolution simulation to accelerate climate and weather prediction.

NVIDIA also launched more than two dozen new microservices that allow healthcare enterprises worldwide to take advantage of the latest advances in generative AI.

The company is also bringing Omniverse to Apple Vision Pro, with the new Omniverse Cloud APIs letting developers stream interactive industrial digital twins into the headset.

In robotics, Huang announced that electric vehicle maker BYD will build its next-generation EV fleets on DRIVE Thor, NVIDIA's next-generation autonomous-vehicle computer.

Nvidia Unveils Next-gen Blackwell AI Superchip

2024-03-19 13:18:43
