NVIDIA founder Jensen Huang delivered the opening keynote at the GTC conference in San Jose, USA, where NVIDIA unveiled its next-generation chip architecture, Blackwell.
According to reports, the Blackwell GPU is named after mathematician David Harold Blackwell and succeeds NVIDIA's earlier Hopper architecture. Blackwell GPUs contain 208 billion transistors and can support AI models with up to 10 trillion parameters. Beyond the chip itself, the architecture also adopts fifth-generation NVLink high-speed interconnect and a second-generation Transformer engine, an upgrade across the board. According to Jensen Huang, the new chip will become available later in 2024.
Huang said NVIDIA plans to bring Blackwell to artificial intelligence companies around the world, signing agreements with OEMs, regional cloud providers, national sovereign AI initiatives, and telecommunications companies globally.
Currently, Amazon, Dell, Google, Meta, Microsoft, OpenAI, and Tesla all plan to use Blackwell GPUs. Notably, the previously rumored B100 did not appear. Instead, NVIDIA released the GB200 superchip, which pairs one Grace CPU with two Blackwell GPUs. According to Huang, the GB200 delivers 6 times the compute of the H100, and up to 30 times for workloads in specific multimodal domains.
In addition, NVIDIA released the GB200 NVL72 server, which combines 36 Grace CPUs with 72 Blackwell GPUs.
NVIDIA also announced GR00T, a large general-purpose foundation model for robotics, along with Thor, a new computer designed for robots, with specific optimizations for performance, power consumption, and size.
NVIDIA additionally launched NIM (NVIDIA Inference Microservice), a new AI inference service that lets users customize AI models and applications in this form. (36Kr)