
Nvidia's flagship AI chip reportedly 4.5x faster than the previous champ

Hopping into the future

Upcoming “Hopper” GPU broke records in its MLPerf debut, according to Nvidia.

A press photo of the Nvidia H100 Tensor Core GPU.

Nvidia announced yesterday that its upcoming H100 “Hopper” Tensor Core GPU set new performance records in its debut in the industry-standard MLPerf benchmarks, delivering results around 4.5 times faster than the A100, which is currently Nvidia's fastest production AI chip.

The MLPerf benchmarks (technically called “MLPerf™ Inference 2.1”) measure “inference” workloads, which demonstrate how well a chip can apply a previously trained machine learning model to new data. A consortium of industry firms known as MLCommons developed the MLPerf benchmarks in 2018 to provide a standardized metric for conveying machine learning performance to potential customers.
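To make the training/inference distinction concrete, here is a toy sketch in Python. It is not MLPerf or a real model: the weights are invented for illustration, standing in for parameters fixed during an earlier training phase. Inference is simply applying those frozen parameters to inputs the model has never seen, and benchmarks like MLPerf measure how fast a chip can sustain such passes.

```python
import math

# Invented parameters standing in for a model whose training is finished.
TRAINED_WEIGHTS = [0.8, -0.4]
TRAINED_BIAS = 0.1

def sigmoid(x: float) -> float:
    """Squash a raw score into the range (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def infer(features: list[float]) -> float:
    """One inference pass: weighted sum plus bias, then sigmoid.

    No parameters are updated here -- that is the defining difference
    from a training workload, which adjusts the weights.
    """
    z = sum(w * f for w, f in zip(TRAINED_WEIGHTS, features)) + TRAINED_BIAS
    return sigmoid(z)

# "New data" the model was never trained on.
print(round(infer([2.0, 1.0]), 3))  # → 0.786
```

An inference benchmark would time thousands of such passes per second on batches of inputs; the arithmetic per pass is fixed, so throughput is dominated by the hardware.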

Nvidia's H100 benchmark results versus the A100, in fancy bar graph form.


Specifically, the H100 did well in the BERT-Large benchmark, which measures natural language processing performance using the BERT model developed by Google. Nvidia credits this particular lead to the Hopper architecture's Transformer Engine, which specifically accelerates transformer models. That means the H100 could accelerate future natural language models similar to OpenAI's GPT-3, which can compose written works in many different styles and hold conversational chats.

Nvidia positions the H100 as a high-end data center GPU chip designed for AI and supercomputer applications such as image recognition, large language models, image synthesis, and more. Analysts expect it to replace the A100 as Nvidia's flagship data center GPU, but it is still in development. US government restrictions imposed last week on exports of the chips to China raised fears that Nvidia might not be able to deliver the H100 by the end of 2022, since part of its development is taking place there.

Nvidia clarified in a second Securities and Exchange Commission filing last week that the US government will allow continued development of the H100 in China, so the project appears back on track for now. According to Nvidia, the H100 will be available "later this year." If the success of the previous generation's A100 chip is any indication, the H100 may power a large variety of groundbreaking AI applications in the years ahead.
