AI pushing Nvidia toward $1 trillion will not help Intel and AMD

Nvidia’s market value rose to nearly $1 trillion in extended trading on Wednesday after the company reported a strikingly strong outlook and CEO Jensen Huang said the company would have a “huge record year.”

Sales are climbing on surging demand for Nvidia’s graphics processing units (GPUs), which power artificial intelligence applications from companies such as Google, Microsoft and OpenAI.

Demand for AI chips in data centers prompted Nvidia to forecast revenue of $11 billion for the current quarter, beating analyst estimates of $7.15 billion.

“The focus has been generative AI,” Huang said in an interview with CNBC. “We know CPU scaling slowed down, we know accelerated computing is the way forward, and then the killer app showed up.”

Nvidia believes it is driving a significant shift in the way computers are built that could lead to even more growth — data center parts could even become a $1 trillion market, Huang says.

Historically, the most important part of a computer or server was the central processor, or CPU. That market was dominated by Intel, with AMD as its main competitor.

With the advent of AI applications that require a lot of computing power, the GPU is taking center stage, and the most advanced systems use up to eight GPUs per CPU. Nvidia currently dominates the AI GPU market.

“The data center of the past, which consisted largely of CPUs for file retrieval, will be generative data in the future,” Huang said. “Instead of retrieving data, you’re going to retrieve some data, but you need to generate most of the data using AI.”

“So instead of millions of CPUs, you’re going to have a lot fewer CPUs, but those are going to be connected to millions of GPUs,” he continued.

For example, Nvidia’s own DGX systems, which are essentially AI computers for training in a box, use eight of Nvidia’s high-end H100 GPUs and just two CPUs.

Google’s A3 supercomputer combines eight H100 GPUs with a single high-end Intel Xeon processor.

It’s one of the reasons Nvidia’s data center business grew 14% in the first calendar quarter, while AMD’s data center division saw flat growth and Intel’s AI and data center division declined 39%.

Nvidia’s GPUs also tend to be far more expensive than most CPUs. Intel’s latest generation of Xeon CPUs lists for up to $17,000, while a single Nvidia H100 can sell for $40,000 on the secondary market.

Nvidia will face increased competition as the AI chip market heats up. AMD has a competitive GPU business, particularly in gaming, and Intel has its own line of GPUs. Startups are building new types of chips specifically for AI, and mobile-focused companies like Qualcomm and Apple keep pushing the technology so that one day it may run in your pocket rather than in a giant server farm. Google and Amazon are developing their own AI chips.

But Nvidia’s high-end GPUs remain the chip of choice for companies building applications like ChatGPT, which are expensive to train on terabytes of data and expensive to run later in a process called “inference,” which uses the trained model to generate text or images or to make predictions.

Analysts say Nvidia continues to lead AI chips because of its proprietary software that makes it easier to leverage all GPU hardware capabilities for AI applications.

Huang said Wednesday that the company’s software is not easy to replicate.

“You have to develop all the software, all the libraries, all the algorithms, integrate them into the frameworks and optimize them for the architecture — not just for a single chip, but for the architecture of an entire data center,” Huang said on a call with analysts.
