
Meta, Microsoft, and Oracle Choose AMD’s MI300X Over Nvidia’s H100

AMD’s Challenge to Nvidia
New Instinct MI300

Amid the boom of generative artificial intelligence (AI), the importance of AI semiconductors is rapidly growing. In response, American semiconductor company AMD has revealed a new product to challenge the market leader, Nvidia.

Lisa Su, CEO of AMD. [Photo=iNews24 DB]

AMD announced on the 6th (local time) in San Francisco, California, that it will officially launch its latest AI chip line, the Instinct MI300 series. The series accelerates AI workloads in data centers and servers and consists of two products: the MI300X, a graphics processing unit (GPU), and the MI300A, which combines a central processing unit (CPU) and a GPU.

Performance Comparison with Nvidia

AMD said the MI300X offers 2.4 times the memory density and more than 1.6 times the memory bandwidth of Nvidia's H100 AI chip, and positioned the MI300A as suited to supercomputing.

Lisa Su, AMD’s CEO, demonstrated the H100 and Instinct MI300X at the event. She stated that their latest Instinct MI300X GPU is the world’s fastest AI chip. She further highlighted the increasing relevance of AI semiconductor performance, citing the growing demands of the cloud market for advanced servers and robust graphics capabilities.

Furthermore, AMD projected that this year's AI chip market will reach $45 billion, a 50% increase from the $30 billion it forecast in June, and that the market will grow to $400 billion by 2027. AMD also said it is confident its own AI chip sales will reach $2 billion next year.

Meta, OpenAI, Microsoft, and Oracle to Adopt the MI300X

Market watchers predict that AMD's Instinct MI300X could reshape the AI semiconductor market, which Nvidia currently dominates. Nvidia designed the H100 specifically for training large language models (LLMs), the foundation of generative AI, and big tech companies such as Google, Amazon, Meta, and Microsoft have been using it to develop generative AI tools. However, supply has struggled to meet demand, and the chip's steep price, between $25,000 and $40,000 per unit, has limited its accessibility.

At the event, Meta, OpenAI, Microsoft, and Oracle announced they would use the MI300X for some of their AI workloads. According to a recent report by market research firm Omdia, Meta and Microsoft were the biggest buyers of Nvidia's H100 this year. Although CEO Lisa Su did not specify the MI300X's price, she suggested that it would need to cost less than Nvidia's offerings to appeal to customers.

Meanwhile, market leader Nvidia plans to launch a new AI GPU, the H200, in the second quarter of next year. The industry expects it to deliver about 90% better performance than the existing H100.

CNBC analyzed that the decision of IT giants such as Oracle and Microsoft to enter supply agreements with AMD indicates these companies are feeling the strain of Nvidia's semiconductor pricing, adding that AMD is poised to become a formidable alternative if its semiconductors can sustain a certain performance threshold.

By Kwon Yong Sam

