AMD MI350 GPU: Specs & Roadmap | AMD News
Uncover AMD’s bold moves in the AI arena! The latest AMD Advancing AI event unveiled the MI350 series GPUs, promising an important leap in performance to rival competitors. These new GPUs deliver nearly four times better performance than the previous generation, with an edge in memory capacity. Explore how AMD is bolstering its networking capabilities with UltraEthernet support and UALink, crucial for large-scale AI clusters. Oracle’s deployment of a massive GPU cluster using AMD Instinct GPUs signals growing adoption. While Nvidia currently leads, AMD’s commitment to an annual accelerator roadmap, including ROCm 7.0 improvements, is clear. News Directory 3 has the inside scoop on the MI355’s remarkable specs, including a substantial 288GB of HBM3E memory. Discover what’s next for AMD’s MI400 series, including the upcoming challenges for the AI market.
AMD Advancing AI event: New GPUs, Networking, and Software Unveiled
AMD’s annual Advancing AI event in Silicon Valley highlighted the company’s latest advancements in GPUs, networking, and software. The event showcased AMD’s commitment to competing with Nvidia in the rapidly evolving artificial intelligence landscape, including a rack-scale architecture slated for 2026/27.

While acknowledging that AMD’s products are currently second to Nvidia, the company emphasized its commitment to an annual accelerator roadmap. The MI350 series GPUs deliver nearly four times better performance compared to the previous generation, possibly closing the gap with Nvidia in GPU performance. AMD also maintains an edge in memory capacity and bandwidth.
AMD is strengthening its networking capabilities with UltraEthernet support this year and UALink next year, facilitating both scale-out and scale-up architectures. The “Helios” rack-scale AI system, planned for 2026/27, represents a more direct challenge to Nvidia’s NVL72 and upcoming Kyber systems.
Oracle is deploying a 27,000 GPU cluster using AMD Instinct GPUs on its cloud infrastructure, indicating growing adoption of AMD’s solutions. AMD also introduced ROCm 7.0 and the AMD Developer Cloud Access Program to foster a larger AI ecosystem.
MI350 Series: AMD’s New AI Accelerators
The AMD Instinct GPU portfolio aims to offer compelling price/performance and openness. AMD claims its GPUs provide 40% more tokens per dollar, with adoption by seven of the top 10 AI companies and over 60 named customers.
A key advantage of the MI350 GPUs is their large memory footprint, featuring 288 GB of HBM3E memory. This capacity allows single-node handling of large models up to 520 billion parameters, surpassing the competition by 60% and potentially lowering the total cost of ownership. The MI350 also boasts twice the 64-bit floating point performance of Nvidia GPUs, making it suitable for HPC workloads.
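As a rough sanity check on that claim, the weight footprint of a large model can be estimated from its parameter count and storage precision. The sketch below assumes the model weights are stored in FP4 (0.5 bytes per parameter) and ignores KV cache and activation memory; these assumptions are ours, not figures from AMD:

```python
def weight_footprint_gb(params_billions: float, bytes_per_param: float) -> float:
    """Approximate memory needed for model weights alone, in GB."""
    return params_billions * 1e9 * bytes_per_param / 1e9

# A 520-billion-parameter model stored in FP4 (0.5 bytes/param, an assumption):
weights = weight_footprint_gb(520, 0.5)
print(weights)         # 260.0 GB of weights
print(weights <= 288)  # True: fits within one 288 GB GPU, before KV cache/activations
```

At FP8 (1 byte per parameter) the same model would need roughly 520 GB and spill beyond a single GPU, which is why the quoted precision matters for single-device claims like this one.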

The MI355, using the same silicon as the MI350, is optimized for higher clock speeds and temperatures, serving as AMD’s flagship data center GPU. Both GPUs are available on industry-standard UBB8 boards in air- and liquid-cooled versions.

According to MLPerf benchmarks, AMD claims the MI355 is approximately three times faster than the MI300 and on par with Nvidia’s B200 GPU. However, Nvidia maintains a leadership position in AI due to its NVLink, InfiniBand, system design, ecosystem, and software advantages. The B300 is expected to ship soon.

Future GPU Roadmap
AMD shared details about the upcoming MI400 series, with OpenAI CEO Sam Altman expressing strong support for the MI450. OpenAI has played a key role in defining market requirements for AMD’s engineering teams.

The MI400 will feature HBM4 memory at 432GB per GPU and support 300GB/s of UltraEthernet scale-out bandwidth through Pensando NICs.

The projected performance gains of the MI400 represent a significant step forward for AMD, though, like Nvidia’s comparable forward-looking claims, they remain projections rather than measured results.

Networking Enhancements
Beyond GPUs, the networking announcements at the AMD Advancing AI event were particularly noteworthy.

AMD is a founding member of the UALink consortium and will support UALink with the MI400 series. While this is promising, Nvidia is expected to ship NVLink 6.0 around the same time or earlier.

ROCm Improvements
The ROCm team has made significant progress, demonstrating improved performance and ecosystem adoption.

AMD showcased over three times the inference performance using ROCm 7.0. This improvement is partly due to advancements in the open AI software stack, such as OpenAI’s Triton compiler, which could challenge Nvidia’s dominance.

What’s next
AMD’s advancements in GPUs, networking, and software position the company to better compete with Nvidia in the AI market. The upcoming MI400 series and continued improvements to ROCm will be crucial in this ongoing competition.
