AMD hopes to make AI more accessible to developers and researchers by adding PyTorch support to its Radeon RX 7900 XTX and Radeon Pro W7900 graphics cards.
Based on the RDNA 3 GPU architecture, these are some of the best GPUs out there, and they can now let users set up a private and cost-effective workflow for machine learning training and inference. Previously, such AI workloads may have required users to rely on cloud access to compatible GPUs.
“We are excited to provide the AI community with new support for machine learning development using PyTorch, built on the AMD Radeon RX 7900 XTX and Radeon Pro W7900 GPUs and the ROCm open software platform,” said Dan Wood, VP of Radeon product management. “This is our first RDNA 3 architecture-based deployment and we look forward to working with the community.”
Get the most out of ROCm
Now that the PyTorch machine learning framework is supported on its most powerful consumer graphics cards, AMD hopes to open up AI workloads to users who don’t have the resources or infrastructure they would otherwise need.
Anyone looking to take advantage of PyTorch can also use the Radeon Open Compute (ROCm) software stack for GPUs, which supports general-purpose GPU computing, high-performance computing (HPC), and heterogeneous computing.
With AMD ROCm 5.7, users of machines with RDNA 3-based GPUs, as well as CDNA-based AMD Instinct MI-series accelerators, can also use PyTorch.
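In practice, ROCm builds of PyTorch expose AMD GPUs through the same `torch.cuda` API used for NVIDIA hardware, so existing device-selection code carries over unchanged. A minimal sketch of checking for a supported GPU and running a small operation on it (falling back to CPU if none is found) might look like this:

```python
import torch

# On ROCm builds of PyTorch, AMD GPUs (e.g. an RX 7900 XTX) appear
# through the torch.cuda namespace, so the usual CUDA-style device
# check works on both ROCm and CUDA systems.
device = "cuda" if torch.cuda.is_available() else "cpu"
print(f"Running on: {device}")

# A simple matrix multiply to confirm tensors can be created and
# computed on the selected device.
x = torch.randn(4, 4, device=device)
y = x @ x.T
print(y.shape)
```

This is only a basic availability check; real training and inference code would then move models and data to the same device with `.to(device)`.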
Because ROCm is open source, developers may want to go in a variety of different directions and add support for their own specific AI processing needs. For example, there is a huge appetite to get Stable Diffusion running on AMD Accelerated Processing Units (APUs).
In fact, one user turned a 4600G APU into a GPU with 16GB of VRAM that could run AI workloads, including Stable Diffusion, without too many problems, according to a video they posted on YouTube.