Today, we’re announcing a multi-year agreement with AMD to power our AI infrastructure with up to 6GW of AMD Instinct GPUs, the silicon computing technology used to support modern AI models.
At Meta, we’re working to build the next generation of AI and enable personal superintelligence for everyone. To do that, we need massive, scalable compute power that can handle the growing demands of our AI workloads. Our partnership with AMD, which builds on our existing collaboration, will help us meet these needs.
Working With an Industry Leader
Under our new agreement, we will also be working with AMD on alignment with our roadmaps across silicon, systems and software, enabling vertical integration across our infrastructure stack. This collaboration across both software and hardware will enable us to innovate quickly and at scale.
“We’re proud to expand our strategic partnership with Meta as they push the boundaries of AI at unprecedented scale,” said Dr. Lisa Su, chair and CEO, AMD. “This multi-year, multi-generation collaboration across Instinct GPUs, EPYC CPUs and rack-scale AI systems aligns our roadmaps to deliver high-performance, energy-efficient infrastructure optimized for Meta’s workloads, accelerating one of the industry’s largest AI deployments and placing AMD at the center of the global AI buildout.”
Shipments to support the first GPU deployments will begin in the second half of 2026, and will be built on the Helios rack-scale architecture, a rack that we developed and announced at last year’s Open Compute Project Global Summit in collaboration with AMD.
“We’re excited to form a long-term partnership with AMD to deploy efficient inference compute and deliver personal superintelligence,” said Mark Zuckerberg, Founder and CEO of Meta. “This is an important step for Meta as we diversify our compute. I expect AMD to be an important partner for many years to come.”
Our Portfolio-Based Approach
Our agreement with AMD is part of our Meta Compute initiative, an effort to massively scale our infrastructure for the era of personal superintelligence, future-proofing our leadership in AI. By diversifying our partnerships and technology stack, we’re building a more resilient and flexible infrastructure. We’re combining hardware sourced from a range of partners with our own rapidly advancing Meta Training and Inference Accelerator (MTIA) silicon program.
We believe this portfolio approach will enable us to advance and innovate at an unmatched pace, rolling out powerful, efficient new hardware co-designed with our software stack to handle massive growth. We look forward to working with AMD to power our AI innovations and secure our ability to deliver world-class AI experiences to billions of people globally.
This post contains forward-looking statements, including about Meta’s business. You should not rely on these statements as predictions of future events. More information regarding potential risks and uncertainties can be found in our most recent Form 10-K filed with the Securities and Exchange Commission. Meta undertakes no obligation to update these statements as a result of new information or future events.
