One telling sign of this commitment came from Broadcom CEO Hock Tan, who, during a recent earnings call, directly addressed analyst concerns. "Now contrary to recent analyst reports, Meta’s custom accelerator MTIA road map is alive and well," he stated, adding, "We’re shipping now." Broadcom contracts with Meta to assist with certain chip design elements, according to Reuters.
Optimizing for Inference and GenAI
The new MTIA roadmap emphasizes inference workloads, which involve running AI models to make predictions or recommendations. These tasks often have more predictable computational patterns compared to AI training, which is the process of building the models themselves and is currently dominated by Nvidia's powerful Graphics Processing Units (GPUs). This is a crucial distinction. Yee Jiun Song, Meta's Vice President of Engineering, explained to CNBC that custom chips allow Meta to "squeeze more price per performance" across its data center fleet.
Meta has already deployed its MTIA 300 chip, which is used for ranking and recommendation training within its systems. The upcoming MTIA 400, 450, and 500 chips are designed to handle a broader range of workloads, though Meta's blog post indicates the company will "primarily use these chips to support GenAI inference production in the near future and into 2027." For example, one Meta data center rack will incorporate 72 MTIA 400 chips optimized specifically for AI inference tasks.