MENLO PARK, California, February 2026 — Meta has unveiled four new custom AI chips as part of its ongoing push to reduce reliance on third-party hardware, intensifying competition with Nvidia and AMD in the high-performance AI accelerator market.
The chips, collectively dubbed the Meta Training and Inference Accelerator (MTIA) v2 family, include two training-focused variants (MTIA v2 High and MTIA v2 Mid) and two inference-optimized models (MTIA v2 Low and MTIA v2 Ultra-Low).
All four are built on TSMC’s 5nm process node and feature significant improvements in memory bandwidth, compute density, and power efficiency compared to Meta’s first-generation MTIA chips introduced in 2023.
Meta claims the new training chips deliver up to 3x better performance per watt than the previous generation, while the inference variants offer up to 4x higher throughput for recommendation and ranking workloads that power Facebook, Instagram, and WhatsApp feeds.
The company says the chips are already deployed at scale in its data centers and will support the next wave of Llama models and other internal AI systems.
CEO Mark Zuckerberg highlighted the program's strategic importance during the announcement: “Building our own chips gives us greater control over performance, cost, and efficiency as AI becomes central to everything we do.” Meta has invested billions in its in-house silicon effort to offset rising Nvidia GPU costs and secure supply chain independence.
The move escalates competition in the AI chip landscape. Nvidia dominates with its H100 and Blackwell series, while AMD’s Instinct MI300X and upcoming MI400 chips target similar workloads.
Meta joins Google (TPU), Amazon (Trainium/Inferentia), and Microsoft (Maia) in developing custom silicon optimized for their specific use cases—particularly large-scale inference and recommendation systems.
Analysts note that Meta’s chips are not intended for general sale but are tailored for its own massive-scale deployment. However, the announcement signals continued fragmentation in the AI hardware market, where hyperscalers increasingly bypass Nvidia for cost and performance reasons.
Meta did not disclose production volumes or exact power and performance specifications beyond relative generational improvements.