SANTA CLARA, California, January 22, 2026 — Nvidia and Meta have announced a collaboration to integrate Nvidia’s Grace CPU and Vera architecture into Meta’s next-generation AI training and inference infrastructure, aiming to improve efficiency and scale for large language models and recommendation systems.

The partnership focuses on combining Nvidia’s Grace CPU—based on Arm architecture—with the Vera platform, which emphasizes high-bandwidth memory and low-latency interconnects.

Meta will deploy the combined Grace-Vera configuration in its data centers to train and serve models more cost-effectively, particularly for recommendation algorithms powering Facebook, Instagram, and WhatsApp.

Nvidia CEO Jensen Huang described the collaboration as a step toward “open, efficient AI infrastructure.” “Meta’s scale and expertise, paired with Grace and Vera, will push the boundaries of what’s possible in production AI,” Huang said during a joint briefing at CES 2026.

Meta’s head of AI infrastructure, Santosh Janardhan, highlighted the benefits: “This integration allows us to optimize power consumption and throughput for our massive-scale workloads while maintaining flexibility in model development.”

The announcement builds on existing ties between the companies. Meta has been a major Nvidia customer for years, using H100 and Blackwell GPUs extensively. According to the companies, the Grace-Vera combination offers higher memory bandwidth and better energy efficiency than GPU-only clusters for certain training and inference phases.

No specific deployment timeline or capacity details were shared. The collaboration is part of Meta’s broader push to reduce reliance on third-party cloud providers and optimize in-house AI training costs, which have risen sharply with larger models.

The deal reflects a trend among hyperscalers to customize hardware stacks for AI workloads. Similar partnerships exist with AMD, Intel, and custom silicon efforts by Google and Amazon. Nvidia shares rose modestly following the news, while Meta’s stock remained flat in after-hours trading.
