NVIDIA and AWS Supercharge Their Alliance

At AWS re:Invent, NVIDIA and Amazon Web Services took their 15-year partnership and slammed the accelerator pedal straight into the future. The two companies unveiled a sweeping expansion of their full-stack collaboration, one that fuses NVIDIA’s bleeding-edge AI compute with AWS’s custom silicon and global cloud infrastructure: a unified, sovereign AI platform designed to power everything from trillion-parameter model training to physical robotics.

At the center of the announcement is NVIDIA NVLink Fusion, now being integrated directly into AWS’s homegrown silicon, including the next-generation Trainium4, Graviton CPUs and the AWS Nitro System. For the first time, AWS’s chips will plug into NVIDIA’s scale-up architecture and MGX rack design, giving AWS a massive performance jump while simplifying deployment across its cloud.

In plain words: AWS is wiring NVIDIA’s supercomputer backbone into its own hardware.

AWS is also designing Trainium4 from the ground up to work with NVLink and MGX, marking the start of a multigeneration roadmap between the two companies. AWS has already rolled out MGX racks powered by NVIDIA GPUs; now NVLink Fusion takes them to another level by opening the entire NVLink supplier ecosystem, from power systems to cooling, to AWS deployments.

NVIDIA CEO Jensen Huang summed it up bluntly: “The virtuous cycle of AI has arrived.” More compute produces smarter AI, which in turn drives more demand for compute, and AWS is preparing for that spiral by fusing its infrastructure directly with NVIDIA’s.

Blackwell Comes to AWS, and Sovereign AI Goes Global

AWS is expanding its accelerated computing offerings with NVIDIA’s latest Blackwell GPUs: HGX B300, GB300 NVL72 and, soon, the RTX PRO 6000 Blackwell Server Edition. These chips will power AWS AI Factories: dedicated, sovereign AI clouds operated by AWS but controlled by the customer.

Governments, national labs and regulated industries get the world’s strongest AI compute while keeping data locked inside local borders. This is where the partnership stops being just technological and becomes geopolitical: AWS and NVIDIA are positioning themselves as the global default for sovereign AI infrastructure.

NVIDIA Models and Software Go Deep Into the AWS Stack

On the software side, NVIDIA’s open Nemotron models are now available directly through Amazon Bedrock, giving developers instant access to highly efficient agentic and multimodal models without managing any infrastructure. Early adopters like CrowdStrike are already using them in production.
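For developers, “directly through Amazon Bedrock” means the Nemotron models can be reached with the same Bedrock Converse API used for any hosted model. A minimal sketch is below; note that the model ID shown is a hypothetical placeholder (check the Bedrock model catalog for the real identifier in your region), and the SDK import is deferred so the message-shaping helper works even without AWS credentials configured.

```python
def build_messages(prompt: str) -> list:
    """Shape a single user prompt into the Bedrock Converse API message format."""
    return [{"role": "user", "content": [{"text": prompt}]}]

def ask_nemotron(prompt: str, model_id: str = "nvidia.nemotron-example") -> str:
    """Send one conversational turn to a Bedrock-hosted model and return its reply.

    The default model_id above is a hypothetical placeholder, not a real
    catalog entry. Requires AWS credentials with Bedrock access.
    """
    import boto3  # deferred so the pure helpers above need no SDK installed

    client = boto3.client("bedrock-runtime")
    response = client.converse(
        modelId=model_id,
        messages=build_messages(prompt),
        inferenceConfig={"maxTokens": 512, "temperature": 0.2},
    )
    # Converse returns the assistant message as a list of content blocks.
    return response["output"]["message"]["content"][0]["text"]
```

Because Bedrock is serverless on the caller’s side, this is the entire integration: no model weights, GPUs, or inference servers to manage.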

AWS also launched serverless GPU-accelerated vector indexing in Amazon OpenSearch Service, powered by NVIDIA cuVS, delivering up to 10× faster indexing at 75% lower cost. AWS is the first major cloud to offer this capability.
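From the client side, the GPU-accelerated (cuVS-backed) indexing path is managed by the service, so a vector index is defined the same way as any OpenSearch k-NN index. The sketch below builds an illustrative index body; the field names and dimension are assumptions, not part of the announcement.

```python
import json

def knn_index_body(dim: int = 768) -> dict:
    """Settings and mappings for an OpenSearch index with one HNSW vector field.

    The field names ("embedding", "text") and the 768-dim default are
    illustrative choices for this sketch.
    """
    return {
        "settings": {"index": {"knn": True}},  # enable the k-NN plugin
        "mappings": {
            "properties": {
                "embedding": {
                    "type": "knn_vector",
                    "dimension": dim,
                    "method": {
                        "name": "hnsw",       # graph-based ANN index
                        "engine": "faiss",
                        "space_type": "l2",
                    },
                },
                "text": {"type": "text"},
            }
        },
    }

# The body would be handed to the create-index API, e.g. with opensearch-py:
#   client.indices.create(index="docs", body=knn_index_body())
print(json.dumps(knn_index_body(), indent=2))
```

The acceleration happens server-side at index-build time, which is why existing k-NN mappings like this one carry over unchanged.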

And for robotics and physical AI, NVIDIA’s Cosmos world foundation models and Isaac robotics stack are now integrated across Amazon EKS and AWS Batch, enabling companies to train, simulate and deploy robots at cloud scale.

The Bottom Line

AWS and NVIDIA are no longer just partners, they’re building the planetary-scale infrastructure that future AI models, agents and robots will depend on. This is the AI industrial revolution’s backbone, forged by two companies that know the next decade won’t be won by software alone, but by full-stack, sovereign, globally deployable compute.
