NVIDIA’s (NASDAQ: NVDA) invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world.

In 2006, the creation of the CUDA programming model and Tesla GPU platform opened up the parallel-processing capabilities of the GPU to general-purpose computing. A powerful new approach to computing was born.
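The general-purpose model that CUDA exposed is data parallelism: the same small routine (a "kernel") is executed independently by thousands of threads, each handling one element. As a rough illustration of that model only (plain sequential Python, not actual CUDA code), a SAXPY-style kernel can be sketched like this:

```python
# Illustrative sketch of the data-parallel pattern CUDA exposes.
# Each logical "thread" handles one index i; on a GPU, thousands of
# these run concurrently. This Python version runs them sequentially
# and is meant only to show the programming model.

def saxpy_kernel(i, a, x, y, out):
    """Body executed by one logical thread for element i: out = a*x + y."""
    out[i] = a * x[i] + y[i]

def launch(n, a, x, y):
    """Stand-in for a kernel launch over n threads."""
    out = [0.0] * n
    for i in range(n):  # on a GPU these iterations execute in parallel
        saxpy_kernel(i, a, x, y, out)
    return out

result = launch(4, 2.0, [1.0, 2.0, 3.0, 4.0], [10.0, 10.0, 10.0, 10.0])
# result == [12.0, 14.0, 16.0, 18.0]
```

Because every element is computed independently, the same kernel scales from 4 elements to millions without changing its logic, which is what makes the pattern a natural fit for GPU hardware.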

High-Speed Ethernet

We offer cloud-scale, efficient switches for data centers of all sizes, with an operational model built for automation, best-in-class performance, and hardware-accelerated network visibility.

InfiniBand for HPC

The seventh generation of the NVIDIA InfiniBand architecture, featuring NDR 400Gb/s, gives AI developers and scientific researchers the fastest networking performance available to take on the world’s most challenging problems.

Data Center Products and Solutions

Accelerating Data Center Workloads with GPUs

From scientific discoveries to artificial intelligence, modern data centers are key to solving some of the world’s most important challenges. The NVIDIA Volta accelerated computing platform gives these data centers the power to accelerate both artificial intelligence and high-performance computing workloads.

Data Center Products

  • NVIDIA® Tesla®

NVIDIA® Tesla®: The world’s leading platform for the accelerated data center

Accelerating scientific discovery, visualizing big data for insights, and providing smart AI-based services to the enterprise are everyday challenges for researchers and engineers. Solving these challenges takes increasingly complex and precise simulations, the processing of tremendous amounts of data, or training and running sophisticated deep learning networks. These workloads also require accelerated data centers to meet exponentially growing demand for compute.

NVIDIA® Tesla® is the world’s leading platform for the accelerated data center, deployed by the largest supercomputing centers and enterprises. It enables breakthrough performance with fewer, more powerful servers, resulting in faster scientific discoveries and insights while saving money.

NVIDIA Tesla is also the world’s fastest, most efficient data center platform for inference. Tesla provides the optimal inference solution, combining the highest throughput, best efficiency, and greatest flexibility to power AI-driven experiences.

With more than 550 GPU-optimized HPC applications spanning a broad range of domains, including all 15 of the top 15 HPC applications, plus support for every major deep learning framework, virtually any modern data center can save money with the Tesla platform.

Given the increasingly broad spectrum of data center applications, the NVIDIA Tesla data center platform offers products for virtually every data center need, including:

  • NVIDIA Tesla V100 for NVIDIA® NVLink™
  • NVIDIA Tesla V100 for PCIe
Tesla V100 GPU

At the 2018 GPU Technology Conference (GTC18), NVIDIA announced that the memory capacity of the NVIDIA® Tesla® V100 GPU, widely adopted by the world’s leading researchers, had been doubled to handle the most memory-intensive deep learning and high-performance computing workloads.

Now equipped with 32GB of memory, Tesla V100 GPUs will help data scientists train deeper and larger deep learning models that are more accurate than ever. They can also improve the performance of memory-constrained HPC applications by up to 50 percent compared with the previous 16GB version.

NVIDIA Tesla V100 for NVLink
NVIDIA Tesla V100 for PCIe

NVIDIA HGX-2

From autonomous vehicles to global climate simulations, new challenges are emerging that demand enormous computing resources to solve. NVIDIA HGX-2 is designed for multi-precision computing, providing a single flexible, powerful platform to take on these massive challenges.

The HGX-2 multi-precision computing platform supports high-precision calculations for scientific computing and simulation alongside fast, lower-precision calculations for AI training and inference.
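The precision trade-off behind multi-precision computing comes down to the IEEE floating-point formats involved: FP64 preserves the range and accuracy scientific simulation needs, while FP16 trades both away for speed and memory savings, which deep learning usually tolerates. A small NumPy sketch (a generic illustration of the formats, not HGX-2 code) makes the difference concrete:

```python
import numpy as np

# Range: FP16 (half) tops out near 65504, so larger magnitudes
# overflow to infinity; FP64 (double) represents them easily.
print(np.float16(1e5))   # inf  (overflows half precision)
print(np.float64(1e5))   # 100000.0

# Precision: near 1.0, FP16 steps are about 1e-3, so a 1e-4
# increment is rounded away; FP64 preserves it.
x = 1.0 + 1e-4
print(np.float16(x) == np.float16(1.0))  # True  (increment lost)
print(np.float64(x) == np.float64(1.0))  # False (increment kept)
```

This is why HPC codes stay in FP64 while training pipelines can drop to FP16: the lower-precision format moves and processes four times as many values per byte of memory bandwidth, at the cost of range and resolution.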

  • Enables “The World’s Largest GPU.” Accelerated by 16 NVIDIA® Tesla® V100 GPUs and NVIDIA NVSwitch™, HGX-2 has the unprecedented compute power, bandwidth, and memory topology to train large AI models faster and more efficiently. The 16 Tesla V100 GPUs work as a single unified 2-petaFLOP accelerator with half a terabyte (TB) of total GPU memory, allowing it to handle the most computationally intensive workloads.
  • Driving Next-Generation AI to Faster Performance. A single HGX-2 replaces 300 CPU-powered servers, saving significant cost, space, and energy in the data center.
  • The Highest-Performing HPC Supernode. HPC applications require strong server nodes with the computing power to perform a massive number of calculations per second. Increasing the compute density of each node dramatically reduces the number of servers required, resulting in huge savings in cost, power, and space consumed in the data center.
  • NVSwitch for Full-Bandwidth Computing. NVSwitch enables every GPU to communicate with every other GPU at a full bandwidth of 2.4TB/s to solve the largest AI and HPC problems. To learn more about HGX-2, download the NVIDIA HGX-2 Data Sheet.

AI is Reinventing the Way We Invent

MIT Technology Review | February 15, 2019 | By David Rotman

The biggest impact of artificial intelligence will be to help humans make discoveries we couldn’t make on our own. Regina…