NVIDIA’s (NASDAQ: NVDA) invention of the GPU in 1999 sparked the growth of the PC gaming market, redefined modern computer graphics, and revolutionized parallel computing. More recently, GPU deep learning ignited modern AI — the next era of computing — with the GPU acting as the brain of computers, robots, and self-driving cars that can perceive and understand the world.
In 2006, the creation of the CUDA programming model and Tesla GPU platform opened up the parallel-processing capabilities of the GPU to general-purpose computing. A powerful new approach to computing was born.
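The CUDA model referenced above exposes the GPU's parallelism by mapping a computation onto many lightweight threads. As a minimal illustrative sketch (not taken from this document; all names here are our own), a kernel that adds two vectors with one GPU thread per element:

```cuda
// Illustrative CUDA example: parallel vector addition, one thread per element.
#include <cstdio>
#include <cassert>
#include <cuda_runtime.h>

// Each thread computes its own global index and handles one element.
__global__ void vecAdd(const float* a, const float* b, float* c, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) c[i] = a[i] + b[i];
}

// Runs the kernel end to end; returns true if every element is correct.
bool runVecAdd() {
    const int n = 1 << 20;
    const size_t bytes = n * sizeof(float);
    float *a, *b, *c;
    // Unified (managed) memory is accessible from both CPU and GPU.
    cudaMallocManaged(&a, bytes);
    cudaMallocManaged(&b, bytes);
    cudaMallocManaged(&c, bytes);
    for (int i = 0; i < n; ++i) { a[i] = 1.0f; b[i] = 2.0f; }

    const int threads = 256;
    const int blocks = (n + threads - 1) / threads;  // ceil(n / threads)
    vecAdd<<<blocks, threads>>>(a, b, c, n);
    cudaDeviceSynchronize();  // wait for the GPU to finish

    bool ok = true;
    for (int i = 0; i < n; ++i) ok = ok && (c[i] == 3.0f);
    cudaFree(a); cudaFree(b); cudaFree(c);
    return ok;
}

int main() {
    assert(runVecAdd());
    printf("ok\n");
    return 0;
}
```

The launch configuration (`<<<blocks, threads>>>`) is what lets the same scalar-looking code run across thousands of GPU cores at once, which is the general-purpose capability the CUDA platform opened up.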
The world’s leading platform for the accelerated data center
Accelerating scientific discovery, visualizing big data for insights, and providing smart AI-based services to the enterprise are everyday challenges for researchers and engineers. Solving these challenges takes increasingly complex and precise simulations, the processing of tremendous amounts of data, or the training and running of sophisticated deep learning networks. These workloads also require accelerated data centers to meet exponentially growing computing demand.
NVIDIA® Tesla® is the world’s leading platform for the accelerated data center, deployed by the largest supercomputing centers and enterprises. It enables breakthrough performance with fewer, more powerful servers, resulting in faster scientific discoveries and insights while saving money.
NVIDIA Tesla is also the world’s fastest, most efficient data center platform for inference. Tesla provides the optimal inference solution, combining high throughput, efficiency, and flexibility to power AI-driven experiences.
With over 550 GPU-optimized HPC applications spanning a broad range of domains, including all of the top 15 HPC applications and every major deep learning framework, any modern data center can save money with the Tesla platform.
Given the increasingly broad spectrum of data center applications, NVIDIA has gone to great lengths to offer the most appropriate product for each deployment. The NVIDIA Tesla data center platform therefore features products for virtually every data center need, including:
- NVIDIA Tesla V100 for NVIDIA® NVLink™
- NVIDIA Tesla V100 for PCIe
- NVIDIA Tesla P4
- NVIDIA Tesla P40
Tesla V100 GPU
At the 2018 GPU Technology Conference (GTC 2018), NVIDIA announced that the memory capacity of the NVIDIA® Tesla® V100 GPU, widely adopted by the world’s leading researchers, had been doubled to handle the most memory-intensive deep learning and high performance computing workloads.
Now equipped with 32GB of memory, Tesla V100 GPUs will help data scientists train deeper and larger deep learning models that are more accurate than ever. They can also improve the performance of memory-constrained HPC applications by up to 50 percent compared with the previous 16GB version.
NVIDIA Tesla V100 for NVLink
NVIDIA Tesla V100 for PCIe
NVIDIA Tesla P4
The Tesla P4 is powered by the NVIDIA Pascal™ architecture and purpose-built to boost efficiency for scale-out servers running deep learning workloads, enabling smart, responsive AI-based services. It cuts inference latency by 15X in any hyperscale infrastructure and delivers a remarkable 60X better energy efficiency than traditional CPUs, unlocking a new wave of AI services previously impossible due to latency limitations.
NVIDIA Tesla P40
The NVIDIA Tesla P40 is purpose-built to deliver maximum throughput for deep learning deployment. With 47 TOPS of INT8 inference performance per GPU, a single server with eight Tesla P40s delivers the performance of over 140 traditional CPU servers. As models grow in accuracy and complexity, CPUs can no longer deliver an interactive user experience; the Tesla P40 provides over 30X lower latency than a CPU for real-time responsiveness on even the most complex models. Combined with INT8 precision and 24GB of onboard memory, it delivers an outstanding user experience.