NVIDIA Quantum-2 CS9500 Modular Switch Series

Deliver Unparalleled Data Throughput and Density with 400G InfiniBand

As artificial intelligence (AI) and increasingly complex applications demand faster, smarter, and more scalable networks, NVIDIA 400 gigabits per second (400Gb/s) InfiniBand provides the fastest networking solution available, offered on the world’s only fully offloadable in-network computing platform.

NVIDIA 400Gb/s InfiniBand’s massive throughput, smart acceleration engines, flexibility, and robust architecture let HPC, AI, and hyperscale cloud infrastructures achieve unmatched performance, with less cost and complexity. Providing up to 2,048 400Gb/s InfiniBand ports, the high-density NVIDIA CS9500 modular switches enable an extremely high-performance, low-latency fabric solution for exascale computing and hyperscale cloud data centers.

The NVIDIA Quantum-2 CS9500 modular switch offers 2.5X the port density of the previous switch generation while boosting AI acceleration by 32X. It also delivers 5X the aggregated bidirectional throughput of the previous-generation modular switch system, reaching 1.64 petabits per second and enabling users to run larger workloads with fewer constraints.

The Era of Data-Driven Computing

Complex workloads demand ultra-fast processing of high-resolution simulations, extreme-size datasets, and complex, highly parallelized algorithms that require real-time information exchange. As the highest-performing fabric solution in a 29U form factor, the CS9500 delivers 1600 terabits per second (Tb/s) of full bidirectional bandwidth with ultra-low port latency. The CS9500 modular switches provide the highest scalability for large-scale data aggregation across the network, along with the highest application performance for complex computations as data moves through the data center network.


Highlights

Switch System Options

  • 2048 400Gb/s InfiniBand ports or 4096 200Gb/s InfiniBand ports delivering 1600 Tb/s total throughput
  • 1024 400Gb/s InfiniBand ports or 2048 200Gb/s InfiniBand ports delivering 800 Tb/s total throughput
  • 512 400Gb/s InfiniBand ports or 1024 200Gb/s InfiniBand ports delivering 400 Tb/s total throughput (see the throughput sketch below)
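
As a quick sanity check on these figures, the following sketch (illustrative only; the port counts and per-port speeds come from the list above, the arithmetic is ours) shows how aggregate bidirectional throughput is derived and how it rounds to the quoted totals.

```python
# Illustrative check of the CS9500 throughput figures listed above.
# Aggregate bidirectional throughput = ports x per-port speed x 2 directions.

CONFIGS = [
    (2048, 400),  # 2048 ports at 400 Gb/s
    (1024, 400),  # 1024 ports at 400 Gb/s
    (512, 400),   #  512 ports at 400 Gb/s
]

for ports, gbps in CONFIGS:
    total_tbps = ports * gbps * 2 / 1000  # Gb/s -> Tb/s
    # Prints 1638.4, 819.2, and 409.6 Tb/s, which the datasheet rounds to
    # 1600, 800, and 400 Tb/s (and 1.64 Pb/s for the largest configuration).
    print(f"{ports} ports x {gbps} Gb/s x 2 = {total_tbps} Tb/s bidirectional")
```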

Cooling

Liquid-cooled
  • Liquid-to-air heat exchanger (AHX)
  • Liquid-to-liquid coolant distribution unit (CDU)

World-Class InfiniBand Performance

In-Network Computing

NVIDIA Mellanox Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™ In-Network Computing technology offloads collective communication operations from the CPU to the switch network, improving application performance by an order of magnitude.
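
To make the offload concrete, here is a minimal sketch of the kind of collective SHARP accelerates: an MPI allreduce written with mpi4py. The application code is ordinary MPI; whether the reduction is actually performed in the switch fabric depends on the cluster's MPI/SHARP configuration, which is assumed rather than shown here.

```python
# Minimal sketch: a collective reduction that SHARP can offload to the switches.
# The code is plain MPI; on a SHARP-enabled fabric the aggregation happens
# in-network, transparently to the application.
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank contributes a local, gradient-like buffer.
local = np.full(1024, float(rank), dtype=np.float64)
result = np.empty_like(local)

# Sum-allreduce across all ranks: the operation performed in-network by SHARP.
comm.Allreduce(local, result, op=MPI.SUM)

if rank == 0:
    print("per-element sum of rank IDs:", result[0])
```

Run with, for example, `mpirun -np 4 python allreduce_sketch.py`; the same call works with or without SHARP, with the offload only changing where the reduction is computed.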

Self-Healing Computing

NVIDIA Mellanox InfiniBand with self-healing networking capabilities overcomes link failures and achieves network recovery 5,000X faster than any software-based solution—enhancing system performance, scalability, and network utilization.

UFM Management

NVIDIA Mellanox Unified Fabric Management (UFM®) platforms combine enhanced, real-time network telemetry with AI-powered cyber intelligence and analytics to realize higher utilization of fabric resources and a competitive advantage, while reducing OPEX.
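
For programmatic access, UFM exposes a REST API; the sketch below queries it for the systems in the fabric. The host name, credentials, and endpoint path are assumptions for illustration, and the returned field names may differ by UFM release, so treat this as a starting point rather than a reference.

```python
# Hypothetical sketch: list managed systems from a UFM server's REST API.
# Host, credentials, and endpoint path are assumptions; adjust for your deployment.
import requests

UFM_HOST = "https://ufm.example.com"       # assumed UFM appliance address
ENDPOINT = "/ufmRest/resources/systems"    # assumed resource path
AUTH = ("admin", "password")               # replace with real credentials

resp = requests.get(UFM_HOST + ENDPOINT, auth=AUTH, verify=False, timeout=10)
resp.raise_for_status()

for system in resp.json():
    # Field names vary by UFM release; these are illustrative.
    print(system.get("system_name"), system.get("state"))
```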

Key Features

  • Three switch versions to choose from based on the number of ports: 2,048, 1,024, or 512
  • Flexible pay-as-you-grow design enables data center infrastructures to start small and scale according to need
  • Single point of management for each switch

Resources