NVIDIA CS8500 InfiniBand Switch Series

Quantum HDR 200Gb/s InfiniBand Smart Modular Switches

Today’s rapid data growth and real-time data processing are fueling the demand for faster, more efficient interconnect solutions. Delivering the modularity and high availability required for mission-critical application environments, the high-density NVIDIA CS8500 switch provides up to 800 high data rate (HDR) 200Gb/s ports, enabling an extremely high-performance, low-latency fabric solution for high-performance computing (HPC), AI, cloud, and hyperscale data center infrastructures.

World-Class Design

The CS8500 is an InfiniBand modular switch designed for performance, serviceability, energy savings, and high availability. The leaf blades, spine blades, management modules, and power supply units (PSUs) are all hot-swappable to help eliminate downtime. The CS8500 is an eco-friendly solution cooled solely by a closed-loop liquid-cooling system, reducing noise levels and IT OPEX. It ships with either a coolant distribution unit (CDU) or an air heat exchanger (AHX) to best fit any data center.

CS8500 InfiniBand Modular Switch

Highlights

LINK SPEED

200Gb/s

NUMBER OF PORTS

800 HDR /
1,600* HDR100

MAX. THROUGHPUT

320Tb/s

SWITCH SIZE

29U

POWER CONSUMPTION (ATIS)

12.3kW (CDU)
13.3kW (AHX)

*When used in conjunction with NVIDIA® Mellanox® ConnectX®-6 adapter cards and splitter cables, the CS8500 can achieve up to 1,600 ports of HDR100 100Gb/s, as each HDR 200Gb/s port can operate as two HDR100 100Gb/s ports.
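
To see which rate a given host link negotiated, a minimal Python sketch can parse the output of the standard ibstat utility (from the infiniband-diags package, assumed installed on an attached host). Note that a reported rate of 100 alone does not distinguish a 2X HDR100 link from a 4X EDR link.

```python
"""Minimal sketch: report the negotiated rate of local InfiniBand links.
Assumes an attached host with the standard `ibstat` utility from the
infiniband-diags package; parsing follows its usual "CA"/"Port"/"Rate"
output layout."""
import re
import subprocess

def port_rates() -> dict:
    """Return {'<ca>/<port>': rate_in_gbps} parsed from `ibstat` output."""
    out = subprocess.run(["ibstat"], capture_output=True,
                         text=True, check=True).stdout
    rates, ca, port = {}, None, None
    for raw in out.splitlines():
        line = raw.strip()
        if m := re.match(r"CA '(\S+)'", line):
            ca = m.group(1)
        elif m := re.match(r"Port (\d+):", line):
            port = m.group(1)
        elif m := re.match(r"Rate: (\d+)", line):
            rates[f"{ca}/{port}"] = int(m.group(1))
    return rates

if __name__ == "__main__":
    for name, rate in port_rates().items():
        print(f"{name}: {rate} Gb/s")
```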

World-Class InfiniBand Performance

In-Network Computing

NVIDIA Mellanox InfiniBand Scalable Hierarchical Aggregation and Reduction Protocol (SHARP)™ In-Network Computing offloads collective communication operations from the CPU to the switch network, improving application performance by an order of magnitude.
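
SHARP is transparent at the application layer: the switch network offloads standard collectives such as the MPI allreduce in the minimal mpi4py sketch below. The sketch assumes mpi4py and NumPy are installed on an MPI stack built for the InfiniBand fabric; enabling SHARP itself is a fabric and job-launcher configuration step (for example, through the HPC-X software stack), not an application code change.

```python
"""Illustrative mpi4py sketch of the collective that SHARP offloads.
With SHARP enabled in the fabric, the reduction below runs in the
switch ASICs instead of being staged through host CPUs."""
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
# Each rank contributes a gradient-sized local buffer (4 MiB of floats).
local = np.full(1 << 20, comm.Get_rank(), dtype=np.float32)
result = np.empty_like(local)
# Allreduce is the collective operation SHARP executes in-network.
comm.Allreduce(local, result, op=MPI.SUM)
if comm.Get_rank() == 0:
    print(f"sum across {comm.Get_size()} ranks:", result[0])
```

Launched with, for example, mpirun -np 8 python allreduce.py, the same code benefits from SHARP whenever the fabric has it enabled.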

Self-Healing Networking

NVIDIA Mellanox InfiniBand with self-healing networking capabilities overcomes link failures and achieves network recovery 5,000X faster than any software-based solution, enhancing system performance, scalability, and network utilization.

UFM Management

NVIDIA Mellanox Unified Fabric Management (UFM®) platforms combine enhanced, real-time network telemetry with AI-powered cyber intelligence and analytics, realizing higher utilization of fabric resources and a competitive advantage while reducing OPEX.
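
As an illustration of the kind of telemetry access UFM exposes, the sketch below polls a UFM server's REST API for port inventory. The endpoint path, payload keys, and credentials shown are assumptions for illustration only; the UFM REST API reference documents the exact resources for a given release.

```python
"""Hedged sketch: polling fabric port inventory from a UFM server over
its REST API. The /ufmRest/resources/ports path, payload keys, and
basic-auth credentials are illustrative assumptions."""
import requests

UFM_HOST = "ufm.example.com"   # hypothetical UFM server address
AUTH = ("admin", "password")   # placeholder credentials

def list_ports():
    # verify=False only because UFM appliances often ship self-signed certs.
    resp = requests.get(f"https://{UFM_HOST}/ufmRest/resources/ports",
                        auth=AUTH, verify=False, timeout=10)
    resp.raise_for_status()
    return resp.json()

if __name__ == "__main__":
    for port in list_ports():
        # Key names are illustrative; real payloads may differ.
        print(port.get("name"), port.get("logical_state"))
```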

Key Features

  • Up to 800 HDR 200Gb/s ports in a 29U switch
  • Up to 1,600 HDR100 100Gb/s ports
  • Up to 320Tb/s switching capacity
  • Ultra-low latency
  • SHARP In-Network Computing offloads
  • Self-healing networking
  • InfiniBand Trade Association (IBTA) specifications 1.4, 1.3, and 1.2.1 compliant
  • Quality-of-service enforcement
  • N+N power supply
  • Integrated subnet manager agent (up to 2,000 nodes)
  • Fast and efficient fabric bringup
  • Comprehensive chassis management
  • Intuitive command-line interface (CLI) and graphical user interface (GUI) for easy access
  • Can be enhanced with Mellanox UFM
  • Temperature sensors and voltage monitors

Benefits

  • High ROI—energy efficiency, cost savings, and scalable high performance
  • High-performance fabric for parallel computation or input/output (I/O) convergence
  • Modular scalability up to 800 ports
  • High-bandwidth, low-latency fabric for compute-intensive applications
  • Quick and easy setup and management
  • Maximizes performance by removing fabric congestion
  • Fabric management for cluster and converged I/O applications

CS8500 Series Specifications

Resources