January 28, 2019
HPCwire | Sponsored Content by Mellanox Technologies
Technical computing systems for both High Performance Computing (HPC) and Artificial Intelligence (AI) applications leverage InfiniBand for multiple reasons. Beyond the fast data throughput, ultra-low latency, ease of deployment and cost-performance advantages of the latest HDR 200Gb/s InfiniBand technology, a range of capabilities make it the best interconnect solution for compute-intensive HPC and AI workloads.
Across generations of InfiniBand technology, its proven performance and application scalability in the world’s largest deployments are undisputed. InfiniBand is also a standards-based interconnect, ensuring backward compatibility as well as future compatibility across generations. Moreover, recent developments in In-Network Computing capabilities have placed InfiniBand at the forefront of pre-Exascale and Exascale systems.
Furthermore, advanced CPU-offload acceleration engines, GPUDirect™ RDMA and support for a wide range of topologies, including the enhanced Dragonfly topology that debuted at the University of Toronto in 2017, are all driving factors behind why leading research centers and industry deployments keep choosing InfiniBand.
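To give a rough sense of the CPU-offload model mentioned above, the short C sketch below (an assumed, illustrative example rather than anything from Mellanox) uses the standard libibverbs API to enumerate InfiniBand adapters and query their attributes. In a real application, work requests posted through this same API are executed entirely by the HCA hardware, leaving the host CPU free for computation.

/* Illustrative sketch: enumerate InfiniBand devices via libibverbs.
 * Build (typical, assumed setup): gcc query_ib.c -o query_ib -libverbs */
#include <stdio.h>
#include <infiniband/verbs.h>

int main(void)
{
    int num_devices = 0;
    struct ibv_device **dev_list = ibv_get_device_list(&num_devices);
    if (!dev_list || num_devices == 0) {
        fprintf(stderr, "No InfiniBand devices found\n");
        return 1;
    }

    for (int i = 0; i < num_devices; ++i) {
        struct ibv_context *ctx = ibv_open_device(dev_list[i]);
        if (!ctx)
            continue;

        struct ibv_device_attr attr;
        if (ibv_query_device(ctx, &attr) == 0) {
            /* Queue pair and completion queue limits hint at how much
             * communication state the adapter itself manages, i.e. work
             * the host CPU never has to touch once it is posted. */
            printf("%s: max QPs=%d, max CQs=%d\n",
                   ibv_get_device_name(dev_list[i]),
                   attr.max_qp, attr.max_cq);
        }
        ibv_close_device(ctx);
    }

    ibv_free_device_list(dev_list);
    return 0;
}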