Mellanox InfiniBand/VPI Adapter Cards
 
 

Mellanox InfiniBand Adapters Provide Advanced Levels of Data Center IT Performance, Efficiency and Scalability

Mellanox continues its leadership in InfiniBand Host Channel Adapters (HCAs), the highest-performing interconnect solution for Enterprise Data Centers, Web 2.0, Cloud Computing, High-Performance Computing, and embedded environments.

 

Mellanox InfiniBand Host Channel Adapters (HCA)

Mellanox InfiniBand Host Channel Adapters (HCAs) provide the highest-performing interconnect solution for Enterprise Data Centers, Web 2.0, Cloud Computing, High-Performance Computing, and embedded environments. Clustered databases, parallelized applications, transactional services, and high-performance embedded I/O applications achieve significant performance improvements, resulting in reduced completion time and lower cost per operation.

 

World-Class Performance

Mellanox InfiniBand adapters deliver industry-leading bandwidth with ultra-low latency and efficient computing for performance-driven server and storage clustering applications. Network protocol processing and data movement, such as RDMA and Send/Receive semantics, are completed in the adapter without CPU intervention. Application acceleration with CORE-Direct™ and GPU communication acceleration bring further performance improvements. The adapters' advanced acceleration technology enables higher cluster efficiency and scalability to tens of thousands of nodes.

 

Most Efficient Clouds

Mellanox adapters are a major component of the Mellanox CloudX architecture. Using Virtual Intelligent Queuing (Virtual-IQ) technology with SR-IOV, the adapters provide dedicated resources and guaranteed isolation and protection for virtual machines (VMs) within the server. I/O virtualization on Ethernet and InfiniBand gives data center managers better server utilization and LAN/SAN unification while reducing cost, power, and cabling complexity.

Overlay network offloads, including encapsulation/decapsulation for VXLAN, NVGRE, and Geneve, enable the highest bandwidth while freeing the CPU for application tasks, supporting a higher virtual-machine-per-server ratio. For more information on adapter savings and the VM calculator, please refer to Mellanox CloudX.

 

Storage Accelerated

A consolidated compute and storage network achieves significant cost-performance advantages over multi-fabric networks. Standard block and file access protocols leveraging InfiniBand RDMA result in high-performance storage access. Mellanox adapters support SCSI, iSCSI, NFS and FCoIB protocols. Mellanox adapters also provide advanced storage offloads such as T10/DIF and RAID Offload.

 

Software Support

All Mellanox adapters are supported by a full suite of drivers for Microsoft Windows, Linux distributions, VMware, and Citrix XenServer. The adapters support OpenFabrics-based RDMA protocols and software, and their stateless offloads are fully interoperable with standard TCP/UDP/IP stacks. The adapters are also compatible with configuration and management tools from OEMs and operating system vendors.

 

Virtual Protocol Interconnect® (VPI)

VPI flexibility enables any standard networking, clustering, storage, and management protocol to seamlessly operate over any converged network leveraging a consolidated software stack. Each port can operate on InfiniBand, Ethernet, or Data Center Bridging (DCB) fabrics, and supports Ethernet over InfiniBand (EoIB) and Fibre Channel over InfiniBand (FCoIB) as well as Fibre Channel over Ethernet (FCoE) and RDMA over Converged Ethernet (RoCE). VPI simplifies I/O system design and makes it easier for IT managers to deploy infrastructure that meets the challenges of a dynamic data center.

 

ConnectX®-4 Single/Dual-Port Adapters supporting 100Gb/s with VPI®

ConnectX-4 adapters with Virtual Protocol Interconnect (VPI) support EDR 100Gb/s InfiniBand and 100Gb/s Ethernet connectivity, providing the highest-performance and most flexible solution for high-performance computing, Web 2.0, cloud, data analytics, database, and storage platforms.

ConnectX-4 adapters provide an unmatched combination of 100Gb/s bandwidth in a single port, the lowest available latency, 150 million messages per second, and application hardware offloads, addressing both today's and the next generation's compute and storage data center demands.
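The 100Gb/s figure follows from EDR's standard 4x lane configuration: four lanes at 25.78125 Gb/s each with 64b/66b encoding. As a quick sanity check (the lane rate and encoding come from the InfiniBand specification, not from this page):

```python
# EDR InfiniBand link arithmetic: 4 lanes, 64b/66b encoding.
LANES = 4
EDR_LANE_RATE_GBPS = 25.78125          # raw signaling rate per lane

raw_gbps = LANES * EDR_LANE_RATE_GBPS  # 103.125 Gb/s on the wire
data_gbps = raw_gbps * 64 / 66         # usable data rate after encoding

print(raw_gbps, data_gbps)             # 103.125, exactly 100.0
```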

 

ConnectX®-3 Pro Single/Dual-Port Adapter with VPI®

ConnectX-3 Pro adapter cards with Virtual Protocol Interconnect (VPI) support InfiniBand and Ethernet connectivity with hardware offload engines for overlay networks ("tunneling"), providing the highest-performing and most flexible interconnect solution for PCI Express Gen3 servers used in public and private clouds, enterprise data centers, and high-performance computing.

 

ConnectX®-3

Mellanox's industry-leading ConnectX-3 InfiniBand adapters provide the highest-performing and most flexible interconnect solution. ConnectX-3 delivers up to 56Gb/s of throughput across the PCI Express 3.0 host bus, achieves transaction latencies below 1µs, and can deliver more than 90 million MPI messages per second, making it the most scalable and suitable solution for current and future transaction-demanding applications. ConnectX-3 maximizes network efficiency, making it ideal for HPC and converged data centers running a wide range of applications.
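The 56Gb/s figure is FDR's aggregate signaling rate over the standard 4x lane configuration; with FDR's 64b/66b encoding, the usable data rate is slightly lower. A quick check (lane rate and encoding from the InfiniBand specification, not stated on this page):

```python
# FDR InfiniBand link arithmetic: 4 lanes, 64b/66b encoding.
LANES = 4
FDR_LANE_RATE_GBPS = 14.0625           # raw signaling rate per lane

raw_gbps = LANES * FDR_LANE_RATE_GBPS  # 56.25 Gb/s, marketed as "56Gb/s"
data_gbps = raw_gbps * 64 / 66         # ~54.5 Gb/s of payload capacity
```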

 

Connect-IB®

Connect-IB delivers leading performance with maximum bandwidth, low latency, and computing efficiency for performance-driven server and storage applications. Maximum bandwidth is delivered across PCI Express 3.0 x16 and two ports of FDR InfiniBand, supplying more than 100Gb/s of throughput together with consistent low latency across all CPU cores. Connect-IB also enables PCI Express 2.0 x16 systems to take full advantage of FDR, delivering at least twice the bandwidth of existing PCIe 2.0 solutions.
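The bandwidth claims above can be checked against the encoding overheads of the two PCIe generations: PCIe 3.0 runs at 8 GT/s per lane with 128b/130b encoding, while PCIe 2.0 runs at 5 GT/s with 8b/10b. A rough per-direction comparison, ignoring link-layer protocol overhead:

```python
def pcie_data_rate_gbps(gt_per_s, enc_payload, enc_total, lanes):
    """Approximate usable PCIe bandwidth (Gb/s, one direction)."""
    return gt_per_s * enc_payload / enc_total * lanes

gen3_x16 = pcie_data_rate_gbps(8.0, 128, 130, 16)  # ~126 Gb/s
gen2_x16 = pcie_data_rate_gbps(5.0, 8, 10, 16)     # 64 Gb/s
dual_fdr = 2 * 4 * 14.0625 * 64 / 66               # ~109 Gb/s of FDR payload

# PCIe 3.0 x16 has headroom for both FDR ports; PCIe 2.0 x16 does not,
# which is why dual-port FDR needs a Gen3 x16 slot to run at full rate.
```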

 

Complete End-to-End 100Gb/s InfiniBand Networking

ConnectX-4 adapters are part of Mellanox's full EDR 100Gb/s InfiniBand end-to-end portfolio for data centers and high-performance computing systems, which includes switches and cables. Mellanox's SwitchX family of FDR InfiniBand switches and Unified Fabric Management software incorporate advanced tools that simplify network management and installation, and provide the capabilities needed for the highest scalability and future growth. Mellanox's line of FDR copper and fiber cables ensures the highest interconnect performance. With a Mellanox end-to-end solution, IT managers can be assured of the highest-performance, most efficient network fabric.

 

* All card types listed are RoHS compliant
** OS support: RHEL, SLES, Windows, HP-UX, VMware ESX
*** Features vary between adapter models

 
Value Proposition
  • High-performance computing requires high bandwidth, low latency, and CPU offloads to achieve the highest server efficiency and application productivity. Mellanox HCAs deliver the highest bandwidth and lowest latency of any standard interconnect, enabling CPU efficiencies of greater than 95%.

  • Data centers and cloud computing require I/O services such as bandwidth, consolidation and unification, and flexibility. Mellanox HCAs support LAN and SAN traffic consolidation and provide hardware acceleration for server virtualization.

  • Virtual Protocol Interconnect® (VPI) flexibility offers InfiniBand, Ethernet, Data Center Bridging, EoIB, FCoIB and FCoE connectivity.
 
Benefits
  • World-class cluster performance

  • High-performance networking and storage access

  • Efficient use of compute resources

  • Cutting-edge performance in virtualized overlay networks (VXLAN and NVGRE)

  • Increased VM per server ratio

  • Guaranteed bandwidth and low-latency services

  • Reliable transport

  • Efficient I/O consolidation, lowering data center costs and complexity

  • Scalability to tens of thousands of nodes

Speed              Ports  Connectors  ASIC & PCI Dev ID    PCI Lanes

Mellanox ConnectX®-4 VPI
FDR IB (56Gb/s)    1      QSFP        ConnectX-4®          PCIe 3.0 x16
FDR IB (56Gb/s)    2      QSFP        ConnectX-4®          PCIe 3.0 x16
EDR IB (100Gb/s)   1      QSFP        ConnectX-4®          PCIe 3.0 x16
EDR IB (100Gb/s)   2      QSFP        ConnectX-4®          PCIe 3.0 x16

Mellanox Connect-IB®
FDR IB (56Gb/s)    1      QSFP        Connect-IB 4113      PCIe 3.0 x8
FDR IB (56Gb/s)    2      QSFP        Connect-IB 4113      PCIe 3.0 x8
FDR IB (56Gb/s)    1      QSFP        Connect-IB 4113      PCIe 3.0 x16
FDR IB (56Gb/s)    2      QSFP        Connect-IB 4113      PCIe 3.0 x16

Mellanox ConnectX®-3 Pro VPI Adapter Cards
FDR and 40/56GbE   1      QSFP        ConnectX®-3 4103     PCIe 3.0 x8
FDR and 40/56GbE   2      QSFP        ConnectX®-3 4103     PCIe 3.0 x8

Mellanox ConnectX®-3 VPI Adapter Cards
QDR and 10GbE      1      QSFP        ConnectX®-3 4099     PCIe 3.0 x8
QDR and 10GbE      2      QSFP        ConnectX®-3 4099     PCIe 3.0 x8
FDR10 and 10GbE    1      QSFP        ConnectX®-3 4099     PCIe 3.0 x8
FDR10 and 10GbE    2      QSFP        ConnectX®-3 4099     PCIe 3.0 x8
FDR and 40GbE      1      QSFP        ConnectX®-3 4099     PCIe 3.0 x8
FDR and 40GbE      2      QSFP        ConnectX®-3 4099     PCIe 3.0 x8
 
 
 
 
 




Last updated 4/11/2016