Mellanox ConnectX Technology
HP InfiniBand HCAs based on Mellanox ConnectX technology provide the highest-performing
and most flexible interconnect solution for enterprise data centers, high-performance computing,
and embedded environments. Clustered databases, parallelized applications, transactional
services, and high-performance embedded I/O applications achieve significant performance
improvements, resulting in reduced completion time and lower cost per operation. ConnectX also
simplifies network deployment by consolidating cables, and it enhances performance in virtualized
server environments.
ConnectX delivers low latency and high bandwidth for performance-driven server and storage
clustering applications. These applications benefit from the reliable transport connections and
advanced multicast support offered by ConnectX. Network protocol processing and data movement,
such as InfiniBand RDMA and Send/Receive operations, are completed in the adapter without CPU
intervention. Servers supporting PCI Express 2.0 at 5 GT/s can take full advantage of 40 Gb/s
InfiniBand, balancing the I/O requirements of these high-end servers.
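As a rough back-of-envelope check (assuming the x8 PCI Express 2.0 slot these adapters typically
use and 8b/10b encoding on both the PCIe and InfiniBand links, neither of which is stated on this
page): a 4X QDR InfiniBand port signals at 4 lanes x 10 Gb/s = 40 Gb/s, or about 32 Gb/s of payload
after 8b/10b encoding, while a PCIe 2.0 x8 link carries 8 lanes x 5 GT/s = 40 GT/s raw, again
roughly 32 Gb/s of usable bandwidth per direction, so the host link and the fabric link are
approximately matched.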
The features of Mellanox ConnectX include:
• 1.2 µs MPI ping latency
• 10, 20, or 40 Gb/s InfiniBand ports
• PCI Express 2.0 (up to 5 GT/s)
• CPU offload of transport operations (see the sketch after this list)
• End-to-end QoS and congestion control
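To make the CPU-offload point concrete, the sketch below shows how an application typically hands
a one-sided RDMA WRITE to a ConnectX-class HCA through the standard libibverbs API. This is a
minimal illustration, not HP or Mellanox sample code: the function and parameter names are
illustrative, the queue-pair connection setup and the out-of-band exchange of the peer's buffer
address and rkey are assumed to have happened already, and error handling is trimmed. The adapter
performs the data movement; the host only posts the work request and polls for its completion.

    /* Minimal sketch: posting a one-sided RDMA WRITE with the libibverbs API.
     * Assumes qp is already connected and the remote address/rkey were
     * exchanged out of band; local_buf was registered with ibv_reg_mr. */
    #include <infiniband/verbs.h>
    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    int rdma_write_example(struct ibv_qp *qp, struct ibv_cq *cq,
                           struct ibv_mr *local_mr, void *local_buf, size_t len,
                           uint64_t remote_addr, uint32_t remote_rkey)
    {
        /* Describe the local source buffer. */
        struct ibv_sge sge = {
            .addr   = (uintptr_t)local_buf,
            .length = (uint32_t)len,
            .lkey   = local_mr->lkey,
        };

        /* Build an RDMA WRITE work request targeting the remote buffer. */
        struct ibv_send_wr wr, *bad_wr = NULL;
        memset(&wr, 0, sizeof(wr));
        wr.wr_id               = 1;
        wr.sg_list             = &sge;
        wr.num_sge             = 1;
        wr.opcode              = IBV_WR_RDMA_WRITE;
        wr.send_flags          = IBV_SEND_SIGNALED;   /* request a completion */
        wr.wr.rdma.remote_addr = remote_addr;
        wr.wr.rdma.rkey        = remote_rkey;

        /* Hand the request to the HCA; the transfer needs no CPU copies. */
        if (ibv_post_send(qp, &wr, &bad_wr))
            return -1;

        /* Poll the completion queue until the adapter reports the write done. */
        struct ibv_wc wc;
        int n;
        while ((n = ibv_poll_cq(cq, 1, &wc)) == 0)
            ;                                         /* busy-wait for brevity */
        if (n < 0 || wc.status != IBV_WC_SUCCESS) {
            fprintf(stderr, "RDMA WRITE failed: %s\n",
                    ibv_wc_status_str(wc.status));
            return -1;
        }
        return 0;
    }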
The following table provides a link to the information and documentation for the Mellanox ConnectX
HCAs used in HP InfiniBand configurations.
ConnectX Dual-Port InfiniBand Adapter Cards with PCI Express 2.0     Link
  • ConnectX Adapter Card Product Brief                              Mellanox ConnectX
  • ConnectX Adapter Card User Manual
  • ConnectX Adapter Card w/QSFP User Manual