HP 4X QDR InfiniBand ConnectX-2 HP InfiniBand Host Channel Adapters based on Mellanox ConnectX Technology - Page 4
Mellanox ConnectX Technology

HP InfiniBand HCAs based on Mellanox ConnectX technology provide the highest-performing and most flexible interconnect solution for enterprise data centers, high-performance computing, and embedded environments. Clustered databases, parallelized applications, transactional services, and high-performance embedded I/O applications achieve significant performance improvements, resulting in reduced completion time and lower cost per operation. ConnectX also simplifies network deployment by consolidating cables and enhancing performance in virtualized server environments.

ConnectX delivers low latency and high bandwidth for performance-driven server and storage clustering applications. These applications benefit from the reliable transport connections and advanced multicast support offered by ConnectX. Network protocol processing and data movement overhead, such as InfiniBand RDMA and Send/Receive semantics, are handled in the adapter without CPU intervention. Servers supporting PCI Express 2.0 at 5GT/s can take full advantage of 40Gb/s InfiniBand, balancing the I/O requirements of these high-end servers.

The features of Mellanox ConnectX include:
- 1.2us MPI ping latency
- 10, 20, or 40Gb/s InfiniBand ports
- PCI Express 2.0 (up to 5GT/s)
- CPU offload of transport operations
- End-to-end QoS and congestion control

The following table provides links to the information and documentation for the Mellanox ConnectX HCAs used in HP InfiniBand configurations.

ConnectX Dual-Port InfiniBand Adapter Cards with PCI Express 2.0:
- ConnectX Adapter Card Product Brief
- ConnectX Adapter Card User Manual
- ConnectX Adapter Card w/QSFP User Manual
Link: Mellanox ConnectX
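The balance noted above between a PCI Express 2.0 slot and a 40Gb/s InfiniBand port can be checked with simple arithmetic: both 4X QDR InfiniBand and PCIe 2.0 use 8b/10b encoding, so an x8 PCIe 2.0 slot and a 4X QDR link end up with the same usable data rate. The sketch below illustrates this; the lane counts and encoding efficiency are standard values for these link generations, not figures taken from this document.

```python
# Effective data rate = signaling rate per lane * lane count * encoding efficiency.
# Both 4X QDR InfiniBand and PCIe 2.0 use 8b/10b encoding (8/10 = 80% efficient).

def effective_gbps(gt_per_s: float, lanes: int, efficiency: float = 8 / 10) -> float:
    """Usable data bandwidth in Gb/s for a serial link."""
    return gt_per_s * lanes * efficiency

# 4X QDR InfiniBand: 4 lanes at 10 GT/s -> 40 Gb/s signaling, 32 Gb/s of data.
qdr_ib = effective_gbps(10, 4)

# PCIe 2.0 x8 slot: 8 lanes at 5 GT/s -> 40 Gb/s signaling, 32 Gb/s of data.
pcie2_x8 = effective_gbps(5, 8)

print(qdr_ib, pcie2_x8)  # prints 32.0 32.0 -> the adapter and the slot are balanced
```

The matching 32 Gb/s data rates are why the datasheet describes PCIe 2.0 at 5GT/s as "balancing" the I/O requirement of a 40Gb/s InfiniBand port: neither side of the adapter becomes the bottleneck.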