HP 4X Using InfiniBand for a Scalable Compute Infrastructure

HP 4X - DDR InfiniBand Mezzanine HCA Manual

HP 4X manual content summary:

  • HP 4X | Using InfiniBand for a Scalable Compute Infrastructure - Page 1
    Title page and contents, 3rd edition: Abstract (p. 2); Introduction (p. 2); InfiniBand technology (p. 4); InfiniBand components (p. 5); InfiniBand software architecture (p. 5); MPI (p. 7); IPoIB (p. 7); RDMA-based protocols (p. 7); RDS (p. 8); InfiniBand hardware architecture (p. 8); Link operation (p. 9); Scale-out clusters built on InfiniBand and HP technology (p. 11)
  • HP 4X | Using InfiniBand for a Scalable Compute Infrastructure - Page 2
    for financial services and Oracle-based database applications. InfiniBand (IB) is one of the ... to a large number of processing cores. Scale-out cluster computing that builds ... [I/O interconnect bandwidth comparison: Ultra-3 SCSI, 320 MBps; Fibre Channel, 400 MBps; Gigabit Ethernet, > 1 Gbps; InfiniBand (4x), > 10 Gbps]
  • HP 4X | Using InfiniBand for a Scalable Compute Infrastructure - Page 3
    than 1GbE can provide. However, 10GbE still lags the latest InfiniBand technology in latency and bandwidth performance, and lacks native support for the fat-tree and mesh topologies used in scale-out clusters. InfiniBand remains the interconnect of choice for highly parallel environments where
  • HP 4X | Using InfiniBand for a Scalable Compute Infrastructure - Page 4
    on the type of network media (copper or fiber). Figure 3: Distributed computing using InfiniBand architecture. InfiniBand has these important characteristics: very high bandwidth, up to 40 Gbps Quad Data Rate (QDR); low-latency end-to-end communication, with MPI ping-pong latency approaching
  • HP 4X | Using InfiniBand for a Scalable Compute Infrastructure - Page 5
    duties and RDMA operations as core capabilities and offers greater adaptability through a variety of services and protocols. While the majority of existing InfiniBand clusters operate on the Linux platform, drivers and HCA stacks are also available for Microsoft® Windows®, HP-UX, Solaris, and other
  • HP 4X | Using InfiniBand for a Scalable Compute Infrastructure - Page 6
    [Figure 4, InfiniBand software stack: ULPs (SRP, iSER, NFS, RDS); kernel-space mid-layer modules including the connection manager, SA client, SMA, and MAD services; the OpenFabrics verbs and API layer; the hardware-specific provider driver; and the InfiniBand HCA hardware.] As indicated in Figure 4, InfiniBand supports a variety of upper-level protocols (ULPs) and libraries that ... (a minimal verbs-level code sketch follows this list)
  • HP 4X | Using InfiniBand for a Scalable Compute Infrastructure - Page 7
    interconnects that significantly reduce the effort for applications to support various popular interconnect technologies. HP-MPI is supported on HP-UX, Linux, Tru64 UNIX, and Microsoft Windows Compute Cluster Server 2003. Internet Protocol over InfiniBand (IPoIB) allows the use of TCP/IP or ... (an MPI ping-pong timing sketch follows this list)
  • HP 4X | Using InfiniBand for a Scalable Compute Infrastructure - Page 8
    aggregate as links and nodes are added. Double data rate (DDR) and especially quad data rate (QDR) operation increase bandwidth significantly (Table 1). Table 1, InfiniBand interconnect signal rates: 1x link: SDR 2.5 Gbps, DDR 5 Gbps; 4x link: SDR 10 Gbps, DDR 20 Gbps; 12x link: SDR 30 Gbps, DDR 60 Gbps (a worked rate calculation follows this list)
  • HP 4X | Using InfiniBand for a Scalable Compute Infrastructure - Page 9
    are CX4 and quad small-form-factor pluggable (QSFP) as shown in Figure 6 (InfiniBand connectors: CX4 and QSFP). Fiber optic cable with CX4 connectors generally offers the greatest distance capability. The adoption of 4X DDR products is widespread, and deployment of QDR systems is expected
  • HP 4X | Using InfiniBand for a Scalable Compute Infrastructure - Page 10
    two channel adapters is established, one of the following transport layer services is used: ... are basic datagram movers and may require system processor support depending on the ULP used. The reliable connection service rejects duplicate packets and provides recovery services for failures in the fabric.
  • HP 4X | Using InfiniBand for a Scalable Compute Infrastructure - Page 11
    The QDR HCA mezzanine card is supported on the ProLiant G6 blades with PCIe x8 Gen 2 mezzanine connectors. Figure 9 shows a full-bandwidth fat-tree configuration of HP BladeSystem c-Class components providing 576 nodes in a cluster (the node arithmetic is sketched after this list). Each c7000 enclosure includes an HP 4x QDR IB Switch, which
  • HP 4X | Using InfiniBand for a Scalable Compute Infrastructure - Page 12
    [Figure 9 detail: 36 HP c7000 enclosures, each with 16 HP BL280c G6 server blades with 4x QDR HCAs and HP 4x QDR IB interconnect switches, connected through 36-port QDR IB switches; the accompanying summary lists total nodes, total processor cores, memory, storage, and interconnect.]
  • HP 4X | Using InfiniBand for a Scalable Compute Infrastructure - Page 13
    InfiniBand. HP BladeSystem c-Class clusters and similar rack-mounted clusters support IB QDR and DDR HCAs and switches. InfiniBand offers solid growth potential in performance, with DDR infrastructure currently accepted as mainstream, QDR becoming available, and Eight Data Rate (EDR) with a per-port
  • HP 4X | Using InfiniBand for a Scalable Compute Infrastructure - Page 14
    Glossary excerpts: ... software routine or object written in support of a language; DDR, Double Data Rate: for InfiniBand, a clock rate of 5.0 Gbps (2.5 Gbps ...); ... a packet-switched network; IPoIB, Internet Protocol over InfiniBand: protocol allowing the use of TCP/IP over IB networks; additional entries cover iSER, iWARP, MPI, NFS, NHP, QDR, QSFP, and RDMA
  • HP 4X | Using InfiniBand for a Scalable Compute Infrastructure - Page 15
    Resource hyperlinks (table rows include iWARP, RDMA, and HP BladeSystem): www.hp.com; www.hp.com/go/hptc; http://h18004.www1.hp.com/products/servers/networking/index-ib.html; http://www.infinibandta.org; http://www.openib.org/; http://www.rdmaconsortium.org; http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00589475/c00589475
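
The OpenFabrics verbs layer called out on page 6 is the API that the ULPs and MPI implementations sit on. The C sketch below is a minimal illustration of that layer, assuming a Linux host with libibverbs installed; the buffer size and the choice of the first listed device are illustrative, and error handling is abbreviated. It opens an HCA, allocates a protection domain, and registers a buffer so the HCA can access it for RDMA.

    /* Minimal OpenFabrics verbs sketch: open an HCA, allocate a protection
     * domain, and register a buffer for RDMA. Error handling is abbreviated.
     * Typical build: gcc verbs_open.c -libverbs
     */
    #include <stdio.h>
    #include <stdlib.h>
    #include <infiniband/verbs.h>

    int main(void)
    {
        int num;
        struct ibv_device **devs = ibv_get_device_list(&num);
        if (!devs || num == 0) {
            fprintf(stderr, "no InfiniBand devices found\n");
            return 1;
        }

        struct ibv_context *ctx = ibv_open_device(devs[0]);
        printf("opened HCA %s\n", ibv_get_device_name(devs[0]));

        struct ibv_pd *pd = ibv_alloc_pd(ctx);          /* protection domain */

        size_t len = 4096;                              /* illustrative buffer size */
        void *buf = malloc(len);
        struct ibv_mr *mr = ibv_reg_mr(pd, buf, len,    /* pin and register memory */
                                       IBV_ACCESS_LOCAL_WRITE |
                                       IBV_ACCESS_REMOTE_READ |
                                       IBV_ACCESS_REMOTE_WRITE);
        printf("registered %zu bytes, lkey=0x%x rkey=0x%x\n",
               len, mr->lkey, mr->rkey);

        /* A real consumer would now create completion queues and queue pairs
         * (ibv_create_cq, ibv_create_qp) and exchange rkeys with its peer. */
        ibv_dereg_mr(mr);
        free(buf);
        ibv_dealloc_pd(pd);
        ibv_close_device(ctx);
        ibv_free_device_list(devs);
        return 0;
    }

Creating queue pairs and exchanging memory keys with the remote side is exactly the plumbing that the ULPs and MPI libraries described on pages 6 and 7 hide from applications.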
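
Pages 4 and 7 use MPI ping-pong latency as the end-to-end metric. A common way to measure it is a two-rank ping-pong loop timed with MPI_Wtime; the sketch below is generic MPI C code rather than anything HP-MPI-specific, and the message size and iteration count are illustrative.

    /* Two-rank MPI ping-pong latency sketch. Half the round-trip time for a
     * small message approximates one-way latency over the interconnect.
     * Typical run: mpicc pingpong.c -o pingpong && mpirun -np 2 ./pingpong
     */
    #include <stdio.h>
    #include <mpi.h>

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);

        const int iters = 1000;            /* illustrative iteration count */
        char msg[8] = {0};                 /* small message, latency-bound */
        MPI_Status st;

        MPI_Barrier(MPI_COMM_WORLD);
        double t0 = MPI_Wtime();
        for (int i = 0; i < iters; i++) {
            if (rank == 0) {
                MPI_Send(msg, sizeof msg, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
                MPI_Recv(msg, sizeof msg, MPI_CHAR, 1, 0, MPI_COMM_WORLD, &st);
            } else if (rank == 1) {
                MPI_Recv(msg, sizeof msg, MPI_CHAR, 0, 0, MPI_COMM_WORLD, &st);
                MPI_Send(msg, sizeof msg, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
            }
        }
        double t1 = MPI_Wtime();

        if (rank == 0)
            printf("one-way latency ~ %.2f us\n",
                   (t1 - t0) / (2.0 * iters) * 1e6);

        MPI_Finalize();
        return 0;
    }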
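
The signal rates in Table 1 on page 8 are the 2.5 Gbps SDR lane rate scaled by the DDR/QDR multiplier and the link width (1x, 4x, 12x), which is also where the 40 Gbps 4x QDR figure on page 4 comes from. The short sketch below reproduces that arithmetic; the usable-data column assumes 8b/10b encoding on SDR/DDR/QDR links, which is not stated in this brief.

    /* Worked InfiniBand signal-rate arithmetic: per-lane rate x lane count,
     * with DDR/QDR as 2x/4x the 2.5 Gbps SDR lane rate (matches Table 1).
     * The 80% usable-data figure assumes 8b/10b encoding on SDR/DDR/QDR links.
     */
    #include <stdio.h>

    int main(void)
    {
        const double sdr_lane_gbps = 2.5;          /* SDR per-lane signal rate */
        const int widths[] = {1, 4, 12};           /* 1x, 4x, 12x links */
        const int speedups[] = {1, 2, 4};          /* SDR, DDR, QDR multipliers */
        const char *names[] = {"SDR", "DDR", "QDR"};

        for (int w = 0; w < 3; w++) {
            for (int s = 0; s < 3; s++) {
                double signal = sdr_lane_gbps * speedups[s] * widths[w];
                double data   = signal * 0.8;      /* assumed 8b/10b overhead */
                printf("%2dx %s: %6.1f Gbps signal, %6.1f Gbps data\n",
                       widths[w], names[s], signal, data);
            }
        }
        return 0;
    }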
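
The 576-node cluster on page 11 follows from the counts in Figure 9: 36 c7000 enclosures times 16 BL280c G6 blades per enclosure. The sketch below reproduces that count and adds the usual full-bandwidth (non-blocking) fat-tree rule that a leaf switch needs as many uplinks as it has server downlinks; the uplink rule is a general fat-tree property assumed here, not a figure quoted from the brief.

    /* Node and uplink arithmetic for the full-bandwidth fat tree on page 11:
     * 36 c7000 enclosures x 16 blades each = 576 nodes. The non-blocking
     * assumption (uplinks per leaf switch == blade downlinks) is a general
     * fat-tree property, not a value quoted from the brief.
     */
    #include <stdio.h>

    int main(void)
    {
        const int enclosures = 36;       /* HP c7000 enclosures (Figure 9) */
        const int blades_per_enc = 16;   /* BL280c G6 blades with 4x QDR HCAs */

        int nodes = enclosures * blades_per_enc;
        int uplinks_per_leaf = blades_per_enc;   /* full bandwidth: one uplink per blade */
        int total_uplinks = enclosures * uplinks_per_leaf;

        printf("nodes: %d\n", nodes);                              /* 576 */
        printf("uplinks per enclosure switch: %d\n", uplinks_per_leaf);
        printf("total uplinks to 36-port spine switches: %d\n", total_uplinks);
        return 0;
    }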

Table of contents:

  • Abstract (2)
  • Introduction (2)
  • InfiniBand technology (4)
  • InfiniBand components (5)
  • InfiniBand software architecture (5)
  • MPI (7)
  • IPoIB (7)
  • RDMA-based protocols (7)
  • RDS (8)
  • InfiniBand hardware architecture (8)
  • Link operation (9)
  • Scale-out clusters built on InfiniBand and HP technology (11)
  • Conclusion (13)
  • Appendix A: Glossary (14)
  • For more information (15)
  • Call to action (15)
Using InfiniBand for a scalable compute infrastructure, technology brief, 3rd edition