HPC configuration with HP BladeSystem solutions
Figure 7 shows a full-bandwidth, fat-tree configuration of HP BladeSystem c-Class components providing 576 nodes in a cluster. Each c7000 enclosure includes an HP 4x QDR InfiniBand Switch Blade with 16 downlinks for server blade connections and 16 QSFP uplinks for fabric connectivity. Sixteen 36-port QDR InfiniBand switches provide spine-level fabric connectivity.
Figure 7. HP BladeSystem c-Class 576-node cluster configuration using BL280c blades
[Figure: 36 HP c7000 enclosures (#1 through #36), each containing 16 HP BL280c G6 server blades with 4x QDR HCAs and an HP QDR IB Switch Blade, with 16 uplinks per enclosure to sixteen 36-port QDR IB switches (#1 through #16).]
Total nodes: 576 (1 per blade)
Racks required for servers: Nine 42U (assumes four c7000 enclosures per rack)
Interconnect: 1:1 full bandwidth (non-blocking), 3 switch hops maximum, fabric redundancy
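The node, uplink, and rack counts above follow from simple arithmetic on the per-enclosure figures. The following Python sketch (illustrative only; the constant names are ours, and the values are taken from the configuration described above) double-checks the sizing:

ENCLOSURES = 36            # c7000 enclosures
BLADES_PER_ENCLOSURE = 16  # BL280c G6 blades, one node each
LEAF_DOWNLINKS = 16        # switch blade ports to server blades
LEAF_UPLINKS = 16          # QSFP uplinks toward the spine
SPINE_SWITCHES = 16        # 36-port QDR switches
SPINE_PORTS = 36
ENCLOSURES_PER_RACK = 4    # assumed rack packing from the table above

nodes = ENCLOSURES * BLADES_PER_ENCLOSURE
assert nodes == 576

# Full bandwidth (non-blocking): uplinks match downlinks on every leaf switch.
assert LEAF_UPLINKS == LEAF_DOWNLINKS

# Each enclosure sends one uplink to each spine switch, so every 36-port
# spine switch terminates exactly one link per enclosure.
links_per_spine = ENCLOSURES * LEAF_UPLINKS // SPINE_SWITCHES
assert links_per_spine <= SPINE_PORTS   # 36 links fit a 36-port switch

# Server racks: four enclosures per 42U rack (ceiling division).
racks = -(-ENCLOSURES // ENCLOSURES_PER_RACK)
assert racks == 9

print(f"{nodes} nodes, {racks} racks, {links_per_spine} links per spine switch")

Running the sketch confirms the figures in the table: 576 nodes, nine server racks, and 36 links terminating on each 36-port spine switch, which is why the fabric is non-blocking with at most three switch hops between any two nodes.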