HP 489183-B21: Using InfiniBand for a Scalable Compute Infrastructure

Figure 9. HP BladeSystem c-Class 576-node cluster configuration
[Figure: 36 HP c7000 enclosures (1 through 36), each holding 16 HP BL280c G6 server blades with 4x QDR HCAs and an HP 4x QDR IB Interconnect Switch; each enclosure switch has 16 uplinks, one to each of 16 external 36-port QDR IB switches (1 through 16).]

Total nodes:            576 (1 per blade)
Total processor cores:  4608 (2 Nehalem processors per node, 4 cores per processor)
Memory:                 28 TB with 4 GB DIMMs (48 GB per node) or 55 TB with 8 GB DIMMs (96 GB per node)
Storage:                2 NHP (non-hot-plug) SATA or SAS drives per node
Interconnect:           1:1 full bandwidth (non-blocking), 3 switch hops maximum, fabric redundancy
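As a quick cross-check of the figures above, the sketch below recomputes the node, core, memory, and fabric numbers from the per-enclosure values shown in the diagram. The constants are taken from the figure; the 28 TB and 55 TB totals assume decimal rounding (1 TB = 1000 GB), which is an assumption, not a stated convention.

/* Sizing sketch for the Figure 9 configuration (assumed inputs:
 * 36 enclosures, 16 BL280c G6 blades per enclosure, 2 quad-core
 * Nehalem processors per blade, 48 GB or 96 GB per node, and 16
 * uplinks per enclosure switch to 16 external 36-port QDR switches). */
#include <stdio.h>

int main(void)
{
    const int enclosures       = 36;
    const int blades_per_encl  = 16;   /* BL280c G6 server blades        */
    const int sockets_per_node = 2;    /* Nehalem processors per blade   */
    const int cores_per_socket = 4;
    const int gb_small         = 48;   /* per node, 4 GB DIMMs           */
    const int gb_large         = 96;   /* per node, 8 GB DIMMs           */
    const int downlinks        = 16;   /* blades per enclosure switch    */
    const int uplinks          = 16;   /* one to each 36-port IB switch  */

    int nodes = enclosures * blades_per_encl;                /* 576  */
    int cores = nodes * sockets_per_node * cores_per_socket; /* 4608 */

    printf("nodes  : %d\n", nodes);
    printf("cores  : %d\n", cores);
    printf("memory : %.0f TB (48 GB/node) or %.0f TB (96 GB/node)\n",
           nodes * (double)gb_small / 1000.0,
           nodes * (double)gb_large / 1000.0);

    /* 16 downlinks vs. 16 uplinks per enclosure switch gives a 1:1,
     * non-blocking fabric; the worst-case path is enclosure switch ->
     * 36-port core switch -> enclosure switch, i.e. 3 switch hops. */
    printf("oversubscription : %d:1 (non-blocking)\n", downlinks / uplinks);
    printf("max switch hops  : 3\n");
    return 0;
}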
The HP Unified Cluster Portfolio includes a range of hardware, software, and services that provide
customers a choice of pre-tested, pre-configured systems for simplified implementation, fast
deployment, and standardized support.
HP solutions optimized for HPC:
• HP Cluster Platforms - flexible, factory integrated/tested systems built around specific platforms, backed by HP warranty and support, and built to uniform, worldwide specifications
• HP Scalable File Share (HP SFS) - high-bandwidth, scalable HP storage appliance for Linux clusters
• HP Financial Services Industry (FSI) solutions - defined solution stacks and configurations for real-time market data systems

HP and partner solutions optimized for scale-out database applications:
• HP Oracle Exadata Storage
• HP Oracle Database Machine
• HP BladeSystem for Oracle Optimized Warehouse (OOW)
HP Cluster Platforms are built around specific hardware and software platforms and offer a choice of interconnects. For example, the HP Cluster Platform CL3000BL uses the HP BL2x220c G5, BL280c G6, and BL460c blade servers as compute nodes with a choice of GbE or InfiniBand interconnects. No longer unique to Linux or HP-UX environments, HPC clustering is now supported through Microsoft Windows Server HPC 2003, with native support for HP-MPI.
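Because HP-MPI is the common message-passing layer across these cluster platforms, a minimal MPI ping-pong such as the sketch below is a typical first smoke test of the InfiniBand fabric between two blades. It uses only standard MPI calls; the message size, iteration count, and the build/launch commands mentioned afterward are illustrative assumptions rather than HP-specified values.

/* Minimal MPI ping-pong between two ranks. Standard MPI calls only,
 * so it should build with HP-MPI's mpicc as well as other MPI
 * implementations; sizes and counts are illustrative. */
#include <mpi.h>
#include <stdio.h>
#include <string.h>

#define MSG_BYTES  4096
#define ITERATIONS 1000

int main(int argc, char **argv)
{
    int rank, size, i;
    char buf[MSG_BYTES];
    double t0, t1;

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);

    if (size < 2) {
        if (rank == 0)
            fprintf(stderr, "run with at least 2 ranks\n");
        MPI_Finalize();
        return 1;
    }

    memset(buf, 0, sizeof(buf));
    MPI_Barrier(MPI_COMM_WORLD);
    t0 = MPI_Wtime();

    /* Rank 0 sends to rank 1 and waits for the echo; rank 1 echoes back. */
    for (i = 0; i < ITERATIONS; i++) {
        if (rank == 0) {
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD);
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 1, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
        } else if (rank == 1) {
            MPI_Recv(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD,
                     MPI_STATUS_IGNORE);
            MPI_Send(buf, MSG_BYTES, MPI_CHAR, 0, 0, MPI_COMM_WORLD);
        }
    }

    t1 = MPI_Wtime();
    if (rank == 0)
        printf("average round trip: %.2f us for %d-byte messages\n",
               1e6 * (t1 - t0) / ITERATIONS, MSG_BYTES);

    MPI_Finalize();
    return 0;
}

With HP-MPI on Linux this would typically be built and launched with something like mpicc pingpong.c -o pingpong followed by mpirun -np 2 ./pingpong across two blades; exact commands, options, and interconnect-selection settings depend on the HP-MPI version installed.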