Figure 1-2 Workgroup System Tower Example Rear View
The following list describes the callouts in Figure 1-2:
1.  Local KVM interface
2.  HP GbE2c Ethernet switch installed in interconnect module bay 1 (IMB1)
3.  Active cool fans
4.  Interconnect module bay 2 (reserved for potential future use)
5.  Enclosure uplink and service port
6.  Enclosure downlink
7.  iLO/Onboard Administrator port 1
8.  iLO/Onboard Administrator port 2 (reserved for future use)
9.  Power supplies
10. Optional 4X DDR InfiniBand Interconnect installed in interconnect module bays 3 and 4 (IMB3/4)
1.2 High Performance Computing – HP Cluster Platform Architecture
HP Cluster Platform differs from standard systems in its assignment of node types, its high-speed networks, and the interoperability gained by using High Performance Computing software from HP and HP partners. This section describes the basics of the HP Cluster Platform hardware architecture.
•   Nodes: The two types of nodes included in a typical Cluster Platform configuration are:
    -   Control node: Each cluster has one control node. If the optional InfiniBand interconnect is selected, the control node is also connected to it. The control node can also be used for preprocessing, postprocessing, and computational workloads.
    -   Compute nodes: Compute nodes are normally used for application computation rather than administrative workloads. (An illustrative sketch of this role split follows the list.)
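
The following Python sketch is purely illustrative and is not part of any HP Cluster Platform software: the Node and Cluster classes, the role labels "control" and "compute", and the infiniband flag are all hypothetical names chosen here to model the rules stated above (exactly one control node per cluster, with the control node joining the optional InfiniBand interconnect alongside the compute nodes).

    from dataclasses import dataclass, field

    @dataclass
    class Node:
        name: str
        role: str                 # "control" or "compute" (hypothetical labels)
        infiniband: bool = False  # True if cabled to the optional 4X DDR fabric

    @dataclass
    class Cluster:
        nodes: list = field(default_factory=list)

        def control_node(self):
            # Rule from the text: each cluster has exactly one control node.
            controls = [n for n in self.nodes if n.role == "control"]
            assert len(controls) == 1, "a cluster must have exactly one control node"
            return controls[0]

        def compute_nodes(self):
            return [n for n in self.nodes if n.role == "compute"]

    # One control node plus three compute nodes, all on the optional InfiniBand
    # interconnect (the control node is also connected when InfiniBand is selected).
    cluster = Cluster([Node("cn0", "control", infiniband=True)] +
                      [Node("c%d" % i, "compute", infiniband=True) for i in range(1, 4)])
    print(cluster.control_node().name)                 # cn0
    print([n.name for n in cluster.compute_nodes()])   # ['c1', 'c2', 'c3']

Under these assumptions, the sketch shows why the split matters operationally: administrative work is concentrated on the single control node, while the compute nodes remain free for application computation.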