HP Cluster Platform Introduction v2010 HP Cluster Platform Workgroup System To - Page 15
Figure 1-2 Workgroup System Tower Example Rear View

The following list describes the callouts in Figure 1-2:
1. Local KVM interface
2. HP GbE2c Ethernet switch installed in interconnect module bay 1 (IMB1)
3. Active cool fans
4. Interconnect module bay 2 (reserved for potential future use)
5. Enclosure uplink and service port
6. Enclosure downlink
7. iLO/Onboard Administrator port 1
8. iLO/Onboard Administrator port 2 (reserved for future use)
9. Power supplies
10. Optional 4X DDR InfiniBand interconnect installed in interconnect module bays 3 and 4 (IMB3/4)

1.2 High Performance Computing - HP Cluster Platform Architecture

What makes HP Cluster Platform different from standard systems is the assignment of node types, the high-speed networks, and the interoperability efficiency gained by using High Performance Computing Software from HP and HP partners. This section describes the basics of the HP Cluster Platform hardware architecture.

• Nodes: A typical Cluster Platform configuration includes two types of nodes:
  - Control node: Each cluster has one control node. If the optional InfiniBand interconnect is selected, the control node is also connected to it. The control node can also be used for preprocessing, postprocessing, and computational workloads.
  - Compute nodes: Compute nodes are normally used for application computation rather than administrative workloads.
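The node-type split described above (exactly one control node, plus compute nodes for application workloads) can be sketched as a minimal inventory model. This is an illustrative sketch only; the `Node` and `build_cluster` names and the hostnames are hypothetical and are not part of any HP tool or API.

```python
# Minimal sketch of the control/compute node split in an HP Cluster
# Platform configuration. Names and helpers are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Node:
    name: str
    role: str  # "control" or "compute"

def build_cluster(num_compute: int) -> list[Node]:
    """A cluster has exactly one control node plus N compute nodes."""
    nodes = [Node("control0", "control")]
    nodes += [Node(f"compute{i}", "compute") for i in range(num_compute)]
    return nodes

cluster = build_cluster(4)
control_nodes = [n for n in cluster if n.role == "control"]
print(len(control_nodes), len(cluster))  # 1 control node, 5 nodes total
```

Because the control node can also take on preprocessing, postprocessing, and computational work, a scheduler could treat it as an additional compute resource when administrative load permits.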