HP Cluster Platform Interconnects v2010, Page 39
3 Installing and Maintaining the ISR 9024 S/D Interconnect (RoHS Compliant)

The hardware maintenance operations described in this chapter require appropriate training and knowledge of safety procedures. The safety procedures for HP Cluster Platform are described in the HP Cluster Platform Site Preparation Guide. The following interconnect maintenance topics are described:

• Overview of the ISR 9024 S/D (Section 3.1).
• Unpacking the ISR 9024 S/D (Section 3.2).
• Mounting the ISR 9024 S/D in the rack (Section 3.3).
• Replacing the ISR 9024 S/D fan unit (Section 3.4).
• Installing the ISR 9024 S/D power supply unit (Section 3.5).
• Operating the ISR 9024 S/D interconnect (Section 3.6).
• Troubleshooting the ISR 9024 S/D (Section 3.7).

The ISR 9024 S/D is a RoHS-compliant model. For information on the non-RoHS-compliant ISR 9024 model, refer to Chapter 2, "Installing and Maintaining the ISR 9024 Interconnect".

3.1 Overview of the ISR 9024 S/D Interconnect

The ISR 9024 S/D (Single Data Rate/Dual Data Rate) is a high-performance, low-latency, fully non-blocking switch for high-performance computing (HPC) clusters and enterprise grids. With twenty-four 20 or 10 Gb/s ports in a 1U chassis, the standards-based ISR 9024 S/D delivers high bandwidth of up to 960 Gb/s with low latency. Using the ISR 9024 S/D, you can build high-performance clusters and grids that scale from several nodes to tens of nodes. To meet varying application needs, the ISR 9024 S/D offers several configuration options, including Single Data Rate (SDR) 4X ports or Dual Data Rate (DDR) ports (20 or 10 Gb/s, auto-negotiated), with either copper or optical adapter interfaces. The ISR 9024 S/D incorporates redundant, hot-swappable power supplies for high availability, as well as a hot-swappable fan unit. The ISR 9024 S/D offers a plug-and-play environment, allowing servers to be added without taking down the fabric.
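The 960 Gb/s figure follows from simple arithmetic, assuming the quoted number counts both directions of every full-duplex DDR port (an interpretation consistent with the per-port rates given above, though the manual does not state it explicitly):

```python
# Aggregate bandwidth sketch for the ISR 9024 S/D.
# Assumption: 960 Gb/s counts both directions of each full-duplex port.
PORTS = 24            # ports in the 1U chassis
DDR_RATE_GBPS = 20    # DDR 4X rate per port, per direction
DIRECTIONS = 2        # full duplex: transmit + receive

aggregate_gbps = PORTS * DDR_RATE_GBPS * DIRECTIONS
print(aggregate_gbps)  # 960
```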
The ISR 9024 S/D is available with an active CPU board (ISR 9024S/D-M, internally managed) running the embedded GridVision™ Device and Fabric Manager, or without a CPU board (ISR 9024S/D, externally managed). GridVision™ provides comprehensive and powerful management capabilities, delivering real-time proactive management by providing:

• Aggregated fabric and resource views
• Access to a suite of fabric and switch diagnostics
• The ability to manage failover on all levels
• Provisioning of InfiniBand fabrics and the attached server networking and storage resources

Device management and fabric management capabilities are embedded in the internally managed ISR 9024 S/D and can be accessed through CLI, GUI, or SNMP managers, or in-band over InfiniBand (IPoIB). The externally managed ISR 9024 S/D can be managed in-band over InfiniBand by a management entity such as another Voltaire switch, using the InfiniBand Management Datagram (MAD) protocol. Technical specifications for the ISR 9024 S/D are provided in Appendix B.

This 24-port chassis functions as a standalone unit in small clusters of up to 24 servers, or as a building block for larger clusters. Larger clusters are constructed from multiple ISR 9024 S/D interconnects functioning as the cluster building blocks. In larger configurations, only a single internally managed ISR 9024 S/D is required to run the management software, while the other interconnects are externally managed. (Using two internally managed devices ensures that fabric management has high availability.)
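The "building block" scaling can be sketched with the standard two-tier fat-tree (Clos) arithmetic commonly used for fixed-radix switches. This formula is not given in the manual; it is the conventional non-blocking topology for 24-port building blocks:

```python
# Two-tier fat-tree capacity from fixed-radix switches (standard Clos
# arithmetic; an illustration, not a topology prescribed by this manual).
# Each leaf switch splits its ports evenly: half down to nodes, half up
# to spine switches, preserving the non-blocking property.
def fat_tree_nodes(radix: int, leaves: int) -> int:
    down_ports = radix // 2          # node-facing ports per leaf switch
    return leaves * down_ports       # total attachable nodes

# Non-blocking maximum with radix-24 switches: 12 spines of 24 ports each
# support up to 24 leaves, so 24 leaves x 12 nodes = 288 nodes.
print(fat_tree_nodes(24, 24))  # 288
```

A single chassis covers the standalone case (24 nodes); the two-tier arrangement shows why multiple 24-port units can serve clusters an order of magnitude larger.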