HP Cluster Platform InfiniBand Interconnect Installation and User's Guide
HP Part Number: A-CPIBI-1E
Published: October 2006
The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Table of Contents

About This Manual ... 11
    Organization ... 11
    Audience ... 12
    Related Documentation ...
...
    Installing the ISR 9024 Power Supply Unit ... 36
2.5 Operating the ISR 9024 Interconnect ... 36
2.6 Troubleshooting the ISR 9024 ... 36
3   Installing and Maintaining the ISR 9024 S/D Interconnect (RoHS Compliant) ...
...
3.6 Operating the ISR 9024 S/D Interconnect ... 47
3.7 Troubleshooting the ISR 9024 S/D ... 47
4   Installing and Maintaining the ISR 9096 and ISR 9288 Interconnects ... 49
4.1 Overview of the ISR 9XXX Interconnects ... 49
4.1.1 ISR 9XXX ...
...
8.9.2 HP HPC 4x DDR IB Mezzanine HCA ... 105
8.9.3 Operating System and Software Requirements ... 105
9   Postinstallation Troubleshooting and Diagnostics ... 107
9.1 Postinstallation Troubleshooting ... 107
9.2 Startup Checks for the ISR 9XXX ... 107
9.3 Debugging a Fabric Failure by Using Performance Management ...
...
F.5 Mellanox PCI-Express HCA Specifications ... 128
F.6 Mellanox Memory Free PCI-Express HCA (SDR) Specifications ... 129
F.7 Mellanox Memory Free PCI-Express HCA (DDR) Specifications ... 129
F.8 Mellanox PCI-Express DDR HCA Specifications ... 130
F.9 HPC 4x DDR IB Mezzanine HCA Specifications ... 130
Index ... 133
List of Figures

1-1  Typical InfiniBand Network ... 21
1-2  ISR 9024 Interconnect Front Panel View ... 22
1-3  ISR 9024 S/D (RoHS Compliant) Interconnect Front Panel View ... 23
1-4  ISR 9096 Interconnect Front Panel View ... 23
1-5  ISR 9096 Interconnect Rear Panel View ... 24
1-6  ISR 9288 Interconnect Front Panel ...
...
8-3  Topspin/Mellanox SDR PCI-X HCA ... 98
8-4  Topspin/Mellanox SDR PCI-Express HCA ... 98
8-5  Mellanox PCI-X HCA ... 99
8-6  Mellanox PCI-Express HCA ... 100
8-7  Mellanox Memory Free PCI-Express HCA (SDR) ... 101
8-8  Mellanox Memory Free PCI-Express HCA (DDR) ... 102
8-9  Mellanox PCI-Express HCA (DDR) ... 104
8-10 4x DDR ...
List of Tables

4-1  ISR 9XXX Standard Configurations ... 64
6-1  sFB-12 Fabric Board LED Status ... 73
6-2  Line Board Status LEDs ... 74
6-3  sMB Board LEDs ... 75
6-4  Power Supply Unit LEDs ... 77
6-5  IPR Blade LEDs ... 78
6-6  FCR Blade LEDs ... 79
6-7  sCTRL Board LEDs ... 81
6-8  sFU-8 Fan Module LEDs ... 84
7-1  ...
About This Manual

Organization
The information in this guide is organized as follows:

Chapter 1: InfiniBand Technology Overview - Describes the InfiniBand standard and its implementation in the supported ...
... - Describes troubleshooting and interconnect operational verification after you install ...

Audience
This manual is intended for experienced hardware administrators of large-scale computer systems, and for HP Global Services representatives. This manual assumes that the reader has read the HP Cluster Platform Overview and the HP Cluster Platform Site Preparation Guide, and is familiar with HP Cluster Platform architecture and concepts.
Related Documentation
• ... - Provides the installation procedure for the c-Class Blade Cable Management Bracket.
• HP Cluster Platform InfiniBand Fabric Management and Diagnostic Guide - Provides test and diagnostic procedures that are specific to the application of Voltaire InfiniBand interconnects.

Important: Go to http...
...
Important: Text set off in this manner presents clarifying information or specific instructions.
Note: Text set off in this manner presents commentary, sidelights, or interesting points of information.

Conventions
This manual uses the following typographic conventions:
Monospace type - This type denotes ...
See the HP Cluster Platform Site Preparation Guide to obtain specific information on safety. ... There are no user-serviceable parts inside. The laser module should be serviced by service personnel only. Do not attempt ... Refer to the cluster documentation for instructions on how to move components in the ...

... Switch the component off at its on/off switch, then remove the power cord before removing the component's cover. Remove the Power Protection Device cables before any servicing operation. Always replace the cover before switching the component on again. Cellular telephones and other wireless technology can interfere with the ...

Recycling
Shipping an integrated cluster generates far less packaging than shipping the individual components that it contains. However, large clusters use a substantial amount of packaging material that is not reusable. The bulk of the packaging material is recyclable and is labeled as such. You should ...
1 InfiniBand Technology Overview

Features of an InfiniBand fabric are:
• Performance:
  - The following bandwidth options are supported by the architecture (a worked data-rate example follows this list):
    ◦ 1X (2.5 Gb/s)
    ◦ 4X (10 Gb/s or 20 Gb/s)
    ◦ 12X (30 Gb/s or 60 Gb/s)
    ... a 4X or 12X link auto-negotiates to 1X because of link problems.
  - Low latency
  - Reduced CPU utilization
  - Fault tolerance through automatic ...
  - Security and partitioning
  - Integrated management tools
• A strong technical future, with support for:
  - PCI cards:
    ◦ Voltaire SDR PCI-X
    ◦ Topspin/Mellanox SDR PCI-X Rev B; Mellanox SDR PCI-X Rev C
    ◦ Topspin/Mellanox SDR PCI-Express Rev B; Mellanox SDR PCI-Express ...
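For context on the bandwidth options listed above: an InfiniBand link rate is the lane count multiplied by the per-lane signaling rate (2.5 Gb/s per lane for SDR, 5 Gb/s for DDR), and 8b/10b encoding leaves 80% of the signaling rate available as data bandwidth. These figures come from the InfiniBand architecture itself rather than from this guide; the short sketch below simply works through the arithmetic.

```python
# Worked example: InfiniBand link rate = lanes x per-lane signaling rate.
# With 8b/10b encoding, the usable data rate is 80% of the signaling rate.
LANE_RATE_GBPS = {"SDR": 2.5, "DDR": 5.0}  # per-lane signaling rate, Gb/s

for lanes in (1, 4, 12):
    for name, per_lane in LANE_RATE_GBPS.items():
        signaling = lanes * per_lane
        data = signaling * 0.8
        print(f"{lanes:2d}X {name}: {signaling:5.1f} Gb/s signaling, {data:5.1f} Gb/s data")
```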
... A connection to the Local Area Network (LAN). 11. A connection to the Wide Area Network (WAN).

1.2 ISR 9xxxx-series Interconnects
HP is currently superseding the ISR 9xxxx-series interconnects documented in this guide with newer models. The ISR 9096 and ISR 9288 models documented in this ...
• Support only single data rate (SDR) links.
• Are not compliant with the Restriction of Hazardous Substances (RoHS) directive.
...
For more information about these models, refer to the following documents:
• Voltaire ISR 9024 S/D Installation Manual
• Voltaire ISR 9288 / ISR 9096 Installation Manual

1.3 Identifying the ISR 9024 Interconnect (SDR)
Figure 1-2 shows the front ...
Figure 1-3 ISR 9024 S/D (RoHS Compliant) Interconnect Front Panel View

The ISR 9024 S/D (RoHS Compliant) interconnect is a fully non-blocking interconnect with a theoretical throughput of 960 Gb/s. This device has the following physical and operational features:
• A 1U chassis, designed for industry-standard ...
... 24 ports of type 4X.
• Two redundant management boards, model sMB, including a CPU mezzanine.
• Up to 4 router blade drawers, model sRBD, that each support up to three router blades, which are either of the following models:
  - TCP/IP internet protocol router blade, model IPR, that provides four ...
...
... other InfiniBand-to-TCP/IP and InfiniBand-to-Fibre Channel router blades in any combination. Up to 132 router ports are supported in a single chassis. The ISR 9288 chassis supports the following modules:
• Up to four redundant fabric boards, model sFB-12.
• Up to 12 line boards, model sLB-24, each ...
... to the CLI, which is primarily used for initial configuration of the interconnect and for troubleshooting or diagnostics. You can connect with a DB9 cable (or adapter) ... and performs environmental monitoring of the interconnect. It supports an industry-standard management information base (MIB) that ...

See the Voltaire InfiniBand Fabric Management and Diagnostic Guide and the HP Cluster Platform InfiniBand Fabric Management and Diagnostic Guide for information on how to launch and use the management interfaces.
2 Installing and Maintaining the ISR 9024 Interconnect

... the HP Cluster Platform Site Preparation Guide. The following interconnect maintenance topics ... enable you to monitor, upgrade, and troubleshoot the interconnect and its network (fabric). ... incorporates an internal crossbar switch capable of supporting cut-through switching to minimize latency. This ...

... and rear panels provide InfiniBand port and status indications, which enable you to monitor the switch operation and diagnose fabric problems. Figure 2-1 shows the front panel of the internally managed configuration.

Figure 2-1 Internally Managed ISR 9024, Front Panel

... card reset button (use this button only when directed by a diagnostic program, a documented interconnect procedure, or by HP service personnel). 6. Card-to-host adapter cable, for connecting the interconnect's management card to a PC, laptop, or terminal. ...

2.1.3 Externally Managed ISR 9024 Front Panel
In this configuration, the I²C port panel replaces the management card panel. The I²C port provides access to channel management functions. Figure 2-3 shows the front panel of the externally managed configuration.

Figure 2-3 Externally Managed ISR 9024, Front Panel
... carton to ensure that it is factory sealed and undamaged. If the box shows signs of damage or is unsealed, contact your HP sales or service representative. 3. Use only a short-bladed safety knife to slit the sealing tape, ensuring that you do not damage the packaging or shock-absorption material ...

..., refer only to the HP documentation for installation and configuration instructions. Perform the preceding steps in reverse order to repack a ... Do not reuse any damaged packaging; contact your HP sales and service representative if you are unsure about return shipping requirements.

2.3 Mounting the ISR 9024 in the Rack
... If the cluster is not running, you are now ready to cable the interconnect, following the cabling instructions in Chapter 7, and using the following sequence: a. InfiniBand cables, which must be ... cluster. See the Voltaire InfiniBand Fabric Management and Diagnostic Guide.
... a link with no data traffic passing over the link.
• Flashing, indicating the presence of a link with data traffic passing over the link.

2.6 Troubleshooting the ISR 9024
If any indicators are not illuminated as described in the preceding list, use the following verification procedure: 1. The PS LED ...

... ends. Try swapping the cable with a spare, or use the cable from an adjacent port temporarily to eliminate the cable as the source of the problem. If a known good cable does not correct the problem, there might be a problem with the port itself. Call HP Service.
3 Installing and Maintaining the ISR 9024 S/D Interconnect (RoHS Compliant)

... Platform are described in the HP Cluster Platform Site Preparation Guide. The following interconnect maintenance topics are described:
• Overview ...
• ... unit (Section 3.5).
• Operating the ISR 9024 S/D interconnect (Section 3.6).
• Troubleshooting the ISR 9024 S/D (Section 3.7).
The ISR 9024 S/D is a RoHS ...

... front and rear panels provide InfiniBand port and status indications, which enable you to monitor the switch operation and diagnose fabric problems. Figure 3-1 shows the front panel of the internally managed configuration.

Figure 3-1 Internally Managed ISR 9024 D-M, Front Panel

Figure 3-1 shows the following front panel features: 1. Power supply indicator. 2. Hot-swappable power supply. 3. Fan unit indicator. 4. Hot-swappable fan unit (contains two fans for high availability) with auto...

Figure 3-2 Externally Managed ISR 9024 S/D, Front Panel

The following list corresponds to the callouts shown in Figure 3-2: 1. Power supply indicator. 2. Power supply module. 3. Fan unit indicator. 4. Fan unit. 5. Reset ...
... carton to ensure that it is factory sealed and undamaged. If the box shows signs of damage or is unsealed, contact your HP sales or service representative. 3. Use only a short-bladed safety knife to slit the sealing tape, ensuring that you do not damage the packaging or shock-absorption material ...

... Platform, refer only to the HP documentation for installation and configuration instructions. Perform the preceding steps in reverse order to repack a ... Do not reuse any damaged packaging; contact your HP sales and service representative if you are unsure about return shipping requirements.

3.3 Mounting the ISR 9024 S/D in the Rack
... cable the interconnect, following the cabling instructions in Chapter 7, and using the ... See the Voltaire InfiniBand Fabric Management and Diagnostic Guide.

3.4 The ISR 9024 S/D Fan Unit
... allows for silent fan operation. Normal and Turbo modes are supported, for higher MTBF and better acoustics. If ...

3.4.1 Replacing the Fan Unit
In normal operation, the two fans work at 50% utilization. In case of fan failure or high-temperature detection, the fans go into Turbo mode. In case of fan failure, the fan drawer LED and the PS/FAN LED on the rear panel blink. When removing the fan unit, the system can ...
... traffic passing over the link.

3.7 Troubleshooting the ISR 9024 S/D
If any ... problem. Reduce the ambient operating temperature and ensure that the airflow is not impeded. If the condition persists after correcting any problems, it is likely that a component has failed, and you should call HP Service ...

... or use the cable from an adjacent port temporarily to eliminate the cable as the source of the problem. If a known good cable does not correct the problem, there might be a problem with the port itself. Call HP Service.
4 Installing and Maintaining the ISR 9096 and ISR 9288 Interconnects

... Cluster Platform Overview and the HP Cluster Platform Site Preparation Guide. The following interconnect maintenance topics are described in this ... bisectional bandwidth for each port. The ISR 9288 supports up to 288 InfiniBand 4X (10 Gb/s) ports. The ISR 9096 supports up to 96 InfiniBand 4X (10 Gb/s) ...

... configured for a wide range of applications. The ISR 9288 chassis is 14U. The ISR 9096 is 6U. Features of the ISR 9XXX architecture are:
• Supports multiple line boards (sLB-24) with 24 4X or 8 12X InfiniBand copper interfaces for connectivity between the line cards and the interconnect fabric. The ...
...
  - The ISR 9288 enclosure can contain up to 11 hot-swappable sRBDs.
• Power supply units (PSU).
  - The ISR 9096 supports up to four redundant hot-swappable PSUs.
  - The ISR 9288 supports up to five redundant hot-swappable PSUs.
• Fan units (sFU-8). The ISR 9XXX uses conventional front-to-back air flow ...

... ports. See the Voltaire InfiniBand Fabric Management and Diagnostic Guide. The ISR 9XXX features an embedded InfiniBand subnet ... and supports an industry-standard ...

4.2 Unpacking the Chassis
This section provides step-by-step instructions for packing and unpacking the ISR 9XXX chassis. The ISR ...
... releasing the four clamps. 2. Verify that the top compartment of the crate (Figure 4-1) contains the following components: 1. Rail kit. 2. Cabling guide brackets (these brackets are not used for HP Cluster Platform installations). 3. Screw kit. 4. Grounding kit. 5. Console cable. 6. Power cables ...

... and the CD wrapped in an antistatic bag on top of the chassis.

Figure 4-2 Documentation and CD Location
Item 1: Getting Started Short Guide. Item 2: ISR 9288 Product CD and other CDs, according to system configuration.

5. Open up the front door of the wooden crate by releasing the ...

Figure 4-4 Top Compartment in the ISR 9096 Box
Item 1: Accessories box. Item 2: Packing list. Item 3: Cabling brackets. Item 4: Getting Started Short Guide.

3. Remove the accessories box (Figure 4-5) from the crate and verify that it contains the following items: • A CD with documentation and ...
... Cluster Platform.
• Product CD and printed documentation. This documentation is superseded by the HP Cluster Platform documentation, unless otherwise stated in this guide.
• sFB-12 fabric modules.
• sLB-24 line modules.
• sMB management modules (one or more).
• sPSU power supply modules (one or more) ...

Note: Depending on the reason for the chassis replacement, the shipment might contain preinstalled modules. In other instances (such as a failed mid-plane in the original chassis), you might need to depopulate the failed chassis and transfer the modules to the new chassis after you install the ...

... router blade drawer. 5. One of 5 redundant power supplies. 6. sCTRL control module.

4.3 Repacking the Chassis
If you need to return a defective chassis for servicing or replacement, it must be packed for shipping in its original container. Use the following procedure to pack the ISR 9XXX chassis ...
... rack. This ensures that the power draw is correctly distributed across the rack's redundant power distribution units. This section provides step-by-step instructions for installing the ISR 9XXX chassis. Two people are required to remove the interconnect from its box and mount it in a rack, due ...

... the correct assembly of each telescoping rail unit from the rear of the rack. Notice the flange on the inner rail that is intended to support the interconnect chassis.

Figure 4-10 Telescoping Rail Assembly

Starting at the front of the rack, assemble and install the rail kit as follows ...

..., as shown by callout 3 in Figure 4-9. Caution: The chassis is heavy, and the next step requires two people. 4. Place the chassis on the rail kit support flange, and slide the chassis into the rack so that the L-brackets are flush with the rack columns. 5. Secure the chassis to the rail kit ...
... requirements of the operating environment. If the cluster is not running, you are now ready to cable the interconnect, following the cabling instructions and using the following sequence: a. Verify that all modules are properly installed. Their faceplates should be flush with the chassis front or ...

4.4.3 Installing the Cabling Clamps
If line boards are installed in slots 1 and 2 of the chassis (the topmost slots), or if the number of cables connected to the interconnect exceeds 128 and the interconnect is installed in a standard 600 mm wide cabinet, additional cabling clamps are required, as shown ...
... Cabling guide bracket kit (chassis gripping): Chassis gripping bracket kit / Chassis gripping bracket kit. Cabling guide bracket kit (rack gripping): Rack gripping bracket kit / N/A. Voltaire 24 4X InfiniBand ports modular Line Board, Fibre MediaConverter support: sLB-24. Voltaire 8 12X InfiniBand ports modular Line Board: sLB-...
The front panel contains the following modules and features: 1. A master sMB module. 2. The sFU-8 eight-fan horizontal cooling module. 3. The sFU-4 four-fan vertical cooling module. 4. Up to 4 sFB-12 fabric boards. 5. A redundant slave sMB module. 6. L-brackets that you use to attach the chassis to ...

Figure 4-16 Populated I/O Drawer

The following features are shown in Figure 4-16: 1. sRBD router blade drawer. 2. Blade installed in the sRBD. 3. Location of the locking levers for inserting and removing sRBD router blade drawers. 4. sLB-24 InfiniBand line board. In an HP Cluster Platform, ...

... checking their status LEDs: a. Gigabit Ethernet. b. InfiniBand. c. 10/100 Ethernet management. See Chapter 9 and the Voltaire InfiniBand Fabric Management and Diagnostic Guide to isolate and resolve the problem if any of the LEDs are not displaying the correct status.
5 HP 4x DDR IB Switch Module for c-Class BladeSystems Overview

... any internally managed rack-mount IB switch. OpenSM is not supported in HP Cluster Platform solutions. With the InfiniBand technology, the ... See the Servers and Workstations Overview and the 4x DDR IB Switch Module Installation Instructions for information on how to install the 4x DDR IB switch module in ...

... Bracket Installation Guide.

5.1.1 Subnet Manager
A subnet manager is required to establish the InfiniBand fabric and provide InfiniBand fabric services. The Grid switch product family is supported to provide subnet manager services.
... a groove in the chassis. If the guide pin is not perfectly aligned with this groove, you might experience difficulty in locking the ejector handles in place. Should this occur, manipulate the ejector handle gently until it locks into place. • Insert screws manually at first to ensure that the screw ...

Figure 6-1 Board Ejectors

Figure 6-1 shows the top ejector latches on the fabric boards, and includes the following information: 1. Security screws on each board complete the board seating and lock the board in place. 2. The latch release button, which also electronically signals a board ...

... operating normally. Not illuminated - There is a problem with the power supply to the board. Illuminated - ... management use. Various diagnostic procedures will instruct you to check its status. Note ... button. 2. Carefully seat the board into the side guide rails. 3. Slowly slide the board into the chassis ...

... problem in the power supply to the board. Info - This is a general-purpose LED for system management use. Various diagnostic procedures will instruct ... the ejectors are unlocked. 2. Carefully seat the board into the slot's guide rails. 3. Slide the board slowly into the chassis until the ejectors ...
... two RS232 ports:
• CLI port - Used to make a local connection to the management interface.
• I²C port - Used for debugging purposes, by trained service personnel only.

Table 6-3 lists and describes the sMB board LEDs.

Table 6-3 sMB Board LEDs
Hot Swap - Illuminated: It is safe ...

... by pressing the red button. 2. Carefully seat the board into the side guide rails, ensuring that it is square and level with the slot. 3. Slowly ... the ISR 9288 chassis. For maximum fault tolerance in the event of power problems, ensure that you connect the PSU's input cord to the appropriate circuit ...

... module LEDs.

Table 6-4 Power Supply Unit LEDs
DC ON (Green) - Illuminated: The DC power source is present. Off: There is a problem with the DC supply. Otherwise, power might not be applied to the chassis.
AC ON (Green) - Illuminated: The AC power source is present. Off: There is ...
• Four small form-factor SFP GBIC GbE ports, providing a fast link to an IP network for devices on the InfiniBand fabric.
• An RJ-45 management port.

Figure 6-6 shows the front panel of the IPR blade.

Figure 6-6 IPR Blade Front Panel

Figure 6-6 shows the following features of the IPR ...

... lever in the outward (module removal) orientation. When inserting or extracting a module, do not hold the weight of the module by the lever alone; support its weight by holding the front bezel.

Figure 6-8 Router Blade Lock Lever

Use the following procedure to install either type of router blade in ...

... to insert an sRBD into the chassis: 1. Verify that the ejectors are unlocked by pressing the red button. 2. Carefully seat the sRBD into the side guide rails, ensuring that it is level and square with the slot. 3. Slowly slide the sRBD into the chassis until the ejectors begin to engage on ...
... is installed. • Off: No fabric board is installed.
Temp (Amber) - These LEDs indicate an over-temperature fault on the chassis. This usually indicates a problem with one of the fan units and is signalled by the following states: • Illuminated: An over-temperature fault is detected. Check the cooling ...

Table 6-7 sCTRL Board LEDs (continued)
CM Active 2 (Green) - If illuminated, the chassis manager is running on management card #2.
Eth 1 and Eth 2 - Each Ethernet port (which provides access to the management interface) has two LEDs, which provide the following status ...
Warning! Never remove the sFU-4 and sFU-8 at the same time; at least one fan module must be installed at all times. Use the following procedure to replace a defective fan module by hot-swap: 1. Unpack and prepare all components on a convenient work surface adjacent to the chassis before you begin ...

Table 6-8 sFU-8 Fan Module LEDs
Temp (Amber) - This LED indicates an over-temperature fault on the chassis. When present, this usually indicates a problem with one of the fan modules. • Illuminated: An over-temperature fault is detected. • Off: Temperature levels are normal.
sFU-4 (Amber) - ...
7 Cabling the Interconnect

Each model of HP Cluster Platform has a set of specific port-to-port cabling tables that describe the origin and destination of every link between the interconnect and the cluster nodes. When the cluster is integrated at the factory, the cables are installed and tested.
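When a cluster is recabled in the field, a simple consistency check against the port-to-port cabling table can catch duplicate or conflicting entries before any cables are moved. The sketch below is illustrative only: it assumes a hypothetical CSV export with switch_port, node, and node_port columns, which is not the format of the published HP cabling tables.

```python
# Hypothetical cabling-table check: flag switch ports or node ports that are
# listed more than once. The CSV layout (switch_port,node,node_port) is an
# assumption for illustration, not the layout of the HP cabling tables.
import csv
from collections import Counter

def check_cabling(path):
    with open(path, newline="") as handle:
        rows = list(csv.DictReader(handle))

    switch_ports = Counter(row["switch_port"] for row in rows)
    node_ends = Counter((row["node"], row["node_port"]) for row in rows)

    for port, count in switch_ports.items():
        if count > 1:
            print(f"switch port {port} is listed {count} times")
    for (node, port), count in node_ends.items():
        if count > 1:
            print(f"{node} port {port} is listed {count} times")

if __name__ == "__main__":
    check_cabling("cabling_table.csv")
```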
Cabling Tables
The following documents provide cabling tables for supported InfiniBand configurations:
• InfiniBand 1U Server 2:1 Reduced Bandwidth ...
... has locked in place. 4. Route the cable to the nearest cabling guide hook, in accordance with the cable routing procedure for your model of Cluster ...

... is specific to CPE models. Follow the cabling instructions in the Cluster Platform Express Installation Guide. The cabling tables are specific to: • ...
... hook-and-loop straps are fastened to the side of the rack to support the interconnect-to-node cable bundles.

7.1.5 Cable Routing Procedure for the ISR ...
... the rack. When all ports are used, 48 cables run down each side of the rack.

7.1.6 Connecting the ISR 9096 Router Blades
This section provides instructions on cable connections to the IPR and FCR router modules that are installed in the ISR 9096 sRBD.

7.1.6.1 Connecting to IPR GbE Ports
Use the ...

... All cables are routed through the comb brackets and down the sides of the cabinet.

7.2 Connecting the ISR 9288 Router Blades
This section provides instructions on cable connections to the IPR and FCR router modules that are installed in the ISR 9288 sRBD.

7.2.1 Connecting to IPR GbE Ports
Use the ...
... to run a CLI (via Telnet) or the GUI (via a Web browser or Java Web client).
• A diagnostic I²C interface is provided for the use of technical support personnel only.
If you choose not to use the built-in software interfaces, you can use a third-party SNMP manager. If the configuration calls for ...
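If a third-party SNMP manager is used, a quick query of the standard MIB-II sysDescr object confirms that the interconnect's SNMP agent is reachable before any further configuration. The sketch below is a minimal example using the third-party pysnmp package; the management IP address and the public community string are placeholders for site-specific values.

```python
# Minimal SNMP reachability check (requires the third-party "pysnmp" package).
# The address and community string below are placeholders, not defaults
# documented for the interconnect.
from pysnmp.hlapi import (SnmpEngine, CommunityData, UdpTransportTarget,
                          ContextData, ObjectType, ObjectIdentity, getCmd)

def snmp_sysdescr(host, community="public"):
    error_indication, error_status, _, var_binds = next(getCmd(
        SnmpEngine(),
        CommunityData(community, mpModel=1),                 # SNMPv2c
        UdpTransportTarget((host, 161)),
        ContextData(),
        ObjectType(ObjectIdentity("1.3.6.1.2.1.1.1.0")),     # sysDescr.0
    ))
    if error_indication or error_status:
        raise RuntimeError(str(error_indication or error_status))
    return var_binds[0][1].prettyPrint()

if __name__ == "__main__":
    print(snmp_sysdescr("192.0.2.10"))
```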
To use the serial interface on the ISR 9024, ISR 9096, or ISR 9288, the PC that you use for the connection must support VT100 terminal emulation. The PC must be equipped with terminal emulation software such as HyperTerminal or minicom. Note: The recommended serial terminal application for Windows ...
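The same serial console can also be reached programmatically, which is occasionally convenient for capturing boot output. The following sketch uses the third-party pyserial package; the device path and the 38400 baud, 8-N-1 settings are assumptions, so confirm the actual console settings in the switch documentation before use.

```python
# Serial console probe using the third-party "pyserial" package
# (pip install pyserial). Device path and 38400/8-N-1 are assumed values;
# check the switch documentation for the real console settings.
import serial

def open_console(device="/dev/ttyS0", baud=38400):
    return serial.Serial(
        port=device,
        baudrate=baud,
        bytesize=serial.EIGHTBITS,
        parity=serial.PARITY_NONE,
        stopbits=serial.STOPBITS_ONE,
        timeout=2,
    )

if __name__ == "__main__":
    with open_console() as console:
        console.write(b"\r\n")  # wake the CLI so it prints a prompt
        print(console.read(256).decode("ascii", errors="replace"))
```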
Specific combinations of HCA card and card firmware are required to support operating environments such as HP XC. Consult the InfiniBand Firmware Matrix and Qualified Solutions Tables in the InfiniBand Fabric Management and Diagnostic Guide or on the web at http://docs.hp.com/en/highperfcomp.html.

Figure 8-1 HCA-400 PCI Card

The following features of the HCA-400 PCI card are identified by the numbered callouts in Figure 8-1: 1. An InfiniBand port, one of two. 2. The indicator LEDs for physical (green) and logical (amber) link status. 3. The card's metal bracket. Always handle the ...

... You cannot use these installation instructions with alternate server models, or with alternate configurations of supported servers. The HCA is installed ... Diagnostic Guide and the HP Cluster Platform InfiniBand Fabric Management and Diagnostic Guide for more information on troubleshooting procedures.

... management arm found on many server installations is never used in HP Cluster Platform, because the arm cannot support the heavy copper link cables. A separate installation guide is provided with each cable management component, and shipped with the cluster. You must locate, read, and understand ...
... the card, following the cabling instructions for your model of cluster platform ... detected. • Flashing: There is a problem with the logical link. After installing the ... Management and Diagnostic Guide. Next, boot the ... The Topspin/Mellanox PCI-X HCA supports InfiniBand protocols including IPoIB, SDP, ...

... similar to that of the Voltaire HCAs described previously in this chapter.

8.4 Topspin/Mellanox PCI-Express HCA
The Topspin/Mellanox PCI-Express HCA supports InfiniBand protocols including IPoIB, SDP, SRP, uDAPL, and MPI. The Topspin/Mellanox PCI-Express HCA is a single data rate (SDR) card with two ...

... the Topspin cards are similar to that of the Voltaire HCAs described previously in this chapter.

8.5 Mellanox PCI-X HCA (SDR)
The Mellanox PCI-X HCA supports InfiniBand protocols. It is a single data rate (SDR) card with two 4x InfiniBand 10 Gb/s ports and 128 MB of memory. Figure 8-5 shows the Mellanox ...

... • 4x InfiniBand 10 Gb/s x 2 ports • 128 MB memory • PCI-Express x8 edge connector • I/O panel LEDs • I²C-compatible connector (for debug) • Supports InfiniBand protocols
Installation of the Mellanox card is similar to that of the Voltaire HCAs described previously in this chapter.

8.6.1 Mellanox PCI-...
... - Yellow. LED name: Physical Link - Green; Data Activity - Yellow.

8.7 Mellanox Memory Free PCI-Express HCA (SDR)
The Mellanox Memory Free PCI-Express HCA supports InfiniBand protocols. It is a single data rate (SDR) card with one 4x InfiniBand 10 Gb/s port. Figure 8-7 shows the Mellanox Memory Free PCI-...

... for Management and Subnet Management Agent (SMA) • Integrated Physical Layer SerDes • Integrated GSA (General Service Agents) • Low-Latency Communication Technology • Flexible Completion Mechanism Support (Completion Queue, Event, or Polled operation) • I/O Panel LEDs

8.7.1 LEDs
The board has two ...

... for Management and Subnet Management Agent (SMA) • Integrated Physical Layer SerDes • Integrated GSA (General Service Agents) • Low-Latency Communication Technology • Flexible Completion Mechanism Support (Completion Queue, Event, or Polled operation) • I/O Panel LEDs

8.8.1 LEDs
The board has two ...

Features of the Mellanox PCI-Express DDR HCA include: • Two 4X InfiniBand copper ports for connecting InfiniBand traffic (4X IB connectors) • 4X port supports 20 Gb/s • Third-generation HCA core • On-board DDR SDRAM memory (memory configurations vary) • PCI-Express x8 edge connector • I/O panel LEDs ...
... Maintenance and Service Guide for more information on installing the 4x DDR IB mezzanine HCA in server blades.

8.9.3 Operating System and Software Requirements
Refer to the mezzanine HCA QuickSpecs and the InfiniBand Software and Firmware Compatibility Matrix for supported operating environment ...
9 Postinstallation Troubleshooting and Diagnostics

... Guide and the HP Cluster Platform InfiniBand Fabric Management and Diagnostic Guide ... failed port (Section 9.5).

9.1 Postinstallation Troubleshooting
Startup problems are usually isolated to a single ... that you can make), contact a customer service representative. 2. Check the power supply LEDs ...
9.3 Debugging a Fabric Failure by Using Performance Management (PM)
The interconnect's fabric manager enables you to debug problems with fabric connections by using the performance management (PM) features. The following two PM functions support fabric debugging:
• Port counters monitoring and report. The PM generates a periodic port ...
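The PM counters are read on the switch side, but a similar check can be made from an attached host, which is often the quickest way to confirm whether errors are accumulating on a specific HCA port. The sketch below is not part of the Voltaire PM interface; it assumes a Linux node with the kernel/OFED InfiniBand drivers, which expose per-port counters under /sys/class/infiniband, and the device name mthca0 is only an example.

```python
# Host-side port-counter dump from Linux sysfs (kernel/OFED InfiniBand stack).
# The HCA name "mthca0" and port number are examples; adjust for your node.
from pathlib import Path

def read_port_counters(hca="mthca0", port=1):
    base = Path(f"/sys/class/infiniband/{hca}/ports/{port}/counters")
    return {f.name: int(f.read_text().strip()) for f in base.iterdir() if f.is_file()}

if __name__ == "__main__":
    for name, value in sorted(read_port_counters().items()):
        print(f"{name:30s} {value}")
```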
... future polls or initialization events, the port is not discovered unless the problem is corrected and you clear the entry in the failed port table. ... the CLI as described in the Voltaire InfiniBand Fabric Management and Diagnostic Guide, and use the enable command to enter privileged mode, specifying the ...
... throughput of up to 480 Gb/s.
Interconnect options: copper, optical.
Indicators: physical and logical status.
Management: Externally or internally managed switch, supported by modular architecture.
Fabric management: PowerPC 440 core CPU, operating at 400 MHz. EIA/TIA-232 console, I²C, and RJ-45 ...

... Externally managed: over 400,000 hours.
Environmental, Operating: Humidity 15% to 80%, non-condensing. Altitude 0 to 9843 ft (3000 m).
Environmental, Storage: Temperature -13°F to 185°F (-25°C to +85°C). Humidity 5% to 90%, non-condensing. Altitude 0 ft to 15,000 ft ...
... Single Data Rate (SDR, 10 Gb/s) ports, or 24 4X Dual Data Rate (DDR, 20 or 10 Gb/s auto-negotiate) ports.
Interconnect options: copper, with optional support for optical adaptors on the top row of 12 ports.
Indicators: physical and logical status. All ports are located on the rear panel.
Remote InfiniBand (in-band) ...

Power Rating
Power consumption*: ISR 9024S, 51 W max.; ISR 9024S-M, 63 W max.; ISR 9024D, 58 W max.; ISR 9024D-M, 69 W max. BTU/hour = Watts x 3.413.
Power Factor: 90 Vac/60 Hz/Max Load = 0.998; 120 Vac/60 Hz/Max Load = 0.997; 230 Vac/60 Hz/Max Load = 0.973.
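As a worked example of the conversion given above (BTU/hour = Watts x 3.413), the maximum power figures in this table translate into the following heat loads; for instance, the ISR 9024D-M at 69 W dissipates roughly 235 BTU/hour.

```python
# Heat load from the maximum power figures above: BTU/hour = Watts x 3.413.
MAX_POWER_W = {"ISR 9024S": 51, "ISR 9024S-M": 63, "ISR 9024D": 58, "ISR 9024D-M": 69}

for model, watts in MAX_POWER_W.items():
    print(f"{model:12s} {watts:3d} W  ->  {watts * 3.413:6.1f} BTU/hour")
```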
EMC: EN55022:98/EN55024:98/EN61000-3-2:00/EN61000-3-3:95. This device complies with Part 15 of the FCC rules. Operation is subject to the following two conditions: (1) this device may not cause harmful interference, and (2) this device must accept any interference received ...
RoHS: ...
... :
  - Fabric Management
  - Chassis and Device Management
  - Network Traffic Management
  - Storage Resource Management
• InfiniBand managers
• InfiniBand agents
• Supported management protocols:
  - SNMPv2c
  - Telnet
  - SSH
  - HTTP
  - FTP
  - IBTA
  - SMI/GSI
• Connectors:
  - RS232 DB9-M
  - I²C DB9-F
• Indicators ...

Voltaire FCR (FC Router):
  - FCR InfiniBand form factor for integration with the ISR 9096, providing shared storage connectivity to servers, with support for high availability, multi-pathing, security, and storage volume management capabilities.
  - 4 interfaces, up to 400 MBps per channel full duplex, 1 or ...
... slots for a combination of line boards/router blade drawers in the rear. Up to four hot-swappable fabric boards (sFB-12), which support the switching fat-tree topology and provide the following features:
• Hot-swappable operation with no traffic disruptions.
• 2 Kbit I²C-controlled EEPROMs to store ...

Router Blades: You can install up to three of the following InfiniBand form factor router blades in each router blade drawer (sRBD):
• FCR InfiniBand form factor for integration with ISR 9288 and OEM system:
  - Four ...

Weight: 110 to 187 lbs (50 to 85 kg), depending on configuration.
Environmental:
• Ambient Operating Temperature: 32° to 113°F (0° to 45°C).
• Operating Humidity: 15 to 80%, non-condensing.
• Operating Altitude: 0 to 9843 ft (3000 m).
• Storage Temperature: -13° to 158°F (-25° to +70°C) ...
E 4x DDR IB Switch Module Specifications

The technical specifications of the 4x DDR IB Switch Module are as follows:
Compliance: IBTA version 1.1 compatible; RoHS-R5.
General Specifications: Communications Processor MT25204A0-FCC-D, InfiniHost III Lx. On-...
..., PMA, CM). InfiniBand Interface: Two auto-sensing 1X/4X InfiniBand ports, 8-pair MicroGigaCN copper. Memory: 128 MB, optional 256 MB. Protocol Software: MPI support: MPICH (open source), MPI-Pro, Scali-MPI, and HP-MPI (available separately); uDAPL and kDAPL; IP and TCP: IETF IPoIB, and IBTA ...

40 Gb/s InfiniBand bandwidth full duplex (2 ports x 10 Gb/s/port x 2 for full duplex). Processor: Mellanox MT23108 InfiniHost. Line rate bandwidth: 4X (10 Gb/s) per port (40 Gb/s aggregate full duplex). Observed bandwidth: ...

40 Gb/s InfiniBand bandwidth full duplex (2 ports x 10 Gb/s/port x 2 for full duplex). Processor: Mellanox MT25208 InfiniHost III. Line rate bandwidth: 4X (10 Gb/s) per port (40 Gb/s aggregate full duplex). Observed bandwidth: ...

... 60068-2-64, 29, 32.

F.5 Mellanox PCI-Express HCA Specifications
The specifications listed below cover the Mellanox PCI-Express SDR HCA. ... 4X 20 Gb/s ... InfiniBand: ... QoS: ... RDMA ...

F.6 Mellanox Memory Free PCI-Express HCA (SDR) Specifications
The specifications listed below cover the Mellanox Mem-Free PCI-Express HCA (SDR). ... 10 Gb/s ... Size: 54 mm x 102 mm (2.13 in. x 4 in.). Air Flow: 200 LFM @ 55°C. Connector: Amphenol InfiniBand ...

... 4X 20 Gb/s ... Size: 2.5 in. x 6.6 in. Air Flow: 200 LFM @ 55°C. Connector: InfiniBand (copper). Voltage: 12 V, 3.3 V. Maximum Power: 10 W. Temperature: 0° to 55° Celsius. InfiniBand: Auto-Negotiation (20 Gb/s, 5 Gb/s) or (10 Gb/s, 2.5 Gb/s). QoS: 8 ...

F.9 4x DDR IB Mezzanine HCA Specifications
Non-operating Temperature: -40 to 70°C. Non-operating Humidity (non-condensing): 5% to 95%. Power requirement: 1.35 A at 3.3 V max (4.5 W). Emissions Classifications: FCC CFR 47 Part 15 Class A; CISPR 22 Class A; ICES-003 Class A; VCCI Class A; ACA CISPR 22 ...
Index

Symbols: 12x ports, ISR 9024, 30; 24-port 12x, 29; 24-port 4x, 29; 4x DDR IB mezzanine HCA, 105; specifications, 130; 4x DDR IB switch module, 69; specifications, 123; 4X port switch chip, 29; 4X port type, 24, 25; 4x ports, ISR 9024, 30.
A: administration, 26 (see also management); administrative ...
... externally managed, ISR 9024 S/D, 41; ISR 9024, 30; ISR 9024 S/D, 40; front view, ISR 9288, 57; FRU, ISR 9288, 71; FTP server, 26.
G: gateway, 21; GUID, 108, 109; guidelines for cabling, 85.
H: handling boards, ISR 9288, 71; hazard warning, 13; HCA, 20, 67, 94, 109; installing, 94; LEDs, 94, 97; overview ...
..., 101; HCA Mellanox PCI-Express SDR, 100; HCA Mellanox PCI-X (SDR), 99; HCA port GUID, 109; HCA Topspin/Mellanox PCI-Express, 98; HCA Topspin/Mellanox PCI-X, 97; head shell, 85.
ISR 9024: specifications, 111; status LEDs, 31, 32; troubleshooting, 36, 47; unpacking, 33; ISR 9024 management, 26 (see also management).
ISR 9024 S/D: cooling, 45; DB-9 adapter, 40; externally managed, 39; front panel, externally managed, 41; installing PSU, 46; internally managed, 39; master mode, 40 (see also internally managed); overview, 39; rack kit, 44; slave mode, 40 (see also externally managed); specifications, 113; unpacking, 43.
ISR 9xxxx-series interconnect, 21; I²C, 30; port on ISR 9024 S/D, 41; port on ISR 9024, 32.
J: Java Web client, 26.
L: laser radiation, 15; latency, 19, 24, 25, 29, 49, 50; LED, 37, 47; FCR, 79; HCA, 94; hot-swap, 72; InfiniBand Port, 37, 48; IPR, 78; PS, 37, 47; PSU, 77; sCTRL, 81; sFB-12, 72; sFU-8, 83; sLB-24, 74; sMB ...
..., 108; debugging malfunction, 108; detecting failed, 108; GUID, 108; NodeInfo request, 108; status values, 109; port GUID, 109; Port LED, 37, 48; port throughput, 22, 23, 24, 25; port type 4X, 24, 25; post-installation, ISR 9096, 67; ISR 9288, 67; postinstallation troubleshooting, 107; power inlet, IEC, ISR 9024, 33.
... installing, 79; router blade drawer, 24, 26, 50 (see also sRBD).
S: safety: battery, 16; burn injuries, 16; caution, 14; component covers, 16; electrical, 15; ergonomics, 16; important, 14; laser radiation, 15; noise levels, 16; power protection device, 16; rack stability, 15; removing components, 15; warning, 14.
... Topspin/Mellanox PCI-X HCA specifications, 126; torque setting, 35, 45; transport layer, 20; troubleshooting: clear bad ports, 109; fabric failure, 108; failed ports, 108; HCA port GUID, 109; ISR 9024, 36, 47; NodeInfo, 108; port GUID, 108; port status values, 109; ports, 108.
U: U-location, 35, 44; unpacking, ISR ...