HP Cluster Platform Server and Workstation Overview
HP Part Number: A-CPSOV-1H
Published: March 2009
The information contained herein is subject to change without notice. The only warranties for HP products and services are set forth in the express warranty statements accompanying such products and services. Nothing herein should be construed as constituting an additional warranty. HP shall not be liable for technical or editorial errors or omissions contained herein.
Contents

About This Manual...14
Audience...14
Organization...15
HP Cluster Platform Documentation...15
Bracket Installation Guides...16
1.2 HP Integrity rx2600...30
1.2.1 Network Port Assignments...32
1.2.2 Supported Memory Configurations...33
1.2.3 Supported Storage Configurations...34
1.2.4 Cable Management...34

2.3.1 PCI Slot Assignments...63
2.4 HP ProLiant DL360 G3, G4, and G4p...63
2.4.1 PCI Slot Assignments...68
2.4.2 Embedded Technologies...68
2.4.3 High-Availability Features...69
2.4.4 Removing a Server from the Rack...70
2.4.4.1 Accessing Internal Components...71
2.4.5 Replacing a PCI Card...72

4.9.3 HP ProLiant BL460c Internal View...178
4.9.4 HP ProLiant BL460c System Board...178
4.9.5 Memory Options...179
4.9.6 Mezzanine HCA Card...180
4.9.7 Supported Storage...180
4.9.8 Removing the HP ProLiant BL460c from the c-Class Enclosure...181
4.10 HP ProLiant BL480c Server Blade Overview...183

4.11.3 HP ProLiant BL465c Internal View...193
4.11.4 HP ProLiant BL465c System Board...194
4.11.5 Memory Options...195
4.11.6 Mezzanine HCA Card...195
4.11.7 Supported Storage...195
4.11.8 Removing the HP ProLiant BL465c from the c-Class Enclosure...196
4.12 HP ProLiant BL465c G5 Server Blade...196

Graphics Card (Optional)...229
5.3.7 System Interconnect Cards...230
5.3.8 Memory Configurations...230
5.3.9 PCI Card Installation and Removal Instructions...231
5.3.9.1 PCI Card Support...231
5.3.9.2 Removing and Installing PCI Express Cards...232
5.3.9.3 Removing and Installing PCI or PCI-X Cards...233
List of Figures

1-1 HP Integrity rx1620 Front Panel...25
1-2 HP Integrity rx1620 Rear Panel...26
1-3 Releasing the PCI I/O Riser...29
1-4 Removing the PCI I/O Riser Assembly...29
1-5 Removing the PCI Slot Cover...30
1-6 Sliding the Card into the PCI Riser Connector...30
1-7 HP Integrity rx2600 Front Panel

2-23 ProLiant DL360 G5 Rear Panel LEDs...77
2-24 System Insight Display (Actual)...78
2-25 System Insight Display Map...79
2-26 PCI Riser Board Assembly...82
2-27 Inserting a New PCI Adapter Into the PCI Riser Board...82
2-28 HP ProLiant DL380 G3 Front Panel...84
2-29 HP ProLiant DL380 G4 Front Panel

3-35 Removing the HP ProLiant DL385 PCI Riser Cage...128
3-36 Unlocking the HP ProLiant DL385 PCI Retaining Clip...128
3-37 Removing the HP ProLiant DL385 Expansion Board...129
3-38 HP ProLiant DL385 G2 Front Panel...130
3-39 HP ProLiant DL385 G2 Rear Panel...131
3-40 Removing the ProLiant DL385 G2 from the Rack

4-31 HP ProLiant BL465c Front Panel LEDs...192
4-32 HP ProLiant BL465c Internal View...193
4-33 HP ProLiant BL465c System Board Components...194
4-34 HP ProLiant BL465c G5 Front View...196
4-35 HP ProLiant BL685c Front View...197
4-36 HP ProLiant BL685c Front Panel LEDs...198
List of Tables

1 HP Cluster Platform Supported Servers...14
1-1 HP Integrity rx1620 Front Panel...25
1-2 HP Integrity rx1620 Rear Panel Features...26
1-3 HP Integrity rx2600 Ports Used in Clusters...32

HP xw9300 Workstation PCI Slots...226
5-8 Narrow (1-Slot) Graphics Cards...227
5-9 Wide (2-Slot) Graphics Cards...227
5-10 Supported Interconnect Cards...230
5-11 Supported Memory Configurations...230
5-12 HP Workstation xw9400 Features...235
5-13 HP xw9400 Workstation PCI Slots...237
5-14 Graphics Cards
to the servers, which have their own documentation.

Audience

This manual is intended for experienced hardware system administrators of large-scale computer systems, and for HP Global Service representatives. This guide references skilled tasks and describes important safety considerations and is not

HP Cluster Platform architecture and concepts.

Organization

This manual is organized as follows (Chapter / Description): Chapter 1 ... Describes the HP ProLiant server blades supported in HP Cluster Platform solutions.

Overview
• Cluster Platform Site Preparation Guide
• Cluster Platform Core Components
HP ProLiant DL140 G3 Server Maintenance and Service Guide
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00795598/c00795598.pdf
HP ProLiant DL140 G2 Server Maintenance and Service Guide
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00368751/c00368751.pdf

Service Guide
http://h18004.www1.hp.com/products/servers/platforms/retired.html
ProLiant DL360 Generation 4 Server Reference and Troubleshooting Guide
DL360 Generation 5 Server Maintenance and Service Guide
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00710376/c00710376.pdf

ProLiant DL380 Generation 4 Server Reference and Troubleshooting Guide
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00300504/c00300504.pdf
ProLiant DL380 Generation 4 SCSI Cabling Matrix
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00218257/c00218257.pdf

(includes the HP ProLiant BL460c G5)
c00700767.pdf
HP ProLiant BL460c Server Blade Maintenance and Service Guide (includes the HP ProLiant BL460c G5)
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00718709/c00718709.pdf
HP ProLiant BL480c Server Blade User Guide
http://h20000.www2.hp.com

Service Guides
/bc/docs/support/SupportManual/c00714237/c00714237.pdf
ProLiant servers
support/SupportManual/c00913926/c00913926.pdf
HP ProLiant Servers Troubleshooting Guide
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00300504/c00300504.pdf
HP BIOS Serial Console User Guide
User Guide
http://h20000.www2.hp.com/bc/docs/support/SupportManual/c00705292/c00705292.pdf
For more information about HP
-9000B.pdf
HP Integrity rx2660 Site Prep Guide
http://docs.fc.hp.com/en/AB419-9004B/AB419-9004B.pdf
HP Integrity rx2660 User Service Guide
http://docs.fc.hp.com/en/AB419-

SupportManual/c00211837/c00211837.pdf
HP xw8200
HP Workstation xw8200 Service and Technical Reference Guide
http://h200002.www2.hp.com/bc/docs/support/SupportManual/c00213033/c00213033.pdf
HP xw8400
HP Workstation xw8400 Service and Technical Reference Guide
http://h20000.www2.hp.com/bc/docs
Important Safety Information

This manual provides only an overview of the procedures for removing servers from a cluster rack and for installing PCI cards. Before performing such procedures, read the safety information provided in the following documents:
• HP Cluster Platform Site Preparation Guide - The servers

CAUTION: To avoid the risk of damage to the system or expansion boards, remove all power cords before installing or removing expansion boards. When the Power On/Off switch is in the Off position, auxiliary power is still connected to the PCI expansion slot and can damage the card.
1 Itanium Processor Servers

The following Itanium processor servers in the HP Integrity series are supported in HP Cluster Platform solutions. This chapter presents the following information:
• An overview of the HP Integrity rx1620 (Section 1.1)
• An overview of the HP Integrity

Table 1-1 HP Integrity rx1620 Front Panel (continued)
Name: Diagnostic LEDs, LAN LED, System LED, Power On/Off LED, LVD HDD 1 and LVD HDD 2, Power On/Off button
Function: The four diagnostic LEDs operate in conjunction with the system LED to provide diagnostic information about the system. The LAN LED

with the server. For the HP Integrity rx1620 these rules are:
1. The system has eight memory slots for installing DDR SDRAM memory modules.
2. The system supports a maximum of 16 GB of memory and a minimum of 512 MB.
3. Memory modules can be 256 MB, 512 MB, 1 GB, or 2 GB, and they must be
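The short sketch below (not taken from the manual) shows how those rules combine into a single configuration check. The pairing requirement is an assumption, since rule 3 is cut off above; DDR modules in servers of this class are normally installed in matched pairs.

```python
# Illustrative check of an HP Integrity rx1620 memory configuration.
# Assumption: DIMMs are installed in matched pairs (rule 3 above is truncated).
VALID_SIZES_MB = {256, 512, 1024, 2048}
NUM_SLOTS = 8

def check_rx1620_memory(modules_mb):
    """modules_mb: list of installed DIMM sizes in MB, in installation order."""
    if len(modules_mb) > NUM_SLOTS:
        return "too many modules: the system has only eight slots"
    if any(size not in VALID_SIZES_MB for size in modules_mb):
        return "unsupported module size"
    total_mb = sum(modules_mb)
    if not 512 <= total_mb <= 16 * 1024:
        return "total memory must be between 512 MB and 16 GB"
    pairs = list(zip(modules_mb[::2], modules_mb[1::2]))
    if len(modules_mb) % 2 or any(a != b for a, b in pairs):
        return "modules should be installed in matched pairs (assumed rule)"
    return "configuration looks valid"

print(check_rx1620_memory([1024, 1024, 2048, 2048]))  # 6 GB in two matched pairs
```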
while it is running, but a manual software procedure is necessary to complete the

has sufficient throughput in its connection to memory to support the full PCI-X 133 bandwidth (approximately 1 GB/s)

Cluster Platform Tab Mount Cable Management Installation Guide.

1.1.5 Installing or Removing a PCI Card
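As a rough sanity check on the "approximately 1 GB/s" figure quoted above (this arithmetic is not from the manual), a 64-bit PCI-X bus clocked at 133 MHz transfers 8 bytes per clock:

```python
# Peak PCI-X 133 throughput: 8-byte (64-bit) transfers at 133.33 MHz.
bus_width_bytes = 8
clock_hz = 133.33e6
peak_gb_per_s = bus_width_bytes * clock_hz / 1e9
print(f"{peak_gb_per_s:.2f} GB/s")  # ~1.07 GB/s, i.e. roughly 1 GB/s peak
```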
1. Remove the server cover.
2. Release the PCI I/O riser by turning the jackscrew, as shown in Figure 1-3. This action frees the PCI I/O riser from the system board.
Figure 1-3 Releasing the PCI I/O Riser
3. Remove the PCI I/O riser from the chassis (Figure 1-4).
Figure 1-4 Removing the PCI I/O Riser Assembly

are required by the PCI card.
9. Replace the server cover.
More detailed information about this procedure is provided in the HP Integrity rx1620 Installation Guide.

1.2 HP Integrity rx2600

The dual-processor 2U HP Integrity rx2600 can be used as an application node, control node, or utility node. HP
Figure 1-7 HP Integrity rx2600 Front Panel

The following list describes the callouts shown in Figure 1-7:
1. SCSI drives
2. Locator LED
3. Diagnostic LEDs
4. Power switch
5. CD-ROM drive
Table 1-3 describes the ports on the rear of the HP Integrity rx2600.

Table 1-3 HP Integrity rx2600 Ports Used in Clusters (columns: Callout, Port Label, Node Role, Cluster Cabling Name and Description)
Callouts 1 through 7 correspond to the port labels PWR 1, MP 10/100, LAN Gb, VGA, LAN 10/100, USB, and PCI Slot 0. For all node roles, power input 1 (PWR 1) is used for the single power connection. MP Console, CES - The management processor
the cluster's cabling tables for more information.

1.2.2 Supported Memory Configurations

HP Cluster Platform does not enforce any
• The system has 12 memory slots for installing DDR SDRAM memory modules.
• The system supports a maximum of 12 GB of memory and a minimum of 512 MB.
• Memory modules

6th / 4th - Memory Cell 1 - DIMM 5B / DIMM 3B

The HP Integrity rx2600 supports the chip spare feature, enabling the server's error handling to bypass a high-availability system while it is running, but a manual software procedure is necessary to complete the task.

the server's cover and take out the PCI card cage. Removal instructions are provided on the card cage label. Detailed information is provided in the HP Integrity rx2600 Server and HP Workstation zx6000 Operations and Maintenance Guide. When performing these tasks, heed the warnings and cautions listed
, but provides additional features and specifications described in Table 1-5.

Table 1-5 HP Integrity rx2620 Features (Component / Specification)
Components: Processor board; Processors supported; Main memory
Specifications: Up to 2 processors; Chipset: HP zx1; System bus bandwidth: 6.4 GB/s; Type: Intel Itanium 2 processor; Speeds

Table 1-5 HP Integrity rx2620 Features (continued)
Components: Internal storage devices; Maximum internal storage; Expansion slots; Core I/O and management processor interconnect
Specifications: Internal hard disk drive bays: 3; Disk drive sizes: 36 GB, 73 GB, and 146 GB drives available; Disk drive

Figure 1-15 HP Integrity rx2620 Rear Panel. The panel carries the warning "Unplug all power cords from system before servicing" and the labels PWR 1, PWR 2, Management Card LAN 10/100, VGA, SCSI LVD/SE (automatic internal SCSI termination), LAN Gb A, LAN Gb B, MP, RESET, CONSOLE / REMOTE / UPS, and TOC.
the HP Integrity rx2620 Single-Core to Dual-Core Processor Upgrade Guide:
http://docs.fc.hp.com/en/AD117-9009A/AD117-9009A.pdf

cache dual-core; 1.6 GHz / 18 MB cache dual-core
Eight DIMM slots located on the system board; supported DDR2 DIMM sizes: 512 MB, 1 GB, 2 GB, and 4 GB; minimum memory (2 x 512

Table 1-8 HP Integrity rx2660 Features (continued)
Components: LAN system I/O; Management core I/O; Optical device; Power supply
Specifications: There is also a pair of internal slots dedicated to optional RAID 5/PCI-E for the SAS drives. Two GigE LAN ports; one serial port, and one 10 Base-T/100 Base-T
Figure 1-17 HP Integrity rx2660 Rear Panel

The following table describes the callouts shown in Figure 1-17 (Item / Description / Comments):
1 - PCI-X/PCI-E slot 1 - PCI Express Interconnect
2 - PCI-X/PCI-E slot 2
3 - PCI-X/PCI-E slot 3

(Item / Description / Comments)
22 - System LAN port 2 - Connect to Administrative Network Switch (AES1)
23 - LAN link speed LED (LAN port 2)

1.4.1 PCI Slot Assignments

The HP Integrity rx2660 has three PCI slots. Table 1-9 and Table 1-10 summarize the slot assignments and the PCI cards.

Table 1-9 HP
system reconfiguration and may cause boot failure.
7. Loosen the two captive screws as shown by callout 1 in Figure 1-18. Check the removal instructions on the backplane assembly (see callout 3 in Figure 1-18).
Figure 1-18 Removing the I/O Backplane Assembly
a. Press the blue button to release

PCI-X I/O Backplane Assembly

The following list describes the callouts shown in Figure 1-19:
1. Gate latches
2. PCI-X backplane assembly
3. Guide tabs
4. Slotted T15 screws
5. PCI slot covers
6. PCI-X slot 3
7. PCI-X slot 2
8. PCI-X slot 1
9. PCI-X riser board
Figure 1-20

3. PCI-X slot 3
4. Mixed PCI-X/PCI-E riser board

1.4.3 Installing PCI Cards in the Integrity rx2660

Ensure that you install the proper drivers for the PCI-X/PCI-E card before installing the card. To install a PCI-X/PCI-E card, follow these steps:
1. Remove the I/O backplane assembly as described in
ports, but be aware that you should never make additional connections to a server that is configured for a specific role in the cluster.

1.5.1 Supported Memory Configurations

The HP Cluster Platform does not enforce any memory configuration rules on the control node and utility nodes other than

the processor(s) in the HP Integrity rx3600, see the HP Integrity rx3600 Server Upgrade Guide, Second Edition: http://docs.fc.hp.com/en/A6961-96018/A6961-96018.pdf

1.5.3 PCI-X Slot Assignment and Supported Options

PCI-X slots are numbered from 1 through 8, with the interconnect card in slot 8 on

The HP Integrity rx3600 provides eight PCI slots across five PCI-X buses and supports hot-pluggable PCI-X devices. However, you should not install the interconnect

PCI cards is provided in the HP Integrity rx3600 Maintenance Guide: http://docs.hp.com/en/rx3600_maint/rx3600_maint.pdf. When installing

Figure 1-24 Removing the Server's Top Panel
3. If you are inserting a card, remove the bulkhead screw that attaches the blank plate to the server chassis. Retain both the screw and the plate.
4. Insert the PCI adapter into slot 8, which is closest to the side of the server's case and furthest from
one cable bracket installed on the rear rack column. This component is documented in the HP Cluster Platform Integrity rx3600 Cable Bracket Installation Guide.

1.6 HP Integrity rx4640

The HP Integrity rx4640 is a 4U, quad-processor server that typically functions as a control node, utility node, or

server that is configured for a specific role in the cluster.

1.6.1 Supported Memory Configurations

The HP Cluster Platform does not enforce any memory configuration

Integrity rx4640, see the HP Integrity rx4640 Server Upgrade Guide, Second Edition: http://docs.fc.hp.com/en/A6961-96018/A6961-96018.pdf

The HP Integrity rx4640 provides eight PCI slots across five PCI-X buses and supports hot-pluggable PCI-X devices. However, you should not install the

installing PCI cards is provided in the HP Integrity rx4640 Maintenance Guide: http://docs.hp.com/en/rx4640_maint/rx4640_maint.pdf. When installing

Figure 1-28 Removing the Screws that Secure the Server in the Rack
2. Remove the server's top rear cover to access the PCI slots by unscrewing the thumbscrews at the rear of the server (see callout 1 in Figure 1-29).
Figure 1-29 Removing the Server's Top Panel
3. If you are inserting a card,

interconnect requires one cable bracket installed on the rear rack column. This component is documented in the HP Cluster Platform Integrity rx4640 Cable Bracket Installation Guide.
A standard full-height/full-length PCI Express slot and an additional low-profile, half-length PCI slot are provided for adapters. The DL140 G2 supports non hot-pluggable serial ATA (SATA) and SCSI hard disk drives. Table 2-1 lists the features of the HP ProLiant DL140 G2.
Table 2-1 HP ProLiant

Figure 2-1 HP ProLiant DL140 G2 Front Panel

The following table describes the callouts in Figure 2-1.
Table 2-2 HP ProLiant DL140 G2 Front Panel Features
1. Hard disk drive (HDD) bays
2. Optical media device bay
3. Unit identification (UID)

to 16 GB maximum system memory (2 GB in each of the eight DIMM slots). Observe the following rules when installing memory modules:
• Use only HP-supported PC2-3200 (400 MHz) registered ECC DIMMs in 512 MB, 1 GB, or 2 GB capacities.
• Install memory modules in pairs of the same size.
• Install memory

1. Remove the cover as follows:
a. Loosen the captive thumbscrew on the rear panel. This screw is identified by item 2 in Table 2-3.
b. Slide the cover approximately 1.25 cm (0.5 in) toward the rear of the unit, then lift the cover to detach it from the chassis.
c. Place the top cover in a safe
Sheet
• HP ProLiant DL140 Generation 2 Server Maintenance and Service Guide

2.2 HP ProLiant DL140 G3

Used in HP Cluster Platform

for adapters. The DL140 G3 supports non hot-pluggable serial ATA (SATA) and SCSI 3.5-inch hard disk drives. The DL140 G3 also supports up to two hot-pluggable Serial

Table 2-7 HP ProLiant DL140 G3 Features (continued)
Cache: 5100 series: 4 MB (1x4 MB L2 cache); 5000 series: 4 MB (2x2 MB L2 cache)
Memory type: PC2-5300 Fully Buffered DIMMs (DDR2-667) with Advanced ECC
Maximum memory: 16 GB
Storage: (Maximum two 3.5" hard drives of Non

to 16 GB maximum system memory (2 GB in each of the eight DIMM slots). Observe the following rules when installing memory modules:
• Use only HP-supported PC2-5300 fully buffered DIMMs (DDR2-667) with advanced ECC capability in 512 MB, 1 GB, or 2 GB capacities.
• Install memory modules in pairs of

described previously in Section 2.1.3. The HP ProLiant DL140 Generation 3 Server supports up to two optional PCI-X riser boards. For more information on the HP ProLiant DL140 G3, see the ProLiant 100 Series Servers User Guide.

2.3 HP ProLiant DL160 G5 and G5p

The HP ProLiant DL160 G5

as the system board layout and installing PCI cards, see the HP ProLiant DL160 Generation 5 Server Maintenance and Service Guide:
http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01325420/c01325420.pdf
For additional information, such as the system board layout and installing PCI cards
Table 2-13 ProLiant DL360 G3, G4, and G4p Model Comparison (columns: Feature, HP ProLiant DL360 G3, HP ProLiant DL360 G4 and G4p)
Features compared: Processor, Processor cache, FSB, Drive controller, NIC, Memory, Drive bays, Management, I/O slots, Maximum memory, Power supply, Fan, Chassis, Power
HP ProLiant DL360 G3: 2.4+ GHz Xeon, 533 MHz Front Side Bus (2P)

Figure 2-9 HP ProLiant DL360 G3 Front Panel

The following table describes the callouts in Figure 2-9.
1. Floppy drive
2. SCSI drive bay 1
3. CD-ROM drive
4. SCSI drive bay 2
5. Fan module
6. Power switch
7. Signal LEDs
Figure 2-10 HP ProLiant DL360 G3 Rear

8. Keyboard connector (purple)
9. USB ports
10. UID
11. PCI Slot 2
12. Power supply
Figure 2-11 and Figure 2-12 show the front and rear panels of the ProLiant DL360 G4 servers. The DL360 G4 front panel is the same as the DL360 G3 with the exception that the G4 has a USB port

11. Power supply bay 2 (optional)
12. Power supply bay 1
The ProLiant DL360 G4p is a variant of the ProLiant DL360 G4. It provides support for four Serial SCSI or Serial ATA disk drives, a high-performance RAID controller, and an external Serial SCSI connector for MSA50 storage connectivity. It also

Figure 2-14 HP ProLiant DL360 G4p Rear Panel

The following table describes the callouts in Figure 2-14:
1. PCI-X expansion slot 1, 64-bit 133-MHz 3.3V (optional PCI Express slot 1, x8)
2. Serial connector (teal)
3. Video connector (blue)
4.
Online spare memory is a high level of memory protection that complements Advanced ECC support. With online spare memory enabled, the system still takes advantage of Advanced

the pre-failure level without any service interruption and without compromising system availability.

and rear panels.
Figure 2-15 Front Unit Identification LEDs
The rear Unit Identification LED, as shown in Figure 2-16, identifies the server being serviced.
Figure 2-16 Rear Unit Identification LEDs
To completely remove all power, disconnect the power cord first from the AC outlet and then from

Figure 2-17 Sliding the Server from the Rack

2.4.4.1 Accessing Internal Components

To access internal components in the ProLiant DL360 servers, remove the access panel. When performing this task, heed the warnings and cautions listed in "Important Safety

2.4.5 Replacing a PCI Card

When replacing a PCI card, you need a grounding strap. The adapter card is sensitive to electrostatic discharge. Take care to avoid mishandling, which could damage the card. Before beginning installation, and without removing the adapter card from its antistatic bag,
Processor (one of the following, depending on model): Performance and Base models; Entry models
NOTE: Intel 5100/5000 series processors are 64-bit, dual-core, and support Hyper-Threading and Intel VT technology.
Dual-Core Intel Xeon 5160 Processor, 3.00 GHz, 1333 MHz Front Side Bus (FSB); Dual-Core

Entry models
Description: Four Small Form Factor bays; Embedded Dual NC373i Multifunction Gigabit Network Adapters with TCP/IP Offload Engine, including support for Accelerated iSCSI through an optional ProLiant Essentials Licensing Kit; Two PCI Express expansion slots: one full-length, full-height

The following table describes the callouts in Figure 2-20:
1. Hard drive bay 5 (an optional controller is required when the server is configured with six hard drives)
2. Hard drive bay 6 (an optional controller is required when the server is configured with six hard drives)
3.

12. Multifunction Gigabit Ethernet NIC 1
13. Multifunction Gigabit Ethernet NIC 2

2.5.3 HP ProLiant DL360 G5 Front Panel LEDs

The HP ProLiant DL360 G5 front panel LEDs are shown in Figure 2-22.
Figure 2-22 ProLiant DL360 G5 Front Panel LEDs
The following
Table 2-16 ProLiant DL360 G5 Front Panel LEDs (continued)
6 - NIC 2 link/activity LED - If power is off, the front panel LED is not active. View the LEDs on the RJ-45 connector for status by referring to the rear panel LEDs. Green = Network link exists. Flashing green =

Table 2-17 ProLiant DL360 G5 Rear Panel LEDs and Buttons (continued)
Flashing blue = System is being managed remotely. Off = Identification is deactivated.
8 - Power supply 2 LED - Green = Normal. Off = System is off or power supply has failed.
9 - Power supply 1 LED

Note: The System Insight Display LEDs represent the board layout (see Figure 2-25).
Figure 2-25 System Insight Display Map
Table 2-18 describes the status of the System Insight Display LEDs.
Table 2-18 System Insight Display LEDs (LED / Description)
Online Spare Memory; Mirrored Memory; All Other LEDs

Table 2-19 System Insight Display LED and Internal Health LED Combinations
HP Systems Insight Display LED and Color: Processor failure, socket X (amber); Processor failure, both sockets (amber); PPM failure (amber); FBDIMM failure, slot X (amber); FBDIMM failure, all slots (amber); Over temperature (
supplies, and hot-pluggable devices, then it might be possible to service the system without bringing the server down. See the HP ProLiant DL360 G5 Server Maintenance and Service Guide for more information on servicing the system's hot-pluggable devices, such as a hot-pluggable disk or a power supply

Figure 2-26 PCI Riser Board Assembly
7. Lift the front of the assembly slightly and unseat the riser boards from the PCI riser board connectors (see callout 2 in Figure 2-26).
Note: Be sure that all of the DIMM slot latches are closed to provide adequate clearance before removing the PCI riser

Important: A 64-bit riser card must be used in a 64-bit PCI slot; likewise, a 32-bit riser card must be used in a 32-bit PCI slot. Otherwise, the PCI interface might not be correctly detected and serious performance irregularities might result.
12. Insert the riser card into a PCI slot (callout 1
Figure 2-28 HP ProLiant DL380 G3 Front Panel

The following table describes the callouts in Figure 2-28.
1. Tape drive bay or hard drive and tape drive blank
2. Diskette drive
3. Hard drive bays
4. CD-ROM drive
Figure 2-29 shows the front panel of the HP ProLiant

Figure 2-30 HP ProLiant DL380 G3 and G4 Rear Panel

The following table describes the callouts in Figure 2-30 (Item / Description / Connector Color).
1 - Hot-pluggable PCI-X expansion slot 3 (bus 6), 64-bit/100 MHz 3.3V - N/A
2 - Hot-pluggable PCI-X expansion slot 2 (bus 6)

on the front panel, an LED illuminates blue on the server front and rear panels. The rear Unit Identification LED identifies the server being serviced. To completely remove all power, disconnect the power cord first from the AC outlet and then from the server. When performing these tasks, heed

1. Attach the grounding strap to your wrist or ankle and a metal part of the chassis.
2. Power off the server.
3. Remove the server from the rack.
4. Remove the cover from the server, and locate the PCI riser cage.
5. Disconnect any cables connected to any existing expansion boards.
6. Open the PCI

Figure 2-33 Removing the HP ProLiant DL380 PCI Riser Cage
9. Unlock the PCI retaining clip, as shown in Figure 2-34.
Figure 2-34 Unlocking the HP ProLiant DL380 PCI Retaining Clip
10. Remove the expansion board, as shown by callouts 1 and 2 in Figure 2-35.
iSCSI through an optional ProLiant Essentials Licensing Kit
Max Drive Bays: Eight SFF (Small Form Factor) hot-pluggable drive bays to support SAS (Serial Attached SCSI) and SATA (Serial ATA) drives
Remote Management: Integrated Lights-Out 2 (iLO 2)
Expansion Slots: Four available PCI Express

Table 2-23 HP ProLiant DL380 G5 Features (continued)
Redundancy: 12 fully redundant hot-plug fans; hot-pluggable power supply with optional redundancy (included in Performance models)
Chassis: 2U
Power: 800 Watt, CE Mark compliant (optional Hot-Pluggable AC Redundant Power

Figure 2-37 HP ProLiant DL380 G5 Rear Panel

The following table describes the callouts in Figure 2-37:
1. T-10/T-15 Torx screwdriver
2. Expansion slot 3 (PCI Interconnect)
3. Expansion slot 4
4. Expansion slot 5

Figure 2-38 HP ProLiant DL380 G5 Front LEDs
Table 2-24 describes the callouts in Figure 2-38.
Table 2-24 HP ProLiant DL380 G5 Front Panel LEDs (Item / Description / Status)
1 - UID LED button - Blue = Activated; Flashing = System being remotely managed; Off =

Figure 2-39 HP ProLiant DL380 G5 Rear LEDs

The following table describes the callouts in Figure 2-39.
Table 2-25 HP ProLiant DL380 G5 Rear LEDs
1. Power supply LEDs
2. NIC activity LED
3. NIC link LED
4. iLO 2 activity LED
5. iLO 2 link LED
6. UID LED
For more information on HP Systems Insight Display and Internal Health LED combinations, see the ProLiant DL380 Generation 5 Server Maintenance and Service Guide.

2.7.6 PCI Slot Assignments

The HP ProLiant DL380 G5 has four PCI Express expansion slots standard and an optional PCI-X. The following

Table 2-27 HP ProLiant DL380 G5 PCI Express Slot Assignments (PCI Slot / Bus / Default Assignment / Comment)
1 - A - PCI Express (used with SAS controller) - x4
2 - B - PCI Express - x4
3 - C - PCI Express - x4
4 - D - PCI Express - x8
5 - E - PCI Express (InfiniBand interconnect) - x8
Table 2-28 HP

on the front panel, an LED illuminates blue on the server front and rear panels. The rear Unit Identification LED identifies the server being serviced. To completely remove all power, disconnect the power cord first from the AC outlet and then from the server. When performing these tasks, heed

1. Power down the server following the steps previously outlined.
2. Disconnect all remaining cables on the server rear panel, including cables extending from external connectors on expansion boards. Make note of which Ethernet and interconnect cables are connected to which ports.
3. Loosen the
Figure 2-42 Removing the ProLiant DL380 G5 PCI Riser Cage
9. Remove the expansion board, as shown in Figure 2-43.
Figure 2-43 Removing the ProLiant DL380 G5 Expansion Board
Caution: To prevent improper cooling and thermal damage, do not operate the server unless all PCI slots have
(Table 3-1, columns: HP ProLiant DL145 G1, HP ProLiant DL145 G2)
HP ProLiant DL145 G1: AMD Opteron up to 2.4 GHz, 1 MB L2 cache, 800 MHz HyperTransport; AMD 8111 and AMD 8131
HP ProLiant DL145 G2: AMD Opteron up to 2.6 GHz and future dual-core support, 1 MB L2 cache, 1 GHz HyperTransport; AMD 8132 for PCI-X and NVIDIA CK8-04 for PCI Express
1 GB-2 GB / 16 GB

Table 3-1 ProLiant DL145 G1 and G2 Comparison (continued) (columns: Feature, HP ProLiant DL145 G1, HP ProLiant DL145 G2)
Features compared: Memory technology, Drive controller, RAID controller, NIC, Hard drive bays, Slots
Memory technology - G1: PC2700 ECC DDR SDRAM @ 333 MHz; G2: PC3200 ECC DDR1 SDRAM @ 400 MHz
Drive controller - G1: Integrated dual-channel ATA; G2: Integrated dual-

The following table describes the callouts in Figure 3-2.
1. Mouse
2. Keyboard
3. Video
4. USB
5. Dedicated Management NIC
6. NIC 1
7. NIC 2
8. COM1/management processor
Figure 3-3 shows the front panel of the HP ProLiant DL145 G2, and Figure 3-4 shows the rear panel of

Figure 3-4 HP ProLiant DL145 G2 Rear Panel

The following table describes the callouts in Figure 3-4.
1. Ventilation holes
2. Thumbscrew for the access panel
3. Thumbscrews for the PCI riser board assembly
4. GbE LAN ports for NIC 1 (RJ-45
on a sliding rail. This section describes how you shut down power, remove a server from the rack, and access internal components. When performing these tasks, heed the warnings and cautions listed in "Important Safety Information" (page 23). The front panel power button on the HP ProLiant DL145

Figure 3-6 Sliding the HP ProLiant DL145 G1 from the Rack
6. Remove the server from the rack and position it securely on a workbench or other solid surface for stability and safety.
To remove the access panel on the HP ProLiant DL145, as shown in Figure 3-7, follow these steps:
1. Remove the

Figure 3-8 Removing the HP ProLiant DL145 G2 Access Panel
1. Access panel screw
2. Access panel latch
3. Slide the cover approximately 1.25 cm (0.5 in) toward the rear of the unit.
4. Pull up the latch to remove the access panel from the chassis.
To replace the access

AC power cord.
17. Press the Power button on the server. Check that the card is detected. Refer to your software documentation for further installation instructions.
G2 has three PCI expansion slots on the system board. The system supports up to two expansion boards at a time. Figure 3-11 shows the

When replacing a PCI card in an HP ProLiant DL145 G2, use only HP-supported expansion boards that meet the following specifications:
• PCI or PCI-X compliant

a. Loosen the two captive thumbscrews that secure the assembly to the chassis, as shown by callout 1 in Figure 3-12.
Figure 3-12 Removing the HP ProLiant DL145 G2 PCI Card Cage
b. Lift the assembly away from the chassis, as shown by callout 2 in Figure 3-12.
c. Identify the slot that is

be configured to fit in either slot by replacing the default bracket (attached to the board) with a different sized bracket. The different sized bracket and instructions on how to attach it to the board are included in the option kit.
9. Verify that the board's default bracket is compatible with the

Instructions for HP ProLiant DL100 Series Generation 2 Servers
• HP ProLiant 100 Series Servers User Guide
• HP ProLiant DL145 Generation 2 Server Maintenance and Service Guide
• HP ProLiant DL145 Generation 2 Server Installation Sheet

3.2 HP ProLiant DL145 G3

The HP ProLiant DL145 G3 is supported
Table 3-4 HP ProLiant DL145 G3 Specifications (continued)
USB Ports: Four USB ports (two front, two rear)
Optical Drive: Support for one (optional): CD-ROM, DVD-ROM, DVD RW, DVD/CD RW
Power Supply: 650 W power supply (non hot-pluggable, auto-switching)
Figure 3-16 shows the

Figure 3-17 HP ProLiant DL145 G3 Rear Panel

The following list corresponds to the callouts shown in Figure 3-17.
1. Ventilation holes
2. Thumbscrew for the low-profile riser board assembly (two places)
3. Low-profile

Off - No network data activity was detected within the preceding one second (same for NIC 1)
15. USB 2.0 ports (black)
16. PS/2 keyboard port (purple)
17. PS/2 mouse port (green)
18. Serial port
19. Video port (blue)
20. Non-Maskable Interrupt (NMI) button (recessed)
21.
5. Press and hold the rail locks (see callout 1 in Figure 3-19), and extend the server until it clears the rack (see callout 2).
Figure 3-19 Sliding the HP ProLiant DL145 G3 from the Rack
6. Remove the server from the rack and position it securely on a workbench or other solid

to help you slide the cover more easily.

3.2.3 DL145 G3 System Board Expansion Slots

There are four expansion slots on the system board that support four different PCI riser boards. Figure 3-21 shows the system board expansion slots and Table 3-5 summarizes the types of slots available on the system

G3 System Board Expansion Slot Descriptions (Item / Component / Description)
1 - HTX slot - Supports a full-sized 1 GHz, 16x16 HTX expansion board installed on an HTX riser board
installed on a PCI-X riser board
4 - PCI Express x4 slot - Supports a low-profile PCI Express x4 expansion board installed on a PCI Express

You also cannot install the HTX and PCI Express x16 riser boards at the same time.

Expansion Board Installation Guidelines

Use only HP-supported expansion boards that meet the following specifications:
• HTX: Full-sized, 1 GHz, 16x16
• PCI Express x4: Low-profile
• PCI Express x16: Full-sized
• PCI
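The sketch below (not from the manual) restates the riser-board compatibility rules above in a checkable form. The PCI-X form-factor entry is a placeholder assumption, because the guideline list is truncated at that point.

```python
# Illustrative compatibility check for HP ProLiant DL145 G3 riser boards.
# The PCI-X entry is an assumption; the original guideline list is truncated there.
BOARD_FORM_FACTOR = {
    "HTX":      "full-sized, 1 GHz, 16x16",
    "PCIe x4":  "low-profile",
    "PCIe x16": "full-sized",
    "PCI-X":    "low-profile (assumed)",
}

def riser_combo_allowed(installed):
    """installed: set of riser-board types planned for one server."""
    # The HTX and PCI Express x16 riser boards cannot be installed together.
    if {"HTX", "PCIe x16"} <= installed:
        return False
    return all(board in BOARD_FORM_FACTOR for board in installed)

print(riser_combo_allowed({"HTX", "PCIe x4"}))    # True
print(riser_combo_allowed({"HTX", "PCIe x16"}))   # False
```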
7. Lift and remove the appropriate PCI riser board assembly from the chassis as follows:
Figure 3-22 Removing the HP ProLiant DL145 G3 Full-Sized PCI Riser Board Assembly
Figure 3-23 Removing the HP ProLiant DL145 G3 Low-Profile PCI Riser Assembly
a. Loosen the two captive thumbscrews

3.2.5.2 Removing or Installing a Riser Board

To remove or install a riser board in either the full-sized or low-profile riser board assemblies, follow these steps:
1. Perform the procedure described in Section 3.2.5 to remove the appropriate riser board assembly.
2. If an expansion board is

3. Remove the installed riser board from the riser board assembly as shown in Figure 3-24 and Figure 3-25.
Figure 3-24 Remove the Full-Sized Riser Board
Figure 3-25 Remove the Low-Profile Riser Board
Note: Keep the two screws you remove in this step for installing the new riser board

1. Attach a grounding strap to your wrist or ankle and a metal part of the chassis.
2. Press the Power button to power down the server. When the server powers down, the system power LED turns off.
3. Disconnect the AC power cord from the AC outlet.
Note: The front panel Power button does not
For more information, refer to the following documents:
• HP ProLiant 100 Series Servers User Guide
• HP ProLiant DL145 Generation 3 Server Maintenance and Service Guide

3.3 HP ProLiant DL165 G5

The HP ProLiant DL165 G5 supports up to two AMD Opteron 2300 series Quad-Core processors and up to four

PCI-E x16, PCI-E x4
For additional information, such as the system board layout and installing PCI cards, see the HP ProLiant DL165 Generation 5 Server Maintenance and Service Guide:
http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01384378/c01384378.pdf
used as a control node, directing the management and administrative functions, or as a utility node for any specific task, including running applications. It supports up to two 2.6 GHz AMD Opteron processors (1 MB L2 cache, 1 GHz HyperTransport), 16 GB of PC3200 DDR SDRAM memory running at 400 MHz, and

Figure 3-30 HP ProLiant DL385 G1 Front Panel
Figure 3-31 HP ProLiant DL385 G1 Rear Panel (the three PCI-X slots are labeled 1 and 2 at 100 MHz and 3 at 133 MHz)
The following ports are available on the rear panel of the HP ProLiant DL385 G1.
Table 3-7 HP ProLiant DL385 G1 Rear Panel Ports

Table 3-8 HP ProLiant DL385 PCI Slot Assignments (Slot / Bus / Assignment / Comment)
1 - A - PCI-X interconnect - 64-bit 100 MHz PCI-X (1 Gb/s)
2 - A - Optional 2 Gb/sec Fibre Channel HBA - 64-bit 100 MHz PCI-X (1 Gb/s)
3 - B - PCI interconnect - 64-bit 133 MHz PCI-X (1 Gb/s)

3.4.2 Removing an HP ProLiant DL385

5. Remove the server from the rack and position it securely on a workbench or other solid surface for stability and safety.
6. To access internal components in the HP ProLiant DL385 G1, lift up on the hood latch handle and remove the access panel.

3.4.3 Replacing an HP ProLiant DL385 PCI Card

When
Figure 3-35 Removing the HP ProLiant DL385 PCI Riser Cage
9. Unlock the PCI retaining clip, as shown in Figure 3-36.
Figure 3-36 Unlocking the HP ProLiant DL385 PCI Retaining Clip
10. Remove the expansion board, as shown in Figure 3-37.

Figure 3-37 Removing the HP ProLiant DL385 Expansion Board
Caution: To prevent improper cooling and thermal damage, do not operate the server unless all PCI slots have either an expansion slot cover or an expansion board installed.
11. To replace the component, reverse the removal procedure.
Redundant Power Supplies
Hot Plug Fully Redundant Fans: Standard (both models)
For more information on ProLiant DL385 G2 supported storage, memory, and other options, see the HP ProLiant DL385 Generation 2 (G2) QuickSpecs.

3.5.1 ProLiant DL385 G2 Front and Rear Views

9. NIC 1 link/activity LED
10. External health LED (power supply)
11. Internal health LED
12. UID LED button
Figure 3-39 shows the rear panel of the ProLiant DL385 G2.
Figure 3-39 HP ProLiant DL385 G2 Rear Panel
The following list

Table 3-10 HP ProLiant DL385 G2 PCI Express Slot Assignments (Slot / Assignment / Comment)
1 - PCI Express - x8
2 - PCI Express - x8
3 - PCI Express - x4
4 - PCI Express - x8
5 - PCI Express Interconnect - x8
Table 3-11 summarizes a PCI Express/PCI-X mixed configuration.
Table 3-11 HP ProLiant DL385 G2 PCI
Note: If the operating system automatically places the server in Standby mode, omit the next step.
3. Press the Power On/Standby button to place the server in Standby mode. When the server activates Standby power mode, the system power LED changes to amber.
Important: Pressing the UID button

11. To put the server back into the rack, press the server rail-release latches and slide the server fully into the rack.
Warning! To reduce the risk of personal injury, be careful when pressing the server rail-release latches and sliding the server into the rack. The sliding rails could pinch your

Figure 3-42 Removing a PCI Slot Cover in the PCI Riser Cage
Caution: To prevent improper cooling and thermal damage, do not operate the server unless all of the PCI slots have either an expansion slot cover or an expansion board installed.
8. Remove the expansion board, as shown in Figure 3-43.

as the system board layout and installing PCI cards, see the HP ProLiant DL385 Generation 5 Server Maintenance and Service Guide:
http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01302275/c01302275.pdf
For additional information for the ProLiant DL385 G5p, see the HP ProLiant DL385
64-bit PCI-X
• An embedded dual-channel Gigabit Ethernet NIC with PXE support and Wake on LAN (WOL)
• Redundant hot-pluggable power supplies with optional Integrated Lights-Out (iLO) technology
• QuickFind diagnostic display for troubleshooting at the server level
• ROM-based setup utility
The HP

administrative network, and the NIC 2 port is used for the Gigabit Ethernet system interconnect. See Figure 3-45 to locate the ports. Figure 3-44 displays the front panel of the HP ProLiant DL585, and Figure 3-45 displays its rear panel.
Figure 3-44 HP ProLiant DL585 Front Panel

iLO regardless of the state of the host server
• Access advanced troubleshooting features through the iLO interface
• Diagnose iLO using Insight Manager 7
For more information about iLO features, see the Integrated Lights-Out User Guide on the HP Cluster Platform CD or on the HP website: http://

The front panel power button on the HP ProLiant DL585, as shown in Figure 3-44, toggles between On and Standby. If you press the Power button on an HP ProLiant DL585 to power down the server, the LED changes from green to amber, indicating Standby mode. In Standby mode, the server removes power from
1. Remove the server from the rack, as described in Section 3.7.2.
2. Locate and remove the Torx T-15 tool that is stored on the back of the server chassis, between the fans and the PCI slots.
3. Unlock the access panel latch using the Torx T-15 tool, as shown in Figure 3-48, that you removed from

3.7.3 Replacing a PCI Card

The HP ProLiant DL585 supports the installation of both PCI (33 MHz and 66 MHz) and PCI-X (66 MHz, 100 MHz, and 133 MHz) expansion boards. All PCI-X slots are
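For a rough comparison of the bus modes just listed (computed here, not taken from the manual, and assuming 64-bit slots; delivered throughput is lower in practice):

```python
# Peak theoretical throughput for the PCI/PCI-X clock rates listed above,
# assuming a 64-bit (8-byte-wide) slot.
for clock_mhz in (33.33, 66.66, 100.0, 133.33):
    peak_mb_s = 8 * clock_mhz  # bytes per clock cycle * MHz = MB/s
    print(f"{clock_mhz:6.2f} MHz x 64-bit: ~{peak_mb_s:.0f} MB/s peak")
```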
6. Disconnect any cables connected to the expansion board.
7. Using the callouts shown in Figure 3-51 as a guide, press the PCI-X retaining clip toward the front of the server to lock it in the open position (callout 1).
Figure 3-51 Removing a PCI Card from

that the card is detected. Refer to your software documentation for further installation instructions.

3.8 HP ProLiant DL585 G2

The HP ProLiant DL585 G2 typically functions as

AMD 8200 Series Processors
Memory: Up to 128 GB of memory, supported by 32 slots of PC2-5300 Registered DIMMs at 667 MHz
Optical Drive: Slimline DVD/CD-RW drive, standard on all models; ejectable for security and serviceability
Hard Drives: None ship standard
Hard Disk Drive Backplane: Internal SAS backplane supports up to eight SFF hard disk drives
Maximum Internal Storage: Hot Plug 1.168 TB (8 x 146 GB

8. Hard drive bay 8
9. Video connector
10. USB connectors (two)
11. Media drive blank or optional media drive
12. DVD drive
13. UID switch and LED
14. Internal system health LED
15. External system health LED
16. NIC 1 link/activity LED
17. NIC 2 link/activity LED
18. Power on/Standby button and LED

16. Video connector
17. Rear UID button and LED

3.8.1 Removing an HP ProLiant DL585 G2 from the Rack

To access internal components in the HP ProLiant DL585 G2 server, you must first shut down power to the server and remove it from the rack. All of the servers in the cluster are secured to the rack
Figure 3-56 ProLiant DL585 G2 from the Rack 1 2 3 3 5. Press and hold the rail locks (see callout 1 in Figure 3-56) and extend the server until it clears the rack. 6. Remove the server from the rack and position it securely on a workbench or other solid surface for stability and safety. To remove - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 149
Figure 3-57 Unlock the ProLiant DL585 G2 Access Panel Latch 3. Lift up on the latch and remove the access panel. To replace the access panel, place the panel on top of the server with the latch open. Allow the panel to extend past the rear of the server approximately 1.25 cm (0.5 in). Push - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 150
Figure 3-58 HP ProLiant DL585 G2 PCI Slots The following table describes the callouts in Figure 3-58 (Item / Slot / Bus): 1 / Slot 1 / Bus 66; 2 / Slot 2 / Bus 66; 3 / Slot 3 / Bus 73; 4 / Slot 4 / Bus 76; 5 / Slot 5 / Bus 70; 6 / Slot 6 / Bus 79; 7 / Slot 7 / Bus 2; 8 / Slot 8 / Bus 5; 9 / Slot 9 / Bus 8. Description: PCI-X, 64-bit/100-MHz ( - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 151
, as shown in Figure 3-57, and locate the PCI slot. 6. Disconnect any cables connected to the expansion board. 7. Using the callouts in Figure 3-59 as a guide, press the PCI card retaining clip toward the front of the server to lock it in the open position (see callout 1). Figure 3-59 Removing a PCI - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 152
card is detected. Refer to your software documentation for further installation instructions. 3.9 HP ProLiant DL585 G5 The HP ProLiant DL585 G5 server typically Generation 5 Server Maintenance and Service Guide: http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01384250/c01384250.pdf - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 153
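After reseating or replacing an expansion board and restarting the server, one way to confirm that a Linux-based node detects the new card is to scan the PCI bus. The following Python fragment is an illustrative check only; it assumes the standard lspci utility is available and that you know a substring of the card's description (the search string shown is a placeholder).

    import subprocess

    def card_is_detected(name_fragment):
        # Return True if any line of lspci output mentions the given fragment.
        output = subprocess.run(["lspci"], capture_output=True, text=True, check=True).stdout
        return any(name_fragment.lower() in line.lower() for line in output.splitlines())

    # Placeholder search string; substitute text matching your expansion board.
    if card_is_detected("InfiniBand"):
        print("Expansion card detected on the PCI bus.")
    else:
        print("Card not found; reseat the board and recheck the slot assignment.")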
blade enclosure backplane, and to the interconnect blades. The middle eight bays support server blades. Combinations of different series server blades are supported in the same server blade enclosure. Each enclosure supports a pair of switch or patch panel interconnects for network cable management - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 154
The HP ProLiant BL35p server blade has the following features: Available processors: AMD Opteron™ Model 250 - 2.4 GHz, 1 MB L2 (68W). Processor capacity: 2. Memory type: PC3200 DDR 400 MHz (4 slots), 2:1 interleave. Maximum memory: 8 GB. NIC: Two 10/100/1000 NICs on mezzanine - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 155
The following table describes the callouts in Figure 4-1. Item Description 1 UID LED 2 Internal system health LED 3 NIC1 LED (actual NIC numeration depends on several factors, including the operating system installed on the server blade) 4 NIC 2 LED (actual NIC numeration depends on - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 156
support of Guide. Each HP ProLiant BL35p server blade includes three network adapters: two Broadcom 5703 Gigabit Ethernet Embedded 10/100/1000T WOL (Wake On LAN) enabled with Preboot eXecution Environment (PXE) plus one additional 10/100T Ethernet adapter dedicated to iLO management. 4.2.1 Supported - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 157
and in the HP ProLiant BL35p Server Blade User Guide. In addition to the hard drives, the HP Adapter) • Optical FC cables • Supported SAN and associated software For more detailed server blade from a remote location. After initiating a manual or virtual power down command, be sure that the - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 158
more information about iLO, see the HP Integrated Lights-Out User Guide. 4.3 HP ProLiant BL45p Server Blade Overview The ProLiant BL45p four- same infrastructure components as all other p-Class server blades. Each server blade supports up to four 2.2 GHz AMD Opteron Dual-Core processors with 1 GHz - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 159
Table 4-1 ProLiant BL45p Characteristics (continued) Maximum hard drive bays: 2 - 3.5" universal SCSI hot-pluggable hard disk drive bays. Connects to Fibre Channel storage: Yes. Storage controller: Smart Array 6i Plus. Chassis: 4 server blades per 6U enclosure. Networking: 4 - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 160
Figure 4-5 HP ProLiant BL45p Rear Panel Components 1 2 Item Description 1 Power connectors 2 Signal connector The HP ProLiant BL45p has two system boards. The primary system board relates to the first and second processor, and the second system board relates to the third and fourth - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 161
The following table describes the callouts in Figure 4-6. Item Description 1 Fibre Channel adapter (optional) 2 Power converter modules 3 Smart Array 6i controller 4 Smart Array 6i battery-backed write cache enabler (optional) 5 Processor 1 memory bank 2 6 Processor 1 memory bank 1 ( - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 162
converter modules 8 Processor 4 memory bank 2 9 Processor 4 memory bank 1 (shown populated) 10 DIMMs 13-16 11 Processor socket 4 (shown populated) 4.3.1 Supported Memory The HP ProLiant BL45p server blade ships with two DIMMs installed in memory bank 1 for each installed processor. Each - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 163
the HP ProLiant BL45p Server Blade User Guide. The HP ProLiant BL45p server blade also delivers optional Fibre Channel support for SAN implementations and clustering capabilities. from a remote location. After initiating a manual or virtual power down 4.3 HP ProLiant BL45p Server Blade Overview 163 - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 164
information about iLO, see the HP Integrated Lights-Out User Guide. 4.4 HP BladeSystem c-Class Enclosure Overview The HP BladeSystem c-Class Enclosure provides all the power, cooling, and I/O infrastructure needed to support today's modular server, interconnect, and storage components as well as - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 165
Fibre Channel InfiniBand Dimensions Height Width Depth Description Up 16 half-height server blades Up to eight full-height server blades Mixed configurations supported Eight, in any I/O fabric Up to 6 x 2250W 4 or 6 standard, up to 10 total 2 2 x NEMA L15-30p 2 x IEC 309 5-Pin, 9h, Red, 16A 6 x IEC - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 166
. • If the c3000 enclosure is the pedestal version, see the HP Cluster Platform Workgroup System Tower Hardware Installation Guide. Figure 4-9 HP BladeSystem c-Class Enclosure Front View The following list corresponds to the callouts shown in Figure 4-9: 1. Device bay 1 (up to 16 half - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 167
is hard-wired to chassis) 4. Onboard Administrator 5. Redundant Onboard Administrator Note: See the HP BladeSystem c7000 Enclosure Setup and Installation Guide for more information on LEDs and buttons for the Onboard Administrator. 4.4.2 HP BladeSystem c-Class Enclosure Device Bay Numbering The - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 168
Figure 4-11 HP BladeSystem c-Class Enclosure Device Bay Numbering (Full-Height Device Bays) Figure 4-12 HP c-Class BladeSystem Enclosure Device Bay Numbering (Half-Height Device Bays) Figure 4-13 shows a sample configuration with half-height and full - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 169
hard drive 8. Systems manager display connector 4.4.3 Interconnect Module Bay Numbering You must install interconnect modules in the appropriate bay(s) to support network connections for specific signals. Figure 4-14 shows the module bay numbers in the c-Class enclosure and Figure 4-15 provides - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 170
Figure 4-14 HP c-Class BladeSystem Module Bay Numbering Figure 4-15 HP c-Class BladeSystem Module Bay Numbering Descriptions 170 Server Blades - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 171
2 N/A Mezzanine port 1 Mezzanine slot 2, port 2 Mezzanine slot 3, port 1 For more information on mapping to interconnect ports, see the HP BladeSystem c7000 Enclosure Setup and Installation Guide. 4.4 HP BladeSystem c-Class Enclosure Overview 171 - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 172
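The port-to-bay relationships follow a fixed pattern that the enclosure guide documents in full. Purely as a sketch of how that lookup might be scripted, the Python fragment below encodes an assumed, simplified mapping (embedded NICs to interconnect bays 1-2, mezzanine 1 to bays 3-4, mezzanine 2 to bays 5-6, mezzanine 3 to bays 7-8); treat the table in the code as a placeholder and verify it against the HP BladeSystem c7000 Enclosure Setup and Installation Guide before relying on it.

    # Assumed, simplified mezzanine-to-interconnect-bay mapping for illustration only.
    # Verify against the HP BladeSystem c7000 Enclosure Setup and Installation Guide.
    MEZZ_TO_BAYS = {
        "embedded_nic": [1, 2],
        "mezzanine_1": [3, 4],
        "mezzanine_2": [5, 6],
        "mezzanine_3": [7, 8],
    }

    def interconnect_bays_for(adapter_location):
        # Return the interconnect bays an adapter's ports are expected to reach.
        if adapter_location not in MEZZ_TO_BAYS:
            raise ValueError("Unknown adapter location: " + adapter_location)
        return MEZZ_TO_BAYS[adapter_location]

    print(interconnect_bays_for("mezzanine_2"))  # [5, 6] under the assumed mapping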
the table below, see the HP Cluster Platform Gigabit Ethernet Hardware Guide. Server Blade Type Two 16-Port One 16-Port Pass-Through and full-height (FH) server blades. A full-height server blade can support an additional InfiniBand mezzanine option in mezzanine slot 3, along with a second double - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 173
cable management bracket needs to be removed, reverse the installation procedure described in the HP Cluster Platform c-Class Blade Cable Management Bracket Installation Guide. In most instances, it is not necessary to remove the cables from the bracket, however, it might be necessary to remove the - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 174
2. Install the 4x DDR IB switch module into the appropriate double-wide bay and close the release lever as shown by callouts 1 and 2 in Figure 4-17. Figure 4-17 Install the 4x DDR IB Switch Module 4.7 HP ProLiant BL2x220c G5 Server Blade The HP ProLiant BL2x220c G5 server blade can be used as a - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 175
the system board layout and installing mezzanine HCAs, see the HP ProLiant BL2x220c Generation 5 Server Maintenance and Service Guide: http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01462866/c01462866.pdf 4.8 HP ProLiant BL260c G5 Server Blade The HP ProLiant BL260c G5 server blade - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 176
board layout and installing mezzanine HCA cards, see the HP ProLiant BL260c Generation 5 Server Maintenance and Service Guide: http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c01416733/c01416733.pdf 4.9 HP ProLiant BL460c and BL460c G5 Server Blade Overview The HP BladeSystem c-Class - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 177
For the features and specifications of the HP ProLiant BL460c G5, go to: http://h18004.www1.hp.com/products/quickspecs/12796_div/12796_div.pdf 4.9.1 HP ProLiant BL460c Front View Figure 4-20 shows the front view of a ProLiant BL460c server. Figure 4-20 HP ProLiant BL460c Front View - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 178
Item Description Status Off = No active remote management 2 Health LED Green = Normal, Flashing = Booting, Amber = Degraded condition, Red = Critical condition 3 NIC 1 LED1 Green = Network linked, Green flashing = Network activity, Off = No link or activity 4 NIC 2 LED1 Green = Network - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 179
up to 12 GB of active memory and 4 GB of online spare memory utilizing 2-GB FBDIMMs. • Mirrored memory providing protection against failed FBDIMMs supporting up to 8 GB of active memory and 8 GB of mirrored memory utilizing 2-GB FBDIMMs. 4.9 HP ProLiant BL460c and BL460c G5 Server Blade Overview - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 180
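The trade-off between these protection modes reduces to simple arithmetic. The hedged Python sketch below reproduces the figures quoted above for a blade populated with eight 2-GB FBDIMMs (16 GB installed); it assumes online spare mode reserves one DIMM pair and mirrored mode reserves half of installed memory, which is consistent with the capacities stated in this section.

    def memory_capacities(dimm_count, dimm_size_gb):
        # Active capacity under each protection mode (simplified model).
        total = dimm_count * dimm_size_gb
        return {
            "advanced_ecc_active": total,                      # all memory remains usable
            "online_spare_active": total - 2 * dimm_size_gb,   # one DIMM pair held in reserve
            "mirrored_active": total // 2,                     # half mirrors the other half
        }

    # Eight 2-GB FBDIMMs: 16 GB installed, 12 GB active with online spare,
    # 8 GB active with mirroring, matching the figures quoted above.
    print(memory_capacities(dimm_count=8, dimm_size_gb=2))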
disk drive is discussed in the document that comes with the drive and in the HP ProLiant BL460c Server Blade User Guide. Two optional Fibre Channel HBAs are supported by the HP ProLiant BL460c. Both mezzanine circuit boards connect directly to the server blade system board. These Fibre Channel HBAs - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 181
For more detailed SAN configuration information for the server blade, see the following documents: • The model-specific QuickSpecs document located on the HP ProLiant c-Class server blade products web page: http://h18000.www1.hp.com/products/quickspecs/Division/12534.html • The HP StorageWorks SAN - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 182
, see the HP BladeSystem c7000 Enclosure Setup and Installation Guide at: http://h71028.www7.hp.com/enterprise/cache/316682-0-0-0- from iLO 2 regardless of the state of the host server. • Access advanced troubleshooting features through the iLO 2 interface. • Diagnose iLO 2 using HP SIM through - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 183
x 1), memory mirroring, and online spare capacity Integrated Smart Array P400i RAID controller with 256 MB cache (with optional battery-backed write cache) supports RAID 0/1/5 Up to four small form factor (SFF) SAS or SATA hot-plug hard drives Four embedded NIC ports, plus one additional management - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 184
Figure 4-25 HP ProLiant BL480c Front View The following list describes the callouts in Figure 4-25: 1. Hard drive bay 1 2. Hard drive bay 2 3. Hard drive bay 3 4. Hard drive bay 4 5. Server blade handle 6. Server blade handle release button 7. Serial label pull tab 8. Local - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 185
Table 4-7 HP ProLiant BL480c Front Panel LEDs (continued) Item Description Status Amber = Degraded condition, Red = Critical condition 3 NIC 1 LED1 Green = Network linked, Green flashing = Network activity, Off = No link or activity 4 NIC 2 LED1 Green = Network linked, Green flashing = - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 186
Figure 4-27 HP ProLiant BL480c Internal View The following list describes the callouts in Figure 4-27: 1. Four hot-pluggable SAS/SATA drive bays 2. Embedded smart array controller integrated on drive backplane 3. Three mezzanine slots: one x4, two x8 4. Twelve fully buffered DIMM slots - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 187
16. FBDIMM slots (1-12) 4.10.5 Memory Options The HP ProLiant BL480c server blade contains 12 memory expansion slots. You can expand server memory by installing supported DDR-2 FBDIMMs. 4.10 HP ProLiant BL480c Server Blade Overview 187 - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 188
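Before adding FBDIMMs, it can help to confirm which of the 12 slots are already populated. As an illustrative sketch only (it assumes a Linux node with the dmidecode utility installed and root privileges), the following Python fragment counts the populated memory slots reported by dmidecode:

    import subprocess

    def populated_memory_slots():
        # Count DIMM slots that dmidecode reports as populated (run as root).
        output = subprocess.run(["dmidecode", "--type", "memory"],
                                capture_output=True, text=True, check=True).stdout
        populated = 0
        for line in output.splitlines():
            line = line.strip()
            # Each Memory Device block reports "Size: No Module Installed" for an empty slot.
            if line.startswith("Size:") and "No Module Installed" not in line:
                populated += 1
        return populated

    print("Populated DIMM slots:", populated_memory_slots())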
disk drive is discussed in the document that ships with the drive and in the HP ProLiant BL480c Server Blade User Guide. Two optional Fibre Channel HBAs are supported by the HP ProLiant BL480c. Both mezzanine circuit boards connect directly to the server blade system board. These Fibre Channel HBAs - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 189
4.10.8 Removing the HP ProLiant BL480c from the c-Class Enclosure To remove the HP ProLiant BL480c server blade from the c-Class enclosure, follow these steps: 1. Identify the proper server blade and back up the data. 2. Depending on the Onboard Administrator configuration, use one of the following - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 190
from iLO 2, regardless of the state of the host server. • Access advanced troubleshooting features through the iLO 2 interface. • Diagnose iLO 2 using HP SIM through ProLiant BL465c Server Blade Overview The HP BladeSystem c-Class enclosure supports up to 16 HP ProLiant BL465c server blades. The HP - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 191
Integrated Smart Array E200i RAID controller with 64 MB cache (with an optional battery-backed write cache (BBWC) upgrade to 128 MB cache). Supports RAID 0/1. Up to two small form factor (SFF) SAS or SATA hot-pluggable hard disk drives Two embedded NC370i multifunction gigabit network adapters - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 192
Figure 4-30 HP ProLiant BL465c Front View The following list describes the callouts in Figure 4-30: 1. Hard drive bay 1 2. Power On/Standby button 3. Local I/O connector 4. Hard drive bay 2 5. Serial label pull tab 6. Release button 7. Server blade handle 4.11.2 HP ProLiant - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 193
The following table describes the callouts in Figure 4-31: Item Description Status 1 UID LED Blue = Identified Blue flashing = Active remote management Off = No active remote management 2 Health LED Green = Normal Flashing = Booting Amber = Degraded condition 3 NIC 1 LED1 Red = - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 194
1. Two hot-pluggable SAS/SATA drive bays 2. Embedded smart array controller integrated on drive backplane 3. Two mezzanine slots: one x4, one x8 4. Eight fully buffered DIMM slots, DDR-2 667 MHz 4.11.4 HP ProLiant BL465c System Board Figure 4-33 shows the system board components of an HP ProLiant - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 195
disk drive is discussed in the document that comes with the drive and in the HP ProLiant BL465c Server Blade User Guide. Two optional Fibre Channel HBAs are supported by the HP ProLiant BL465c. Both mezzanine circuit boards connect directly to the server blade system board. These Fibre Channel HBAs - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 196
board layout and how to install mezzanine HCA cards, see the HP ProLiant BL465c Generation 5 Server Maintenance and Service Guide: http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c00778741/c00778741.pdf 4.13 HP ProLiant BL685c Server Blade Overview The HP BladeSystem c-Class enclosure - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 197
multifunction gigabit server adapters - Plus one additional 10/100 NIC dedicated to iLO 2 management Three additional I/O expansion slots via mezzanine card. Supports up to three mezzanine cards: • Dual-port Fibre Channel mezzanine (4-Gb) options for SAN connectivity (choice of Emulex or QLogic - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 198
1. Hard drive bay 2 2. Server blade handle 3. Server blade handle release button 4. Serial label pull tab 5. Hard drive bay 1 6. Local I/O cable connector (the local I/O cable connector is used with the local I/O cable to perform some server blade configuration and diagnostic procedures) 7. Power On - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 199
Item Description Status 6 Health LED Green = Normal Flashing = Booting Amber = Degraded condition Red = Critical condition 7 UID LED Blue = Identified Blue flashing = Active remote management Off = No active remote management 1 Actual NIC numbers depend on several factors, including - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 200
Figure 4-38 HP ProLiant BL685c System Board Components The following list describes the callouts in Figure 4-38: 1. Bezel LED connector 2. Processor socket 4 3. DIMM slots (Processor 4 memory banks G and H) 4. - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 201
disk drive is discussed in the document that ships with the drive and in the HP ProLiant BL685c Server Blade User Guide. Two optional Fibre Channel HBAs are supported by the HP ProLiant BL685c. Both mezzanine circuit boards connect directly to the server blade system board. These Fibre Channel HBAs - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 202
layout and how to install mezzanine HCA cards, see the HP ProLiant BL685c Generation 5 Server Maintenance and Service Guide: http://bizsupport.austin.hp.com/bc/docs/support/SupportManual/c00805082/c00805082.pdf 4.15 HP Integrity BL860c Server Blade Overview The HP Integrity BL860c server blade is - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 203
core • Intel Itanium 2 Processor (9010) 1.6GHz/6MB L3 cache single-core Memory Storage Controller Internal Drive Support Network Controller Mezzanine Support Up to 48 GB of memory. Supporting twelve DDR2-533 ECC memory DIMMS Embedded smart array E200i controller integrated on system board Standard - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 204
Figure 4-40 HP Integrity BL860c Front View The following list describes the callouts in Figure 4-40: 1. Hard drive bays 2. Status indicator 3. Power button 4. Server blade handle 5. Local I/O cable connector (the local I/O cable connector is used with the local I/O cable to perform some - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 205
Figure 4-41 HP Integrity BL860c LEDs The following list describes the callouts in Figure 4-41: 1. Unit identification (UID) LED 2. System health LED 3. Internal health LED 4. NIC 1 LED 5. NIC 2 LED 6. NIC 3 LED 7. NIC 4 LED 4.15.3 HP Integrity BL860c Internal View Figure 4-42 shows the internals of - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 206
Figure 4-42 HP Integrity BL860c Internal View The following list describes the callouts in Figure 4-42: 1. SAS backplane 2. Memory DIMMs 3. Mezzanine card 1 4. Mezzanine card 2 5. Mezzanine card 3 6. Processors 7. System board 8. Trusted platform module 9. Front panel 10. SAS disk drives 4.15.4 - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 207
HP Integrity BL860c Server Blade QuickSpecs at: http://h18004.www1.hp.com/products/servers/integrity-bl/c-class/860c/index.html. 4.15.6 Supported Storage The HP Integrity BL860c supports up to two optional hot-pluggable serial attach SCSI (SAS) drives for a maximum of 292 GB (2 x 146 GB serial SCSI - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 208
Tuning Framework guides system setup, allowing a custom configuration that best matches the workstation to user requirements. This custom feature ensures availability of the graphics drivers and removes some memory restraints. For specific application support and download instructions, go to - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 209
Table 5-1 HP Workstation xw8200 features (continued) Feature Power supply Input devices Specification 600 W USB or PS/2 keyboard; choice of 2-button scroll mouse (optical or mechanical); 3-button mouse (optical or mechanical) Figure 5-1 displays the front panel of the HP xw8200 Workstation, and - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 210
Figure 5-2 HP xw8200 Workstation Rear Panel The following table describes the callouts shown in Figure 5-2: Item Description 1 Power cord connector 2 Keyboard connector 3 Serial connector (teal) 4 USB ports (six) 5 IEEE 1394 - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 211
side with the system board facing up. 5. Remove the PCI retainer, as shown in Figure 5-3. Figure 5-3 PCI Retainer 6. Remove the PCI card support, if necessary. 7. Lift the PCI levers by first pressing down, then out. If you are removing a PCI Express card, remove the power supply cable - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 212
Figure 5-4 PCI Levers Figure 5-5 PCI Express Levers 8. Lift the PCI card out of the chassis and store it in an antistatic bag. Installing a New PCI Card To install a new PCI or PCI Express card in an HP xw8200 workstation, follow these steps: 1. Disconnect the power cord - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 213
Figure 5-6 Installing a PCI Card in the HP xw8200 Workstation 7. Remove the PCI slot cover, as shown by callout 2 in Figure 5-6. 8. Lower the PCI or PCI Express card into the chassis. Verify that the keyed components of the card align with the socket, as shown by callout 3 in Figure - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 214
Tuning Framework (PTF) comes pre-installed to guide workstation setup and custom configuration to help increase performance of memory via 4 GB DIMMs The HP xw8400 is enabled to achieve the maximum memory supported by the chipset of 64 GB Three external 5.25-inch bays Five internal 3.5-inch bays - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 215
Table 5-3 HP Workstation xw8400 Features (continued) Feature Audio Network Ports Input devices Power Specification Entry 3D: NVIDIA Quadro FX 560 (128 MB), ATI FireGL V3300 (128 MB) Midrange 3D: NVIDIA Quadro FX 1500 (256 MB), ATI FireGL V7200 (256 MB) High-end 3D: NVIDIA Quadro FX 3500 (256 MB), - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 216
The following table describes the callouts shown in Figure 5-8: Item Description 1 Optical drive 2 Optical drive activity lights 3 5.25-inch drive bays 4 Optical drive eject button 5 Power on light 6 Power button 7 Hard drive activity light 8 USB ports (two) 9 Headphone connector - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 217
Item Description 6 Microphone connector (pink) 7 Audio line out connector (lime) 8 Universal chassis clamp openings 9 Access panel key 10 Padlock loop 11 Cable lock slot 12 Mouse connector (green) 13 Parallel connector (burgundy) 14 RJ-45 network connector 15 Audio line-in - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 218
Table 5-4 HP xw8400 Workstation PCI Slots (continued) (Slot / Assignment / Maximum Slot Power / Comment): Slot 4 / PCI Express x16 mechanical (x4 electrical) / 25W / SI - gigabit Ethernet or InfiniBand. Slot 5 / PCI-X 133 / 25W / SI - Myrinet or Quadrics. Slot 6 / PCI-X 100 / 25W / Available. Slot 7 / PCI-X 100 / 25W / NIC. 5.2.2 xw8400 - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 219
visualization • Power walls • Caves/Immersive environments • On-Air Broadcast Graphics and Post Production Station The NVIDIA Quadro G-Sync option card is supported as an option to the NVIDIA Quadro FX 4500. Features include: • Enables full Genlock/Framelock functionality through GUI or API • ATX - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 220
Figure 5-12 NVIDIA Quadro G-Sync Card 5.2.5 NVIDIA Quadro FX 5500 Graphics Card The NVIDIA Quadro FX 5500 graphics card includes the following features: • Quadro FX 5500 graphics processor • 1 GB GDDR2 SDRAM • Dual dual-link DVI-I • SLI capable • New dual slot thermal solution • High Precision - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 221
5.2.6 Replacing or Installing a PCI Card Use the following procedures to install and replace PCI cards. Replacing a PCI Card To replace a PCI or PCI Express card in an HP xw8400 workstation, follow these steps: 1. Disconnect the power cord from the AC outlet and then from the workstation. 2. - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 222
Figure 5-15 PCI Retention Clamp 7. Lift the PCI card out of the chassis (callout 2 in Figure 5-15). If you are removing a PCI Express high-end graphics card, remove the auxiliary power supply cable (not illustrated) if required, and move the lever to release the card and lift it out of - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 223
HP xw9300 Workstation Overview The HP xw9300 Workstation is a 64-bit personal workstation designed for visualization and compute-intensive environments. It supports dual PCI Express x16 graphics and dual single- and dual-core AMD Opteron processors. The AMD Direct Connect Architecture connects the HP - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 224
Tuning Framework guides system setup, allowing a custom configuration that best matches the workstation to user requirements. This customization ensures availability of the graphics drivers and removes some memory restraints. For specific application support and download instructions, go to - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 225
Figure 5-17 HP xw9300 Workstation Rear Panel The following table describes the callouts in Figure 5-17: Item Description 1 Power cord connector 2 Power supply built-in self test (BIST) LED 3 Serial connector (teal) 4 PS/2 keyboard - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 226
Note: Although the xw9300 has six PCI slots on the motherboard, there are seven apertures in the chassis. Ensure that you remove the correct blank when installing a card. See Figure 5-18. Figure 5-18 HP xw9300 Slot Numbering Table 5-7 HP xw9300 Workstation PCI Slots Slot Assignment - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 227
Table 5-8 and Table 5-9 list the slot assignments for narrow (1-slot) or wide (2-slot) graphics cards. Table 5-8 Narrow (1-Slot) Graphics Cards (Slot / Single CPU / 2 CPU, 1 GFX / 2 CPU, 2 GFX): Slot 1 / GFX / GFX / GFX. Slot 2 / empty / empty / empty. Slot 3 / empty / empty / GFX. Slot 4 / G-sync / G-sync / G-sync. Slot 5 / NIC / NIC / NIC. Slot 6 / Interconnect / Interconnect - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 228
Figure 5-19 Typical xw9300 Slot Configuration Figure 5-19 shows the slot usages for a typical xw9300 configuration: • Slot 0 is a blank chassis aperture; there is no PCI slot on the motherboard. • Slot 1 contains a narrow graphics card • Slot 2 is empty. • Slot 3 contains a narrow graphics card. • - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 229
multiple cards in multiple workstations. • Required for active stereo. • Interconnected using CAT-5 cables. The G-Sync card uses the PCI slot only for physical support. An internal cable attaches the G-sync card to one, or both of the graphics cards. Connections to the G-Sync card are not defined in - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 230
in any available slot. Note that the slot itself is only used for support. The slot must be close enough to the graphics card for the internal ribbon cable to reach. 5.3.7 System Interconnect Cards Table 5-10 shows the supported interconnect cards depending on the system configuration. Some configurations depend - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 231
Table 5-11 Supported Memory Configurations (continued) Memory Size 16 GB (4x4GB) DDR-333 32 GB (8x4GB) DDR-333 1 CPU 2 CPU EK738AV EK737AV 5.3.9 PCI Card Installation and Removal Instructions This section describes PCI card installation, and removal procedures for the xw9300 workstation. Note: - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 232
refer to the callouts in Figure 5-23, and follow these steps: 1. Disconnect power from the system, remove the access panel and remove the PCI card support, if installed. 2. Lift the PCI levers (callout 1) by first pressing down and then up. 3. Remove the PCI slot cover (callout 2). 4. Lower the PCI - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 233
, refer to the callouts in Figure 5-24 and follow these steps: 1. Disconnect power from the system, remove the access panel and remove the PCI card support, if installed. 2. Lift the PCI levers (callout 1) by first pressing down and then up. 3. Lift the PCI card (callout 2) out of the chassis. Store - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 234
to the callouts in Figure 5-25 and follow these steps: 1. Disconnect power from the system, remove the access panel, and remove the PCI card support. 2. Lift the PCI levers (callout 1) by first pressing down and then up. 3. Remove the PCI slot cover (callout 2). 4. Lower the PCI (callout 3) card - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 235
(a max. of 16 GB with one processor). The HP xw9400 is expected to support 64 GB of memory with 8 GB DIMMs. Integrated SATA 3 Gb/s controller (6 , 750 GB SATA 3 Gb/s NCQ; or up to five serial attached SCSI (SAS) drives supported natively (1.5 TB max.); 146 GB (10K rpm) or 146, 300 GB (15K rpm) SAS - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 236
1 Power cord connector 2 Power supply built-in self test (BIST) LED 3 Serial connector (teal) 4 SPDIF OUT (single RCA jack to support SPDIF digital audio output via coax cable) 5 Keyboard connector 6 USB 2.0 ports 7 Microphone connector (pink) 8 Audio line-out connector 9 MiniSAS - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 237
5.4.1 PCI Slot Identification The HP xw9400 workstation has seven PCI expansion slots. Figure 5-27 shows the PCI slots available in the xw9400 workstation, and Table 5-13 identifies the slots. Figure 5-27 HP xw9400 Slot Numbering Table 5-13 HP xw9400 Workstation PCI Slots Slot - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 238
Table 5-14 Graphics Cards (continued) Slot Slot Type 1 CPU 5 PCI Express, x16 NIC 6 PCI-X 100 SI - All 7 PCI-X 100/133 2 CPU, 1 Graphics Card NIC SI - Myrinet or Quadrics 2 CPU, 2 Graphics Cards G-Sync or NIC SI - All 5.4.3 xw9400 Graphics Options The following graphics options are used - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 239
Figure 5-28 NVIDIA Quadro FX 3500 Graphics Card 5.4.3.2 NVIDIA Quadro FX 4500 The NVIDIA Quadro FX 4500 includes the following features: • G70GL graphics processor • 512 MB GDDR3 graphics memory • Dual dual-link DVI-I • SLI Capable • New dual slot thermal solution • High Precision Dynamic Range - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 240
Visualization • Power Walls • Caves / Immersive Environments • On Air Broadcast Graphics and Post Production Station The NVIDIA Quadro G-Sync Option Card is supported as an option to the NVIDIA Quadro FX 4500. Features include: • Enables full Genlock/Framelock functionality through GUI or API • ATX - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 241
DIMM pairs by size and type. Refer to the HP xw9400 Workstation Service and Technical Reference Guide for more information on upgrading memory in the HP xw9400 workstation. 5.4.5 PCI Card Installation and Removal Instructions PCI and PCI Express card installation, and removal procedures for the - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 242
server with a new ROM image. HP Drive Key Boot Utility software can be downloaded from the following Web site: http://h18000.www1.hp.com/support/files/serveroptions/us/download/21621.html Use the following procedure to make your drive key bootable and capable of flashing firmware: 1. Install the HP - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 243
Index A access panel ProLiant DL145, 104 ProLiant DL145 G3, 114 ProLiant DL585, 140 ProLiant DL585 G2, 148 advanced ECC memory, 69 application node characteristics, 99 characteristics of, 30, 45, 50 application nodes removing from a rack, 102, 113, 139, 147 Automatic Server Recovery-2, 69 B BL260c - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 244
storage, 188 ProLiant BL485c, 14 ProLiant BL485c G5, 14 ProLiant BL680c G5, 14 ProLiant BL685c, 14, 196 supported storage, 201 ProLiant BL685c G5, 14, 202 ProLiant BL860c, 14, 202 supported storage, 207 ProLiant DL 385 G1 power buttons, 126 ProLiant DL140 G1, 14 ProLiant DL140 G2, 14, 55 features - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 245
front panel features, 60 installing PCI card, 62 memory configurations, 61 memory module sequence, 61 PCI slot assignments, 61 ProLiant DL145 accessing internal components, 104 characteristics, 99 memory, 99 power buttons, 103 replacing a PCI card, 105 shutting down, 103 ProLiant DL145 G1, 14 - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 246
51 S SAN, 157, 163, 180, 188, 195, 201 SCSI controller, 136 SCSI drive supported, 28, 34 server, 30, 45, 50 blades, 153 Integrity rx1620, 25 Integrity rx2600, 14 183 ProLiant BL685c, 196 ProLiant BL685c G5, 202 ProLiant BL860c, 202 service node characteristics of, 30, 45, 50 SIM, 79 (see also system - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 247
xw8400, 14 characteristics, 214 PCI slot rules, 218 removing from a rack, 223 replacing a PCI card, 221 xw9300, 14 characteristics, 223 memory configurations, 230 PCI slot numbering, 226 PCI slots, 225 rear panel, 224 removing from a rack, 234 replacing a PCI card, 231 slot assignment rules, 226 - HP Cluster Platform Introduction v2010 | HP Cluster Platform Server and Workstat - Page 248
*A-CPSOV-1H* Printed in the US
HP Cluster Platform
Server and Workstation Overview
HP Part Number: A-CPSOV-1H
Published: March 2009