HP rp7440 nPartition Administrator's Guide, Second Edition - Page 41



Table 1-5 Complex Profile Group Details (continued)

Complex Profile Group: Partition Configuration Data

Description and Contents:

The Partition Configuration Data contains configuration details specific to each nPartition
in the complex. Each nPartition has its own Partition Configuration Data entry, which may
be modified by administrators.

The service processor (MP or GSP) has a copy of the Partition Configuration Data for every
nPartition. Each cell has a copy of the Partition Configuration Data entry for the nPartition
to which it is assigned.

Partition Configuration Data includes this data for each nPartition:
• HP 9000 server components (unused on HP Integrity servers) — These components apply
  only on HP 9000 servers, but are present on HP Integrity servers for compatibility: Primary
  Boot Path, HA Alternate Boot Path, Alternate Boot Path, Console Path, Keyboard Path,
  Boot Timer, Known Good Memory Requirement, Autostart and Restart Flags, and CPU
  Flags (e.g., Data Prefetch setting).
• Cell use-on-next-boot values — Specifies whether the cell is to be an active or inactive
  member of the nPartition to which it is assigned.
• Core Cell Choices — Up to four cells preferred to be the core cell.
• Partition Number — The partition number; not user-configurable.
• Profile Architecture — Specifies whether the current Partition Configuration Data applies
  to the HP 9000 server architecture or the HP Integrity server architecture; not
  user-configurable.
• nPartition Name — The nPartition name, used in various displays.
• Cell Failure Usage — Specifies how each cell in the nPartition is handled when a processor
  or memory component fails self-tests. Only activating the cell to integrate it into the
  nPartition is supported (the ri failure usage option, as specified by the parcreate and
  parmodify commands); see the example following this table.
• IP Address — If set, should be consistent with the IP address assigned to the nPartition
  when HP-UX is booted. Not actually used for network configuration, but for information
  only.
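As a concrete illustration of how several of these values are specified, the following is a
minimal sketch using the parcreate and parmodify commands. The nPartition name, partition
number, and cell numbers are hypothetical, and the cell specification is assumed to follow the
cell:[cell-type]:[use-on-next-boot]:[failure-usage] form described in the parcreate(1M) and
parmodify(1M) manpages; verify the exact syntax for your release.

    # Create a two-cell nPartition named "Partition1" (hypothetical). Each cell
    # is a base cell, active on the next boot (y), with the ri failure usage:
    parcreate -P "Partition1" -c 0:base:y:ri -c 2:base:y:ri

    # Change cell 2 of partition 1 to be inactive on the next boot (n), and
    # list cells 0 and 2 as the preferred core cell choices (-r):
    parmodify -p 1 -m 2:base:n:ri -r 0 -r 2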
Remote and Local Management of nPartitions
You can remotely manage cell-based servers using either the Enhanced nPartition Commands
or Partition Manager Version 2.0.
The Enhanced nPartition Commands and Partition Manager Version 2.0 can also run on an
nPartition and manage that nPartition and the complex to which it belongs.
The ability to remotely manage a server based on the HP sx1000 chipset or HP sx2000 chipset is
enabled by two technologies: the Web-Based Enterprise Management infrastructure (WBEM)
and the Intelligent Platform Management Interface (IPMI). A brief overview of these technologies
is provided first, followed by explanations of how to use the tools to manage cell-based servers
locally and remotely.
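For example, assuming the -u/-h (WBEM) and -g/-h (IPMI over LAN) remote-access options of
the Enhanced nPartition Commands, a parstatus query (-C lists the cells in the complex) could be
issued from a remote management station in either of the following ways. The hostnames are
hypothetical, and the exact option spelling (such as how passwords are supplied) should be
confirmed in the parstatus manpage for your release.

    # WBEM: send the request to an nPartition of the target complex,
    # authenticating as a user on that nPartition (prompts for the password):
    parstatus -C -u root -h npar1.example.com

    # IPMI over LAN: send the request directly to the service processor (MP);
    # prompts for the IPMI password configured on the MP:
    parstatus -C -g -h mp1.example.com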
Intelligent Platform Management Interface (IPMI)
The nPartition management tools perform their functions by sending requests to the service
processor. These requests are either to get information about the server or to effect changes to
the server.
On the first generation of cell-based servers (the HP 9000 Superdome SD16000, SD32000, and
SD64000 models; rp7405/rp7410; and rp8400 servers), a proprietary interface to the service
processor was implemented. This interface relied on system firmware to convey information
between HP-UX and the service processor. This in turn required that the nPartition management
tools run on an nPartition in the complex being managed.
The service processor in all sx1000-based or sx2000-based servers supports the Intelligent Platform
Management Interface (IPMI) as a replacement for the proprietary interface mentioned above.
IPMI is an industry-standard interface for managing hardware. IPMI also supports value-added
capabilities, such as HP's nPartition and complex management features.