Dell PowerConnect W Clearpass 100 Software 3.7 Deployment Guide - Page 358



The maintenance commands that are available on this page depend on the current state of the cluster and on which node you are logged into. Some maintenance commands are only available on the secondary node, and other commands may change the active state of the cluster. For this reason, it is recommended that cluster maintenance be performed only by logging into a specific node in the cluster using its IP address.
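Because the cluster's virtual IP address follows whichever node is currently active, it can be useful to confirm that the address you are about to log into really is a specific node's own IP before issuing maintenance commands. The following is a minimal Python sketch of that check; the node and virtual IP addresses shown are placeholder assumptions, not values from this guide.

```python
import socket

# Placeholder addresses -- substitute the real per-node and cluster IPs
# for your deployment; these values are illustrative assumptions only.
PRIMARY_NODE_IP = "192.0.2.10"
SECONDARY_NODE_IP = "192.0.2.11"
CLUSTER_VIRTUAL_IP = "192.0.2.20"
HTTPS_PORT = 443

def is_reachable(host, port=HTTPS_PORT, timeout=3.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Check each address individually, so that maintenance commands are issued
# against a specific node's IP rather than the floating virtual IP.
for label, ip in [("primary node", PRIMARY_NODE_IP),
                  ("secondary node", SECONDARY_NODE_IP),
                  ("cluster virtual IP", CLUSTER_VIRTUAL_IP)]:
    state = "reachable" if is_reachable(ip) else "NOT reachable"
    print(f"{label} ({ip}): {state}")
```

Running the check against each node's own address makes it unambiguous which node will receive the maintenance command, regardless of where the virtual IP currently resides.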
Recovering From a Failure
From a cluster maintenance perspective, there are two kinds of failure:

  • A temporary outage is an event or condition that causes the cluster to fail over to the secondary node. Clearing the condition allows the cluster's primary node to resume operations in essentially the same state as before the outage.

  • A hardware failure is a fault that requires rebuilding or replacing one of the nodes of the cluster to correct.

The table below lists some system failure modes and the corresponding cluster maintenance that is required; a small lookup sketch of the same mapping follows the table.

Table 28  Failure Modes

  Failure Mode                                                  Maintenance
  Software failure – system crash, reboot or hardware reset     Temporary outage
  Power failure                                                  Temporary outage
  Network failure – cables or switching equipment                Temporary outage
  Network failure – appliance network interface                  Hardware failure
  Hardware failure – other internal appliance hardware           Hardware failure
  Data loss or corruption                                         Hardware failure
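If you report on cluster health from a script, Table 28 can also be encoded as a simple lookup from failure mode to required maintenance. The sketch below is a hypothetical Python rendering of that mapping; the dictionary keys paraphrase the table rows, and the function name is an assumption for illustration.

```python
# Table 28 expressed as a lookup: failure mode -> required cluster maintenance.
FAILURE_MODE_MAINTENANCE = {
    "software failure (crash, reboot or hardware reset)": "Temporary outage",
    "power failure": "Temporary outage",
    "network failure - cables or switching equipment": "Temporary outage",
    "network failure - appliance network interface": "Hardware failure",
    "hardware failure - other internal appliance hardware": "Hardware failure",
    "data loss or corruption": "Hardware failure",
}

def required_maintenance(failure_mode: str) -> str:
    """Return the maintenance class for a failure mode listed in Table 28."""
    return FAILURE_MODE_MAINTENANCE[failure_mode.lower()]

print(required_maintenance("Power failure"))            # Temporary outage
print(required_maintenance("Data loss or corruption"))  # Hardware failure
```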
Recovering From a Temporary Outage
Use this procedure to repair the cluster and return to a normal operating state:
1. This procedure assumes that the primary node has experienced a temporary outage and the cluster has failed over to the secondary node.
2. Ensure that the primary node and the secondary node are both online.
3. Log into the secondary node. (Due to failover, this node will be assigned the cluster's virtual IP address.)
4. Click Cluster Maintenance, and then click the Recover Cluster command link.
5. A progress meter is displayed while the cluster is recovered. The cluster's virtual IP address will be temporarily unavailable while the recovery takes place; a sketch that polls for the virtual IP to return is shown after this procedure.
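Because the virtual IP address drops out briefly during recovery (step 5), monitoring or automation scripts should tolerate a short gap rather than raising an alarm immediately. The following Python sketch polls a placeholder virtual IP until it answers again; the address, port, and timing values are assumptions for illustration, not values defined by the product.

```python
import socket
import time

CLUSTER_VIRTUAL_IP = "192.0.2.20"   # placeholder -- use your cluster's virtual IP
HTTPS_PORT = 443
POLL_INTERVAL = 10                  # seconds between connection attempts
MAX_WAIT = 600                      # give up after ten minutes

def vip_answers(ip=CLUSTER_VIRTUAL_IP, port=HTTPS_PORT, timeout=3.0):
    """Return True once the virtual IP accepts a TCP connection."""
    try:
        with socket.create_connection((ip, port), timeout=timeout):
            return True
    except OSError:
        return False

deadline = time.monotonic() + MAX_WAIT
while time.monotonic() < deadline:
    if vip_answers():
        print("Cluster virtual IP is answering again; recovery appears complete.")
        break
    time.sleep(POLL_INTERVAL)
else:
    # The while/else branch runs only if the deadline passed without a break.
    print("Virtual IP still unreachable after the wait period; check the cluster status.")
```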