HP StorageWorks NAS 4000s and 9000s Administration Guide - Page 228



Cluster Administration
Basic Cluster Administration Procedures
■ Failing over and failing back
■ Restarting one cluster node
■ Shutting down one cluster node
■ Powering down all cluster nodes
■ Powering up all cluster nodes
Failing Over and Failing Back
As previously mentioned, when a node goes offline, all of the resources dependent on that
node are automatically failed over to another node. Processing continues, but in a reduced
manner because all operations must be processed on the remaining node(s). In clusters
containing more than two nodes, additional failover rules can be applied. For instance, groups
can be configured to fail over to different nodes to balance the additional workload imposed
by the failed node. Nodes can be excluded from the possible owners list to prevent a resource
from coming online on a particular node. Lastly, the preferred owners list can be ordered to
provide an ordered list of failover nodes. Using these tools, the failover of resources within a
multinode cluster can be controlled to balance the increased workload.
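
The following listing is a minimal sketch of how these owner lists might be adjusted from a
command prompt by calling the cluster.exe tool from Python. The group, resource, and node
names shown (FileShareGroup, FileShareResource, NODE2, NODE3, and NODE4) are
placeholders and must be replaced with the names defined in your own cluster.

# Minimal sketch: shaping failover behavior by driving cluster.exe from Python.
# All group, resource, and node names below are placeholders.
import subprocess

def run(cmd):
    """Run a cluster.exe command and print whatever it returns."""
    result = subprocess.run(cmd, capture_output=True, text=True, shell=True)
    print(result.stdout or result.stderr)

# Order the preferred owners list so the group fails over to NODE2 first,
# then NODE3, spreading the extra load across the surviving nodes.
run('cluster group "FileShareGroup" /setowners:NODE2,NODE3')

# Remove NODE4 from a resource's possible owners list so the resource can
# never be brought online on that node.
run('cluster resource "FileShareResource" /removeowner:NODE4')

# Confirm the resulting owner lists.
run('cluster group "FileShareGroup" /listowners')
run('cluster resource "FileShareResource" /listowners')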
Because operating environments differ, the administrator must indicate whether the system
will automatically fail the resources (organized by resource groups) back to their original node
or will leave the resources failed over, waiting for them to be moved back manually. See
“Managing Cluster Resource Groups” for information on allowing or preventing failback and
moving these resources from one node to another.
Note:
If the NAS server is not set to automatically fail back the resources to their designated owner,
the resources must be moved back manually each time a failover occurs. See “Managing Cluster
Resource Groups” for information on overriding this default setting.
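
As a rough illustration of the choice between automatic and manual failback, the sketch below
drives cluster.exe from Python. It assumes the AutoFailbackType group property of the
Microsoft cluster service; the names FileShareGroup and NODE1 are placeholders.

# Minimal sketch: choosing automatic versus manual failback with cluster.exe.
# "FileShareGroup" and "NODE1" are placeholder names.
import subprocess

def run(cmd):
    result = subprocess.run(cmd, capture_output=True, text=True, shell=True)
    print(result.stdout or result.stderr)

# Allow the group to fail back automatically to its preferred owner.
# (AutoFailbackType: 1 = allow failback, 0 = prevent failback.)
run('cluster group "FileShareGroup" /prop AutoFailbackType=1')

# If automatic failback is left disabled, the group must be moved back by
# hand after the original node is healthy again, for example:
run('cluster group "FileShareGroup" /moveto:NODE1')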
Restarting One Cluster Node
Caution:
Restarting a cluster node should be done only after confirming that the other
node(s) in the cluster are functioning normally. Adequate warning should be given to users
connected to resources of the node being restarted.
Attached connections can be viewed through the NAS Management Console on the NAS
Desktop using Terminal Services. From the NAS Management Console, select File Sharing,
Shared Folders, and Sessions.
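
Alternatively, the short sketch below lists similar session information from a command prompt
using the standard Windows net session and net file commands. It is offered only as an
illustration and does not replace the console procedure above.

# Minimal sketch: listing attached client sessions and open files from the
# command line instead of the Shared Folders snap-in.
import subprocess

for cmd in ("net session", "net file"):
    result = subprocess.run(cmd, capture_output=True, text=True, shell=True)
    print(f"--- {cmd} ---")
    print(result.stdout or result.stderr)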
The physical process of restarting one of the nodes of a cluster is the same as restarting a NAS
device in a single-node environment. However, additional caution is needed.
Restarting a cluster node causes all file shares served by that node to fail over to another
node (or nodes) in the cluster, based on the failover policy in place. Until the failover process
completes, any currently executing read and write operations will fail. The other node(s) in the
cluster will be placed under a heavier load by the extra work until the restarted node comes up
and the resources are moved back.
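
One way to reduce the disruption is to move the node's resource groups to another node before
the restart, so that the failover happens on the administrator's schedule rather than during
active read and write operations. The sketch below assumes the cluster.exe tool;
FileShareGroup and NODE2 are placeholder names.

# Minimal sketch: draining a node's resource groups before a planned restart.
# "FileShareGroup" and "NODE2" are placeholder names.
import subprocess

def run(cmd):
    result = subprocess.run(cmd, capture_output=True, text=True, shell=True)
    print(result.stdout or result.stderr)

# List every group with its current owner node and online status.
run("cluster group")

# Move a group served by the node being restarted to a surviving node.
# Repeat for each group the node owns, then restart the node; move the
# groups back (or let automatic failback do it) once the node rejoins.
run('cluster group "FileShareGroup" /moveto:NODE2')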