Compaq ProLiant 6400R Compaq Parallel Database Cluster Model PDC/05000 for Ora - Page 181




Cluster Management    6-27

14. Restart the new node.

15. Verify Ethernet connectivity between the new node and the existing cluster nodes. Run the ping utility from the new node, pinging the client LAN adapters on all other nodes. If Ethernet is used as the cluster interconnect, also ping the cluster interconnect adapters of all other nodes.

16. Verify that the new node can access the shared storage. From the new node, start Disk Management and confirm that the same shared disk resources are seen from this node as from the other installed nodes in the cluster.
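The connectivity check in step 15 can be sketched at the new node's command prompt. The node names below are hypothetical; substitute the actual host names of your cluster nodes and, where applicable, of their interconnect adapters.

```
REM Hypothetical host names; replace with your cluster's actual node names.
REM Ping the client LAN adapters of the other nodes:
ping node1
ping node2

REM If Ethernet is the cluster interconnect, also ping the
REM cluster interconnect adapters of the other nodes:
ping node1-san
ping node2-san
```

Each ping should report replies from the target adapter; a timeout indicates a cabling, addressing, or name-resolution problem to resolve before continuing.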
Preparing the Existing Cluster Nodes

To prepare the existing cluster nodes for adding a new node, add the new node's unique IP addresses and node names to the hosts and lmhosts files on each existing cluster node. These files are located in %SystemRoot%\system32\drivers\etc.

Verify TCP/IP connectivity between the new node and the existing cluster nodes. Run the ping utility from the new node, pinging the client LAN adapters on all other nodes. If Ethernet is used as the cluster interconnect, also ping the cluster interconnect adapters of all other nodes.
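For illustration, the entries added for a hypothetical fourth node might look like the following. The node name (node4) and addresses are invented for this sketch; use your node's actual values.

```
# Appended to %SystemRoot%\system32\drivers\etc\hosts on each existing node
# (addresses and names below are hypothetical):
10.0.0.14    node4          # client LAN adapter
10.1.1.14    node4-san      # cluster interconnect adapter

# Appended to %SystemRoot%\system32\drivers\etc\lmhosts
# (#PRE preloads the entry into the NetBIOS name cache):
10.0.0.14    NODE4    #PRE
```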
Installing the Cluster Software

Now that the new node is configured and physically connected to the cluster, the next procedure is to integrate the node into the cluster. This involves installing the low-level cluster management software and the OSDs, as well as the application-level cluster software, including Oracle8i Server with the Oracle8i Parallel Server Option.

The cluster must be brought offline to install the cluster software. Because the database will be unavailable while the Oracle software is configured, it is recommended that this procedure be performed during non-peak work hours.
NOTE: As a precaution, it is recommended that you back up the database.