Cluster Management
8. Using Fibre Channel cables, connect each Fibre Host Adapter in the new node to its Fibre Channel SAN Switch, FC-AL Switch, or Storage Hub.
9. Restart the new node.
10. Verify TCP/IP connectivity between the new node and the existing cluster nodes. Run the ping utility from the new node, pinging the client LAN adapters on all other nodes. If Ethernet is used as the cluster interconnect, also ping the cluster interconnect adapters of all other nodes. (A scripted version of this check is sketched after this list.)
11. Verify that the new node can access the shared storage. From the new node, use Disk Management to verify that the same shared disk resources are seen from this node as they are seen from other installed nodes in the cluster.
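
For clusters with more than a few nodes, the ping checks in step 10 can be scripted. The following is a minimal Python sketch, not part of the original procedure; the node names (node1, node2, node1-ic, and so on) are assumed placeholders for the client LAN and interconnect names defined in your hosts file. It shells out to the same ping utility the step describes and reports each result.

    # Minimal connectivity check, run from the new node.
    # All host names below are assumed examples, not values from this guide.
    import subprocess

    CLIENT_LAN_HOSTS = ["node1", "node2", "node3"]
    INTERCONNECT_HOSTS = ["node1-ic", "node2-ic", "node3-ic"]  # Ethernet interconnect only

    def ping(host):
        # "ping -n 2" sends two echo requests on Windows; a zero exit
        # code means at least one reply was received.
        result = subprocess.run(["ping", "-n", "2", host], capture_output=True)
        return result.returncode == 0

    for host in CLIENT_LAN_HOSTS + INTERCONNECT_HOSTS:
        print(f"{host}: {'OK' if ping(host) else 'FAILED'}")

Any host reported as FAILED should be investigated and resolved before continuing with the procedure.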
Preparing the Existing Cluster Nodes
To prepare the existing cluster nodes for adding a new node:
1. Add the unique IP addresses and node names for the new node to the hosts and lmhosts files of each existing cluster node. These files are located at %SystemRoot%\system32\drivers\etc. (A scripted check of these entries is sketched after step 2.)
2. Verify TCP/IP connectivity between the new node and the existing cluster nodes. Run the ping utility from the new node, pinging the client LAN adapters on all other nodes. If Ethernet is used as the cluster interconnect, ping the cluster interconnect adapters of all other nodes.
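
The hosts and lmhosts edits in step 1 are easy to miss on one node of a larger cluster. The sketch below is a hypothetical Python check, run on each existing node, that confirms the new node's names appear in both files; the names node4 and node4-ic are assumed examples, not values from this guide.

    # Verify the new node's entries exist in hosts and lmhosts.
    # NEW_NODE_NAMES are assumed placeholders; use your actual names.
    import os

    NEW_NODE_NAMES = ["node4", "node4-ic"]
    ETC_DIR = os.path.join(os.environ["SystemRoot"], r"system32\drivers\etc")

    for filename in ("hosts", "lmhosts"):
        path = os.path.join(ETC_DIR, filename)
        try:
            contents = open(path).read()
        except FileNotFoundError:
            print(f"{path}: not found")
            continue
        for name in NEW_NODE_NAMES:
            status = "present" if name in contents else "MISSING"
            print(f"{path}: {name} {status}")

A name reported MISSING means that node's files still need the new entries before the cluster software is installed.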
Installing the Cluster Software for Oracle8i
Now that the new node is configured and physically connected to the cluster, the next step is to integrate it into the cluster. This involves installing the low-level cluster management software and the OSDs (the Operating System Dependent layer), as well as the application-level cluster software, including Oracle8i Server with the Oracle8i Parallel Server Option.
The cluster must be brought offline to install the cluster software. Because the database will be unavailable while the Oracle software is configured, it is recommended that this procedure be performed during off-peak hours.
NOTE: As a precaution, it is recommended that you back up the database.