HP DL160 HP ProLiant Storage Server User Guide (440584-004, February 2008) - Page 134



• A domain user account for Cluster service (all nodes must be members of the same domain)
• Each node should have at least two network adapters: one for connection to the public network
and the other for the node-to-node private cluster network. If only one network adapter is used for
both connections, the configuration is unsupported. A separate private network adapter is required
for HCL certification.
Shared disk requirements
NOTE:
Do not allow more than one node to access the shared storage devices at the same time until Cluster service
is installed on at least one node and that node is online. This can be accomplished through selective storage
presentation, SAN zoning, or having only one node online at all times.
• All software components listed in the HP ProLiant Storage Server SAN Connection and Management
white paper (located on the HP web site at http://h20000.www2.hp.com/bc/docs/support/
SupportManual/c00663737/c00663737.pdf) must be installed and the fiber cables attached
to the HBAs before the cluster installation is started.
• All shared disks, including the Quorum disk, must be accessible from all nodes. When testing
connectivity between the nodes and the LUN, only one node should be given access to the LUN
at a time.
• All shared disks must be configured as basic (not dynamic).
• All partitions on the disks must be formatted as NTFS.
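The basic-disk and NTFS requirements above can be prepared with a diskpart script before cluster installation begins. The following is a minimal sketch, not taken from this guide: the disk number (1), volume label, and drive letter (Q) are assumptions for illustration, and the script should be run on one node only, per the NOTE above.

```
rem Hypothetical diskpart script; disk number, label, and letter are assumptions.
select disk 1
online disk noerr
attributes disk clear readonly
rem Shared disks must be basic, not dynamic.
convert basic noerr
create partition primary
rem All partitions on shared disks must be NTFS.
format fs=ntfs quick label="Quorum"
assign letter=Q
```

Run with `diskpart /s <scriptfile>` while the other nodes are powered off or not presented the LUN, so that only one node touches the shared storage at a time.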
Cluster installation
During the installation process, nodes are shut down and rebooted. These steps ensure that data on
disks attached to the shared storage bus is not lost or corrupted, which can happen when multiple
nodes write simultaneously to a disk that is not yet protected by the cluster software.
Use Table 24 to determine which nodes and storage devices should be presented during each step.
Table 24 Power sequencing for cluster installation

Step                          Node 1   Additional   Storage         Comments
                                       nodes
Setting up networks           On       On           Not presented   Verify that all storage devices on the
                                                                    shared bus are not presented; power on
                                                                    all nodes.
Setting up shared disks       On       Off          Presented       Shut down all nodes. Present the shared
(including the Quorum disk)                                         storage, then power on the first node.
Verifying disk configuration  Off      On           Presented       Shut down the first node, power on the
                                                                    next node. Repeat this process for all
                                                                    cluster nodes.
Configuring the first node    On       Off          Presented       Shut down all nodes; power on the first
                                                                    node.
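The power sequencing above enforces one rule: until the cluster software protects the shared disks, storage is never presented to more than one online node at a time. A minimal sketch (not HP tooling) encoding the table rows and checking that rule; the row data mirrors Table 24, and "Additional nodes: On" in the disk-verification step means one node powered on at a time, per the comments column:

```python
# Table 24 rows: (step, node 1, additional nodes, storage presented)
STEPS = [
    ("Setting up networks",          "On",  "On",  False),
    ("Setting up shared disks",      "On",  "Off", True),
    ("Verifying disk configuration", "Off", "On",  True),  # one node at a time
    ("Configuring the first node",   "On",  "Off", True),
]

def nodes_online(node1, additional):
    """Count nodes powered on; in the verification step the comments column
    makes clear only one additional node is on at any moment."""
    return (node1 == "On") + (additional == "On")

# While storage is presented and the cluster is not yet configured,
# at most one node may be online.
for step, node1, additional, presented in STEPS:
    if presented:
        assert nodes_online(node1, additional) <= 1, step
```

This is why the table powers nodes off before presenting storage: step 1 keeps storage unpresented while all nodes are up, and every later step keeps exactly one node up while storage is presented.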