Fibre Host Adapter-to-Storage Hub/FC-AL Switch Paths
Figure 2-10 highlights the I/O data paths that run between the Fibre Host
Adapter in each cluster node and the Storage Hub or FC-AL Switch. There is
one I/O path for each Fibre Host Adapter.
[Figure: the ProLiant servers' Fibre Host Adapters connect through the Storage Hub/FC-AL Switch to the RA4000/4100 Array; the client LAN and the cluster interconnect switch are also shown.]
Figure 2-10. Fibre Host Adapter-to-Storage Hub/FC-AL Switch I/O data paths
If one of these connections experiences a fault, the connections from the other
nodes ensure continued access to the database. The fault results in the eviction
of the cluster node with the failed connection. All network clients accessing
the database through that node must reconnect through another cluster node.
The effect of this failure is relatively minor. It affects only those users who are
connected to the database through the affected node. The duration of
downtime includes the time to detect the failure, the time to reconfigure the
cluster after the failure, and the time required for the network clients to
reconnect to the database through another node.
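To make the reconnection step concrete, the following sketch shows the kind of client-side retry logic that reconnects through a surviving cluster node. It is illustrative only; the node names and the connect_to_node helper are hypothetical and are not part of the PDC/O1000 or Oracle client software, which handles failover through its own configuration.

    import time

    # Hypothetical addresses of the two cluster nodes; a real client would use
    # its Oracle Net configuration instead of a hard-coded list.
    CLUSTER_NODES = ["node1.example.com", "node2.example.com"]

    def connect_with_failover(connect_to_node, retry_interval=5.0, max_attempts=12):
        """Try each cluster node in turn until one accepts the connection."""
        for _ in range(max_attempts):
            for node in CLUSTER_NODES:
                try:
                    # connect_to_node is supplied by the caller and wraps the
                    # actual database connect call.
                    return connect_to_node(node)
                except ConnectionError:
                    continue  # node evicted or unreachable; try the next node
            time.sleep(retry_interval)  # no node answered this pass; wait and retry
        raise ConnectionError("no cluster node accepted the connection")

The downtime a client experiences is then the failure-detection time, plus the cluster reconfiguration time, plus however long a retry loop of this kind takes to reach a surviving node.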
Note that Compaq Insight Manager monitors the health of each
RA4000/RA4100 Array. If any part of the I/O data path disrupts a node’s
access to an RA4000/RA4100 Array, the status of the array controller in that
RA4000/RA4100 Array changes to “Failed” and its condition indicator turns
red. The red condition is reported to higher-level Insight Manager screens
and, eventually, to the device list. Refer to the Compaq Insight Manager
Guide for details.
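As a purely illustrative sketch (the data structures and function below are hypothetical and are not the Insight Manager API), the roll-up of a failed array controller into a red device-list condition can be pictured like this:

    # Map a reported array controller status to a condition color, then roll the
    # worst condition up to the device list, mirroring the behavior described above.
    STATUS_TO_CONDITION = {"OK": "green", "Degraded": "yellow", "Failed": "red"}
    SEVERITY = {"green": 0, "yellow": 1, "red": 2}

    def device_list_condition(controller_statuses):
        """Return the most severe condition across all RA4000/RA4100 controllers."""
        conditions = [STATUS_TO_CONDITION.get(s, "red") for s in controller_statuses]
        return max(conditions, key=lambda c: SEVERITY[c])

    # One failed controller is enough to turn the device list red.
    print(device_list_condition(["OK", "Failed"]))  # prints: red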