HP StorageWorks XP20000/XP24000 Disk Array Configuration Guide (T5278-96), page 64



Within the SAN, the clusters can be homogeneous (all the same operating system) or heterogeneous
(mixed operating systems). How you configure LUN security and fabric zoning depends on the
operating system mix and the SAN configuration.

WARNING! For OpenVMS, HP recommends that a volume be presented to only one OpenVMS
cluster or standalone system at a time. Do not present volumes so that they can move between
standalone systems and/or OpenVMS clusters; doing so can corrupt the OpenVMS volume and
cause data loss.
Fabric zoning and LUN security for multiple operating systems

You can connect multiple clusters with multiple operating systems to the same switch and fabric
using appropriate zoning and LUN security as follows:
• Storage port zones can overlap if more than one operating system needs to share an array port.
• Heterogeneous operating systems can share an array port if you set the appropriate host group
  and mode. All others must connect to a dedicated array port.
• Use LUN Manager for LUN isolation when multiple hosts connect through a shared array port.
  LUN Manager provides LUN security by allowing you to restrict which LUNs each host can
  access.
Table 15 Fabric zoning and LUN security settings (OpenVMS)

Environment: Standalone SAN (non-clustered), Clustered SAN, or Multi-Cluster SAN

  OS Mix                                               Fabric Zoning
  homogeneous (a single OS type present in the SAN)    Not required
  heterogeneous (more than one OS type in the SAN)     Required

  LUN Security: Must be used when multiple hosts or cluster nodes connect
  through a shared port
Configuring FC switches
OpenVMS supports Fibre Channel only in a switched fabric topology. See the switch documentation
to set up the switch.
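As a rough illustration of the kind of zoning setup the switch documentation describes, the following sketch uses Brocade Fabric OS commands; the exact commands differ by switch vendor and firmware, and the zone name, configuration name, and WWPNs shown here are hypothetical.

```text
! Illustrative only: Brocade Fabric OS zoning sketch. Zone/config names and
! WWPNs are hypothetical; consult your switch vendor's documentation.
switch:admin> zonecreate "ovms_node1_xp_cl1a", "50:06:0b:00:00:c2:62:00; 50:06:0e:80:04:77:af:00"
switch:admin> cfgcreate "san_cfg", "ovms_node1_xp_cl1a"
switch:admin> cfgsave
switch:admin> cfgenable "san_cfg"
```

A single-initiator zone like this (one host HBA port plus one array port) is a common practice that keeps each host's fabric traffic isolated even before LUN security is applied.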
Connecting the disk array

The HP service representative connects the disk array to the host by:
1. Verifying operational status of the disk array channel adapters, LDEVs, and paths.
2. Connecting the Fibre Channel cables between the disk array and the fabric switch or host.
3. Creating Fibre Channel zones connecting the host systems to the array ports. See your switch
   manufacturer's documentation for information on setting up zones.
4. Verifying the ready status of the disk array and peripherals.
Verifying disk array device recognition

Verify that the host recognizes the disk array devices:
1. Enter the show device dg command:

   $ show device dg
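For orientation, the output resembles the following sketch. OpenVMS presents Fibre Channel disks as $1$DGAnnnn devices; the specific device names and node name below are hypothetical examples, not output from a particular system.

```text
$ show device dg

Device                  Device           Error
 Name                   Status           Count
$1$DGA100:   (NODE1)    Online               0
$1$DGA101:   (NODE1)    Online               0
```

If an expected device is missing, recheck the fabric zoning and the LUN security (host group) settings on the array port before troubleshooting the host.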