• When all nodes in an EG (HA Cluster or DEK Cluster) are powered down (due to a catastrophic disaster or a power outage in the data center) and the nodes later come back online while the Group Leader (GL) node either fails to come back up or is kept powered down, the member nodes lose all knowledge of the EG. As a result, no crypto operations or commands (except node initialization) are available on the member nodes after the power cycle. This condition persists until the GL node is back online.

  Workaround: In the case of a data center power down, bring the GL node online first, before the other member nodes are brought back up.

  If the GL node fails to come back up, it can be replaced with a new node. The following procedures allow the EG to continue functioning with the existing member nodes and replace the failed GL node with a new node; example commands follow these procedures.

  Make one of the existing member nodes the Group Leader node and continue operations:
  1. On one of the member nodes, create the EG with the same EG name. This makes that node the GL node, and the CTC and Tape Pool related configurations remain intact in this EG.
  2. For any containers hosted on the failed GL node, issue cryptocfg --replace to change the WWN association of those containers from the failed GL node to the new GL node.

  Replace the failed GL node with a new node:
  1. On the new node, follow the switch/node initialization steps.
  2. Create an EG on this new switch/node with the same EG name as before.
  3. Perform a configdownload to the new GL node of a configuration file for the EG previously uploaded from the old GL node.
  4. For any containers hosted on the failed GL node, issue cryptocfg --replace to change the WWN association of those containers from the failed GL node to the new GL node.
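  The commands below illustrate the general shape of this recovery on a replacement node. This is a minimal sketch: the EG name and WWNs are placeholders, the initialization sequence is an assumed typical one, and the exact cryptocfg --replace arguments should be confirmed against the Fabric OS Encryption Administrator's Guide for your release.

    # Initialize the new switch/node (assumed typical sequence; bladed directors also take a slot number):
    cryptocfg --initnode
    cryptocfg --initEE
    cryptocfg --regEE

    # Re-create the encryption group using the original EG name ("brocade_eg1" is a placeholder):
    cryptocfg --create -encgroup brocade_eg1

    # Restore the previously uploaded EG configuration file; run interactively and
    # supply the host, user name, and file name when prompted:
    configdownload

    # Re-associate containers hosted on the failed GL node with the new GL node
    # (WWNs are placeholders; verify the exact --replace syntax for your release):
    cryptocfg --replace 10:00:00:05:1e:xx:xx:xx 10:00:00:05:1e:yy:yy:yy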
• During an online upgrade from Fabric OS 6.2.0x to 6.3.0, the I/O link status is expected to be reported as Unreachable when the cryptocfg command is invoked. Once all the nodes are upgraded to Fabric OS 6.3.0, the command accurately reflects the status of the I/O link. Disregard the I/O link status during the code upgrade process.
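  As an illustration of where this status appears, the encryption group member display typically includes it; the exact subcommand that reports the I/O link can vary by release, so treat the following as an assumed example rather than the documented procedure.

    # Assumed example: member output may show the I/O link state.
    # In a mixed 6.2.0x/6.3.0 fabric this can read "Unreachable" and should be ignored.
    cryptocfg --show -groupmember -all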
• The -key_lifespan option has no effect for cryptocfg --add -LUN, and only has an effect for cryptocfg --create -tapepool when the tape pool is declared with -encryption_format native. For all other encryption cases, a new key is generated each time a medium is rewound and block zero is written or overwritten. For the same reason, the Key Life field in the output of cryptocfg --show -container -all -stat should always be ignored, and the Key Life field in cryptocfg --show -tapepool -cfg is significant only for native-encrypted pools.
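  For illustration, a native-format tape pool where -key_lifespan takes effect might be created as follows; the pool label and the 90-day value are placeholders, and the exact option order and units should be checked against the Encryption Administrator's Guide.

    # Create a tape pool in native encryption format with an explicit key lifespan
    # ("backup_pool1" and 90 days are assumed example values):
    cryptocfg --create -tapepool backup_pool1 -encryption_format native -key_lifespan 90

    # The Key Life field shown here is meaningful only for native-encrypted pools:
    cryptocfg --show -tapepool -cfg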
• The Quorum Authentication feature requires DCFM 10.3.0 or later. Note that all nodes in the EG must be running Fabric OS 6.3.0 for quorum authentication to be properly supported.
• In a DC SAN Director or DC04 SAN Director with Fabric OS 6.3.0 and DC Switch encryption FC blades installed, you must set the quorum size to zero and disable the system card on the blade prior to downgrading to a Fabric OS version earlier than 6.3.0.
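  As a sketch of these pre-downgrade steps, the quorum size and system card setting are changed with cryptocfg; the flag spellings below are assumptions and should be verified in the Encryption Administrator's Guide before use.

    # Assumed example commands to run before downgrading below Fabric OS 6.3.0:
    cryptocfg --set -quorumsize 0
    cryptocfg --set -systemcard disable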
• The System Card feature requires DCFM 10.3.0 or later. Note that all nodes in the EG must be running Fabric OS 6.3.0 for system verification to be properly supported.
• The Encryption SAN Switch and Encryption FC blade do not support QoS. When using encryption or Frame Redirection, participating flows should not be included in QoS Zones.
• HP encryption devices can be configured for either disk or tape operation. Configuring multiple Crypto-Target Containers that define different media types on a single encryption engine (Encryption SAN Switch or Encryption FC blade) is not supported. Encryption FC blades can, however, be configured to support different media types within a common DC SAN Director/DC04 SAN Director chassis.