HP Brocade 8/12c HP Fabric OS 6.2.2f Release Notes (5697-1756, February 2012) - Page 34



• The HP Encryption Switch and HP Encryption blade support registration of only one HP SKM key vault for Fabric OS 6.2.2x. Multiple HP SKM key vaults can be clustered at the SKM server level. Registration of a second SKM key vault is not blocked. If the connection to the registered key vault goes down, or the registered key vault itself fails, you must either restore the connection to the key vault, or replace the failed SKM and re-register it (deregister the failed SKM entry and register the new SKM entry) on the HP Encryption Switch or HP Encryption blade. You must ensure that the key database of the replacement (new) SKM key vault is in sync with the rest of the SKM units in the cluster (manually synchronize the key database from an existing SKM key vault in the cluster to the new or replacement SKM key vault using the key synchronization methods described in the SKM Admin Guide). An illustrative CLI sketch of the re-registration step follows this item.
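  The following sketch shows how the deregister/register step might look from the Fabric OS CLI on the encryption group leader. The key vault labels, certificate file name, and IP address are placeholders, and the exact option syntax should be confirmed against the Encryption Admin Guide for your release; this is an assumption-based example, not a definitive procedure.

     # Remove the entry for the failed SKM key vault
     # (SKM_A is a placeholder certificate label).
     cryptocfg --dereg -keyvault SKM_A

     # Register the replacement SKM appliance with its certificate and IP address
     # as the primary key vault (label, file, and address are placeholders).
     cryptocfg --reg -keyvault SKM_B skm_b_cert.pem 10.10.10.21 primary

     # Verify that the key vault shows as connected before resuming crypto operations.
     cryptocfg --show -groupcfg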
• The SKM is supported with multiple nodes and dual SKM key vaults. Two-way certificate exchange is supported. See the Encryption Admin Guide for configuration information.
• Direct FICON device connectivity is not supported on the front-end user ports of the HP Encryption Switch or HP Encryption Blade. FICON devices are also not supported as part of encryption or clear-text flows, which means FICON devices cannot be configured as Crypto Target Containers on the encryption switch or blade.
• Ensure that all encryption engines in the HA cluster (HAC), Data Encryption Key (DEK) cluster, or encryption group are online before invoking or starting rekey operations on LUNs. Also ensure that all target paths for a LUN are online before invoking or starting rekey operations on that LUN. A sketch of this kind of readiness check appears after the next item.
• If writes are issued to a LUN that is undergoing first-time encryption or rekeying, you may see failed I/Os. HP recommends that host I/O operations be quiesced and not restarted until the rekey or first-time encryption operations for the LUN are complete.
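  As referenced above, the following is a minimal CLI sketch of a readiness check before starting a manual rekey. It assumes the standard cryptocfg --show views and the cryptocfg --manual_rekey command; the container name, LUN number, and initiator PWWN are placeholders, so verify the exact syntax in the Encryption Admin Guide before use.

     # Confirm every member node and encryption engine in the encryption group is online.
     cryptocfg --show -groupmember -all

     # Confirm the HA cluster (if configured) reports its encryption engines as online.
     cryptocfg --show -hacluster -all

     # Confirm the container and its target/initiator paths are online
     # (myContainer is an illustrative container name).
     cryptocfg --show -container myContainer -stat

     # Only after the checks above pass, and with host I/O quiesced, start the rekey
     # (LUN 0 and the initiator PWWN below are placeholders).
     cryptocfg --manual_rekey myContainer 0 10:00:00:00:c9:2b:c9:3a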
• When all nodes in an EG (HA cluster or DEK cluster) are powered down (for example, due to a catastrophic disaster or a power outage in the data center) and the nodes later come back online, but the Group Leader (GL) node fails to come back up or is kept powered down, the member nodes lose their information about the EG. As a result, no crypto operations or commands (except node initialization) are available on the member nodes after the power cycle. This condition persists until the GL node is back online.
  ◦ Workaround: In the case of a data center power-down, bring the GL node online first, before the other member nodes are brought back up. If the GL node fails to come back up, it can be replaced with a new node. The following procedures allow an EG to continue functioning with the existing member nodes and to replace the failed GL node with a new node.
  ◦ Make one of the existing member nodes the Group Leader node and continue operations (an illustrative CLI sketch follows these steps):
    1. On one of the member nodes, create the EG with the same EG name. This makes that node the GL node, and the rest of the Crypto Target Container configurations remain intact in this EG.
    2. For any containers hosted on the failed GL node, issue cryptocfg --replace to change the WWN association of those containers from the failed GL node to the new GL node.
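  The sketch below illustrates what these two steps might look like on the member node being promoted. The encryption group name and WWNs are placeholders, and the exact cryptocfg --create and cryptocfg --replace argument syntax should be confirmed in the Encryption Admin Guide; this is a hedged example under those assumptions.

     # Step 1: on the chosen member node, recreate the encryption group with the
     # original group name (brocade_eg is a placeholder name).
     cryptocfg --create -encgroup brocade_eg

     # Step 2: re-home containers from the failed GL node to this node
     # (the two WWNs below are placeholders for the failed and new GL node WWNs).
     cryptocfg --replace 10:00:00:05:1e:41:9a:7e 10:00:00:05:1e:39:53:67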
  ◦ Replace the failed GL node with a new node (see the sketch after these steps):
    1. On the new node, follow the switch/node initialization steps.
    2. Create an EG on this new switch/node with the same EG name as before.
    3. Perform a configdownload to the new GL node of a configuration file for the EG that was previously uploaded from the old GL node.
    4. For any containers hosted on the failed GL node, issue cryptocfg --replace to change the WWN association of those containers from the failed GL node to the new GL node.
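  The following is a hedged end-to-end sketch of the four steps above, assuming the standard node initialization commands (cryptocfg --initnode, --initEE, --regEE), the encryption group creation syntax shown earlier, and a previously uploaded EG configuration file. Group names and WWNs are placeholders (a slot number may be required for --initEE and --regEE on a blade), and the configdownload parameters are site-specific; confirm the exact syntax for your release in the Encryption Admin Guide.

     # Step 1: initialize the replacement switch/node and its encryption engine.
     cryptocfg --initnode
     cryptocfg --initEE
     cryptocfg --regEE

     # Step 2: recreate the encryption group with the original group name
     # (brocade_eg is a placeholder).
     cryptocfg --create -encgroup brocade_eg

     # Step 3: download the EG configuration previously uploaded from the old GL node.
     # Run interactively; configdownload prompts for the protocol, host, user name,
     # file path, and password (all site-specific values).
     configdownload

     # Step 4: re-home containers from the failed GL node to the new GL node
     # (placeholder WWNs).
     cryptocfg --replace 10:00:00:05:1e:41:9a:7e 10:00:00:05:1e:53:8a:10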