Dell DX6004S DX Object Storage Administration Guide - Page 60

C.2.2. Upgrade Steps, C.2.2.1. Example Shutdown Script, C.2.2.2. Cluster Reboot



Copyright © 2010 Caringo, Inc.
All rights reserved
55
Version 5.0
December 2010
differences, and any issues that could change the manner in which the DX Storage cluster will
process and store data.
If you are using USB boot devices, you can remove them from the running nodes to view and back
up the configuration and license files. The sticks or configuration server can then be updated
using the instructions in the README.txt file found on the ISO update CD for the new version of
DX Storage. After performing the upgrade, validate the node.cfg file to ensure there
are no deprecated parameters that need to be removed. See the release notes and Appendix A:
Node Configuration for information on any changes to the parameters in the node and/or cluster
configuration files.
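The backup step above can be sketched as a short script. This is only an illustration: the /mnt/usb mount point and the *.lic license file pattern are assumptions, so adjust both to match your stick's actual layout.

```shell
#!/bin/sh
# Back up the configuration and license files from a node's USB boot
# device before updating it. MNT and the *.lic glob are assumptions;
# adjust them to match how your stick is mounted and laid out.
MNT=/mnt/usb
BACKUP=./dx-config-backup/$(date +%Y%m%d)
mkdir -p "$BACKUP"
cp "$MNT/node.cfg" "$BACKUP/" 2>/dev/null || echo "node.cfg not found under $MNT"
cp "$MNT"/*.lic "$BACKUP/" 2>/dev/null || echo "no license file found under $MNT"
```

Keeping one dated backup directory per upgrade also gives you a record of which node.cfg belonged to which upgrade window.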
Once all upgrades and validations have been completed, return each USB device to the node from
which it was removed. Take care to match each USB device to its original node in the cluster so
that the “vols” parameter, which defines the storage devices, matches the correct node.
Before any node upgrade is performed, the administrator should verify the cluster’s health by
checking for critical error messages on each node’s status page or in the SNMP CastorErrTable OID.
This ensures there are no hardware problems that could interrupt the upgrade process. Correct any
problems before upgrading.
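One way to script this pre-upgrade check is to walk the CastorErrTable on every node over SNMP, reusing the community string and MIB from the example shutdown script later in this appendix. This is a sketch, not part of the product: the interpretation that any returned rows warrant investigation is an assumption, and a query failure is treated as its own warning.

```shell
#!/bin/sh
# Pre-upgrade health check: walk each node's SNMP CastorErrTable and
# flag nodes that report critical errors. The community string "pwd"
# matches the example shutdown script; change it for your cluster.
NODES="192.168.1.101 192.168.1.102 192.168.1.103"
for n in $NODES; do
    if ! errs=$(snmpwalk -v 1 -c pwd -m +CASTOR-MIB "$n" \
        caringo.castor.CastorErrTable 2>/dev/null); then
        echo "ERROR: could not query $n; check connectivity before upgrading"
    elif [ -n "$errs" ]; then
        echo "WARNING: $n reports errors; resolve before upgrading"
    else
        echo "OK: $n reports no critical errors"
    fi
done
```

Running this from the same host you use for the shutdown script keeps the SNMP credentials in one place.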
C.2.2. Upgrade Steps
1. Shut down all cluster nodes (or one at a time for a rolling upgrade)
2. Install updated USB boot devices or updated ISO on your PXE boot server
3. Reboot all nodes
4. Verify correct operation
A simultaneous shutdown of the cluster is the first step in a simple upgrade. For customers who
cannot tolerate downtime, the nodes can instead be rebooted one at a time in a rolling upgrade so
the cluster remains online. During a rolling upgrade, the remaining nodes detect the absence of
the single upgrading node and begin trying to recover its missing content. This recovery halts
when the upgraded node comes back online. Best practice is to prepare the upgraded sticks in
advance so the upgrading node's downtime is minimal and there is no risk of the remaining nodes
filling their disks with recovered content. If all the nodes are shut down within several seconds
of each other, initiation of the disk recovery process is not a concern.
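The rolling variant can be sketched as a loop that shuts down one node, waits for the operator to swap in the prepared stick and reboot, and confirms the node is reachable again before touching the next one. This is a sketch only: the ping-based liveness check and the DRYRUN guard are assumptions, while the snmpset invocation mirrors the example shutdown script below.

```shell
#!/bin/sh
# Rolling-upgrade sketch: take nodes down one at a time so the cluster
# stays online. DRYRUN defaults to on and only prints the commands;
# set DRYRUN= (empty) to execute them for real.
DRYRUN=${DRYRUN-1}
run() { if [ -n "$DRYRUN" ]; then echo "+ $*"; else "$@"; fi; }

NODES="192.168.1.101 192.168.1.102 192.168.1.103"
for n in $NODES; do
    run snmpset -v 1 -c pwd -m +CASTOR-MIB "$n" \
        caringo.castor.CastorShutdownAction = "shutdown"
    # Swap in the pre-prepared USB stick and reboot the node here, then
    # wait until it answers pings again before shutting down the next one.
    until run ping -c 1 -W 2 "$n"; do
        sleep 5
    done
done
```

Gating each shutdown on the previous node's return guarantees that at most one node is ever absent, which keeps the recovery load on the rest of the cluster bounded.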
C.2.2.1. Example Shutdown Script
This Unix shell script demonstrates a method of issuing the shutdown command to all cluster nodes.
In this example, all the nodes of the cluster are defined in the NODES variable.
NODES="192.168.1.101 192.168.1.102 192.168.1.103"
for n in $NODES; do
    snmpset -v 1 -c pwd -m +CASTOR-MIB $n \
        caringo.castor.CastorShutdownAction = "shutdown"
done
C.2.2.2. Cluster Reboot
After the cluster has been shut down, ensure that the updated USB sticks or the configuration
server is prepared, and then begin the reboot process. The recommended power-on sequence is to start