HP 1032 ClusterPack V2.4 Tutorial - Page 53

Upgrading from V2.2 to V2.4



1.4.5 Upgrading from V2.2 to V2.4
ClusterPack V2.4 supports an upgrade path from ClusterPack V2.2. Customers currently
deploying ClusterPack V2.2 on HP Integrity servers run HP-UX 11i Version 2.0 TCOE.
ClusterPack V2.4 provides a mechanism to reuse the majority of the V2.2 configuration
settings in the V2.4 configuration.
Before starting the upgrade, it is important to have all of your Compute Nodes in good working
order. All Compute Nodes and MP cards should be accessible. The LSF queues (if in use)
should be empty of all jobs, and the nodes should be idle.
Instructions for upgrading from V2.2 to V2.4:
1. Back up the cluster user-level data.
2. Install the V2.4 backup utilities.
   % swinstall -s <depot_with_V2.4> CPACK-BACKUP
3. Take a backup of the cluster information.
   % /opt/clusterpack/bin/clbackup -f <backup_file_name>
4. Copy the backup file to another system for safekeeping.
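Copying the backup off-cluster can be as simple as an scp. A minimal sketch of keeping a dated copy (the file path, date suffix, and remote destination below are illustrative assumptions, not from the tutorial):

```shell
# Keep a dated copy of the clbackup output for safekeeping.
# /tmp/clusterpack.bak stands in for your actual <backup_file_name>.
BACKUP=/tmp/clusterpack.bak
echo "demo backup contents" > "$BACKUP"   # placeholder for the real backup file
DEST="$BACKUP.$(date +%Y%m%d)"
cp "$BACKUP" "$DEST"
# On a real cluster you would copy it to another system instead, e.g.:
#   scp "$BACKUP" othersys:/safe/place/
ls "$DEST"
```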
5. Remove the TCP wrappers on your Compute Nodes.
   % clsh /usr/bin/perl -p -i -e "'s^ /usr/lbin/tcpd^^;'" /etc/inetd.conf
6. Remove the Compute Nodes from the Systems Inventory Manager database.
   % /opt/sysinvmgr/bin/simdevice -r `/opt/sysinvmgr/bin/simdevice -l | egrep ^Name: | awk '{print "-n", $2}' | grep \.`
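The backquoted pipeline in the step above simply turns `simdevice -l` output into a list of `-n <name>` arguments for `simdevice -r`. A sketch with canned output (the exact `simdevice -l` format shown here is an assumption for illustration, and the final grep is quoted so the dot is matched literally):

```shell
# Canned "simdevice -l" output: one Name: line per registered device.
cat > /tmp/simdevice.out <<'EOF'
Name: node01.hp.com
Type: server
Name: node02.hp.com
Type: server
Name: manager
Type: server
EOF
# Keep the Name: lines, rewrite each as "-n <name>", then keep only
# fully qualified names (those containing a dot) -- which selects the
# Compute Nodes and drops the manager's short hostname.
egrep '^Name:' /tmp/simdevice.out | awk '{print "-n", $2}' | grep '\.'
```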
7. Install the new ClusterPack manager software.
   % swinstall -s <depot_with_V2.4> CPACK-MGR
8. Run manager_config in upgrade mode using the file you created in Step 3.
   % /opt/clusterpack/bin/manager_config -u <backup_file_name>
9. Register your MP cards. (To save time, check out the new -f option to compute_config.)
   % /opt/clusterpack/bin/mp_register
10. Install the new software on the Compute Nodes. (The -u option is important.)