IBM BS029ML Self Help Guide - Page 223



Appendix B. Maintenance: Fix strategy, backup strategy, and migration strategy
Our approach to backup
We recommend the following approach to backup:
1. Determine the time of day when the maintenance window takes place, preferably when the load on the cluster is the lowest.
2. Based on your environment, determine the fewest number of Portal nodes that are required to handle the load during this maintenance window.
3. Based on the length of your maintenance window and the minimum number of Portal nodes required to handle the load, determine the architecture of your backup procedure. For example, if you have a maintenance window of two hours for a 10-node cluster, you will need a minimum of three Portal nodes up to meet the average load requirements for this time period. If you assume that you can back up the file systems in 30 minutes, you can then break the backup into two sections: bring down five Portal nodes, make backups, and bring them back online; then take down the other five nodes and make backups. This is the quickest approach in a 24x7 environment, because you have divided your backup process into two sections. However, if you have a nine-node cluster and the load requires six nodes to be up, you will have to divide the backup into three sections. Depending on the speed of your backup process, you might need to extend the maintenance window in this situation.
For this example, we divide the backups into two sections of five nodes each.
4. Stop the individual Portal application servers on nodes 1 through 5 using the Deployment Manager Administrative Console.
5. Stop the node agents for nodes 1 through 5 using the Deployment Manager Administrative Console.
6. Make sure no servers are running on nodes 1 through 5 by using the serverStatus.sh/bat command or by checking for running Java processes.
7. Make file system backups on each node, 1 through 5, of the WebSphere Application Server and WebSphere Portal root directories.
8. Start the node agents from the command line on nodes 1 through 5 after the file system backups are complete.
9. Synchronize the nodes through the Deployment Manager Administrative Console.
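The sizing arithmetic in step 3 can be sketched as a small calculation. This is an illustrative helper, not part of any WebSphere tooling; the function name and interface are our own. The largest batch you can take offline at once is the cluster size minus the minimum number of nodes that must stay up, and the number of sections follows from that:

```python
import math

def backup_plan(total_nodes, min_nodes_up, backup_minutes):
    """Split a cluster backup into sections so that at least
    min_nodes_up nodes stay online at all times.

    Returns (number_of_sections, total_backup_minutes).
    """
    # Nodes we may take offline simultaneously.
    max_down = total_nodes - min_nodes_up
    if max_down <= 0:
        raise ValueError("Load requirements leave no nodes free for backup")
    sections = math.ceil(total_nodes / max_down)
    return sections, sections * backup_minutes

# 10-node cluster, 3 nodes required, 30-minute file system backups:
print(backup_plan(10, 3, 30))  # -> (2, 60): two sections of five, one hour total
# 9-node cluster, 6 nodes required:
print(backup_plan(9, 6, 30))   # -> (3, 90): three sections, 90 minutes
```

Both examples from step 3 fall out directly: the 10-node cluster fits two sections comfortably inside a two-hour window, while the nine-node cluster needs three sections, which is why a slow backup process might force you to extend the window.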
Important: XMLAccess does not play a part in our backup approach. XMLAccess is not a tool designed for full backup purposes; it is a tool designed for deploying Portal artifacts from one Portal environment to another. For example, you can use XMLAccess to move Portal artifacts from your staging environment into your production environment once the Portal configuration has been thoroughly tested in staging.
While XMLAccess does have features that can play a role in some backup situations, you should not rely on an XMLAccess export in a disaster recovery scenario. Thus, we have left XMLAccess out of the discussion of WebSphere Portal disaster recovery to avoid giving a false impression of the tool's capabilities.
Note: Ensure that you take steps to stop all Web traffic to the nodes that will be undergoing maintenance before you stop the Portal application servers.