• Two independent WebSphere Cells (Cell A and Cell B).
• Each WebSphere Portal Server cluster consists of at least two physical nodes per cluster or cell, so that each cluster is highly available in its own right.
• The WebSphere Plug-in resident in each HTTP Server routes requests only to the cluster members of its immediate Portal Cluster (see the plugin-cfg.xml sketch after this list).
• Two independent WebSphere Network Deployment Manager (Deployment Manager) instances, one per WebSphere Cell, are collocated on the same physical node.
• A separate release database domain (releaseAusr and releaseBusr) exists for each WebSphere Portal Server cluster or "Line of Production" (Portal Cluster A and Portal Cluster B), maintaining independent configuration data for each (see the properties sketch after this list).
• The remaining database domains (communityusr, customizationusr, wmmusr, fdbkusr, lmdbusr, and jcr) are shared between the WebSphere Portal Server clusters or "Lines of Production" to maintain a consistent user experience. Note that the JCR repository resides in a separate database.
• The environment also hosts a highly available LDAP directory server (not shown) that maintains the registered user base.
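As an illustration of the routing constraint described above, the following plugin-cfg.xml fragment is a minimal sketch of how the plug-in serving Cluster A might be scoped to its own members only. The node names, host names, and port are assumptions for illustration; a generated plugin-cfg.xml carries many additional attributes:

   <ServerCluster Name="PortalClusterA" LoadBalance="Round Robin">
      <!-- Only the two members of Portal Cluster A appear as routing targets -->
      <Server Name="nodeA1_WebSphere_Portal">
         <Transport Hostname="nodeA1.example.com" Port="10038" Protocol="http"/>
      </Server>
      <Server Name="nodeA2_WebSphere_Portal">
         <Transport Hostname="nodeA2.example.com" Port="10038" Protocol="http"/>
      </Server>
   </ServerCluster>
   <UriGroup Name="PortalClusterA_URIs">
      <Uri Name="/wps/*"/>
   </UriGroup>
   <Route ServerCluster="PortalClusterA" UriGroup="PortalClusterA_URIs"/>

The split between the per-cluster release domain and the shared domains can likewise be sketched as wpconfig_dbdomain.properties values for Cell A. The property name pattern follows the WebSphere Portal V6.0 database domain conventions, but the database names, users, and JDBC URL shown here are hypothetical:

   # Release domain: unique to this Line of Production (Cell B points to its
   # own release database with user releaseBusr instead)
   release.DbType=db2
   release.DbName=reldbA
   release.DbUser=releaseAusr
   release.DbUrl=jdbc:db2://dbserver.example.com:50000/reldbA

   # Shared domains: configured identically in both cells
   community.DbUser=communityusr
   customization.DbUser=customizationusr
   wmm.DbUser=wmmusr
   feedback.DbUser=fdbkusr
   likeminds.DbUser=lmdbusr

   # JCR domain: also shared, but held in a separate database
   jcr.DbName=jcrdb
   jcr.DbUser=jcr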
It is worth noting that a dual clustered architecture requires roughly twice as much administration as a single clustered deployment. Furthermore, in order to keep each "Line of Production" in synchronization, a staging environment plays an important part in preparing build promotions. Tools such as XMLAccess and Release Builder must be used to ensure consistency between the different "Lines of Production" or clusters, as sketched below.
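The following command sequence is a minimal sketch of such a build promotion, assuming hypothetical host names, the default WebSphere Portal V6.0 port, and an administrator ID of wpsadmin; verify the exact script locations and flags against your installation:

   # 1. Export the full release configuration from the staging server
   ./xmlaccess.sh -user wpsadmin -password mypassword \
      -url http://staging.example.com:10038/wps/config \
      -in ExportRelease.xml -out stagingExport.xml

   # 2. Use Release Builder to compute the delta between the previous and
   #    current release exports
   ./releasebuilder.sh -inOld previousExport.xml -inNew stagingExport.xml \
      -out releaseDelta.xml

   # 3. Import the same delta into each Line of Production so that both
   #    clusters carry identical release data
   ./xmlaccess.sh -user wpsadmin -password mypassword \
      -url http://portalA.example.com:10038/wps/config \
      -in releaseDelta.xml -out importResultA.xml
   ./xmlaccess.sh -user wpsadmin -password mypassword \
      -url http://portalB.example.com:10038/wps/config \
      -in releaseDelta.xml -out importResultB.xml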
The geographically deployed architecture
As WebSphere Portal Server has evolved, one continually requested capability has been the ability to deploy the architecture in a geographically distributed fashion. With the release of WebSphere Portal Server V6.0.x, this is now possible. Such a requirement, however, raises the question of how best to design an operational architecture that caters for such a "global deployment".
Not every WebSphere Portal Server deployment with a geographically scattered workforce requires that the physical servers themselves be geographically dispersed. Indeed, many internet-facing Portals may be country or region specific, yet by the very nature of the internet are accessible worldwide. However, when the demands of an implementation extend to the very integration points that a Portal brings together, partitioning across geographies becomes a necessity. Considerations such as high availability and disaster recovery also argue for a distributed implementation. For example, partitioning a WebSphere Portal Server deployment between Europe and North America becomes a prerequisite when each major geography maintains the local services and back-end systems that are accessed through the Portal. A geographically deployed architecture can also be considered in terms of a split between multiple data centers, even when those data centers are in close proximity to one another (across the street or across a city).
The introduction of database domains in WebSphere Portal Server V6.0.x made the permissible operational architecture considerably more flexible. In particular, the distinction between release, community, and user customization data has made it possible to achieve a truly "global deployment". Readers familiar with previous versions of WebSphere Portal Server will recall that it was not possible to split the Portal database between multiple redundant clusters, potentially located in different geographies, while maintaining a consistent user experience. Indeed, such an architecture, when deployed, sacrificed the ability for a user to make any customization or personalization modifications, as the changes simply could not