D-Link DWS-3160-24TC DWS-3160 Series Web UI Reference Guide - Page 502

Appendix F Wireless Switch Specific



Captive Portal Guidelines
Authenticated Roaming and Clustering:
In addition to the generic implementation, Captive Portal provides two key features for wireless networks: authenticated roaming and clustering.
1. Authenticated roaming allows a client to roam from access point to access point seamlessly while remaining authenticated.
2. Clustering provides roaming between access points attached to different switches and allows Captive Portal status for all switches to be monitored from the Cluster Controller.
The Switches in the cluster must share the same Captive Portal settings, such as Captive Portal Configuration instances, associated interfaces, the local user database, and RADIUS server settings. The databases should be synchronized across the cluster to support authenticated client roaming.
Cluster Controller Election
Each Switch in the peer group makes an independent decision about which Switch is the Cluster Controller. If a Switch does not have any peer Switches, it appoints itself the Cluster Controller.
When two Switches detect each other through the discovery process, they compare the value of the Cluster priority field. The Switch with the higher priority becomes the Cluster Controller. If the priorities are equal, the Switch with the lower IP address becomes the Cluster Controller. The Cluster priority is conveyed in the initial identification message.
The Cluster priority has a range from 0 to 255. Setting the priority to 0 disables the Cluster Controller function on the Switch. Customers may want to prevent low-end Switches from becoming the Cluster Controller when deploying a large network where only a high-end switch or network appliance is powerful enough to act as the Cluster Controller.
The administrator may change the Switch's Cluster priority value after the Switch has already joined the peer group. The Cluster priority is also conveyed in the keep-alive message, enabling the peer Switches to learn the new Cluster priority of the Switch.
A Switch performs the election process after it boots, after it loses connection to the current Cluster Controller, and
every time it receives an initial identification message or a keep-alive message from another Switch. The Switch
keeps a list of Cluster priorities and IP addresses for each peer Switch and elects the Cluster Controller based on
the criteria described above.
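The election criteria above can be sketched as a small selection routine. This is an illustrative sketch only, not the switch firmware; the function name and the peer list representation are assumptions made for the example.

```python
import ipaddress

def elect_cluster_controller(peers):
    """Pick the Cluster Controller from a list of (priority, ip) tuples.

    Hypothetical sketch of the election criteria described in the text:
    a priority of 0 disables the Cluster Controller function, so those
    Switches are excluded; the highest priority wins; on a tie, the
    lowest IP address wins.
    """
    # Exclude Switches with priority 0 and parse IPs for numeric comparison.
    candidates = [(prio, ipaddress.ip_address(ip))
                  for prio, ip in peers if prio > 0]
    if not candidates:
        return None  # no Switch is eligible to act as Cluster Controller
    # Sort by descending priority, then ascending IP address.
    candidates.sort(key=lambda c: (-c[0], c[1]))
    return str(candidates[0][1])
```

Each Switch runs this decision independently over its current view of the peer group, which is why two Switches with different views can transiently disagree.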
If a Cluster Controller Switch decides that it is no longer the controller because it receives a message from another Switch with a higher Cluster priority or a lower IP address, it purges some of its databases.
The decision to transition out of the Cluster Controller state is immediate. If the Switch elects itself as the Cluster Controller, the transition is also immediate. If the Switch elects another Switch as the Cluster Controller, the decision to declare that Switch the Cluster Controller is delayed for the duration of the keep-alive timer interval. If another Cluster Controller candidate is detected during this interval, the delay timer is restarted. An administrator looking at the Switch status during the delay period would see that the Switch is not the Cluster Controller and that the Cluster Controller address is 0.0.0.0. In this release, the keep-alive timer interval is fixed at 120 seconds.
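The delayed-declaration behavior can be illustrated with a small state machine. This is a sketch under stated assumptions, not the actual firmware: the class, method names, and the use of explicit timestamps are invented for the example. It shows self-election taking effect immediately, a peer's election being held for one keep-alive interval, and the delay restarting when a different candidate appears.

```python
import time

KEEP_ALIVE_INTERVAL = 120  # seconds; fixed in this release

class ControllerState:
    """Illustrative model of the delayed Cluster Controller declaration."""

    def __init__(self, my_ip):
        self.my_ip = my_ip
        self.controller_ip = "0.0.0.0"  # what the administrator sees during the delay
        self._pending = None            # peer elected but not yet declared
        self._deadline = 0.0

    def on_election_result(self, winner_ip, now=None):
        now = time.monotonic() if now is None else now
        if winner_ip == self.my_ip:
            self.controller_ip = self.my_ip  # electing itself: immediate
            self._pending = None
        elif winner_ip != self._pending:
            self._pending = winner_ip        # a different candidate (re)starts the delay
            self._deadline = now + KEEP_ALIVE_INTERVAL
            self.controller_ip = "0.0.0.0"

    def poll(self, now=None):
        now = time.monotonic() if now is None else now
        if self._pending and now >= self._deadline:
            self.controller_ip = self._pending  # delay expired: declare the peer
            self._pending = None
        return self.controller_ip
```

In this model, re-electing the same pending peer does not restart the timer; only the appearance of a different candidate does, matching the restart rule described above.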
Each peer Switch independently establishes connections with other peer Switches. In a transient case, a Switch that has just established a connection with another Switch may not yet see all the Switches that the other Switch sees, so the two Switches may select different Cluster Controllers. Although the WIDS security functions do not work correctly when peer Switches disagree about which Switch is the Cluster Controller, this condition does not affect data forwarding through the network, and normal operation is restored as soon as all the Switches in the peer group discover each other.