Dell PowerEdge XL 5133-4 MXL 10/40GbE Switch IO Module FTOS Command Reference



Before You Start
DCB Support
DCB enhancements for data center networks are supported to eliminate packet loss and
provision links with required bandwidth.
The Aggregator provides zero-touch configuration for DCB. The Aggregator auto-configures
DCBX port roles as follows:
• Server-facing ports are configured as auto-downstream interfaces.
• Uplink ports are configured as auto-upstream interfaces.
In operation, DCBX auto-configures uplink ports to match the DCB configuration in the
top-of-rack (ToR) switches to which they connect.
The Aggregator supports DCB only in standalone mode; DCB is not supported in stacking
mode.
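Because DCB is configured automatically, the CLI is used only to verify the result, not to enable it. The following minimal check assumes a factory-default Aggregator and uses tengigabitethernet 0/33 as a placeholder uplink port; confirm the exact keywords against the DCB commands in this reference, as they can vary between FTOS releases.

  FTOS# show dcb
  FTOS# show interfaces tengigabitethernet 0/33 dcbx detail

The first command reports whether DCB is enabled on the stack unit; the second shows the auto-assigned DCBX port role (auto-upstream on uplinks, auto-downstream on server-facing ports) and the configuration exchanged with the ToR switch.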
FCoE Connectivity and FIP Snooping
Many data centers use Fibre Channel (FC) in storage area networks (SANs). Fibre Channel
over Ethernet (FCoE) encapsulates Fibre Channel frames over Ethernet networks.
On an Aggregator, the internal ports support FCoE connectivity and connect to the converged
network adapters (CNAs) in the blade servers. FCoE allows Fibre Channel to use 10-Gigabit
Ethernet networks while preserving the Fibre Channel protocol.
The Aggregator also provides zero-touch configuration for FCoE. It auto-configures to match
the FCoE settings used in the ToR switches to which it connects through its uplink ports.
FIP snooping is automatically configured on an Aggregator. The auto-configured port
channel (LAG 128) operates in FCoE forwarder (FCF) port mode.
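FIP snooping likewise needs no setup, and its state can be checked once a server CNA has logged in to the fabric. A minimal sketch, assuming FCoE sessions are already established through port channel 128:

  FTOS# show fip-snooping fcf
  FTOS# show fip-snooping sessions

The first command lists the FCoE forwarders learned on the uplink port channel; the second lists the active sessions between the server ENodes and those FCFs. Command names can differ slightly between releases; the FIP snooping command chapter in this reference has the authoritative forms.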
iSCSI Operation
Support for iSCSI traffic is turned on by default when the Aggregator powers up. No
configuration is required.
When the Aggregator powers up, it monitors known TCP ports for iSCSI storage devices on
all interfaces. When a session is detected, an entry is created and monitored as long as the
session is active.
The Aggregator also detects iSCSI storage devices on all interfaces and auto-configures to
optimize performance. Performance optimizations such as jumbo frame size support, STP
port-state fast, and disabling of storm control are applied automatically to interfaces
connected to an iSCSI storage device.
CLI configuration is necessary only when the configuration includes iSCSI storage devices
that cannot be automatically detected, or when non-default QoS handling is required.
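To confirm that iSCSI optimization is running and that sessions are being tracked, a minimal check such as the following can be used, as documented in the iSCSI commands chapter of this reference.

  FTOS# show iscsi
  FTOS# show iscsi session

The first command displays the global iSCSI optimization status, including the monitored TCP ports; the second lists the iSCSI sessions the Aggregator is currently monitoring.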