Dell PowerEdge FN I/O Aggregator Configuration Guide 9.6(0.0)


ETS is enabled by default with the default ETS configuration applied (all dot1p priorities in the same
group with equal bandwidth allocation).
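
As a rough illustration, this default state can be modeled as a single priority group owning the full link bandwidth, shared equally by all eight dot1p priorities. The Python sketch below is a hypothetical representation for illustration only, not an actual Dell Networking OS data structure:

```python
# Hypothetical model of the default ETS configuration described above:
# every dot1p priority (0-7) sits in one priority group, and the group's
# bandwidth is shared equally among them.

DOT1P_PRIORITIES = range(8)
default_ets = {
    "priority_group": {p: 0 for p in DOT1P_PRIORITIES},  # all priorities -> group 0
    "group_bandwidth_pct": {0: 100},                     # group 0 owns the whole link
}

per_priority_share = default_ets["group_bandwidth_pct"][0] / len(DOT1P_PRIORITIES)
print(f"each dot1p priority gets {per_priority_share:.1f}% under ETS defaults")  # 12.5%
```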
ETS Operation with DCBX
In DCBX negotiation with peer ETS devices, ETS configuration is handled as follows:
• ETS TLVs are supported in DCBX versions CIN, CEE, and IEEE2.5.
• ETS operational parameters are determined by the DCBX port-role configurations.
• ETS configurations received in TLVs from a peer are validated.
• In case of a hardware limitation or TLV error:
  - DCBX operation on an ETS port goes down.
  - New ETS configurations are ignored, and the existing ETS configuration is reset to the previously configured ETS output policy on the port, or to the default ETS settings if no ETS output policy was previously applied.
• ETS operates with legacy DCBX versions as follows:
  - In the CEE version, priority group/traffic class group (TCG) ID 15 represents a non-ETS priority group. Any priority group configured with a scheduler type is treated as a strict-priority group and is given priority-group (TCG) ID 15.
  - The CIN version supports two types of strict-priority scheduling (modeled in the sketch after this list):
    * Group strict priority: allows a single priority flow in a priority group to increase its bandwidth usage up to the total bandwidth allocated to the group; that is, one flow can consume the group's entire allocation.
    * Link strict priority: allows a flow in any priority group to increase its bandwidth usage up to the maximum link bandwidth.
    CIN supports only the default dot1p priority-queue assignment in a priority group.
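
The difference between the two CIN scheduling modes comes down to the cap applied to a single flow. The following minimal Python sketch models that cap; the names (PriorityGroup, max_flow_bandwidth, the example groups) are hypothetical and for illustration only, not part of any Dell Networking OS interface:

```python
# Minimal illustrative model of the two CIN strict-priority modes described
# above. Names and structure are invented for illustration; they do not
# correspond to any Dell Networking OS API.

from dataclasses import dataclass

LINK_BANDWIDTH_PCT = 100  # total link bandwidth, expressed as a percentage

@dataclass
class PriorityGroup:
    tcg_id: int
    bandwidth_pct: int  # ETS bandwidth allocated to this priority group

def max_flow_bandwidth(group: PriorityGroup, mode: str) -> int:
    """Upper bound on the bandwidth a single flow in `group` may consume."""
    if mode == "group-strict":
        # Group strict priority: a single flow may grow to use the entire
        # allocation of its own priority group, but no more.
        return group.bandwidth_pct
    if mode == "link-strict":
        # Link strict priority: a flow may grow to the full link bandwidth,
        # regardless of its group's ETS allocation.
        return LINK_BANDWIDTH_PCT
    raise ValueError(f"unknown scheduling mode: {mode}")

lan = PriorityGroup(tcg_id=0, bandwidth_pct=60)
san = PriorityGroup(tcg_id=1, bandwidth_pct=40)

print(max_flow_bandwidth(san, "group-strict"))  # 40 -- capped at the group allocation
print(max_flow_bandwidth(san, "link-strict"))   # 100 -- may take the whole link
```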
Bandwidth Allocation for DCBX CIN
After an ETS output policy is applied to an interface, if the DCBX version used in your data center network is CIN, a QoS output policy is automatically configured to overwrite the default CIN bandwidth allocation. The default CIN allocation divides the bandwidth allocated to each port queue equally among the dot1p priorities assigned to that queue.
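
As a worked example of that default allocation, assume a hypothetical mapping of two queues with four dot1p priorities each and a 50/50 queue split (not the Aggregator's actual default dot1p-to-queue mapping); each priority then receives an equal share of its queue's bandwidth:

```python
# Illustrative arithmetic for the default CIN bandwidth allocation described
# above: each queue's bandwidth is split equally among the dot1p priorities
# mapped to that queue. The queue map below is a made-up example.

queue_bandwidth_pct = {0: 50, 1: 50}       # per-queue ETS bandwidth shares
queue_to_priorities = {0: [0, 1, 2, 3],    # dot1p priorities mapped to queue 0
                       1: [4, 5, 6, 7]}    # dot1p priorities mapped to queue 1

for queue, priorities in queue_to_priorities.items():
    share = queue_bandwidth_pct[queue] / len(priorities)
    for p in priorities:
        print(f"queue {queue}, dot1p {p}: {share:.1f}% of link bandwidth")
```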
DCBX Operation
The data center bridging exchange protocol (DCBX) is used by DCB devices to exchange configuration
information with directly connected peers using the link layer discovery protocol (LLDP). DCBX
can detect the misconfiguration of a peer DCB device and, optionally, configure peer DCB devices with
DCB feature settings to ensure consistent operation in a data center network.
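
The misconfiguration check operates on advertised parameters: each device compares the DCB settings a peer advertises in LLDP TLVs against its own. The Python sketch below is a hypothetical model of that comparison; the field names are invented for illustration and are not actual DCBX TLV or Dell Networking OS names:

```python
# Hypothetical sketch of the kind of peer-configuration check DCBX performs:
# compare the DCB settings advertised by a peer against the local
# configuration and flag any parameters that disagree.

from typing import Dict, List

def find_mismatches(local: Dict[str, object], peer: Dict[str, object]) -> List[str]:
    """Return the DCB parameters on which the two peers disagree."""
    return [key for key in local if peer.get(key) != local[key]]

local_cfg = {"pfc_priorities": (3,), "ets_groups": {0: 60, 1: 40}}
peer_cfg = {"pfc_priorities": (3, 4), "ets_groups": {0: 60, 1: 40}}

for param in find_mismatches(local_cfg, peer_cfg):
    # A real implementation would mark the feature operationally down or
    # fall back to local defaults; here we only report the mismatch.
    print(f"DCB parameter mismatch with peer: {param}")
```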
DCBX is a prerequisite for using DCB features, such as priority-based flow control (PFC) and enhanced
transmission selection (ETS), to exchange link-level configurations in a converged Ethernet environment. DCBX
is also deployed in topologies that support lossless operation for FCoE or iSCSI traffic. In these scenarios,
all network devices are DCBX-enabled (DCBX is enabled end-to-end).
The following versions of DCBX are supported on an Aggregator: CIN, CEE, and IEEE2.5.
DCBX requires LLDP to be enabled on all DCB devices.