Dell PowerEdge FX2 Dell PowerEdge FN I/O Aggregator Configuration Guide 9.6(0 - Page 17

Data Center Bridging Support, FCoE Connectivity and FIP Snooping




• Data center bridging capability exchange protocol (DCBX): Server-facing ports auto-configure in auto-downstream port roles; uplink ports auto-configure in auto-upstream port roles.
• Fibre Channel over Ethernet (FCoE) connectivity and FCoE Initialization Protocol (FIP) snooping: The uplink port channel (LAG 128) is enabled to operate in Fibre Channel forwarder (FCF) port mode.
• Link layer discovery protocol (LLDP): Enabled on all ports to advertise the management TLV and system name to neighboring devices.
• Internet small computer system interface (iSCSI) optimization.
• Internet group management protocol (IGMP) snooping.
• Jumbo frames: Ports are set to a maximum MTU of 12,000 bytes by default.
• Link tracking: Uplink-state group 1 is automatically configured. In uplink-state group 1, server-facing ports auto-configure as downstream interfaces; the uplink port channel (LAG 128) auto-configures as an upstream interface. Server-facing links are brought up only if the uplink port channel is up.
• In VLT mode, port 9 is automatically configured as the VLT interconnect port. VLT domain configuration is automatic, including the peer link, the configured MAC address, the backup link, and setting every port channel as a VLT port channel.
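The auto-configured defaults listed above can be inspected from the Aggregator CLI. The following is a sketch using standard Dell Networking OS `show` commands; exact command syntax and output vary by software release, so verify against the CLI reference for your version.

```
! Verify link tracking: uplink-state group 1 and the LAG 128 uplink
show uplink-state-group 1
show interfaces port-channel brief

! Verify LLDP neighbor advertisements on all ports
show lldp neighbors

! In VLT mode, verify the auto-created VLT domain and peer status
show vlt brief
```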
Data Center Bridging Support
To eliminate packet loss and provision links with the required bandwidth, Data Center Bridging (DCB) enhancements for data center networks are supported.
The Aggregator provides zero-touch configuration for DCB. The Aggregator auto-configures DCBX port roles as follows:
• Server-facing ports are configured as auto-downstream interfaces.
• Uplink ports are configured as auto-upstream interfaces.
In operation, DCBX auto-configures uplink ports to match the DCB configuration of the top-of-rack (ToR) switches to which they connect.
The Aggregator supports DCB only in standalone mode.
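The negotiated DCBX state can be checked per interface. The command below follows the Dell Networking OS `show` syntax; the interface name `tengigabitethernet 0/9` is only an illustrative example, and the exact command form may differ between releases.

```
! Display the DCBX port role and the DCB configuration learned from the peer
show interfaces tengigabitethernet 0/9 dcbx detail
```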
FCoE Connectivity and FIP Snooping
Many data centers use Fibre Channel (FC) in storage area networks (SANs). Fibre Channel over Ethernet (FCoE) encapsulates Fibre Channel frames over Ethernet networks.
On an Aggregator, the internal ports support FCoE connectivity and connect to the converged network adapters (CNAs) in servers. FCoE allows Fibre Channel to use 10-Gigabit Ethernet networks while preserving the Fibre Channel protocol.
The Aggregator also provides zero-touch configuration for FCoE connectivity. The Aggregator auto-configures to match the FCoE settings used in the switches to which it connects through its uplink ports.
FIP snooping is automatically configured on an Aggregator. The auto-configured port channel (LAG 128) operates in FCF port mode.
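The FIP snooping state that results from this auto-configuration can be verified from the CLI. This sketch uses the standard Dell Networking OS FIP snooping `show` commands; confirm the exact syntax in the CLI reference for your release.

```
! Verify FIP snooping: discovered FCFs, logged-in ENodes, and active sessions
show fip-snooping fcf
show fip-snooping enode
show fip-snooping sessions
```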