HP StorageWorks 2/24 FW 07.00.00 / HAFM SW 08.06.00 - McDATA Products in a SAN Environment - Page 170

Implementing SAN Internetworking Solutions

• Rate limiting - If ingress traffic enters the SAN router faster than egress traffic leaves, port buffers fill and data packets are dropped. Dropped packets force TCP to fall back on its own (inefficient) flow control, dramatically decreasing link throughput. Rate limiting prevents this problem (a minimal illustrative sketch appears after this list). Refer to Intelligent Port Speed for detailed information about rate limiting.

• Data compression - SAN router software identifies repetitive information in an output data stream and applies a compression algorithm so the data is transmitted in a more compact and efficient form. The compression algorithm is set in the Element Manager application using the Compression Method drop-down list in the Advanced TCP Configuration dialog box. The list provides four algorithm selections:

  - LZO - The Lempel-Ziv-Oberhumer (LZO) compression algorithm searches for strings of characters duplicated within the block of data being compressed. Duplicated strings are removed from the data stream and replaced by an encoded string. Non-duplicate characters (literals) are output with special encoding to distinguish them from duplicate-string encoding. LZO generates a self-contained compressed data block: all information needed to decompress the data is in the compressed block, and no history is maintained by the sender (for compression) or the receiver (for decompression). The algorithm is recommended when up to 64 TCP sessions are used and the available bandwidth is up to 155 Mbps (OC-3 transport level).

  - Fast LZO with history - This algorithm uses the LZO algorithm with a history cache, which is maintained and used to compress and decompress data more effectively. The algorithm improves the average compression ratio by approximately 20% over LZO. It is recommended when up to 8 TCP sessions are used and the available bandwidth is up to 155 Mbps (OC-3 transport level). The difference between self-contained (stateless) and history-based (stateful) compression is illustrated in the second sketch after this list.

  - LZO with history - This algorithm combines the LZO algorithm with a history cache and Huffman encoding. Huffman encoding is a lossless compression algorithm based on the statistical frequency of occurrence of each symbol in the file being compressed: as the probability of occurrence of a symbol increases, the bit size of its compressed representation decreases (see the Huffman sketch after this list). The algorithm uses additional computing resources
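
The manual does not describe how the SAN router implements rate limiting internally. Purely as an illustration of the general idea, the following is a minimal token-bucket sketch in Python; the class name, rate, and burst size are assumptions chosen for the example, not product parameters.

```python
import time

class TokenBucket:
    """Minimal token-bucket limiter: admit traffic only as fast as the
    egress link can drain it, so ingress bursts wait at the source
    instead of overflowing a router's port buffers."""

    def __init__(self, rate_bytes_per_s, burst_bytes):
        self.rate = rate_bytes_per_s      # sustained egress rate
        self.capacity = burst_bytes       # maximum burst allowance
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def try_send(self, nbytes):
        """Return True if nbytes may be sent now, False if the caller
        should wait rather than let the packet be dropped."""
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False

# Example: hold traffic to roughly 155 Mbps (OC-3) with a 64 KB burst allowance.
limiter = TokenBucket(rate_bytes_per_s=155_000_000 / 8, burst_bytes=64 * 1024)
if limiter.try_send(1500):
    pass  # hand the frame to the egress port
```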
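The product's algorithms are LZO-based; the sketch below uses Python's standard-library zlib instead, simply because it is readily available, to illustrate the distinction drawn above between self-contained compressed blocks (no history) and a compressor/decompressor pair that shares a history window across blocks. The block contents and counts are illustrative assumptions.

```python
import zlib

blocks = [b"SAN router frame payload " * 40 for _ in range(3)]

# Stateless, like the LZO option: each block is compressed independently,
# so every compressed block can be decompressed on its own.
stateless = [zlib.compress(b) for b in blocks]

# Stateful, like the "with history" options: one compressor object keeps a
# history window across blocks, so repetition *between* blocks is exploited,
# and the receiver must maintain the matching history to decompress.
comp = zlib.compressobj()
decomp = zlib.decompressobj()
stateful = []
for b in blocks:
    chunk = comp.compress(b) + comp.flush(zlib.Z_SYNC_FLUSH)
    stateful.append(chunk)
    assert decomp.decompress(chunk) == b   # receiver tracks the same history

print("stateless bytes:", sum(len(c) for c in stateless))
print("stateful  bytes:", sum(len(c) for c in stateful))
# The stateful stream is smaller because later blocks can reference earlier
# history; the trade-off is that blocks must be decompressed in order.
```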
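To make the Huffman-encoding point concrete (the more frequent a symbol, the shorter its code), here is a minimal textbook-style Huffman code builder in Python; the function name and sample input are assumptions for the example, and this is not the encoder used by the SAN router.

```python
import heapq
from collections import Counter

def huffman_codes(data: bytes) -> dict:
    """Assign each byte a prefix-free bit string whose length shrinks
    as the byte's frequency grows."""
    freq = Counter(data)
    # Heap entries: (frequency, tie-breaker, {symbol: code-so-far}).
    heap = [(n, i, {sym: ""}) for i, (sym, n) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:                       # degenerate single-symbol input
        (_, _, codes), = heap
        return {sym: "0" for sym in codes}
    tie = len(heap)
    while len(heap) > 1:
        n1, _, c1 = heapq.heappop(heap)      # two least-frequent subtrees
        n2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in c1.items()}
        merged.update({s: "1" + code for s, code in c2.items()})
        heapq.heappush(heap, (n1 + n2, tie, merged))
        tie += 1
    return heap[0][2]

codes = huffman_codes(b"aaaaaaaabbbccd")
for sym, code in sorted(codes.items(), key=lambda kv: len(kv[1])):
    print(chr(sym), code)   # 'a' gets the shortest code, 'c' and 'd' the longest
```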