HP ProLiant DL380G5-WSS 3.7.0 HP StorageWorks HP Scalable NAS File Serving Sof - Page 194

NFS reads and writes, DB Optimized mount option


The following example shows an /etc/fstab entry for a hard mount on the client system:

10.0.0.1:/share /mnt/share nfs hard,intr 0 0

The next example shows an /etc/fstab entry for a soft mount on the client system:

10.0.0.1:/share /mnt/share nfs soft,timeo=7,retrans=4 0 0

If you are receiving I/O errors with a soft mount, you may want to consider either switching to a hard mount or raising your timeo and/or retrans parameters to compensate. Consider that the maximum acceptable time delay for an NFS mount to respond before receiving an I/O error is (retrans*timeo), where timeo is specified in tenths of a second. In the above example, this is 4*0.7=2.8 seconds.
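As a quick sanity check, the timeout arithmetic above can be reproduced on the command line (illustrative only, not part of the HP software):

```shell
# Maximum delay before a soft mount returns an I/O error is
# retrans * timeo, with timeo in tenths of a second.
# For the soft-mount example above (timeo=7, retrans=4):
awk 'BEGIN { printf "%.1f\n", 4 * 7 / 10 }'   # prints 2.8
```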
NFS reads and writes

The read and write size that NFS uses to read files from an NFS server or to write files to an NFS server should be set to 32K. This is done with the rsize and wsize mount options.
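For example, combining the 32K sizes (32K is 32768 bytes) with the hard-mount /etc/fstab entry shown earlier, using the same placeholder address and paths:

```
10.0.0.1:/share /mnt/share nfs hard,intr,rsize=32768,wsize=32768 0 0
```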
Because of the nature of the NFS protocol, single-stream write performance is entirely bounded by the size of the I/Os submitted by the client. Since NFS is "stateless" (disregarding file locking), the only way to avoid data loss is for writes to actually be committed to storage. By default, NFS v3 mounts are "synchronous." If a client is submitting 4K writes, each write needs to be transmitted, received by the server, submitted to disk, written, and then a response generated. The latency causes very low throughput. If the client submits a 1MB write, it will be broken down into "wsize" writes (32K if the tuning above is performed). All but the last are immediately acknowledged by the server, and only the final one requires a commit/write, allowing for much higher stream performance.
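The difference can be sketched numerically (a back-of-the-envelope model of the behavior described above, not a measurement):

```shell
# A single 1MB write split into 32K wsize chunks means 32 wire
# writes, only the last of which needs a synchronous commit:
echo $(( (1024 * 1024) / (32 * 1024) ))   # prints 32

# The same megabyte submitted as individual 4K writes incurs a
# synchronous round trip (transmit, commit, reply) per write:
echo $(( (1024 * 1024) / (4 * 1024) ))    # prints 256
```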
This can be an issue when using commands such as cp, which use very small buffer sizes. Using cp to copy a file to an NFS-mounted filesystem will yield poor I/O rates. Using dd with a large block size to perform the same operation will give vastly improved results.
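For instance, the two approaches might look like this (the file paths are placeholders; /mnt/share is the mount point from the earlier fstab examples):

```shell
# Slow: cp issues small buffered writes, each paying a synchronous
# NFS round trip.
cp /data/bigfile /mnt/share/bigfile

# Faster: dd submits large writes that the client splits into
# wsize-sized requests, committing only at the end of each.
dd if=/data/bigfile of=/mnt/share/bigfile bs=1M
```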
DB Optimized mount option

The HP Scalable NAS DB Optimized (or DBOPTIMIZE) mount option is intended for use with database objects. It should not be used for general-purpose NFS access.