HP 6100 / HP 4x00/6x00/8x00 Enterprise Virtual Array Updating Product Software Guide - Page 27



Table 3 HP Command View EVAPerf virtual disk statistics (continued)

Counter             Description

Write Data Rate     The rate at which data is written to the virtual disk by all hosts, including transfers from the source array to the destination array.

Write Latency       The average time it takes to complete a write request (from initiation to receipt of write completion).

Flush Data Rate     The rate at which data is written to a physical disk for the associated virtual disk. The sum of the flush counters for all virtual disks on both controllers is the rate at which data is written to the physical drives and is equal to the total host write data. Data written to the destination array is included. Host writes to snapshots and snapclones are included in the flush statistics, but data flow for internal snapshot and snapclone normalization and copy-before-write activity is not included.

Mirror Data Rate    The rate at which data travels across the mirror port to complete read and write requests to a virtual disk. This data is not related to the physical disk mirroring for Vraid1 redundancy. Write data is always copied through the mirror port when cache mirroring is enabled for redundancy. With active/active controllers, this counter includes read data from the owning controller that must be returned to the requesting host through the proxy controller. Reported mirror traffic is always outbound from the referenced controller to the other controller.

Prefetch Data Rate  The rate at which data is read from the physical disk to cache in anticipation of subsequent reads when a sequential stream is detected. Note that a sequential data stream may be created by host I/O, or by other I/O activity that occurs because of a DR initial copy or DR full copy.
Managing host I/O timeouts for an online upgrade

The default values for host operating parameters such as LUN timeout and queue depth are typically set to values that ensure proper operation with the storage system. These values are appropriate for most storage system operations, including online controller software upgrades. In general, host LUN timeouts of 60 seconds or more are adequate to accommodate an online upgrade, so in most situations it is not necessary to alter these settings to perform an online controller software upgrade.

If any host timeout values have been decreased to less than 60 seconds, reset them to their original default values before the upgrade. The following sections provide a summary of the steps and commands involved in checking and changing timeout values for each supported operating system. See the operating system documentation for more information.

NOTE: Unless otherwise noted, cluster testing with online controller software upgrades has not been completed. Cluster information will be added to this document when the testing is complete.
HP-UX

CAUTION: Because HP-UX supports booting across a Fibre Channel SAN, any change to the default SCSI timeouts on the HP-UX host may cause corruption and make the system unrecoverable.

Default timeout values
• Sdisk timeout: 30 seconds
• (LVM) lvol timeout: 0 seconds
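As a sketch of how the LVM lvol timeout can be inspected and restored on an HP-UX host, the commands below use lvdisplay and lvchange; the volume group and logical volume names (/dev/vg00/lvol3) are placeholders for your own configuration, and per the caution above, the underlying sdisk SCSI timeouts should not be changed.

```shell
# Check the current I/O timeout for a logical volume
# (placeholder device path; substitute your own vg/lvol).
lvdisplay /dev/vg00/lvol3 | grep -i "IO Timeout"

# Restore the default lvol timeout of 0 seconds
# (0 means the LVM layer imposes no additional timeout).
lvchange -t 0 /dev/vg00/lvol3
```

Run these on each logical volume whose timeout was previously lowered; a value of 0 matches the default listed above.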
IBM AIX

Checking or changing timeouts
AIX requires the disk settings shown in Table 4 (page 28) for the native multipath drives.
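As an illustration of checking and changing these disk attributes on AIX, the commands below use lsattr and chdev; the device name (hdisk2) and the 60-second value are placeholders — use the hdisk names on your host and the values required by Table 4 (page 28).

```shell
# Check the current read/write timeout and queue depth
# for a native multipath disk (placeholder device name).
lsattr -El hdisk2 -a rw_timeout -a queue_depth

# Set the read/write timeout; -P defers the change to the
# ODM so it takes effect the next time the device is configured.
chdev -l hdisk2 -a rw_timeout=60 -P
```

If the disk is not in use, chdev can be run without -P to apply the change immediately; otherwise vary off or unmount the affected resources first.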