HP 4400 Enterprise Virtual Array Updating Product Software Guide (XCS 10000), Page 27



Table 4 HP Command View EVAPerf virtual disk statistics (continued)

Write Req/s
    The number of completed write requests per second to a virtual disk received from
    all hosts. Write requests may include transfers from a source array to this array
    for data replication, and host data written to snapshot or snapclone volumes.

Write MB/s
    The rate at which data is written to the virtual disk by all hosts; includes
    transfers from the source array to the destination array.

Write Latency (ms)
    The average time it takes to complete a write request (from initiation to receipt
    of write completion).

Flush MB/s
    The rate at which data is written to a physical disk for the associated virtual
    disk. The sum of the flush counters for all virtual disks on both controllers is
    the rate at which data is written to the physical drives, and is equal to the
    total host write data. Data written to the destination array is included. Host
    writes to snapshots and snapclones are included in the flush statistics, but data
    flow for internal snapshot and snapclone normalization and copy-before-write
    activity is not included.

Mirror MB/s
    The rate at which data travels across the mirror port to complete read and write
    requests to a virtual disk. This data is not related to the physical disk
    mirroring for Vraid1 redundancy. Write data is always copied through the mirror
    port when cache mirroring is enabled for redundancy. In active/active
    controllers, this counter includes read data from the owning controller that must
    be returned to the requesting host through the proxy controller. Reported mirror
    traffic is always outbound from the referenced controller to the other
    controller.

Prefetch MB/s
    The rate at which data is read from the physical disk to cache in anticipation of
    subsequent reads when a sequential data stream is detected. A sequential data
    stream may be created by host I/O, or by other I/O activity such as a DR initial
    copy or DR full copy.
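The Flush MB/s identity above (summed flush rates for all virtual disks on both controllers equal the total host write rate) can be sanity-checked against exported counter data. The sketch below assumes a simple two-column export of virtual disk name and flush rate; this format is an illustration only, not HP Command View EVAPerf's actual output layout.

```shell
#!/bin/sh
# Sum per-vdisk Flush MB/s samples; the total should track the total
# host write MB/s reported for the same interval.
# Sample data (vdisk name, flush MB/s) -- hypothetical values.
cat > /tmp/flush_sample.txt <<'EOF'
vdisk01 12.5
vdisk02 7.5
vdisk03 30.0
EOF

total=$(awk '{ sum += $2 } END { printf "%.1f", sum }' /tmp/flush_sample.txt)
echo "Total flush MB/s: $total"
```

A persistent gap between this sum and the host write rate would suggest internal activity (normalization, copy-before-write) being mistaken for host I/O, which the counter definitions above explicitly exclude.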
Managing host I/O timeouts for an online upgrade
The defaults for host operating parameters, such as LUN timeout and queue depth, ensure proper
operation with the array. These values are appropriate for most array operations, including online
controller software upgrades. In general, host LUN timeouts of 60 seconds or more are sufficient
for an online upgrade. In most situations you will not need to change these settings to perform an
online controller software upgrade.
If any host timeout values have been changed to less than the default (typically 60 seconds), you
must reset them to their original default. The following sections summarize the steps and commands
for checking and changing timeout values for each supported operating system. See the operating
system documentation for more information.
IMPORTANT:
Depending on your operating system, changing timeout values may require a
system reboot. To minimize disruption of normal operations, especially in a cluster
environment, schedule reboots one node at a time.
HP-UX
CAUTION:
Because HP-UX supports boot across Fibre Channel SAN, any change to default SCSI
timeouts on the HP-UX host may cause corruption and make the system unrecoverable.
Default timeout values
• Sdisk timeout: 30 seconds
• LVM lvol timeout: 0 seconds (default=0, retries forever)
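The HP-UX LVM timeouts above can be inspected and restored with the standard LVM commands; a minimal sketch, in which the volume group, logical volume, and device paths (/dev/vg01/lvol1, /dev/dsk/c0t6d0) are placeholders for your own configuration:

```shell
# Show the current logical volume timeout; the "IO Timeout (Seconds)"
# field reads "default" when set to 0 (retries forever).
lvdisplay /dev/vg01/lvol1

# Show the timeout on the underlying physical volume (sdisk default: 30 s).
pvdisplay /dev/dsk/c0t6d0

# Restore defaults if they were lowered (heed the CAUTION above before
# changing any SCSI timeout on an HP-UX host that boots from the SAN):
lvchange -t 0 /dev/vg01/lvol1     # 0 = default; retries forever
pvchange -t 30 /dev/dsk/c0t6d0    # 30 seconds, the sdisk default
```

The lvchange and pvchange settings take effect on the volumes directly; consult the HP-UX lvchange(1M) and pvchange(1M) manpages for whether a reboot is needed in your release.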