HP Smart Array Controller technology, 3rd edition

Balanced cache size
Smart Array controllers allow administrators to adjust how the cache is distributed between write-back
and read-ahead operations, so the cache module can be tuned for optimal performance on a given
workload. Most current-generation Smart Array controllers default to 75 percent write-back and
25 percent read-ahead, although the default setting varies by controller. Additionally, the cache
module capacity can be upgraded to increase caching performance.
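
As a rough illustration of the ratio setting, the following Python sketch models how a fixed cache capacity divides under a given write/read split. This is an illustrative model only, not HP configuration tooling; the function name and the 256 MiB module size are invented for the example.

    def split_cache(total_mib, write_pct=75):
        # Partition a cache module between write-back and read-ahead
        # space at a given percentage (the default mirrors the 75/25
        # split described above). Illustrative model only.
        if not 0 <= write_pct <= 100:
            raise ValueError("write percentage must be between 0 and 100")
        write_mib = total_mib * write_pct // 100
        return write_mib, total_mib - write_mib

    # A hypothetical 256 MiB cache module at the default setting:
    print(split_cache(256))   # -> (192, 64)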
RAID performance enhancements
Smart Array controllers use several enhancements to increase RAID performance.
Disk striping
Striping combines several individual disk drives into a larger disk array containing one or more
logical drives. Performance of the individual drives is aggregated to provide a single high-
performance logical drive. The array controller evenly distributes the logical drive data into small
“stripes” of data sequentially located across each member disk drive in the array. Administrators can
adjust the stripe size to achieve optimal performance. Performance improves as the number of drives
in the array increases.
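
The address arithmetic behind striping can be sketched in a few lines of Python. This is a generic model, assuming a uniform strip size and no parity rotation, not the controller's actual firmware logic:

    def locate_block(lba, n_drives, strip_blocks):
        # Map a logical block address to (drive, physical offset) in a
        # striped array. Each drive contributes strip_blocks blocks to a
        # stripe, so a full stripe spans n_drives * strip_blocks blocks.
        stripe, pos = divmod(lba, n_drives * strip_blocks)
        drive, offset = divmod(pos, strip_blocks)
        return drive, stripe * strip_blocks + offset

    # Sequential logical blocks rotate across all four member drives,
    # which is what lets the drives' performance aggregate:
    for lba in range(8):
        print(lba, locate_block(lba, n_drives=4, strip_blocks=1))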
Parity data
In a RAID 5 configuration, data protection is provided by distributed parity data. This parity data is
calculated stripe by stripe from the user data that is written to all other blocks within that stripe. The
blocks of parity data are distributed evenly across all the physical drives in the array. When a
physical drive fails, data that was on the failed drive can be calculated from the remaining parity
data and user data on the other drives in the array. This recovered data is usually written to an online
spare drive through a process called a rebuild.
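
RAID 5 parity is computed as the bitwise XOR of the data blocks in a stripe, which is why any single missing block can be recomputed from the remaining blocks. A minimal Python sketch of the idea, with short byte strings standing in for disk blocks:

    from functools import reduce

    def xor_blocks(blocks):
        # XOR equal-sized blocks together, byte by byte.
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    data = [b"\x11\x22", b"\x33\x44", b"\x55\x66"]   # user data blocks
    parity = xor_blocks(data)                        # parity block

    # Simulate losing the drive holding data[1]; its contents can be
    # rebuilt from the surviving data and parity, as during a rebuild.
    recovered = xor_blocks([data[0], data[2], parity])
    assert recovered == data[1]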
RAID 6, like RAID 5, generates and stores parity information to protect against data loss caused by
drive failure. With RAID 6, however, two different sets of parity data are used so that data can still be
preserved even if two drives fail. Each set of parity data uses a capacity equivalent to that of one of
the constituent drives. This method is most useful when data loss is unacceptable but cost is also an
important factor. RAID 6 provides better protection for data than a RAID 5 configuration because of
the additional parity information.
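
The capacity cost described above is easy to quantify: in an array of equal-sized drives, RAID 5 gives up one drive's worth of space to parity and RAID 6 gives up two. A quick sketch (the drive count and size are arbitrary examples):

    def usable_capacity_gb(n_drives, drive_gb, parity_drives):
        # Usable space when parity consumes parity_drives drives' worth
        # of capacity: one for RAID 5, two for RAID 6.
        if n_drives <= parity_drives:
            raise ValueError("array too small for this RAID level")
        return (n_drives - parity_drives) * drive_gb

    # Six 300 GB drives:
    print(usable_capacity_gb(6, 300, parity_drives=1))   # RAID 5 -> 1500
    print(usable_capacity_gb(6, 300, parity_drives=2))   # RAID 6 -> 1200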
Background RAID creation
When a RAID 1, RAID 5, or RAID 6 logical drive is first created, the Smart Array controller must build
the logical drive within the array before enabling certain advanced performance techniques. While
the logical drive is being created, the storage volume remains accessible to the host with full fault
tolerance. The Smart Array controller builds the logical drive whenever it is not busy; this process is
called background parity initialization. Parity initialization takes several hours to complete, depending
on the size of the logical drive and how busy the host keeps the controller. Before parity initialization
completes, normal writes to RAID 5 and RAID 6 logical drives are slower because the controller must
read the entire stripe to update the parity data and maintain fault tolerance. Writes performed during
parity initialization are called regenerative writes or reconstructed writes.
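
Until parity initialization finishes, the stored parity for a stripe cannot be trusted, so a regenerative write must reread the whole stripe and rebuild parity from scratch. The sketch below models why such writes touch every member drive; xor_blocks is the same illustrative helper as in the RAID 5 example:

    from functools import reduce

    def xor_blocks(blocks):
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    def regenerative_write(stripe, index, new_block):
        # Update one block before parity initialization completes: the
        # controller cannot reuse the stored parity, so it reads every
        # other data block in the stripe and recomputes parity from all
        # of them.
        stripe[index] = new_block
        return xor_blocks(stripe)

    stripe = [b"\xaa", b"\xbb", b"\xcc"]
    parity = regenerative_write(stripe, 1, b"\x0f")   # reads the full stripe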
RAID 5 and RAID 6 read-modify-write
After parity initialization is complete, writes to a RAID 5 or RAID 6 logical drive are typically faster
because the controller does not read the entire stripe to update the parity data. Since the controller
knows that the parity data is consistent with all the member drives in the stripe, the controller needs to
read from only two disk drives during a RAID 5 write (or three disk drives for a RAID 6 write) to
compute the parity data (regardless of array size). This technique is called a read-modify-write or
backed-out write.
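
The shortcut follows from XOR algebra: XOR-ing the old data out of the parity and the new data in yields the new parity without reading the other members. A minimal Python sketch of a RAID 5 read-modify-write, reusing the same illustrative xor_blocks helper:

    from functools import reduce

    def xor_blocks(blocks):
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    def read_modify_write(old_data, old_parity, new_data):
        # Only the target data block and the parity block are read,
        # regardless of how many drives are in the array:
        # new_parity = old_parity XOR old_data XOR new_data.
        return xor_blocks([old_parity, old_data, new_data])

    stripe = [b"\x10", b"\x20", b"\x30"]
    parity = xor_blocks(stripe)                        # consistent parity
    parity = read_modify_write(stripe[0], parity, b"\x7f")
    stripe[0] = b"\x7f"
    assert parity == xor_blocks(stripe)                # still consistent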