HP StorageWorks MSA1510i HP Storage Management Utility user guide (383075-002) - Page 42



2. As needed, expand the drop-down boxes in the task area to change the settings from the suggested defaults.
NOTE:
• The SMU suggests defaults for the logical drive, creating one large logical drive from all unused space on the array, with the highest fault tolerance and performance possible for the hard drives included in that array.
• Only Fault Tolerance levels possible for the array are displayed. For example, RAID 5 is not listed if the array has only two physical hard drives (a sketch after these steps illustrates this rule).
• The default Stripe Size gives optimum performance in a mixed read/write environment.
• For read-prominent environments, use a larger stripe size.
• For write-prominent environments, use a smaller stripe size for RAID 5 or RAID ADG, and a larger stripe size for RAID 0 or RAID 1+0.
• To build multiple logical drives on the same array, reduce the Size setting from the default to a smaller amount. Additional logical drives can then be built from the remaining unused space.
• Disabling the Array Accelerator for a logical drive reserves the accelerator cache for the other logical drives in the array. This feature is useful if you want the other logical drives to have the maximum possible performance.
3. When the display refreshes, verify that the configured logical drives are shown in the component list.
4. Repeat Step 1 through Step 3 to create additional logical drives for this array, or to create logical drives for other arrays.
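The rule in the NOTE above about which Fault Tolerance levels appear can be pictured as a simple filter on the array's physical drive count. The following Python sketch is illustrative only and is not the SMU's implementation; the minimum drive counts it assumes (RAID 0: 1, RAID 1+0: 2, RAID 5: 3, RAID ADG: 4) are typical Smart Array values, stated here as assumptions.

```python
# Illustrative only -- not the SMU's actual logic. The minimum drive
# counts below are assumed, typical Smart Array values.
RAID_MINIMUM_DRIVES = {
    "RAID 0": 1,
    "RAID 1+0": 2,
    "RAID 5": 3,
    "RAID ADG": 4,
}

def selectable_fault_tolerance(physical_drives):
    """Return the fault tolerance levels an array of this size could offer."""
    return [level for level, minimum in RAID_MINIMUM_DRIVES.items()
            if physical_drives >= minimum]

# An array with only two drives: RAID 5 and RAID ADG are not listed.
print(selectable_fault_tolerance(2))   # ['RAID 0', 'RAID 1+0']
# An array with four drives: all four levels are available.
print(selectable_fault_tolerance(4))
```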
Configuration status - after configuring hard drives
Figure 10 illustrates the current configuration. The following items are configured:
• Management ports MA0 and MB0
• Data ports SA0, SA1, SB0, and SB1
• Physical hard drives, into:
  • Array A: four (4) 160 GB hard drives, with no assigned spare
    • Logical drive 1: RAID 1+0
  • Array B: three (3) 250 GB hard drives, with an assigned spare
    • Logical drive 2: RAID 5
  • Array C: three (3) 250 GB hard drives, with an assigned spare
    • Logical drive 3: RAID 5
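As a rough cross-check of the capacities in this configuration, the usable space of each logical drive can be estimated with the standard formulas (RAID 1+0 keeps half of the drives as mirrors; RAID 5 gives up one drive's worth of capacity to parity). The sketch below assumes the assigned spares are in addition to the drives listed for each array and hold no data; it is an illustration, not SMU output.

```python
# Rough usable-capacity arithmetic for the configuration above.
# Assumes spares are additional drives that hold no data.

def usable_gb(raid_level, data_drives, drive_gb):
    if raid_level == "RAID 1+0":
        return (data_drives // 2) * drive_gb   # half the drives mirror the other half
    if raid_level == "RAID 5":
        return (data_drives - 1) * drive_gb    # one drive's worth of space holds parity
    raise ValueError("RAID level not covered in this sketch")

print(usable_gb("RAID 1+0", 4, 160))   # Logical drive 1 (Array A): 320 GB
print(usable_gb("RAID 5", 3, 250))     # Logical drive 2 (Array B): 500 GB
print(usable_gb("RAID 5", 3, 250))     # Logical drive 3 (Array C): 500 GB
```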