HP Z840 Maintenance and Service Guide - Page 124

Software RAID solution, Software RAID considerations, Performance considerations



● Write Through: This configuration might result in slower performance.
● Always Write Back: This configuration results in optimal performance, but there is a risk of data loss in the event of a power failure.
● Write Back with BBU: If you have installed a BBU, write back is enabled only when the battery has a sufficient charge. During a learning cycle, the caching policy reverts to write-through until the learning cycle is complete. (A conceptual sketch of the write-through and write-back behaviors follows step 9.)
7. Select Yes to accept the warning, and then select Next.
8. Select Accept, and then select Yes to save the configuration.
9. Select Yes to initialize the virtual drive you created.
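The trade-off between these two caching policies can be illustrated with a short conceptual sketch. The Python code below is not part of any HP or controller tooling; the class and method names are invented for illustration only. It models why a write-back cache acknowledges writes faster but can lose buffered data if power fails before the cache is flushed, while a write-through cache keeps every acknowledged write on disk.

# Conceptual illustration only: a toy model of write-through vs. write-back
# caching. It does not interact with any real RAID controller.

class ToyCache:
    def __init__(self, write_back=True):
        self.write_back = write_back   # True models the "Always Write Back" policy
        self.buffer = []               # blocks held in volatile cache
        self.disk = []                 # blocks safely on disk

    def write(self, block):
        if self.write_back:
            # Fast path: acknowledge the write once it is in the cache.
            self.buffer.append(block)
        else:
            # Write through: slower, but the block reaches disk immediately.
            self.disk.append(block)

    def flush(self):
        # A periodic flush moves cached blocks to disk.
        self.disk.extend(self.buffer)
        self.buffer.clear()

    def power_failure(self):
        # Cache contents are lost; only flushed data survives.
        lost = len(self.buffer)
        self.buffer.clear()
        return lost

cache = ToyCache(write_back=True)
for i in range(4):
    cache.write(f"block-{i}")
print("blocks lost on power failure:", cache.power_failure())  # 4 with write back, 0 with write through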
Software RAID solution
This section summarizes software RAID considerations that are specific to the Linux environment and provides links to additional configuration resources.
Software RAID considerations
The Linux kernel software RAID driver (called md, for multiple device) offers integrated software RAID without the need for additional hardware disk controllers or kernel patches. Unlike most hardware RAID solutions, software RAID can be used with all types of disk technologies, including SATA, SAS, SCSI, and solid-state drives. This software solution requires only minimal setup of the disks themselves.
However, when compared to hardware-based RAID, software RAID has disadvantages in managing the disks, breaking up data as necessary, and managing parity data. The processor must assume some extra loading: disk-intensive workloads result in roughly double the processor overhead (for example, from 15% to 30%). For most applications, this overhead is easily handled by excess headroom in the processors. But for some applications where disk and processor performance are very well balanced and already near bottleneck levels, this additional processor overhead can become troublesome.
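On a running Linux system, the state of md arrays is exposed through /proc/mdstat. The following is a minimal sketch, assuming a system with the md driver loaded; the parsing is deliberately simplified (it only inspects the [UU]-style member status field, where an underscore marks a failed member), and the function name is illustrative rather than part of any standard tool.

# Minimal sketch: report md software RAID array health from /proc/mdstat.
# Assumes a Linux system with the md driver loaded; parsing is simplified.

import re

def md_array_status(mdstat_path="/proc/mdstat"):
    """Return a dict mapping array names (for example 'md0') to True if healthy."""
    status = {}
    current = None
    with open(mdstat_path) as f:
        for line in f:
            match = re.match(r"^(md\d+)\s*:", line)
            if match:
                current = match.group(1)
                continue
            # Status lines end with something like "[2/2] [UU]";
            # an underscore in place of a U marks a failed member.
            match = re.search(r"\[([U_]+)\]\s*$", line)
            if match and current:
                status[current] = "_" not in match.group(1)
                current = None
    return status

if __name__ == "__main__":
    for name, healthy in md_array_status().items():
        print(name, "healthy" if healthy else "degraded")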
Hardware RAID offers advantages because of its large hardware cache and its capability for better scheduling of operations in parallel. However, software RAID offers more flexibility for disk and disk controller setup. Additionally, hardware RAID requires that a failed RAID controller be replaced with an identical model to avoid data loss, whereas software RAID imposes no such requirement.
Some software RAID schemes offer data protection through mirroring (copying the data to multiple disks in case one disk fails) or parity data (checksums that allow error detection and limited rebuilding of data in case of a failure). For all software RAID solutions on HP workstations, redundancy can be restored only after the system is shut down so that the failed drive can be replaced. This replacement requires only a minimal amount of work.
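The parity approach mentioned above can be made concrete with a small worked example. The sketch below is not how the md driver is implemented; it is a simplified byte-level XOR parity calculation, the idea behind RAID 5 style redundancy, showing how a missing block can be rebuilt from the surviving blocks plus the parity block.

# Simplified illustration of XOR parity: the parity block is the XOR of all
# data blocks, so any single missing block can be rebuilt by XOR-ing the
# parity with the remaining blocks.

from functools import reduce

def xor_blocks(blocks):
    """XOR equal-length byte blocks together, byte by byte."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

data = [b"AAAA", b"BBBB", b"CCCC"]   # three data blocks, one per disk
parity = xor_blocks(data)            # parity block stored on a fourth disk

# Simulate losing the second disk and rebuilding its block.
surviving = [data[0], data[2], parity]
rebuilt = xor_blocks(surviving)
assert rebuilt == data[1]
print("rebuilt block:", rebuilt)     # b'BBBB'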
Performance considerations
Disk I/O bandwidth is typically limited by the system bus speeds, the disk controller, and the disks themselves. The balance of these hardware limitations, as affected by the software configuration, determines where any bottleneck in the system will occur.
Several RAID levels offer improved performance relative to stand-alone disk performance. If disk throughput is restricted because of a single disk controller, RAID can probably do little to improve performance until another controller is added. Conversely, if raw disk performance is the bottleneck, a tuned software RAID solution can dramatically improve throughput. The slower the disk performance is relative to the rest of the system, the better RAID performance will scale, because moving to RAID directly addresses the slowest piece of the performance pipeline.
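This bottleneck reasoning can be made concrete with a back-of-the-envelope model. The bandwidth figures in the sketch below are illustrative assumptions, not HP measurements; the point is only to show that striping across more disks raises throughput until the controller, rather than the disks, becomes the limiting component.

# Back-of-the-envelope bottleneck model for striped (RAID 0 style) reads.
# All bandwidth numbers are illustrative assumptions, not measured values.

def striped_throughput(n_disks, disk_mb_s=150, controller_mb_s=600):
    """Aggregate throughput is capped by whichever is slower:
    the combined disks or the single controller."""
    return min(n_disks * disk_mb_s, controller_mb_s)

for n in (1, 2, 4, 8):
    print(f"{n} disk(s): ~{striped_throughput(n)} MB/s")
# Throughput scales with the number of disks until the assumed controller
# limit (600 MB/s) is reached at 4 disks; beyond that, adding disks does not
# help until another controller is added.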
114   Appendix B   Configuring RAID devices