Storage
There are therefore two constraints to consider when mapping the virtual storage objects of the XenServer Host to the filer: to maintain space efficiency it makes sense to limit the number of LUNs per FlexVol, yet at the other extreme, to avoid resource limitations a single LUN per FlexVol provides the most flexibility. However, because there is a vendor-imposed limit of 200 or 500 FlexVols per filer (depending on the NetApp model), using a single LUN per FlexVol would cap the filer at 200 or 500 VDIs, so it is important to select a suitable number of FlexVols within these limits.
Given these resource constraints, the mapping of virtual storage objects to the Ontap storage system has been designed in the following manner: LUNs are distributed evenly across FlexVols, with the expectation of using VM UUIDs to opportunistically group LUNs attached to the same VM into the same FlexVol. This is a reasonable usage model because it allows all the VDIs of a VM to be snapshotted at the same time, maximizing the efficiency of the snapshot operation.
An optional parameter you can set is the number of FlexVols assigned to the SR. You can use between 1 and 32 FlexVols; the default is 8. The trade-off is that with a greater number of FlexVols, snapshot and clone operations become more efficient, because statistically fewer VMs are backed by the same FlexVol. The disadvantage is that more FlexVol resources are consumed by a single SR, and there is a typical system-wide limit of 200 FlexVols on some smaller filers.
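As a minimal sketch of how the FlexVols count might be supplied when creating a NetApp SR from the CLI (the device-config key names shown here, including FlexVols and aggregate, are assumptions to verify against the NetApp adapter documentation for your release):

    xe sr-create host-uuid=<host_uuid> content-type=user shared=true \
      name-label=<sr_name> type=netapp \
      device-config:target=<filer_address> \
      device-config:username=<admin_username> \
      device-config:password=<admin_password> \
      device-config:aggregate=<aggregate_name> \
      device-config:FlexVols=16

If the parameter is omitted, the default of 8 FlexVols described above is used.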
Aggregates
When creating a NetApp driver-based SR, you select an appropriate aggregate. The driver can be probed for non-traditional type aggregates, that is, newer-style aggregates that support FlexVols, and it then lists all available aggregates and the unused disk space on each.
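As a sketch of how the probe might be performed from the CLI before creating the SR (assuming the same device-config keys used at SR creation time; verify the exact names for your release):

    xe sr-probe type=netapp \
      device-config:target=<filer_address> \
      device-config:username=<admin_username> \
      device-config:password=<admin_password>

The probe output lists the aggregates the driver considers usable, together with the unused disk space on each, from which you can choose the aggregate to pass to sr-create.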
We strongly recommend that you configure an aggregate exclusively for use by XenServer storage, since
space guarantees and allocation cannot be correctly managed if other applications are also sharing the
resource.
Thick or thin provisioning
When creating NetApp storage, you can also choose the type of space management used. By default, allocated space is "thickly provisioned" to ensure that VMs never run out of disk space and that all virtual allocation guarantees are fully enforced on the filer. Selecting "thick provisioning" ensures that whenever a VDI (LUN) is allocated on the filer, sufficient space is reserved to guarantee that it never runs out of space and consequently never experiences failed writes to disk. Due to the nature of the Ontap FlexVol space provisioning algorithms, the best practice guidelines for the filer require that at least twice the LUN space be reserved, to account for background snapshot data collection and to ensure that writes to disk are never blocked. In addition to the double disk space guarantee, Ontap also requires some additional space reservation for managing unique blocks across snapshots. The guideline for this amount is 20% above the reserved space. Therefore, the space guarantees afforded by "thick provisioning" reserve up to 2.4 times the requested virtual disk space.
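For example, under these guidelines a 100 GB thickly provisioned VDI can cause up to 100 GB × 2 × 1.2 = 240 GB of space to be reserved on the filer.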
The alternative allocation strategy is thin provisioning, which allows the administrator to present more storage space to the VMs connecting to the SR than is actually available on the SR. There are no space guarantees, and allocation of a LUN does not claim any data blocks in the FlexVol until the VM writes data. This is appropriate for development and test environments, where you might find it convenient to over-provision virtual disk space on the SR in the anticipation that VMs may be created and destroyed frequently without ever using their full virtual disk allocation. This method should be used with extreme caution and only in non-critical environments.
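As a sketch, the allocation strategy would be chosen when the SR is created, for example by passing an allocation option to the NetApp adapter (the allocation key name and its thick/thin values are assumptions; confirm them for your release):

    xe sr-create host-uuid=<host_uuid> content-type=user shared=true \
      name-label=<sr_name> type=netapp \
      device-config:target=<filer_address> \
      device-config:username=<admin_username> \
      device-config:password=<admin_password> \
      device-config:aggregate=<aggregate_name> \
      device-config:allocation=thin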
A-SIS deduplication
A-SIS (Advanced Single Instance Storage) deduplication is a NetApp technology for reclaiming redundant
disk space. Newly-stored data objects are divided into small blocks, each block containing a digital signature,