HP BL680c XenServer Administrator's Guide 4.1.0 - Page 34



Storage
5. Within XenCenter, select the VM's Storage tab. Use the Attach button and select the VDIs from the new
SR. This step can also be done using the vbd-create CLI command.
6. To delete the original VDIs, within XenCenter select the Storage tab of the original SR. The original VDIs
will be listed with an empty value for the VM field, and can be deleted with the Delete button.
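The CLI alternative mentioned in step 5 can be sketched as follows. The UUID values are placeholders, and the device number assumes the VM's first virtual disk position is free:

```shell
# Create a VBD linking the VM to a VDI on the new SR (placeholder UUIDs).
xe vbd-create vm-uuid=<VALID_VM_UUID> vdi-uuid=<NEW_VDI_UUID> device=0 bootable=false mode=RW type=Disk
# Attach the new virtual disk to the VM.
xe vbd-plug uuid=<NEW_VBD_UUID>
```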
3.4.5. Managing VDIs in a Netapp SR
Due to the complex nature of mapping VM storage objects onto NetApp storage objects such as LUNs,
FlexVols and disk Aggregates, the plugin driver makes some general assumptions about how storage objects
should be organised. The default number of FlexVols managed by an SR instance is 8, named
XenStorage_<SR_UUID>_FV#, where # ranges from 0 up to one less than the total number of FlexVols assigned.
New VDIs (LUNs) are distributed evenly across the FlexVols at the point the VDI is instantiated. There are
two exceptions to this rule: groups of VM disks are opportunistically assigned to the same FlexVol to assist
with VM cloning, and VDIs created manually can be passed a 'vmhint' flag which informs the backend of the
FlexVol to which the VDI should be assigned. Using either of the following two commands, a VDI created
manually via the CLI can be assigned to a specific FlexVol:
xe vdi-create uuid=<VALID_VDI_UUID> sr-uuid=<VALID_SR_UUID> sm-config:vmhint=<VALID_VM_UUID>
xe vdi-create uuid=<VALID_VDI_UUID> sr-uuid=<VALID_SR_UUID> sm-config:vmhint=<VALID_FLEXVOL_NUMBER>
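The placement behaviour described above can be illustrated with a minimal sketch. This is not the actual plugin code; the function names and the CRC-based mapping of a VM-UUID hint onto a FlexVol are assumptions made for illustration only:

```python
import zlib

NUM_FLEXVOLS = 8  # default number of FlexVols per SR, per the text above


def flexvol_name(sr_uuid, index):
    """Return the FlexVol name in the XenStorage_<SR_UUID>_FV# format."""
    return "XenStorage_%s_FV%d" % (sr_uuid, index)


def choose_flexvol(sr_uuid, vdi_count, vmhint=None):
    """Pick a FlexVol for a new VDI (LUN), honouring an optional vmhint."""
    if vmhint is not None:
        if vmhint.isdigit():
            # Hint is an explicit FlexVol number.
            index = int(vmhint) % NUM_FLEXVOLS
        else:
            # Hint is a VM UUID: map it to a stable FlexVol so all of the
            # VM's disks land together, which aids FlexVol-snapshot cloning.
            # (CRC32 is an illustrative stand-in for the driver's real logic.)
            index = zlib.crc32(vmhint.encode()) % NUM_FLEXVOLS
    else:
        # No hint: spread VDIs evenly across the FlexVols.
        index = vdi_count % NUM_FLEXVOLS
    return flexvol_name(sr_uuid, index)
```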
3.4.6. Taking VDI snapshots with a Netapp SR
As outlined earlier in Section 3.2.6, “Shared NetApp Storage”, a NetApp SR comprises a collection of
FlexVols. Cloning a VDI entails generating a snapshot of the FlexVol and then creating a LUN clone backed
off that snapshot. When generating a VM snapshot, an administrator must snapshot each of the VM's disks in
sequence. Since all the disks are expected to be located in the same FlexVol, and a FlexVol snapshot
operates on all LUNs in that FlexVol, it makes sense to re-use an existing snapshot for all subsequent
LUN clones. By default, if no snapshot hint is passed to the backend driver, it generates a random ID
with which to name the FlexVol snapshot. This value can be overridden via the CLI, however, by passing in
an epochhint. The first time an epochhint value, or 'cookie', is received, the backend generates a new
snapshot based on the cookie name. Any subsequent snapshot requests with the same epochhint value
will be backed off the existing snapshot:
xe vdi-snapshot uuid=<VALID_VDI_UUID> driver-params:epochhint=<COOKIE>
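For example, when snapshotting a VM with two disks in the same FlexVol, passing the same cookie for both requests (placeholder values shown) causes the second request to be backed off the FlexVol snapshot created by the first:

```shell
# First call: the backend creates a new FlexVol snapshot named after the cookie.
xe vdi-snapshot uuid=<FIRST_VDI_UUID> driver-params:epochhint=<COOKIE>
# Second call: same cookie, so the existing FlexVol snapshot is re-used.
xe vdi-snapshot uuid=<SECOND_VDI_UUID> driver-params:epochhint=<COOKIE>
```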
3.4.7. Adjusting the disk IO scheduler for an LVM-based SR
For general performance, the default disk scheduler 'noop' is applied to all new SR types that implement
LVM-based storage over a disk, i.e. Local LVM, LVM over iSCSI, and LVM over HBA-attached LUNs. The
noop scheduler provides the fairest performance for competing VMs accessing the same device. In order
to apply disk QoS, however (Section 3.6, “Virtual disk QoS settings (Enterprise Edition only)”), it is necessary
to override the default setting and assign the 'cfq' disk scheduler to the LVM-based SR type. For any LVM-based
SR type, the corresponding PBD must be unplugged and re-plugged for the scheduler parameter to
take effect. The disk scheduler can be adjusted using the following CLI command:
xe sr-param-set other-config:scheduler={noop|cfq|anticipatory|deadline}
uuid=<VALID_SR_UUID>
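For example, to enable disk QoS on an LVM-based SR, the full sequence (with placeholder UUIDs) would look like the following; the PBD UUID can be found with `xe pbd-list sr-uuid=<VALID_SR_UUID>`:

```shell
# Switch the LVM-based SR to the cfq scheduler so disk QoS settings apply.
xe sr-param-set other-config:scheduler=cfq uuid=<VALID_SR_UUID>
# The change only takes effect after the PBD is unplugged and re-plugged.
xe pbd-unplug uuid=<VALID_PBD_UUID>
xe pbd-plug uuid=<VALID_PBD_UUID>
```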
3.5. Managing Host Bus Adapters (HBAs)
This section covers various operations required to manage Fibre Channel and iSCSI HBAs.