shared by ESX servers, some applications can take advantage of improved performance. However, managing VMFS and Vdisks is highly involved.
In Figure 2, Vdisks D1, D2, D3, and D4 are pooled together and a VMFS file system volume is created. In the VMFS volume, Virtual Machine Disks (vmdk) are carved out for every VM. As the figure shows, vmdk A, B, and C are carved out for the VMs: VM1 is installed in vmdk A, VM2 in vmdk B, and VM3 in vmdk C. A Microsoft Windows guest OS is installed in every VM, and Microsoft's software iSCSI initiator is configured in each guest. The software iSCSI initiator in each VM is configured to connect to the MPX200. Vdisk D5 is presented to the iSCSI initiator in VM1, Vdisk D6 to the iSCSI initiator in VM2, and Vdisk D7 to the iSCSI initiator in VM3. Because each VM has direct ownership of its LUNs, it has the flexibility to manage those LUNs as it chooses. The iSCSI EVA Vdisks in the guest OS can be formatted with the guest OS's native file system, and applications and data can be stored there.
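The Figure 2 layout can be summarized as a simple mapping. The sketch below, in Python purely for illustration, uses the Vdisk, vmdk, and VM names taken from the figure; the data-structure layout itself is an assumption made for clarity, not part of the HP solution.

    # Illustrative sketch only: the Figure 2 (Option 1) layout expressed as plain
    # Python data structures. The Vdisk, vmdk, and VM names come from the figure;
    # the dictionary layout itself is an assumption made for clarity.

    # EVA Vdisks pooled into a single VMFS volume on the ESX server
    vmfs_volume = {
        "pooled_vdisks": ["D1", "D2", "D3", "D4"],
        "vmdks": {"A": "VM1", "B": "VM2", "C": "VM3"},  # one vmdk carved out per VM
    }

    # Data Vdisks presented directly to the Microsoft software iSCSI initiator
    # running inside each Windows guest OS
    guest_iscsi_luns = {"VM1": ["D5"], "VM2": ["D6"], "VM3": ["D7"]}

    for vm, luns in guest_iscsi_luns.items():
        print(f"{vm}: boot vmdk on VMFS, data LUN(s) {luns} owned by the guest OS")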
Implementation Option 2
Architecture
This architecture provides a high-performance solution in a virtual server environment. The increase in performance comes from configuring Raw Device Mapped (RDM) volumes in the MPX200 iSCSI SAN, with a single Vdisk dedicated to a single VM. A single guest OS is installed on that dedicated RDM volume. For the data LUNs, the software iSCSI initiator is configured in each guest OS, and the Vdisks are presented to the individual VMs. The architecture lets each guest OS own and manage its LUNs; in the virtual environment, the VM behaves like a standalone system running its own software iSCSI initiator.
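For contrast, a similar illustrative sketch of the Option 2 layout is shown below: each VM boots from its own dedicated RDM Vdisk, and the data Vdisks are still presented through the guest's software iSCSI initiator. The RDM labels R1-R3 and the reuse of D5-D7 as data LUNs are placeholders introduced only for this sketch.

    # Illustrative sketch only: Option 2 dedicates one RDM Vdisk per VM for the
    # guest OS and presents a data Vdisk to the guest's software iSCSI initiator.
    # The labels (R1-R3, D5-D7) are hypothetical placeholders for this sketch.

    option2_vms = {
        "VM1": {"boot_rdm_vdisk": "R1", "guest_iscsi_data_luns": ["D5"]},
        "VM2": {"boot_rdm_vdisk": "R2", "guest_iscsi_data_luns": ["D6"]},
        "VM3": {"boot_rdm_vdisk": "R3", "guest_iscsi_data_luns": ["D7"]},
    }

    # No VMFS pooling here: each VM's boot LUN is unshared, which is where the
    # reduction in shared resources (and the performance benefit) comes from.
    for vm, cfg in option2_vms.items():
        print(f"{vm}: boots from RDM {cfg['boot_rdm_vdisk']}, "
              f"data LUNs {cfg['guest_iscsi_data_luns']} managed by the guest OS")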
Benefits:
As a result, the guest OS has all of its native features available for these devices, just as in implementation option 1. In addition, the VMs are configured entirely on the SAN, which retains the benefits of VMware's traditional implementation. Because a single VM runs on a single RDM LUN, the number of shared resources is reduced, leading to high-performance VMs. VM management is also simplified, because the MPX200 EVA iSCSI solution supports enough disks to allow an individual disk for every VM.
This option allows a maximum of 512 high-performing virtual machines, each owning one iSCSI LUN, in a single MPX200 configuration (512 LUNs for installing the guest OSs and 512 data LUNs).
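As a rough sizing check, derived only from the 512-VM figure above and assuming one boot RDM LUN and one data LUN per VM, the LUN budget works out as follows:

    # Sizing sketch derived from the text above: 512 VMs, each assumed to use one
    # RDM LUN for the guest OS install and one data LUN via the guest's iSCSI
    # initiator.
    max_vms = 512
    boot_luns = max_vms      # one RDM LUN per guest OS install
    data_luns = max_vms      # one data LUN per guest iSCSI initiator
    print(f"{max_vms} VMs -> {boot_luns} boot + {data_luns} data = "
          f"{boot_luns + data_luns} LUNs")
    # Prints: 512 VMs -> 512 boot + 512 data = 1024 LUNs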
Figure 3 shows the improved way of configuring iSCSI for high-performance virtual machines in a hypervisor.