HP StorageWorks MPX200 Solution - improved implementation methods for VMware
This section describes the current technology compared to the improved methods with the EVA
and MPX200. Previous methods dictated that the storage initiator, either hardware or software,
reside at the hypervisor level. This is because most mid- to high-end storage arrays allowed 128
to 512 host connections, and each initiator on the server side consumed one host connection on
the array side. Therefore, there were not enough array host connections to give every virtual
machine on every physical platform its own initiator. For example, one hypervisor can support up
to 35 virtual machines, and typical deployments run 8 to 12 virtual machines per hypervisor. At
that rate, a mid-sized array would run out of host connections after as few as 11 hypervisors
were deployed.
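As a rough illustration of how quickly the connections run out, the following minimal sketch works
through the numbers above; the 128-connection figure for a mid-sized array is an assumption for
this example, not an EVA or MPX200 specification.

    # Sketch: how many hypervisors a mid-sized array can serve when every
    # guest OS consumes its own array host connection.
    # All figures are assumptions taken from the example in the text.
    ARRAY_HOST_CONNECTIONS = 128   # assumed mid-sized array limit
    VMS_PER_HYPERVISOR = 12        # typical upper end cited above

    def hypervisors_supported(connections: int, vms_per_host: int) -> int:
        # Whole hypervisors that fit before host connections are exhausted.
        return connections // vms_per_host

    n = hypervisors_supported(ARRAY_HOST_CONNECTIONS, VMS_PER_HYPERVISOR)
    print(f"Hypervisors fully supported: {n}")   # 128 // 12 == 10
    # Deploying the 11th hypervisor would exceed the array's host connections.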
Even if enough host connections were available from the array, the IT manager would still run into
the 256-LUN limit within the storage environment. Every hypervisor needs to be exposed to every
LUN in the environment for VMotion to work. This inherent limitation has a workaround within the
hypervisor: if more than 256 virtual machines are deployed, the respective hypervisors carve up
the assigned LUNs to provide partial LUN access to each guest OS. This is counterproductive to
existing policies for security, regulatory compliance, and streamlined storage management.
This results in a circular paradox: you want a unique LUN assigned to each guest OS, but a guest
OS cannot have its own LUNs unless it runs its own initiator; you cannot run more initiators
without more array host connections; and without more host connections, you cannot assign unique
LUNs to each guest OS.
The MPX200 provides the initiator virtualization technology that breaks this paradox. By bridging
the Fibre Channel and iSCSI networks, the MPX200 provides host connectivity to the EVA storage
system, but the solution goes well beyond Fibre Channel-iSCSI routing. The MPX200 uses at most
four array host connections in its maximum configuration, and those four connections can be
virtualized to more than 1,024 iSCSI initiators. The paradox is therefore resolved from both
sides: array host connections are preserved, and an iSCSI initiator is now available for each
guest OS. The MPX200 iSCSI solution overcomes the traditional VMware implementation limitations
in the following two possible implementations:
Implementation Option 1
Architecture
The goal of this architecture is to provide performance, flexibility, and scalability of storage
configuration in a virtual server environment by combining LUN access through iSCSI initiators
configured both in the ESX server and in the individual guest OSs.
The VM is configured in the VMFS volume using the ESX server's iSCSI initiator, and the guest OS
is installed on this volume. The data LUN is configured using the iSCSI software initiator in the
VM's guest OS. The software iSCSI initiator in the guest OS accesses the MPX200 iSCSI solution
for EVA and the Vdisks in it, and the Vdisks are presented to the individual guest OSs. In this
architecture, each guest OS owns and manages its LUNs; the VM, although it lives in the virtual
environment, behaves like a standalone system running its own software iSCSI initiator.
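As a concrete illustration, a minimal sketch of how a Linux guest OS might attach its data LUN
through its own software iSCSI initiator is shown below. It assumes the open-iscsi tools are
installed in the guest; the portal address and target IQN are hypothetical placeholders, not
values defined by this solution, and a Windows guest would use the Microsoft iSCSI Initiator
instead.

    # Sketch: log a Linux guest's software iSCSI initiator in to a target
    # presented by the MPX200. Portal and IQN below are placeholders.
    import subprocess

    PORTAL = "192.168.10.20:3260"                       # hypothetical MPX200 iSCSI portal
    TARGET = "iqn.2000-01.com.example:mpx200-vdisk-01"  # hypothetical target IQN

    def run(cmd):
        # Echo the command and stop on error so failures are visible.
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # Discover the targets offered by the portal, then log in to the chosen one.
    run(["iscsiadm", "-m", "discovery", "-t", "sendtargets", "-p", PORTAL])
    run(["iscsiadm", "-m", "node", "-T", TARGET, "-p", PORTAL, "--login"])
    # After login, the Vdisk appears to the guest as a local block device
    # that the guest OS owns and manages directly.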
Benefits
As a result, the guest OS has all of its native storage features available for these devices. In
the case of the EVA, customers can continue to use their existing business compliance software
licenses and policies. In addition, Microsoft's Multipath I/O (MPIO) for multipathing and MSCS
for clustering can also be deployed at the guest OS level. Hence, the VM is managed as a
standalone system while it