HP BL680c HP local I/O technology for ProLiant and BladeSystem servers - Page 10


environments. To do so, both technologies provide mechanisms to solve the two most important
challenges of I/O virtualization under direct assignment: DMA and interrupt virtualization. In the
language of the PCI-SIG, the generic name for these technologies is Address Translation Services.
DMA virtualization enables system software to create multiple DMA protection domains, each of
which then represents a separate environment to which a subset of the host physical memory is
assigned. Access to a given domain can then be restricted to only the I/O device to which it is
assigned, thus providing the DMA isolation needed for the VMM to provide a more secure operating
environment for the virtual machines.
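The idea of DMA protection domains can be sketched in a few lines of Python. This is a toy conceptual model, not an actual VT-d or IOMMU programming interface; the class and method names (`ProtectionDomain`, `Iommu`, `dma_allowed`) are invented for illustration.

```python
# Toy model of DMA protection domains: each domain owns a subset of host
# physical memory, and a device's DMA is permitted only within the domain
# it has been assigned to. Illustrative only; names are invented.

class ProtectionDomain:
    """A DMA protection domain: a subset of host physical memory pages."""
    def __init__(self, name, allowed_pages):
        self.name = name
        self.allowed_pages = set(allowed_pages)  # host-physical page numbers

class Iommu:
    """Tracks which I/O device is assigned to which protection domain."""
    def __init__(self):
        self.device_domain = {}

    def assign(self, device_id, domain):
        self.device_domain[device_id] = domain

    def dma_allowed(self, device_id, host_page):
        """A DMA access is permitted only if the requesting device's
        domain contains the target host-physical page."""
        domain = self.device_domain.get(device_id)
        return domain is not None and host_page in domain.allowed_pages

iommu = Iommu()
vm1_domain = ProtectionDomain("vm1", allowed_pages=[0x100, 0x101])
vm2_domain = ProtectionDomain("vm2", allowed_pages=[0x200])
iommu.assign("nic0", vm1_domain)
iommu.assign("hba0", vm2_domain)

print(iommu.dma_allowed("nic0", 0x100))  # True: page belongs to nic0's domain
print(iommu.dma_allowed("nic0", 0x200))  # False: page belongs to another VM
```

The key property the hardware enforces is the second case: a misbehaving or compromised device driver in one VM cannot DMA into another VM's memory.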
DMA virtualization services also provide built-in address translation, transforming (or remapping)
the virtualized memory addresses in DMA requests from the I/O devices into actual host physical
addresses, and cache the most frequently used remapping entries. This allows VMM
software to offload the execution and management of address translations to the processor hardware
itself, resulting in significant performance improvements.
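The remap-and-cache behavior described above can be modeled as a page-granular lookup backed by a small cache of recent translations. The sketch below is illustrative only: the flat page table, the cache size, and the class name `DmaRemapper` are invented, and real hardware uses multi-level tables and dedicated IOTLB structures.

```python
# Toy model of DMA address remapping with a small cache of the most
# recently used translations. Assumes 4 KiB pages and a flat page table.

from collections import OrderedDict

PAGE_SHIFT = 12  # 4 KiB pages

class DmaRemapper:
    def __init__(self, page_table, cache_size=4):
        self.page_table = page_table    # device-visible page -> host physical page
        self.cache = OrderedDict()      # most-recently-used translations
        self.cache_size = cache_size
        self.hits = 0
        self.misses = 0

    def translate(self, iova):
        """Remap a device-visible DMA address to a host physical address."""
        page = iova >> PAGE_SHIFT
        offset = iova & ((1 << PAGE_SHIFT) - 1)
        if page in self.cache:
            self.hits += 1
            self.cache.move_to_end(page)        # keep MRU ordering
            hpa_page = self.cache[page]
        else:
            self.misses += 1
            hpa_page = self.page_table[page]    # walk the (flat) page table
            self.cache[page] = hpa_page
            if len(self.cache) > self.cache_size:
                self.cache.popitem(last=False)  # evict least-recently-used entry
        return (hpa_page << PAGE_SHIFT) | offset

r = DmaRemapper({0x10: 0x9A, 0x11: 0x9B})
print(hex(r.translate(0x10004)))  # 0x9a004: first access walks the table
print(hex(r.translate(0x10008)))  # 0x9a008: same page, served from the cache
print(r.hits, r.misses)           # 1 1
```

Because the cache absorbs repeated translations for the same pages, the VMM no longer has to intercept and translate every DMA request in software, which is where the performance gain comes from.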
To complete the picture, both technologies also support the virtualization of interrupts, whether legacy
interrupts from I/O interrupt controllers or message-signaled interrupts (MSIs), by remapping them
through an interrupt-remapping table.
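Conceptually, the interrupt-remapping table is a lookup keyed by the interrupt's source: an incoming interrupt is either remapped to the vector the VMM programmed for the owning VM, or blocked if no entry exists. The sketch below is a hedged illustration; the table layout and names are invented, not a real hardware format.

```python
# Toy interrupt-remapping table. An incoming interrupt (legacy or MSI)
# is looked up by its source and remapped to the destination VM and
# vector that the VMM configured. Unmapped sources are rejected.

remap_table = {
    # (source device, original vector) -> (target VM, remapped vector)
    ("nic0", 0x20): ("vm1", 0x41),
    ("hba0", 0x21): ("vm2", 0x52),
}

def deliver_interrupt(source, vector):
    entry = remap_table.get((source, vector))
    if entry is None:
        # Hardware would block delivery; we model that as an error.
        raise PermissionError(f"unmapped interrupt from {source}")
    vm, new_vector = entry
    return vm, new_vector

print(deliver_interrupt("nic0", 0x20))  # ('vm1', 65), i.e. vector 0x41
```

As with DMA remapping, the isolation benefit is that a device can only raise the interrupts the VMM has explicitly mapped for it.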
Although hardware support for I/O virtualization will be delivered by Intel and AMD, and
subsequently incorporated into the HP ProLiant architecture, primary responsibility for exploiting this
new technology will fall to the VMM and OS vendors. Their operating software will need to be
significantly revised in order to take full advantage of the performance and reliability improvements
that can be gained.
Intel VT-d and AMD IOMMU are expected to be incorporated into HP products with the release of the
next generation of the corresponding HP ProLiant servers.
IOV – I/O Virtualization at the endpoint
While Intel VT-d technology is designed to address I/O virtualization issues associated with the
processor complex, IOV (as it is commonly referred to) is designed to support I/O
virtualization at the other end of the I/O stack: the I/O devices themselves, frequently referred to
in architectural circles as I/O endpoints.
As stated earlier, direct assignment, in its purest form, involves the use of a given I/O device (such as
a NIC) by a single virtual machine. While this approach is cleaner than others, it also usually requires
the use of separate devices for each virtual machine.
IOV technology is designed to help solve this issue by specifying an architecture that will allow for the
creation of I/O devices (such as a NIC) that can support multiple virtual functions that share one or
more physical resources of the endpoint (Figure 10). To the system, each of these virtual functions
appears as a separate PCIe function that can then be directly assigned. This capability allows each
virtual machine to see its own separate, directly assigned device even though the actual physical
device is being shared by multiple VMs.
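The endpoint-side idea can be sketched as one physical function exposing several virtual functions, each assignable to a different VM. This is a toy model of the concept only; the classes (`PhysicalFunction`, `VirtualFunction`) and fields are invented and do not correspond to real PCIe configuration-space structures.

```python
# Toy sketch of IOV at the endpoint: one physical device (e.g. a NIC)
# exposes several virtual functions, each of which looks to the system
# like an independent PCIe function that can be directly assigned.

class VirtualFunction:
    """Appears to the system as a separate PCIe function."""
    def __init__(self, pf, index):
        self.pf = pf          # the physical function whose resources it shares
        self.index = index
        self.owner_vm = None  # VM this function is directly assigned to, if any

    def assign(self, vm):
        self.owner_vm = vm

class PhysicalFunction:
    """The real device, whose physical resources the VFs share."""
    def __init__(self, bdf, num_vfs):
        self.bdf = bdf        # PCIe bus/device/function address
        self.vfs = [VirtualFunction(self, i) for i in range(num_vfs)]

nic = PhysicalFunction(bdf="02:00.0", num_vfs=4)
nic.vfs[0].assign("vm1")
nic.vfs[1].assign("vm2")

# Two VMs each see "their own" NIC while sharing one physical device.
print([vf.owner_vm for vf in nic.vfs])  # ['vm1', 'vm2', None, None]
```

Combined with the direct-assignment and DMA-isolation support described earlier, each virtual function can be handed to a VM without the VMM mediating its I/O path.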