Hyper-V Storage I/O Performance

This section describes the different options and considerations for tuning storage I/O performance in a virtual machine. The storage I/O path extends from the guest storage stack, through the host virtualization layer, to the host storage stack, and then to the physical disk. Following are explanations about how optimizations are possible at each of these stages.

Virtual controllers

Hyper-V offers three types of virtual controllers: IDE, SCSI, and virtual host bus adapters (HBAs).

IDE

IDE controllers expose IDE disks to the virtual machine. The IDE controller is emulated, and it is the only controller that is available for guest VMs running older versions of Windows without the Virtual Machine Integration Services. Disk I/O that is performed by using the IDE filter driver that is provided with the Virtual Machine Integration Services is significantly better than the disk I/O performance that is provided with the emulated IDE controller. We recommend that IDE disks be used only for the operating system disks because they have performance limitations due to the maximum I/O size that can be issued to these devices.

SCSI (SAS controller)

SCSI controllers expose SCSI disks to the virtual machine, and each virtual SCSI controller can support up to 64 devices. For optimal performance, we recommend that you attach multiple disks to a single virtual SCSI controller and create additional controllers only as they are required to scale the number of disks connected to the virtual machine. The SCSI path is not emulated, which makes it the preferred controller for any disk other than the operating system disk. In fact, with Generation 2 VMs, it is the only type of controller possible. Introduced in Windows Server 2012 R2, this controller is reported as SAS to support shared VHDX.
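For example, additional data disks can be attached to a VM's virtual SCSI controller with PowerShell. This is a sketch; the VM name and VHDX path are hypothetical:

```powershell
# Add another virtual SCSI controller to the VM (hypothetical VM name).
Add-VMScsiController -VMName "TestVM"

# Attach an existing data disk to the first SCSI controller.
Add-VMHardDiskDrive -VMName "TestVM" -ControllerType SCSI -ControllerNumber 0 `
    -Path "E:\vms\data1.vhdx"
```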

Virtual Fibre Channel HBAs

Virtual Fibre Channel HBAs can be configured to allow direct access for virtual machines to Fibre Channel and Fibre Channel over Ethernet (FCoE) LUNs. Virtual Fibre Channel disks bypass the NTFS file system in the root partition, which reduces the CPU usage of storage I/O.

Large data drives and drives that are shared between multiple virtual machines (for guest clustering scenarios) are prime candidates for virtual Fibre Channel disks.

Virtual Fibre Channel disks require one or more Fibre Channel host bus adapters (HBAs) to be installed on the host. Each host HBA is required to use an HBA driver that supports the Windows Server 2016 Virtual Fibre Channel/NPIV capabilities. The SAN fabric should support NPIV, and the HBA port(s) that are used for the virtual Fibre Channel should be set up in a Fibre Channel topology that supports NPIV.

To maximize throughput on hosts that are installed with more than one HBA, we recommend that you configure multiple virtual HBAs inside the Hyper-V virtual machine (up to four HBAs can be configured for each virtual machine). Hyper-V will automatically make a best effort to balance virtual HBAs to host HBAs that access the same virtual SAN.
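Assuming a virtual SAN has already been defined on the host (for example, with New-VMSan), a virtual HBA can be added to a VM as follows; the VM and SAN names are hypothetical:

```powershell
# Add a virtual Fibre Channel HBA connected to a previously defined virtual SAN.
Add-VMFibreChannelHba -VMName "TestVM" -SanName "ProductionSan"
```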

Virtual disks

Disks can be exposed to the virtual machines through the virtual controllers. These disks can be virtual hard disks that are file abstractions of a disk, or a pass-through disk on the host.

Virtual hard disks

There are two virtual hard disk formats, VHD and VHDX. Each of these formats supports three types of virtual hard disk files.

VHD format

The VHD format was the only virtual hard disk format that was supported by Hyper-V in past releases. Introduced in Windows Server 2012, the VHD format has been modified to allow better alignment, which results in significantly better performance on new large sector disks.

Any new VHD that is created on Windows Server 2012 or newer has the optimal 4 KB alignment. This aligned format is completely compatible with previous Windows Server operating systems. However, the alignment property will be broken for new allocations from parsers that are not 4 KB alignment-aware (such as a VHD parser from a previous version of Windows Server or a non-Microsoft parser).

Any VHD that is moved from a previous release does not automatically get converted to this new improved VHD format.

To convert to the new VHD format, run the following Windows PowerShell command:

    Convert-VHD -Path E:\vms\testvhd\test.vhd -DestinationPath E:\vms\testvhd\test-converted.vhd

You can check the alignment property for all the VHDs on the system, and it should be converted to the optimal 4 KB alignment. You create a new VHD with the data from the original VHD by using the Create-from-Source option.

To check for alignment by using Windows PowerShell, examine the Alignment line, as shown below:

    Get-VHD -Path E:\vms\testvhd\test.vhd

    Path                    : E:\vms\testvhd\test.vhd
    VhdFormat               : VHD
    VhdType                 : Dynamic
    FileSize                : 69245440
    Size                    : 10737418240
    MinimumSize             : 10735321088
    LogicalSectorSize       : 512
    PhysicalSectorSize      : 512
    BlockSize               : 2097152
    ParentPath              :
    FragmentationPercentage : 10
    Alignment               : 0
    Attached                : False
    DiskNumber              :
    IsDeleted               : False
    Number                  :

To verify alignment by using Windows PowerShell, examine the Alignment line, as shown below:

    Get-VHD -Path E:\vms\testvhd\test-converted.vhd

    Path                    : E:\vms\testvhd\test-converted.vhd
    VhdFormat               : VHD
    VhdType                 : Dynamic
    FileSize                : 69369856
    Size                    : 10737418240
    MinimumSize             : 10735321088
    LogicalSectorSize       : 512
    PhysicalSectorSize      : 512
    BlockSize               : 2097152
    ParentPath              :
    FragmentationPercentage : 0
    Alignment               : 1
    Attached                : False
    DiskNumber              :
    IsDeleted               : False
    Number                  :
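To sweep a whole folder of VHDs for unaligned files, something like the following sketch can be used (the folder path is an assumption based on the earlier examples):

```powershell
# List the Alignment property for every VHD file under a folder.
Get-ChildItem -Path "E:\vms" -Recurse -Filter *.vhd |
    ForEach-Object { Get-VHD -Path $_.FullName } |
    Select-Object Path, Alignment
```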

VHDX format

VHDX is a virtual hard disk format introduced in Windows Server 2012, which allows you to create resilient high-performance virtual disks of up to 64 terabytes. Benefits of this format include:

  • Support for virtual hard disk storage capacity of up to 64 terabytes.

  • Protection against data corruption during power failures by logging updates to the VHDX metadata structures.

  • Ability to store custom metadata about a file, which a user might want to record, such as operating system version or patches applied.

The VHDX format also provides the following performance benefits:

  • Improved alignment of the virtual hard disk format to work well on large sector disks.

  • Larger block sizes for dynamic and differencing disks, which allows these disks to adjust to the needs of the workload.

  • A 4 KB logical sector virtual disk that allows for increased performance when used by applications and workloads that are designed for 4 KB sectors.

  • Efficiency in representing data, which results in smaller file size and allows the underlying physical storage device to reclaim unused space. (Trim requires pass-through or SCSI disks and trim-compatible hardware.)

When you upgrade to Windows Server 2016, we recommend that you convert all VHD files to the VHDX format due to these benefits. The only scenario where it would make sense to keep the files in the VHD format is when a virtual machine has the potential to be moved to a previous release of Hyper-V that does not support the VHDX format.
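Conversion from VHD to VHDX also uses Convert-VHD; the format is inferred from the destination file extension. The paths follow the earlier examples:

```powershell
# Convert a VHD file to the VHDX format (the destination extension selects the format).
Convert-VHD -Path "E:\vms\testvhd\test.vhd" -DestinationPath "E:\vms\testvhd\test.vhdx"
```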

Types of virtual hard disk files

There are three types of VHD files. The following sections cover the performance characteristics and trade-offs between the types.

The following recommendations should be taken into consideration with regard to selecting a VHD file type:

  • When using the VHD format, we recommend that you use the fixed type because it has better resiliency and performance characteristics compared to the other VHD file types.

  • When using the VHDX format, we recommend that you use the dynamic type because it offers resiliency guarantees in addition to space savings that are associated with allocating space only when there is a need to do so.

  • The fixed type is also recommended, irrespective of the format, when the storage on the hosting volume is not actively monitored to ensure that sufficient disk space is present when expanding the VHD file at run time.

  • Snapshots of a virtual machine create a differencing VHD to store writes to the disks. Having only a few snapshots can elevate the CPU usage of storage I/Os, but might not noticeably affect performance except in highly I/O-intensive server workloads. However, having a large chain of snapshots can noticeably affect performance because reading from the VHD can require checking for the requested blocks in many differencing VHDs. Keeping snapshot chains short is important for maintaining good disk I/O performance.

Fixed virtual hard disk type

Space for the VHD is first allocated when the VHD file is created. This type of VHD file is less likely to fragment, which reduces the I/O throughput when a single I/O is split into multiple I/Os. It has the lowest CPU overhead of the three VHD file types because reads and writes do not need to look up the mapping of the block.

Dynamic virtual hard disk type

Space for the VHD is allocated on demand. The blocks in the disk start as unallocated blocks and are not backed by any actual space in the file. When a block is first written to, the virtualization stack must allocate space within the VHD file for the block, and then update the metadata. This increases the number of necessary disk I/Os for the write and increases CPU usage. Reads and writes to existing blocks incur disk access and CPU overhead when looking up the blocks' mapping in the metadata.

Differencing virtual hard disk type

The VHD points to a parent VHD file. Any writes to blocks never written to before result in space being allocated in the VHD file, as with a dynamically expanding VHD. Reads are serviced from the VHD file if the block has been written to. Otherwise, they are serviced from the parent VHD file. In both cases, the metadata is read to determine the mapping of the block. Reads and writes to this VHD can consume more CPU and result in more I/Os than a fixed VHD file.

Block size considerations

Block size can significantly impact performance. It is optimal to match the block size to the allocation patterns of the workload that is using the disk. For example, if an application is allocating in chunks of 16 MB, it would be optimal to have a virtual hard disk block size of 16 MB. A block size of larger than 2 MB is possible only on virtual hard disks with the VHDX format. Having a larger block size than the allocation pattern for a random I/O workload will significantly increase the space usage on the host.
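As a sketch, a dynamic VHDX with a block size matched to a workload that allocates in 16 MB chunks could be created as follows (the path and size are hypothetical):

```powershell
# Create a dynamic VHDX with a 16 MB block size (block sizes above 2 MB require VHDX).
New-VHD -Path "E:\vms\large-block.vhdx" -SizeBytes 100GB -Dynamic -BlockSizeBytes 16MB
```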

Sector size implications

Most of the software industry has depended on disk sectors of 512 bytes, but the standard is moving to 4 KB disk sectors. To reduce compatibility issues that might arise from a change in sector size, hard drive vendors are introducing a transitional size referred to as 512 emulation drives (512e).

These emulation drives offer some of the advantages that are offered by 4 KB disk sector native drives, such as improved format efficiency and an improved scheme for error correction codes (ECC). They come with fewer compatibility issues than would occur by exposing a 4 KB sector size at the disk interface.

Support for 512e disks

A 512e disk can perform a write only in terms of a physical sector; that is, it cannot directly write a 512-byte sector that is issued to it. The internal process in the disk that makes these writes possible follows these steps:

  • The disk reads the 4 KB physical sector into its internal cache, which contains the 512-byte logical sector referred to in the write.

  • Data in the 4 KB buffer is modified to include the updated 512-byte sector.

  • The disk performs a write of the updated 4 KB buffer back to its physical sector on the disk.

This process is called read-modify-write (RMW). The overall performance impact of the RMW process depends on the workloads. The RMW process causes performance degradation in virtual hard disks for the following reasons:

  • Dynamic and differencing virtual hard disks have a 512-byte sector bitmap in front of their data payload. In addition, footer, header, and parent locators align to a 512-byte sector. It is common for the virtual hard disk driver to issue 512-byte write commands to update these structures, resulting in the RMW process described earlier.

  • Applications commonly issue reads and writes in multiples of 4 KB sizes (the default cluster size of NTFS). Because there is a 512-byte sector bitmap in front of the data payload block of dynamic and differencing virtual hard disks, the 4 KB blocks are not aligned to the physical 4 KB boundary. The following figure shows a VHD 4 KB block (highlighted) that is not aligned with the physical 4 KB boundary.

(Figure: VHD 4 KB block not aligned with the physical 4 KB boundary)

Each 4 KB write command that is issued by the current parser to update the payload data results in two reads for two blocks on the disk, which are then updated and subsequently written back to the two disk blocks. Hyper-V in Windows Server 2016 mitigates some of the performance effects on 512e disks on the VHD stack by preparing the previously mentioned structures for alignment to 4 KB boundaries in the VHD format. This avoids the RMW effect when accessing the data within the virtual hard disk file and when updating the virtual hard disk metadata structures.

As mentioned earlier, VHDs that are copied from previous versions of Windows Server will not automatically be aligned to 4 KB. You can manually convert them to optimally align by using the Copy from Source disk option that is available in the VHD interfaces.

By default, VHDs are exposed with a physical sector size of 512 bytes. This is done to ensure that physical sector size dependent applications are not impacted when the application and VHDs are moved from a previous version of Windows Server.

By default, disks with the VHDX format are created with the 4 KB physical sector size to optimize their performance profile on regular disks and large sector disks. To make full use of 4 KB sectors, it's recommended to use the VHDX format.

Support for native 4 KB disks

Hyper-V in Windows Server 2012 R2 and beyond supports 4 KB native disks. But it is still possible to store VHD disks on a 4 KB native disk. This is done by implementing a software RMW algorithm in the virtual storage stack layer that converts 512-byte access and update requests to corresponding 4 KB accesses and updates.

Because VHD files can only expose themselves as 512-byte logical sector size disks, it is very likely that there will be applications that issue 512-byte I/O requests. In these cases, the RMW layer will satisfy these requests and cause performance degradation. This is also true for a disk that is formatted with VHDX that has a logical sector size of 512 bytes.

It is possible to configure a VHDX file to be exposed as a 4 KB logical sector size disk, and this would be an optimal configuration for performance when the disk is hosted on a 4 KB native physical device. Care should be taken to ensure that the guest and the application that is using the virtual disk are backed by the 4 KB logical sector size. The VHDX formatting will work correctly on a 4 KB logical sector size device.
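A VHDX with 4 KB logical sectors can be created with New-VHD; the path and size here are hypothetical:

```powershell
# Create a VHDX exposing 4 KB logical (and physical) sectors for 4 KB native storage.
New-VHD -Path "E:\vms\native4k.vhdx" -SizeBytes 100GB -Dynamic `
    -LogicalSectorSizeBytes 4096 -PhysicalSectorSizeBytes 4096
```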

Pass-through disks

The VHD in a virtual machine can be mapped directly to a physical disk or logical unit number (LUN), instead of to a VHD file. The benefit is that this configuration bypasses the NTFS file system in the root partition, which reduces the CPU usage of storage I/O. The risk is that physical disks or LUNs can be more difficult to move between machines than VHD files.

Pass-through disks should be avoided due to the limitations introduced with virtual machine migration scenarios.

Advanced storage features

Storage Quality of Service (QoS)

Starting in Windows Server 2012 R2, Hyper-V includes the ability to set certain quality-of-service (QoS) parameters for storage on the virtual machines. Storage QoS provides storage performance isolation in a multitenant environment and mechanisms to notify you when the storage I/O performance does not meet the defined threshold to efficiently run your virtual machine workloads.

Storage QoS provides the ability to specify a maximum input/output operations per second (IOPS) value for your virtual hard disk. An administrator can throttle the storage I/O to stop a tenant from consuming excessive storage resources that may impact another tenant.

You can also set a minimum IOPS value. You will be notified when the IOPS to a specified virtual hard disk is below a threshold that is needed for its optimal performance.
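Minimum and maximum IOPS are set per virtual hard disk with Set-VMHardDiskDrive; the VM name, controller location, and values here are hypothetical:

```powershell
# Throttle a disk to 500 normalized IOPS and request notification below 100.
Set-VMHardDiskDrive -VMName "TestVM" -ControllerType SCSI `
    -ControllerNumber 0 -ControllerLocation 0 `
    -MaximumIOPS 500 -MinimumIOPS 100
```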

The virtual machine metrics infrastructure is also updated with storage-related parameters to allow the administrator to monitor the performance and chargeback-related parameters.

Maximum and minimum values are specified in terms of normalized IOPS, where every 8 KB of data is counted as an I/O.
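Under this normalization, a request larger than 8 KB counts as multiple I/Os; for example, a 32 KB request counts as four normalized I/Os:

```powershell
# Normalized I/O count for a given request size, rounding up to 8 KB units.
$ioBytes = 32KB
$normalized = [math]::Ceiling($ioBytes / 8KB)   # 32 KB / 8 KB = 4 normalized I/Os
```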

Some of the limitations are as follows:

  • Only for virtual disks

  • A differencing disk cannot have its parent virtual disk on a different volume

  • Replica - QoS for the replica site is configured separately from the primary site

  • Shared VHDX is not supported

For more info on Storage Quality of Service, see Storage Quality of Service for Hyper-V.

NUMA I/O

Windows Server 2012 and beyond supports large virtual machines, and any large virtual machine configuration (for example, a configuration with Microsoft SQL Server running with 64 virtual processors) will also need scalability in terms of I/O throughput.

The following key improvements first introduced in the Windows Server 2012 storage stack and Hyper-V provide the I/O scalability needs of large virtual machines:

  • An increase in the number of communication channels created between the guest devices and the host storage stack.

  • A more efficient I/O completion mechanism involving interrupt distribution among the virtual processors to avoid expensive interprocessor interrupts.

Introduced in Windows Server 2012, there are a few registry entries, located at HKLM\System\CurrentControlSet\Enum\VMBUS\{device id}\{instance id}\StorChannel, that allow the number of channels to be adjusted. They also align the virtual processors that handle the I/O completions to the virtual CPUs that are assigned by the application to be the I/O processors. The registry settings are configured on a per-adapter basis on the device's hardware key.

  • ChannelCount (DWORD): The total number of channels to use, with a maximum of 16. It defaults to a ceiling, which is the number of virtual processors/16.

  • ChannelMask (QWORD): The processor affinity for the channels. If it is not set or is set to 0, it defaults to the existing channel distribution algorithm that you use for normal storage or for networking channels. This ensures that your storage channels won't conflict with your network channels.
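A sketch of setting these values follows; the device and instance IDs are placeholders that must be looked up under the VMBUS enumeration key for the specific adapter:

```powershell
# Placeholders: substitute the actual device and instance IDs from the VMBUS enum key.
$key = "HKLM:\SYSTEM\CurrentControlSet\Enum\VMBUS\<device id>\<instance id>\StorChannel"
New-ItemProperty -Path $key -Name ChannelCount -PropertyType DWord -Value 4
New-ItemProperty -Path $key -Name ChannelMask  -PropertyType QWord -Value 0xF
```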

Offloaded Data Transfer integration

Crucial maintenance tasks for VHDs, such as merge, move, and compact, depend on copying large amounts of data. The current method of copying data requires data to be read in and written to different locations, which can be a time-consuming process. It also uses CPU and memory resources on the host, which could have been used to service virtual machines.

Storage area network (SAN) vendors are working to provide near-instantaneous copy operations of large amounts of data. This storage is designed to allow the system above the disks to specify the movement of a specific data set from one location to another. This hardware feature is known as an Offloaded Data Transfer.

Hyper-V in Windows Server 2012 and beyond supports Offloaded Data Transfer (ODX) operations so that these operations can be passed from the guest operating system to the host hardware. This ensures that the workload can use ODX-enabled storage as it would if it were running in a non-virtualized environment. The Hyper-V storage stack also issues ODX operations during maintenance operations for VHDs, such as merging disks and storage migration meta-operations where large amounts of data are moved.

Unmap integration

Virtual hard disk files exist as files on a storage volume, and they share available space with other files. Because the size of these files tends to be large, the space that they consume can grow quickly. Demand for more physical storage affects the IT hardware budget. It's important to optimize the use of physical storage as much as possible.

Before Windows Server 2012, when applications deleted content within a virtual hard disk, which effectively abandoned the content's storage space, the Windows storage stack in the guest operating system and the Hyper-V host had limitations that prevented this information from being communicated to the virtual hard disk and the physical storage device. This prevented the Hyper-V storage stack from optimizing the space usage by the VHD-based virtual disk files. It also prevented the underlying storage device from reclaiming the space that was previously occupied by the deleted data.

Starting from Windows Server 2012, Hyper-V supports unmap notifications, which allow VHDX files to be more efficient in representing the data within them. This results in smaller file size, and it allows the underlying physical storage device to reclaim unused space.

Only Hyper-V-specific SCSI, aware IDE, and Virtual Fibre Channel controllers allow the unmap command from the guest to reach the host virtual storage stack. On the virtual hard disks, only virtual disks formatted as VHDX support unmap commands from the guest.

For these reasons, we recommend that you use VHDX files attached to a SCSI controller when not using Virtual Fibre Channel disks.
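After the guest has trimmed deleted blocks, the host can compact a dynamic VHDX to return the space to the volume. The following is a sketch; the path is hypothetical, and the file must not be attached read/write:

```powershell
# Compact a dynamic VHDX; mount read-only so Optimize-VHD can scan the file system.
Mount-VHD -Path "E:\vms\testvhd\test.vhdx" -ReadOnly
Optimize-VHD -Path "E:\vms\testvhd\test.vhdx" -Mode Full
Dismount-VHD -Path "E:\vms\testvhd\test.vhdx"
```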

Additional References

  • Hyper-V terminology

  • Hyper-V architecture

  • Hyper-V server configuration

  • Hyper-V processor performance

  • Hyper-V memory performance

  • Hyper-V network I/O performance

  • Detecting bottlenecks in a virtualized environment

  • Linux Virtual Machines