The “Hey, Scripting Guys!” blog has been retired. There are many useful posts in this blog, so we keep the blog here for historical reference. However, some information might be very outdated and many of the links might not work anymore.
New PowerShell content is being posted to the PowerShell Community blog, where members of the community can create posts by submitting content in the GitHub repository.
Passing through devices to Hyper-V VMs by using discrete device assignment
Summary : Learn how to attach a device from your Hyper-V host to your VM by using a new feature of Windows Server 2016.
Today we have a guest blogger, Rudolf Vesely, who has blogged here on previous occasions. Here is a link to his previous posts if you would like to read them.
Here is what Rudolf says about himself.
I am an Azure Solutions Architect at Rackspace in London. I believe that public cloud is the future for most organizations and this is why I specialize in public clouds (Amazon Web Services and Azure) and hybrid cloud strategy. My unique role at Rackspace is to guide our customers in their cloud strategy and ensure that their strategy is aligned with their business.
I started my career as a developer and have continued programming and scripting as a hobby. This is the reason why I am a big proponent of DevOps. I believe PowerShell is a fantastic tool that facilitates the process of joining the Dev and Ops worlds.
Contact information:
- Twitter: @rudolfvesely
- Blog: Technology Stronghold
Introduction
Many new features in Windows Server 2016 (in any Technical Preview) will draw your attention, and it’s very easy to miss some of them. This is especially true for features that people don’t talk or blog about as much as they do about Windows PowerShell 5.0, Storage Spaces Direct, or Storage Replica, for example.
One feature that drew my attention is a new feature in Hyper-V called discrete device assignment. It can be very simply described as a device pass-through feature, the likes of which has existed on other hypervisors for many years.
Microsoft started with device pass-through on Hyper-V with disk pass-through (attaching a physical disk without using VHD / VHDX), but true pass-through came with single root I/O virtualization (SR-IOV) on Windows Server 2012. I recommend that you read John Howard’s excellent blog post series that describes SR-IOV and hardware and system requirements.
On Windows Server 2016, we finally get the ability to work directly with devices on the host and attach them to a child partition (guest VM) without being limited to only networking and storage. This feature was probably built for passing through graphics processing units (GPUs) in Azure for N-series VMs (GPU-enabled VMs), but we can use it for anything else. Please keep in mind that this is at your own risk. Microsoft will not support your setup, and you may also open serious security holes: any device pass-through on any hypervisor makes it possible to take down the host (for example, by triggering an error on the PCI Express bus) or, worse, to take control of your host.
The last thing you need to consider is whether you have hardware to test on. You need a modern client computer or server that has Intel Virtualization Technology for Directed I/O (VT-d) or AMD Virtualization (AMD-V). I use an industrial mini PC, which will be my new router and wireless access point (all virtualized), but you should be fine with a modern laptop. So, if you’re still reading, activate Intel VT-d in the firmware of your testing computer, install the latest Technical Preview of Windows Server 2016, and start with PnpDevice.
PnpDevice module
On Windows Server 2016, thanks to the PnpDevice module, we finally have the possibility to work with hardware without directly touching Windows Management Instrumentation (WMI). The PnpDevice module will be very important for Nano Server and non-GUI servers, so I recommend that you try it.
Get-Command -Module PnpDevice
CommandType     Name                     Version    Source
-----------     ----                     -------    ------
Function        Disable-PnpDevice        1.0.0.0    PnpDevice
Function        Enable-PnpDevice         1.0.0.0    PnpDevice
Function        Get-PnpDevice            1.0.0.0    PnpDevice
Function        Get-PnpDeviceProperty    1.0.0.0    PnpDevice
Let’s take a look at what is attached to your system by using:
Get-PnpDevice –PresentOnly | Sort-Object –Property Class
A lot of devices were returned. Now it’s a good idea to choose what should be passed through. I do not have multiple GPUs to pass through, but my goal is to virtualize my router and access point, and I have two wireless adapters in mini PCI-e slots. I found that the easiest way to find them is by vendor ID:
(Get-PnpDevice -PresentOnly).Where{ $_.InstanceId -like '*VEN_168C*' } # 168C == Qualcomm Atheros
If you installed drivers on your host for devices that you want to pass through (I did not), you can filter according to Class Net like this (example from my Surface Book):
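A sketch of such a filter (reconstructed here, not the original screenshot; it assumes the host driver classifies the adapters under the Net class):

# Requires the host driver to be installed so that the adapter is classified under Class "Net"
Get-PnpDevice -PresentOnly -Class Net | Sort-Object -Property FriendlyName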
I will assume that you have installed the VM and chosen a device to attach to it. I installed Windows 10 version 1511 because this is the easiest way to test that discrete device assignment (DDA) works. Later I will try replacing Windows with virtualized pfSense (a FreeBSD appliance) and DD-WRT (a Linux appliance). In FreeBSD especially, I might have problems with drivers, but I am sure that I will have no driver issues in Windows 10.
Let’s get the VM object and the device object of my Wi-Fi adapter:
$vmName = 'VMDDA1'
$instanceId = '*VEN_168C&DEV_002B*'

$vm = Get-VM -Name $vmName
$dev = (Get-PnpDevice -PresentOnly).Where{ $_.InstanceId -like $instanceId }
Our first action should be to disable the device. You can check Device Manager by typing devmgmt.msc in the PowerShell console to see that the correct device was disabled.
# Make sure that your console or integrated scripting environment (ISE) runs in elevated mode
Disable-PnpDevice -InstanceId $dev.InstanceId -Confirm:$false
Now we need to dismount the device from the host by using the Dismount-VMHostAssignableDevice cmdlet. To specify the location of the device, we need to get a specific property that is not present on the device object by using Get-PnpDeviceProperty.
$locationPath = (Get-PnpDeviceProperty -KeyName DEVPKEY_Device_LocationPaths -InstanceId $dev.InstanceId).Data[0]
Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force -Verbose
Now if you refresh the device object, you can see that something has changed: the device is described as “Dismounted”:
(Get-PnpDevice -PresentOnly).Where{ $_.InstanceId -like $instanceId }
You can check available assignable devices by using Get-VMHostAssignableDevice:
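For example, running it with no parameters lists every device that is currently dismounted from the host and ready to assign:

Get-VMHostAssignableDevice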
The last step is to attach an assignable device to the VM by using Add-VMAssignableDevice like this:
Add-VMAssignableDevice -VM $vm -LocationPath $locationPath -Verbose
You may get an error like this one:
This is because you haven’t modified the VM configuration after you created it. There are two main requirements for passing through. The first is that the VM has to have Automatic Stop Action set to Turn off the virtual machine . This will probably be the only VM on your host that has this configuration because we usually want Shut down or Save . The second requirement is memory configuration. Dynamic memory is allowed, but minimum and startup memory have to be equal. Let’s fix it, and finally attach our device.
Set-VM -VM $vm -DynamicMemory -MemoryMinimumBytes 1024MB -MemoryMaximumBytes 4096MB -MemoryStartupBytes 1024MB -AutomaticStopAction TurnOff
Add-VMAssignableDevice -VM $vm -LocationPath $locationPath -Verbose
If you spend some time playing with DDA, you can end up, for example, with multiple Wi-Fi adapters and one physical adapter like I did (all functional):
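As a quick check from inside a Windows guest (a suggested verification, not from the original post), the passed-through adapters show up with Get-NetAdapter:

# Run inside the VM: passed-through Wi-Fi adapters appear alongside the virtual NIC
Get-NetAdapter | Format-Table -Property Name, InterfaceDescription, Status -AutoSize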
Restore configuration
That was fun! Now it’s time to return devices to the host.
# Remove all devices from a single VM
Remove-VMAssignableDevice -VMName VMDDA0 -Verbose

# Return all to the host
Get-VMHostAssignableDevice | Mount-VMHostAssignableDevice -Verbose

# Enable it again (check devmgmt.msc)
(Get-PnpDevice -PresentOnly).Where{ $_.InstanceId -match 'VEN_168C&DEV_002B' } | Enable-PnpDevice -Confirm:$false -Verbose
Finally, here is the whole procedure as a single script:

$vmName = 'VMDDA0'
$instanceId = '*VEN_168C&DEV_002B*'

$ErrorActionPreference = 'Stop'

$vm = Get-VM -Name $vmName
$dev = (Get-PnpDevice -PresentOnly).Where{ $_.InstanceId -like $instanceId }

if (@($dev).Count -eq 1)
{
    Disable-PnpDevice -InstanceId $dev.InstanceId -Confirm:$false

    $locationPath = (Get-PnpDeviceProperty -KeyName DEVPKEY_Device_LocationPaths -InstanceId $dev.InstanceId).Data[0]
    Dismount-VMHostAssignableDevice -LocationPath $locationPath -Force -Verbose

    Set-VM -VM $vm -DynamicMemory -MemoryMinimumBytes 1024MB -MemoryMaximumBytes 4096MB -MemoryStartupBytes 1024MB -AutomaticStopAction TurnOff

    # If you want to play with GPUs:
    # Set-VM -VM $vm -StaticMemory -MemoryStartupBytes 4096MB -AutomaticStopAction TurnOff
    # Set-VM -VM $vm -GuestControlledCacheTypes $true -LowMemoryMappedIoSpace 2048MB -HighMemoryMappedIoSpace 4096MB -Verbose

    Add-VMAssignableDevice -VM $vm -LocationPath $locationPath -Verbose
}
else
{
    $dev | Sort-Object -Property Class | Format-Table -AutoSize
    Write-Error -Message ('Number of devices: {0}' -f @($dev).Count)
}
Thank you for reading, and see you in the next article.
Thank you, Rudolf, for sharing your time and knowledge.
I invite you to follow me on Twitter and Facebook. If you have any questions, send email to me at [email protected], or post your questions on the Official Scripting Guys Forum. Also check out my Microsoft Operations Management Suite Blog. See you tomorrow. Until then, peace.
Ed Wilson , Microsoft Scripting Guy
The "Scripting Guys" is a historical title passed from scripter to scripter. The current revision has morphed into our good friend Doctor Scripto who has been with us since the very beginning.
Comments
I had some problems and it took me hours to find a solution, so I want to share it. Dismounting the device from the host worked once, but I forgot to remove my VM from a cluster… so I had to undo my actions and start again, and I ended up with “The required virtualization driver (pcip.sys) failed to load.” Anyway, here is the solution: you have to delete the device from the registry yourself.
1. Run “psexec -s -i regedit.exe”.
2. You will find your device listed as “Dismounted” under “HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Enum\PCIP”, for example “PCI Express Graphics Processing Unit – Dismounted”.
3. Delete the whole key, including subkeys, under PCIP.
4. Reboot, and you can dismount it again from the host 🙂
In the mount sub-section, $locationPath variable is missing $ sign.
Plan for deploying devices by using Discrete Device Assignment
Applies to: Windows Server 2022, Microsoft Hyper-V Server 2019, Windows Server 2019, Microsoft Hyper-V Server 2016, Windows Server 2016
Discrete Device Assignment allows physical Peripheral Component Interconnect Express (PCIe) hardware to be directly accessible from within a virtual machine (VM). This article discusses the type of devices that can be used, host system requirements, limitations imposed on the VMs, and security implications.
For Discrete Device Assignment, Microsoft supports two device classes: Graphics Adapters and NVMe Storage devices. Other devices are likely to work, and hardware vendors are able to offer statements of support for those devices. For other devices, contact specific hardware vendors for support.
To learn about other methods of GPU virtualization, see Plan for GPU acceleration in Windows Server . If you're ready to try Discrete Device Assignment, you can go to Deploy graphics devices using Discrete Device Assignment or Deploy NVMe Storage Devices using Discrete Device Assignment .
Supported VMs and guest operating systems
Discrete Device Assignment is supported for Generation 1 or 2 VMs. The guests supported include:
- Windows 10 or later
- Windows Server 2016 or later
- Windows Server 2012 R2 with the Update to add Discrete Device Assignment support for Azure .
For more information, see Supported Linux and FreeBSD virtual machines for Hyper-V on Windows Server and Windows .
System requirements
Your system must meet the Hardware Requirements for Windows Server and System Requirements for Hyper-V on Windows Server . Discrete Device Assignment also requires server class hardware that's capable of granting the operating system control over configuring the PCIe fabric (Native PCI Express Control). In addition, the PCIe Root Complex has to support Access Control Services (ACS), which enables Hyper-V to force all PCIe traffic through the Input-Output Memory Management Unit.
These capabilities usually aren't exposed directly in the BIOS of the server and are often hidden behind other settings. For example, the same capabilities are required for SR-IOV support, and in the BIOS you might need to set "Enable SR-IOV." Reach out to your system vendor if you're unable to identify the correct setting in your BIOS.
To help ensure the hardware is capable of Discrete Device Assignment, you can run the machine profile script on a Hyper-V enabled host. The script tests if your server is correctly set up and what devices are capable of Discrete Device Assignment.
Device requirements
Not every PCIe device can be used with Discrete Device Assignment. Older devices that use legacy (INTx) PCI Interrupts aren't supported. For more information, see Discrete Device Assignment - Machines and devices . You can also run the Machine Profile Script to display which devices are capable of being used for Discrete Device Assignment.
Device manufacturers can reach out to their Microsoft representative for more details.
Device driver
Discrete Device Assignment passes the entire PCIe device into the Guest VM. A host driver isn't required to be installed prior to the device being mounted within the VM. The only requirement on the host is that the device's PCIe Location Path can be determined. The device's driver can be installed to help in identifying the device. A GPU without its device driver installed on the host might appear as a Microsoft Basic Render Device. If the device driver is installed, its manufacturer and model is likely to be displayed.
When the device is mounted inside the guest, the Manufacturer's device driver can be installed like normal inside the guest VM.
VM limitations
Due to the nature of how Discrete Device Assignment is implemented, some features of a VM are restricted while a device is attached. The following features aren't available:
- VM Save/Restore
- Live migration of a VM
- The use of dynamic memory
- Adding the VM to a high availability (HA) cluster
Discrete Device Assignment passes the entire device into the VM. This pass means all capabilities of that device are accessible from the guest operating system. Some capabilities, like firmware updating, might adversely affect the stability of the system. Numerous warnings are presented to the admin when dismounting the device from the host. You should only use Discrete Device Assignment where the tenants of the VMs are trusted.
If the admin desires to use a device with an untrusted tenant, device manufacturers can create a Device Mitigation driver that can be installed on the host. Contact the device manufacturer for details on whether they provide a Device Mitigation Driver.
If you would like to bypass the security checks for a device that doesn't have a Device Mitigation Driver, you have to pass the -Force parameter to the Dismount-VMHostAssignableDevice cmdlet. Be aware that doing so changes the security profile of that system; you should do this only during prototyping or in trusted environments.
PCIe location path
The PCIe location path is required to dismount and mount the device from the Host. An example location path is PCIROOT(20)#PCI(0300)#PCI(0000)#PCI(0800)#PCI(0000) . The Machine Profile Script also returns the location path of the PCIe device.
Get the location path by using Device Manager
:::image type="content" source="../deploy/media/dda-devicemanager.png" alt-text="Screenshot of the device manager, showing the selections for finding a device path." border="false":::
- Open Device Manager and locate the device.
- Right-click the device and select Properties .
- On the Details tab, expand the Property drop-down menu and select Location Paths .
- Right-click the entry that begins with PCIROOT and select Copy to get the location path for the device.
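If you prefer PowerShell to Device Manager, the same location path can be read with Get-PnpDeviceProperty, as in the blog post earlier on this page (the $instanceId filter below is a placeholder you supply for your device):

$dev = (Get-PnpDevice -PresentOnly).Where{ $_.InstanceId -like $instanceId }
(Get-PnpDeviceProperty -KeyName DEVPKEY_Device_LocationPaths -InstanceId $dev.InstanceId).Data[0]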
Some devices, especially GPUs, require more MMIO space to be allocated to the VM for the memory of that device to be accessible. By default, each VM starts off with 128 MB of low MMIO space and 512 MB of high MMIO space allocated to it. However, a device might require more MMIO space, or multiple devices might be passed through such that the combined requirements exceed these values. Changing MMIO Space is straightforward and can be performed in PowerShell by using the following commands:
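MMIO space is set per VM with Set-VM; a sketch using the same parameters that appear in the examples later on this page ($vmName is a placeholder, and the sizes are examples):

# Example sizes only; use the values reported by the machine profile script for your devices
Set-VM -VMName $vmName -LowMemoryMappedIoSpace 3Gb
Set-VM -VMName $vmName -HighMemoryMappedIoSpace 33280Mb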
The easiest way to determine how much MMIO space to allocate is to use the Machine Profile Script . To download and run the Machine Profile Script, run the following commands in a PowerShell console:
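A sketch of downloading and running it (the script URL is a placeholder; SurveyDDA.ps1 is published in Microsoft's Virtualization-Documentation samples repository):

# Replace the placeholder with the raw URL of SurveyDDA.ps1 from the
# MicrosoftDocs/Virtualization-Documentation samples repository
$scriptUrl = '<raw URL of SurveyDDA.ps1>'
Invoke-WebRequest -Uri $scriptUrl -OutFile .\SurveyDDA.ps1
.\SurveyDDA.ps1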
For devices that can be assigned, the script displays the MMIO requirements of a given device. The following script output is an example:
The low MMIO space is used only by 32-bit operating systems and devices that use 32-bit addresses. In most circumstances, setting the high MMIO space of a VM is enough since 32-bit configurations aren't common.
When you assign MMIO space to a VM, be sure to specify sufficient MMIO space. The MMIO space should be the sum of the requested MMIO space for all desired assigned devices plus a buffer for other virtual devices that require a few MB of MMIO space. Use the default MMIO values previously described as the buffer for low and high MMIO (128 MB and 512 MB, respectively).
Consider the previous example. If you assign a single K520 GPU, set the MMIO space of the VM to the value outputted by the machine profile script plus a buffer: 176 MB + 512 MB. If you assign three K520 GPUs, set the MMIO space to three times the base amount of 176 MB plus a buffer, or 528 MB + 512 MB.
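Expressed with Set-VM, the three-GPU case from this example works out to ($vmName is a placeholder):

# 3 x 176 MB for the devices + 512 MB default buffer = 1040 MB of high MMIO space
Set-VM -VMName $vmName -HighMemoryMappedIoSpace 1040MB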
For a more in-depth look at MMIO space, see Discrete Device Assignment - GPUs on the Tech Community blog.
Machine profile script
To identify if the server is configured correctly, and what devices can be passed through by using Discrete Device Assignment, run the SurveyDDA.ps1 PowerShell script.
Before you use the script, ensure you have the Hyper-V role installed and you run the script from a PowerShell command window that has Administrator privileges.
If the system is incorrectly configured to support Discrete Device Assignment, the tool displays an error message with details about the issue. If the system is correctly configured, the tool enumerates all devices located on the PCIe Bus.
For each device it finds, the tool displays whether it's able to be used with Discrete Device Assignment. If a device is identified as being incompatible with Discrete Device Assignment, the script provides a reason. When a device is successfully identified as being compatible, the device's Location Path is displayed. Additionally, if that device requires MMIO space , it's displayed as well.
:::image type="content" source="./images/hyper-v-surveydda-ps1.png" alt-text="Screenshot of the requirements displayed in SurveyDDA.ps1.":::
- PC & TABLETS
- SERVERS & STORAGE
- SMART DEVICES
- SERVICES & SOLUTIONS
Lenovo Press
- Portfolio Guide
- 3D Tour Catalog
- Seller Training Courses
- HX Series for Nutanix
- MX Series for Microsoft
- SX for Microsoft
- VX Series for VMware
- WenTian (联想问天)
- Mission Critical
- Hyperconverged
- Edge Servers
- Multi-Node Servers
- Supercomputing
- Expansion Units
- Network Modules
- Storage Modules
- Network Adapters
- Storage Adapters
- Coprocessors
- GPU adapters
- RAID Adapters
- Ethernet Adapters
- InfiniBand / OPA Adapters
- Host Bus Adapters
- PCIe Flash Adapters
- External Storage
- Backup Units
- Top-of-Rack Switches
- Power Distribution Units
- Rack Cabinets
- KVM Switches & Consoles
- SAN Storage
- Software-Defined Storage
- Direct-Attached Storage
- Tape Drives
- Tape Autoloaders and Libraries
- 1 Gb Ethernet
- 10 Gb Ethernet
- 25 Gb Ethernet
- 40 Gb Ethernet
- 100 Gb Ethernet
- Campus Networking
Solutions & Software
- Artificial Intelligence
- Hortonworks
- Microsoft Data Warehouse Fast Track
- Microsoft Applications
- SAP Business Suite
- Citrix Virtual Apps
- VMware Horizon
- Cloud Storage
- MSP Solutions
- Microsoft Hyper-V
- OpenStack Cloud
- VMware vCloud
- VMware vSphere
- Microsoft SQL Server
- SAP NetWeaver BWA
- Edge and IoT
- High Performance Computing
- Security Key Lifecycle Manager
- Microsoft Windows
- Red Hat Enterprise Linux
- SUSE Linux Enterprise Server
- Lenovo XClarity
- BladeCenter Open Fabric Manager
- IBM Systems Director
- Flex System Manager
- System Utilities
- Network Management
- About Lenovo Press
- Newsletter Signup
Introduction to Windows Server 2016 Hyper-V Discrete Device Assignment
Planning / Implementation
- David Tanaka
This paper describes the steps on how to enable Discrete Device Assignment (also known as PCI Passthrough) available as part of the Hyper-V role in Microsoft Windows Server 2016.
Discrete Device Assignment is a performance enhancement that allows a specific physical PCI device to be directly controlled by a guest VM running on the Hyper-V instance. Specifically, this new feature aims to deliver a certain type of PCI device class, such as Graphics Processing Units (GPU) or Non-Volatile Memory express (NVMe) devices, to a Windows Server 2016 virtual machine, where the VM will be given full and direct access to the physical PCIe device.
In this paper we describe how to enable and use this feature on Lenovo servers using Windows Server 2016 Technical Preview 4 (TP4). We provide the step-by-step instructions on how to make an NVIDIA GPU available to a Hyper-V virtual machine.
This paper is aimed at IT specialists and IT managers wanting to understand more about the new features of Windows Server 2016 and is part of a series of technical papers on the new operating system.
Table of Contents
- Introduction
- Installing the GPU and creating a VM
- Enabling the device inside the VM
- Restoring the device to the host system
- Summary
Change History
Changes in the April 18 update:
- Corrections to step 3 and step 4 in "Restoring the device to the host system" on page 12.
Multiple GPU Assignments to a Single Hyper-V VM with DDA
I recently configured Discrete Device Assignment (DDA) on my Windows Server with Hyper-V and successfully assigned a GPU to a virtual machine using the steps outlined in this reference manual .
- Windows Server with Hyper-V
- Multiple GPUs available (Example: NVIDIA RTX A400)
What I’ve Done:
- Obtain the location path of the GPU that I want to assign to a VM: "PCIROOT(36)#PCI(0000)#PCI(0000)"
- Dismount the device: Dismount-VMHostAssignableDevice -LocationPath "PCIROOT(36)#PCI(0000)#PCI(0000)" -Force
- Assign the device to the VM: Add-VMAssignableDevice -LocationPath "PCIROOT(36)#PCI(0000)#PCI(0000)" -VMName Debian12_Dev
- Power on the VM, and the guest OS (Debian) is able to use the GPU.
Now, I want to add multiple GPUs to a single VM using Hyper-V DDA. I tried the following:
- GPU1 device location path: PCIROOT(80)#PCI(0200)#PCI(0000)#PCI(1000)#PCI(0000)
- GPU2 device location path: PCIROOT(36)#PCI(0000)#PCI(0000)
- Dismount-VMHostAssignableDevice -LocationPath "PCIROOT(80)#PCI(0200)#PCI(0000)#PCI(1000)#PCI(0000)" -Force
- Dismount-VMHostAssignableDevice -LocationPath "PCIROOT(36)#PCI(0000)#PCI(0000)" -Force
- Add-VMAssignableDevice -LocationPath "PCIROOT(36)#PCI(0000)#PCI(0000)" -VMName Debian12_Dev
- Add-VMAssignableDevice -LocationPath "PCIROOT(80)#PCI(0200)#PCI(0000)#PCI(1000)#PCI(0000)" -VMName Debian12_Dev
- Power on the VM, but the guest OS (Debian) identifies only one GPU.
Question: Has anyone tried adding multiple GPUs to a single VM using Hyper-V DDA? If so, what steps did you follow, and did you encounter any challenges?
I’m seeking to optimize GPU resources for specific workloads within a single VM and would appreciate any insights, experiences, or tips from the community.
Thanks in advance!
The issues described above were successfully resolved by configuring MMIO space, as outlined in the official Microsoft documentation.
GPUs, particularly, require additional MMIO space for the VM to access the memory of that device. While each VM starts with 128 MB of low MMIO space and 512 MB of high MMIO space by default, certain devices or multiple devices may require more space, potentially exceeding these values.
Subsequently, I reconfigured the VM following the instructions in the Microsoft Official Document: VM Preparation for Graphics Devices .
Enabled Write-Combining on the CPU using the cmdlet: Set-VM -GuestControlledCacheTypes $true -VMName VMName
Configured the 32-bit MMIO space with the cmdlet: Set-VM -LowMemoryMappedIoSpace 3Gb -VMName VMName
Configured greater than 32-bit MMIO space with the cmdlet: Set-VM -HighMemoryMappedIoSpace 33280Mb -VMName VMName
To dismount the GPU devices from the host:
- Located the device’s location path
- Copied the device’s location path
- Disabled the GPU in Device Manager
Dismounted the GPU devices from the host partition using the cmdlet: Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath
Assigned the GPU devices to the VM using the cmdlet: Add-VMAssignableDevice -LocationPath $locationPath -VMName VMName
The configuration of the VM for DDA has been successfully completed.
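Putting those steps together, here is a consolidated sketch of the sequence described above (the VM name, location paths, and MMIO sizes are the ones quoted in this thread; adjust them for your hardware):

$vmName = 'Debian12_Dev'
$locationPaths = @(
    'PCIROOT(80)#PCI(0200)#PCI(0000)#PCI(1000)#PCI(0000)',
    'PCIROOT(36)#PCI(0000)#PCI(0000)'
)

# Write-combining and MMIO sizing, as described in the Microsoft documentation
Set-VM -VMName $vmName -GuestControlledCacheTypes $true
Set-VM -VMName $vmName -LowMemoryMappedIoSpace 3Gb
Set-VM -VMName $vmName -HighMemoryMappedIoSpace 33280Mb

foreach ($locationPath in $locationPaths) {
    # Disable each GPU in Device Manager first, then dismount it from the host and assign it to the VM
    Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath
    Add-VMAssignableDevice -LocationPath $locationPath -VMName $vmName
}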
Both GPUs are now recognized in my Linux Hyper-V VM:
Is it possible to use Direct Device assignment on desktop Windows 11?
I would like to assign a custom-built PCIe device, plugged into a standard PC running Windows 11, to a Hyper-V VM guest running Ubuntu 22.04 LTS.
I am able to remove device from host and assign it to guest with these commands:
Dismount-VMHostAssignableDevice -force -LocationPath 'PCIROOT(0)#PCI(1B04)#PCI(0000)'
Add-VMAssignableDevice -LocationPath 'PCIROOT(0)#PCI(1B04)#PCI(0000)' -VMName 'Ubuntu 22.04 LTS - Xilinx'
But the VM doesn't start after doing this. It fails with this error:
Virtual pci Express Port (instance ID ....): Failed to Power on with Error 'A hypervisor feature is not available to the user'. (0xC035001e)
Searching various forums, I was not able to find a definitive answer as to whether DDA should work on Windows 11 or not. Can you please tell me the status of this very nice and valuable feature on Windows 11? If it is not supported yet, is there any plan to add it?
Thanks for information
Best regards
Thanks EliseM_456,
question was reposted here: https://learn.microsoft.com/en-us/answers/questions/1347364/is-it-possible-to-use-direct-device-assignment-on
Discrete Device Assignment GPU Issue
I have a Discrete Device Assignment issue while using Hyper-V on Windows Server 2019 (and a 2019 VM). I followed the documentation on this page, including running the SurveyDDA script which reports no errors for the device: https://learn.microsoft.com/en-us/windows-server/virtualization/hyper-v/plan/plan-for-deploying-devices-using-discrete-device-assignment
I was successfully able to assign the GPU (Nvidia A40) to the VM and install the Nvidia Server Driver on it, though the device reports a Code 12 error (Insufficient resources) within Device Manager. I initially used the recommended settings from the DDA script, and have already tried increasing the MMIO space to the maximum available (3Gb for low, and 33280Mb for high), but that hasn’t changed the behavior.
Any help is appreciated!
Hello there,
Make sure your VMs are running on the latest configuration version.
Another thing you need to watch out for is that when you use dynamic memory the startup memory and the minimum memory values have to match.
Make sure that the automatic stop action for the virtual machine is set to "turn off the virtual machine" and not to the default of "save the virtual machine state". You cannot use DDA unless you do so.
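As a minimal sketch of those two settings in PowerShell ($vmName and the memory sizes are placeholders; the same fix appears in the blog post near the top of this page):

# Startup and minimum memory must match when dynamic memory is used,
# and the automatic stop action must be TurnOff
Set-VM -VMName $vmName -DynamicMemory -MemoryStartupBytes 1024MB -MemoryMinimumBytes 1024MB -MemoryMaximumBytes 4096MB -AutomaticStopAction TurnOff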