Microsoft Discrete Device Assignment

The “Hey, Scripting Guys!” blog has been retired. There are many useful posts in this blog, so we keep the blog here for historical reference. However, some information might be very outdated and many of the links might not work anymore.

New PowerShell content is being posted to the PowerShell Community blog, where members of the community can create posts by submitting content in the GitHub repository.

Passing through devices to Hyper-V VMs by using discrete device assignment

Summary: Learn how to attach a device from your Hyper-V host to your VM by using a new feature of Windows Server 2016.

Today we have a guest blogger, Rudolf Vesely, who has blogged here on previous occasions. Here is a link to his previous posts if you would like to read them.

Here is what Rudolf says about himself.

I am an Azure Solutions Architect at Rackspace in London. I believe that public cloud is the future for most organizations and this is why I specialize in public clouds (Amazon Web Services and Azure) and hybrid cloud strategy. My unique role at Rackspace is to guide our customers in their cloud strategy and ensure that their strategy is aligned with their business.

I started my career as a developer and have continued programming and scripting as a hobby. This is the reason why I am a big proponent of DevOps. I believe PowerShell is a fantastic tool that facilitates the process of joining the Dev and Ops worlds.

Contact information:

  • Twitter: @rudolfvesely
  • Blog: Technology Stronghold

Introduction

Many new features in Windows Server 2016 (in any Technical Preview) will draw your attention, and it’s very easy to miss some of them. This is especially true for features that people don’t speak or blog about as much as they do about Windows PowerShell 5.0, Storage Spaces Direct, or Storage Replica, for example.

One feature that drew my attention is a new feature in Hyper-V called discrete device assignment. It can be very simply described as a device pass-through feature, the likes of which have existed on other hypervisors for many years.

Microsoft started with device pass-through on Hyper-V with disk pass-through (attaching a physical disk without using VHD/VHDX), but true pass-through came with single root I/O virtualization (SR-IOV) on Windows Server 2012. I recommend that you read John Howard’s excellent blog post series that describes SR-IOV and the hardware and system requirements.

On Windows Server 2016, we finally get the ability to work directly with devices on the host and attach them to a child partition (guest VM) without being limited to only networking and storage. This feature was probably built for passing through graphics processing units (GPUs) in Azure for N-series VMs (GPU-enabled VMs), but we can use it for anything else. Please keep in mind that this is at your own risk. Microsoft will not support your setups, and you may also run into serious security issues. Any device pass-through on any hypervisor opens the possibility of taking down the host (for example, by triggering an error on the PCI Express bus) or, worse, of a guest taking control of your host.

The last thing you need to consider is whether you have hardware to test this on. You need a modern client computer or server that has Intel Virtualization Technology for Directed I/O (VT-d) or its AMD I/O virtualization equivalent (AMD-Vi). I use an industrial mini PC, which will be my new router and wireless access point (all virtualized), but you should be fine with a modern laptop. So, if you’re still reading, activate Intel VT-d in the firmware of your testing computer, install the latest Technical Preview of Windows Server 2016, and start with PnpDevice.
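If you want a quick, read-only sanity check before diving into the firmware setup, something like the sketch below works. Note that VT-d/AMD-Vi (DMA remapping) itself is usually only visible in the firmware setup screens or in msinfo32, and the reported values can be misleading once a hypervisor is already running.

# Check that virtualization extensions are enabled in firmware (read-only)
(Get-CimInstance -ClassName Win32_Processor).VirtualizationFirmwareEnabled

# systeminfo prints a Hyper-V / virtualization section at the end of its output
systeminfo.exe | Select-String -Pattern 'Virtualization'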

PnpDevice module

On Windows Server 2016, thanks to the PnpDevice module, we finally have the ability to work with hardware without directly touching Windows Management Instrumentation (WMI). The PnpDevice module will be very important for Nano Server and non-GUI servers, so I recommend that you try it.

Get-Command -Module PnpDevice

CommandType Name                  Version Source
----------- ----                  ------- ------
Function    Disable-PnpDevice     1.0.0.0 PnpDevice
Function    Enable-PnpDevice      1.0.0.0 PnpDevice
Function    Get-PnpDevice         1.0.0.0 PnpDevice
Function    Get-PnpDeviceProperty 1.0.0.0 PnpDevice

Let’s take a look at what is attached to your system by using:

Get-PnpDevice -PresentOnly | Sort-Object -Property Class

Screenshot of devices attached to the system.

A lot of devices were returned. Now it’s a good idea to choose what should be passed through. I do not have multiple GPUs to pass, but my goal is to virtualize my router and access point, and I have two wireless adapters in mini PCIe slots. I found that the easiest way to find them is by vendor ID:

(Get-PnpDevice -PresentOnly).Where{ $_.InstanceId -like '*VEN_168C*' } # 168C == Qualcomm Atheros

Screenshot of result by using the vendor ID.

If you installed drivers on your host for devices that you want to pass through (I did not), you can filter according to Class Net like this (example from my Surface Book):

Screenshot of filtered drivers.
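In code, that filter is roughly the following sketch ('Net' is the device class Windows uses for network adapters):

Get-PnpDevice -PresentOnly -Class Net |
    Sort-Object -Property FriendlyName |
    Format-Table -Property FriendlyName, Status, InstanceId -AutoSize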

I will assume that you have installed the VM and chosen a device to attach to it. I installed Windows 10 version 1511 because this is the easiest way to test that discrete device assignment (DDA) works. Later I will try replacing Windows with a virtualized pfSense (FreeBSD appliance) and DD-WRT (Linux appliance). Especially with FreeBSD, I might have problems with drivers, but I am sure that I will have no driver issues in Windows 10.

Let’s get the VM object and the device object of my Wi-Fi adapter:

$vmName = 'VMDDA1'
$instanceId = '*VEN_168C&DEV_002B*'

$vm = Get-VM -Name $vmName
$dev = (Get-PnpDevice -PresentOnly).Where{ $_.InstanceId -like $instanceId }

Our first action should be to disable the device. You can check Device Manager by typing devmgmt.msc in the PowerShell console to see that the correct device was disabled.

# Make sure that your console or integrated scripting environment (ISE) runs in elevated mode
Disable-PnpDevice -InstanceId $dev.InstanceId -Confirm:$false

Screenshot that shows disabled device in Device Manager.

Now we need to dismount the device from the host by using the Dismount-VmHostAssignableDevice cmdlet. To specify the location of the device, we need to get a specific property that is not present on the device object, by using Get-PnpDeviceProperty.

$locationPath = (Get-PnpDeviceProperty -KeyName DEVPKEY_Device_LocationPaths -InstanceId $dev.InstanceId).Data[0]
Dismount-VmHostAssignableDevice -LocationPath $locationPath -Force -Verbose

Now if you refresh the device object, you can see that something changed: the device is now described as “Dismounted”:

(Get-PnpDevice -PresentOnly).Where{ $_.InstanceId -like $instanceId }

You can check available assignable devices by using Get-VMHostAssignableDevice:

Screenshot that shows available assignable devices.

The last step is to attach an assignable device to the VM by using Add-VMAssignableDevice like this:

Add-VMAssignableDevice -VM $vm -LocationPath $locationPath -Verbose

You may get an error like this one:

Screenshot of errors.

This is because you haven’t modified the VM configuration after creating it. There are two main requirements for passing through a device. The first is that the VM has to have its Automatic Stop Action set to Turn off the virtual machine. This will probably be the only VM on your host with this configuration, because we usually want Shut down or Save. The second requirement concerns memory configuration: dynamic memory is allowed, but minimum and startup memory have to be equal. Let’s fix both, and finally attach our device.

Set-VM -VM $vm -DynamicMemory -MemoryMinimumBytes 1024MB -MemoryMaximumBytes 4096MB -MemoryStartupBytes 1024MB -AutomaticStopAction TurnOff
Add-VMAssignableDevice -VM $vm -LocationPath $locationPath -Verbose

Screenshot that shows AutomaticStopAction is set to TurnOff and memory is configured.

If you spend some time playing with DDA, you can end up, for example, with multiple Wi-Fi adapters and one physical network adapter in the VM like I did (all functional):

Screenshot that shows multiple Wi-Fi adapters and one physical adapter.

Restore configuration

That was fun! Now it’s time to return devices to the host.

# Remove all devices from a single VM
Remove-VMAssignableDevice -VMName VMDDA0 -Verbose

# Return all to host
Get-VMHostAssignableDevice | Mount-VmHostAssignableDevice -Verbose

# Enable it in devmgmt.msc
(Get-PnpDevice -PresentOnly).Where{ $_.InstanceId -match 'VEN_168C&DEV_002B' } | Enable-PnpDevice -Confirm:$false -Verbose

$vmName = 'VMDDA0'
$instanceId = '*VEN_168C&DEV_002B*'
$ErrorActionPreference = 'Stop'

$vm = Get-VM -Name $vmName
$dev = (Get-PnpDevice -PresentOnly).Where{ $_.InstanceId -like $instanceId }

if (@($dev).Count -eq 1)
{
    Disable-PnpDevice -InstanceId $dev.InstanceId -Confirm:$false
    $locationPath = (Get-PnpDeviceProperty -KeyName DEVPKEY_Device_LocationPaths -InstanceId $dev.InstanceId).Data[0]
    Dismount-VmHostAssignableDevice -LocationPath $locationPath -Force -Verbose
    Set-VM -VM $vm -DynamicMemory -MemoryMinimumBytes 1024MB -MemoryMaximumBytes 4096MB -MemoryStartupBytes 1024MB -AutomaticStopAction TurnOff

    # If you want to play with GPUs:
    # Set-VM -VM $vm -StaticMemory -MemoryStartupBytes 4096MB -AutomaticStopAction TurnOff
    # Set-VM -VM $vm -GuestControlledCacheTypes $true -LowMemoryMappedIoSpace 2048MB -HighMemoryMappedIoSpace 4096MB -Verbose

    Add-VMAssignableDevice -VM $vm -LocationPath $locationPath -Verbose
}
else
{
    $dev | Sort-Object -Property Class | Format-Table -AutoSize
    Write-Error -Message ('Number of devices: {0}' -f @($dev).Count)
}

Thank you for reading, and see you in the next article.

Thank you, Rudolf, for sharing your time and knowledge.

I invite you to follow me on Twitter and Facebook . If you have any questions, send email to me at [email protected] , or post your questions on the Official Scripting Guys Forum . Also check out my Microsoft Operations Management Suite Blog . See you tomorrow. Until then, peace.

Ed Wilson , Microsoft Scripting Guy

The "Scripting Guys" is a historical title passed from scripter to scripter. The current revision has morphed into our good friend Doctor Scripto who has been with us since the very beginning.


I had some problems and it took me hours to find a solution, so I want to share it. Dismounting the device from the host worked once, but I forgot to remove my VM from a cluster… So I had to undo my actions and start again, and ended with “The required virtualization driver (pcip.sys) failed to load.” Anyway, here is the solution: you have to delete the device from the registry yourself. 1. Run “psexec -s -i regedit.exe”. 2. You will find your device listed as “Dismounted” under “HKEY_LOCAL_MACHINE\SYSTEM\ControlSet001\Enum\PCIP”, e.g. “PCI Express Graphics Processing Unit – Dismounted”. 3. Delete the whole key with subkeys under PCIP. 4. Reboot, and you can unmount it again from the host 🙂
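For those who prefer to script that cleanup, a hedged PowerShell equivalent of steps 1-3 might look like this; the exact key name under PCIP is machine-specific, so list first and delete deliberately.

# Run as SYSTEM, e.g. started via: psexec -s -i powershell.exe
Get-ChildItem -Path 'HKLM:\SYSTEM\ControlSet001\Enum\PCIP'   # find the key for your dismounted device
# Remove-Item -Path 'HKLM:\SYSTEM\ControlSet001\Enum\PCIP\<device key>' -Recurse   # <device key> is a placeholder - fill in your own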

In the mount sub-section, the $locationPath variable is missing its $ sign.

Working Hard In IT

My view on it from the trenches.


Setting up Discrete Device Assignment with a GPU

Introduction

Let’s take a look at setting up Discrete Device Assignment with a GPU. Windows Server 2016 introduces Discrete Device Assignment (DDA). This allows a PCI Express device that supports it to be passed directly through to a virtual machine.

The idea behind this is to gain extra performance. In our case we’ll use one of the four display adapters in our NVIDIA GRID K1 to assign to a VM via DDA. The three others can remain in use with RemoteFX. Perhaps we could even leverage DDA for GPUs that do not support RemoteFX to be used directly by a VM; we’ll see.

As we directly assign the hardware to the VM, we need to install the drivers for that hardware inside that VM, just like you would with physical hardware.

I refer you to the starting blog of a series on DDA in Windows 2016:

  • Discrete Device Assignment — Guests and Linux

Here you can get a wealth of extra information. My experimentation with this feature relied heavily on these blogs and on the GitHub script MSFT provides to query a host for DDA-capable devices. That was very educational in regard to finding out the PowerShell we needed to get DDA to work! Please see A 1st look at Discrete Device Assignment in Hyper-V to see the output of this script and how we identified that our NVIDIA GRID K1 card was a DDA-capable candidate.

Requirements

There are some conditions the host system needs to meet to even be able to use DDA. The host needs to support Access Control Services (ACS), which enables secure pass-through of PCI Express devices. The host also needs to support SLAT and Intel VT-d2 or an AMD I/O MMU. This is dependent on UEFI, which is not a big issue; all my W2K12R2 cluster nodes & member servers run UEFI already anyway. All in all, these requirements are covered by modern hardware. The hardware you buy today for Windows Server 2012 R2 meets those requirements when you buy decent enterprise-grade hardware such as the DELL PowerEdge R730 series. That’s the model I had available to test with. Nothing about these requirements is shocking or unusual.

A PCI Express device that is used for DDA cannot be used by the host in any way. You’ll see we actually dismount it from the host. It also cannot be shared amongst VMs; it’s used exclusively by the VM it’s assigned to. As you can imagine, this is not a scenario for live migration and VM mobility. This is a major difference between DDA and SR-IOV or virtual fibre channel, where live migration is supported in very creative, different ways. Now I’m not saying Microsoft will never be able to combine DDA with live migration, but to the best of my knowledge it’s not available today.

The host requirements are also listed here: https://technet.microsoft.com/en-us/library/mt608570.aspx

  • The processor must have either Intel’s Extended Page Table (EPT) or AMD’s Nested Page Table (NPT).
  • The chipset must have interrupt remapping, DMA remapping, and access control services (ACS) on PCI Express root ports.
  • The firmware tables must expose the I/O MMU to the Windows hypervisor. Note that this feature might be turned off in the UEFI or BIOS. For instructions, see the hardware documentation or contact your hardware manufacturer.

You get this technology both on premises with Windows Server 2016 and with virtual machines running Windows Server 2016, Windows 10 (1511 or higher), and Linux distros that support it. It’s also an offering on high-end Azure VMs (IaaS). It supports both generation 1 and generation 2 virtual machines, albeit generation 2 is x64-only, which might be important for certain client VMs. We dumped 32-bit operating systems over a decade ago, so to me this is a non-issue.

For this article I used a DELL PowerEdge R730 and an NVIDIA GRID K1 GPU, with Windows Server 2016 TPv4 with the CU of March 2016 and Windows 10 Insider Build 14295.

Microsoft supports 2 device types at the moment:

  • GPUs and coprocessors
  • NVMe (Non-Volatile Memory express) SSD controllers

Other devices might work but you’re dependent on the hardware vendor for support. Maybe that’s OK for you, maybe it’s not.

Below I describe the steps to get DDA working. There’s also a rough video out on my Vimeo channel: Discrete Device Assignment with a GPU in Windows 2016 TPv4 .

Preparing a Hyper-V host with a GPU for Discrete Device Assignment

First of all, you need a Windows Server 2016 host running Hyper-V. It needs to meet the hardware specifications discussed above, boot from UEFI with VT-d enabled, and you need a PCI Express GPU to work with that can be used for discrete device assignment.

It pays to get the most recent GPU driver installed; for our NVIDIA GRID K1 that was 362.13 at the time of writing.


On the host, when your installation of the GPU and drivers is OK, you’ll see 4 NVIDIA GRID K1 display adapters in Device Manager.


We create a generation 2 VM for this demo. In case you reuse a VM that already has a RemoteFX adapter in use, remove it. You want a VM that only has a Microsoft Hyper-V Video Adapter.

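A minimal sketch of that cleanup, assuming the VM name RFX-WIN10ENT used later in this article:

#Remove a leftover RemoteFX 3D adapter so only the Microsoft Hyper-V Video Adapter remains
Remove-VMRemoteFx3dVideoAdapter -VMName RFX-WIN10ENT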

In Hyper-V Manager, I also exclude the NVIDIA GRID K1 GPU I’ll configure for DDA from being used by RemoteFX. In this showcase, we’ll use the first one.


OK, we’re all set to start with our DDA setup for an NVIDIA GRID K1 GPU!

Assign the PCI Express GPU to the VM

Prepping the GPU and host

As stated above, to have a GPU assigned to a VM we must make sure that the host no longer has use of it. We do this by dismounting the display adapter, which renders it unavailable to the host. Once that is done, we can assign that device to a VM.

Let’s walk through this. Tip: run PoSh or the ISE as an administrator.

We run Get-VMHostAssignableDevice. This returns nothing, as no devices have yet been made available for DDA.
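In code that is simply:

#Lists devices dismounted from the host and ready for DDA; empty at this point
Get-VMHostAssignableDevice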

I now want to find my display adapters

#Grab all the GPUs in the Hyper-V Host
$MyDisplays = Get-PnpDevice | Where-Object {$_.Class -eq "Display"}
$MyDisplays | ft -AutoSize

This returns a list of all the display adapters in the host. Let’s limit it to the NVIDIA ones alone.

#We can get all NVIDIA cards in the host by querying for the nvlddmkm
#service which is a NVIDIA kernel mode driver
$MyNVIDIA = Get-PnpDevice | Where-Object {$_.Class -eq "Display"} |
    Where-Object {$_.Service -eq "nvlddmkm"}
$MyNVIDIA | ft -AutoSize


If you have multiple types of NVIDIA cards, you might also want to filter on the friendly name. In our case, with only one GPU model, this doesn’t filter anything. What we really want to do is exclude any display adapter that has already been dismounted. For that we use the -PresentOnly parameter.

#We actually only need the NVIDIA GRID K1 cards, so let's filter some more;
#there might be other NVIDIA GPUs, and we might already have dismounted some
#of those GPUs before. For this exercise we want to work with the ones that
#are mounted; the parameter -PresentOnly will do just that.
$MyNVidiaGRIDK1 = Get-PnpDevice -PresentOnly | Where-Object {$_.Class -eq "Display"} |
    Where-Object {$_.Service -eq "nvlddmkm"} |
    Where-Object {$_.FriendlyName -eq "NVIDIA Grid K1"}
$MyNVidiaGRIDK1 | ft -AutoSize

Extra info: you may have already used one of the display adapters for DDA; its status then shows as “Unknown”.

We can filter out any already dismounted device by using the -PresentOnly parameter. As we could have more NVIDIA adapters in the host, potentially different models, we filter on the FriendlyName so we only get the NVIDIA GRID K1 display adapters.


In the example above you would see only 3 display adapters, as 1 of the 4 on the GPU is already dismounted; the “Unknown” one isn’t returned anymore.

Anyway, when we run the $MyNVidiaGRIDK1 query above, we get an array with the display adapters relevant to us. I’ll use the first one (which I excluded from use with RemoteFX). Since the array is zero based, this means I disable that display adapter as follows:

Disable-PnpDevice -InstanceId $MyNVidiaGRIDK1[0].InstanceId -Confirm:$false


When you now run the query again, you’ll see that the disabled adapter has “Error” as its status. This is the one we will dismount so that the host no longer has access to it. As the array is zero based, we grab the data about that display adapter at index 0.

#Grab the data (multi string value) for the display adapter
$DataOfGPUToDDismount = Get-PnpDeviceProperty DEVPKEY_Device_LocationPaths -InstanceId $MyNVidiaGRIDK1[0].InstanceId
$DataOfGPUToDDismount | ft -AutoSize


We grab the location path out of that data (it’s the first value, zero based, in the multi string value).

#Grab the location path out of the data (it's the first value, zero based)
#How do I know: read the MSFT blogs and the script by MSFT I mentioned earlier.
$locationpath = ($DataOfGPUToDDismount).data[0]
$locationpath | ft -AutoSize


This locationpath is what we need to dismount the display adapter.

#Use this location path to dismount the display adapter

Dismount-VmHostAssignableDevice -locationpath $locationpath -force

Once you dismount a display adapter, it becomes available for DDA. When we now run the -PresentOnly query again, the dismounted display adapter is no longer present in the display adapters. It’s also gone from Device Manager.


Yes, it’s gone from Device Manager; there are only 3 NVIDIA GRID K1 adapters left. Do note that the dismounted device is unavailable to the host, but it is still fully functional and can be assigned to a VM. The remaining NVIDIA GRID K1 adapters can still be used with RemoteFX for VMs.

It’s not “lost”, however. When we adapt our query to find the system devices that have “Dismounted” in the friendly name, we can still get to it (needed to restore the GPU to the host when required). This means that -PresentOnly has a different outcome depending on the class: the device is no longer available in the Display class, but it is in the System class.

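A sketch of that query (it is the same filter the removal section uses later):

#The dismounted GPU is gone from the Display class but still present in the System class
Get-PnpDevice -PresentOnly |
    Where-Object {$_.Class -eq "System" -and $_.FriendlyName -like "*Dismounted*"} |
    ft -AutoSize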

And we can also see it in the System devices node in Device Manager, where it is labeled “PCI Express Graphics Processing Unit – Dismounted”.

We now run Get-VMHostAssignableDevice again and see that our dismounted adapter has become available to be assigned via DDA.


This means we are ready to assign the display adapter exclusively to our Windows 10 VM.

Assigning a GPU to a VM via DDA

You need to shut down the VM

Change the automatic stop action for the VM to “turn off”


This is mandatory, or you can’t assign hardware via DDA; it will throw an error if you forget this.
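In PowerShell, that preparation is simply (using the VM name from this article):

#Shut down the VM and set the mandatory automatic stop action
Stop-VM -Name RFX-WIN10ENT
Set-VM -Name RFX-WIN10ENT -AutomaticStopAction TurnOff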

I also set my VM configuration as described in https://blogs.technet.microsoft.com/virtualization/2015/11/23/discrete-device-assignment-gpus/

I give it up to 4 GB of memory, as that’s what this NVIDIA model seems to support. According to the blog, GPUs work better (or only work) if you set -GuestControlledCacheTypes to true.

“GPUs tend to work a lot faster if the processor can run in a mode where bits in video memory can be held in the processor’s cache for a while before they are written to memory, waiting for other writes to the same memory. This is called “write-combining.” In general, this isn’t enabled in Hyper-V VMs. If you want your GPU to work, you’ll probably need to enable it”

#Let’s set the memory resources on our generation 2 VM for the GPU

Set-VM RFX-WIN10ENT -GuestControlledCacheTypes $True -LowMemoryMappedIoSpace 2000MB -HighMemoryMappedIoSpace 4000MB

You can query these values with Get-VM RFX-WIN10ENT | fl *
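A slightly more focused query is sketched below; it assumes these settings are surfaced as properties on the VM object, which is why fl * shows them.

Get-VM -Name RFX-WIN10ENT |
    Format-List -Property Name, GuestControlledCacheTypes, LowMemoryMappedIoSpace, HighMemoryMappedIoSpace, AutomaticStopAction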

We now assign the display adapter to the VM using that same $locationpath

Add-VMAssignableDevice -LocationPath $locationpath -VMName RFX-WIN10ENT

Boot the VM, login and go to device manager.


We now need to install the device driver for our NVIDIA GRID K1 GPU, basically the one we used on the host.


Once that’s done we can see our NVIDIA GRID K1 in the guest VM. Cool!


You’ll need to restart the VM after the hardware change. The result after all that hard work is a very nice graphical experience compared to RemoteFX.


What, you don’t believe it’s using an NVIDIA GPU inside of a VM? Open up Perfmon in the guest VM and add counters: you’ll find the NVIDIA GPU counter set and see that you have a GRID K1 in there.


Start some GPU-intensive process and see those counters rise.


Remove a GPU from the VM & return it to the host.

When you no longer need a GPU for DDA to a VM you can reverse the process to remove it from the VM and return it to the host.

Shut down the VM guest OS that’s currently using the NVIDIA GPU graphics adapter.

In an elevated PowerShell prompt or ISE we grab the locationpath for the dismounted display adapter as follows

$DisMountedDevice = Get-PnpDevice -PresentOnly |
    Where-Object {$_.Class -eq "System" -AND $_.FriendlyName -like "PCI Express Graphics Processing Unit - Dismounted"}
$DisMountedDevice | ft -AutoSize


We only have one GPU that’s dismounted so that’s easy. When there are more display adapters unmounted this can be a bit more confusing. Some documentation might be in order to make sure you use the correct one.

We then grab the locationpath for this device, which is at index 0, as it’s an array with one entry (zero based). So in this case we could even leave out the index.

$LocationPathOfDismountedDA = ($DisMountedDevice[0] | Get-PnpDeviceProperty DEVPKEY_Device_LocationPaths).data[0]

$LocationPathOfDismountedDA


Using that locationpath we remove the DDA GPU from the VM

#Remove the display adapter from the VM.

Remove-VMAssignableDevice -LocationPath $LocationPathOfDismountedDA -VMName RFX-WIN10ENT

We now mount the display adapter on the host using that same locationpath

#Mount the display adapter again.

Mount-VmHostAssignableDevice -locationpath $LocationPathOfDismountedDA

We then grab the display adapter, which is now back: disabled in Device Manager, with an “Error” status in the Display class of the PnP devices.

#It will now show up in our query for -PresentOnly NVIDIA GRID K1 display adapters
#Its status will be "Error" (not "Unknown")


We grab that first entry to enable the display adapter (or do it in device manager)

#Enable the display adapter

Enable-PnpDevice -InstanceId $MyNVidiaGRIDK1[0].InstanceId -Confirm:$false

The GPU is now back and available to the host. When you run Get-VMHostAssignableDevice again, it won’t return this display adapter anymore.

We’ve enabled the display adapter, and it’s ready for use by the host or RemoteFX again. Finally, we set the memory resources & configuration for the VM back to its defaults before starting it again (PS: these defaults are the values on a standard VM that never had any DDA GPU installed; that’s where I got them).

#Let’s set the memory resources on our VM for the GPU to the defaults

Set-VM RFX-WIN10ENT -GuestControlledCacheTypes $False -LowMemoryMappedIoSpace 256MB -HighMemoryMappedIoSpace 512MB


Now tell me all this wasn’t pure fun!


73 thoughts on “Setting up Discrete Device Assignment with a GPU”

“This is mandatory our you can’t assign hardware via DDA. It will throw an error if you forget this”

Are you actually saying that when DDA is used in a VM any reboot of the Host results in a brute “Power Off” of the VM ? Or can you set this back to shutdown or save after you have assigned the device…?

Nope, you cannot do that. It acts as hardware, both in the positive way (stellar performance for certain use cases) and in the negative way (you lose some capabilities you’ve become used to with virtualization). Now do note that this is TPv4, a v1 implementation; we’ll see where this lands in the future. DDA is only for select use cases & needs where the benefits outweigh the drawbacks, and as it breaks through the virtualization layer, it is also only for trusted admin scenarios.

Haha, yes, understood. But suppose you add an NVMe this way and then reboot the host while heavy IO is going on… “Power Off”? Really??? 🙂 Even with real HW, you don’t need to turn off or cut power to a real HW system either… The same goes for SR-IOV actually, so it just sounds like it’s still in a beta-testing stage for that matter… Put differently: DDA is totally useless if Power Off will be your only choice @RTM…

I would not call that totally useless 🙂 A desktop is not totally useless because it can’t save state when you have a brownout. And you also manage a desktop: for planned events you shut it down. The use case determines what’s most important.

Shutdown wasn’t an option. Bye-bye CAU in a VDI environment… Or would you go shut down each VM manually? I guess it will get better by the time it RTMs. I reckon MS understands that as well…

Depends on use case. Ideally it comes without any restrictions. Keep the feedback coming. MSFT reads this blog every now and then 🙂 and don’t forget about uservoice https://windowsserver.uservoice.com/forums/295047-general-feedback !

So do you think some of the newer graphics cards that will “support” this type of DDA will be able to expose segments of their hardware? let’s say, an ATI FirePro s7150. It has the capability to serve ~16 users, but today, only one VM can use the entire card.

It’s early days yet and I do hope more information both from MSFT and card vendors will become available in the next months.

Pingback: The Ops Team #018 – “Bruised Banana” | The Ops Team | | php Technologies

Pingback: The Ops Team #018 - "Bruised Banana"

Pingback: Discrete Device Assignment with Storage Controllers - Working Hard In IT

I’m super close on this. I have the GPU assigned (a K620), but when I install the drivers and reboot, windows is ‘corrupt’. It won’t boot, and it’s not repairable. I can revert to snapshots pre-driver, but that’s about it. I’ve tried this in a Win 8 VM and a Win 10 VM. Both generation 2.

I have not seen that. Some issues with Fast Ring Windows 10 releases such as driver issues / errors but not corruption.

I think my issue is due to my video card. I’m testing this with a K620, and I’m unclear if the K620 supports Access Control Services. I’m curious: your instructions use the -force flag on the Dismount-VmHostAssignableDevice command. Was the -force required with your GRID card as well? That card would absolutely support Access Control Services; I’m wondering if the -force was included for the card you were using, or for the PCI Express architecture. Thanks again for the article. I’m scrambling to find a card that supports Access Control Services to test further. I’m using the 620 because it does not require 6-pin power (my other Quadro cards do).

Hi, I’m still trying to get the details from MSFT/NVIDIA, but without the force it doesn’t work and throws an error. You can always try that. It’s very unclear what exactly is supported and what is not, and I’ve heard (and read) contradicting statements by the vendors involved. Early days yet.

The error is: The operation failed. The manufacturer of this device has not supplied any directives for securing this device while exposing it to a virtual machine. The device should only be exposed to trusted virtual machines. This device is not supported when passed through to a virtual machine.

Hi, I’m wondering if you have any information or experience with using DDA combined with Windows Server RemoteApp technology. I have set up a generation 2 Windows 10 VM with an NVIDIA GRID K2 assigned to it. Remote Desktop sessions work perfectly, but my RemoteApp sessions occasionally freeze with a dwm.exe appcrash. I’m wondering if this could be caused by the discrete device assignment. Are RemoteApps stable with DDA?

I also used a PowerEdge R730 and a Tesla K80. Everything goes fine following your guide to the letter, until installing the driver on the VM, where I get a Code 12 error in Device Manager: “This device cannot find enough free resources that it can use. (Code 12)” (Problem Status: {Conflicting Address Range} The specified address range conflicts with the address space.)

Any ideas what might be causing this? The driver is the latest version and installed on the host without problems.

Hi, I have kind of the same problem, same error msg, but on an IBM x3650 M3 with a GTX 970 (I think a GTX 960 works well..). Did you fix it in any way? Thanks in advance =))

Same here with the R730 and a Tesla K80. Just finished up the above, including installing CUDA, and I get the same Code 12 error. Anyone figure out how to clear this error up?

I have the same problem with an HP DL380p Gen8:

I had the problem on the HOST system too; there I had to enable “PCI Express 64-BIT BAR Support” in the BIOS. Then the card works in the HOST system.

But not in the VM.

Nice read. I’ve been looking around for articles about using pass-through with non-Quadro cards, but haven’t been able to find much. Yours is actually the best I’ve read. By that I mean two NVIDIA retail GeForce series cards, one for the host, one passed through to a VM. From what I’ve read I don’t see anything to stop it working, so long as the guest card is 700 series or above, since the earlier cards don’t have support. Is that a fair assessment?

Hi. I have an error when running Dismount-VmHostAssignableDevice: “The current configuration does not allow for OS control of the PCI Express bus. Please check your BIOS or UEFI settings.” What should I check in the BIOS? Or maybe try Uninstall in Device Manager?

Hello, did you find a solution to this problem? I have the same problem on my HP Z600 running Windows Server 2016.

I assigned a K2 GPU to a VM but now I am not able to boot the VM anymore…

I will get an error that a device is assigned to the VM and therefore it cannot be booted.

Cluster of 3x Dell R720 on Windows Server 2016; the VM is limited to a single node which has the K2 card (the other two nodes don’t have graphics cards yet).

Are you sure you didn’t assign it to 2 VMs by mistake? If both are shut down, you can do that…

It looks like it just won’t work when the VM is marked as highly available. When I remove this flag and store it on a local HDD of a cluster node, it works.

Do you know if the HP m710x with Intel P580 supports DDA?

No, I don’t. I’ve only used the NVIDIA GRID/Tesla series so far. Ask your HP rep?

Tried to add an NVIDIA Tesla M10 (GRID) card (4 GPUs) to 4 different VMs. It worked flawlessly, but after that I could not get all the GPUs back when I tried to remove them from the VMs. After Remove-VMAssignableDevice the GPU disappeared from the VM Device Manager, but I could not mount it back on the host. When listing, it shows the “System PCI Express Graphics Processing Unit – Dismounted” line with “Unknown” status. The GPU disappeared from the VM but cannot be mounted and enabled as per your instructions. What could possibly have caused this?

I have not run into that issue yet. Sorry.

This is great work and amazing. I have tried with an NVIDIA Quadro K2200 and was able to use OpenGL for the apps I need.

One thing I noticed: the desktop display is attached to Microsoft Hyper-V Video, and dxdiag shows it as the primary video adapter. I am struggling to find a way to disable the Hyper-V Video adapter so the VM can be forced to use NVIDIA instead as the primary video adapter for all display processing. Thoughts?

Well, my personal take on this is that it’s not removable, and it will function as it does on high-end systems with an onboard GPU and a specialty GPU: the high-power one is used only when needed, to save energy (always good, very much so on laptops). But that’s maybe not a primary concern. If your app is not being served by the GPU you want it to be serviced by, you can try to dive into the settings in the control panel / software of the card; NVIDIA allows for this. See if that helps you achieve what you need.

My VM is far from stable with a GPU through DDA (MSI R9 285 Gaming 2GB). Yes, it does work and performance is great, but sometimes the VM locks up and reports a GPU driver issue. I don’t get errors that get logged, just reboots or black/blue screens. Sometimes the VM crashes and comes back online during the connection time of a remote connection (uptime reset).

I don’t know if it is a problem with Ryzen (1600X), 16 GB, Gigabyte AB350 Gaming 3.

Launching HWiNFO64 within the VM completely locks up the host and the VMs. Outside the VM, no problems.

Still great guide, the only one I could find.

Vendors / MSFT need to state support (working versus supported), and this is new territory.

I disabled ULPS this morning to prevent the GPU from idling. The VM did stay online for over 4 hours, but still at some point it goes down. Here are all the error codes of the bluescreens -> http://imgur.com/a/vNWuf It seems like a driver issue to me.

When reading “Remove a GPU from the VM & return it to the host.” there is a typo:

Where-Object {$_.Class -eq “System” -AND $_.FriendlyName -like “PCI Express Graphics Processing Unit – Dismounted”}

The en dash “–” in the friendly name should be a regular hyphen “-”. I got stuck when trying to return the GPU back to the main OS, and this fixed it. (I see your website formats small -’s as big ones, which is probably how the typo crept in.)

Pingback: MCSA Windows Server 2016 Study Guide – Technology Random Blog

We are running the same setup with a Dell R730 and GRID K1. All the setup as you listed works fine, and the VM works with DDA, but after a day or so the GRID inside the VM reports “Windows has stopped this device because it has reported problems. (Code 43)”

I have read that NVIDIA only supports GRID K2 and above with DDA, so I am wondering if that’s the reason the driver is crashing?

We are running driver version 22.21.13.7012.

Have you seen this occur in your setup?

It’s a lab setup only nowadays; the K1 is getting a bit old, and there are no production installs I work with using it today. Some drivers do have known issues. Perhaps try R367 (370.16), the latest update of the branch that still supports K1/K2 with Windows Server 2016.

Thanks for your quick reply,

Yes, it is an older card. We actually purchased this card some time ago for use with a Windows 2012 R2 RDS session host, not knowing that it wouldn’t work with RemoteFX through a session host.

We are now hoping to make use of it in Server 2016 RemoteFX, but I don’t think this works with a session host either, so we are now testing out DDA.

We installed version 370.12 which seems to install driver version 22.21.13.7012 listed in Device manager.

I will test this newer version and let you know the results

Thanks again.

Did a quick check:

RemoteFX & DDA with 22.21.13.7012 works, and after upgrading to 23.21.13.7016 it still does. Didn’t run it for longer, naturally. I have seen error 43 with RemoteFX VMs a few times, but that’s immediate, and I have never found a good solution other than replacing the VM with a clean one. Good luck.

Hello, where can I read more on how to configure the BIOS: whether or not to enable SR-IOV and what else will be needed?

I need help setting up the BIOS. Motherboard: “SuperMicro X10DRG-Q”, GPU: NVIDIA K80.

I assigned the TESLA K80 video card; it is detected in the virtual machine, but when I look at dxdiag I see an error.

I have attached a GRID K1 card via DDA to a Windows 10 VM. It shows up fine and installs drivers OK in the VM, but the VM still seems to be using the Microsoft Hyper-V adapter for video and rendering (tested with Heaven Benchmark). GPU-Z does not pick up any adapter. When I try to access the NVIDIA Control Panel I get a message saying “You are not currently using a display attached to an Nvidia GPU”. The host is Windows Server 2016 Standard with all updates.

If anyone has any ideas that would help a lot, thanks.

Hi Everybody,

Can someone help me out here? 🙂

I have a “problem” with the VM console after implementing DDA. I installed the drivers on the Hyper-V host, configured DDA on the host, and assigned a GPU to the VM; that part works fine. The drivers inside the VM also install just fine. But after installing them and rebooting the VM, I cannot manage the VM through the Hyper-V console anymore and the screen goes black. RDP to the VM works fine. What am I doing wrong here?

My setup is:

Server 2016 Datacenter Hyper-V, HP ProLiant DL380, NVIDIA Tesla M10, 128 GB. Profile: Virtualisation Optimisation.

I have tested version 4.7 NVIDIA GRID (on host and VM) and 6.2 NVIDIA Virtual GPU Software (on host and VM).

Kind regards

Does the GRID K1 need an NVIDIA vGPU license? I’m considering purchasing a K1 on eBay but am concerned that once I install it in my server, the functionality of the card will be limited without a license. Is their licensing “honor” based? My intent is to use this card in a DDA configuration. If the functionality is limited I will likely need to return it. Please advise. Thanks!

Nah, that’s an old card pre-licensing era afaik.

Thanks! Looks like I have this installed per the steps in this thread: a BIG THANK YOU! My guest VM (Win 10) sees the K1 assigned to it, but it is not detected by the 3D apps I’ve tried. Any thoughts on this?

I was reading some other posts on NVIDIA’s GRID forums, and the NVIDIA reps said to stick with the R367 release of drivers (369.49), which I’ve done on both the host and guest VM (I also tried the latest 370.x drivers). Anyway, I launch the CUDA-Z utility from the guest and no CUDA devices are found. Cinebench sees the K1, but an OpenGL benchmark test results in 11 fps (probably because it’s OpenGL and not CUDA). I also launched Blender 2.79b and 2.8, and they do not see any CUDA devices. Any thoughts on what I’m missing here?

I’m afraid no CUDA support is there with DDA.

Thanks for the reply. I did get CUDA to work by simply spinning up a new guest... must have been something corrupt with my initial VM. I also use the latest R367 drivers with no issue (in case anyone else is wondering).

Good to know. Depending on what you read, CUDA works with passthrough or is only for shared GPU. The first is correct, it seems. Thx.

Great post, thank you for the info. My situation is similar but slightly different. I’m running a Dell PE T30 (it was a black Friday spur-of-the-moment buy last year) on which I’m running Windows 10 with Hyper-V enabled. There are two guests: another Windows 10, which is what I use routinely for my day-to-day life, and a Windows Server 2016 Essentials. This all used to run on a PE2950, fully loaded, running Hyper-V 2012 R2 Core...

When moving to the T30 (more as an experiment), I was blown away at how much the little GPU on the T30 improved my Windows 10 remote desktop experience. My only issue: not enough horsepower. It only has two cores, and I’m running BlueIris video software, file/print services, and something called PlayOn for TV recording. This overwhelmed the CPU.

So this year I picked up a T130 with Windows 2016 Standard with four cores and 8 threads. But the T130 does not have a GPU, so I purchased a video card and put it in. Fired it up, and no GPU for the Hyper-V guests. I had to add the Remote Desktop role to the 2016 server to let Hyper-V use it, and then, yup, I needed an additional license at an additional fee, which I don’t really want to pay if I don’t have to... So my question:

– Is there an EASY way around this so I can use WS2016S as the host and the GPU for both guests but not have to buy a license? I say easy because DDA sounds like it would meet this need (for one guest?), but it also seems like more work than I’d prefer to embark on..

– Or do I just use Windows 10 as my host and live with the limitations? It sounds like the only thing I care about is virtualizing GPUs using RemoteFX. But I’m also confused on this, since Windows 10 on the T30 is using the GPU to make my remote experience better. So I know I’m missing some concept here...

Thanks for the help – Ed.

I cannot dismount my GRID K1 as per your instructions. My setup is as follows:

Motherboard: Supermicro X10DAi (SR-IOV enabled)
Processor: Intel Xeon E5-2650 v3
Memory: 128 GB DDR4
GPU: NVIDIA GRID K1

When I try to dismount the card from the host I get the following:

Dismount-VmHostAssignableDevice : The operation failed.
The current configuration does not allow for OS control of the PCI Express bus. Please check your BIOS or UEFI settings.
At line:1 char:1
+ Dismount-VmHostAssignableDevice -force -locationpath $locationpath
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidArgument: (:) [Dismount-VMHostAssignableDevice], VirtualizationException
    + FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.DismountVMHostAssignableDevice

Either it's a BIOS (UEFI) configuration issue that can be corrected, or a BIOS (UEFI) that does not fully support OS control of the bus. Check the BIOS settings, but normally the vendors should specify what gear is compatible with DDA. If it is not there, a BIOS upgrade might introduce it. But this is a game of dotting the i’s and crossing the t’s before getting the hardware. Opportunistic lab builds with assembled gear might work, but no guarantees given.

OK, I now have 2 NVIDIA GRID K1 cards installed in an HP DL380p server. 6 GPUs are showing healthy, 2 are showing error code 43. I have tried every variation of driver, BIOS, and firmware, and am at my wits’ end. I know these cards are not faulty.

Hi, thanks for the post. I did all the steps of the publication, and I have an error when starting the generation 2 Windows 10 VM; I get the following message:

[Content] ‘VMWEBSCAN’ failed to start.

Virtual Pci Express Port (Instance ID 2614F85C-75E4-498F-BD71-721996234446): Failed to Power on with Error ‘A hypervisor feature is not available to the user.’.

[Expanded Information] ‘VMWEBSCAN’ failed to start. (Virtual machine ID B63B2531-4B4D-4863-8E3C-D8A36DC3E7AD)

‘VMWEBSCAN’ Virtual Pci Express Port (Instance ID 2614F85C-75E4-498F-BD71-721996234446): Failed to Power on with Error ‘A hypervisor feature is not available to the user.’ (0xC035001E). (Virtual machine ID B63B2531-4B4D-4863-8E3C-D8A36DC3E7AD)

I am using a PowerEdge R710 gen2 and an NVIDIA Quadro P2000 that is supposed to support DDA.

Well, make sure you have the latest BIOS. But it is old hardware, and support for DDA is very specific with models, hardware, Windows versions, etc.; the range of supported hardware is small. Verify everything: model of CPU, chipset, SR-IOV, VT-d/AMD-Vi, MSI/MSI-X, 64-bit PCI BAR, IRQ remapping. I would not even try with Windows Server 2019; that is only for the Tesla models, not even GRID is supported. Due to the age of the server and the required BIOS support, I’m afraid this might never work, and even if it does, it can break at any time. Trial and error. You might get lucky, but it will never be supported and it might break with every update.

Pingback: How-to install a Ubuntu 18.04 LTS guest/virtual machine (VM) running on Windows Server 2019 Hyper-V using Discrete Device Assignment (DDA) attaching a NVIDIA Tesla V100 32Gb on Dell Poweredge R740 – Blog

Any idea if this’ll work on an iGPU such as Intel’s UHD? Can’t find anything about it on the net.

Can you add multiple GPUs to the VM?

As long as the OS can handle it sure, multiple GPUs, multiple NVMEs …

Need your advice. We are planning to create a Hyper-V cluster based on two HP DL380 servers. Both servers will have an NVIDIA GPU card inside. The question is whether it’s possible to create a Hyper-V cluster based on those 2 nodes, with most VMs highly available and one VM on each node without HA but with DDA to the GPU. So, if I understand this thread and the comments correctly, I have to store VMs on shared storage as usual, but VMs with DDA I have to store on the local drive of the node. And I have to unflag HA for VMs with DDA. That’s all. Am I right?

Thanks in advance

You can also put them on shared storage, but they cannot live migrate, and the automatic stop action has to be set to turn off. Whether you can use local storage depends on the storage array. On S2D, having storage other than the OS outside of the virtual disks from the storage pool is not recommended. MSFT wants to improve this for DDA, but when or if that will be available in vNext is unknown. Having DDA VMs on shared storage also causes some extra work and planning if you want them to work on another node. Also see https://www.starwindsoftware.com/blog/windows-2016-makes-a-100-in-box-high-performance-vdi-solution-a-realistic-option “Now do note that the DDA devices on other hosts and as such also on other S2D clusters have other IDs and the VMs will need some intervention (removal, cleanup & adding of the GPU) to make them work. This can be prepared and automated, ready to go when needed. When you leverage NVMe disks inside a VM the data on there will not be replicated this way. You’ll need to deal with that in another way if needed. Such as a replica of the NVMe in a guest and NVMe disk on a physical node in the stand-by S2D cluster. This would need some careful planning and testing. It is possible to store data on a NVMe disk and later assign that disk to a VM. You can even do storage replication between virtual machines; one does have to make sure the speed & bandwidth to do so is there. What is feasible and will materialize in real life remains to be seen, as what I’m discussing here are already niche scenarios. But the beauty of testing & labs is that one can experiment a bit. Homo ludens, playing is how we learn and understand.”
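A hedged sketch of such a scripted cleanup & re-add on the new host (the VM name is illustrative, and it assumes the target host has exactly one GPU already dismounted for DDA):

$vmName = 'VMDDA1'
Remove-VMAssignableDevice -VMName $vmName -Verbose          #drop the stale assignment from the old host
$gpu = Get-VMHostAssignableDevice | Select-Object -First 1  #location paths differ per host
Add-VMAssignableDevice -VMName $vmName -LocationPath $gpu.LocationPath -Verbose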

Many thanks for your reply. Very useful. And what about GPU virtualization (GPU-PV)? Just as an idea: install a Windows 10 VM and use the GPU card on it. We’ll install a CAD system on this VM, and users will have access to it via RDP. Will it work fine?

Hyper-V only has RemoteFX, which is disabled by default as it has some security risks, being older technology. Then there’s DDA. GPU-PV is not available, and while MSFT has plans / is working on improvements, I know no roadmap or timeline details for this.

Pingback: How to Customize Hyper-V VMs using PowerShell

Hi, I try to use DDA on my Dell T30 with an i7-6700K built in. Unfortunately I get an error when I try to dismount my desired device. Any idea? Is the system not able to use DDA?

Dismount-VMHostAssignableDevice : The operation failed.
The current configuration does not allow for OS control of the PCI Express bus. Please check your BIOS or UEFI settings.
At line:1 char:1
+ Dismount-VMHostAssignableDevice -LocationPath "PCIROOT(0)#PCI(1400)#U …
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidArgument: (:) [Dismount-VMHostAssignableDevice], VirtualizationException
    + FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.DismountVMHostAssignableDevice

Kind regards Jens

I have only used DDA with certified OEM solutions. It is impossible for me to find out what combinations of MobBo/BIOS/GPU cards will work and are supported.

Dell T30 with an i7-6700K, with all virtualization options enabled in the BIOS.

When I try to dismount the card from the host I get the following: Dismount-VmHostAssignableDevice : The operation failed. The current configuration does not allow for OS control of the PCI

Did someone get this running with an Dell T30?




Some ponderings on DDA on Windows 10/11 Pro

So over the past few months I've been messing with my first virtual machines using VirtualBox (a Ubuntu 20.04 LTS VM here, an Android 9.0 VM there), but now I'm trying to learn how to do some deeper stuff with real graphics (i.e. Blender / Steam games). Hyper-V, Unraid, Proxmox, IOMMU, SR-IOV, passthrough, GPU-PV: so many new terms and threads I can barely keep track...

So on my quest for understanding I fell into the rabbit hole that is DDA on Windows 10/11 Pro.

Apparently in 2016-2018 there were some indications that DDA in some form was added to Win10 Pro, but it would return a "a hypervisor feature is not available to the user" error. This prompted other users to search for a workaround, but from the threads I found, one was never discovered / made publicly known.

Indications of existence:

https://web.archive.org/web/20180108210157/https://windowsserver.uservoice.com/forums/295062-linux-support/suggestions/8730703-remotefx-opengl-guest-support-in-windows-server

https://www.reddit.com/r/HyperV/comments/8jwt5c/windows_10_hyperv_dda/

Other people who searched (apparently in vain sadly):

https://www.windowsq.com/t/does-anyone-know-how-to-bypass-the-dda-lock-on-windows-10-11-hyper-v.2220/

https://www.reddit.com/r/HyperV/comments/kfctxx/is_there_any_way_to_unblock_dda_in_windows_10/

https://docs.microsoft.com/en-us/answers/questions/303104/enabling-discrete-device-assignment-on-windows-10.html

https://superuser.com/questions/1365722/windows-10-virtual-machines-and-dda-for-gpus

So that's that, right? No DDA on consumer Windows for you haha!

But here’s what I found interesting: a few days ago there was a post titled “GPU acceleration for Azure IoT Edge for Linux on Windows (Preview)”, and curiously GPU acceleration via DDA is mentioned as compatible with Windows 11 Pro, but only with ONE GPU, the NVIDIA T4.

It only lists GeForce/Quadro as compatible with GPU-PV, not DDA.

So wait, DDA is supported somehow on Windows 11 Pro, but only with the NVIDIA T4, a $2000 GPU? Something to think about, eh? If it’s truly locked to the T4, I wonder if Windows 11 checks that it’s a T4 or if the “device mitigation driver” does that.

Source: https://docs.microsoft.com/en-us/azure/iot-edge/gpu-acceleration?view=iotedge-2018-06

I spent today installing Windows 11 Pro on my 2200G / B450M Pro4 (P1.60) / GTX 1660 Super build to see if I could throw in an AMD Radeon HD 7570 1GB, but I never got that far, since all but 2 of my devices failed the Microsoft DDA PowerShell script on Windows 11 Pro with SR-IOV, IOMMU, PCIe ARI support, and SVM enabled. I have no idea if it’s my motherboard, BIOS, or processor; I’m stumped for the moment.

https://docs.microsoft.com/en-us/windows-server/virtualization/hyper-v/plan/plan-for-deploying-devices-using-discrete-device-assignment

https://github.com/MicrosoftDocs/Virtualization-Documentation/blob/live/hyperv-tools/DiscreteDeviceAssignment/SurveyDDA.ps1
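For reference, running the survey script is straightforward (path assumed; the script only reads device information and prints a verdict per device):

# From an elevated PowerShell prompt, in the folder holding the downloaded script
Set-ExecutionPolicy -Scope Process -ExecutionPolicy Bypass
.\SurveyDDA.ps1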

My long-term plan moving forward is to pick an X570 board and a 5800X, run dual graphics cards using GPU-PV on Windows 11 Pro, and assign the weaker one to a 4-core Win10 Home Hyper-V VM.

Wow you made it this far. If you see any errors feel free to let me know.

Also do you think we’ll ever see the day when DDA on Windows Pro is fully enabled and not just a “Only Server versions of Windows support this. This isn't some wanton play for more of your hard-earned cash but rather just a side effect of being comfortable supporting server-class machines. They tend to work and be quite stable.” kind of thing? I’d be interested in hearing your thoughts.


Windows 11 Forum


Pass Thru devices in Hyper-V (IOMMU)

  • Thread starter jimbo45
  • Start date Apr 10, 2022

jimbo45

Well-known member

  • Apr 10, 2022

Hi folks

Is there some documentation around on how to pass through devices, specifically GPUs, disks, and CPUs (if on an MP system)?

On a Linux host it's easy enough: for Intel CPUs (VT-d), set intel_iommu=on, since the kernel config option CONFIG_INTEL_IOMMU_DEFAULT_ON is not set by default when booting the host in Linux. You should also append the iommu=pt parameter; this will prevent Linux from touching devices which cannot be passed through. I'm sure there's something similar for AMD.

I'm messing around with Hyper-V today. I've got a machine with 2 GPUs, 2 physical CPUs (4 cores each), and 4 motherboard disk ports spare. I can't get the VM booted onto a "native disk", and the 2nd monitor / GPU just seems to display what the 1st one is showing, or just blank. Using pass-through, if you have the hardware, makes the VM about 99% as responsive as it would be on a physical machine; it certainly does running Windows on a Linux beast.

Has anybody got decent pass-through working with Hyper-V with any version of VM? Using host USB devices doesn't address the problem either, since the host still has to provide the main USB driver in the first place.

Cheers
jimbo


Kari

PhD in Malt Based Liquids

Although already almost 6 years old, this article is still one of the best about passing through devices in Hyper-V: Passing through devices to Hyper-V VMs by using discrete device assignment - Scripting Blog [archived] (devblogs.microsoft.com)


How to deploy graphics devices using Hyper-V DDA

Admins can map a VM to a GPU with Hyper-V, but they must first complete a few steps, such as preparing the VM and dismounting the PCIe device from the host partition.

Brien Posey

Windows Server 2016 includes Discrete Device Assignment in its feature set, which enables IT administrators to map VMs directly to a Peripheral Component Interconnect Express (PCIe) device and gives those VMs exclusive access to it.

There are three main steps in performing a Discrete Device Assignment of a GPU: preparing the VM, dismounting the PCIe device from the host partition, and making the device assignment within the VM. Admins must perform all three steps in PowerShell.

Unfortunately, this process can have some ambiguity. Some hardware vendors have additional requirements that go beyond those of Hyper-V. As a result, admins should get in touch with their GPU vendor and determine if the GPU works with Hyper-V Discrete Device Assignment (DDA) and if it must meet any special requirements before deployment.

Prepare the VM

Generally, the only step admins must complete to prepare a VM for Hyper-V DDA is to set the VM's automatic stop action by selecting Turn off the virtual machine in Hyper-V Manager. You can use a PowerShell command to perform the same action:

[Screenshot: Automatic Stop Action options in Hyper-V Manager]
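A minimal sketch of that command, assuming the Hyper-V PowerShell module and a VM named "MyVM" (the name is a placeholder):

    # DDA requires the VM to turn off, rather than save state, when the host shuts down
    Set-VM -Name "MyVM" -AutomaticStopAction TurnOff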

The GPU vendor might require admins to perform some additional preparatory steps on the VM. These steps can include enabling a CPU's write combining feature, configuring the 32-bit memory-mapped I/O space or configuring MMIO space greater than 32 bits. You can use additional PowerShell commands to perform these steps:
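For example, something along these lines (a sketch; 3Gb and 33280Mb are the starting values Microsoft's DDA documentation uses, and your GPU vendor may specify different sizes):

    # Enable write combining on the CPU, then enlarge the 32-bit and greater-than-32-bit MMIO spaces
    Set-VM -VMName "MyVM" -GuestControlledCacheTypes $true
    Set-VM -VMName "MyVM" -LowMemoryMappedIoSpace 3Gb -HighMemoryMappedIoSpace 33280Mb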

Dismount the PCIe device from the host partition

Admins must take two steps to dismount a GPU's PCIe device from the host partition. First, admins must ask the GPU vendor if the GPU uses a security mitigation driver. If the GPU does, they must install it before removing the GPU from the host partition. The dismount command varies depending on whether a security mitigation driver exists.

Next, admins must find the GPU's location path by opening Device Manager in the parent partition, right-clicking the GPU, and selecting the Properties command from the shortcut menu. When the GPU properties sheet appears, select the Details tab, and then select Location paths from the Property drop-down menu. Make note of the path that references PCIROOT.

[Screenshot: Microsoft Basic Display Adapter properties, Details tab, Location paths]
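The same location path can also be pulled out with PowerShell; a sketch, assuming a single NVIDIA GPU (the -FriendlyName filter is a placeholder for your own device):

    # Find the GPU's PnP instance, read its location paths, and keep the PCIROOT-rooted one
    $gpu = Get-PnpDevice -Class Display -FriendlyName "*NVIDIA*"
    $locationPath = (Get-PnpDeviceProperty -InstanceId $gpu.InstanceId -KeyName DEVPKEY_Device_LocationPaths).Data |
        Where-Object { $_ -like "PCIROOT*" }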

If admins use a security mitigation driver, they can use this command to dismount the PCIe device:
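With the mitigation driver in place, the device is first disabled and then dismounted without forcing; a sketch, reusing the $gpu and $locationPath variables from above:

    # Disable the device on the host, then hand it over to the Hyper-V virtualization stack
    Disable-PnpDevice -InstanceId $gpu.InstanceId -Confirm:$false
    Dismount-VMHostAssignableDevice -LocationPath $locationPath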

Otherwise, they should use this command:
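Without a mitigation driver, the only difference is the -Force switch, which tells Hyper-V to dismount the device even though no mitigation driver is present:

    Disable-PnpDevice -InstanceId $gpu.InstanceId -Confirm:$false
    Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath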

Assign the GPU to the VM

The last step in performing a Hyper-V DDA is to map the GPU to the VM. From the host partition, admins must enter this command:
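A sketch of the assignment, again with "MyVM" standing in for the real VM name:

    Add-VMAssignableDevice -LocationPath $locationPath -VMName "MyVM"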

Afterward, admins can boot the VM and install the GPU vendor's video drivers into the VM.
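The article stops at assignment, but for completeness, returning the device to the host is the same flow in reverse (a sketch using the same placeholder names):

    # Pull the device out of the VM, remount it on the host, and re-enable it
    Remove-VMAssignableDevice -LocationPath $locationPath -VMName "MyVM"
    Mount-VMHostAssignableDevice -LocationPath $locationPath
    Enable-PnpDevice -InstanceId $gpu.InstanceId -Confirm:$false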



Introduction to Windows Server 2016 Hyper-V Discrete Device Assignment

David Tanaka

This paper describes the steps to enable Discrete Device Assignment (also known as PCI pass-through), available as part of the Hyper-V role in Microsoft Windows Server 2016.

Discrete Device Assignment is a performance enhancement that allows a specific physical PCI device to be directly controlled by a guest VM running on the Hyper-V instance. Specifically, this new feature aims to deliver certain classes of PCIe devices, such as Graphics Processing Units (GPUs) or Non-Volatile Memory Express (NVMe) devices, to a Windows Server 2016 virtual machine, where the VM is given full and direct access to the physical PCIe device.

In this paper we describe how to enable and use this feature on Lenovo servers running Windows Server 2016 Technical Preview 4 (TP4), and we provide step-by-step instructions for making an NVIDIA GPU available to a Hyper-V virtual machine.

This paper is aimed at IT specialists and IT managers wanting to understand more about the new features of Windows Server 2016 and is part of a series of technical papers on the new operating system.

Table of Contents

  • Introduction
  • Installing the GPU and creating a VM
  • Enabling the device inside the VM
  • Restoring the device to the host system
  • Summary

Change History

Changes in the April 18 update:

  • Corrections to step 3 and step 4 in "Restoring the device to the host system" on page 12.



Update to add Discrete Device Assignment support for Azure that runs on Windows Server 2012 R2-based guest VMs

About this update

This article describes an update that adds Discrete Device Assignment (DDA) support to Windows Server 2012 R2-based guest virtual machines (VMs). The update is targeted at VMs that are running on Azure's N-Series GPU-based VMs or on on-premises deployments of a later version of Windows Server.

How to get this update

Important If you install a language pack after you install this update, you must reinstall this update. Therefore, we recommend that you install any language packs that you need before you install this update. For more information, see Add language packs to Windows .

Method 1: Windows Update

This update is provided as a Recommended update on Windows Update. For more information on how to run Windows Update, see  How to get an update through Windows Update .

Method 2: Microsoft Download Center

The following files are available for download from the Microsoft Download Center:

Operating system: All supported x64-based versions of Windows Server 2012 R2

For more information about how to download Microsoft support files, select the following article number to view the article in the Microsoft Knowledge Base:

119591 How to obtain Microsoft support files from online services

Microsoft scanned this file for viruses. Microsoft used the most current virus-detection software that was available on the date that the file was posted. The file is stored on security-enhanced servers that help prevent any unauthorized changes to the file.

Update detail information

Prerequisites

To install this update, you should first install the April 2014 update rollup for Windows RT 8.1, Windows 8.1, and Windows Server 2012 R2 (2919355) on Windows Server 2012 R2.

Registry information

To apply this update, you don't have to make any changes to the registry.

Restart requirement

You may have to restart the computer after you apply this update.

Update replacement information

This update doesn't replace a previously released update.

Learn about the terminology that Microsoft uses to describe software updates.

File Information

The English (United States) version of this software update installs files that have the attributes that are listed in the following tables.

Windows Server 2012 R2

The files that apply to a specific product, milestone (RTM, SPn), and service branch (LDR, GDR) can be identified by examining the file version numbers as shown in the following table:

Version           Product                  Milestone   Service branch
6.3.9600.18xxx    Windows Server 2012 R2   RTM         GDR

GDR service branches contain only those fixes that are widely released to address widespread, critical issues. LDR service branches contain hotfixes in addition to widely released fixes.

The MANIFEST files (.manifest) and the MUM files (.mum) that are installed for each environment are listed in the "Additional file information" section. MUM, MANIFEST, and the associated security catalog (.cat) files, are very important to maintain the state of the updated components. The security catalog files, for which the attributes are not listed, are signed with a Microsoft digital signature.

x64 Windows Server 2012 R2

File name     File version     File size   Date          Time    Platform   SP requirement   Service branch
Vpci.sys      6.3.9600.18219   72,024      26-Jan-2016   19:15   x64        None             Not applicable
Vpcivsp.sys   6.3.9600.18219   65,536      26-Jan-2016   14:48   x64        SP.              AMD64_WVPCIVSP
Vpcivsp.sys   6.3.9600.18219   65,536      26-Jan-2016   14:48   x64        SP_              AMD64_WVPCIVSP

Additional file information

x64 Windows Server 2012 R2

File name: Amd64_wvpci.inf_31bf3856ad364e35_6.3.9600.18219_none_6e65b6b5b8a47d85.manifest
File version: Not applicable
File size: 1,752
Date (UTC): 26-Jan-2016
Time (UTC): 19:21
Platform: Not applicable

File name: Amd64_wvpcivsp.inf_31bf3856ad364e35_6.3.9600.18219_none_6fd0ad9bb353ca78.manifest
File version: Not applicable
File size: 2,118
Date (UTC): 26-Jan-2016
Time (UTC): 15:32
Platform: Not applicable

File name: Amd64_wvpcivsp_coresys.inf_31bf3856ad364e35_6.3.9600.18219_none_15b922e5c3f03741.manifest
File version: Not applicable
File size: 2,150
Date (UTC): 26-Jan-2016
Time (UTC): 15:32
Platform: Not applicable

File name: Update.mum
File version: Not applicable
File size: 2,078
Date (UTC): 26-Jan-2016
Time (UTC): 19:21
Platform: Not applicable


Enabling Discrete Device Assignment on Windows 10

I've been experimenting with different versions of Windows 10, and I am almost ready to buy a license, but unfortunately there is one major problem that is a showstopper for me. For my work, I need to be able to pass through physical devices to a Linux-based Hyper-V virtual machine. Microsoft calls this feature DDA, or Discrete Device Assignment, and as far as I can tell it is only supported on Server editions of Windows. Unfortunately, it is not viable for me to use Server as my main workstation OS. Dual-booting with it is also not an option.

Is there any way to somehow enable this feature on Windows 10 with some sort of registry/group policy hack? Or perhaps is it possible to move the feature from a Server installation over to a Win10 installation so I can use it?

My main goal is to pass through a PCI device (a USB 3.1 controller/root hub) to a Linux-based virtual machine. GPU passthrough would certainly be useful in the future, but right now it's not as important as the root hub.
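(For reference: on a Windows Server host that does support DDA, passing a USB controller through would look roughly like the sketch below. The -FriendlyName filter and VM name are placeholders, and the controller must be one the host reports as assignable.)

    # Locate the USB controller, dismount it from the host, and assign it to the VM
    $usb = Get-PnpDevice -Class USB -FriendlyName "*USB 3.1*" | Select-Object -First 1
    $locationPath = (Get-PnpDeviceProperty -InstanceId $usb.InstanceId -KeyName DEVPKEY_Device_LocationPaths).Data[0]
    Disable-PnpDevice -InstanceId $usb.InstanceId -Confirm:$false
    Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath
    Add-VMAssignableDevice -LocationPath $locationPath -VMName "LinuxVM"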

DDA seems like a mature technology at this point, so I think it should be included in Pro, or at least in Enterprise / Pro for Workstations. Linux can already do this out of the box, and it's a free operating system.

Currently this is the only thing that's holding me back from getting a proper license for Windows 10, and I would really appreciate it if someone from the Server or Hyper-V teams could point me in the right direction and show me how I can enable the feature.

Thanks in advance


I would like to check whether the reply was helpful. If yes, please click "Accept Answer" so that others who run into a similar issue can find useful information quickly. If you have any other concerns or questions, please feel free to give feedback.

Best Regards, Joann

We haven't heard from you for a few days. I hope you have solved your issue and that my tips were useful to you. If yes, please accept the answer so that others who run into a similar issue can find useful information quickly. If you have any other concerns or questions, please feel free to give feedback.

I don't know about getting it to work in Linux VMs, but Windows VMs should work. Here's info on getting a GPU passed through: https://www.reddit.com/r/sysadmin/comments/jym8xz/gpu_partitioning_is_finally_possible_in_hyperv/

That post seems to describe GPU partitioning, not DDA. For me, the most important thing to pass through is a USB root hub (a PCIe device), which is still impossible.

As I understand it, you want to know whether you can use Windows 10 to configure DDA so that you can pass your PCIe device through to your Linux-based VM.

According to the article, DDA is only supported on Microsoft Hyper-V Server 2016, Windows Server 2016, Microsoft Hyper-V Server 2019, and Windows Server 2019.

https://learn.microsoft.com/en-us/windows-server/virtualization/hyper-v/plan/plan-for-deploying-devices-using-discrete-device-assignment

And after research, it seems there's no official method to unlock DDA on Windows 10, so it's not recommended to use third-party methods to configure DDA on a Windows 10 host, in case you run into any unexpected issues. Thanks for your understanding!

Thank you for your time! If you have any other concerns or questions, please feel free to feedback!

Best regards Joann

--------------------------------------------------------------------------------------------------------------------

If the Answer is helpful, please click " Accept Answer " and upvote it.

Note: Please follow the steps in our documentation to enable e-mail notifications if you want to receive the related email notification for this thread.

I am aware that it's only officially supported on Server, I am asking for this feature to be made available in Windows 10.

Is there perhaps an unofficial method to bypass the DDA restriction? As far as I can tell, the feature is there, but it's locked, which is quite annoying.

Again, I would buy a win10 license if there was a way to do this.

Thanks, Zsolt

Thank you for your reply!

I am afraid we can only offer you the official versions that support configuring DDA. In this sense, we strongly recommend that you report this issue to Microsoft directly with the Feedback Hub app. The Feedback Hub app lets you tell Microsoft about any problems you run into while using Windows 10. For more information on how to get in touch with us, click here: https://support.microsoft.com/en-us/windows/send-feedback-to-microsoft-with-the-feedback-hub-app-f59187f8-8739-22d6-ba93-f66612949332

Improving the quality of the products and services is a never-ending process for Microsoft. Thanks again for choosing us!

And I would appreciate it if you could click "Accept Answer" so that others with similar questions can find this information quickly.

If you have any other concerns, please feel free to feedback!

Best wishes Joann Zhu


COMMENTS

  1. Plan for deploying devices by using Discrete Device Assignment

  2. Deploy graphics devices by using Discrete Device Assignment

  3. Discrete Device Assignment -- Description and background

    With Windows Server 2016, we're introducing a new feature, called Discrete Device Assignment, in Hyper-V. Users can now take some of the PCI Express devices in their systems and pass them through directly to a guest VM. This is actually much of the same technology that we've used for SR-IOV networking in the past.

  4. Discrete Device Assignment -- GPUs

    This is the third post in a four-part series. My previous two blog posts talked about Discrete Device Assignment (link) and the machines and devices necessary (link) to make it work in Windows Server 2016 TP4. This post goes into more detail, focusing on GPUs.

  5. Discrete Device Assignment -- Machines and devices

    First, we're not supporting Discrete Device Assignment in Hyper-V in Windows 10. Only Server versions of Windows support this. This isn't some wanton play for more of your hard-earned cash but rather just a side effect of being comfortable supporting server-class machines. They tend to work and be quite stable.

  6. Setting up Discrete Device Assignment with a GPU

  7. Some ponderings on DDA on Windows 10/11 Pro : r/HyperV

    Set up all your drivers and such. Extract Windows\Branding & Windows\System32\spp\tokens\skus from the server ISO. Copy skus to the same directory. Run slmgr.vbs /rilc in cmd. Run slmgr.vbs /ipk KEY, where KEY is the server one you want. Copy branding to the same directory. Activate, there you go.

  8. Deploy NVMe Storage Devices using Discrete Device Assignment

    Starting with Windows Server 2016, you can use Discrete Device Assignment, or DDA, to pass an entire PCIe device into a VM. This allows high-performance access to devices like NVMe storage or graphics cards from within a VM while being able to leverage the devices' native drivers.

  9. Use GPUs with Clustered VMs through Direct Device Assignment

    Using GPUs with clustered VMs through DDA (Discrete Device Assignment) becomes particularly significant in failover clusters, offering direct GPU access. DDA allows you to assign one or more entire physical GPUs to a single virtual machine (VM), giving it direct access to the device.

  10. Discrete Device Assignment -- Guests and Linux

    First published on TECHNET on Nov 24, 2015. In my previous three posts, I outlined a new feature in Windows Server 2016 TP4 called Discrete Device Assignment. This post talks about support for Linux guest VMs, and it's more of a description of my personal journey rather than a straight feature description.

  11. Plan for GPU acceleration in Windows Server

    Discrete Device Assignment (DDA) allows you to dedicate one or more physical GPUs to a virtual machine. In a DDA deployment, virtualized workloads run on the native driver and typically have full access to the GPU's functionality.

  12. Plans to support Discrete Device Assignment

    Does anyone know if there is a plan to support Discrete Device Assignment (DDA) under Docker for Windows Containers? Ultimately it would enable vendors like Nvidia to support Docker-based GPU compute applications on Windows. Hopefully support for other devices would be included as well.