Windows 10 Virtual Machine with CPU and GPU Passthrough
Posted Mon, 15 May 2023 20:59:48 +0200 | Virtual Machines
I used this great guide for setting up a Windows 10 VM with GPU passthrough using Virt Manager. I encountered some issues though that weren’t brought up in the guide, and also found some additional optimizations to the process. Since it might help others I thought I would document them here.
Now, because the process of GPU passthrough is not entirely trivial, you might want to consider running your application using wine, or if it's a Steam game, proton. Personally, I haven't had many problems running obscure applications like proprietary build SDKs or old visual novels using wine. I have also been able to run some uncooperative Steam games in proton by switching the proton version in Steam to Experimental, and to fix issues like crackling audio by setting PULSE_LATENCY_MSEC to an appropriate value in the game's launch options. Check https://www.protondb.com for your game to see feedback about how well it runs; there are usually good tips for which proton version to use, or which launch options to tinker with.
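For reference, the crackling-audio workaround is just an environment variable in the game's Steam launch options; the value 60 below is only a commonly suggested starting point, not a magic number, so tune it if needed.

    PULSE_LATENCY_MSEC=60 %command%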
Nevertheless, if you have problems using wine or proton, or if you would prefer not to use them, then here are my notes.
- In case you have a previous Windows VM that you would like to add GPU passthrough to, it is critically important that it uses the UEFI firmware (OVMF_*.fd or similar) as described in section 5. I'm not sure whether this affects the actual GPU passthrough, but with old-fashioned BIOS, which Virt Manager uses by default, Windows is unable to use more than one virtual CPU core, which leads to potato performance in applications like games. If you see that your old Windows VM uses BIOS, create a new one from scratch; attempting to switch an existing VM over to UEFI just leaves the UEFI unable to boot it. Also, if your old Windows VM is called something other than win10, then in section 7 it seems to be enough to go into the qemu hook script and change it to check for whatever name your VM has (see the hook-script sketch after this list). Of course, do this before running the install_hooks.sh script.
- I found rumors that Windows 10 has DRM that renders it unable to use more than one CPU socket without a valid license. Whether or not that is true, I didn't run into any problems as long as I stuck to a single CPU socket and instead added more CPU cores when manually configuring the CPU topology in section 5 (an example topology is sketched after this list). Just make sure to use UEFI as described above, otherwise the VM will freeze if you so much as think about adding more than one CPU core or thread.
- I have an AMD GPU, but I assume this would be useful for Nvidia folks too. There is an easier way to dump the ROM compared to the method described in section 6: just follow the instructions as described here (a rough sketch is also included after this list), but remember that the path where the ROM ends up must be the same as the one described at the end of section 6, otherwise the VM will crash.
- In section 8, when you are adding the ROM file to the passed-through GPU PCI devices by editing the XML, for me it is vital that bar is set to off. You can either set this by unchecking the checkbox in the Virt Manager GUI for the GPU PCI devices, or by adding a bar='off' attribute to the <rom file='/path/to/gpu.rom'/> tag (an example hostdev entry is sketched after this list). I just happened to discover this by randomly trying things after the VM kept freezing.
- Also in section 8, you need to pass through your keyboard and optionally your mouse, since the virtual keyboard and mouse devices do not seem to work. The process is simple: just add them from the list of host USB devices instead of host PCI devices (the resulting XML is sketched after this list). I didn't have to do anything else, like forcing my Linux host to release its grip on them.
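Here are the sketches referenced above. First, the hook script name check: libvirt invokes /etc/libvirt/hooks/qemu with the guest name as its first argument and the operation as its second, so the part to edit in the guide's script should have roughly this shape. This is only a sketch; the actual script installed by install_hooks.sh may be organized differently, and the start/stop logic itself is omitted here.

    #!/bin/bash
    # /etc/libvirt/hooks/qemu
    # libvirt calls this hook as: qemu <guest_name> <operation> <sub_operation> ...
    GUEST_NAME="$1"
    OPERATION="$2"

    # The script gates everything on the VM name; if your VM is not called
    # "win10", this string is what you change.
    if [ "$GUEST_NAME" != "win10" ]; then
        exit 0
    fi

    case "$OPERATION" in
        prepare)
            # Runs before the VM starts: detach the GPU, stop the display manager, etc.
            ;;
        release)
            # Runs after the VM stops: reattach the GPU, restart the display manager, etc.
            ;;
    esac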
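Second, the CPU topology. In the domain XML (virsh edit win10, or the XML view in Virt Manager) a single-socket layout ends up looking roughly like this; the counts below are just an example for 8 virtual CPUs, adjust them to your hardware.

    <vcpu placement='static'>8</vcpu>
    <cpu mode='host-passthrough' check='none'>
      <topology sockets='1' cores='4' threads='2'/>
    </cpu>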
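Third, the ROM dump. As far as I understand, the linked instructions boil down to reading the ROM through sysfs; assuming the GPU sits at PCI address 0000:0a:00.0 (yours will differ, check with lspci), the procedure is roughly the following, run as root, with the output path being whatever section 6 expects.

    # Find the GPU's PCI address
    lspci -nn | grep -iE 'vga|3d'

    # Enable reading the ROM, dump it, then lock it again
    echo 1 > /sys/bus/pci/devices/0000:0a:00.0/rom
    cat /sys/bus/pci/devices/0000:0a:00.0/rom > /path/to/gpu.rom
    echo 0 > /sys/bus/pci/devices/0000:0a:00.0/rom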
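Fourth, the rom bar setting. The passed-through GPU's hostdev entry ends up looking something like this; the PCI address and ROM path are examples.

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x0a' slot='0x00' function='0x0'/>
      </source>
      <!-- bar='off' is the part that stopped my VM from freezing -->
      <rom bar='off' file='/path/to/gpu.rom'/>
    </hostdev>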
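Finally, the keyboard and mouse passthrough. Adding them from the GUI produces USB hostdev entries like the one below; the vendor and product IDs are examples, yours come from lsusb.

    <hostdev mode='subsystem' type='usb' managed='yes'>
      <source>
        <vendor id='0x046d'/>
        <product id='0xc31c'/>
      </source>
    </hostdev>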
After this everything worked smoothly. Note that once you start the VM in Virt Manager, your host environment's access to the GPU is removed and all your open applications are killed. Don't worry, when you power off the VM you will be returned to the lock screen of your host environment, but of course any unsaved work is lost.
If you need to troubleshoot your VM and have a spare computer lying around, it is useful to start an SSH server on the VM host and manage the VM remotely from the spare computer, either through Virt Manager or virsh. Because the VM host loses access to the GPU, if the VM hangs the host is essentially stuck in a black screen of death; by remotely shutting down the VM you can recover from this without having to do a hard reset of the host by holding down its power button.
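Concretely, from the spare computer you can either point Virt Manager at the host over SSH or shut the guest down with virsh; the user, host name, and VM name below are placeholders.

    # Run Virt Manager against the remote host
    virt-manager --connect qemu+ssh://user@vmhost/system

    # Or, from an SSH session on the host, ask the hung guest to shut down
    virsh --connect qemu:///system shutdown win10
    # If it refuses, pull the virtual plug
    virsh --connect qemu:///system destroy win10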
I still haven't tried passing through sound to the VM. If I get around to it and have something useful to report, I'll post an update.