Setting Up a Windows Gaming VM using GPU Pass Through on Linux

THIS GUIDE IS UNFINISHED

I have read through a large number of guides on how to set up a gaming VM in Linux, and all of them seem to have holes in the process, contain incorrect information, or are so long and dense that they read less like a guide and more like a technical paper on how IOMMU, DMA, etc. function at a low level. This guide aims to be as complete as possible while maintaining a step-by-step process and not going into so much detail that it becomes unreadable.

Be careful to replace example variables used in this guide with your own variables. The guide will instruct on how to find your own variables to use.

This guide was written using this hardware and software:
Host OS: Linux 5.0.0-37-generic #40~18.04.1-Ubuntu SMP x86_64 x86_64 x86_64 GNU/Linux
Qemu information:
Compiled against library: libvirt 4.0.0
Using library: libvirt 4.0.0
Using API: QEMU 4.0.0
Running hypervisor: QEMU 2.11.1
Hardware:
Aorus Pro X399
AMD ThreadRipper 2920x
Nvidia 960
Nvidia 740

Step 1: Enable IOMMU and Virtualization

1: Ensure your hardware is compatible with a full suite of virtualization tools. On Intel hardware you should ensure that it is compatible with both VT-x and VT-d. AMD hardware should be capable of AMD-Vi. AMD-V is not sufficient. It must very specifically be AMD-Vi. You can determine this by searching the web for the model of both your CPU and motherboard and “VT-d capable” or “AMD-Vi capable” depending on the hardware you are using.
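If you want a quick check from the host itself, the CPU flags in /proc/cpuinfo show whether the CPU half is present. A sketch; note that the vmx/svm flags only prove VT-x/AMD-V on the CPU, while VT-d/AMD-Vi support depends on the board and BIOS and is confirmed later with the dmesg check:

```shell
# Quick sanity check for CPU virtualization extensions. The vmx flag
# means Intel VT-x, svm means AMD-V. IOMMU support (VT-d / AMD-Vi) is a
# board+BIOS feature and is NOT shown here -- see the dmesg check below.
if grep -qE 'vmx|svm' /proc/cpuinfo; then
    echo "CPU virtualization flag present"
else
    echo "no vmx/svm flag found"
fi
```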

2: Update your motherboard’s BIOS firmware: Mine was not able to enable the IOMMU feature until I updated. Even if your BIOS supports IOMMU already, a newer version may have better support that fixes slowness issues before they ever arise.

3: Enable virtualization features: on Intel systems you need to enable both VT-d and VT-x. On AMD you need to enable AMD-Vi. The specific instructions vary between BIOS vendors, so once again it is best to search the internet for instructions for your specific board.
Ryzen caveat: On any BIOS that offers “Auto | Disabled | Enabled” you should choose Enabled. Auto can produce worse IOMMU groupings and make the most difficult portions of this process even harder.

4: Configure Linux to use IOMMU:
Edit the file /etc/default/grub. Find the line that starts with GRUB_CMDLINE_LINUX_DEFAULT= and add the following to the end of the line:
For AMD hardware: amd_iommu=on
For Intel hardware: intel_iommu=on
If your boot time is noticeably longer, try changing amd/intel_iommu=on to iommu=1 or iommu=pt. If you have issues later where your VM is laggy or slow, you may need to add iommu=pt after amd/intel_iommu=on.
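For example, on my AMD system the finished line looks like this (quiet splash are the stock Ubuntu defaults; your existing options may differ):

```shell
# /etc/default/grub -- AMD example; use intel_iommu=on on Intel hardware
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash amd_iommu=on"
```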

5: Update grub: Execute sudo update-grub

6: REBOOT: None of these changes take effect until after you reboot.

7: Check that IOMMU was enabled by running dmesg | grep -i -e DMAR -e IOMMU. You should see something like this:

[    0.842942] AMD-Vi: IOMMU performance counters supported
[    0.842994] AMD-Vi: IOMMU performance counters supported
[    0.847834] iommu: Adding device 0000:00:19.6 to group 8
[    0.847846] iommu: Adding device 0000:00:19.7 to group 8
[    0.847855] iommu: Adding device 0000:01:00.0 to group 0
[    0.847863] iommu: Adding device 0000:01:00.1 to group 0
[    0.850572] AMD-Vi: Found IOMMU at 0000:00:00.2 cap 0x40
[    0.850574] AMD-Vi: Found IOMMU at 0000:40:00.2 cap 0x40
[    0.851486] perf/amd_iommu: Detected AMD IOMMU #0 (2 banks, 4 counters/bank).
[    0.851523] perf/amd_iommu: Detected AMD IOMMU #1 (2 banks, 4 counters/bank).

Intel hardware will obviously look different, and other AMD hardware will not look exactly the same. This is only a snippet of the output from my own system.

Step 2: Gathering System Information to Determine Course of Action

Next we need to get IOMMU configuration information and hardware IDs. The hardware IDs are used to blacklist the video card so that the host does not take ownership of it. If it does and you try to attach the video card to the guest, either the guest will hang at boot or the whole host will lock up and need a hard reboot. The IOMMU information tells us the identifier of the device that will be attached to the guest for pass through, and is also needed to determine whether the ACS patch will be required. ACS and the downsides of its kernel patch are explained in their own portion of this guide.

We will detect the hardware ID of the video card and blacklist it.
Get the hardware ID and IOMMU channel of the video card: Execute lspci -nn | grep -i vga to see all video cards installed. The output should look something like this:

08:00.0 VGA compatible controller [0300]: NVIDIA Corporation GK208B [GeForce GT 710] [10de:128b] (rev a1)
41:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM206 [GeForce GTX 960] [10de:1401] (rev a1) 

The numbers before “VGA” are the channel of the device. The hex numbers in [brackets] near the end of each line are the hardware ID. The video card that I want to blacklist is the GTX 960, so the relevant information that I need is the channel 41:00.0 and hardware ID [10de:1401]. Your channel and hardware ID will be different. The HDMI audio device on the video card does not appear in this output, and we will need to pass that through too. It can be found by running lspci -nn | grep -i audio.

$ lspci -nn | grep -i audio
08:00.1 Audio device [0403]: NVIDIA Corporation GK208 HDMI/DP Audio Controller [10de:0e0f] (rev a1)
0a:00.3 Audio device [0403]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) HD Audio Controller [1022:1457]
41:00.1 Audio device [0403]: NVIDIA Corporation Device [10de:0fba] (rev a1)

As we can see, the video card is 41:00.0 and its audio device is 41:00.1. The last number should be one or two off from the video card’s, and the preceding numbers should always be the same: both devices are functions of the same card, so their channels will never start with different numbers.

We now have a complete list of devices that we want to blacklist:
GeForce GTX 960 video card
IOMMU channel: 41:00.0
Hardware ID: 10de:1401
GeForce GTX 960 HDMI audio
IOMMU channel: 41:00.1
Hardware ID: 10de:0fba
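If you prefer to pull the hardware ID out programmatically, the bracketed vendor:device pair can be extracted with sed. A sketch, using the GTX 960 line from my output as sample input:

```shell
# Extract the [vendor:device] hardware ID from an lspci line.
# Sample input is the GTX 960 line from my system; in practice, pipe
# `lspci -nn | grep -i vga` output through the same sed expression.
line='41:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM206 [GeForce GTX 960] [10de:1401] (rev a1)'
echo "$line" | sed -n 's/.*\[\([0-9a-f]\{4\}:[0-9a-f]\{4\}\)\].*/\1/p'
# prints: 10de:1401
```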

Next we need to look at the IOMMU groups. You can only pass through all devices in an IOMMU group or none of them. If another device is in the same group, it will have to be passed along with the video card, or you will have to use the ACS patch.

Execute ls /sys/kernel/iommu_groups/*/devices to see IOMMU groups. It will show each IOMMU device (channel) in every IOMMU group on the host. Example:

/sys/kernel/iommu_groups/14/devices:
 0000:40:08.0@  0000:40:08.1@  0000:43:00.0@  0000:43:00.2@

For me, channels 41:00.0 and 41:00.1 are located in group 11:

/sys/kernel/iommu_groups/11/devices:
 0000:40:03.0@  0000:40:03.1@  0000:41:00.0@  0000:41:00.1@

As we can see, the group is tainted with other devices. Let’s look at what they are:

lspci -nn | grep 40:03 will show us the other devices in the group:

40:03.0 Host bridge [0600]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe Dummy Host Bridge [1022:1452]
 40:03.1 PCI bridge [0604]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) PCIe GPP Bridge [1022:1453]

Device 40:03.0 is a PCIe Dummy Host Bridge and device 40:03.1 is a PCIe GPP bridge. Thankfully both of these devices can be ignored. The one caveat to needing to pass all devices in an IOMMU group is that bridges do not need to be passed, and in fact should never be passed.

If your IOMMU groups are not tainted you can skip the ACS portion of this guide entirely. The ACS override patch can introduce security flaws, so you should not use it unless it is absolutely necessary.
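To see at a glance which devices share each group, the device paths can be paired with their lspci descriptions. A sketch of a common helper loop seen in VFIO guides:

```shell
#!/bin/sh
# List every IOMMU group together with a readable description of each
# device in it. Falls back to printing just the PCI address if lspci
# is unavailable; prints nothing if IOMMU is not enabled.
for d in /sys/kernel/iommu_groups/*/devices/*; do
    [ -e "$d" ] || continue   # glob didn't expand: IOMMU is off
    group=${d#*/iommu_groups/}; group=${group%%/*}
    addr=${d##*/}
    printf 'IOMMU group %s: %s\n' "$group" \
        "$(lspci -nns "$addr" 2>/dev/null || echo "$addr")"
done
```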

Step 3: Blacklisting Devices

Your host computer’s BIOS selects a default video card to use when booting. If it defaults to using the video card that needs to be passed through to the guest, you should enter your BIOS settings again to select the second video card installed in your system.

Currently, when booting, Linux will detect each piece of hardware connected to your host and assign the best kernel driver to use, but we don’t want the video card using the nvidia or amdgpu driver. We will blacklist the device three different ways. This is overkill, but it ensures that the vfio-pci driver will always be used instead of the normal driver after rebooting.

1. Determine the current driver being used:
Execute lspci -nnk -d {your hardware ID}

$ lspci -nnk -d 10de:1401
41:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM206 [GeForce GTX 960] [10de:1401] (rev a1)
         Subsystem: eVga.com. Corp. GM206 [GeForce GTX 960] [3842:2962]
         Kernel driver in use: nvidia
         Kernel modules: nvidiafb, nouveau, nvidia_drm, nvidia

We can see that my video card is using the nvidia driver. The driver needs to change to the vfio-pci driver.

2: Blacklist via initramfs modules
Execute sudo vim /etc/initramfs-tools/modules
Add these lines, replacing the driver name and hardware IDs with your own:

softdep nvidia pre: vfio vfio_pci

vfio
vfio_iommu_type1
vfio_virqfd
options vfio_pci ids=10de:1401,10de:0fba
vfio_pci ids=10de:1401,10de:0fba
vfio_pci
nvidia

3: Blacklist via /etc/modules
Execute sudo vim /etc/modules and add these lines, replacing the hardware IDs with your own:

vfio
vfio_iommu_type1
vfio_pci ids=10de:1401,10de:0fba

4. Blacklist via /etc/modprobe.d/
Execute sudo vim /etc/modprobe.d/nvidia.conf, where nvidia is the name of the driver your system is currently using.
Add these lines, replace the hardware IDs with your own and nvidia with the driver your system is using:
softdep nvidia pre: vfio vfio_pci
Execute sudo vim /etc/modprobe.d/vfio_pci.conf. Add the following line, replacing the hardware IDs with your own:
options vfio_pci ids=10de:1401,10de:0fba

5. Update grub and initramfs, then REBOOT.

sudo update-initramfs -u
sudo update-grub
sudo reboot now

6. Execute lspci -nnk -d {your hardware ID} again. This time you should see that the driver has changed to vfio-pci:

$ lspci -nnk -d 10de:1401
41:00.0 VGA compatible controller [0300]: NVIDIA Corporation GM206 [GeForce GTX 960] [10de:1401] (rev a1)
         Subsystem: eVga.com. Corp. GM206 [GeForce GTX 960] [3842:2962]
         Kernel driver in use: vfio-pci
         Kernel modules: nvidiafb, nouveau, nvidia_drm, nvidia

If the driver is still not showing vfio-pci, double check the previous steps for mistakes. The final, least stable method of blacklisting is to add the blacklist to grub. If vfio-pci is already in use, skip this next step.
sudo vim /etc/default/grub
On the line GRUB_CMDLINE_LINUX_DEFAULT= add vfio-pci.ids=10de:1401,10de:0fba. Change the hardware IDs to your own, then run sudo update-grub and reboot again.
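Whichever methods you used, you can check every blacklisted device in one pass with a small loop over the hardware IDs. A sketch, using my IDs (substitute your own):

```shell
# Show the kernel driver bound to each blacklisted device.
# IDs below are from my system (GTX 960 video + its HDMI audio).
for id in 10de:1401 10de:0fba; do
    out=$(lspci -nnk -d "$id" 2>/dev/null)
    if [ -n "$out" ]; then
        printf '%s\n' "$out"
    else
        echo "$id: not present on this machine"
    fi
done
```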

Step 5: Installing Software Components

It gets easier from here.

Step 6: Apparmor Caveat

Execute sudo apparmor_status
If apparmor is not installed and running skip this section. It doesn’t apply to you.
If you see apparmor module is loaded. in the output you will need to edit /etc/apparmor.d/abstractions/libvirt-qemu.
Execute sudo vim /etc/apparmor.d/abstractions/libvirt-qemu
find the section commented # for usb access and change it to look exactly like this:

# for usb access
 /dev/bus/usb/** rw,
 /etc/udev/udev.conf r,
 /sys/bus/ r,
 /sys/class/ r,
 /run/udev/data/* rw,

If you want to pass a hard drive to the guest, you will also need to edit the same file and add the device. If you are using an Nvidia video card, this step will be necessary later. Example:
/dev/sdb rw,

Afterwards execute service apparmor restart

If you are still having issues, you can check dmesg for apparmor logs with dmesg | grep apparmor. If you see that apparmor is still blocking things, you can either troubleshoot it or remove apparmor entirely with the following commands:

service apparmor stop
service apparmor teardown
update-rc.d -f apparmor remove
apt remove apparmor
sudo reboot now

Step 7: Configuring the Virtual Machine

  1. Execute sudo apt install virt-manager qemu-kvm ovmf
  2. Download the Windows installation iso file.
    Disk images are available directly from Microsoft’s website.
  3. Open virt-manager
  4. Create a new VM
  5. “Local install media”
  6. “Use ISO image” > browse to the ISO you downloaded
  7. Choose your RAM and CPU allocations.
  8. Check “customize configuration before install”
  9. Under the CPUs tab click the Topology drop down and change sockets to 1, threads to 1, and cores to the number of cores you want to use.
    Windows 10 is designed to only use 1 socket, so in a 6/1/1 (sockets/cores/threads) configuration it will only use 1 core, instead of all 6 cores in a 1/6/1 configuration.
    IF YOU SKIP THE PREVIOUS STEP WINDOWS WILL FAIL TO INSTALL AND TIME OUT IN THE OUT OF BOX EXPERIENCE PORTION OF INSTALLATION
  10. In the Overview tab change the chipset to Q35
  11. Double check the “current allocation” of CPUs in the CPU tab. It may have changed to 1 when the topology was fixed.
  12. In the Overview tab change firmware from BIOS to OVMF code.
  13. In the Sound tab:
    If you are using spice audio pass through leave the sound card as ich6.
    If you do not plan to do that, change the card to ac97.
  14. In the IDE disk 1 tab click the advanced options drop down and change the disk bus to SATA.
  15. In the IDE cdrom 1 tab change the disk bus to SATA
  16. Click +add hardware
  17. choose PCI host device
  18. Find the IOMMU channel of the device you previously blacklisted and set to use the vfio-pci driver. Select it and click finish.
  19. Repeat the previous step for all other devices that you want to pass through.
    Remember, you need to pass the entire IOMMU group, so if your video card has a separate device for HDMI audio, that needs to use the vfio-pci driver and be passed through as well.
  20. Click begin installation.
  21. Follow the standard Windows installation steps.
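For reference, the topology chosen in the CPUs tab is stored in the guest’s domain XML (viewable with virsh edit [guest name]) as a <topology> element; with 1 socket and 6 cores (an example count, assuming default attributes otherwise) it looks roughly like this:

```xml
<cpu>
  <topology sockets='1' cores='6' threads='1'/>
</cpu>
```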

At this point if you are passing an AMD video card to the guest everything should be working fine. You can start installing and playing games. If you are using an Nvidia card, the driver will install successfully, but the video card will fail to initialize when booting with error code 43. There is more work to be done to get the card running.

Bypassing Nvidia Error 43

virsh edit [guest name]
If you don’t know your guest name, run virsh list --all.
If the list is empty or you get the error “failed to get domain ‘[guest name]'”, try putting sudo before all of your virsh commands.
Edit the <features> section of the XML so it matches the following. The <hidden state='on'/> entry and the spoofed vendor_id hide the hypervisor from the Nvidia driver, which is what triggers error 43:

...
<features>
  <acpi/>
  <apic/>
  <hyperv>
    <relaxed state='on'/>
    <vapic state='on'/>
    <spinlocks state='on' retries='8191'/>
    <vendor_id state='on' value='[anything 12 characters long]'/>
  </hyperv>
  <kvm>
    <hidden state='on'/>
  </kvm>
  <vmport state='off'/>
  <ioapic driver='kvm'/>
</features>
...
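After saving, you can confirm the entries survived by running virsh dumpxml [guest name] and grepping for them. As a sketch, the commands below run the same grep against a stand-in fragment of what dumpxml should contain (the vendor_id value is an arbitrary 12-character example):

```shell
# Stand-in for `virsh dumpxml <guest>` output; on a real system, pipe
# the actual dumpxml through the same grep and expect 2 matching lines.
dumpxml="<hyperv>
  <vendor_id state='on' value='anystring123'/>
</hyperv>
<kvm>
  <hidden state='on'/>
</kvm>"
echo "$dumpxml" | grep -Ec 'hidden|vendor_id'
# prints: 2
```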

Optional: Add a Physical Disk

The virtual disk I created for my Windows installation is only 120GB. Clearly that will only hold half a game with today’s inflated game sizes so I decided to mount an entire disk that I already had formatted as NTFS.

  1. Determine which device is the hard drive you want to mount with grep /dev/sd /etc/mtab. Mine was sdd.
  2. Unmount the device from the host with sudo umount <mount path>
  3. Mount the drive to the guest with sudo virsh attach-disk <guestname> /dev/<disk> vdc. vdc is the target device on the guest. More than likely vda and vdb are already taken by the C drive and a virtual cdrom that holds the install ISO. If you have more than 2 virtual disks installed then something will already be mounted at vdc; choose a different letter, e.g. “vdd”.

Dealing with the Sound Issue

This ended up being much more of a hassle than I expected. Using the ich6 driver and passing audio over the spice channel caused the same hitching that nearly everyone experiences. The ac97 device has no driver support for any Windows version after Vista. There is a workaround to install the driver anyway, but I don’t like the idea of installing something that far out of support. I then tried to use an HDMI audio isolator so that I could pull audio from the HDMI port. The first device was cheap and only passed mono sound. The second device was also cheap and didn’t pass any sound at all. The next option was more expensive than I was willing to pay. The solution I found was to use the ich6 driver with a cheap USB to 3.5mm sound adapter. I used the “add hardware” wizard in qemu to add the USB host device, without needing to do any device blacklisting, pulseaudio configuration changes, or IOMMU finagling to get it to work. It’s been working flawlessly. The output from the guest and host are run into a mixer which outputs to a multi-output amp that provides sound to my headphones and speakers independently.