
Setup of Oculus Rift in VM with GPU passthrough

larsupilami73 Jul 9th, 2019 (edited)
  1. How to set up Windows10 Virtual Machine with GPU passthrough via Qemu/VFIO/OVMF (with minimal system changes)
  2. And install the Oculus Rift.
  3. -------------------------------------------------------------------------------------------------------------
  4. Date:12/07/2019     Author: larsupilami73
  5.  
  6.  
  7. Goals:
  8.  
  9. a. GPU passthrough with Windows10 in a virtual machine for running games, Unigine benchmarks, Oculus Rift etc.,
  10. b. Works for both identical and different Nvidia GPUs (no clue if it works for AMD GPUs),
  11. c. Minimal changes to the system: no Grub config, initramfs, /etc/modules changes,
  12. d. After shutdown of the VM, the 2nd GPU can be reclaimed for CUDA use.
  13.  
  14.  
  15. 1.1 Hardware:
  16. -------------
  17. AMD 1950x Threadripper (non overclocked),
  18. AsRock Taichi X399 Motherboard, bios version 3.30, agesa sp3r2-1.1.0.1,
  19. 32 GB RAM,
  20. Two identical Asus ROG Strix 1080Ti-11G-GAMING (vbios 86.02.39.00.54), not in SLI,
  21. One extra Samsung 860 SSD 250GB for the Windows10 installation (unformatted, not used by the host Debian OS)
  22. One extra Logitech K400+ wireless keyboard with touchpad for the virtual machine
  23.  
  24.  
  25. 1.2 Host:
  26. ---------
  27. CrunchBangPlusPlus 9, which is Debian Stretch with Openbox desktop (changed from 'stable' release to 'testing', see https://wiki.debian.org/DebianTesting)
  28. Kernel 4.19.05-amd64
  29. Nvidia drivers 418.56
  30.  
  31.  
  32. 2 Method outline:
  33. -----------------
  34. The problem:
  35. Qemu needs the 2nd GPU (the one to be passed through) to be bound to the VFIO driver for passthrough.
  36. Dynamic rebinding of the 2nd GPU from the Nvidia driver to VFIO and back is possible, however the GPU needs to be free of processes using it.
  37. Now, once the displaymanager (lxdm, slim etc.) is started, X is started too. X then grabs *all* GPUs it can find that are bound to the nvidia driver.
  38. You can check this by typing in a terminal:
  39.  
  40.     sudo lsof /dev/nvidia*
  41.  
  42. resulting in:
  43. ...
  44. COMMAND  PID USER   FD   TYPE  DEVICE SIZE/OFF  NODE NAME
  45. Xorg    1133 root  mem    CHR   195,1          46269 /dev/nvidia1  <---1st GPU, running desktop
  46. ...
  47. Xorg    1133 root  mem    CHR   195,0          46276 /dev/nvidia0  <---2nd GPU, X still occupies it, even if it is not connected to a monitor
  48. ...
  49.  
  50.  
  51. Also, trying to reset the 2nd GPU:
  52.  
  53.     sudo nvidia-smi -i 0 --gpu-reset
  54.  
  55. results in:
  56.  
  57. GPU 00000000:09:00.0 is currently in use by another process.
  58. 1 device is currently being used by one or more other processes
  59. ...
  60.  
  61.  
  62. As far as I know, there is no way to tell X (xserver-xorg-video-nvidia) to leave a certain GPU alone.
  63. So for dynamic rebinding of the 2nd GPU to the VFIO driver, X first needs to be stopped, which in turn requires the displaymanager to be stopped,
  64. which requires you to log out, drop to a terminal, log in again, manually call an unbinding script, and so on.
  65. This is annoying. We can avoid X grabbing the 2nd GPU by binding it to the VFIO driver early on, at system boot.
  66. This is the approach followed in this excellent Arch wiki:
  67.  
  68. https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF
  69.  
  70. However, it requires a change of initramfs, Grub config, kernel module parameters etc., parts of my system that I generally don't like touching,
  71. for fear an update will mess up my settings. Worse yet, for two identical GPUs, it gets even more complicated, needing a 'hook' script:
  72.  
  73. https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF#Using_identical_guest_and_host_GPUs
  74.  
  75. There is a *less invasive* way, which is outlined in the rest of this text.
  76. The recipe goes like this:
  77.  
  78. -Boot with normal initramfs. The kernel binds both nvidia GPUs to the nvidia driver.
  79. -When the cron service is started, a cron job with the '@reboot' setting calls a shell script that
  80.  unbinds nvidia driver from the 2nd GPU and binds it to VFIO.
  81. -Then the displaymanager starts X, ignoring the GPU bound to VFIO.
  82. -Login to normal desktop using 1st GPU.
  83. -Now another script can start the Windows10 VM, or rebind the 2nd GPU to the nvidia driver for CUDA, or whatever,
  84.  because the currently running X server is only concerned with the 1st GPU.
  85.  
  86. This method was hinted at by 'TheCakeIsNaOH' here:
  87.  
  88. https://forum.level1techs.com/t/identical-gpu-passthrough-ubuntu/138843/14
  89.  
  90. The purpose of this text is to write this all out a bit more, step-by-step.
  91.  
  92.  
  93. 3 Let's do it:
  94. -------------
  95.  
  96. 3.1 IOMMU groups:
  97. -----------------
  98.  
  99. Follow: https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF
  100. up to and including section 2.3.1 'Isolating the GPU'. At this point, your IOMMU groups should be sane, meaning
  101. your 2nd GPU is in its own IOMMU group.
  102.  
  103. In my system, the 2nd GPU and its audio interface (the card I want to pass through) are in group 16,
  104. as given by the script 'Ensuring that the groups are valid' in section 2.2 of the Archwiki article:
  105.  
  106. ----------------------------------------------
  107. #!/bin/bash
  108. shopt -s nullglob
  109. for g in /sys/kernel/iommu_groups/*; do
  110.     echo "IOMMU Group ${g##*/}:"
  111.     for d in $g/devices/*; do
  112.         echo -e "\t$(lspci -nns ${d##*/})"
  113.     done;
  114. done;
  115. ----------------------------------------------
  116.  
  117. This outputs:
  118.  
  119. ...
  120. IOMMU Group 16 09:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] [10de:1b06] (rev a1)
  121. IOMMU Group 16 09:00.1 Audio device [0403]: NVIDIA Corporation GP102 HDMI Audio Controller [10de:10ef] (rev a1)
  122. ...
  123.  
  124. while my 1st GPU (the one running my Linux desktop) is in group 32:
  125.  
  126. ...
  127. IOMMU Group 32 41:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] [10de:1b06] (rev a1)
  128. IOMMU Group 32 41:00.1 Audio device [0403]: NVIDIA Corporation GP102 HDMI Audio Controller [10de:10ef] (rev a1)
  129. ...
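The group of a single device can also be read straight from its sysfs entry, without walking all groups. A small sketch (the helper name iommu_group_of is made up here; the SYSFS variable is overridable only so the logic can be tried against a fake tree; normally leave it at /sys):

```shell
#!/bin/sh
# Print the IOMMU group number of one PCI function by resolving
# its iommu_group symlink in sysfs.
SYSFS=${SYSFS:-/sys}

iommu_group_of() {
    # $1: full PCI address, e.g. 0000:09:00.0
    basename "$(readlink "$SYSFS/bus/pci/devices/$1/iommu_group")"
}
```

On this system, iommu_group_of 0000:09:00.0 prints 16, matching the listing above.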
  130.  
  131. Note: strangely enough, nvidia-smi indexes the 2nd GPU as '0' while the 1st one is '1':
  132.  
  133.     nvidia-smi
  134.  
  135. Thu May 30 12:06:19 2019      
  136. +-----------------------------------------------------------------------------+
  137. | NVIDIA-SMI 418.56       Driver Version: 418.56       CUDA Version: 10.1     |
  138. |-------------------------------+----------------------+----------------------+
  139. | GPU  Name        Persistence-M| Bus-Id        Disp.A | Volatile Uncorr. ECC |
  140. | Fan  Temp  Perf  Pwr:Usage/Cap|         Memory-Usage | GPU-Util  Compute M. |
  141. |===============================+======================+======================|
  142. |   0  GeForce GTX 108...  Off  | 00000000:09:00.0 Off |                  N/A |
  143. |  0%   32C    P8    11W / 250W |      2MiB / 11178MiB |      0%      Default |
  144. +-------------------------------+----------------------+----------------------+
  145. |   1  GeForce GTX 108...  Off  | 00000000:41:00.0  On |                  N/A |
  146. |  0%   35C    P8    12W / 250W |    153MiB / 11178MiB |      6%      Default |
  147. +-------------------------------+----------------------+----------------------+
  148.  
  149. +-----------------------------------------------------------------------------+
  150. | Processes:                                                       GPU Memory |
  151. |  GPU       PID   Type   Process name                             Usage      |
  152. |=============================================================================|
  153. |    1      1138      G   /usr/lib/xorg/Xorg                           148MiB |
  154. |    1      1683      G   compton                                        3MiB |
  155. +-----------------------------------------------------------------------------+
  156.  
  157.  
  158. Also, nvidia-smi does not report all processes that have opened /dev/nvidia0:
  159.  
  160.     sudo lsof /dev/nvidia0
  161.  
  162. will report several opened files.
  163.  
  164.  
  165. 3.2 Disable nvidia-persistenced:
  166. --------------------------------
  167. The nvidia-persistenced daemon must be disabled to prevent it from (re-)initializing the 2nd GPU.
  168. For more info, see: https://docs.nvidia.com/deploy/driver-persistence/index.html
  169. Do once:
  170.  
  171.     sudo systemctl disable nvidia-persistenced
  172.  
  173. Reboot. Check with:
  174.  
  175.     nvidia-smi -i 0 -q
  176.  
  177. ==============NVSMI LOG==============
  178.  
  179. Timestamp                           : Thu May 30 12:36:45 2019
  180. Driver Version                      : 418.56
  181. CUDA Version                        : 10.1
  182.  
  183. Attached GPUs                       : 2
  184. GPU 00000000:09:00.0
  185.     Product Name                    : GeForce GTX 1080 Ti
  186.     Product Brand                   : GeForce
  187.     Display Mode                    : Enabled
  188.     Display Active                  : Disabled
  189.     Persistence Mode                : Disabled  <----OK!
  190. ...
  191.  
  192.  
  193. My conky script uses nvidia-smi to obtain GPU memory, fan state etc.
  194. For some reason, disabling nvidia-persistenced slows down nvidia-smi (and my desktop),
  195. but only AFTER the first time a Windows virtual machine has been shut down in my normal desktop environment.
  196. To avoid this, nvidia persistence mode will be re-enabled after the VM is shut down in a script (see Section 3.5.2).
  197.  
  198. If needed, persistence can be controlled manually, per GPU, by doing:
  199.  
  200.     sudo nvidia-smi -i {0,1} -pm {DISABLED,ENABLED}
  201.  
  202.  
  203. 3.3 Install the unbind script:
  204. -------------------------------
  205.  
  206. 3.3.1 Create a script called 'unbind_nvidia_bind_vfio.sh':
  207. ----------------------------------------------------------
  208.  
  209. #!/bin/sh
  210. #place in /usr/local/bin
  211. #unbinds GPU from Nvidia driver, bind to VFIO
  212.  
  213. /sbin/modprobe vfio
  214. /sbin/modprobe vfio_pci
  215.  
  216. # VGA
  217. echo '0000:09:00.0' > /sys/bus/pci/devices/0000:09:00.0/driver/unbind
  218. echo '10de 1b06' > /sys/bus/pci/drivers/vfio-pci/new_id
  219. echo '0000:09:00.0' > /sys/bus/pci/devices/0000:09:00.0/driver/bind
  220. echo '10de 1b06' > /sys/bus/pci/drivers/vfio-pci/remove_id
  221.  
  222. # Audio  
  223. echo '0000:09:00.1' > /sys/bus/pci/devices/0000:09:00.1/driver/unbind
  224. echo '10de 10ef' > /sys/bus/pci/drivers/vfio-pci/new_id
  225. echo '0000:09:00.1' > /sys/bus/pci/devices/0000:09:00.1/driver/bind
  226. echo '10de 10ef' > /sys/bus/pci/drivers/vfio-pci/remove_id
  227.  
  228. #this is a kvm option, needed to avoid blue-screen-of-death at ovmf uefi boot of the windows10 iso.
  229. #see: https://forum.level1techs.com/t/windows-10-1803-as-guest-with-qemu-kvm-bsod-under-install/127425/9
  230. echo 1 > /sys/module/kvm/parameters/ignore_msrs
  231.  
  232. exit 0
  233.  
  234.  
  235. In the above script, adjust the PCI addresses to those of your 2nd GPU (the one to be passed through) and its vendor and device IDs,
  236. as reported by the IOMMU script in 3.1. Note that device IDs and PCI addresses are different for the VGA and Audio sections!
  237. Copy the script to /usr/local/bin and adjust the execution permissions and owner (if needed):
  238.  
  239.     sudo chmod 755 unbind_nvidia_bind_vfio.sh
  240.     sudo chown root unbind_nvidia_bind_vfio.sh
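As a side note, the four echo pairs of the script can be factored into a single helper, which makes adjusting addresses and IDs less error-prone. A sketch (to_vfio is a made-up name; it mirrors the exact unbind/new_id/bind/remove_id writes of the script above; SYSFS exists only so the logic can be dry-run against a fake tree):

```shell
#!/bin/sh
# Rebind one PCI function from its current driver to vfio-pci,
# using the same sysfs writes as unbind_nvidia_bind_vfio.sh.
SYSFS=${SYSFS:-/sys}

to_vfio() {
    dev=$1   # PCI address, e.g. 0000:09:00.0
    id=$2    # vendor/device pair, e.g. "10de 1b06"
    echo "$dev" > "$SYSFS/bus/pci/devices/$dev/driver/unbind"
    echo "$id"  > "$SYSFS/bus/pci/drivers/vfio-pci/new_id"
    echo "$dev" > "$SYSFS/bus/pci/devices/$dev/driver/bind"
    echo "$id"  > "$SYSFS/bus/pci/drivers/vfio-pci/remove_id"
}
```

With it, the VGA and Audio parts reduce to: to_vfio 0000:09:00.0 '10de 1b06' and to_vfio 0000:09:00.1 '10de 10ef'.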
  241.  
  242.  
  243. 3.3.2 Check that the unbinding script works:
  244. --------------------------------------------
  245. In this section, we will check that the above script works, by calling it manually from the command line.
  246. As explained in Section 2, rebinding does not work until all processes using the 2nd GPU (X servers etc.) are shut down,
  247. so we first need to drop to a plain-text terminal and stop all X servers.
  248.  
  249. -Exit your desktop environment. This returns you to the display manager (lxdm for me).
  250. -Go to a plain-text terminal session (ctrl-alt-F5 etc.).
  251. -Log in as a normal user.
  252. -Then stop the displaymanager service:
  253.  
  254.     sudo service lxdm stop
  255.    
  256. -You will be prompted to log in again.
  257. -Run the unbind script from section 3.3.1 manually:
  258.  
  259.     sudo /usr/local/bin/unbind_nvidia_bind_vfio.sh
  260.    
  261. -Check for the display part of the 2nd GPU (change vendor and device id as in the unbind script to that of your card):
  262.    
  263.     lspci -nnk -d 10de:1b06
  264.    
  265. which for me returns:
  266.  
  267. 09:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] [10de:1b06] (rev a1)
  268.     Subsystem: ASUSTeK Computer Inc. GP102 [GeForce GTX 1080 Ti] [1043:85f1]
  269.     Kernel driver in use: vfio-pci      <---2nd GPU, OK!
  270.     Kernel modules: nvidia
  271. 41:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] [10de:1b06] (rev a1)
  272.     Subsystem: ASUSTeK Computer Inc. GP102 [GeForce GTX 1080 Ti] [1043:85f1]
  273.     Kernel driver in use: nvidia        <---1st GPU, OK!
  274.     Kernel modules: nvidia
  275.  
  276.  
  277. -Check for the audio part of the 2nd GPU:
  278.    
  279.     lspci -nnk -d 10de:10ef
  280.  
  281. which for me returns:
  282.  
  283. 09:00.1 Audio device [0403]: NVIDIA Corporation GP102 HDMI Audio Controller [10de:10ef] (rev a1)
  284.     Subsystem: ASUSTeK Computer Inc. GP102 HDMI Audio Controller [1043:85f1]
  285.     Kernel driver in use: vfio-pci      <---OK!
  286.     Kernel modules: snd_hda_intel
  287. 41:00.1 Audio device [0403]: NVIDIA Corporation GP102 HDMI Audio Controller [10de:10ef] (rev a1)
  288.     Subsystem: ASUSTeK Computer Inc. GP102 HDMI Audio Controller [1043:85f1]
  289.     Kernel driver in use: snd_hda_intel
  290.     Kernel modules: snd_hda_intel
  291.  
  292.  
  293. 'Kernel driver in use' should now be 'vfio-pci' for the passthrough card.
  294.  
  295.  
  296. -Restart the displaymanager service:
  297.  
  298.     sudo service lxdm start
  299.  
  300. -The display manager will start. Log in to a normal desktop environment.
  301.  
  302. At this moment, X is not using the 2nd GPU and a Qemu command to start the Windows VM (see further, Section 3.5)
  303. can be called and should work. Also, rebinding to the nvidia driver (see further, Section 3.4) should work while X is running.
  304. Unfortunately, the situation as it is now does not survive a reboot.
  305. As explained in the outline, Section 2, to avoid having to stop X and do this manual unbinding every time again,
  306. the unbinding script will be called by cron at boot. This is explained in the next section.
  307.  
  308.  
  309. 3.3.3 Make the script execute at boot:
  310. --------------------------------------
  311. -To make cron execute the unbind_nvidia_bind_vfio.sh script at boot, do in a terminal:
  312.  
  313.     sudo crontab -e
  314.  
  315. -Select editor like nano and add to the file:
  316.  
  317.     @reboot  /usr/local/bin/unbind_nvidia_bind_vfio.sh 2>&1 | /usr/bin/logger -t unbind_nvidia_bind_vfio
  318.  
  319. The last part causes the output and errors of the unbind script to be logged.
  320.  
  321. -Save, reboot, log in to desktop environment and check with:
  322.  
  323.     sudo cat /var/log/syslog | grep unbind_nvidia_bind_vfio
  324.  
  325. This returns:
  326.  
  327. Jun  3 19:50:42 home CRON[955]: (root) CMD (/usr/local/bin/unbind_nvidia_bind_vfio.sh 2>&1 | /usr/bin/logger -t unbind_nvidia_bind_vfio)
  328. Jun  3 19:50:42 home unbind_nvidia_bind_vfio: /usr/local/bin/unbind_nvidia_bind_vfio.sh: 11: echo: echo: I/O error
  329. Jun  3 19:50:43 home unbind_nvidia_bind_vfio: /usr/local/bin/unbind_nvidia_bind_vfio.sh: 17: echo: echo: I/O error
  330.  
  331. The I/O errors most likely come from the explicit 'bind' writes (lines 11 and 17 of the script): echoing the ID to new_id already makes vfio-pci claim the device, so the subsequent bind of an already-bound device fails. This seems harmless.
  332. Check that the 2nd GPU is bound to the VFIO driver:
  333.  
  334.     lspci -nnk -d 10de:1b06
  335.  
  336.  
  337. 09:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] [10de:1b06] (rev a1)
  338.     Subsystem: ASUSTeK Computer Inc. GP102 [GeForce GTX 1080 Ti] [1043:85f1]
  339.     Kernel driver in use: vfio-pci      <---2nd GPU, OK!
  340.     Kernel modules: nvidia
  341. 41:00.0 VGA compatible controller [0300]: NVIDIA Corporation GP102 [GeForce GTX 1080 Ti] [10de:1b06] (rev a1)
  342.     Subsystem: ASUSTeK Computer Inc. GP102 [GeForce GTX 1080 Ti] [1043:85f1]
  343.     Kernel driver in use: nvidia        <---1st GPU, OK!
  344.     Kernel modules: nvidia
  345.  
  346. And similarly for the audio part of the 2nd GPU:
  347.  
  348.     lspci -nnk -d 10de:10ef
  349.    
  350. 09:00.1 Audio device [0403]: NVIDIA Corporation GP102 HDMI Audio Controller [10de:10ef] (rev a1)
  351.     Subsystem: ASUSTeK Computer Inc. GP102 HDMI Audio Controller [1043:85f1]
  352.     Kernel driver in use: vfio-pci      <---OK!
  353.     Kernel modules: snd_hda_intel
  354. ...
  355.  
  356.  
  357. 3.4 Install the rebind script:
  358. ------------------------------
  359. Create the following script 'unbind_vfio_bind_nvidia.sh' in /usr/local/bin:
  360.  
  361. #!/bin/sh
  362. #place in /usr/local/bin
  363. #unbind vfio and rebind 2nd GPU to nvidia
  364.  
  365. # Unbind the GPU from vfio-pci
  366. echo -n "0000:09:00.0" > /sys/bus/pci/drivers/vfio-pci/unbind || echo "Failed to unbind gpu from vfio-pci"
  367. echo -n "0000:09:00.1" > /sys/bus/pci/drivers/vfio-pci/unbind || echo "Failed to unbind gpu-audio from vfio-pci"
  368.  
  369. # Remove GPU from vfio-pci
  370. echo -n "10de 1b06" > /sys/bus/pci/drivers/vfio-pci/remove_id
  371. echo -n "10de 10ef" > /sys/bus/pci/drivers/vfio-pci/remove_id
  372.  
  373. # Remove vfio driver (is this needed?)
  374. /sbin/modprobe -r vfio-pci
  375.  
  376. # Bind the GPU to its drivers
  377. echo -n "0000:09:00.0" > /sys/bus/pci/drivers/nvidia/bind || echo "Failed to bind nvidia"
  378. echo -n "0000:09:00.1" > /sys/bus/pci/drivers/snd_hda_intel/bind || echo "Failed to bind snd_hda_intel"
  379.  
  380. exit 0
  381.  
  382.  
  383. -Change permissions and if needed root ownership:
  384.  
  385.     sudo chmod 755 unbind_vfio_bind_nvidia.sh
  386.     sudo chown root unbind_vfio_bind_nvidia.sh
  387.  
  388. -Check that the script works:
  389.     sudo unbind_vfio_bind_nvidia.sh
  390.     lspci -nnk -d 10de:1b06
  391.     lspci -nnk -d 10de:10ef
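These lspci checks can also be scripted by reading the driver symlink from sysfs directly, which is handy if a script needs to branch on the current binding. A sketch (driver_of is a made-up helper name; SYSFS is overridable only for testing against a fake tree):

```shell
#!/bin/sh
# Print the kernel driver currently bound to a PCI function,
# or 'none' when nothing is bound.
SYSFS=${SYSFS:-/sys}

driver_of() {
    link="$SYSFS/bus/pci/devices/$1/driver"
    if [ -e "$link" ]; then
        basename "$(readlink -f "$link")"
    else
        echo "none"
    fi
}
```

After a successful rebind, driver_of 0000:09:00.0 should print 'nvidia' and driver_of 0000:09:00.1 'snd_hda_intel', matching the lspci checks above.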
  392.  
  393.  
  394. -Check that nvidia-smi can access the 2nd GPU again:
  395.    
  396.     nvidia-smi
  397.  
  398.  
  399.  
  400. Note that a GPU reset may still fail:
  401.     sudo nvidia-smi -i 0 --gpu-reset
  402. --> GPU 00000000:09:00.0 is currently in use by another process.
  403.  
  404.  
  405. Important note: nvidia-smi seems to renumber the GPUs according to their PCI address,
  406. so the 1st GPU that WAS number 1 (address 41:00.x) becomes GPU 0 as long as it is the only
  407. one bound to the nvidia driver. All terribly confusing :-/
  408.  
  409.  
  410. Before continuing, make sure the 2nd GPU is bound to vfio-pci, by calling the unbind_nvidia_bind_vfio.sh script.
  411.  
  412.  
  413. 3.5 Qemu Windows10 booting:
  414. ---------------------------
  415.  
  416.  
  417. 3.5.1 Preparations:
  418. -------------------
  419. -Download the windows10 .iso from:
  420.  
  421. https://www.microsoft.com/en-us/software-download/windows10ISO
  422.  
  423. I used Win10_1903_V1_EnglishInternational_x64.iso
  424.  
  425. Note:
  426. you can install the iso without a product key.
  427. Some features like choosing the wallpaper will be disabled.
  428. For more info, see: https://www.howtogeek.com/244678/you-dont-need-a-product-key-to-install-and-use-windows-10/
  429.  
  430. -Download the Virtio .iso drivers that windows10 will use to access the SSD in the virtual environment:
  431.  
  432. https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/latest-virtio/virtio-win.iso
  433.  
  434. I have virtio-win-0.1.171.iso
  435. More info: https://passthroughpo.st/disk-passthrough-explained/
  436.  
  437. -Follow the steps of "Configuring libvirt" from the Archwiki:
  438.  
  439. https://wiki.archlinux.org/index.php/PCI_passthrough_via_OVMF#Setting_up_an_OVMF-based_guest_VM
  440.  
  441. -Copy /usr/share/OVMF/OVMF_VARS.fd to (a directory in) your home. This enables storing changed virtualized UEFI boot parameters.
  442.  
  443. -Obviously, Qemu needs to be installed too:
  444.     qemu-system-x86_64 --version
  445.  
  446. QEMU emulator version 3.1.0 (Debian 1:3.1+dfsg-7)
  447. Copyright (c) 2003-2018 Fabrice Bellard and the QEMU Project developers
  448.  
  449. (alternatively, go the 'virt-manager route'. See the Archwiki)
  450.  
  451. 3.5.2 The Qemu script:
  452. ----------------------
  453. Create a script 'start_windows.sh' in your home directory and modify to your system specifics (see below):
  454.  
  455. #!/bin/sh
  456. #watch out: no spaces allowed between ',' and options of qemu!
  457.  
  458. #if not done so already, disable nvidia persistence mode
  459. nvidia-smi -i 0 -pm DISABLED
  460. nvidia-smi -i 1 -pm DISABLED
  461.  
  462. #if 2nd GPU is not bound to vfio-pci driver, call unbinding script
  463. if [ ! -e /sys/bus/pci/drivers/vfio-pci/0000:09:00.0 ]; then
  464.     /usr/local/bin/unbind_nvidia_bind_vfio.sh
  465.     sleep 1
  466. fi
  467.  
  468. export QEMU_AUDIO_DRV=alsa QEMU_AUDIO_TIMER_PERIOD=0
  469. qemu-system-x86_64 \
  470.     -machine q35,accel=kvm \
  471.     -enable-kvm -m 16384 -cpu host,kvm=off -smp 8,sockets=1,cores=8,threads=1 \
  472.     -vga none \
  473.     -nographic \
  474.     -rtc base=localtime,clock=vm \
  475.     -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
  476.     -device piix4-ide,bus=pcie.0,id=piix4-ide \
  477.     -device vfio-pci,host=09:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
  478.     -device vfio-pci,host=09:00.1,bus=pcie.0 \
  479.     -device nec-usb-xhci \
  480.     -device usb-host,hostbus=5,hostaddr=2 \
  481.     -drive file=/usr/share/OVMF/OVMF_CODE.fd,if=pflash,format=raw,readonly \
  482.     -drive file=/home/lars/windows_vm/OVMF_VARS.fd,if=pflash,format=raw,unit=1 \
  483.     -boot order=dc \
  484.     -drive if=virtio,id=disk0,cache=none,format=raw,file=/dev/sda \
  485.     -drive file=/home/lars/windows_vm/Win10_1903_V1_EnglishInternational_x64.iso,index=1,media=cdrom \
  486.     -drive file=/home/lars/windows_vm/virtio-win-0.1.171.iso,index=2,media=cdrom
  487.    
  488.  
  489. #rebind to nvidia driver
  490. /usr/local/bin/unbind_vfio_bind_nvidia.sh
  491.  
  492. #re-enable nvidia persistence mode otherwise nvidia-smi runs slow, slowing down conky and my desktop.
  493. sleep 1
  494. nvidia-smi -i 0 -pm ENABLED
  495. nvidia-smi -i 1 -pm ENABLED
  496.  
  497. exit 0
  498.  
  499. Note: jump ahead to Section 3.7.2 for the *final* script, including the USB passthrough needed for the Oculus Rift.
  500.  
  501. Qemu script modifications:
  502. --------------------------
  503. -Change the PCI address of the 2nd GPU (0000:09:00.0) to that of your 2nd GPU. Likewise with the audio part (0000:09:00.1).
  504.  
  505. -Change '/home/lars' to the location where you keep your scripts.
  506.  
  507. -'/dev/sda' is the SSD to install Windows10 on.
  508.   Find out the device name with 'lsblk'. Before the 1st boot, remove any existing partitions with gparted.
  509.  
  510. -Make sure you, the regular user, have permissions to read/write this disk.
  511.  
  512. -'hostbus=5,hostaddr=2' is the address of my Logitech K400+ wireless keyboard/touchpad that the windows10 guest will use.
  513.  Other (extra) usb ports can be passed through in the same manner.
  514.  Find out the hostbus and hostaddress with:
  515.  
  516.     lsusb
  517.    
  518. ...
  519. Bus 005 Device 002: ID 046d:c52b Logitech, Inc. Unifying Receiver
  520. ...
  521.  
  522. For more info, see: https://unix.stackexchange.com/questions/452934/can-i-pass-through-a-usb-port-via-qemu-command-line
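Extracting those two numbers from the lsusb line can be automated; a sketch (usb_host_opts is a made-up name; it simply reformats the 'Bus NNN Device NNN' fields into the plain decimal values used in the qemu script):

```shell
#!/bin/sh
# Convert one line of lsusb output into the usb-host options used
# in the qemu script above.
usb_host_opts() {
    # $1: a full lsusb line; field 2 is the bus, field 4 the device number
    echo "$1" | awk '{ printf "hostbus=%d,hostaddr=%d\n", $2, $4 }'
}

usb_host_opts "Bus 005 Device 002: ID 046d:c52b Logitech, Inc. Unifying Receiver"
# prints: hostbus=5,hostaddr=2
```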
  523.  
  524.  
  525. -Make the script executable:
  526.  
  527.     chmod 755 start_windows.sh
  528.  
  529.  
  530.  
  531. 3.5.3 First time booting and installing Windows10:
  532. --------------------------------------------------
  533. -Execute the script as root (how to avoid running as root?):
  534.  
  535.     sudo ./start_windows.sh
  536.  
  537. -After a few seconds, the monitor connected to the 2nd GPU will display OVMF UEFI booting.
  538.  
  539. -If you drop into the OVMF boot menu, type 'exit' and enter
  540.  
  541. -Select the boot menu and select one of the Qemu CDROMs to boot the windows iso
  542. First time, I got a blue screen saying 'system thread_ exception not handled'.
  543. This line, added to unbind_nvidia_bind_vfio.sh solved this:
  544.  
  545.     echo 1 > /sys/module/kvm/parameters/ignore_msrs
  546.  
  547. For more info, see: https://forum.level1techs.com/t/windows-10-1803-as-guest-with-qemu-kvm-bsod-under-install/127425/9
  548.  
  549. -If Windows asks for a key click "I don't have a key"
  550.  
  551. -Asked where to install, you will see an empty list. On the lower left, you can choose 'Load drivers'. Then select the 'CDROM' with the Virtio drivers and install.
  552.  The SSD where Windows will be installed upon, will then appear in the list. Select it and continue to install Windows.
  553.  For some nice screenshots about installing the Virtio drivers, see:
  554.  http://www.zeta.systems/blog/2018/07/03/Installing-Virtio-Drivers-In-Windows-On-KVM/
  555.  
  556. -I chose regular Windows 10 Home.
  557.  For differences between the versions, see:
  558.  https://answers.microsoft.com/en-us/windows/forum/windows_10-other_settings/whats-the-difference-between-windows-10-education/f05e202f-815a-47dc-a641-e3a85e974a0b
  559.  
  560. -Install Nvidia drivers (download and execute .exe from card manufacturer site). Reboot.
  561.  
  562. -(Specific for my Asus GPU) Install 'GPUTweakII' bloatware to control GPU overclocking and 'AURA_RGBLightingControl' for rainbow-unicorn-barf LEDs of the GPU.
  563.  Warning: rainbow-barf settings survive reboot.  
  564.  
  565. -Install the Unigine Heaven benchmark:
  566.  https://benchmark.unigine.com/heaven
  567.  so you look happy like Wendell:
  568.  https://www.youtube.com/watch?v=UD4BxGNShw8
  569.  
  570. If, when first starting Heaven, you get an error saying msvcp100.dll is not found, search for it and
  571. copy it to C:\Windows\System32 and C:\Windows\SysWOW64\, as explained here:
  572. https://www.reddit.com/r/Windows10/comments/3ulr79/msvcp100dll_missing_for_unigine_valley_benchmark/
  573.  
  574. Heaven benchmark score with resolution 1680x1050, antialiasing x8, details on Ultra: 3274.
  575.  
  576.  
  577. *** Yeey! You did it! ***
  578.  
  579. 3.6 Tweaks:
  580. -----------
  581.  
  582. 3.6.1 CPU 'pinning' with taskset:
  583. ---------------------------------
  584. -Enable NUMA (Non Uniform Memory Access) and then assign CPUs to the VM:
  585. This is specific to the AsRock Taichi X399 Motherboard. For others, the menus and/or settings may be different.
  586. At host boot, go into the UEFI menu by pressing F2.
  587. Then follow the menus: Advanced --> AMD CBS --> DF Common options --> set option 'Memory interleaving' to 'Channel'.
  588. -Check with:
  589.  
  590.     lstopo
  591.    
  592. The machine layout should resemble the image at: https://imgur.com/a/frnUq with two NUMA nodes.
  593.  
  594. -Find out to which NUMA node the passthrough GPU is connected:
  595.  
  596.     lstopo --verbose
  597.    
  598. ...
  599. NUMANode L#0 (P#0 local=16382396KB total=16382396KB)        <--- node 0
  600. ...
  601.         PCI 10de:1b06 (P#36864 busid=0000:09:00.0 class=0300(VGA) link=4.00GB/s PCIVendor="NVIDIA Corporation") "NVIDIA Corporation"  <--- the passthrough GPU
  602.             GPU L#6 "renderD128"
  603.             GPU L#7 "card0"
  604. ...
  605.  
  606. -From the drawing lstopo gave above, you can now find out which CPUs are in the same NUMA node as the passthrough GPU.
  607.  Note that lscpu can also give this information:
  608.  
  609.     lscpu
  610. ...
  611. NUMA node0 CPU(s):   0-7,16-23      <---where the passthrough GPU is at
  612. NUMA node1 CPU(s):   8-15,24-31
  613. ...
  614.  
  615.  
  616. We will now use the taskset command to 'pin' Qemu to those CPUs in the same NUMA node as the passthrough GPU.
  617. Qemu also needs some threads of its own, next to the ones that make up the VM.
  618. See: https://www.reddit.com/r/VFIO/comments/4vqnnv/qemu_command_line_cpu_pinning/
  619. -Adjust the Qemu script of Section 3.5.2 as follows:
  620.  
  621. ...
  622. taskset --cpu-list --all-tasks 0-7,16-23 qemu-system-x86_64 \
  623. ...
  624.  
  625.  
  626. - Alternatively, after the Qemu script is started, do from a terminal in the host:
  627.  
  628.     QEMUPID=$(pidof -s qemu-system-x86_64)
  629.     taskset --cpu-list --all-tasks --pid 0-7,16-23  $QEMUPID
  630.    
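Instead of reading the CPU list off the lstopo/lscpu output, it can be taken directly from the device's local_cpulist attribute in sysfs. A sketch (taskset_prefix_for is a made-up helper; local_cpulist is a standard PCI sysfs attribute; SYSFS is overridable only for testing on a fake tree):

```shell
#!/bin/sh
# Read the CPUs local to a PCI device's NUMA node and build the
# matching taskset prefix for the qemu command line.
SYSFS=${SYSFS:-/sys}

taskset_prefix_for() {
    # $1: PCI address of the passthrough GPU, e.g. 0000:09:00.0
    cpus=$(cat "$SYSFS/bus/pci/devices/$1/local_cpulist")
    echo "taskset --cpu-list --all-tasks $cpus"
}
```

On this system it should yield 'taskset --cpu-list --all-tasks 0-7,16-23' for 0000:09:00.0, matching the lscpu output above.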
  631.  
  632.  
  633. 3.7 Steps to install the Oculus Rift:
  634. -------------------------------------
  635. The Oculus Rift (CV1) needs 3 USB 3.0 ports: 2 for the position sensor thingies and
  636. one that goes to the helmet itself, along with an HDMI for video.
  637. I have a USB 3.0 controller in a single IOMMU group (see script in Section 3.1):
  638.  
  639. ...
  640. IOMMU Group 35 42:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller [1022:145c]
  641. ...
  642.  
  643. This controller is bound to xhci_hcd driver:
  644.    
  645.     ls -la /sys/bus/pci/devices/0000:42:00.3/
  646.  
  647. ...
  648. driver -> ../../../../bus/pci/drivers/xhci_hcd
  649. ...
  650.  
  651.  
  652. 3.7.1 Stuff that doesn't work (and why):
  653. ----------------------------------------
  654. (see 3.7.2 for what DOES work, this section is left in to show what I tried out)
  655. It seems simple to add all ports needed to the Qemu script:
  656.  
  657.  
  658.     -device nec-usb-xhci,id=xhci3,multifunction=on \
  659.     -device usb-host,bus=xhci3.0,port=1,vendorid=0x2833,productid=0x3031 \
  660.     -device usb-host,bus=xhci3.0,port=2,vendorid=0x2833,productid=0x0031 \
  661.     -device usb-host,bus=xhci3.0,port=3,vendorid=0x2833,productid=0x2031 \
  662.     -device nec-usb-xhci,id=xhci2,multifunction=on \
  663.     -device usb-host,bus=xhci2.0,port=1,vendorid=0x046d,productid=0xc52b \  <-- for keyboard
  664.     -device usb-host,bus=xhci2.0,port=2,vendorid=0x2833,productid=0x0211 \
  665.     -device usb-host,bus=xhci2.0,port=3,vendorid=0x2833,productid=0x0211 \
  666.  
  667. In the terminal that calls qemu, the windows guest-initiated-resets of the oculus devices give:
  668.  
  669.     libusb: error [_open_sysfs_attr] open /sys/bus/usb/devices/5-2.1/bConfigurationValue failed ret=-1 errno=2
  670.     libusb: error [_get_usbfs_fd] File doesn't exist, wait 10 ms and try again
  671.     libusb: error [_get_usbfs_fd] libusb couldn't open USB device /dev/bus/usb/005/046: No such file or directory
  672.     libusb: error [udev_hotplug_event] ignoring udev action bind
  673.     libusb: error [udev_hotplug_event] ignoring udev action bind
  674.  
  675.  
  676. However, even though this detects all parts of the Oculus (including the 2 sensors), it fails to update the Oculus
  677. firmware or to link the controllers: the guest resets the devices, the host passes the resets through,
  678. and udev then renumbers the devices, which confuses Qemu.
  679.  
  680.  
  681. After some googling:
  682.  
  683. https://www.reddit.com/r/VFIO/comments/97dhbw/qemu_w10_xbox_one_controller/
  684.  
  685. ---> Same problem here, the host xhci keep resetting the device to a new address.
  686. I've google around but only find people choose to pass the entire usb controller, which works,
  687. but not actually solving this problem (and I don't have any usb controller to spare).
  688.  
  689. --> https://patchwork.ozlabs.org/patch/1031919/
  690. With certain USB devices passed through via usb-host, a guest attempting to
  691. reset a usb-host device can trigger a reset loop that renders the USB device
  692. unusable. In my use case, the device was an iPhone XR that was passed through to
  693. a Mac OS X Mojave guest. Upon connecting the device, the following happens:
  694.  
  695. 1) Guest recognizes new device, sends reset to emulated USB host
  696. 2) QEMU's USB host sends reset to host kernel
  697. 3) Host kernel resets device
  698. 4) After reset, host kernel determines that some part of the device descriptor
  699. has changed ("device firmware changed" in dmesg), so host kernel decides to
  700. re-enumerate the device.
  701. 5) Re-enumeration causes QEMU to disconnect and reconnect the device in the
  702. guest.
  703. 6) goto 1)
  704.  
  705. Same kind of problem reported here:
  706. https://www.redhat.com/archives/vfio-users/2016-February/msg00034.html
  707.  
  708. So: here a reset loop is not initiated, but the Oculus firmware can't be updated, since the device disappears; only unplugging/replugging helps.
  709. The no_guest_reset option added by that patch was introduced in Qemu 4.0 (April 2019). Mine is 3.1 :-/
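For reference, on Qemu >= 4.0 the fix from that patch should make the device lines look something like this. Untested here (my Qemu is 3.1), and the exact property name is my reading of the patch, so treat it as a sketch:

```shell
# Untested sketch, Qemu >= 4.0 only: guest-reset=off should keep
# guest-initiated resets from being passed through to the host device.
-device usb-host,bus=xhci3.0,port=1,vendorid=0x2833,productid=0x3031,guest-reset=off \
```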
  710.  
  711.  
  712. 3.7.2 passthrough of entire USB controller:
  713. -------------------------------------------
  714. The advice is always the same: pass through the entire USB controller!
  715. -First check which one it is:
  716.  
  717.     ./checkiommugroups.sh | grep USB
  718.  
  719. IOMMU Group 14 01:00.0 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] X399 Series Chipset USB 3.1 xHCI Controller [1022:43ba] (rev 02)
  720. IOMMU Group 19 0a:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller [1022:145c]
  721. IOMMU Group 35 42:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller [1022:145c]   <--- this one!
  722.  
  723.  
  724.     ./show_iommu.sh
  725.    
  726. IOMMU group 35
  727. 42:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller [1022:145c]
  728.     Driver: xhci_hcd
  729.     Usb bus:
  730.         Bus 005 Device 049: ID 2833:1031        <---2833=Oculus stuff
  731.         Bus 005 Device 048: ID 2833:2031  
  732.         Bus 005 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
  733.     Usb bus:
  734.         Bus 006 Device 005: ID 2833:0211  
  735.         Bus 006 Device 006: ID 2833:0211  
  736.         Bus 006 Device 012: ID 2833:3031  
  737.         Bus 006 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
  738.  
  739. ...
  740.  
  741.  
  742. IOMMU group 19
  743.         0a:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller [1022:145c]
  744.     Driver: xhci_hcd
  745.     Usb bus:
  746.         Bus 003 Device 003: ID 25a7:fa23  
  747.         Bus 003 Device 005: ID 046d:c52b Logitech, Inc. Unifying Receiver   <----wireless keyboard for the VM
  748.         Bus 003 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
  749.     Usb bus:
  750.         Bus 004 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
  751.  
  752.  
  753.  
  754. So we need to pass through the USB controller in IOMMU group 35.
  755. In the same way as explained for the passthrough GPU, this entire controller can be bound to the VFIO driver:
  756.  
  757.     #!/bin/sh
  758.     #place in /usr/local/bin
  759.     #unbind usb controller in iommu group 35 and bind to vfio for passthrough
  760.  
  761.     #if not already done..
  762.     /sbin/modprobe vfio
  763.     /sbin/modprobe vfio_pci
  764.  
  765.     #
  766.     echo '0000:42:00.3' > /sys/bus/pci/devices/0000:42:00.3/driver/unbind
  767.     echo '1022 145c' > /sys/bus/pci/drivers/vfio-pci/new_id
  768.     echo '0000:42:00.3' > /sys/bus/pci/devices/0000:42:00.3/driver/bind
  769.     echo '1022 145c' > /sys/bus/pci/drivers/vfio-pci/remove_id
  770.  
  771.     sleep 1
  772.     #check driver now associated with the usb controller
  773.     lspci -nnk -d 1022:145c
  774.  
  775.     exit 0
  776.  
  777.  
  778. -Place the script in /usr/local/bin as 'unbind_usb_controller_bind_vfio.sh'.
  779. -Check with:
  780.  
  781.     lspci -nnk -d 1022:145c
  782.  
  783. 0a:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller [1022:145c]
  784.     Subsystem: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller [1022:d102]
  785.     Kernel driver in use: xhci_hcd
  786.     Kernel modules: xhci_pci
  787. 42:00.3 USB controller [0c03]: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller [1022:145c]
  788.     Subsystem: Advanced Micro Devices, Inc. [AMD] Family 17h (Models 00h-0fh) USB 3.0 Host Controller [1022:145c]
  789.     Kernel driver in use: vfio-pci      <---OK!
  790.     Kernel modules: xhci_pci
  791.  
  792.  
  793. -Now update the start_windows.sh script to the *grand total* of:
  794.  
  795.     #!/bin/sh
  796.     #watch out: no spaces allowed between ',' and options of qemu!
  797.  
  798.     #if not done so already, disable nvidia persistence mode
  799.     nvidia-smi -i 0 -pm DISABLED
  800.     nvidia-smi -i 1 -pm DISABLED
  801.  
  802.     #if 2nd GPU is not bound to vfio-pci driver, call unbinding script
  803.     if [ ! -e /sys/bus/pci/drivers/vfio-pci/0000:09:00.0 ]; then
  804.         /usr/local/bin/unbind_nvidia_bind_vfio.sh
  805.         sleep 1
  806.     fi
  807.  
  808.  
  809.     #bind usb controller to vfio
  810.     if [ ! -e /sys/bus/pci/drivers/vfio-pci/0000:42:00.3 ]; then
  811.         /usr/local/bin/unbind_usb_controller_bind_vfio.sh
  812.         sleep 1
  813.     fi
  814.  
  815.  
  816.     export QEMU_AUDIO_DRV=alsa QEMU_AUDIO_TIMER_PERIOD=0
  817.     taskset --cpu-list --all-tasks 0-7,16-23 qemu-system-x86_64 \
  818.         -machine q35,accel=kvm \
  819.         -enable-kvm -m 16384 \
  820.         -cpu host,kvm=off,check,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_vendor_id=whatever \
  821.         -smp 8,sockets=1,cores=8,threads=1 \
  822.         -vga none \
  823.         -nographic \
  824.         -rtc base=localtime,clock=vm \
  825.         -device ioh3420,bus=pcie.0,addr=1c.0,multifunction=on,port=1,chassis=1,id=root.1 \
  826.         -device piix4-ide,bus=pcie.0,id=piix4-ide \
  827.         -device vfio-pci,host=09:00.0,bus=root.1,addr=00.0,multifunction=on,x-vga=on \
  828.         -device vfio-pci,host=09:00.1,bus=pcie.0 \
  829.         -device vfio-pci,host=42:00.3,multifunction=on \
  830.         -device nec-usb-xhci,id=xhci2,multifunction=on \
  831.         -device usb-host,bus=xhci2.0,port=1,vendorid=0x046d,productid=0xc52b \
  832.         -drive file=/usr/share/OVMF/OVMF_CODE.fd,if=pflash,format=raw,readonly \
  833.         -drive file=/home/lars/windows_vm/OVMF_VARS.fd,if=pflash,format=raw,unit=1 \
  834.         -boot order=dc \
  835.         -drive if=virtio,id=disk0,cache=none,format=raw,file=/dev/sda \
  836.         -drive file=/home/lars/windows_vm/Win10_1903_V1_EnglishInternational_x64.iso,index=1,media=cdrom \
  837.         -drive file=/home/lars/windows_vm/virtio-win-0.1.171.iso,index=2,media=cdrom \
  838.        
  839.        
  840.     #rebind to nvidia driver
  841.     /usr/local/bin/unbind_vfio_bind_nvidia.sh
  842.  
  843.     #re-enable nvidia persistence mode, otherwise nvidia-smi runs slow, slowing down conky.
  844.     sleep 1
  845.     nvidia-smi -i 0 -pm ENABLED
  846.     nvidia-smi -i 1 -pm ENABLED
  847.  
  848.     exit 0
  849.  
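By the way, the two `[ ! -e /sys/bus/pci/drivers/vfio-pci/... ]` tests in the script only look for the device symlink under the driver directory. An equivalent check reads the device's driver symlink instead. This is my own sketch: the driver_of name and the SYSFS_ROOT variable are made up, the latter only so the demo can run against a mock sysfs tree instead of real hardware:

```shell
#!/bin/sh
# Sketch: report which driver a PCI device is currently bound to,
# by resolving /sys/bus/pci/devices/<dev>/driver.
# SYSFS_ROOT defaults to the real sysfs; overridable for the demo below.
driver_of() {
    dev="$1"
    link="${SYSFS_ROOT:-/sys}/bus/pci/devices/$dev/driver"
    if [ -L "$link" ]; then
        basename "$(readlink "$link")"
    else
        echo none
    fi
}

# demo on a mock sysfs tree, so no hardware is touched:
SYSFS_ROOT=$(mktemp -d)
mkdir -p "$SYSFS_ROOT/bus/pci/devices/0000:42:00.3" \
         "$SYSFS_ROOT/bus/pci/drivers/vfio-pci"
ln -s ../../../../bus/pci/drivers/vfio-pci \
      "$SYSFS_ROOT/bus/pci/devices/0000:42:00.3/driver"
driver_of 0000:42:00.3   # prints: vfio-pci
driver_of 0000:09:00.0   # prints: none
```

On the real system (with SYSFS_ROOT unset), 'driver_of 0000:42:00.3' should print xhci_hcd before the unbind script runs and vfio-pci after.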
  850.  
  851.  
  852. *** That's it! ***
  853. Make sure to put the wireless USB keyboard dongle in a port that belongs to the USB controller that is passed through.
  854. Wait! Don't we need a script to rebind the USB controller to xhci_pci?
  855. No. The line that says:
  856.    
  857.     /sbin/modprobe -r vfio-pci
  858.    
  859. in 'unbind_vfio_bind_nvidia.sh' unloads the vfio driver, and apparently that
  860. makes the USB controller automagically rebind to its original driver. Amazing, isn't it!
  861.  
  862. We're done. Passthrough with identical GPUs and Oculus Rift working *flawlessly*.
  863. Happy Robo-Recalling!!
  864.  
  865. 4. TODOs:
  866. --------
  867. Adapt so that sudo isn't necessary to start the VM,
  868. See: https://www.evonide.com/non-root-gpu-passthrough-setup/
  869.  
  870. V. Version history:
  871. -------------------
  872. 29/05/2019  Initial.
  873. 12/07/2019  Added -rtc base=localhost,clock=vm to qemu command line, as correct clock is needed to make RecRoom VR account.
  874.  
  875. R. Useful links in no particular order:
  876. ---------------------------------------
  877. https://www.reddit.com/r/VFIO/comments/8jreon/help_with_using_oculus_rift_in_windows_10_kvm_vm/
  878. https://turlucode.com/qemu-kvm-installing-windows-10-client/
  879. https://www.reddit.com/r/VFIO/comments/7avvwx/qemuaffinity_pin_qemu_kvm_cores_to_host_cpu_cores/
  880. https://imgur.com/a/frnUq
  881. https://forum.level1techs.com/t/enable-numa-on-threadripper/123544
  882. https://www.reddit.com/r/Amd/comments/6vrcq0/psa_threadripper_umanuma_setting_in_bios/
  883. https://devtalk.nvidia.com/default/topic/1016989/cuda-setup-and-installation/nvidia-smi-is-slow-and-hangs-after-sometime-with-1080ti/
  884. https://www.reddit.com/r/VFIO/comments/991qzz/solutions_for_bindingunbinding_gpu_from_host/
  885. https://wiki.debian.org/DebianTesting
  886. https://forum.level1techs.com/t/identical-gpu-passthrough-ubuntu/138843/14
  887. https://forum.level1techs.com/t/the-vfio-and-gpu-passthrough-beginners-resource/129897
  888. https://docs.nvidia.com/deploy/driver-persistence/index.html#persistence-daemon
  889. https://devtalk.nvidia.com/default/topic/1051170/cuda-setup-and-installation/nvidia-persistenced-failed-to-initialize-check-syslog-for-more-details-/
  890. https://www.linux-kvm.org/page/Virtio
  891. https://www.howtogeek.com/244678/you-dont-need-a-product-key-to-install-and-use-windows-10/
  892. https://passthroughpo.st/disk-passthrough-explained/
  893. https://www.reddit.com/r/Windows10/comments/3ulr79/msvcp100dll_missing_for_unigine_valley_benchmark/
  894. https://ritsch.io/2017/08/02/execute-script-at-linux-startup.html
  895. https://forum.level1techs.com/t/windows-10-1803-as-guest-with-qemu-kvm-bsod-under-install/127425/9
  896. https://forum.level1techs.com/t/gpu-passthrough-vfio-blue-screen/132808
  897. https://www.reddit.com/r/VFIO/comments/9pc0j7/dynamically_bindingunbinding_an_nvidia_card_from/
  898. https://gitlab.com/YuriAlek/vfio/blob/master/scripts/windows-basic.sh
  900. https://www.reddit.com/r/VFIO/comments/708uur/nvidia_switching_gpu_between_vm_and_host/
  901. https://www.reddit.com/r/VFIO/comments/8q9923/looking_for_tutorial_linux_kvm_qemu_ssd/
  902. https://dennisnotes.com/note/20180614-ubuntu-18.04-qemu-setup/
  903. https://unix.stackexchange.com/questions/452934/can-i-pass-through-a-usb-port-via-qemu-command-line
  904. https://www.reddit.com/r/VFIO/comments/4vqnnv/qemu_command_line_cpu_pinning/
  905. https://www.evonide.com/non-root-gpu-passthrough-setup/
  906.  
  907. ---------------------------------------------------------------------------------------------------------------------------------