iamdevnitesh / Single_GPU_Passthrough_Guide

A simple guide to passing your GPU through to a virtual machine


Single_GPU_Passthrough_Guide

Hello everyone! This guide will help you pass your NVIDIA/AMD discrete GPU through to a virtual machine.





Who is this for?

This guide is for anyone who wants to use Linux and Windows (or another OS) at the same time.



Requirements:

1. A discrete GPU (NVIDIA/AMD).

2. An integrated GPU (e.g. Intel integrated graphics).

3. A monitor - a single monitor is enough, but two monitors are more convenient.

4. Enough RAM to run both the host and the guest OS.

5. Any Linux distribution.

6. A downloaded 64-bit Windows 10 ISO.



My Setup:

1. Motherboard - MSI Z390 Gaming Edge AC

2. OS - Fedora 34



⚠️Warning !⚠️

1. Make sure you have enough CPU/RAM resources to split between the host and the virtual machine.

2. Laptops are not recommended, as we'll be switching the display cable between the motherboard and the GPU back and forth.

3. Passing an NVIDIA GPU through to a macOS virtual machine is not recommended, as macOS dropped NVIDIA support after High Sierra.
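Before going further, it is worth confirming that your CPU supports hardware virtualization at all. A quick check, assuming a Linux host (note that IOMMU support, VT-d/AMD-Vi, is separate and is usually toggled in the BIOS/UEFI):

```shell
# Check for hardware virtualization support in the CPU flags:
# "vmx" indicates Intel VT-x, "svm" indicates AMD-V.
if grep -qE 'vmx|svm' /proc/cpuinfo; then
    echo "CPU virtualization supported"
else
    echo "No VT-x/AMD-V flag found - enable virtualization in the BIOS/UEFI"
fi
```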



STEPS



Step 2 : POST INSTALLATION SETUP

  • Step 2.1 : Updating the System

    • After installing, open up the terminal and type

         sudo dnf update -y
      
    • After the update is complete, type

         sudo shutdown
      
      in the terminal and press Enter.







  • Step 2.4 : Updating the System

    After logging in to Fedora, open a terminal and update once again using the command given below:

      sudo dnf update -y
    

Step 3 : INSTALLING SCRIPTS

  • Step 3.1 : Enabling IOMMU

    • Follow the commands given below one by one :

      cd Downloads
      
      git clone https://github.com/iamdevnitesh/Single_GPU_Passthrough_Guide.git
      
      cd Single_GPU_Passthrough_Guide/
      
      chmod +x gpu_passthrough.sh
      
      sudo ./gpu_passthrough.sh
      
    • After running this command, some tools will get installed. After that, you'll see something like this:

    • It will also ask "Do you want to edit it?" as shown below. Press n and hit Enter.

       Complete!
      Creating backups
      Set Intel IOMMU On
      GRUB_CMDLINE_LINUX="rhgb quiet intel_iommu=on rd.driver.pre=vfio-pci kvm.ignore_msrs=1"
      
      Grub was modified to look like this: 
      GRUB_CMDLINE_LINUX="rhgb quiet intel_iommu=on rd.driver.pre=vfio-pci kvm.ignore_msrs=1"
      
      Do you want to edit it? y/n
      n
      Getting GPU passthrough scripts ready
      Updating grub and generating initramfs
      Generating grub configuration file ...
      Adding boot menu entry for UEFI Firmware Settings ...
      done
      
    • Now reboot the PC by copying the command below and pasting it in the terminal

      sudo reboot
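After the reboot, you can also confirm from the kernel log that the IOMMU was actually enabled (a quick sanity check; the exact wording varies by kernel and vendor):

```shell
# Look for IOMMU initialization messages in the kernel log.
# Intel systems typically print "DMAR: IOMMU enabled"; AMD prints "AMD-Vi".
dmesg 2>/dev/null | grep -i -e DMAR -e IOMMU -e AMD-Vi \
    || echo "No IOMMU messages found (try again with sudo dmesg)"
```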
      
  • Step 3.2 : Checking vfio passed devices

    • After the reboot is complete, open up a terminal and type

      lspci -k
      
    • This command will give output similar to this:

    • [niteshkumar@fedora ~]$ lspci -k
      00:00.0 Host bridge: Intel Corporation 8th Gen Core Processor Host      Bridge/DRAM Registers (rev 0a)
              DeviceName: Onboard - Other
              Subsystem: Micro-Star International Co., Ltd. [MSI] Device 7b17
              Kernel driver in use: skl_uncore
              Kernel modules: ie31200_edac
      00:01.0 PCI bridge: Intel Corporation 6th-10th Gen Core Processor PCIe Controller (x16) (rev 0a)
              Kernel driver in use: pcieport
      00:02.0 VGA compatible controller: Intel Corporation CoffeeLake-S GT2 [UHD Graphics 630]			//THIS SHOULD BE PRESENT WHICH WILL HELP INTEL GPU RUN ON LINUX
              DeviceName: Onboard - Video
              Subsystem: Micro-Star International Co., Ltd. [MSI] Device 7b17
              Kernel driver in use: i915
              Kernel modules: i915*
      00:08.0 System peripheral: Intel Corporation Xeon E3-1200 v5/v6 / E3-1500 v5 / 6th/7th/8th Gen Core Processor Gaussian Mixture Model
              DeviceName: Onboard - Other
              Subsystem: Micro-Star International Co., Ltd. [MSI] Device 7b17
      00:12.0 Signal processing controller: Intel Corporation Cannon Lake PCH Thermal Controller (rev 10)
              DeviceName: Onboard - Other
              Subsystem: Micro-Star International Co., Ltd. [MSI] Device 7b17
              Kernel driver in use: intel_pch_thermal
              Kernel modules: intel_pch_thermal
      00:14.0 USB controller: Intel Corporation Cannon Lake PCH USB 3.1 xHCI Host Controller (rev 10)
              DeviceName: Onboard - Other
              Subsystem: Micro-Star International Co., Ltd. [MSI] Device 7b17
              Kernel driver in use: xhci_hcd
      00:14.2 RAM memory: Intel Corporation Cannon Lake PCH Shared SRAM (rev 10)
              DeviceName: Onboard - Other
              Subsystem: Intel Corporation Device 7270
      00:14.3 Network controller: Intel Corporation Cannon Lake PCH CNVi WiFi (rev 10)
              DeviceName: Onboard - Ethernet
              Subsystem: Intel Corporation Device 02a4
              Kernel driver in use: iwlwifi
              Kernel modules: iwlwifi
      00:16.0 Communication controller: Intel Corporation Cannon Lake PCH HECI Controller (rev 10)
              DeviceName: Onboard - Other
              Subsystem: Micro-Star International Co., Ltd. [MSI] Device 7b17
              Kernel driver in use: mei_me
              Kernel modules: mei_me
      00:17.0 SATA controller: Intel Corporation Cannon Lake PCH SATA AHCI Controller (rev 10)
              DeviceName: Onboard - SATA
              Subsystem: Micro-Star International Co., Ltd. [MSI] Device 7b17
              Kernel driver in use: ahci
      00:1b.0 PCI bridge: Intel Corporation Cannon Lake PCH PCI Express Root Port #17 (rev f0)
              Kernel driver in use: pcieport
      00:1f.0 ISA bridge: Intel Corporation Z390 Chipset LPC/eSPI Controller (rev 10)
              DeviceName: Onboard - Other
              Subsystem: Micro-Star International Co., Ltd. [MSI] Device 7b17
      00:1f.3 Audio device: Intel Corporation Cannon Lake PCH cAVS (rev 10)
              DeviceName: Onboard - Sound
              Subsystem: Micro-Star International Co., Ltd. [MSI] Device 7b17
              Kernel driver in use: snd_hda_intel
              Kernel modules: snd_hda_intel, snd_soc_skl, snd_sof_pci_intel_cnl
      00:1f.4 SMBus: Intel Corporation Cannon Lake PCH SMBus Controller (rev 10)
              DeviceName: Onboard - Other
              Subsystem: Micro-Star International Co., Ltd. [MSI] Device 7b17
              Kernel driver in use: i801_smbus
              Kernel modules: i2c_i801
      00:1f.5 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH SPI Controller (rev 10)
              DeviceName: Onboard - Other
              Subsystem: Micro-Star International Co., Ltd. [MSI] Device 7b17
      00:1f.6 Ethernet controller: Intel Corporation Ethernet Connection (7) I219-V (rev 10)
              DeviceName: Onboard - Ethernet
              Subsystem: Micro-Star International Co., Ltd. [MSI] Device 7b17
              Kernel driver in use: e1000e
              Kernel modules: e1000e
      01:00.0 VGA compatible controller: NVIDIA Corporation TU106 [GeForce RTX 2070] (rev a1)		//THE NVIDIA CARD SHOULD BE vfio-pci in kernel driver in use
              Subsystem: Micro-Star International Co., Ltd. [MSI] Device 3733
              Kernel driver in use: vfio-pci
              Kernel modules: nouveau
      01:00.1 Audio device: NVIDIA Corporation TU106 High Definition Audio Controller (rev a1)	//Same for this nvidia card
              Subsystem: Micro-Star International Co., Ltd. [MSI] Device 3733
              Kernel driver in use: vfio-pci
              Kernel modules: snd_hda_intel
      01:00.2 USB controller: NVIDIA Corporation TU106 USB 3.1 Host Controller (rev a1)
              Subsystem: Micro-Star International Co., Ltd. [MSI] Device 3733
              Kernel driver in use: xhci_hcd
      01:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU106 USB Type-C UCSI Controller (rev a1)
              Subsystem: Micro-Star International Co., Ltd. [MSI] Device 3733
              Kernel driver in use: nvidia-gpu
              Kernel modules: i2c_nvidia_gpu
      02:00.0 Non-Volatile memory controller: Micron Technology Inc Device 5405
              Subsystem: Micron Technology Inc Device 0100
              Kernel driver in use: nvme
              Kernel modules: nvme
      
    • lspci -k is very helpful when you want to know the name of the kernel module handling a particular device.

    • If you look at ID 01:00.0, you can see the NVIDIA card and that it is being driven by vfio-pci.

    • Here are some important IDs. Check whether your devices have similar settings:

         00:02.0 // Check if the kernel driver in use and kernel modules
         01:00.0 // are same for your device IDs
         01:00.1
         01:00.2
         01:00.3
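To inspect a single device instead of scrolling through the full list, you can query it by address with `lspci -s`. A small sketch (substitute your own PCI address from the output above):

```shell
# Show one PCI device with vendor IDs and the driver currently bound to it.
pci_id="01:00.0"   # replace with your GPU's address
if lspci -nnk -s "$pci_id" | grep -q 'Kernel driver in use: vfio-pci'; then
    echo "$pci_id is bound to vfio-pci"
else
    echo "$pci_id is NOT bound to vfio-pci"
fi
```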
      
  • Step 3.3 : Checking IOMMU Groups

    • Now, we'll check whether the IOMMU groups are valid. In short, we'll see which devices we can pass to our virtual machine.

    • Copy the script given below and paste it in the terminal

      #!/bin/bash
      shopt -s nullglob
      for g in $(find /sys/kernel/iommu_groups/* -maxdepth 0 -type d | sort -V); do
          echo "IOMMU Group ${g##*/}:"
          for d in "$g"/devices/*; do
              echo -e "\t$(lspci -nns "${d##*/}")"
          done
      done
      
  • My output:

    • IOMMU Group 0:
          00:00.0 Host bridge [0600]: Intel Corporation 8th Gen Core Processor Host Bridge/DRAM Registers [8086:3ec2] (rev 0a)
      IOMMU Group 1:
          00:01.0 PCI bridge [0604]: Intel Corporation 6th-10th Gen Core Processor PCIe Controller (x16) [8086:1901] (rev 0a)
          01:00.0 VGA compatible controller [0300]: NVIDIA Corporation TU106 [GeForce RTX 2070] [10de:1f02] (rev a1)
          01:00.1 Audio device [0403]: NVIDIA Corporation TU106 High Definition Audio Controller [10de:10f9] (rev a1)
          01:00.2 USB controller [0c03]: NVIDIA Corporation TU106 USB 3.1 Host Controller [10de:1ada] (rev a1)
          01:00.3 Serial bus controller [0c80]: NVIDIA Corporation TU106 USB Type-C UCSI Controller [10de:1adb] (rev a1)
      IOMMU Group 2:
          00:02.0 VGA compatible controller [0300]: Intel Corporation CoffeeLake-S GT2 [UHD Graphics 630] [8086:3e98]
      IOMMU Group 3:
          00:08.0 System peripheral [0880]: Intel Corporation Xeon E3-1200 v5/v6 / E3-1500 v5 / 6th/7th/8th Gen Core Processor Gaussian Mixture Model [8086:1911]
      IOMMU Group 4:
          00:12.0 Signal processing controller [1180]: Intel Corporation Cannon Lake PCH Thermal Controller [8086:a379] (rev 10)
      IOMMU Group 5:
          00:14.0 USB controller [0c03]: Intel Corporation Cannon Lake PCH USB 3.1 xHCI Host Controller [8086:a36d] (rev 10)
          00:14.2 RAM memory [0500]: Intel Corporation Cannon Lake PCH Shared SRAM [8086:a36f] (rev 10)
      IOMMU Group 6:
          00:14.3 Network controller [0280]: Intel Corporation Cannon Lake PCH CNVi WiFi [8086:a370] (rev 10)
      IOMMU Group 7:
          00:16.0 Communication controller [0780]: Intel Corporation Cannon Lake PCH HECI Controller [8086:a360] (rev 10)
      IOMMU Group 8:
          00:17.0 SATA controller [0106]: Intel Corporation Cannon Lake PCH SATA AHCI Controller [8086:a352] (rev 10)
      IOMMU Group 9:
          00:1b.0 PCI bridge [0604]: Intel Corporation Cannon Lake PCH PCI Express Root Port #17 [8086:a340] (rev f0)
      IOMMU Group 10:
          00:1f.0 ISA bridge [0601]: Intel Corporation Z390 Chipset LPC/eSPI Controller [8086:a305] (rev 10)
          00:1f.3 Audio device [0403]: Intel Corporation Cannon Lake PCH cAVS [8086:a348] (rev 10)
          00:1f.4 SMBus [0c05]: Intel Corporation Cannon Lake PCH SMBus Controller [8086:a323] (rev 10)
          00:1f.5 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH SPI Controller [8086:a324] (rev 10)
          00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (7) I219-V [8086:15bc] (rev 10)
      IOMMU Group 11:
          02:00.0 Non-Volatile memory controller [0108]: Micron Technology Inc Device [1344:5405]
      
    • In the above output we can see the different IOMMU groups. For example:

      • Group 1: the NVIDIA GPU and its associated devices

      • Group 10: the onboard audio and Ethernet devices
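You can also look up the IOMMU group of a single device directly through sysfs (a sketch; adjust the PCI address for your GPU):

```shell
# Each PCI device exposes an "iommu_group" symlink pointing at
# /sys/kernel/iommu_groups/<N>; the basename of the target is the group number.
dev="0000:01:00.0"   # full address: PCI domain prefix + the ID shown by lspci
if [ -e "/sys/bus/pci/devices/$dev/iommu_group" ]; then
    group=$(basename "$(readlink "/sys/bus/pci/devices/$dev/iommu_group")")
    echo "$dev is in IOMMU group $group"
else
    echo "$dev not found or IOMMU disabled"
fi
```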

CREATING VIRTUAL MACHINE

  • Now, we'll create a virtual machine

i) Open the Virtual Machine Manager from the menu as shown here

  • It will ask for your password. Enter it.

ii) Create a new Virtual machine as shown here

  • On the Top left Click File -> New Virtual Machine.

iii) Select the locally installed ISO file as shown here

  • You'll get a window as shown in the above link. Select Local Install & Click Forward.

iv) You will reach a window as shown here. Click Browse.

v) Again, a new window will pop up as shown here. Select Browse Local, located at the bottom right of the window.

vi) Another window will pop up, opening the file manager. Browse to your Windows_10.iso file and click Open as shown here.

  • As you can see, mine is located in Downloads.

vii) Now you'll see a window where the virtual machine manager tries to detect the OS. If Windows 10 is somehow not detected, enter it manually as shown here.

  • Click on Forward.
  • A pop up will appear as shown here. Click on Yes.

viii) On the Next Window choose the amount of RAM and CPU cores you want to give to the Virtual Machine.

  • NOTE: Recommended: half the RAM and half the CPU cores.
  • For my setup I am giving 16GB RAM and 3 CPUs. Click the Forward button as shown here.
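To see what you can spare, check the host's total resources first (simple commands available on any Linux system):

```shell
# Number of CPU threads the host has.
nproc
# Total installed RAM (in kB) as reported by the kernel.
grep MemTotal /proc/meminfo
```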

ix) On the next window you will be asked to specify how much disk space you want to give your virtual machine.

  • I gave around 250GB of storage.
  • NOTE: Make sure you have enough space on your disk before allocating it.
  • After entering the details, click Forward as shown here.

x) On the next window select the option Customize Configuration Before Installation as shown here.

  • And click FINISH.

CUSTOMIZING VIRTUAL MACHINE

i) First of all you'll get to a window like this.

  • Change the firmware from BIOS to UEFI x86_64: /usr/share/edk2/ovmf/OVMF_CODE.fd as shown here and hit Apply.

ii) Then move to the CPUs Tab on Left and Change the topology if you want.

  • NOTE: Step ii) above is fully optional. You can leave it as it is if you want.
  • However, I changed my CPU topology to this. Remember, the topology for your CPU may vary.

WINDOWS INSTALLATION

NOTE: As soon as the black window pops up, click inside it. Otherwise you'll have to delete the virtual machine and create it again.

i) Click on Begin Installation after completing the Customization of Virtual Machine. As shown here in the top left corner.

ii) A black window will appear and ask you to press any key to continue.

  • Quickly press any key, and you'll be taken to the Windows 10 installation.

iii) If you are using this guide, you probably already know how to install Windows 10. So just install it, apply all the updates, and shut down Windows.

  • Then quit the Virtual Machine Manager.

PASSING THROUGH DEVICES

i) Open the virtual machine manager

  • Click on Edit -> Preferences like this.
  • And Enable XML Editing as shown here.

ii) Now Right click on the Win10 Virtual Machine and press Open.

A new window will pop up. In the top left, click the bulb icon and you'll be taken to the virtual machine's configuration.

  • In the overview tab click the XML as shown here.

  • The XML file will contain a lot of configuration. Don't change anything yet.

  • Now, copy the snippet given below and paste it as shown here.

    <vendor_id state='on' value='randomid'/>
    

iii) Now, copy the snippet given below and paste it as shown here.

      <kvm>
        <hidden state='on'/>
      </kvm>
    
  • Hit Apply at the bottom right.
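If you prefer the terminal, the same XML can be edited with `sudo virsh edit Win10` (assuming the VM is named Win10, as created above; the `<vendor_id>` line belongs inside the `<hyperv>` element and `<kvm>` inside `<features>`). You can then verify the edits were saved:

```shell
# Verify that the vendor_id and kvm-hidden snippets were saved to the domain XML.
# (Guarded so this is a no-op on machines without libvirt installed.)
if command -v virsh >/dev/null; then
    sudo virsh dumpxml Win10 | grep -E "vendor_id|<hidden"
fi
```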

iv) Next, we'll add our GPU and audio devices to the virtual machine.

  • Go back to step 3.3, copy the bash script, and run it in your terminal. My configuration put the NVIDIA devices in IOMMU Group 1 and the audio devices in IOMMU Group 10, so I'll add all the devices in Group 1.

  • My IOMMU Group 1 consists of:

    • 01:00.0 NVIDIA Corporation TU106 [GeForce RTX 2070]
    • 01:00.1 NVIDIA Corporation TU106 High Definition Audio Controller
    • 01:00.2 NVIDIA Corporation TU106 USB 3.1 Host Controller
    • 01:00.3 NVIDIA Corporation TU106 USB Type-C UCSI Controller
  • Add all the devices from the IOMMU group except the PCI bridge. The PCI bridge is meant to be left out.

  • Next, I'll add my audio devices which are in IOMMU Group 10.

    • 00:1f.0 ISA bridge [0601]: Intel Corporation Z390 Chipset LPC/eSPI Controller
    • 00:1f.3 Audio device [0403]: Intel Corporation Cannon Lake PCH cAVS
    • 00:1f.4 SMBus [0c05]: Intel Corporation Cannon Lake PCH SMBus Controller
    • 00:1f.5 Serial bus controller [0c80]: Intel Corporation Cannon Lake PCH SPI Controller
    • 00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (7) I219-V
  • I'll be adding all these devices.

  • To add the graphics/audio devices, just select Add Hardware at the bottom of the screen as shown here.

  • A new window will pop up. Select PCI Host Device from the left tab and select the devices by checking your IOMMU groups as shown here.

  • NOTE: If you get an error saying the group is not viable, it means something from that IOMMU group is missing.

    • For example, if I get "group 10 is not viable", it means there is some device from my IOMMU Group 10 that I either haven't added to the virtual machine or should remove in order for the virtual machine to work.


  • Then start the virtual machine and install the NVIDIA/AMD drivers.

  • After installing the GPU drivers remove the following hardware devices:

    • Display Spice
    • Channel Spice
    • Video QXL
  • by clicking on them in the left tab and then clicking the Remove button at the bottom right of the window.



FINAL NOTES

Now, run the VM and move your monitor's display cable from the motherboard to the GPU, then install all the drivers on Windows 10. You now have a Windows 10 machine running on Linux and utilising the full power of your hardware.

NOTE:

  1. When shutting down the virtual machine, you'll have to unplug the display cable from the GPU and plug it back into the motherboard, because the Intel iGPU renders your Linux desktop.

  2. If, after changing the display cable, you get video output but no mouse and keyboard, simply add them as USB Host Devices. Note that whenever the VM is running, you won't be able to use that mouse and keyboard on the host machine.
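To find the USB IDs of your keyboard and mouse for the USB Host Device entries, `lsusb` lists every attached device along with its vendor:product ID (the grep filter is just a convenience; device names vary):

```shell
# List USB devices; the "ID vvvv:pppp" pairs identify vendor and product.
# (Guarded so this is a no-op where usbutils is not installed.)
if command -v lsusb >/dev/null; then
    lsusb | grep -iE 'keyboard|mouse' || lsusb
fi
```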



ADVANTAGES:

  1. Run Windows/Any other OS inside a Linux machine.

  2. Utilise full potential of hardware.

  3. With a dual-monitor setup, both systems can be used side by side efficiently.



DISADVANTAGES:

  1. For single-monitor setups, changing the display cable from GPU to motherboard again and again is annoying.

  2. Not recommended for Lower-End Setups and Laptops.
