I can’t even figure out how to tell if it’s supported or not. If it is supported, I can’t figure out how to enable it. If it is enabled, idk where I should be seeing it in proxmox!

Can anyone point me in the right direction?

  • AlphaAutist@lemmy.world · 8 months ago

    It looks like it should be possible, as both your CPU and motherboard support Intel VT-d:

    https://ark.intel.com/content/www/us/en/ark/products/236781/intel-core-i7-processor-14700-33m-cache-up-to-5-40-ghz.html

    https://download.asrock.com/Manual/Z690 Extreme.pdf

    PCIe passthrough isn’t enabled by default in Proxmox and requires some manual changes to the bootloader (GRUB or systemd-boot) as well as loading some kernel modules. You may also need to enable VT-d in your BIOS. You can read Proxmox’s guide for enabling PCIe passthrough here:

    https://pve.proxmox.com/wiki/PCI(e)_Passthrough
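
    Once you think it’s enabled, a quick way to check from the Proxmox shell (a standard check, not specific to this board; Intel systems report DMAR, AMD systems report AMD-Vi):

    dmesg | grep -e DMAR -e IOMMU

    If VT-d is active you should see a line like "DMAR: IOMMU enabled" in the output.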

    • wildbus8979 · 8 months ago

      The motherboard needs to support IOMMU, not VT-d

        • wildbus8979 · 8 months ago

          Yes, I’m sure. They are related, and you need VT-d for IOMMU, but not all motherboards isolate all the PCIe devices separately. Server/enterprise boards always do, but consumer-grade stuff can be hit or miss. Maybe it’s a little better with more recent hardware though, I haven’t checked in a couple of gens.

      • nemanin@lemmy.world (OP) · 8 months ago

        Ok. So they are different?

        How do I tell which motherboards support IOMMU?

        I can’t find it as a filter or search option on any websites…?

        • wildbus8979 · 8 months ago

          Yes, they are different. VT-d is purely a function of the CPU (past the BIOS option to enable it).

          First you will want to look at the output of acpidump | egrep "DMAR|IVRS", then you will also want to verify that the IOMMU groups don’t group your GFX with something that won’t be passed through, using something like: https://gist.github.com/r15ch13/ba2d738985fce8990a4e9f32d07c6ada
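
          (acpidump isn’t installed by default on Proxmox/Debian; it ships in the acpica-tools package, so you’ll likely need to install that first:)

          apt install acpica-tools
          acpidump | egrep "DMAR|IVRS"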

            • wildbus8979 · 8 months ago

              Run those two commands in the command line and post the result here.
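
              (The gist essentially just loops over sysfs printing each PCI device with its IOMMU group; a minimal equivalent sketch you can paste directly, assuming the standard /sys/kernel/iommu_groups layout:)

              for d in /sys/kernel/iommu_groups/*/devices/*; do
                  n=${d#*/iommu_groups/*}; n=${n%%/*}
                  printf 'IOMMU group %s ' "$n"
                  lspci -nns "${d##*/}"
              done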

              • nemanin@lemmy.world (OP) · 6 months ago

                Sorry this took so long… you know, life. Trying that command all together, I get this response: -bash: acpidump: command not found

                Trying just egrep “DMAR|IVRS” (in case they are two commands) seems to hang the terminal session.

                I tried following a guide to enable PCIe passthrough and got this. One important thing: there is no discrete GPU at the moment, I’m trying to pass through an HBA…

                root@prox:~# dmesg | grep -e IOMMU
                [ 0.100411] DMAR: IOMMU enabled
                [ 0.254862] DMAR-IR: IOAPIC id 2 under DRHD base 0xfed91000 IOMMU 1
                [ 0.629143] pci 0000:00:02.0: DMAR: Skip IOMMU disabling for graphics
                [ 0.713978] DMAR: IOMMU feature fl1gp_support inconsistent
                [ 0.713979] DMAR: IOMMU feature pgsel_inv inconsistent
                [ 0.713980] DMAR: IOMMU feature nwfs inconsistent
                [ 0.713981] DMAR: IOMMU feature dit inconsistent
                [ 0.713982] DMAR: IOMMU feature sc_support inconsistent
                [ 0.713983] DMAR: IOMMU feature dev_iotlb_support inconsistent

              • nemanin@lemmy.world (OP) · 6 months ago

                This may also help, my HBA is there:

                root@prox:~# for d in /sys/kernel/iommu_groups/*/devices/*; do n=${d#*/iommu_groups/*}; n=${n%%/*}; printf 'IOMMU group %s ' "$n"; lspci -nns "${d##*/}"; done
                IOMMU group 0 00:02.0 VGA compatible controller [0300]: Intel Corporation Raptor Lake-S GT1 [UHD Graphics 770] [8086:a780] (rev 04)

                IOMMU group 10 00:1f.0 ISA bridge [0601]: Intel Corporation Z690 Chipset LPC/eSPI Controller [8086:7a84] (rev 11)

                IOMMU group 10 00:1f.3 Audio device [0403]: Intel Corporation Alder Lake-S HD Audio Controller [8086:7ad0] (rev 11)

                IOMMU group 10 00:1f.4 SMBus [0c05]: Intel Corporation Alder Lake-S PCH SMBus Controller [8086:7aa3] (rev 11)

                IOMMU group 10 00:1f.5 Serial bus controller [0c80]: Intel Corporation Alder Lake-S PCH SPI Controller [8086:7aa4] (rev 11)

                IOMMU group 10 00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (17) I219-V [8086:1a1d] (rev 11)

                IOMMU group 11 01:00.0 Non-Volatile memory controller [0108]: Sandisk Corp Western Digital WD Black SN850X NVMe SSD [15b7:5030] (rev 01)

                IOMMU group 12 02:00.0 Ethernet controller [0200]: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller [10ec:8125] (rev 05)

                IOMMU group 13 03:00.0 Non-Volatile memory controller [0108]: Micron/Crucial Technology Device [c0a9:5415] (rev 01)

                IOMMU group 14 04:00.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8724 24-Lane, 6-Port PCI Express Gen 3 (8 GT/s) Switch, 19 x 19mm FCBGA [10b5:8724] (rev ca)

                IOMMU group 15 05:00.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8724 24-Lane, 6-Port PCI Express Gen 3 (8 GT/s) Switch, 19 x 19mm FCBGA [10b5:8724] (rev ca)

                IOMMU group 16 05:08.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8724 24-Lane, 6-Port PCI Express Gen 3 (8 GT/s) Switch, 19 x 19mm FCBGA [10b5:8724] (rev ca)

                IOMMU group 17 05:09.0 PCI bridge [0604]: PLX Technology, Inc. PEX 8724 24-Lane, 6-Port PCI Express Gen 3 (8 GT/s) Switch, 19 x 19mm FCBGA [10b5:8724] (rev ca)

                **IOMMU group 18 06:00.0 Serial Attached SCSI controller [0107]: Broadcom / LSI SAS3008 PCI-Express Fusion-MPT SAS-3 [1000:0097] (rev 02)**

                **IOMMU group 19 08:00.0 Serial Attached SCSI controller [0107]: Broadcom / LSI SAS3008 PCI-Express Fusion-MPT SAS-3 [1000:0097] (rev 02)**

                IOMMU group 1 00:00.0 Host bridge [0600]: Intel Corporation Device [8086:a740] (rev 01)

                IOMMU group 2 00:14.0 USB controller [0c03]: Intel Corporation Alder Lake-S PCH USB 3.2 Gen 2x2 XHCI Controller [8086:7ae0] (rev 11)

                IOMMU group 2 00:14.2 RAM memory [0500]: Intel Corporation Alder Lake-S PCH Shared SRAM [8086:7aa7] (rev 11)

                IOMMU group 3 00:15.0 Serial bus controller [0c80]: Intel Corporation Alder Lake-S PCH Serial IO I2C Controller #0 [8086:7acc] (rev 11)

                IOMMU group 4 00:16.0 Communication controller [0780]: Intel Corporation Alder Lake-S PCH HECI Controller #1 [8086:7ae8] (rev 11)

                IOMMU group 5 00:17.0 SATA controller [0106]: Intel Corporation Alder Lake-S PCH SATA Controller [AHCI Mode] [8086:7ae2] (rev 11)

                IOMMU group 6 00:1a.0 PCI bridge [0604]: Intel Corporation Alder Lake-S PCH PCI Express Root Port #25 [8086:7ac8] (rev 11)

                IOMMU group 7 00:1c.0 PCI bridge [0604]: Intel Corporation Alder Lake-S PCH PCI Express Root Port #2 [8086:7ab9] (rev 11)

                IOMMU group 8 00:1c.4 PCI bridge [0604]: Intel Corporation Alder Lake-S PCH PCI Express Root Port #5 [8086:7abc] (rev 11)

                IOMMU group 9 00:1d.0 PCI bridge [0604]: Intel Corporation Alder Lake-S PCH PCI Express Root Port #9 [8086:7ab0] (rev 11)

                • wildbus8979 · edited · 6 months ago

                  You should be good to go. Make sure the vfio modules are loaded via modules-load.d:

                  vfio
                  vfio_iommu_type1
                  vfio_pci
                  vfio_virqfd
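
                  (A sketch of where those go: a file under /etc/modules-load.d/, one module name per line; the filename below is just an example, and /etc/modules also works on Proxmox. Note that on kernels 6.2 and newer, vfio_virqfd has been merged into the core vfio module and no longer exists separately.)

                  # /etc/modules-load.d/vfio.conf (example filename)
                  vfio
                  vfio_iommu_type1
                  vfio_pci
                  # omit vfio_virqfd on kernels >= 6.2; it is built into vfio there
                  vfio_virqfd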
                  

                  Make sure the module options are set correctly and the device’s host kernel module is blacklisted in /etc/modprobe.d/vfio.conf:

                  options vfio-pci ids=1000:0097
                  blacklist MODULE_NAME
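
                  (For this HBA, the LSI SAS3008 [1000:0097], the in-tree host driver is mpt3sas, so a plausible version of that file is the sketch below. Blacklisting mpt3sas detaches every SAS3008 in the box from the host, which seems intended here since both HBA functions are being passed through. Rebuild the initramfs afterwards with update-initramfs -u -k all and reboot.)

                  # /etc/modprobe.d/vfio.conf (sketch; mpt3sas is the stock SAS3008 driver)
                  options vfio-pci ids=1000:0097
                  blacklist mpt3sas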
                  

                  Make sure IOMMU is enabled in your kernel command line (e.g. via GRUB): intel_iommu=on iommu=pt
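
                  (Where that lives depends on the bootloader; a sketch of both common Proxmox cases:)

                  # GRUB: edit /etc/default/grub, then run update-grub
                  GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

                  # systemd-boot (ZFS root, managed by proxmox-boot-tool):
                  # append the options to the single line in /etc/kernel/cmdline,
                  # then run: proxmox-boot-tool refresh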

                  This is probably not complete, but it should get you pretty far toward being able to add the PCI device in the hardware config of your VM.
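
                  (That last step is in the web UI under the VM’s Hardware tab, Add > PCI Device, or from the CLI; the VM ID 100 and the hostpci slot numbers below are placeholders:)

                  qm set 100 -hostpci0 0000:06:00.0
                  qm set 100 -hostpci1 0000:08:00.0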

        • Possibly linux@lemmy.zip · 8 months ago

          It’s your CPU, and yes, it does support it, as virtually all Intel CPUs made within the last few years have support.