I have been having a few issues recently, and I can’t quite figure out what’s causing them. My setup:

  • Gigabit WAN; regular speed tests show 800+ Mbps up and down.
  • OPNsense router VM on Proxmox, running on a Lenovo M920x with an Intel dual-port 10GbE card installed.
  • Sodola 10GbE switch
  • TrueNAS server (bare metal) with 10GbE, serving the media files over NFS from a ZFS mirror.
  • Jellyfin LXC
  • Debian LXC running the arr stack with qBittorrent
  • Nvidia Shield with Ethernet

First issue: extremely slow downloads in qBittorrent. Even an Ubuntu ISO with hundreds of seeders will sit around 1 MiB/s; media downloads with ~10 seeders sit around 200 KiB/s. This all runs through Gluetun and ProtonVPN WireGuard, with port forwarding enabled and functioning.

Second issue: if I am downloading anything in qBittorrent and attempt to play a 4K remux on Jellyfin, it buffers constantly. If I stop all downloads, the movie immediately plays without issue. 1080p files always play fine.

I tried spinning up a new LXC with qBittorrent, and I can download Ubuntu ISOs at 30+ MiB/s when writing to local storage rather than over NFS.

Any idea what could be causing this? Is this a read/write issue on my TrueNAS server? A networking issue making NFS slow? I’ve run iperf to the TrueNAS box and get 9+ Gbps.
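Since iperf already rules out raw network throughput, one way to separate the remaining layers is to write to the share directly, bypassing qBittorrent entirely. A rough sketch, assuming the share is mounted at /mnt/media and the TrueNAS box is reachable at 192.168.1.10 (both placeholders for this setup’s actual values):

```shell
# Raw network path (already confirmed at 9+ Gbps here):
iperf3 -c 192.168.1.10

# Sequential write straight to the NFS mount, no torrent client involved:
dd if=/dev/zero of=/mnt/media/testfile bs=1M count=4096 oflag=direct status=progress

# Small random writes, closer to a torrent's I/O pattern (requires fio):
fio --name=nfs-randwrite --directory=/mnt/media --rw=randwrite \
    --bs=16k --size=1G --direct=1 --group_reporting

# Clean up the test file afterwards:
rm /mnt/media/testfile
```

If the sequential dd is fast but the fio random-write numbers collapse toward the same ~1 MiB/s range, the bottleneck is small writes over NFS rather than the pool or the network.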

  • just_another_person@lemmy.world · 5 days ago
    Bad RAM wouldn’t present like this. A DDR5 board would more than likely have caught it with POST tests before you ever got past boot, or you’d have thrown a kernel exception by now.

    I saw you mentioned that a new LXC container didn’t have the traffic problem, so this is almost certainly a configuration issue somewhere.

    • monty33@lemmy.ml (OP) · 3 days ago

      OK so I have done some additional testing:

      • Memtest passed.
      • Added the NFS share to the new qBittorrent LXC, and the download speed dropped to match my primary instance. So I believe this means it is related to the NFS share.
      • Connected the NAS to a different switch. No change.
      • Tried connecting to the NFS share through a different NIC in TrueNAS. No change.
      • Migrated the qBittorrent LXC to another Proxmox node. No change.
      • Created a new NFS share on a different pool on TrueNAS and made that the download directory. No change.

      So I believe I have ruled out memory, NIC, data pool, and switch issues.
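      One layer the tests above don’t cover is the NFS mount options and the ZFS sync behavior of the dataset backing the share; a dataset forcing synchronous writes can throttle torrent-style small writes while iperf and sequential transfers still look perfect. A diagnostic sketch ("tank/media" is a placeholder dataset name):

      ```shell
      # On the client (Proxmox host or LXC): inspect the mount options
      # actually negotiated for the share (vers, rsize/wsize, proto):
      nfsstat -m
      grep nfs /proc/mounts

      # On TrueNAS: check the sync and recordsize properties of the
      # dataset. sync=always turns every write into a synchronous write.
      zfs get sync,recordsize tank/media
      ```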

      The problem is I don’t know exactly when this started.

      I did change out the motherboard on the TrueNAS box: I installed the existing NVMe drives into the new motherboard and booted off of them, rather than doing a fresh TrueNAS install and restoring a backup. Could that be the issue?

      Shortly after the motherboard change, I upgraded to Electric Eel.

        • monty33@lemmy.ml (OP) · 3 days ago

          I tested that and I get full speeds, upwards of 40–60 Mbps, compared with the 1 Mbps I get when downloading to the NFS share.

            • monty33@lemmy.ml (OP) · 3 days ago

              Yes, I’m pretty sure I’ve narrowed it down to an issue with NFS shares from TrueNAS. What I can’t figure out is how to fix it. I may do a backup, reinstall TrueNAS, import the backup, and see if that fixes it. I’m thinking it’s potentially an issue from reusing my old installation with the new motherboard, processor, and corresponding hardware.
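              Before a full reinstall, it may be worth temporarily toggling the dataset’s sync property to rule sync-write behavior in or out; it is a reversible one-line test, not a fix. A sketch ("tank/downloads" is a placeholder for the dataset backing the download share):

              ```shell
              # Record the current value first:
              zfs get sync tank/downloads

              # Temporarily disable sync writes (test only -- trades
              # crash safety for speed, so revert when done):
              zfs set sync=disabled tank/downloads

              # ...rerun a download and watch the speed, then revert:
              zfs set sync=standard tank/downloads
              ```

              If speeds jump with sync disabled, the problem is synchronous writes over NFS rather than the OS install itself.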

    • monty33@lemmy.ml (OP) · 5 days ago

      Good points. I will finish the memtest that’s running, if only to have something ruled out. After it finishes I will try attaching the NFS share to the new qBittorrent LXC and see if I get the same slow download speeds.