I currently have a PC running Windows 11 that my S/O and I use multi-seated with Aster Multiseat. However, we’re both equally sick of Windows and are interested in switching to Linux.

However, all the information I can find on multiseat in Linux consists of forum posts and unfinished wiki entries for Ubuntu and Fedora, and it all seems to be from around 2008–2012.

We’re about to upgrade our PC to support two RTX 3060s and a Ryzen 9 (of course, including the usual two monitors and sets of peripherals).

Can Linux (preferably Fedora, as it’s my favorite distro so far) easily support multiseating?

Will there be any performance issues using this method?

Is it possible to isolate applications per user? (Aster Multiseat doesn’t do this, so sometimes an application can detect another instance on the other user and refuses to start…)

Thanks in advance.

  • phx@lemmy.ca · 1 year ago

    So essentially it’s running a single computer as if it were two separate workstations?

    I could see an implementation similar to those running a VM with a dGPU for gaming. User A could run a login session against the primary GPU and host OS. User B could run a VM with several cores allocated and the secondary GPU dedicated to it. If any file resources in the primary OS need to be shared, KVM has ways to do that as well.
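
    On the shared-files point, one option (a sketch, not the only way) is libvirt’s virtiofs export of a host directory into the guest. The host path below is a placeholder, and virtiofs also requires shared memory backing (e.g. memfd) configured for the VM:

    ```
    <filesystem type='mount' accessmode='passthrough'>
      <driver type='virtiofs'/>
      <source dir='/srv/shared'/>   <!-- placeholder host path -->
      <target dir='shared'/>        <!-- mount tag used inside the guest -->
    </filesystem>
    ```

    Inside a Linux guest this mounts with `mount -t virtiofs shared /mnt/shared`; a Windows guest needs the virtio-win virtiofs driver.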

    • methodicalaspect@midwest.social · 1 year ago

      Not entirely sure why this reply is being panned (was at -6 when I first saw it).

      OP is in the process of upgrading their PC to a Ryzen 9. If we assume this Ryzen 9 is on the AM5 platform, the CPU comes equipped with an iGPU, meaning the RTX 3060s are no longer needed by the bare metal. So, installing a stable, minimal point-release OS as a base would minimize resource utilization on the hardware side. This could be something like Debian Bookworm or Proxmox VE with the no-subscription repo enabled. There’s no need for the NVIDIA GPUs to be supported by the bare-metal OS.

      Once the base OS is installed, the VMs can be created, and the GPUs and peripherals can be passed through. This step effectively removes the devices from the host OS – they don’t show up in lsusb or lspci anymore – and “gives” them to the VMs when they start. You get pretty close to native performance with setups of this nature, to the point that users have set up Windows 10/11 VMs in this way to play Cyberpunk 2077 on RTX 4090s with all the eye candy, including ray reconstruction.
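
      As a rough sketch of that binding step on a Debian-style host (the PCI vendor:device IDs below are placeholders; pull the real ones for the passed-through 3060 and its audio function from `lspci -nn`):

      ```
      # /etc/modprobe.d/vfio.conf — have vfio-pci claim the GPU before the nvidia driver can
      options vfio-pci ids=10de:xxxx,10de:yyyy

      # /etc/default/grub — enable the IOMMU in passthrough mode
      GRUB_CMDLINE_LINUX_DEFAULT="quiet amd_iommu=on iommu=pt"
      ```

      Then regenerate the boot config and initramfs (`update-grub` and `update-initramfs -u` on Debian) and reboot before attaching the device to the VM.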

      Downsides:

      • Three operating systems to maintain: bare metal, yours, and your partner’s.
      • Two sets of applications/games to maintain: yours and your partner’s.
      • May need to edit VM configs somewhat regularly to stay ahead of anti-cheat measures targeted at users of VMs.
      • Performance is not identical to bare metal, but is pretty close.
      • VM storage is isolated, so file sharing requires additional setup.

      Upsides:

      • If you don’t know a lot about Linux, you’ll know a bunch more when you’re done with this.
      • Once you get the setup ironed out, it won’t need to change much going forward.
      • Each VM’s memory space is isolated, so applications won’t “step on each other” – that is, you can both run the same application or game simultaneously.
      • Each user can run their own distro, or even their own OS if they wish. You can run Fedora and your partner can run Mint, or even Windows if they really, really want to. This includes Windows 11 as you can pass an emulated TPM through to meet the hardware requirements.
      • Host OS can be managed via web interface (cockpit + cockpit-machines) or GUI application (virt-manager).
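
      On the Windows 11 TPM point, libvirt attaches an emulated TPM 2.0 backed by the swtpm package on the host; the relevant fragment of the VM’s domain XML looks like this (virt-manager can also add it through its UI):

      ```
      <!-- inside the <devices> element of the domain XML -->
      <tpm model='tpm-crb'>
        <backend type='emulator' version='2.0'/>
      </tpm>
      ```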

      It’s not exactly what OP is looking for, but it’s definitely a valid approach to solving the problem.

      • Norah - She/They@lemmy.blahaj.zone · 1 year ago

        I came to the comment section to recommend Proxmox or another hypervisor as well. If it was a system with just one GPU, I wouldn’t, as splitting it between two VMs can be difficult. But, most of the time having two GPUs under one OS can be a lot worse too though. I think it’s definitely the cleaner & easier way to go. One caveat I’ll add is that resources are more strictly assigned to each seat, so memory & cpu can’t be sent to who needs it more as readily. Another positive though is that it would be super simple to create a third VM with a small amount of resources for running a small self-hosted server of some kind on the same box.