• kn100OP · 1 year ago

    You are - this is a server. It hosts approximately 20 LXC containers alongside a couple of VMs. One of the VMs runs Windows and gets a GPU and a couple of USB ports passed through. Another runs Linux, which in turn hosts Home Assistant, and gets a USB port so that it can use my Zigbee dongle, and so on.
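
    For anyone curious what that looks like in practice, here is a minimal sketch assuming a Proxmox host (a natural fit for a mixed LXC + VM setup like this). The VM IDs, PCI address, and USB vendor:product ID below are placeholders, not my actual values:

        # Pass a discrete GPU (hypothetical PCI address 01:00.0)
        # through to the Windows gaming VM (placeholder VM ID 101):
        qm set 101 -hostpci0 01:00.0,pcie=1,x-vga=1

        # Pass a USB Zigbee dongle through to the Home Assistant VM
        # (placeholder VM ID 102), matched by an example vendor:product ID:
        qm set 102 -usb0 host=10c4:ea60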

    I could feasibly game in a Linux VM instead, but I'd have to do the same passthrough chicanery - and the way I have this set up right now means I don't treat the gaming workload as anything special, it's just another VM. I can snapshot it, move it between storage devices, share hardware between it and other VMs, and so on.
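
    On recent Proxmox releases those day-to-day operations are one-liners; again a sketch with a placeholder VM ID, snapshot name, and storage pool name:

        # Snapshot the gaming VM before fiddling with it:
        qm snapshot 101 pre-driver-update

        # Roll back if the change goes badly:
        qm rollback 101 pre-driver-update

        # Move its disk to another storage pool (name is a placeholder):
        qm move-disk 101 scsi0 fast-nvme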

    Oh also, the second GPU that my machine has (an Intel iGPU) doesn't go to waste either! That gets passed through to yet another VM, which hosts Jellyfin and uses the iGPU component of the CPU to do video transcoding.

    Virtualising workloads like this is far nicer to manage than, for example, just having a Linux box with all these services running on it. What if the game crashes? In the VM world, I just restart the VM. What if one of the other services shits the bed and starts writing logs frantically (as has happened to me recently)? It fills the disk, and suddenly I can't game! In the VM world, each service gets its own portion of disk space and therefore can't eat it all up. You could feasibly solve all these problems with the setup you describe, but why bother, when virtualisation has such a small performance penalty and comes with a bunch of other benefits for free?
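
    If it helps picture the Jellyfin part: on Intel systems the iGPU commonly sits at PCI address 00:02.0, so it's the same passthrough trick again. A sketch with a placeholder VM ID; inside the guest you can verify hardware transcoding support with vainfo (from libva-utils):

        # Hand the Intel iGPU (commonly at 00:02.0) to the Jellyfin VM:
        qm set 103 -hostpci0 00:02.0

        # Then, inside the Jellyfin VM, confirm the decode/encode
        # profiles the iGPU exposes are visible:
        vainfo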

    • ChojinDSL@discuss.tchncs.de · 1 year ago

      I usually shy away from VMs because I have to dedicate a fixed amount of resources to each one, e.g. RAM.

      I tend to rely on Docker or bare-metal services on a server instead. But I don't use a server for gaming.
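
      To illustrate the trade-off being described: with Docker, resource caps are optional per-container flags rather than an allocation made up front. A hedged sketch, using the official Jellyfin image as a stand-in workload:

          # Run a service with optional caps; omit --memory/--cpus and the
          # container shares whatever the host has free, unlike a VM's
          # pre-allocated RAM:
          docker run -d --name jellyfin --memory=2g --cpus=2 jellyfin/jellyfin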