I’m thinking about starting a self-hosting setup, and my first thought was to install k8s (probably k3s) and containerise everything.

But most people on here seem to recommend virtualizing everything with Proxmox.

What are the benefits of using VMs/Proxmox over containers/k8s?

Or really, I’m more interested in the reverse: are there reasons not to just run everything with k8s as the base layer? Since it’s more relevant to my actual job, I’d lean towards ramping up on k8s unless there’s a compelling reason not to.

  • terribleplan@lemmy.nrd.li · 1 year ago

    If everything you want to run makes sense to do within k8s, it is perfectly reasonable to run k8s on some bare-metal OS. Some things lend themselves to certain ways of running them better than others; e.g. Home Assistant really does not like to run anywhere but a dedicated machine/VM (at least the last time I looked into it).

    Regardless of k8s, it may make sense to run some sort of virtualization layer just to make management easier. One panel you can use to access all of the machines in your k8s cluster at a console level can be pretty nice.

    Proxmox specifically has some built-in niceties with Gluster (which I’ve never used; I manage Gluster myself on bare metal) that could even be useful inside a k8s cluster for PVCs and the like.
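
    For context, k8s would consume that Gluster storage through PersistentVolumeClaims against a StorageClass. Purely as a sketch (the “glusterfs” class name is made up, and how you actually expose Gluster to the cluster, e.g. via a CSI driver, is up to you), a claim looks something like:

        # Hypothetical PVC against a Gluster-backed StorageClass
        apiVersion: v1
        kind: PersistentVolumeClaim
        metadata:
          name: app-data
        spec:
          accessModes:
            - ReadWriteMany            # shared storage like Gluster can be mounted by many pods
          storageClassName: glusterfs  # placeholder name, depends on your setup
          resources:
            requests:
              storage: 10Gi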

    If you are willing to get weird (and experimental), look into Rancher’s Harvester: it’s an HCI platform (similar to Proxmox or vSphere) that uses k8s as its base layer and even manages VMs through k8s APIs… I played with it a bit and it was really neat, but I opted for bare-metal Ubuntu for my lab install (and actually moved from rke2 to k3s to Nomad to docker compose with some custom management/clustering over the course of a few years).

      • terribleplan@lemmy.nrd.li · edited · 1 year ago

        Yeah, I think the problem comes if you don’t want to manually configure “Add-ons”. That feature is only supported on their OS or with “Supervised”. “Supervised” can’t itself run in a container AFAIK, only supports Debian 12, requires NetworkManager, states “The operating system is dedicated to running Home Assistant Supervised”, etc., etc.

        My point is they heavily push you to use a dedicated machine for HASS.

        • [email protected] · 1 year ago

          Yeah, I’ve been running “core” in docker-compose, not “supervised” or whatever that’s called.
          It’s been pretty flawless, tbh.
          It’s running in docker-compose in a VM in Proxmox.
          At first, it was mostly because I wanted to avoid their implementation of DNS, which was breaking my split-horizon DNS.

          Honestly, once you figure out docker-compose, it’s much easier to manage than the supervised add-on thing, although the learning curve is different.
          Just the fact that your add-ons don’t need to go down when you upgrade HASS makes this much easier.
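
          In case it helps, a bare-bones compose file for “core” looks roughly like this (the image tag, config path, and host networking are just what I’d reach for, adjust to taste):

              # Minimal Home Assistant "core" via docker-compose (a sketch, not gospel)
              services:
                homeassistant:
                  image: ghcr.io/home-assistant/home-assistant:stable
                  container_name: homeassistant
                  volumes:
                    - ./config:/config                # persistent HA config lives on the host
                    - /etc/localtime:/etc/localtime:ro
                  network_mode: host                  # simplest way to keep discovery/mDNS working
                  restart: unless-stopped

          Upgrading HASS is then just pulling the new image and recreating that one service; any other containers in the stack keep running.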

          I could technically run non-HASS-related containers on that Docker host, but the other important stuff is already in LXC containers in Proxmox.
          Not everything works in containers, so having the option to spin up a VM is neat.

          I’m also using PCI passthrough so my home theater/gaming VM has access to the GPU, and I need a VM for that.
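
          For anyone curious, on Proxmox that boils down to enabling IOMMU on the host and then handing the GPU to the VM. Roughly (the VM ID and PCI address here are just examples, yours will differ):

              # /etc/default/grub on the Proxmox host (Intel example; AMD would use amd_iommu=on)
              GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"
              # then run update-grub, reboot, and attach the GPU to the VM
              qm set 100 --hostpci0 0000:01:00.0,pcie=1,x-vga=1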

          Even if they only want to use k8s or Docker for now, having the option to create a VM is really convenient.