Current setup:

  • one giant docker compose file
  • Caddy for TLS termination
  • only exposed port is Caddy

I’ve been trying out podman, and I got a new service (seafile) running by exporting it with podman generate kube so I can run it w/ podman kube play. My understanding is that the “podman way” is to use quadlets, i.e. .container, .network, etc. files managed by systemd, so I tried podlet podman kube play to generate a systemd-compatible unit, but it just spat out a .kube file.
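
For context, roughly what I ran (seafile is just my pod name, file names are whatever I picked):

  # export the running setup to kube yaml
  podman generate kube seafile > seafile.yaml
  # run it from that yaml
  podman kube play seafile.yaml
  # ask podlet for a quadlet; this is what spat out a .kube unit
  podlet podman kube play seafile.yaml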

Since I’m just starting out, it wouldn’t be a ton of work to convert to separate unit files, or I can continue with the .kube file way. I’m just not sure which to do.

Here’s what I’d like to end up with at the end of this process:

  • Caddy is the only exposed port - I could block the rest w/ a firewall, but it would be nice if the services talked over a hidden network
  • each service works as its own unit, so I can reuse ports and whatnot - I may move services across devices eventually, and I’d rather use host names than have to remember custom ports
  • automatically update images - shouldn’t change the tag, just grab the latest image for that tag (sketch after this list)
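
For that last point, my understanding (could be wrong) is that quadlets have an AutoUpdate= option and podman ships a timer that does the pull/restart, something like:

  # in a .container file, e.g. Caddy's (image line is just an example)
  [Container]
  Image=docker.io/library/caddy:2
  AutoUpdate=registry

  # one-time, to turn on the updater
  systemctl --user enable --now podman-auto-update.timer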

Is there a good reason to prefer .kube over .container et al, or vice versa? Which is the “preferred” way to do this? Both are documented on the same “quadlet” doc page, which just describes the acceptable formats. I don’t think I want kubernetes anytime soon, so the only reason I went that way is that it looked similar to compose.yml and I saw a guide for it, but I’m willing to put in some work to port from that if needed (and the docs for the kube yaml file kinda suck). I just want a way to ship around a few files so moving a service to a new device is easy. I’ll only really have like 3-4 devices (NAS, VPS, and maybe an RPi or two), and I currently only have one (NAS).

Also, is there a customary place to stick stuff like config files? I’m currently using my user’s home directory, but that’s not great long-term. I’ll rarely need to touch these, so I guess I could stick them on my NAS mount (currently /srv/nas/) next to the data (/srv/nas/<service>/). But if there’s a standard place to stick this, I’d prefer to do that.

Anyway, just looking for an opinionated workflow to follow here. I could keep going with the kube yaml route, or I could switch to the .container route; I don’t mind either way since I’m still early in the process. I’m currently thinking of porting to the .container method to try it out, but I don’t know if that’s the “right” way or if “.kube” with a yaml config is the “right” way.

  • poVoq@slrpnk.net · 6 hours ago
    Don’t use the kube stuff. That’s entirely separate from Quadlets and is there for some sort of Kubernetes compatibility.

    The correct way to use Quadlets is with .container and .pod files to auto-generate systemd .service files.
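
    As a rough sketch (image, names, and paths are just examples), a rootless one lives in ~/.config/containers/systemd/ and looks like:

      # ~/.config/containers/systemd/seafile.container
      [Unit]
      Description=Seafile

      [Container]
      # whatever image you're already running
      Image=docker.io/seafileltd/seafile-mc:latest
      Volume=/srv/nas/seafile:/shared

      [Install]
      WantedBy=default.target

      # systemd generates seafile.service from the file name
      systemctl --user daemon-reload
      systemctl --user start seafile.service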

    • sugar_in_your_teaOP · 5 hours ago

      Awesome, thanks!

      In terms of architecture, which is preferred:

      • separate pod per “app” (e.g. NextCloud), but all one network
      • separate pod and network per app
      • everything in one pod

      I’d like to have one gateway, Caddy, so my cert renewal and proxying are all in one place, and I’d like those proxy configs to look like http://<container>

      I’d prefer my containers not be able to talk to each other unless I specifically allow it. The second option would get me that, but I think it would force me to expose ports for each app to the system.

      TL;DR - Can I have a “Caddy” pod that can see exposed ports from other pods, but hide those ports from regular system users? If not, I’ll probably do the first option. I also want to be able to expose ports to the host on a per-app basis if needed.
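
      i.e. I’m picturing Caddyfile entries like this (domains and ports are made up):

        seafile.example.com {
            reverse_proxy seafile:8000
        }

        nextcloud.example.com {
            reverse_proxy nextcloud:80
        }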

      • poVoq@slrpnk.net · 5 hours ago

        I use one pod per app, more or less. The reverse-proxy config depends a bit on the specific app, but sharing a network and exposing the ports in the pods will probably work for most.
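
        Roughly (file names made up): one shared .network quadlet, point Caddy and the app pods at it, and only publish Caddy’s ports to the host. Containers on the same podman network can resolve each other by name, so nothing else needs host ports.

          # proxy.network
          [Network]

          # caddy.container
          [Container]
          Image=docker.io/library/caddy:2
          Network=proxy.network
          PublishPort=80:80
          PublishPort=443:443

          # seafile.pod
          # (I think Network= works here too; otherwise set it on the containers)
          [Pod]
          Network=proxy.network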