• merc
    11 hours ago

    Ok, back in the “init” days the approach was to have a bunch of scripts in /etc/rc.d/ that the system would run when it started up. The scripts were numbered so they’d execute in a particular order. So, if your program required the network, you’d number your script so it started after the network script had finished. These scripts were often somewhat modular, so you could pass them arguments like start and stop. You also had corresponding scripts that executed in a certain order when the system was shutting down.
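    To make that concrete, here’s a sketch of what one of those scripts looked like (the daemon name and numbers are made up). The ordering came from numbered symlinks: something like /etc/rc3.d/S85mydaemon ran after S10network because 85 > 10, and K-prefixed links controlled shutdown order. The script itself just dispatched on a start/stop argument:

```shell
#!/bin/sh
# Sketch of a classic SysV-style init script ("mydaemon" is hypothetical).
# The init system ran it via a numbered symlink, e.g. /etc/rc3.d/S85mydaemon,
# so it started after S10network; K##mydaemon links handled shutdown order.

mydaemon() {
  case "$1" in
    start) echo "Starting mydaemon" ;;   # would launch the real daemon here
    stop)  echo "Stopping mydaemon" ;;   # would signal the real daemon here
    *)     echo "Usage: mydaemon {start|stop}" >&2; return 1 ;;
  esac
}

# At boot the system effectively called: /etc/rc.d/mydaemon start
mydaemon start
mydaemon stop
```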

    Starting in about 2015, the major distros changed to the systemd approach, where instead of scripts you had unit files: configuration files that described what services you wanted running, what they depended on, etc. This mostly eliminated the need for complex startup / shutdown scripts because systemd took care of that part. So, often instead of a script to start a program you just needed an executable and some config files.
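    A unit file looks roughly like this (a minimal sketch, not a complete production unit; the nginx path may differ on your distro). The After= / Wants= lines replace the old numbering trick for ordering:

```ini
# /etc/systemd/system/mywebserver.service (hypothetical)
[Unit]
Description=Example web server
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/sbin/nginx -g 'daemon off;'
Restart=on-failure

[Install]
WantedBy=multi-user.target
```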

    So, for a while a Linux server running a web server might have had a systemd service describing how to run Apache or nginx. But right around the same time that systemd was being adopted, containers were becoming the new hotness. I would guess that most people running web servers are now doing it in containers. I guess you know something about that since you were talking about Docker. You can run containers with systemd, but most people use some form of container orchestration system like Docker Swarm or Kubernetes.
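    The “containers with systemd” option is just a unit whose ExecStart launches the container. A sketch (the service and container names are made up):

```ini
# /etc/systemd/system/myapp-container.service (hypothetical)
[Unit]
Description=Example containerized app
After=docker.service
Requires=docker.service

[Service]
# "-" prefix: don't fail if there's no leftover container to remove
ExecStartPre=-/usr/bin/docker rm -f myapp
ExecStart=/usr/bin/docker run --rm --name myapp -p 8080:80 nginx
ExecStop=/usr/bin/docker stop myapp
Restart=on-failure

[Install]
WantedBy=multi-user.target
```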

    I’ve personally never used Docker Swarm or Docker Compose, so I can’t really talk about how they do things. Instead, I’ve used Kubernetes, even for running services on a single underpowered machine. I’ve even used it on Raspberry Pi machines, though you have to be careful with how it uses the “disk” (SD card wear) when you do that. I didn’t do it for convenience; it was more to learn Kubernetes and to avoid using Docker things.

    Kubernetes is a bit overkill for a home setup, but the idea there is that you have dozens or hundreds of servers and you have thousands of microservices running in containers. You don’t want to have to manually manage each machine that might run the service. Instead you tell the kubernetes system details like how many copies to run and it figures out where to run them, and will restart them if they fail, etc.
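    The “how many copies to run” part lives in a Deployment manifest. A minimal sketch (names and image are illustrative); Kubernetes decides which nodes run the 3 replicas and restarts any that die:

```yaml
# deployment.yaml (hypothetical example)
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-service
spec:
  replicas: 3            # "run 3 copies"; the scheduler picks the machines
  selector:
    matchLabels:
      app: my-service
  template:
    metadata:
      labels:
        app: my-service
    spec:
      containers:
      - name: my-service
        image: nginx:latest
        ports:
        - containerPort: 80
```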

    So, the way I do things is: systemd runs kubernetes, kubernetes runs a containerized version of the ‘arr’ apps. I think you could do the same thing with docker where systemd runs docker (compose? swarm?) and that runs your containers.
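    For the Docker route, the equivalent of one of those ’arr deployments would be a Compose file along these lines (a sketch, since I haven’t used Compose myself; the linuxserver.io Sonarr image and paths are just an example):

```yaml
# docker-compose.yml (hypothetical example)
services:
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    ports:
      - "8989:8989"        # Sonarr's default web UI port
    volumes:
      - ./config:/config   # persist app config outside the container
    restart: unless-stopped
```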

    And then there’s the Flux / GitOps / Declarative Ops setup, where you have a git repository that describes the desired state of the system, including how Kubernetes is supposed to run things, and a controller watches that repo and keeps the cluster matching the configuration stored in it.
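    With Flux that “watch the repo” part is itself configured declaratively, with something like the following sketch (repo URL and names are made up): a GitRepository tells Flux where to pull from, and a Kustomization tells it which path in the repo to apply to the cluster:

```yaml
# Hypothetical Flux v2 setup
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: my-cluster
  namespace: flux-system
spec:
  interval: 5m                 # how often to poll the repo
  url: https://github.com/example/my-cluster
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: my-cluster
  path: ./apps                 # apply the manifests under this repo path
  prune: true                  # delete cluster objects removed from the repo
```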

    How deep you want to go in that setup is up to you. It’s just that gluing things together using scripts isn’t really best practice, especially in the age of containers.