The “arr” stack is very Windowsy. It’s built in C# and has some baked-in assumptions that make running it in a container a bit of a pain. But I’ve been running it for years on Linux. My Linux server boxes are all headless, and I’ve never needed a GUI for anything. I don’t use Plex though, so maybe that’s the difference?
I don’t know why you were trying to run virtual desktop software, or what that has to do with running the “arr” stack. A virtual desktop is, by definition, a GUI thing, so of course you’ll need some kind of GUI connection for that. Also, your talk about “getting scripts to auto-run at startup” makes me suspect you were approaching the problem in an unusual way, because that’s not how you run services in Linux, and hasn’t been for decades.
If you ever want to try again, I recently migrated my personal kludged-together “arr” stack to the “Home Operations” method of running things. They run a bunch of apps in a local, at-home Kubernetes cluster using essentially “declarative operations” based on Flux. Basically, you have a git repo, you check in a set of files there describing which parts of the “arr” stack you want to run, and your system picks up those git changes and runs the apps. The documentation is terrible, but the people are friendly and happy to help.
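To give a flavor of it, the repo ends up looking something like a directory of manifests per app. This is just a sketch with made-up names; real setups are usually more elaborate than this:

```
clusters/
  home/
    flux-system/            # Flux's own bootstrap manifests
    apps/
      sonarr/
        helmrelease.yaml    # or plain Deployment/Service manifests
      radarr/
        helmrelease.yaml
      qbittorrent/
        helmrelease.yaml
```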
Currently I have the parts of the “arr” stack I want, plus a few other apps, running on an old Mac Mini from 2014.
Oh, and for a VPN on Linux, I recommend gluetun. It’s a single app that supports just about every major commercial VPN provider, and it adds features like firewalling off non-VPN traffic and reconnecting if something goes wrong.
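A common pattern with docker compose is to run gluetun as one container and point the download client’s networking at it. This is only a sketch: the provider, credentials, and ports here are placeholders, so check gluetun’s docs for the exact variables your VPN needs.

```yaml
# Sketch: route a download client through gluetun (placeholder provider/credentials).
services:
  gluetun:
    image: qmcgaw/gluetun
    cap_add:
      - NET_ADMIN                           # gluetun needs to manage the network stack
    environment:
      VPN_SERVICE_PROVIDER: mullvad         # placeholder provider
      VPN_TYPE: wireguard
      WIREGUARD_PRIVATE_KEY: "changeme"     # placeholder credential
    ports:
      - "8080:8080"                         # the client's web UI, published via gluetun

  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent
    network_mode: "service:gluetun"         # all traffic goes through gluetun, or not at all
    depends_on:
      - gluetun
```

Because the client shares gluetun’s network namespace, its UI port gets published on the gluetun container, and if the VPN drops, the client simply has no network.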
that’s not how you run services in Linux, and hasn’t been for decades
Thanks for your response. I’m open to the idea that Linux is a different computing paradigm; my frustration is with needing to learn that on the fly and with how much of a distraction it was, even on a tertiary machine… that said, how should I be thinking about this?
OK, back in the old “init” days the approach was to have a bunch of scripts in /etc/rc.d/ that the system would run when it started up. The scripts were numbered so they’d execute in a particular order. So, if your program required the network, you’d number your script so that it started after the network script was done. These scripts were often somewhat modular so you could pass them arguments and such. You also had corresponding scripts that executed in a certain order when the system was shutting down.
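For flavor, here’s a stripped-down sketch of what one of those scripts looked like. The daemon name and paths are made up, and real scripts carried a lot more boilerplate:

```sh
#!/bin/sh
# Sketch of an old-style init script, e.g. /etc/init.d/mydaemon (made-up name).
# A symlink like /etc/rc3.d/S80mydaemon started it at position 80 in runlevel 3,
# after something like S10network; K-prefixed links ran the "stop" branch at shutdown.
case "$1" in
  start)
    /usr/local/bin/mydaemon --config /etc/mydaemon.conf &
    ;;
  stop)
    killall mydaemon
    ;;
  restart)
    $0 stop
    $0 start
    ;;
esac
```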
By about 2015 most distros had switched to the systemd approach, where instead of scripts you had configuration files (“unit files”) that described what services you wanted running, what they depended on, etc. This mostly eliminated the need for complex startup/shutdown scripts because systemd itself took care of that part. So, often, instead of a script to start a program you just needed an executable and some config files.
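A minimal unit file looks something like this (the names are made up). You drop it in /etc/systemd/system/, run systemctl enable --now on it, and you’re done:

```ini
# Sketch of a systemd unit, e.g. /etc/systemd/system/mydaemon.service (placeholder names).
[Unit]
Description=My hypothetical daemon
After=network-online.target
Wants=network-online.target

[Service]
ExecStart=/usr/local/bin/mydaemon --config /etc/mydaemon.conf
Restart=on-failure
User=mydaemon

[Install]
WantedBy=multi-user.target
```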
So, for a while a Linux server running a web server might have had a systemd unit describing how it wanted to run Apache or nginx. But right around the time systemd was being adopted, containers were becoming the new hotness, and I’d guess that most people running web servers now run them in containers. I gather you know something about that, since you mentioned Docker. You can run containers with systemd, but most people use some form of container orchestration like Docker Swarm or Kubernetes.
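As a rough sketch, “running a container with systemd” can be as simple as a unit whose ExecStart is a docker run command. The names and image here are placeholders:

```ini
# Sketch: a systemd unit that runs a container (placeholder names and image).
[Unit]
Description=nginx in a container
After=docker.service
Requires=docker.service

[Service]
ExecStartPre=-/usr/bin/docker rm -f web      # leading "-" means ignore failure if nothing to remove
ExecStart=/usr/bin/docker run --rm --name web -p 80:80 nginx:stable
ExecStop=/usr/bin/docker stop web
Restart=on-failure

[Install]
WantedBy=multi-user.target
```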
I’ve personally never used Docker Swarm or Docker Compose, so I can’t really speak to how they do things. Instead I’ve used Kubernetes, even for running services on a single underpowered machine; I’ve even used it on Raspberry Pis, though you have to be careful about how it uses the “disk” when you do that. I didn’t do it for convenience, more to learn Kubernetes and to avoid depending on Docker.
Kubernetes is a bit of overkill for a home setup. The idea there is that you have dozens or hundreds of servers and thousands of microservices running in containers, and you don’t want to manually manage each machine that might run a given service. Instead, you tell Kubernetes details like how many copies to run, and it figures out where to run them, restarts them if they fail, and so on.
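The declarative core of that fits in one small object. As a sketch with placeholder names, a Deployment says “keep three copies of this running” and Kubernetes handles the where and the restarting:

```yaml
# Sketch: you declare the desired copies; the scheduler picks nodes and the
# controller replaces any copy that dies or whose machine goes away.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: some-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: some-service
  template:
    metadata:
      labels:
        app: some-service
    spec:
      containers:
        - name: some-service
          image: nginx:stable        # stand-in image
          ports:
            - containerPort: 80
```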
So, the way I do things is: systemd runs Kubernetes, and Kubernetes runs containerized versions of the “arr” apps. I think you could do the same thing with Docker, where systemd runs the Docker daemon and Docker Compose (or Swarm) runs your containers.
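For the Compose route, the layering is systemd → dockerd → your containers, and a restart policy is what brings things back after a reboot, with no startup scripts of your own. A sketch, with the image and port just as examples:

```yaml
# Sketch: the Docker daemon is itself a systemd service; "restart: unless-stopped"
# makes it bring this container back up after a reboot.
services:
  radarr:
    image: lscr.io/linuxserver/radarr      # community-maintained image, used as an example
    restart: unless-stopped
    ports:
      - "7878:7878"                        # Radarr's default web UI port
    volumes:
      - ./config:/config
```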
And then there’s the Flux / GitOps / declarative-ops setup, where you have a git repository that describes the state of the system you want, including how Kubernetes is supposed to run, and a system that watches that repo and gets things running as described in the configuration stored there.
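Concretely (with placeholder names and URL), the Flux side is a couple of small objects: one saying which repo to watch, and one saying which directory of manifests to apply and keep in sync:

```yaml
# Sketch of the Flux objects; repo URL, paths, and names are placeholders.
apiVersion: source.toolkit.fluxcd.io/v1
kind: GitRepository
metadata:
  name: home-ops
  namespace: flux-system
spec:
  interval: 5m
  url: https://github.com/example/home-ops   # placeholder URL
  ref:
    branch: main
---
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: apps
  namespace: flux-system
spec:
  interval: 10m
  sourceRef:
    kind: GitRepository
    name: home-ops
  path: ./clusters/home/apps
  prune: true        # remove things from the cluster when they're deleted from the repo
```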
How deep you want to go into that setup is up to you. It’s just that gluing things together with scripts isn’t really best practice anymore, especially in the age of containers.