• 0 Posts
  • 31 Comments
Joined 2 months ago
Cake day: March 16, 2025




  • Right, I didn’t have any issues running it on a pi for years either. The problems came when I started messing with things. So really, my advice is to save people from ideas like mine.

    I decided one day to take a bunch of old laptops and create a proxmox cluster out of them. It worked great, but I didn’t have a use for it; I was just playing. So, I decided to retire the pi and put the pihole on the cluster. HA for the win!

    I did that, and woke up a few days later to my family complaining that they had no internet. I found the pihole container on a different node, and it wouldn’t start. Turns out with proxmox you need shared storage for HA to work. I had assumed it would be similar to jboss clustering, which I’m familiar with: the container would be on all the nodes with only one active at a time, and some syncing between nodes. Nope.

    What’s worse is the container refused to move back to the original node AND wouldn’t start. The pi was stored away at this point, so I figured it would be easier to just create a new container, but duh, no internet. Turned off the DNS settings on the router, bam, had internet.

    Eventually set up the old pi again, and it took me a while to figure out what I had done wrong with proxmox. But while I was figuring it out it was nice to have the backup.

    Now I always have two running on different hardware, just in case.
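    If your router (or a dnsmasq-based DHCP server) lets you hand out more than one DNS server, the two-pihole setup can look something like this — the addresses here are made up for the example:

    ```
    # /etc/dnsmasq.d/dhcp.conf — hand clients both pihole IPs (addresses assumed)
    # option 6 is the DHCP DNS-server option; clients fall back to the
    # second entry if the first pihole is down
    dhcp-option=6,192.168.1.10,192.168.1.11
    ```

    Same idea works with most consumer routers that expose primary/secondary DNS fields in their DHCP settings.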











  • You can add Overseerr for jellyfin, emby, or plex based on your preference, and it connects to sonarr and radarr. Overseerr is good for finding recommendations and adding them to your queue. The reason I’m bringing it up is that you can specify your language and region for recommendations, which helps you find good content. As for downloading it, in prowlarr when you search for an indexer to add (public or private) you can filter by language. Unfortunately, when I did it just now for de-DE, the 30 or so indexers that popped up were all private. I don’t know their quality, but that is at least a list to start with when investigating how to join.

    I do know the BIG English ones have a lot of content, including content in other languages or dubbed and marked as multi-language or multi-subs. My recommendation is to google around for big public indexers regardless of language, add them, and search for the content you want while also working to get added to the local private ones.
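    If you want to try Overseerr, a minimal compose sketch looks roughly like this — the image, paths, and timezone are my assumptions, so check the Overseerr docs for your platform:

    ```yaml
    # minimal sketch, assuming the linuxserver.io image; adjust paths/TZ to taste
    services:
      overseerr:
        image: lscr.io/linuxserver/overseerr:latest
        environment:
          - TZ=Europe/Berlin
        volumes:
          - ./overseerr-config:/config
        ports:
          - "5055:5055"   # web UI; connect it to your sonarr/radarr from there
        restart: unless-stopped
    ```

    Everything else (linking sonarr/radarr, setting discovery language and region) is done in the web UI on first run.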



  • I only had issues with the latest tag when dealing with the community apps. Some of them would randomly break and I’d have to roll back. Once I manually configured the docker settings using normal file mounts, things were plenty stable. I think the issues were with the k8s community charts, not with the underlying software, and that was fixed by just configuring it manually the way the Docker Hub docs suggest.

    I would still have the occasional issue where a container would freeze and a force stop wouldn’t work, and spinning up a new one wouldn’t work because the ports were still used. But I traced that back to a bad ssd with write timeouts. I still think truenas’s k8s wrapper is buggy. Even if a container crashes hard, I shouldn’t have to reboot the system to fix it. I switched to unraid and have been blissfully happy since.


  • Not sure if you were aware of the recent (last year) drama with a major contributing group to the community apps. TrueCharts, I think they were called? I had some TrueCharts containers and some straight truenas containers. Then TrueCharts ragequit and took down their repo. I ended up reinstalling all those apps manually, because for the life of me I still couldn’t get the dumb truenas versions to work. Also, I wasn’t a fan of the pvc (or whatever it was called) storage containers that got used by default. Made everything more difficult. My advice is to use the truenas community apps as a learning tool to configure your own properly with the truenas software. I noticed the community apps would seriously take around a minute to restart, but the ones I made manually would take seconds. Same docker image; never figured out why, maybe a k8s thing?
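    To make "configure your own" concrete, this is roughly what a plain bind-mount setup looks like instead of letting a chart allocate pvc storage — jellyfin is just an example here, and the dataset paths are assumptions for your own pool layout:

    ```shell
    # plain docker run with host-path mounts instead of chart-managed PVCs
    # (dataset paths are assumptions — use your own pool layout)
    docker run -d --name jellyfin \
      -p 8096:8096 \
      -v /mnt/tank/apps/jellyfin/config:/config \
      -v /mnt/tank/media:/media:ro \
      --restart unless-stopped \
      jellyfin/jellyfin:latest
    ```

    With plain file mounts the app’s config lives in a normal dataset you can browse, snapshot, and back up like anything else.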



  • Garuda - because like Endeavour it’s arch for lazy people, plus I got sold on the gaming edition by how much I liked the theme and the latest drivers. But that’s just what got me to try it; what sold me on it is when I had a vm of it that ran out of hdd space mid kernel update. I shut it down to expand the drive, booted it back up, and no kernels present. Fiddling around in grub in a panic made me realize snappertools auto-snapshots btrfs before updating. I think only once in my life (out of dozens of tries) have Microsoft’s restore points actually worked for me. Booting to the snapshot was effortless, and clicking through to restore that snapshot was a breeze. I rebooted again just to make sure it was working, and it was. Re-updated and I was back in action.

    That experience made me love garuda. I highly recommend snappertools+btrfs, and I now use it whenever I can. Yes, preventative tools and warnings would have stopped it from happening, but you can’t stop everything, and it’s a comfort to have.
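    For anyone curious, the recovery flow underneath is basically snapper on top of btrfs. These commands are from memory, so check `man snapper` before relying on them:

    ```shell
    snapper -c root list               # show snapshots, incl. pre/post pairs around updates
    sudo snapper -c root rollback 42   # 42 = the pre-update snapshot number from the list
    sudo reboot                        # boot back into the rolled-back root
    ```

    Garuda also adds snapshots to the grub menu, so you can boot one read-only first to confirm it’s the state you want before rolling back.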


  • Might need more info about your setup. The reverse proxy probably has some logs you aren’t looking at. Most bots, from what I’ve seen, do ip:port scans hitting every ip and every port. Nginx Proxy Manager or something similar isn’t going to forward ip:8123 to home assistant. A straight router port forward will, but the reverse proxy matches on the hostname in the request, e.g. https://ha.hit_the_rails.net, and only forwards that to your LAN ip:port. It’s a little security through obscurity, as they have to know your sub+domain.

    For a time I had port 22 open and forwarded directly to a server. Constant bot traffic. Changed the port, put an ssh honeypot on 22, and it almost completely went away. Sure, the bots could be smart enough to scan and find another open ssh port, but they rarely did. I assume because anyone savvy enough to change the ssh port is savvy enough to not allow default logins like ubnt:ubnt and root:1234, which were by far the most common logins I got in the honeypot.
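    To make the host-matching concrete, here’s a rough raw-nginx version of what a proxy manager sets up for you — the domain is the one from above, the backend LAN IP is assumed, and requests that don’t name the right host never reach home assistant:

    ```nginx
    # default server catches bare ip:port scans and drops them
    server {
        listen 443 ssl default_server;
        server_name _;
        ssl_reject_handshake on;   # refuse TLS handshakes without a matching SNI name
        return 444;                # 444 = nginx-specific "close the connection, no response"
    }

    server {
        listen 443 ssl;
        server_name ha.hit_the_rails.net;
        # certificate directives omitted
        location / {
            proxy_pass http://192.168.1.50:8123;   # LAN IP assumed
            proxy_set_header Host $host;
        }
    }
    ```

    So a bot hitting https://203.0.113.7/ gets the default server and a closed connection, while only a request that names your sub+domain gets proxied through.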