I’m in the process of setting up backups for my home server, and I feel like I’m swimming upstream. It makes me think I’m just taking the wrong approach.
I’m on a shoestring budget at the moment, so I won’t really be able to implement a 3-2-1 strategy just yet. I figure the most bang for my buck right now is to set up off-site backups to a cloud provider. I first decided to do a full-system backup in the hopes I could just restore it and immediately be up and running again. I’ve seen a lot of comments saying this is the wrong approach, although I haven’t seen anyone outline exactly why.
I then decided I would cherry-pick my backup locations instead. Then I started reading about backing up databases, and it seems you can’t just back up the data directory (or file, in the case of SQLite) and call it good. You need to dump them first and back up the dumps.
So now I’m configuring a docker-db-backup container to back each one of them up: finding database containers and SQLite databases and configuring a backup job for each one. Then I plan to drop all of those dumps into a single location and back that up to the cloud. This means that, if I need to rebuild, I’ll have to restore the containers’ volumes, pull down the dumps, bring up new containers, and then restore each dump into its new database. It’s pretty far from my initial hope of being able to restore all the files and start using the newly restored system.
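To make it concrete, each dump job boils down to roughly this (just a sketch; the container name, database name, and paths are placeholders for my setup):

#!/bin/sh
# Rough sketch of the dump step; "nextcloud-db" and the paths are placeholders.
DUMP_DIR=/srv/backups/dumps
mkdir -p "$DUMP_DIR"

# Postgres container: dump from inside the container with pg_dump.
docker exec nextcloud-db pg_dump -U nextcloud nextcloud > "$DUMP_DIR/nextcloud-$(date +%F).sql"

# SQLite file: use the online .backup command rather than copying the live file.
sqlite3 /srv/apps/someapp/data/app.db ".backup '$DUMP_DIR/someapp-$(date +%F).db'"

# Everything in $DUMP_DIR then gets synced to the cloud.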
Am I going down the wrong path here, or is this just the best way to do it?
If you’re using docker (like your DBs run in docker), then I think you’re overthinking it personally. Just back up the volume that the container uses, then you can just plop it back and it will carry on carefree.
I usually did a simple
tar czvf /path/to/compressed.tar.gz /my/docker/volume
for each of my volumes, then backed up the tar. It kept symlinks and everything nice and happy. If you do that for each of your volumes, and you also keep the config for running your containers (like a docker-compose file), congrats, that’s all you need. I don’t know who said you can’t just back up the volume; to me that’s kind of the point of Docker: extreme portability.
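If you have a few volumes it’s just a loop, something like this (assuming your volumes are bind mounts living under /my/docker; adjust the paths to taste):

# One compressed tar per volume directory under /my/docker.
for vol in /my/docker/*/; do
  name=$(basename "$vol")
  tar czvf "/path/to/backups/${name}.tar.gz" "$vol"
done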
OK, cool. That’s helpful. Thank you!
I know in general you can just grab a docker volume and point a new container at it later, but I was under the impression that backing up a database in particular this way could leave you with a database in a bad state after restoring. Fingers crossed that was just bad info. 😅
In theory the database can end up in an invalid state if you back up the volume while the database container is still running. What I do for most containers is temporarily stop them, back up the Docker volume, and then restart the container.
Seconded, and great callout @[email protected]. Yes, part of my script was to stop the container gracefully, tar it, start it again, and then copy the tar somewhere. It “should” be fine; in a production environment where you need zero downtime I would take a different approach, but we’re selfhosters. Just schedule it for 2am or something.
Oh, and feel free to test! Docker makes it super easy. Just extract the tar somewhere else on the drive, point your container to the new volume, see if it spins up. Then you’ll know your backup strategy is working!
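Something along these lines (the paths and image name are made up, just to show the idea):

# Extract the backup somewhere out of the way.
mkdir -p /tmp/restore-test
tar xzvf /path/to/backups/myapp.tar.gz -C /tmp/restore-test

# Point a throwaway container at the restored copy instead of the live volume
# (tar strips the leading slash, hence the nested path) and see if it comes up.
docker run --rm -v /tmp/restore-test/my/docker/myapp:/var/lib/myapp myapp-image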
Is your script something you can share? I’d love to see your approach. I can definitely live with a few minutes of down time in the early morning.
That particular one is long gone, I’m afraid, but it was essentially just docker compose down, tar like I did above, docker compose up -d, and then I used rclone to upload it.
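From memory it was basically this (a reconstruction, not the original; the compose path, backup dir, and rclone remote are placeholders):

#!/bin/sh
# Reconstruction from memory; paths and the "b2:backups" remote are placeholders.
cd /my/docker/myapp

docker compose down                       # stop the stack so nothing is writing to the data
tar czvf /path/to/backups/myapp.tar.gz ./volumes
docker compose up -d                      # bring it straight back up

rclone copy /path/to/backups/myapp.tar.gz b2:backups/myapp/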
Much simpler than my solution. I’ll look into this. Thank you!