I am trying to set up a restic job to back up my Docker stacks, and with half of everything owned by root it becomes problematic. I’ve been wanting to look at Podman so everything isn’t owned by root, but for now I want to back up the work I’ve built.

Also, how do you deal with Docker containers that have databases? Do you have to create exports for every container that runs some form of database?

I’ve spent the last few days moving all my Docker containers to a dedicated machine. I was using a mix of NFS and local storage before, but now I am doing everything on local NVMe. My original plan was to keep everything on NFS so I could handle backups there, and I might go back to that.
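For concreteness, something like the sketch below is what I’m aiming for: dump any containerized database first, then run restic as root over the stack directory so root-owned files are readable. The container name, database credentials, repo location, and paths are all placeholders.

    # assumptions: a Postgres container named "myapp-db", stacks under /srv/docker/stacks,
    # and a local restic repo at /mnt/backup/restic-repo -- all placeholders
    export RESTIC_PASSWORD_FILE=/root/.restic-pass

    # dump the database so the backup contains a consistent copy
    docker exec myapp-db pg_dump -U myapp myapp > /srv/docker/stacks/myapp/db-dump.sql

    # back up the whole stacks directory (run as root so root-owned files are readable)
    restic -r /mnt/backup/restic-repo backup /srv/docker/stacks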

  • PaulEngineer-89@alien.topB

    Don’t back up the container!!

    Map volumes with your data to physical storage and then simply back up those folders with the rest of your data; a minimal example follows below. The container images are already either in your development directory (if you wrote them) or on GitHub, so, like the operating system itself, there’s no need to back them up. The whole idea of Docker is that containers are ephemeral: they can be thrown away and recreated at any time.
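    For example, a bind mount keeps the app’s data on the host where any normal backup tool can pick it up (the image name and paths below are placeholders, not a specific recommendation):

        # the app's data lives on the host at /srv/appdata/myapp, not inside the container
        docker run -d --name myapp \
          -v /srv/appdata/myapp:/var/lib/myapp \
          myapp-image:latest

        # back up the host folder like any other directory
        restic -r /mnt/backup/restic-repo backup /srv/appdata/myapp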

    • doodeoo@alien.topB

      This is the only correct answer.

      Containers are ephemeral and stateless. If you’re not mounting a volume, think of what happens if the container dies in the middle of your data-export process, a failure mode that isn’t possible if you mount one.

  • McGregorMX@alien.topB

    I have my config and data volumes mounted to a share on TrueNAS, and that share replicates its snapshots to another TrueNAS server. This is likely not ideal for everyone, but it works for me. A friend who also uses Docker backs his up with Duplicati.
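    For anyone not on TrueNAS, the same snapshot-replication idea done by hand looks roughly like this (dataset names, snapshot names, and the backup host are placeholders):

        # snapshot the dataset that holds the Docker config/data
        zfs snapshot tank/docker@nightly-new

        # send only the changes since the previous snapshot to a second box
        zfs send -i tank/docker@nightly-old tank/docker@nightly-new | \
          ssh backup-host zfs receive backuppool/docker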

  • Zeal514@alien.topB

    First, I try not to have it owned by root. But some containers need specific privileges and ownership that have to be preserved.

    So rsync -a (archive mode, run as root) will copy the directory while retaining the permissions and ownership of all files.
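    A minimal example of that, with placeholder source and destination paths:

        # -a keeps permissions, ownership, timestamps, and symlinks;
        # run as root so ownership can actually be preserved
        rsync -a --delete /srv/docker/ /mnt/backup/docker/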

  • Do_TheEvolution@alien.topB

    From my basic selfhosted experience… I run Kopia as root, and my stuff uses bind mounts, so everything I care about is in that one directory.

    And so far it works fine: just stop the old container, rename the directory, copy the directory back from the nightly backup, and start the container again.

    But yeah, if there is something I really care about, I schedule database dumps, like for Bookstack or Vaultwarden, so I have something extra to fall back on if a straight restore won’t start.
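    A cron-scheduled dump of a MySQL/MariaDB-backed app can be as small as the lines below (container name, credentials, and paths are placeholders; an SQLite-backed app would use the sqlite3 .backup command instead):

        # nightly dump of the app's database next to its bind-mounted data
        docker exec bookstack-db mysqldump -u bookstack -p"$DB_PASS" bookstack \
          > /srv/docker/bookstack/backup/bookstack-$(date +%F).sql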

  • SamSausages@alien.topB

    I do this at the file system level, not the file level, using zfs.

    Unless the container has a database, I just use ZFS snapshots. If it has a database, my script dumps the database first and then takes a ZFS snapshot. That snapshot is then sent with syncoid (sanoid’s replication tool) to a ZFS disk in a separate backup pool.

    This is a block level backup, so it only backs up the actual data blocks that changed.
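    The dump-then-snapshot-then-replicate flow could look roughly like this; the container name, credentials, and dataset names are placeholder assumptions, not the actual script:

        #!/bin/bash
        # 1. dump the app's database so the snapshot holds a consistent copy
        docker exec myapp-db pg_dump -U myapp myapp > /tank/docker/myapp/db-dump.sql

        # 2. take the filesystem-level snapshot
        zfs snapshot tank/docker@backup-$(date +%Y%m%d)

        # 3. replicate incrementally to the backup pool with syncoid
        syncoid tank/docker backuppool/docker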

  • MoneyVirus@alien.topB

    I only back up the data I need to back up (the mapped volumes).

    Restore: create a fresh container and map the volumes again.
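    In practice the restore is just two steps, sketched here assuming a restic repo like the OP’s (the repo path is a placeholder):

        # put the volume data back, then recreate the container from compose
        restic -r /mnt/backup/restic-repo restore latest --target /
        docker compose up -d --force-recreate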

  • nyrosis@alien.topB

    ZFS snapshots combined with replication to another box, plus a cron job that packages up my compose/config files.
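    The cron part can be a single tar line, something like this (paths and schedule are placeholders; note that % has to be escaped in a crontab):

        # crontab entry: tarball of compose/config files every night at 02:00
        0 2 * * * tar czf /mnt/backup/compose-$(date +\%F).tar.gz /docker/compose /docker/config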

  • SnakeBDD@alien.topB

    All Docker containers have their persistent data in Docker volumes located on a BTRFS mount. A cron job takes a snapshot of the BTRFS volume, then calls btrfs send, pipes that through tar and gpg, and sends it directly to AWS S3.
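    A minimal sketch of that kind of snapshot-and-send pipeline (the subvolume path, GPG recipient, and bucket name are placeholders, not necessarily the real setup):

        # read-only snapshot of the subvolume that holds the Docker volumes
        btrfs subvolume snapshot -r /var/lib/docker/volumes /snapshots/volumes-$(date +%F)

        # serialize the snapshot, encrypt it, and stream it straight to S3
        btrfs send /snapshots/volumes-$(date +%F) \
          | gpg --encrypt --recipient backup@example.com \
          | aws s3 cp - s3://my-backup-bucket/volumes-$(date +%F).btrfs.gpg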

    • froli@alien.topB

      Great idea. I already do something similar (minus the btrfs part) for Vaultwarden. Mind sharing the script/commands?

      I set up my host with btrfs, but I have zero knowledge of it, so I haven’t taken advantage of it until now. I already have my Docker volumes mapped to /docker/stack, so I’m going to create a subvolume and move that there.

      I’m mostly interested in your btrfs snapshot and send commands, but if you don’t mind sharing the whole thing, that would be great.

  • katbyte@alien.topB

    I don’t. I created a Docker VM (and a couple of others) and then back up the VMs (Proxmox + PBS make this very easy) with all their data in /home/docker/config/*

    I used to have them run off networked storage, but I found it too slow and it had other issues.

    This also means that the VM running the most important services is in HA and can move to another node when needed.

  • techbandits@alien.topB

    I use a Synology NAS with Active Backup for Business and back up the VMs that host the containers.

  • rrrmmmrrrmmm@alien.topB

    Can’t you run a restic container that mounts everything it needs? Of course, if the restic container is compromised, then everything is.

    But yes, I also migrated to rootless Podman for this reason and a bunch of others.
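    The restic-in-a-container idea could look roughly like this, using the restic/restic image; the mount paths and password handling below are placeholder assumptions:

        # read-only mount of the data, plus a mount for the repo itself
        docker run --rm \
          -v /srv/docker:/data:ro \
          -v /mnt/backup/restic-repo:/repo \
          -e RESTIC_REPOSITORY=/repo \
          -e RESTIC_PASSWORD=changeme \
          restic/restic backup /data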

  • VirtualDenzel@alien.topB

    I build my own Docker images. All my images are built to run as a specific UID/GID when one is set in Ansible.

    This way only my service daemon can touch things. It also makes sure I never have issues with BorgBackup, etc.