I have about 500GB of data (photos, documents, videos etc.) that I have accumulated over the years. Currently, I keep them on my computer and rsync all additions / changes once a month or so to an external hard drive. Do I need to be worried about data loss (sectors going bad, bit rot, bit flip, whatever it is called)?

To clarify,

  1. None of this is commercially important; I just don’t want to get into a situation where I look up an old family photo or video twenty years down the line and find it has become corrupted.

  2. Both my computer’s drive and the external one are HDDs. HDDs are fairly cheap here (and very cheap second-hand); buying SSDs or dedicated hardware would be expensive.

  • jws_shadotak

    The 3-2-1 rule is always the gold standard.

    I’d recommend at least adding an offsite backup. Set up rclone with a mounted folder (client-side encryption is recommended) and sync the files to that as well.
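
    For reference, a rough sketch of that setup, assuming the remotes were already created with the interactive rclone config wizard (remote names and the bucket path here are placeholders):

        # "b2" is a Backblaze B2 remote; "b2-crypt" is a crypt remote wrapping
        # b2:your-bucket, so files are encrypted client-side before upload
        rclone sync ~/data b2-crypt: --progress

        # verify the encrypted remote against the local files
        rclone cryptcheck ~/data b2-crypt: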

    I use Backblaze for about $6/TB/mo, pro-rated for whatever amount is actually used.

    • Scrubbles@poptalk.scrubbles.tech

      Seconded. For this small an amount of data, a Backblaze account would be cheap and more than enough. If OP is worried about security, enabling a crypt remote in rclone is fairly trivial.

      3-2-1 OP. 3 copies of your data, across 2 different storage mediums, with at least 1 offsite.

    • emergencyfoodOP

      $6 is about 500 rupees. I can get another HDD for double that price.

      I do copy some important files to Google Drive, but I don’t pay for it, and I don’t rely on it.

        • emergencyfoodOP

          Right, which is why I prefer to rely on local backups. Much cheaper in the long run.

          • KingOfSleep@lemmy.ca

            I used to work with a guy who was religious about backing up his files to external drives. Until someone broke into his house and stole his computer AND his external drives. He lost everything.

          • howrar@lemmy.ca

            It’s always a good idea to have an off-site backup (e.g. in case of fire, robbery, or natural disaster). If you prefer to manage them yourself, one option is to find someone else who also needs an off-site backup and exchange disk space: you do your off-site backup on their machine, and they do theirs on yours. With external HDDs, you can just have someone else hold on to a drive for you at a different location. You can come up with fancier schemes to reduce the chances of data loss, or to make the process simpler, if you care to do so.

          • pe1uca@lemmy.pe1uca.dev

            I also like local-only, with a similar setup to yours: rsync to an HDD and to an SSD.
            But I would also recommend following that suggestion: you need an external backup managed by someone else (encrypted, of course) so you have options if anything happens to everything local.
            It’s up to you how much you’re willing to pay to be sure you can retrieve your data.

            I’m using iDrive e2; it says it’s a limited offer, but it’s been there for over a year.

            I’m basically paying $1.25 per month for 2TB (charged up front for 24 months): https://www.idrive.com/s3-storage-e2/pricing

  • over_clox@lemmy.world

    In my experience, a well-treated physical hard drive that isn’t overused can and likely will hold up for over 15 years.

    I haven’t had any problems with any of my HDDs, but I don’t stress them out with daily gaming or video production, and I don’t toss them around like footballs, obviously.

    Just speaking from my own experiences though…

    • emergencyfoodOP

      My external HD is working well, but the computer’s HD seems to be of poor quality. I’m worried that once the primary copy gets corrupted, the mistakes will then be copied to the external HD as well. (Although if I understand rsync correctly, this shouldn’t happen.)
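
      For what it’s worth, rsync’s default quick check only copies files whose size or modification time changed, so silent bit rot on the source usually isn’t propagated. A checksum-based dry run can also list files whose contents differ between the two copies without changing either side (paths here are placeholders):

          # -a archive, -n dry-run, -c compare by checksum, -i itemize differences
          rsync -a -n -c -i /home/you/data/ /mnt/external/data/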

  • davad@lemmy.world

    I started using restic for backups.

    Pro:

    • Encryption
    • Deduplication
    • Flexible backup location
    • Data integrity checks

    Con:

    • No good GUI
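
    A minimal sketch of the restic workflow, assuming a repository on an external drive (paths are placeholders):

        # one-time: create an encrypted repository
        restic init --repo /mnt/external/restic-repo

        # each run: snapshot the data, deduplicated against earlier snapshots
        restic -r /mnt/external/restic-repo backup ~/data

        # periodically: verify repository integrity
        restic -r /mnt/external/restic-repo check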

  • StrawberryPigtails@lemmy.sdf.org

    Hard drives can fail. A strong magnetic field could scramble the data on the platters. Usually, though, HDDs are pretty reliable. The biggest concern with external HDDs would be fall damage.

    I would say to check random files from time to time and you should be fine. Every 2 or 3 years, replace your backup drive. A backup program like Borg could help detect if you have a problem with your files, but you lose a bit of the simplicity of your current rsync method.
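
    If you do try Borg, the basic loop looks something like this (the repository path is a placeholder):

        # one-time: create an encrypted repository on the backup drive
        borg init --encryption=repokey /mnt/backup/borg-repo

        # each run: create a dated archive of the data
        borg create /mnt/backup/borg-repo::{now} ~/data

        # periodically: verify repository and archive consistency
        borg check /mnt/backup/borg-repo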

    Anything you’re truly worried about should follow the 3-2-1 standard: minimum 3 copies, on 2 separate media types, with 1 copy offsite. That said, your current setup is already better than that of 95% of the general population and probably 70% of the Fedi.

  • lukmly013 💾@lemmy.sdf.org

    I also just do this. However, I have already found 2 photos that got randomly corrupted, and I don’t know how to prevent that.

    So far my only idea was using md5sum, but checking all files like that takes a loooooooooong time.
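
    One way to make that more workable is a checksum manifest: build it once, then re-verify against it later. A full pass is still slow, but it only has to run occasionally (paths are placeholders):

        # build a manifest covering every file
        find ~/data -type f -exec md5sum {} + > ~/data-manifest.md5

        # later: report only the files that fail verification
        md5sum -c --quiet ~/data-manifest.md5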

    I am paranoid about cloud. I do have my music backed up on OneDrive, encrypted with GPG using AES256, but I don’t even fully trust that. I know, it sounds stupid, but maybe in the future it will be quite easy to break.

    But I don’t know much about encryption. Just reading the man page, I put these options together:

    --s2k-cipher-algo AES256 --s2k-digest-algo SHA512 --s2k-mode 3 --s2k-count 65011712
    

    but whether I can consider that safe enough, I don’t know.

    And since I don’t know enough about it, I prefer not to trust it.
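
    For reference, those s2k options would slot into a symmetric-encryption call along these lines (the filename is a placeholder):

        gpg --symmetric \
            --s2k-cipher-algo AES256 --s2k-digest-algo SHA512 \
            --s2k-mode 3 --s2k-count 65011712 \
            music.tar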

    • treesoid@lemmy.world

      > I also just do this. However, I have already found 2 photos that got randomly corrupted, and I don’t know how to prevent that.

      If you are fine with changing your file system, check out ZFS. It stores checksums with your data and, if configured to store multiple copies, can repair corruption.
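
      A rough sketch, assuming a single-disk pool (the pool name and device are placeholders):

          # create a pool on the external disk and store two copies of every block
          zpool create tank /dev/sdb
          zfs set copies=2 tank

          # periodically: re-read everything, verify checksums, repair from the extra copy
          zpool scrub tank
          zpool status -v tank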

  • criitz@reddthat.com

    Instead of a single external HD, set up a NAS with a RAID configuration, so that even if a drive fails the data is safe.

    • Sneezycat@sopuli.xyz

      I want to point out that RAID doesn’t actively prevent bit rot and data degradation. You’ll want ZFS/RAIDZ for that.

      • criitz@reddthat.com

        I’m not super familiar with ZFS/RAIDZ. I guess it does extra data scrubbing and such to prevent data issues?

        That’s cool. But a traditional RAID setup still gives you redundancy and fault tolerance, which is the important part, right?

        • Sneezycat@sopuli.xyz

          It’s a software RAID integrated into the filesystem, as I understand it. This video helped me understand it a bit more, and it’s why I’m saying ZFS is a better idea. AFAIK you get the good parts of RAID and some more. Obviously I have very superficial knowledge of all of this though, so I recommend doing your own reading :P

  • Bitrot@lemmy.sdf.org

    Worried in the sense that having a backup is a good idea: most filesystems do not have much protection against a file becoming corrupted, but random corruption is rare. Personally, I like an automated, regular cloud backup to B2, plus a local one that is easier (faster) to restore. For local, I prefer Borg (or rather the Pika Backup frontend) because you can easily store different dates while also benefiting from file deduplication.

  • Dark Arc@social.packetloss.gg

    I recommend kopia. It lets you back up automatically to a primary location, copy that data periodically to a secondary location, and it has a command you can use to verify that all the data is still what it was when the backup was created.
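
    A rough sketch of that workflow, with placeholder paths:

        # create the primary repository and take a snapshot
        kopia repository create filesystem --path=/mnt/primary/kopia-repo
        kopia snapshot create ~/data

        # periodically: copy the repository to a secondary location
        kopia repository sync-to filesystem --path=/mnt/secondary/kopia-repo

        # verify that the stored data still matches its checksums
        kopia snapshot verify --verify-files-percent=100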

    • emergencyfoodOP

      Thank you. On that note, when backing up, is there a way to compare the two versions, see if one has become corrupted, and copy the good version to both? It would be sad if your primary copy got corrupted, and you overwrote all other copies with it.

      • Dark Arc@social.packetloss.gg

        Kopia uses content-addressable storage. So basically, when it copies things, it only copies data that is new. Files that haven’t changed will not be overwritten.

        You kind of need to run the verification command on both the source and the “backup copy” for maximum paranoia. If you’re running it on a local copy, that should be a relatively fast process as you don’t need to download stuff.

        You’d basically connect on the command line to the copy you just updated via sync-to and then ask kopia to verify 100% of the file integrity … it should then run through everything and make sure it matches what’s supposed to be there. I’m not sure how you fix it if it detects something wrong, I’ve yet to run into that … I’m sure there’s a way 🙂
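
        Concretely, that might look like this (paths are placeholders):

            # point kopia at the synced copy, then verify all file contents in it
            kopia repository connect filesystem --path=/mnt/secondary/kopia-repo
            kopia snapshot verify --verify-files-percent=100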

        You could also use two backup drives and sync to both; then, if you get an error restoring a particular file from one, you could in theory restore it from the other. A ZFS setup with redundant copies and/or a RAID-1, RAID-5, or RAID-6 style configuration could also help … but most people aren’t going to run an entire NAS just to turn it on periodically and back up their data “offline”. Most people (IMO) are better served by cloud storage like B2 (where bitflips aren’t really a concern) or a NAS (where bitflips are similarly a minimal concern, ideally in another location), plus a periodically updated offline copy (on, say, an external hard drive); that should be enough to protect most people’s data well.

        Also going to link to what I’m talking about: