• JetpackJackson@feddit.de · 8 months ago

    Do you have a source for the ext4 failure stuff? I use ext4 currently and want to see if there’s something I need to do now other than frequent backups

    • atzanteol · 8 months ago

      They don’t. ext4 has been the primary production filesystem for over 15 years, and it’s basically a modified ext3, so the format has been around even longer.

      It’s very stable. It’s still the default for many distros even.

    • kurushimi@lemmyonline.com · 8 months ago

      I used ext4 extensively in an HPC setting a few jobs ago (many petabytes). Some of the server clusters were in areas with very unreliable power grids, like Indonesia. Running fsck.ext4 became our bread and butter, but it was also nerve-wracking, because after the worst failures involving power loss or failed RAID cards, we sometimes didn’t get clean fscks. Most often this resulted in loss of file metadata, which was a pain to try to recover from. To its credit, as another quote in this thread mentioned, fsck.ext4 has a very high success rate, but honestly, in an ideal world you shouldn’t need to intervene manually as a filesystem admin. That’s the sort of thing next-gen filesystems attempt to provide.
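      For anyone unfamiliar with the workflow being described, here is a minimal sketch of checking an ext4 filesystem with fsck.ext4 (part of e2fsprogs). The image path and size are illustrative; on a real server you would point it at the unmounted block device (e.g. /dev/sdb1), never at a mounted filesystem:

      ```shell
      # Build a throwaway ext4 image in a file so the check is safe to try.
      dd if=/dev/zero of=/tmp/ext4-test.img bs=1M count=16 2>/dev/null
      mkfs.ext4 -q -F /tmp/ext4-test.img   # -F: target is a regular file, not a block device

      # -f forces a check even if the fs is marked clean;
      # -n opens read-only and answers "no" to all repair prompts,
      # so you can see the damage before deciding to repair.
      fsck.ext4 -f -n /tmp/ext4-test.img
      ```

      After an unclean shutdown you would typically run it first with -n to survey the damage, then without -n (or with -p for safe automatic fixes) to actually repair; orphaned files end up in lost+found, which is the metadata loss described above.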

    • sep@lemmy.world · 8 months ago

      Haven’t seen a filesystem corruption yet. But I have only run ext4 on around 350 production servers since 2010-ish.
      Have of course seen plenty of hardware failures. But if a disk is doing the clicky, it is not another filesystem that saves you.

      Have regularly tested backups!

    • nyan · 8 months ago

      ext4 is still solid for most use cases (I also use it). It’s not innovative, and possibly not as performant as the newer file systems, but if you’re okay with that there’s nothing wrong with using it. I intend to look into xfs and btrfs the next time I spin up a new drive or a new machine, but there’s no hurry (and I may not switch even then).

      There’s an unfortunate tendency for people who like to have the newest and greatest software to assume that the old code their new-shiny is supposed to replace is broken. That’s seldom actually the case: if the old software has been performing correctly all this time, it’s usually still good for its original use case, within the scope of its original limitations and environment. It only becomes truly broken when the appropriate environment can’t be easily reproduced, or when one of the limitations becomes a significant security hole.

      That doesn’t mean that shiny new software with new features is bad, or that there isn’t some old software that has never quite performed properly, just that if it ain’t broke, it’s okay to set a conservative upgrade schedule.

    • TCB13@lemmy.world · 8 months ago

      Well, a few years ago I actually did some research into that but didn’t find much. What I said was my personal experience, but now we also have companies like Synology pushing Btrfs for home and business customers, and they surely have analytics on that… since they’re trying to move everything…