Serious question: why would anyone opt for XFS these days? I remember reading about it being faster/more efficient with small files, but is that still valid?
XFS has been the default file system for RHEL since RHEL 7. A lot of places typically roll with defaults there, so it makes sense to see it still widely used.
The RHEL (and Fedora) defaults are quite good, too.
Xfs is basically a bigger, better ext4.
It has more features but it also isn’t as weird and wacky as btrfs and zfs.
Honestly, I’m not sure it shouldn’t be the default FS for most distros. It wasn’t born in the Linux kernel like ext and btrfs were (it came from SGI’s IRIX), but it’s been here forever and it’s been very well behaved, unlike others I could mention.
Used it for a while on LVM RAID; XFS was never what gave me problems.
XFS is rock solid and still has active development going on, so why not.
Rock solid may be a stretch. It still suffers from outrageous metadata bugs to this day when used on busy file systems.
That bug alone has been open for over a decade. The development focus of the people who understand and want to fix those things has shifted to other filesystems like ext4 and ZFS.
Main reason I stopped using it ten years ago.
But are there benefits over ext4 and BTRFS these days?
Off the top of my head, compared to ext4: RAM use and the ability to shrink an FS if necessary. Oh, also, I’ve used an ext FS driver on a Windows host, but I’ve never seen one for XFS.
Just to clarify, the previous comment asked about benefits of XFS over ext4. But I completely agree with your reasons for choosing ext4.
Oh, my bad.
The only two benefits to XFS I’ve ever seen: it has no fixed inode limit the way ext4 does (that dynamic inode allocation is also part of why an XFS filesystem can’t shrink), and it seems to handle simultaneous I/O better than ext4 does; think very active database volumes and datastores.
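To make the inode point concrete, here’s a quick Python sketch of the counters involved (the path is just an example):

```python
import os

# Report inode usage for a mount point (path here is illustrative).
def inode_report(path: str) -> None:
    st = os.statvfs(path)
    used = st.f_files - st.f_ffree
    print(f"{path}: {used} of {st.f_files} inodes used")

# On ext4, f_files is fixed when the FS is created (see mkfs.ext4 -N / -i);
# exhaust it and you get "No space left on device" even with free blocks.
# On XFS, inodes are allocated dynamically, so the ceiling tracks free space.
inode_report("/")
```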
I’ll give you one reason it’s used commercially: Veeam can only use XFS or ReFS as a deduplication-enabled store via Fast Clone. For example, I have a 60-disk NAS hosting hundreds of customer backups, about a petabyte of them. Without deduplication, imagine how many extra petabytes of storage would be consumed; each backup is basically the same image, so dedup also cuts the backup processing time.
Maybe they’ll get that same feature on zfs one day.
Unless you want me to use ReFS? I have tried that, and I lost a whole volume: an iSCSI volume mounted to Windows and formatted ReFS got corrupted when the network gradually lost power, and for whatever reason that interruption left the whole volume unmountable over iSCSI ever again. I’m not keen to retry that.
XFS is pretty good with 60 disks. I wouldn’t trust ext4 with that many, though that’s nothing factual about ext4, just a feeling.
About to get a second 60-disk NAS in another datacentre, same setup as above, to migrate away from Wasabi as offsite. Will build XFS again. Looking forward to it.
ZFS has deduplication, you just don’t want to use it. As the dedup table grows, it requires more and more RAM on the ZFS server. :(
The dedupe hash table can be moved to an SSD, but that’s obviously slower than RAM.
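For a sense of scale, here’s a back-of-the-envelope sketch in Python; the ~320 bytes per DDT entry is the commonly quoted rule of thumb, not a guarantee, and real in-core usage varies by pool layout and ZFS version:

```python
# Rough ZFS dedup table (DDT) sizing, using the usual ~320 bytes/entry
# rule of thumb.
BYTES_PER_DDT_ENTRY = 320

def ddt_size_estimate(pool_bytes: int, avg_block_bytes: int) -> float:
    """Worst-case DDT size if every block in the pool were unique."""
    unique_blocks = pool_bytes // avg_block_bytes
    return unique_blocks * BYTES_PER_DDT_ENTRY

# 1 PiB of data at the default 128 KiB recordsize:
est = ddt_size_estimate(1 << 50, 128 * 1024)
print(f"~{est / (1 << 40):.1f} TiB of RAM/SSD for the DDT")  # ~2.5 TiB
```

Actual dedup ratios shrink that number, but you can see why petabyte-scale pools make people nervous about it.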
Yeah, but Veeam doesn’t support fast block cloning on ZFS, and that’s the feature that means you never need to recopy blocks that don’t change. From a performance point of view, fast block cloning gives an incredible speed-up, which in turn means more backups can happen in a short window. That’s pretty important even at our small-business scale. I guess larger Veeam service providers solve things differently.
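For anyone curious what “fast block cloning” is at the syscall level: on Linux it’s the FICLONE ioctl, the same thing cp --reflink uses, and XFS supports it when formatted with reflink enabled. A minimal Python sketch, with made-up file names:

```python
import fcntl

# FICLONE ioctl request number on Linux (_IOW(0x94, 9, int)).
FICLONE = 0x40049409

def reflink_copy(src_path: str, dst_path: str) -> None:
    """Clone src into dst by sharing extents instead of copying data.

    Works on reflink-capable filesystems (XFS made with -m reflink=1, btrfs);
    raises OSError (EOPNOTSUPP/EINVAL) on ext4 and friends.
    """
    with open(src_path, "rb") as src, open(dst_path, "wb") as dst:
        fcntl.ioctl(dst.fileno(), FICLONE, src.fileno())

# A multi-terabyte "copy" completes almost instantly and consumes no extra
# space until either file is modified, which is why synthetic full backups
# built this way are so fast.
reflink_copy("backup-full.vbk", "backup-synthetic.vbk")
```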
Well enough, I guess, that I’d never heard of NTFS having that feature ’til now. ;)
I use XFS on partitions where I need to implement project quotas.
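For anyone who hasn’t seen XFS project quotas, the setup looks roughly like this. A Python sketch with made-up names, assuming the filesystem is mounted with the prjquota option:

```python
import subprocess

# Hypothetical project: cap /srv/backups at 100 GiB on an XFS mount.
MOUNT, PROJECT, PROJ_ID, TREE = "/srv", "backups", "42", "/srv/backups"

# xfs_quota reads project definitions from /etc/projects and /etc/projid.
with open("/etc/projects", "a") as f:
    f.write(f"{PROJ_ID}:{TREE}\n")
with open("/etc/projid", "a") as f:
    f.write(f"{PROJECT}:{PROJ_ID}\n")

# Tag the directory tree with the project, then set a hard block limit.
subprocess.run(["xfs_quota", "-x", "-c", f"project -s {PROJECT}", MOUNT], check=True)
subprocess.run(["xfs_quota", "-x", "-c", f"limit -p bhard=100g {PROJECT}", MOUNT], check=True)
```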
Why not zfs?
I have no experience with ZFS and didn’t know it supported project quotas too. I found out about XFS from an LPIC book where it said that XFS, unlike other filesystems, also supported project quotas (this was about 10 years ago). It’s been working fine for me the past few years, so I’ve never looked for alternatives. Now I’m curious.
Fairly sure zfs has been able to do dataset quotas for about 20 years, totally worth looking into
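Since every directory you’d quota can just be its own dataset, the ZFS equivalent is about two commands; a sketch with an invented pool/dataset name:

```python
import subprocess

# Give a tenant its own dataset and cap it at 50 GiB.
subprocess.run(["zfs", "create", "tank/projects/acme"], check=True)
subprocess.run(["zfs", "set", "quota=50G", "tank/projects/acme"], check=True)
```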
I am pretty sure certain apps want XFS. One I can think of is Veeam, which leverages XFS block cloning for its Fast Clone feature in some of its stuff.
To me, zfs is like the Gentoo of file systems. If you actually use the zfs features and do a lot of digging and experimentation before you go all in on it, it’s not bad; it really can be quite good. If someone wants a filesystem that they format and forget, ext4 and xfs are still solid options. I used to use ext4 for most of my filesystem needs and xfs for my long term storage on top of mdadm. I just really wanted zfs snapshots.
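If you’ve never played with them, zfs snapshots are about as simple as it gets. A minimal nightly-snapshot sketch (pool/dataset names invented):

```python
import subprocess
from datetime import datetime, timezone

DATASET = "tank/data"  # hypothetical dataset

def take_snapshot(dataset: str) -> str:
    """Create a timestamped snapshot; snapshots are instant and start at ~0 bytes."""
    name = f"{dataset}@auto-{datetime.now(timezone.utc):%Y%m%d-%H%M}"
    subprocess.run(["zfs", "snapshot", name], check=True)
    return name

# Old file versions stay browsable under <mountpoint>/.zfs/snapshot/,
# and `zfs rollback` reverts the live dataset to any snapshot.
print("created", take_snapshot(DATASET))
```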
ZFS is great if you need RAID with parity: its raidz1 and raidz2 (the raid5 and raid6 equivalents) are best in class. I have a NAS build where it makes sense to use those.
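Creating one is a one-liner; the device names below are placeholders:

```python
import subprocess

# Six-disk raidz2 pool (survives any two disk failures); devices are examples.
subprocess.run(
    ["zpool", "create", "tank", "raidz2",
     "/dev/sda", "/dev/sdb", "/dev/sdc", "/dev/sdd", "/dev/sde", "/dev/sdf"],
    check=True,
)
```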
If you only need snapshots, go with btrfs. Just stay away from its raid5 and raid6 modes, because they’re unstable and tend to lose data.