Choosing a filesystem

I am setting up an UrBackup server on a Raspberry Pi 4 (8 GB) with a 64-bit OS. I have it all running with a temporary USB stick and am awaiting my HDD, which will arrive next week.

I will only back up my Win10 machine, locally.

The HDD will be used ONLY by UrBackup; no RAID planned, just a 6 TB WD Red disk.
The question is what filesystem to use on the new HDD.

Ext4/XFS: native to Linux, but no transparent compression. Not great for big drives (?).

NTFS: this makes sense since my main rig is Windows. If something were to happen, worst case I can just plug the HDD into the Windows machine and use it there. You can do that with ext4 and btrfs too, although with ext4 I'd have to copy all the files off rather than use them directly from the disk. Downside: I won't be able to set permissions per file, but that won't really matter since only UrBackup will use the drive; just mount it as urbackup:urbackup.
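For what it's worth, a sketch of that mount idea (this assumes the ntfs-3g driver, and the device name and mount point below are placeholders I made up): NTFS stores no POSIX uid/gid, so ownership is fixed for the whole filesystem at mount time.

```shell
# Hypothetical example: make the urbackup user "own" every file on an NTFS
# volume. NTFS has no per-file POSIX ownership, so uid/gid apply globally.
# /dev/sda1 and /mnt/backupdata are placeholder names.
sudo mount -t ntfs-3g -o uid=$(id -u urbackup),gid=$(id -g urbackup) \
    /dev/sda1 /mnt/backupdata
```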

Btrfs: this is supposedly what works best with UrBackup (?). Otherwise similar to NTFS?

ZFS doesn't seem to offer any advantages for me; it would rather complicate things without reason (?).

I have researched the subject but can’t really form an opinion. Please give me your input. :slight_smile:

I had performance problems using an NTFS disk, so I changed it to ext4, and performance now is great.
NTFS → 1.5TB backup took almost 5 days to complete.
Ext4 → the same 1.5TB took 17 hours to complete.

+1 for ext4.

NTFS gave me issues when I tried to make copies of my backups on other media that weren't NTFS.

I tried btrfs for a while, but the whole system started going haywire as I neared max capacity; I'm guessing I didn't leave enough free space for btrfs to do its thing.
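(For anyone hitting the same wall: plain `df` can be misleading on btrfs because space is allocated in data/metadata chunks. A quick way to see the real breakdown, with the mount point as a placeholder:)

```shell
# Show btrfs data/metadata chunk allocation and true unallocated space,
# which plain df can hide. /mnt/backupdata is a placeholder mount point.
sudo btrfs filesystem usage /mnt/backupdata
```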

I believe that with a single drive, ZFS is actually worse, because it's not great at recovering from single-disk errors; it was designed to be resilient across multiple disks with various redundancy data.

We are getting closer to forming an opinion.

Decision: Not to use NTFS.

To decide: contemplating trying out btrfs. My disk will be 6 TB, and the actual data to be backed up is about 2-2.5 TB.

The reason I'm "against" using ext4 is that I won't be able to use the fancy snapshot capabilities of btrfs. But then again, maybe it's an overglorified thing to even use?

Additional thoughts are more than welcome, since I still haven't made a decision.

I’m using ext4 with hard links for file backups.
Sure, it’s slower than making a snapshot, but it’s working well and is stable.
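As a rough, self-contained demo of how hard-link backups save space (throwaway paths under /tmp, not UrBackup's actual layout):

```shell
# Two "backup generations" sharing one on-disk copy of an unchanged file.
rm -rf /tmp/hl-demo
mkdir -p /tmp/hl-demo/backup1
echo "unchanged data" > /tmp/hl-demo/backup1/file.txt
# cp -al copies the directory tree but hard-links the files instead of
# duplicating their data (works only within a single filesystem).
cp -al /tmp/hl-demo/backup1 /tmp/hl-demo/backup2
# Both paths now reference the same inode, so the link count is 2.
stat -c '%h' /tmp/hl-demo/backup1/file.txt    # prints 2
```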

I didn't have a good experience with btrfs and ZFS; they (especially the latter) require knowledge of how to configure and maintain them, which I don't have. They require more memory and CPU, and in case of failure you risk losing all the data stored on them.

Just a reminder that with UrBackup, as with many other parts of life, backups expand to fill the space available. Normally it’s good to have as many older backups as possible to guard against malware that lies in wait for some time in an attempt to infect all backup copies before striking, or against things like files needed once a year suddenly going missing.

You can manage the tradeoffs between keeping lots of backups and available storage with the automatic archive system, assigning a reasonable expiration date to selected backups.


That is exactly why I'm setting this up; ransomware is the reason I'm making these backups.
I am under the impression that btrfs is the best way NOT to clog your drive?
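From what I've read, the mechanism would be btrfs snapshots: the server can snapshot the previous file backup instead of copying or hard-linking it. A hedged sketch (the paths are made-up placeholders, not UrBackup's real directory layout):

```shell
# A writable snapshot is created near-instantly and shares all unchanged
# data blocks with its source; new space is consumed only where files
# diverge from the previous backup. Paths below are placeholders.
btrfs subvolume snapshot /mnt/backupdata/client/backup_1 \
                         /mnt/backupdata/client/backup_2
```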

I've been using UrBackup for quite a few years now on my backup server, with ~60 TB of raw storage configured in a RAID array for ~30 TB of usable space. I started with NTFS for a few years, then migrated to ext4 for a few years, and for the last couple of years I've been on btrfs. Of the three filesystems, I've been happiest with btrfs. It works extremely well for me, and I find that I use significantly less storage space with btrfs.

I’ve also used zfs, but not for my UrBackup server. I don’t think it’s a filesystem I’d normally consider using for my UrBackup server in my particular use case.


Great answer Chad_Neeper!
I think I just made my decision.
I'll try btrfs; if it doesn't work, I'll tear my hair out and redo it with ext4.

Thank you all for your input!

I just realized I have one more question about this.

When running file backups to a btrfs volume, should I change the transfer mode to "RAW"? The manual doesn't clearly state what the setting should be.

My drive got delayed and arrives in about 3 days, would be awesome to set up the backup correctly immediately. :slight_smile:

From my experience, btrfs was not stable until the 5.19 kernel. Before then, I would experience btrfs going into read-only mode and being unable to recover from it; backing up and restoring to a new filesystem was the only solution I could find. However, once the 5.19 (and even the 6.0) kernels dropped, btrfs seems to be better behaved. I haven't had it go read-only on me once since then. So my advice to you is to use the newest kernel you can get with btrfs. It seems many issues were resolved beginning with the 5.19 kernel series.

Just happened to run across this old post again and would like to add that I've more recently enabled compression on my btrfs data filesystem. I have CPU power to spare, so the extra processing overhead isn't an issue for me. I prefer to transparently compress the data as tightly as I reasonably can to reduce storage use. I enabled the feature months ago and have been watching my on-disk usage steadily decrease ever since. (I didn't bother trying to compress the already-stored data, figuring backup churn would eventually replace much of the uncompressed stuff.)

My UrBackup instance shows storage usage data going back two full years. For the past ~15 months I've been steady between 19T and 20T of storage usage. I can see a very distinct, steady downward trend over the past several months (after I enabled compression) and am now showing a hair over 18T with no sign of leveling off. (No, I don't have actual btrfs stats... there is just WAY too much data to try to get an accurate reading. The UrBackup stats are good enough for me as anecdotal evidence.)

As I type this, I’m in the middle of building a replacement UrBackup server and have again used btrfs for the filesystem holding the backed up data (ext4 for the database) and have opted to enable compression right from the start:

/dev/sdwhatever /mnt/backupdata btrfs defaults,compress=zstd:15,compress-force=zstd:15 0 2

I'm using compress-force with zstd at level 15 (which seems to be the maximum level I can set on my system) so that btrfs doesn't give up prematurely on any files. I'd rather have it try to compress each file fully than give up early based on the sample data at the start of the file. I've seen no noticeable performance impact.
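If you want to check what the compression is actually saving on disk, this assumes the separate compsize tool (packaged as btrfs-compsize on some distros); the path is a placeholder:

```shell
# Report compressed vs. uncompressed on-disk usage for a btrfs path.
# /mnt/backupdata is a placeholder; requires the compsize tool.
sudo compsize /mnt/backupdata
```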

Disclaimer: YMMV, so do your own homework. I'm by no means a filesystem or even Linux guru, but I do have about 37 years of intense IT background, have been a hands-on IT consultant for the past 27, and have owned the company for the past 19. I do my homework to my own satisfaction and to meet my own needs. I know enough to know that I don't know what I don't know, and the thing I know best is that there is far more that I don't know than I do know!

2 cents.