I’ve just switched my UrBackup server from BTRFS to EXT4. Unfortunately BTRFS was too unreliable for me: it kept going into read-only mode, and I had stopped trusting it.
I have 4 x 6TB disks set up in RAID10, which store the backup files.
The database itself is on a separate disk and doesn’t appear to be the bottleneck in my case.
The disks are running pretty quickly, but they’re clearly the bottleneck. With EXT4, UrBackup creates hard links (not symlinks) for unchanged files in each backup, as far as I understand, and I expect it’s this linking step that takes so long.
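For what it’s worth, here’s a minimal sketch of that hard-link mechanism (hypothetical paths, not UrBackup’s actual layout), which shows why backing up mostly-unchanged files is still metadata-heavy:

```sh
# A "full" backup writes the file once
mkdir -p /tmp/backups/2024-01-01 /tmp/backups/2024-01-02
echo "payload" > /tmp/backups/2024-01-01/file.bin

# An incremental backup of the unchanged file is just a hard link
ln /tmp/backups/2024-01-01/file.bin /tmp/backups/2024-01-02/file.bin

# Both entries share one inode (link count 2): the data exists once,
# but each backup still costs one metadata operation per unchanged file
stat -c '%h %i %n' /tmp/backups/2024-01-0*/file.bin
```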
Is there anything I can do to improve performance significantly? I suspect not, and that I’ll need to wait until UrBackup supports XFS reflinks or something similar?
Even though I didn’t test it myself (I’m running UrBackup on a W2KR2 server), some folks bet their lives on ZFS.
A quick Google search turned up some benchmark charts where ZFS is about 100% faster in some scenarios.
The issue with testing is that I don’t think it’s easy for me to move the existing backed-up data to another file system? I think I’d need to reconfigure UrBackup and then perform a full backup of each client again?
My only concern with ZFS was having enough memory for dedup. I suppose I could disable dedup and UrBackup would still run much faster on ZFS than EXT4? Doesn’t UrBackup effectively dedup the files anyway?
Yeah, the migration procedure might be a bit annoying, as you’d have to store the files somewhere else while reformatting the volume (at least I don’t know of a way to convert ext4 to ZFS in place).
But once that’s done, just mount the new ZFS volume at the same mount point the ext4 volume used, copy everything back onto the ZFS, and start the UrBackup service.
That way you wouldn’t have to change any config at all.
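As a rough sketch, assuming the backups live under /media/backups and there’s enough spare space under /mnt/spare to hold a full copy (paths, pool name and device names are all placeholders):

```sh
# Stop UrBackup so the file set stays consistent
# (service name may differ depending on the distro/package)
sudo systemctl stop urbackupsrv

# Copy everything to temporary storage; -H preserves hard links,
# which matters because UrBackup dedups backups via hard links
sudo rsync -aH /media/backups/ /mnt/spare/backups/

# Recreate the volume as ZFS; two mirror vdevs give the RAID10 equivalent
sudo zpool create -m /media/backups backuppool \
    mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde

# Copy the data back onto the new pool and restart the service
sudo rsync -aH /mnt/spare/backups/ /media/backups/
sudo systemctl start urbackupsrv
```

Be aware that rsync with -H can get slow and memory-hungry once there are millions of hard links, so expect the copies themselves to take a while.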
Even without dedup, ZFS should be faster than ext4, I guess - at least according to what I’ve read in the past.
I don’t know much about UrBackup(’s dedup) on Linux, but ZFS dedup in general just needs to be given enough RAM, if I remember correctly. And since RAM is pretty cheap these days, that shouldn’t be the problem?
Here’s a blog post that should help you calculate your RAM needs.
(Rule of thumb: 5GB of RAM per TB of disk space - so around 60GB of RAM for your 12TB of usable RAID10 space?)
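Once the data is sitting on a ZFS pool, you can also sanity-check that rule of thumb against your real data: `zdb -S` simulates dedup without changing anything (the pool name `backuppool` is just a placeholder):

```sh
# Simulate dedup on an existing pool and print a DDT histogram
# plus the projected dedup ratio; nothing is modified
sudo zdb -S backuppool

# Rough RAM estimate: each dedup-table entry costs about 320 bytes in core,
# so multiply the number of allocated blocks zdb reports by ~320 bytes
```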
Anyway, what I don’t know is whether ZFS supports offline dedup (the way btrfs or the W2KR2 server does it). That would make the RAM requirement moot, as you could run a cron job that dedups at night.
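On btrfs, for example, such a nightly pass can be scheduled with duperemove; a hypothetical crontab entry (path and schedule are just placeholders):

```sh
# /etc/cron.d/nightly-dedup: offline dedup pass every night at 02:00
# -d actually dedupes, -h prints human-readable sizes, -r recurses
0 2 * * * root duperemove -dhr /media/backups >> /var/log/duperemove.log 2>&1
```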
I had success with these two on our old backup server (Ubuntu Server) and increased the read speed by about 12Mbit/s, but that’s knowledge from a really long time ago.