Best server OS and filesystem for Urbackup Server?


I’m testing UrBackup Server and it works fine. I’ve used a server with Windows Server 2019, two 2 TB HDDs in RAID 1, ReFS on the storage volume and deduplication enabled, but I’ve read some posts recommending Linux instead.

Which would be the best OS+filesystem combination for UrBackup? It will mainly be used for internet backups (image and file), and good storage management (space efficiency) would be needed. I guess HDD performance is not important since the bottleneck is going to be the internet line (1 Gbps), but might an SSD be a better option in that scenario?

I have a test server with Ubuntu Server 20.04, should I give it a try?


I’m always using ZFS for my UrBackup servers because it automagically provides data integrity, good RAID options where needed, data compression to save space, and snapshot+replication for periodic USB dumps.

Ubuntu has the best ZFS support on Linux so far; on other distros a kernel update can leave your ZFS pool unavailable. Oh, and don’t bother with ZFS deduplication if you choose that route.
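For reference, a minimal sketch of that kind of setup (pool name, disk IDs and snapshot name are placeholders; adjust to your hardware):

```shell
# Mirrored pool ("RAID1") from two disks, referenced by stable IDs
zpool create backup mirror /dev/disk/by-id/ata-DISK1 /dev/disk/by-id/ata-DISK2

# Cheap transparent compression on; dedup off, as advised above
zfs set compression=lz4 backup
zfs set dedup=off backup

# Periodic USB dump via snapshot + replication to a pool on the USB drive
zfs snapshot backup@2021-06-01
zfs send backup@2021-06-01 | zfs receive usbpool/backup
```

Later dumps can use `zfs send -i` with the previous snapshot to replicate only the changes.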

UrBackup has some ZFS integration built in (link), though I’m not using it.


The best OS is the one you know how to maintain. If you’ve never done any work with Linux and need the backups to be reliable (for a business or whatever), then I’d suggest sticking with Windows as the server.
If you’ve had some experience with Linux, or it’s not 100% necessary to have the backups working 100% of the time, then consider playing with Linux.


I have experience with both, but sometimes the same app works better on Windows, other times on Linux.

I’m asking about better performance and better storage management (deduplication). And in the case of Linux, whether there is a preferred distro.

I don’t remember where, but somewhere I believe I read that BTRFS is the best-suited filesystem for UrBackup.
So it may come down to the question which distribution offers the most mature BTRFS implementation…

I’m testing on Ubuntu+ZFS. So far the performance seems equivalent, but storage is managed better on Windows (incredible but true). Still trying…

It is certainly worth a test for you to see what works best for your environment.

I have only run UrBackup on Windows at this point, and it has been rock solid across a number of environments. From what I can see, the only real OS preference for the product is based on what the customer/administrator needs.


I use Debian 10 + BTRFS. It has worked fine for more than 2 years.

Official documentation:
Prefer btrfs, because UrBackup can put each file backup into a separate sub-volume and is able to do a cheap block based deduplication in incremental file backups.

Because of that I chose btrfs.
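To illustrate what the documentation means, here is a rough sketch of the per-backup sub-volume pattern on btrfs (paths and names are made up for the example; UrBackup does this internally when configured for btrfs):

```shell
# Assumes the backup storage is mounted at /media/backups on btrfs

# First full backup of a client goes into its own sub-volume
btrfs subvolume create /media/backups/client1_backup1

# An incremental backup starts as a writable snapshot of the previous one:
# only changed blocks get new storage, unchanged blocks stay shared (CoW)
btrfs subvolume snapshot /media/backups/client1_backup1 /media/backups/client1_backup2

# List all backup sub-volumes
btrfs subvolume list /media/backups
```

Deleting an old backup is then just `btrfs subvolume delete`, and shared blocks are freed only when no remaining sub-volume references them.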

Downsides of btrfs:
Btrfs provides deduplication, and each backup creates a separate sub-volume (something like a snapshot/shadow copy in Windows). But when you have many UrBackup clients and keep many backups, the storage system starts to work very slowly.

Deduplication creates a lot of random IO for both reads and writes.
One of my servers for example:
I have 25 clients on one server and keep 20 backups each. I use two 14 TB WD SAS 7200 rpm HDDs in RAID 1.
10 TB of the 14 TB is used.
It now works slowly.

Because of this, use SSD storage if you can.

@Michal What storage hardware configuration do you use? Do you have problems with slow IO wait?

@Dmitrius7 nothing fancy. Repurposed Dell OptiPlex 90* and 70* series with SSD system and NAS grade SATA drives in ZFS mirror mode as backup storage. On one installation I was affected by SMR drama in WD Reds but no issues other than that.

A lot on here seem to use Debian or Ubuntu. Performance mainly comes down to filesystem differences these days, so the key advantage of Linux is choice. You’ll need to experiment a little as different use cases and hardware will give different results.

Most of the Linux filesystems, just like the kernel itself, can be tweaked to get some gains by adjusting default settings to better match your hardware and needs.

If you want deduplication, it depends on your data. My setup is Ubuntu 20.04.2 LTS with ZFS. The 30 backup hosts are a mix of Win10/2016/MSSQL/Exchange and Linux. Dedup works fine with a ratio of 3.25, plus compression. In other words, 5.5 TB physical holds 27 TB logical.
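If you want to check what ratios your own pool achieves, ZFS reports them directly (pool name `tank` is a placeholder):

```shell
# Compression ratio for the dataset
zfs get compressratio tank

# Pool usage plus the dedup ratio
zpool list -o name,size,alloc,free,dedupratio tank

# Dedup table (DDT) statistics -- useful for estimating RAM cost
zpool status -D tank
```

These are read-only queries, so they are safe to run on a live backup server.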

If it comes to ZFS, you could also take a look at Proxmox.

It’s essentially Debian with virtualisation support in the form of a great web GUI. Boot the Proxmox installer from USB, select ZFS during installation, choose the devices and a RAID level, and that’s it. I have a ZFS-mirror (“RAID1”) SSD setup where Proxmox even takes care of syncing the UEFI data across the disks. RAID levels range from RAID10 to RAIDZ3, which is great if you have many spinning disks as storage (and ZFS also supports a transparent cache on SSD and much more).

In the Proxmox web GUI, a single web-shell command (pveam update) makes Turnkey Linux templates available. There you can select e.g. the Debian 11 template, click download, then create a container, associate some CPU and RAM with a few clicks in the wizard, and possibly a big filesystem :slight_smile: . Boot the container and install UrBackup in it. You even get a web shell for each VM, all for free. If you ever need to update, you can use a second container to test first, or use Proxmox snapshots to roll back changes if the maintenance window is exceeded before you get the new version working.
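The same steps can also be done from the Proxmox shell. A rough sketch (the template filename, container ID, storage names and sizes are placeholders; check `pveam available` for the real template name on your node):

```shell
# Refresh the template index, then list what is available
pveam update
pveam available --section turnkeylinux

# Download a template (filename is an example, not the exact current one)
pveam download local debian-11-standard_11.0-1_amd64.tar.gz

# Create and start a container with some CPU, RAM and a big rootfs on ZFS
pct create 101 local:vztmpl/debian-11-standard_11.0-1_amd64.tar.gz \
    --hostname urbackup --cores 2 --memory 4096 \
    --rootfs local-zfs:500 --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 101
```

The GUI wizard does the same thing; the CLI is just handy for repeating the setup on a test container.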

ZFS dedup saves 75% storage space? That’s amazing, I thought UrBackup already dedups data? At least it looks that way in my tests. But I did not test multiple similar VMs. Is this a point where UrBackup does not dedup, so ZFS helps?

ZFS eats CPU, so you might want to research real-life ZFS performance. I’ve seen people buy a $3000 QNAP only to find it’s slower with ZFS than the 5-6-year-old NAS it replaced, which had a much older CPU.

We have a huge NAS ($6000) with EXT4 on RAID 6, and it’s lightning fast: we get a sustained 100 Mb/s over the LAN to it, and that’s with standard spinning WD Red drives in the RAID, not SSDs. I’ve seen guys with an Intel Xeon and ZFS only getting 20 Mb/s throughput due to the load ZFS places on the CPU.

@sdettmer I think it’s because UrBackup dedup works at the file level while ZFS dedup works at the block level. Keep in mind that ZFS dedup is hungry for CPU and RAM. I’m using an AMD EPYC 7232P CPU with 128 GB RAM plus a 1 TB L2ARC on NVMe. All storage is on enterprise SATA SSDs in mirrored striped vdevs.