Best Practice? - Windows De-duplication on Source

This is my first post. I have been testing UrBackup Server on different platforms and am preparing to make the move to a full implementation over the next week (or so). I purchased a commercial license for our main server to be backed up.

This is my setup:

1 Windows Server 2016 domain controller (end of life, but still in use) warehousing approx. 19.88 TB of data, which with Windows deduplication comes in at about 8 TB of actual storage consumption.

2 UrBackup servers.
OFFSITE: One box connecting through VPN (could alternatively use the internet backup config). Currently running a Windows 10 Pro UrBackup server with redundant NTFS Storage Spaces.

ONSITE: QNAP NAS (current backup target using Windows Server Backup; to be migrated to UrBackup after a successful offsite backup).

Here’s the question/issue:

  • How does UrBackup handle Windows-deduplicated data? Is it ‘reconstituted’ first? What about image backups? Are they block-level, and would they run faster? At present, I’m being told it will take 37 days to do my backup, consuming about 20 TB, using the Windows 10 server doing a file backup.

  • Should I move to a Linux server instead?

I can’t change the source system’s OS, and turning off deduplication is not feasible at present, but I can change the server config. Please advise.


I assume yes. The UrBackup client just accesses the files, so it knows nothing about the underlying deduplication (or, e.g., compression). Not sure how it works (would be interested myself), but I guess the underlying file system on the UrBackup server needs to handle this, e.g. by utilizing btrfs or ZFS deduplication on Linux, or Windows deduplication.
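If the server-side file system has to recreate the savings, one option along those lines is ZFS with dataset-level deduplication. A rough sketch (pool name, device, and mountpoint are made-up examples, not anything UrBackup requires; note that ZFS dedup is famously RAM-hungry, so size the server accordingly):

```shell
# Example only: create a ZFS pool and a deduplicated, compressed
# dataset to use as the UrBackup storage path.
zpool create backuppool /dev/sdb                      # device is an example
zfs create -o dedup=on -o compression=lz4 backuppool/urbackup
zfs set mountpoint=/media/backup backuppool/urbackup

# Later, check how much dedup is actually saving:
zpool list -o name,size,alloc,dedup backuppool
```

This moves the dedup work to the backup server, so the ~20 TB of reconstituted source data would be shrunk again on write, independent of the Windows dedup on the source.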

What about image backups? Are they block level and would run faster?

I think so. With a commercial UrBackup CBT license you also get changed block tracking on Windows, which should substantially speed up incremental image backups.

I went with Ubuntu 22.04 using btrfs for the server’s backup storage. I’m getting between approx. 200 and 500 Mbit/s according to the UrBackup interface.
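For anyone following along, a minimal sketch of how such a btrfs backup volume could be prepared (device path and mountpoint are examples; adjust to your hardware):

```shell
# Example only: format a spare disk as btrfs and mount it with
# zstd compression as the UrBackup storage path.
mkfs.btrfs /dev/sdb
mkdir -p /media/backup
mount -o compress=zstd /dev/sdb /media/backup

# Make the mount persistent across reboots:
echo '/dev/sdb /media/backup btrfs compress=zstd 0 0' >> /etc/fstab
```

When the storage path sits on btrfs, UrBackup can use snapshots/reflinks for its incremental file backups instead of symlink-based pooling, which was part of the reason I chose it.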

Is it normal for the network read I/O and disk write I/O to happen in sequence? Is it possible for them to happen in parallel? That could double the backup speed…

Bump! 🙂

After a few months of use, I can say the setup is stable. I’m using a hybrid system of UrBackup and rclone: rclone for the terabytes of mostly static, read-only archive data, and UrBackup for systems and folders that are actively used and need daily backups. rclone can do multi-threaded transfers, which removes the I/O bottleneck that UrBackup experiences (see previous post).
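In case it helps someone, the rclone side of the hybrid looks roughly like this (remote name and paths are made up; tune the counts to your disks and link):

```shell
# Example only: sync the static archive share with parallel file
# transfers and multiple streams per large file.
rclone sync /mnt/archive backupremote:archive \
    --transfers 8 \
    --multi-thread-streams 4 \
    --checkers 16 \
    --progress
```

`--transfers` moves several files at once and `--multi-thread-streams` splits big files into parallel streams, which is what keeps network reads and disk writes overlapping instead of alternating.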