Speed up incremental backup on large server?


I’m trying to optimize my UrBackup server so that it backs up one of my servers more quickly. Below are my settings (I thought images would be easier to read than a ton of text =))

Is there anything you can see that I could tweak?

I’m running on a 64-bit Windows 10 PC with 16 GB of RAM (backing up Ubuntu 20.04 servers). UrBackup is on a USB 3.2 external drive (crazy fast), and the data is on another USB 3.2 drive.

I think the bottleneck is working out which files have actually changed. Currently, it’s taking over 4 hours to do an incremental backup. The files it backs up on a FULL backup are maybe 60 GB, and the changed files should only be 1 or 2 GB max.

I’ve already set up a lot of exclusions, so I’m not sure I can trim much more off there. I’m just wondering if I’ve missed an easy win somewhere?



You can try the beta feature of increasing threads for parallel file backup. It should be able to improve file backup speed. However, since this is a beta, please make sure to test it out first before using it for production.
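UrBackup’s parallel hashing is internal to the server, but the reason extra threads help can be sketched in Python. This is purely illustrative, not UrBackup’s code; the `hash_files` helper and the worker count are assumptions:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor


def sha256_of(path):
    # Stream the file in 1 MiB chunks so large files don't load into RAM.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return path, h.hexdigest()


def hash_files(paths, workers=4):
    # Hash several files concurrently. During a change-detection scan
    # the server alternates between waiting on I/O and computing hashes,
    # so overlapping files across threads can shorten the total scan.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return dict(pool.map(sha256_of, paths))
```

With fast USB 3.x storage, single-threaded hashing often leaves the drive idle, which is exactly the gap the parallel-backup beta feature aims to close.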

Thanks - I’ll give that a go :slight_smile:

I’m completely new at this as well, but I have been scouring the manual and YouTube videos. So take this advice with a grain of salt, and (admins) if I’m wrong please let me know the error of my ways.
Although I’m still working out a deduplication issue, apparently if you format your drives as btrfs, incremental backups are supposed to be considerably quicker thanks to the “raw copy-on-write” feature of btrfs.

From section 11.7.2 of the manual (although I’m struggling to get the dedup working for me):
If UrBackup detects a btrfs file system it uses a special snapshotting file backup mode. It saves every file backup of every client in a separate btrfs sub-volume. When creating an incremental file backup UrBackup then creates a snapshot of the last file backup and removes, adds and changes only the files required to update the snapshot. This is much faster than the normal method, where UrBackup links (hard link) every file in the new incremental file backup to the file in the last one. It also uses less metadata (information about files, i.e., directory entries). If a new/changed file is detected as the same as a file of another client or the same as in another backup, UrBackup uses cross-device reflinks to save the data in this file only once on the file system. Using btrfs also allows UrBackup to back up files changed between incremental backups in a way that only changed data in the file is stored. This greatly decreases the storage amount needed for backups, especially for large database files (such as e.g. the Outlook archive file).
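The difference between the two modes the manual describes can be demonstrated with plain shell commands. This is an illustrative sketch only, not UrBackup’s actual commands, and the paths in the btrfs comments are hypothetical:

```shell
# The "normal method": hard-link every unchanged file from the last
# backup into the new one (one directory entry per file, per backup).
mkdir -p backup_1
echo "data" > backup_1/file.txt
cp -al backup_1 backup_2          # GNU cp: -a archive mode, -l hard-link

# Hard links share one inode, so the file data is stored only once:
stat -c %i backup_1/file.txt backup_2/file.txt   # prints the same inode twice

# On btrfs, UrBackup can instead snapshot the whole previous backup in
# a single operation and store changed files as reflinks (CoW sharing):
#   btrfs subvolume snapshot /backups/client/1 /backups/client/2
#   cp --reflink=always changed_file new_copy    # CoW filesystems only
```

The snapshot is one filesystem operation regardless of file count, while the hard-link method touches every file, which is why btrfs incrementals scale so much better on trees with huge numbers of files.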

Thanks. I’m on a Win10 machine and the drives are NTFS. I think the problem for me is the hashing, where it works out the differences. Interestingly, the server I’m having issues with has 66 GB of files backed up, while another server runs fast for incremental backups. I do have a LOT of exclusions going on, so maybe that’s also part of it: the server with the issue has hundreds of thousands of HTML files that are rebuilt daily, and I’ve told it not to back those up as it’s a waste. But maybe that affects backup speed, since it’s still looking at them even if it isn’t backing them up.
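That suspicion about exclusions is plausible: a scanner typically has to enumerate every directory entry before it can decide to exclude it. A generic sketch (not UrBackup’s implementation; the `EXCLUDES` pattern and `files_to_back_up` name are made up for illustration):

```python
import fnmatch
import os

EXCLUDES = ["*.html"]  # hypothetical exclusion pattern


def files_to_back_up(root):
    # Every directory entry is still read from disk and matched against
    # the exclusion list. Exclusions save transfer, hashing, and storage,
    # but not the cost of enumerating hundreds of thousands of entries.
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            if any(fnmatch.fnmatch(name, pat) for pat in EXCLUDES):
                continue  # excluded, but we still paid to list it
            yield os.path.join(dirpath, name)
```

So with hundreds of thousands of daily-rebuilt HTML files, the directory walk itself can dominate the incremental scan even though none of those files are transferred.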