How to enable deduplication?

I’m currently testing out urbackup and am struggling a bit to get deduplication working.

As a test, I did 3 images of a Windows laptop back to back. The backed-up drive holds 30 GB of data, and each backup (cVHD) is 15 GB in size. I expected actual disk usage of under 20 GB for all three backups, since no meaningful amount of data changed between them.

However, it uses the “full” 45 GB, as seen below.

The UrBackup server is running on a Debian VM and backs up to a ZFS share accessed via SMB (I’m going to switch to NFS). I know that ZFS supports block-level deduplication, but since the same pool is also used as a NAS, I don’t want to enable it there.
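For reference, enabling it would look roughly like the sketch below; dedup is a per-dataset property in ZFS, so it could be limited to a dataset dedicated to urbackup (the pool/dataset names are placeholders). The RAM and CPU cost of the dedup table is what puts me off:

```
# Dedicated dataset for urbackup storage, with dedup enabled only there
# ("tank/urbackup" is a placeholder name)
zfs create tank/urbackup
zfs set dedup=on tank/urbackup

# Check the property and the pool-wide dedup ratio (DEDUP column)
zfs get dedup tank/urbackup
zpool list tank
```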

Quick side question: how do I browse an image in the web interface?
I’ve found this blog post, but the “Mount Image” button it describes doesn’t show up for me.

I’ve now made the switch to NFS to rule out problems with SMB’s mfsymlinks. Sadly, deduplication still doesn’t seem to work. Since I’m also unable to find any settings related to deduplication, I’m wondering whether I’m doing something wrong here.

I know everyone here is helping in their free time and I don’t want to be impertinent, but getting dedupe working is the last thing I need before I can make the switch to urbackup.

If it helps, I can try different things, such as backing up directly to a drive instead of over NFS, using another version of urbackup, or attaching logs. If you have any idea, just let me know what you need or what I should test.

Deduplication is not a direct feature of urbackup.
However, if the urbackup server’s underlying FS is btrfs, it will (on each startup) test for btrfs and, if found, use btrfs’s inherent space-saving features. So it isn’t a “switch” that you turn on directly.
If you wish to use an FS that does support dedupe, then you will have to turn on / invoke the necessary feature in that filesystem yourself.
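If you want to double-check what the server actually sees as its backup storage, something like the following will tell you (the path is just an example, substitute your configured backup folder):

```
# Print the filesystem type of the folder urbackup writes its backups to
# (replace /media/backups with your actual backup storage path)
stat -f -c %T /media/backups
df -T /media/backups
```

Over an NFS or SMB mount this will report nfs or cifs, so any btrfs or ZFS features on the NAS side are not something the server can detect or take advantage of directly.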

My experience has been mainly with file (not image) backups on btrfs, with a dockerised urbackup server. Under those circumstances, the client makes a local snapshot via whatever method was chosen at client setup and then works out what needs to be backed up by comparing against the previous backup in a “chat” with the server. It then sends only what is needed, and thanks to btrfs subvolumes a complete backup containing all previous data is built, at the storage cost of only the changed data. I urge you to read up on btrfs subvolumes should you like the idea of this route; see the sketch below.
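To make the subvolume idea concrete, here is a minimal sketch (the paths are made up) of the copy-on-write behaviour that gives you “full” backups at roughly incremental cost:

```
# First backup goes into its own subvolume
btrfs subvolume create /backups/client1_first

# ...full backup data is written into /backups/client1_first...

# The next backup starts as a snapshot of the previous one: it looks like a
# complete copy, but shares all unchanged extents with the original (CoW)
btrfs subvolume snapshot /backups/client1_first /backups/client1_second

# Only data that is changed or added afterwards consumes new space;
# 'btrfs filesystem du' shows shared vs. exclusive usage
btrfs filesystem du -s /backups/client1_first /backups/client1_second
```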

To get good dedupe of image backups you would need block-level dedupe. I know that Red Hat has this for LVM volumes (VDO), regardless of the FS built on top. I cannot comment on ZFS.

Oh, that’s a bummer. I had really hoped urbackup would have built-in deduplication.
I’m using ZFS as the filesystem, and while it does have block-level deduplication built in, I don’t want to turn it on for the whole pool. The extra CPU and RAM needed to keep the same performance it has now would cost more than simply buying new hard drives.

Anyways, thank you!