So I found a few more oddities while setting up urbackup with a large 12 TB ZFS pool…
1. Dedup kills performance and hangs the machine…
Problem:
I assume this is due to the obscene RAM requirements of dedup (between 2 GB and 10 GB of RAM per TB of storage), with or without an SSD cache to spread the load. In my specific use case (12 TB zpool, 16 GB SSD cache, 16 GB of RAM), typical image backups took 4x longer and the server would become unresponsive (frozen), most likely because ZFS dedup exhausted the available RAM.
Suggestion:
Modify the urbackup documentation to warn users about the RAM requirements of ZFS dedup.
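For scale, the rule-of-thumb math behind that warning can be sketched as follows. The ~320 bytes of RAM per block is a commonly quoted estimate for dedup-table (DDT) entries, not an exact figure, and the sketch assumes the default 128 KiB recordsize:

```shell
#!/bin/sh
# Rough estimate of dedup-table RAM for the 12 TiB pool above.
# ~320 bytes per block is a rule of thumb; real usage depends on
# recordsize and how much data is actually written.
POOL_BYTES=$((12 * 1024 * 1024 * 1024 * 1024))  # 12 TiB
RECORDSIZE=131072                               # default 128 KiB records
BLOCKS=$((POOL_BYTES / RECORDSIZE))
DDT_BYTES=$((BLOCKS * 320))
echo "$((DDT_BYTES / 1024 / 1024 / 1024)) GiB of RAM for the DDT"
```

That is roughly double the 16 GB this machine has, which fits the freezes described above. Note that `zfs set dedup=off backup` only stops dedup for newly written data; existing DDT entries stay around until the deduped blocks are rewritten or destroyed.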
2. ZFS folder sizes seem inconsistent (at least on Ubuntu)
Problem:
Unless I am missing something, df can't seem to calculate the usage of folders properly within a single ZFS pool (mother's image backup size is not visible from the parent folders under df???):
stephane@HAL:~$ df -h /media/BACKUP/*
Filesystem Size Used Avail Use% Mounted on
backup/files 12T 0 12T 0% /media/BACKUP/files
backup/images 12T 0 12T 0% /media/BACKUP/images
backup 13T 560G 12T 5% /media/BACKUP
stephane@HAL:~$ df -h /media/BACKUP/images/
Filesystem Size Used Avail Use% Mounted on
backup/images 12T 128K 12T 1% /media/BACKUP/images
stephane@HAL:~$ df -h /media/BACKUP/images/mother/
Filesystem Size Used Avail Use% Mounted on
backup/images/mother 12T 0 12T 0% /media/BACKUP/images/mother
stephane@HAL:~$ df -h /media/BACKUP/images/mother/190626-1528_Image_C/
Filesystem Size Used Avail Use% Mounted on
backup/images/mother/190626-1528_Image_C 12T 46G 12T 1% /media/BACKUP/images/mother/190626-1528_Image_C
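For what it's worth, this may not strictly be a df bug: as the transcript shows, urbackup creates a separate ZFS dataset per image backup, and df only reports space used directly inside each mounted dataset, so the parents showing ~0 is how ZFS behaves. `zfs list -r -o space backup` gives the cumulative picture instead. A tiny sketch reproducing the sums from the df figures above (the numbers are copied from the transcript; the conversion is mine):

```shell
#!/bin/sh
# df only counts space used *directly inside* each mounted dataset, and
# every image backup is its own dataset, so parent mounts show ~0.
# On the real pool, `zfs list -r -o space backup` shows cumulative usage.
parent_kib=128                    # backup/images itself (df: 128K)
mother_kib=0                      # backup/images/mother (df: 0)
image_kib=$((46 * 1024 * 1024))   # 190626-1528_Image_C (df: 46G)
total_kib=$((parent_kib + mother_kib + image_kib))
echo "$((total_kib / 1024 / 1024)) GiB really live under backup/images"
```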
Suggestion:
Warn users about the bug in the ZFS section of the documentation… maybe set up a small test script that validates the actual usage of files vs images…
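As a starting point, a minimal sketch of what such a sanity check could look like. All paths and sizes here are made up for the demo (on the real server it would point at /media/BACKUP/files and /media/BACKUP/images); /dev/urandom is used so the result isn't skewed by ZFS compression, and du is used because, unlike df, it descends into child datasets:

```shell
#!/bin/sh
# Hypothetical sanity check: write a known amount of data into a folder,
# then verify that du accounts for at least that much. du crosses mount
# points by default, so it sees child datasets that df per-mount misses.
set -eu
dir=$(mktemp -d)
dd if=/dev/urandom of="$dir/fake_image.raw" bs=1024 count=2048 2>/dev/null  # 2 MiB
used_kib=$(du -sk "$dir" | awk '{ print $1 }')
if [ "$used_kib" -ge 2048 ]; then
    echo "OK: du sees $used_kib KiB (>= 2048 KiB written)"
else
    echo "MISMATCH: du only sees $used_kib KiB"
fi
rm -r "$dir"
```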