I’m using UrBackup with a ZFS copy-on-write configuration for both file and image backups. It does not seem to respect the global soft filesystem quota, and my storage is constantly out of space, triggering emergency cleanups with every backup. Additionally, manual cleanups calculate the “space to free” incorrectly (see below; I asked it to clean up 300G and it only went after 18G), although the “free space” figure is accurate. df -h is also accurate. Any thoughts on what I’m doing wrong?
root:~# urbackupsrv cleanup -a 300G -u root
2018-12-26 15:43:07: Shutting down all database instances…
2018-12-26 15:43:07: Opening urbackup server database…
2018-12-26 15:43:07: Testing if backup destination can handle subvolumes and snapshots…
Testing for btrfs…
ERROR: not a btrfs filesystem: /var/urbackup_files/testA54hj5luZtlorr494
TEST FAILED: Creating test btrfs subvolume failed
Testing for zfs…
ZFS TEST OK
2018-12-26 15:43:10: Backup destination does handle subvolumes and snapshots. Snapshots enabled for image and file backups.
2018-12-26 15:43:10: Emulating reflinks via copying
2018-12-26 15:43:10: Cleaning up 18.462 GB on backup storage
2018-12-26 15:43:10: Database cache size is 1.95312 GB
2018-12-26 15:43:10: Starting cleanup…
2018-12-26 15:43:10: Freeing database connections…
2018-12-26 15:43:10: Space to free: 18.462 GB
2018-12-26 15:43:10: Free space: 17.8416 GB
Can you have a look at which filesystem has 17.8416 GB of free space (as in the log above)?
One known issue is that UrBackup determines free space from the backup folder path (configured on the web interface), so if you put files and images on different ZFS datasets you’ll have problems.
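For example, assuming /var/urbackup_files from your log is the backup folder, something like this on the server should show roughly what UrBackup sees for that path:
# df -h /var/urbackup_files
# stat -f /var/urbackup_files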
I believe that number is accurate (it’s the “space to free” which seems to be calculated incorrectly). All of UrBackup’s paths are on different ZFS datasets (config below); however, they are all part of the same pool and share the same remaining free space.
How can I determine what UrBackup thinks the total and free space are?
My Global soft filesystem quota is set to 95%.
But the UrBackup jail has a data volume with only 84% used space, yet it’s cleaning down to the minimum backup level as if it were hitting the Global soft quota.
I figured out the problem. UrBackup uses df to determine total filesystem capacity and calculates how much cleanup it needs to do based on that value. df reports this value incorrectly on ZFS, because it calculates Size (total capacity) by simply summing Used + Avail. That methodology breaks down on a ZFS dataset, because it does not account for the space used by all the other datasets on the same pool.
The Size that df reports ranges from 132G to 197G depending on the dataset, despite the fact that all the datasets reside on the same pool. This breaks the Global Soft Filesystem Quota option on ZFS, eventually causing the system to run out of space. See the example outputs below:
# zfs list rpool
NAME    USED  AVAIL  REFER  MOUNTPOINT
rpool  3.38T   131G   128K  /
The actual pool capacity as reported by ZFS is 3.38T + 131G (USED + AVAIL on the pool’s root dataset). Using that value seems to be the most accurate way to address the problem within UrBackup’s existing workflow. For scripting purposes, you can use:
# zfs list -Hp rpool
rpool 3716467060944 140937858864 130944 /
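A minimal sketch for turning that into a single pool-wide total (USED + AVAIL, in bytes), assuming awk is available:
# zfs list -Hp -o used,avail rpool | awk '{printf "%.0f\n", $1 + $2}'
With the numbers above, that works out to 3857404919808 bytes (roughly 3.5 TiB).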
(For reference: df reports a Size of 119G for rpool/var/urbackup_files, i.e. the 649M of data on that filesystem only, NOT including child filesystems, plus the 119G available, apparently rounded down.)
Now see the output of df on the rpool root dataset mounted at /var/test (as you had requested):
# df -h /var/test
Filesystem      Size  Used Avail Use% Mounted on
rpool           119G  128K  119G   1% /var/test
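If you want the raw byte values df is working from (assuming GNU coreutils df, which supports --output), something like this shows them without rounding:
# df -B1 --output=size,used,avail /var/test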
Same issue as above. When calculating Used (and therefore Size) for a ZFS filesystem, df ONLY accounts for that specific filesystem, without including space occupied by child or sibling filesystems in the same pool. This situation becomes unavoidable when using UrBackup’s ZFS COW feature for image/file backups, because it creates a new child filesystem for each backup.
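To illustrate, here’s a rough example using mountpoints from my setup (adjust to your own); each dataset reports its own Size (its own Used plus the shared pool Avail), so siblings on the same pool show different totals:
# for mp in /var/urbackup_files /var/test; do df -h "$mp"; done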
Output is below; my / (and /var) path is actually a separate filesystem (rpool/var is not mounted, but rpool/var/urbackup_files, rpool/var/spool, etc. are):
# df -h /var
Filesystem               Size  Used Avail Use% Mounted on
/dev/mapper/vgroup-root   26G   14G   11G  56% /
If I mounted rpool/var (a pain in the butt with the way I have my system set up), it would be an empty filesystem with a bunch of other filesystems mounted on top of it, and I expect the output would look similar to the pattern seen in previous posts (rpool, rpool/var/xxx, rpool/var/xxx/xxx, etc.):
# df -h /var
Filesystem      Size  Used Avail Use% Mounted on
rpool/var       119G  128K  119G   1% /var
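As a rough sanity check of the pool-wide usage that the Global soft filesystem quota arguably ought to be compared against, the same zfs list approach can be used (a sketch, assuming awk):
# zfs list -Hp -o used,avail rpool | awk '{printf "%.1f%%\n", 100 * $1 / ($1 + $2)}'
With the rpool numbers from earlier in the thread, that comes out to roughly 96% used, which would indeed be over a 95% soft quota.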