I’ve got about 50TB raw / 30TB after ZFS lz4 compression of daily incremental backups on a ZFS raidz2 pool on the local backup server. For disaster recovery purposes (e.g. the source systems and the backup server go up in flames) I want to clone the server and the latest backup of each client to an external drive.
I’ve scripted this with python and bash: determine the last file and image backup directory of each client and rsync those to the external drive. However, there are a couple of problems with this approach:
- When “Use symlinks during incremental file backups” is enabled, there are symlinks to .directory_pool that have to be followed, and there are absolute symlinks belonging to the backed-up data (pointing somewhere into the backup server’s fs) that must not be followed. I couldn’t figure out how to achieve this distinction with rsync, so I had to disable symlinks for the time being.
- Running urbackupsrv remove-unknown on this incomplete clone takes an extremely long time, about 24h to prune the ~18GB database.
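Since rsync can’t make the per-target distinction from the first point in one pass, one workaround might be a small custom copier that decides per symlink: resolve links whose target contains .directory_pool, preserve everything else verbatim. A rough sketch (the directory layout and link targets here are assumptions, and it ignores hardlinks and sparse files, which rsync -aH would otherwise preserve):

```python
import os
import shutil
from pathlib import Path

def clone_backup(src: Path, dest: Path) -> None:
    """Copy a backup tree, following only .directory_pool symlinks.

    Symlinks whose target contains '.directory_pool' are resolved and
    their content is copied; all other symlinks (e.g. absolute links
    that are part of the backed-up client data) are recreated verbatim
    without being followed.
    """
    dest.mkdir(parents=True, exist_ok=True)
    for entry in src.iterdir():
        target = dest / entry.name
        if entry.is_symlink():
            link = os.readlink(entry)
            if ".directory_pool" in link:
                # Pool link: follow it and copy the real content.
                real = entry.resolve()
                if real.is_dir():
                    clone_backup(real, target)
                else:
                    shutil.copy2(real, target)
            else:
                # Link owned by the backup: recreate it as-is,
                # even if it is broken on the backup server.
                os.symlink(link, target)
        elif entry.is_dir():
            clone_backup(entry, target)
        else:
            shutil.copy2(entry, target)
```

This loses the deduplication that the pool links provide on the source, so the clone can be larger than the original; for a last-backup-only disaster copy that may be acceptable.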
I suspect remove-unknown takes so long because each and every file in backup_server_files.db is stat’ed on the slow spinning disks. Could this maybe be improved for situations where entire clients or whole backup sets are missing? I’m not an expert in databases, but I imagine it would be possible to do something along the lines of DELETE FROM … WHERE path LIKE “directorythatwasnotcopied%”.
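I don’t know the actual schema of backup_server_files.db, so the table and column names below are made up, but the general idea of pruning a whole missing directory in one indexed statement instead of stat’ing every file would look something like this:

```python
import sqlite3

# Hypothetical schema for illustration only; the real
# backup_server_files.db layout is certainly different.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE files (id INTEGER PRIMARY KEY, fullpath TEXT)")
con.executemany(
    "INSERT INTO files (fullpath) VALUES (?)",
    [("/backups/client-a/250101-0101/docs/a.txt",),
     ("/backups/client-a/250101-0101/docs/b.txt",),
     ("/backups/client-b/250102-0202/img/c.raw",)],
)

def prune_missing(con: sqlite3.Connection, prefix: str) -> int:
    """Drop all rows under a directory that was not copied.

    LIKE 'prefix%' only uses an index under certain collation settings
    in SQLite, so a half-open range on an indexed column is the
    reliably index-friendly way to express the same prefix match.
    """
    con.execute("CREATE INDEX IF NOT EXISTS idx_fullpath ON files (fullpath)")
    cur = con.execute(
        "DELETE FROM files WHERE fullpath >= ? AND fullpath < ?",
        (prefix, prefix + "\uffff"),
    )
    return cur.rowcount

deleted = prune_missing(con, "/backups/client-a/")  # removes both client-a rows
```

With an index on the path column this touches only the matching index range, no per-file disk access needed.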
Does anyone have a different solution for creating disaster recovery backups, other than the 1:1 raid1 or send-receive methods mentioned in the documentation?
On a side note: while experimenting with this I discovered that the “last file/image backup” timestamps are not corrected or reset to “never” by remove-unknown when some or all of a client’s backup sets have been deleted. Since these timestamps also aren’t updated for failed backups, a timestamp saying “last backed up today” suggests there is actually a backup from today, when in reality it may be older or there may be none at all.