Error when deleting backups from web interface

Server 2.1.19 on debian with ZFS storage.

When I try to delete a backup from the web interface, it gives me an error and says to check the server log. Here is a sample entry from the server log:

2017-04-25 08:41:44: Deleting image backup ( id=3, path=/mnt/urbackup/WIN-H6JHAFVKGK1/170424-2034_Image_SYSVOL/Image_SYSVOL_170424-2034.raw ) …
2017-04-25 08:41:44: Deleting image backup failed.

The delete fails but I can still mount the image via the web interface.

Please check what

urbackup_snapshot_helper remove WIN-H6JHAFVKGK1 170424-2034_Image_SYSVOL

says …

It returns:

Command not found

That’s the problem then.

Hmm, ok, sorry, I forgot something. The program itself probably says “command not found”, yes? Try:


urbackup_snapshot_helper 1 remove WIN-H6JHAFVKGK1 170424-2034_Image_SYSVOL

Yes, the program exists; it was just missing that parameter.

Here is the output:

root@urb-04:~# urbackup_snapshot_helper 1 remove WIN-H6JHAFVKGK1 170424-2034_Image_SYSVOL
Destroying subvol urbackup/images/WIN-H6JHAFVKGK1/170424-2034_Image_SYSVOL failed. Promoting dependencies...
cannot open 'urbackup/images/WIN-H6JHAFVKGK1/170424-2034_Image_SYSVOL@ro': dataset does not exist
Searching for origin urbackup/images/WIN-H6JHAFVKGK1/170424-2034_Image_SYSVOL@170424-2034_Image_SYSVOL
cannot destroy 'urbackup/images/WIN-H6JHAFVKGK1/170424-2034_Image_SYSVOL': filesystem has children
use '-r' to destroy the following datasets:

I am doing ZFS remote replication. I use incremental snapshots last/now to perform the replication. Looks like those snapshots are holding it up.

Is there something that can be done so that the UrBackup system will destroy datasets even if other snapshots exist?

I need to keep the snapshots around for my replication, but I would also like UrBackup to perform cleanup. Otherwise I have to create a daily script that deletes old datasets, stops the urbackup service, performs a remove-unknown, and then starts the urbackup service again.
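That daily script would look roughly like this. It is only a sketch: the service name urbackupsrv and the urbackupsrv remove-unknown action are what I would expect on a Debian install, so adjust them to match yours, and the dataset-deletion step is site-specific. With DRY_RUN=1 it only prints what it would do:

```shell
#!/bin/sh
# Sketch of the daily cleanup workaround. "urbackupsrv" (service and
# binary name) and the "remove-unknown" action are assumptions about a
# typical Debian install -- verify them before using this.

run() {
    # Execute a command, or just print it when DRY_RUN=1.
    if [ "${DRY_RUN:-0}" = 1 ]; then echo "would run: $*"; else "$@"; fi
}

daily_cleanup() {
    # 0. Delete expired image datasets first (site-specific, omitted here).
    # 1. Stop the server so the cleanup cannot race the backup database.
    run systemctl stop urbackupsrv
    # 2. Reconcile the server database with what is actually on disk.
    run urbackupsrv remove-unknown
    # 3. Bring the server back up.
    run systemctl start urbackupsrv
}
```

Running DRY_RUN=1 daily_cleanup first shows the exact command sequence before wiring it into cron.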

Maybe use the same thing UrBackup itself does. Rename the snapshots to something unique and then zfs promote them.

I finally had some time to get familiar with the server source code. Looking in snapshot_helper/main.cpp I see what you are saying about how UrBackup renames the snapshots and then promotes them.
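Roughly, the sequence I see looks like this. This is a dry-run sketch that only prints the zfs commands it would issue; the dataset and clone names are made up, and the real logic in snapshot_helper/main.cpp does more than this:

```shell
# Dry-run sketch of the rename-then-promote idea. All names are
# hypothetical examples, not UrBackup's actual paths.
promote_away_dependency() {
    ds="$1"      # dataset being removed, e.g. urbackup/images/CLIENT/OLD
    clone="$2"   # a clone that still depends on a snapshot of $ds
    suffix="$3"  # unique suffix, e.g. from date +%s
    # Give the blocking snapshot a unique name so it stays valid as the
    # clone's origin but no longer collides with managed names ...
    echo "zfs rename ${ds}@ro ${ds}@ro-${suffix}"
    # ... then swap parent and child so $ds becomes the clone and can
    # be destroyed on its own.
    echo "zfs promote ${clone}"
}
```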

I don’t think I was very clear on what I am doing to remote replicate. So let me try to explain in better detail.

To replicate, I simply take a recursive snapshot of the entire pool. My pool is named urbackup. I start by creating an urbackup@last snapshot and using zfs send to transfer it to the remote host. Once that is complete, subsequent replications follow this series of commands:

zfs snapshot -r urbackup@now
… rollback all datasets to @last snapshot on remote side …
zfs send -RI urbackup@last urbackup@now | mbuffer -q -v 0 -s 128k -m 1G |
ssh -T user@headend 'mbuffer -s 128k -m 1G | zfs recv urbackup/remotenode'
zfs destroy -r urbackup@last
zfs rename -r urbackup@now urbackup@last
ssh user@headend 'zfs destroy -r urbackup/remotenode@last'
ssh user@headend 'zfs rename -r urbackup/remotenode@now urbackup/remotenode@last'

In order for this to work, I have to keep the @last snapshot intact until the next replication cycle. After replication succeeds I delete the @last snapshot and rename @now to @last, so there will always be a @last snapshot.

It doesn’t make sense to promote it, because I would have to promote all of the @last snapshots on all datasets whenever I want to delete an old backup.

Just a suggestion: would it make sense, in the remove_subvolume function, to check whether there are any additional snapshots NOT named @ro and just destroy those? In my case, I am deleting the images anyway.
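Something along these lines, as a shell sketch of the check (the function name is made up for illustration, and the real change would live in the snapshot helper; the assumption is that only the @ro snapshot needs to survive):

```shell
# Filter a list of snapshot names, keeping everything except the @ro
# snapshot that UrBackup's own cloning relies on. Input on stdin, one
# snapshot name per line, as printed by:
#   zfs list -H -t snapshot -o name -r "$dataset"
list_deletable_snapshots() {
    # "|| true" keeps the exit status clean when nothing matches.
    grep -v '@ro$' || true
}
```

Used together with the listing and destroy steps, it would look like: zfs list -H -t snapshot -o name -r "$ds" | list_deletable_snapshots | xargs -rn1 zfs destroy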

I thought about changing the replication so that I could remove the @last snapshot once replication is done for a specific image, but then the replication wouldn’t remove the old images from the remote location. I could set up something on the remote side to do that, but it would be nice to have the retention policy set in UrBackup followed all the way through to the remote side.

Just checking in to see whether there has been any consideration of what was proposed.



I created a pull request that I think successfully addresses the issue I am having while maintaining functionality. Please review.