Client BTRFS snapshots being created on the wrong filesystem

I configured the UrBackup client to use BTRFS snapshots, but it fails to create snapshots on my partially BTRFS-based system. The directories being backed up are all on BTRFS filesystems, but /mnt is Ext4.

I get the following errors:

Creating snapshot of "/home" failed
ERROR: not a btrfs filesystem: /mnt/urbackup_snaps
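A quick df -T confirms the mismatch, since it prints the filesystem type of each mount:

df -T /home /mnt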

I created a work-around by modifying btrfs_create_filesystem_snapshot. I added the first two lines in front of the third:

mkdir -p $SNAP_MOUNTPOINT/.urbackup_snaps
SNAP_DEST=$SNAP_MOUNTPOINT/.urbackup_snaps/$SNAP_ID
btrfs subvolume snapshot -r "$SNAP_MOUNTPOINT" "$SNAP_DEST"

I didn’t check how the arguments being passed around are actually used outside the script itself, so to be safe I made a snapshot of my own. A quick look suggests that my files are still there and got backed up, but I haven’t yet done a proper comparison to be sure.
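If anyone wants to do that comparison properly, a checksum-based rsync dry run should do it; this is just a sketch, with SNAP_ID standing in for the actual snapshot name:

rsync -rcn --delete --exclude=/.urbackup_snaps /home/ /home/.urbackup_snaps/SNAP_ID/

(-c compares file contents by checksum, -n makes it a dry run that only reports differences, and the exclude keeps it from descending into the snapshot directory itself.)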


Prior to creating this work-around I also tried LVM-based snapshots, but got different errors.
I’m guessing this one is due to there being no unallocated space on the PV:

Internal error: Unable to create new logical volume with no extents.
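LVM snapshots need unallocated extents in the volume group to hold their copy-on-write data, so this would be consistent with a fully allocated PV. vgs can confirm it:

vgs -o vg_name,vg_size,vg_free,vg_free_count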

These errors, on a different filesystem, seem to indicate an incompatibility with bcache and/or BTRFS RAID on top of LVM:

Volume group "bcache2" not found
Cannot process volume group bcache2
Could not find LVM volume group of volume /dev/bcache2

The stack looks like this, from top to bottom:

btrfs volume
bcache superblock
logical volume
physical volume
LUKS crypt
MS-DOS partition
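For reference, lsblk can print this kind of stack directly:

lsblk -o NAME,TYPE,FSTYPE,MOUNTPOINT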

bcache2 is not a volume group; it has its own VG, PV and LV. The BTRFS volume is part of an asymmetrical RAID1, and bcache2 is part of the same cache set as the other RAID drives, but everything else should be independent.
Note in particular that it only complained about one of the three drives. I deliberately kept the VGs separate to make sure I couldn’t accidentally put supposedly redundant volumes on the same physical disk. However, UrBackup showed no signs of attempting to snapshot all relevant drives. In theory, if bcache weren’t an issue, it might not have snapshotted even one full copy of the data. Of course, maybe it just gave up after the first one. I would like to know whether it is even coded to recognize this possibility.
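As far as I understand, btrfs subvolume snapshots are taken at the filesystem level, so the multi-device layout shouldn’t matter for the btrfs script; the per-device question only arises for LVM snapshots, which would have to cover every LV backing the RAID. The member devices can be listed with the mount point substituted in:

btrfs filesystem show /path/to/mount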

Now I’m getting an error about a failed hash, multiple times. Maybe this workaround isn’t working quite right.

It’s always the same file, the places.sqlite of one of my Firefox profiles. When I run two backups back-to-back it produces the same pair of hashes. If I give it time until places.sqlite changes it produces a different pair of hashes.
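One way to narrow this down would be to hash the live file against the copy inside the snapshot; the paths here are placeholders for my actual profile and snapshot locations:

sha256sum /home/me/.mozilla/firefox/PROFILE/places.sqlite
sha256sum /home/.urbackup_snaps/SNAP_ID/me/.mozilla/firefox/PROFILE/places.sqlite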

I also get the errors:

Fatal error during backup. Backup not completed
FATAL: Backup failed because of disk problems (see previous messages)
Backup failed

Also, I tried back-to-back backups again, and that appeared to cause new errors, apparently due to overlapping backups…

So I tried enabling “end-to-end verification”, “verify using client side hashes”, and “do not fail backups”. This time I only got the hash error, with the same hashes as the previous backup.

And now, on subsequent backups, even after disabling the options again, I no longer get those errors… What just happened?

–edit–

Never mind, the error came back after the file changed again. But for some reason there is no problem with any of the other profiles.

It turns out that the directory/subvolume on the bcache/raid filesystem isn’t getting backed up at all with my work-around. It creates a directory with the same name, but it’s empty. I wonder if it has to do with confusion between the root mount and subvolumes. I decided to remove the work-around because finishing with errors is better than ignoring most of what I want backed up.
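One detail that might explain the empty directory: btrfs snapshots don’t cross subvolume or mount boundaries, so both nested subvolumes and filesystems mounted below the snapshotted path show up in the snapshot as empty directories. Whether that applies here can be checked by listing the subvolumes of the parent filesystem:

btrfs subvolume list /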

Ok, sorry, there aren’t many users of the btrfs snapshot script. That /mnt is not on the same btrfs filesystem is a problem the script does not take into account, but it should have worked with your work-around.

If there are separate mounts with different filesystems, you need to configure and add each one separately once you are using snapshots.
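If the client is new enough to ship urbackupclientctl, each mount can also be added as its own backup directory from the command line. Something like the following should work, though the exact flag name is from memory, so check urbackupclientctl add-backupdir --help first:

urbackupclientctl add-backupdir --path /home
urbackupclientctl add-backupdir --path /path/to/other/mount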

Hello,

I think that, the way the snapshot script is written, it will fail in many cases.
In my case, I am trying to back up a filesystem mounted at /media/UUID, which is /dev/sdb;
/mnt is on /dev/sda, though.

Greetings,
Hendrik

@uroni:
You write that the work-around should have worked. I don’t see how it could:

mkdir -p $SNAP_MOUNTPOINT/.urbackup_snaps
SNAP_DEST=$SNAP_MOUNTPOINT/.urbackup_snaps/$SNAP_ID
btrfs subvolume snapshot -r "$SNAP_MOUNTPOINT" "$SNAP_DEST"

While this generally makes sense, urbackupsrv expects the snapshots at SNAP_DEST=/mnt/urbackup_snaps/$SNAP_ID, doesn’t it?

Or does urbackupsrv read the output of the script?

echo "SNAPSHOT=$SNAP_DEST"
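Put differently, if the server does parse the script’s output, the whole contract would look something like this sketch (the argument positions are my assumption; I haven’t checked the shipped script):

#!/bin/sh
# Hypothetical sketch of btrfs_create_filesystem_snapshot.
SNAP_ID=$1           # snapshot id passed in by the client (assumed position)
SNAP_MOUNTPOINT=$2   # mount point to snapshot (assumed position)
mkdir -p "$SNAP_MOUNTPOINT/.urbackup_snaps"
SNAP_DEST="$SNAP_MOUNTPOINT/.urbackup_snaps/$SNAP_ID"
btrfs subvolume snapshot -r "$SNAP_MOUNTPOINT" "$SNAP_DEST" || exit 1
# The SNAPSHOT= line on stdout tells the caller where the snapshot ended up:
echo "SNAPSHOT=$SNAP_DEST"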

Greetings,
Hendrik

Yes

Hello,

I’m having a similar issue, and was wondering if I could ask for some guidance on how you fixed yours.

I have two btrfs filesystems, one that is my root filesystem and another mounted to /mnt/media. I’d like to backup /mnt/media, but when I add it to the list of backed-up folders I get this error:

Errors 09/20/16 07:16 Creating snapshot of "/mnt/media" failed
Errors 09/20/16 07:16 ERROR: cannot snapshot '/mnt/media' - Invalid cross-device link
Errors 09/20/16 07:16 Create a readonly snapshot of '/mnt/media' in '/mnt/urbackup_snaps/a4799dec76a6e7617eea9c2b61cf77c0eef6f0cac8a29455'
Errors 09/20/16 07:16 Creating snapshot of "media" failed.

I realize the error comes from trying to create a snapshot of one btrfs filesystem on a different btrfs filesystem. How can I update btrfs_create_filesystem_snapshot the way the OP did? I can’t seem to find it anywhere on my system. For reference, I’m running the latest client build on Debian Jessie.
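So far I have only searched for it by name, with no luck:

find / -name btrfs_create_filesystem_snapshot 2>/dev/null

From what I’ve read, the Linux client installer puts its snapshot scripts under /usr/local/share/urbackup, but I may well have an install that doesn’t ship them at all.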