Migrating urbackup btrfs storage

I’m currently setting up a new server and would like to migrate all my urbackup backups from one BTRFS HDD to the next. It appears that using btrfs send and receive will not work, because the subvolumes will have different IDs.

What options do I have to clone the backups over to the new disk? Is cloning the whole partition to the target disk the only approach?

No need to use send.

Just make a partition (you CAN add the whole device, but I always prefer making a partition and adding that to the btrfs filesystem, IN CASE I want to grab some of the storage space for something else in the future) and add it to the existing btrfs filesystem:
sudo btrfs device add /dev/xxxx /mountpoint

Then REMOVE the old device/partition:
sudo btrfs device remove /dev/xxxx /mountpoint

Btrfs will start moving the data to the new drive in the background. You can still work with the filesystem, btrfs will take care of it all! :smiley:
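
Putting it together, a minimal sketch of the whole sequence, assuming /dev/sdb1 is the new partition, /dev/sda1 is the old one, and the filesystem is mounted at /mnt/backups (all names are placeholders, adjust to your setup):

# add the new partition to the existing filesystem
sudo btrfs device add /dev/sdb1 /mnt/backups
# remove the old one; btrfs relocates all data off it as part of this step
sudo btrfs device remove /dev/sda1 /mnt/backups
# confirm only the new device is left
sudo btrfs filesystem show /mnt/backups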

IIRC you can watch the process working (you do NOT want to reboot the computer before the process is done) with watch -d btrfs fi us /mountpath, but you should google that last part; I just pulled it out of memory, and that can be broken and corrupted… xD
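
For what it’s worth, that looks right to me: btrfs fi us should be the abbreviated form of btrfs filesystem usage. A small sketch, assuming the filesystem is mounted at /mnt/backups; as far as I know the remove command itself does not return until the relocation is finished, so run this from a second terminal:

# refresh the usage overview every two seconds and highlight what changes
sudo watch -d btrfs filesystem usage /mnt/backups
# or look at it per device
sudo btrfs device usage /mnt/backups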

Oh, yeah! That’s quite clever! :open_mouth:

I’ll check tonight if this works for me after backing up the partition with Clonezilla.

Edit: Unfortunately, it seems I managed to fry my 10TB Western Digital by running a big copy operation to it (which I did in preparation for moving the urbackup backups off the drive)…
The drive overheated during an rsync transfer and shut off. After powering it back on, it now refuses to mount:
[Fri Sep 29 18:00:48 2023] BTRFS warning (device sda1): couldn't read tree root
[Fri Sep 29 18:00:48 2023] BTRFS warning (device sda1): try to load backup roots slot 1
[Fri Sep 29 18:00:48 2023] BTRFS error (device sda1): chunk 2668278841344 has missing dev extent, have 0 expect 1
[Fri Sep 29 18:00:48 2023] BTRFS error (device sda1): failed to verify dev extents against chunks: -117
[Fri Sep 29 18:00:48 2023] BTRFS error (device sda1): open_ctree failed

Running a btrfs rescue chunk-recover -y right now, in the hope the drive can be restored… I know I could restore the files with btrfs restore, but I don’t need to do that, since I have all the data stored safely somewhere else as well. I want the BTRFS subvolumes restored 1:1 so I can continue using them in urbackup. Let’s see how this turns out (gonna take approx. 12 hours, since it’s 10TB to scan at 200MB/s).
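
For reference, the commands in question, as a rough sketch, assuming the broken filesystem is on /dev/sda1 and /mnt/rescue is an empty directory on a different, healthy disk (both paths are placeholders):

# try to rebuild the chunk tree in place (this is the long scan)
sudo btrfs rescue chunk-recover -y /dev/sda1
# plan B: pull files out read-only instead of repairing in place
sudo mkdir -p /mnt/rescue
sudo btrfs restore -v /dev/sda1 /mnt/rescue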

This looks brilliant, but I’m worried that the removed device might fall out of raid1 sync during the removal process and no longer be a good copy (btrfs now has three devices over which it can spread the two copies of the data).

Instead, I might just physically remove the old drive to use as my offsite copy, bring the btrfs filesystem up in degraded mode, then add the new drive and let btrfs rebalance onto it. I lose redundancy during the rebalancing, however.
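
A rough sketch of that plan, assuming /dev/sdb1 is one of the remaining raid1 members, /dev/sdc1 is the new drive, and /mnt/backups is the mountpoint (all placeholders):

# mount with one device missing
sudo mount -o degraded /dev/sdb1 /mnt/backups
# add the new drive and drop the missing one; the remove re-replicates the data
sudo btrfs device add /dev/sdc1 /mnt/backups
sudo btrfs device remove missing /mnt/backups

As far as I know, btrfs replace start can do the add/remove in one step by rebuilding directly onto the new drive, which may be somewhat faster.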

Alternatively, I’m thinking of adding the third drive, using -dconvert=raid1c3 to get my third copy, then removing that drive and using it as my offsite copy.
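
Roughly, again with placeholder names, that would look something like this; raid1c3 needs at least three devices and a kernel that supports the profile (it was added in 5.5), and I’ve included -mconvert so the metadata gets a third copy as well:

sudo btrfs device add /dev/sdd1 /mnt/backups
sudo btrfs balance start -dconvert=raid1c3 -mconvert=raid1c3 /mnt/backups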

Just some random but hopefully relevant thoughts. I’ll update if anything comes of them.