Why does the UrBackup server prefer to use btrfs?

Hi all!
Recently I started using UrBackup, and I stumbled upon a problem.
Many online install manuals recommend using btrfs as the file system for storing the backups, but nobody mentions that cloning a btrfs partition is not simple.

My instructions for backing up btrfs partitions!

0. Stop the UrBackup server: sudo systemctl stop urbackup

  1. Unmount all disks that you use for backups, meaning all btrfs mount points!
  2. Clone the partition: sudo partclone.btrfs -b -s /dev/sourcediskpartition -o /dev/destinationdiskpartition
    2.1. Check that the cloning was a success!
    fdisk -l /dev/sourcediskpartition /dev/destinationdiskpartition
  3. Very important!
    Change the UUID of the source partition; note that the old UUID of this partition now “belongs” to the destination disk!
    sudo btrfstune -u /dev/sourcediskpartition
  4. Write the new UUID of the source partition into your config (e.g. /etc/fstab) and assign the old UUID entries to the destination disk!
  5. Only after you have changed the UUIDs of the disks can you mount them absolutely safely; if you skip these steps, you can lose data!

Does somebody know whether post-cloning support will be implemented in UrBackup?

I do not understand what you are trying to accomplish. Why would you want to clone the backup partitions at that level?

UrBackup does not provide a feature to back up a backup onto multiple hard drives.
So in order to have an additional exact copy, I use a partition cloner and then rewrite the UUID of the original partition. Both partitions are on different physical disks.
Sure, a better solution would be to use RAID, but I have already spent a good amount of money to get this far.

If the disks are connected to the same machine, just use the integrated RAID 1 of btrfs.
If you already created one disk with btrfs, just attach the other one to the volume and rebalance it.

If they are on different machines, use btrfs send.

Obviously you could also use btrfs send on the same machine, which would basically be what you are trying to do manually. But that wouldn’t really make a lot of sense.
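The attach-and-rebalance approach suggested above can be sketched as follows. The device path and mount point are placeholders you must adapt; every command is prefixed with `echo` so the sketch is safe to run as-is (drop the `echo` and run as root to execute for real):

```shell
# placeholders -- adapt to your system
NEW_DISK=/dev/sdY
MNT=/mnt/backups

# add the second disk to the existing btrfs filesystem
echo btrfs device add "$NEW_DISK" "$MNT"
# convert data and metadata to RAID 1 across both disks
echo btrfs balance start -dconvert=raid1 -mconvert=raid1 "$MNT"
# verify: Data and Metadata should now report RAID1
echo btrfs filesystem df "$MNT"
```

After the balance finishes, every block exists on both disks, so a single-disk failure no longer loses the backup store.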

Looks OK, but in terms of RAID technology I would prefer dedicated RAID hardware.

Here are my reasons:

  • Hot-swapping in software RAID is “problematic”; downtime must be accounted for.
  • Software RAID depends on the OS; if an OS update fails one time, the software RAID fails too.
  • Recovery tools do not work well with dynamic disks.

I am happy with what I have got; I can automate those steps with a simple bash script that makes the needed changes, and that’s it.
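Such a script could look roughly like the sketch below. The device paths are placeholders, and each command is prefixed with `echo` so the sketch is inert; remove the `echo`s (and run as root) to execute the clone procedure for real:

```shell
#!/usr/bin/env bash
set -eu

SRC=/dev/sdX1   # placeholder: source btrfs partition
DST=/dev/sdY1   # placeholder: destination partition

# service name may be "urbackup" or "urbackupsrv" depending on the install
echo systemctl stop urbackupsrv                # 0. stop the server
echo umount "$SRC" "$DST"                      # 1. unmount both partitions
echo partclone.btrfs -b -s "$SRC" -o "$DST"    # 2. block-level clone
echo fdisk -l "$SRC" "$DST"                    # 2.1 sanity check
echo btrfstune -u "$SRC"                       # 3. give the source a new UUID
# step 4 stays manual: update /etc/fstab with the new UUIDs before mounting
echo systemctl start urbackupsrv
```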

I have bad news: the previously described backup method does not work for me anymore!

I tried many times and it failed!
sudo partclone.btrfs -b -s /dev/SourceDisk -o /dev/TargetDisk <<== This does not work anymore!

The btrfs developers changed something upstream that makes this approach unworkable, but there is a better and simpler alternative:

The good news:
If you use Ubuntu Linux, then do the following.

0. Stop the UrBackup server: sudo systemctl stop urbackupsrv
If you are not afraid, I prefer to format the destination disk before the big copy process starts:
mkfs.btrfs -f pathToTheTargetDevice
1. Use rsync to copy the files at the file-system level only:
rsync -av --progress pathToSourceDiskMountingPoint pathToTargetDiskMountingPoint
2. You are done; you have copied everything from the source disk to the destination disk.
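One caveat worth adding: UrBackup’s file backups make heavy use of hard links, and plain `rsync -av` does not preserve them, so the copy can end up much larger than the source; the `-H` option preserves hard links. A safe-to-run sketch of the whole procedure (mount points are placeholders, and the `echo` prefixes keep it from executing anything):

```shell
SRC_MNT=/mnt/backup-src   # placeholder: source disk mount point
DST_MNT=/mnt/backup-dst   # placeholder: destination disk mount point

echo systemctl stop urbackupsrv
# optional: start from a freshly formatted target (destructive!)
echo mkfs.btrfs -f /dev/sdY1
# the trailing slash on the source copies its *contents* into DST_MNT
# (without it rsync creates an extra directory inside the target);
# -H preserves hard links, which UrBackup uses between file backups
echo rsync -aH --progress "$SRC_MNT/" "$DST_MNT/"
echo systemctl start urbackupsrv
```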

This way is definitely better, because you don’t need to spend time rewriting the UUIDs of the btrfs partitions. As I said earlier, partclone.btrfs cloning may be fast, but in my case it didn’t work: I always ended up with a broken partition on the destination disk, even though the disk’s health checked out fine. Basically, I experienced that at the kernel level btrfs partitions are interconnected to such a degree that you can’t simply swap UUIDs between disks.

So finally, for me the best solution so far is to use rsync.

New info: my second disk with a btrfs partition works pretty badly.
What I do currently: I format it with ext4 and then just copy with the following command.
cd to your source btrfs disk and then execute the following:

cp -r SourceNameofdir1 Sourcenameofdir2 destinationMountPoint & progress -mp $!

SourceNameofdir1 and SourceNameofdir2 are placeholders; use the names of the directories on your source disk that you want to copy to the destination disk.

Sure, hardware RAID is preferable, but this solution should be final.

Have a look at btrfs send/receive

Lots of info and scripts on the web.
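A minimal send/receive sketch, for the record. The mount points and snapshot name are placeholders, and the `echo` prefixes keep the sketch inert; note that `btrfs send` only works on read-only snapshots:

```shell
SRC_MNT=/mnt/backup-src   # placeholder: source btrfs mount point
DST_MNT=/mnt/backup-dst   # placeholder: destination btrfs mount point
SNAP="$SRC_MNT/snap-ro"   # placeholder: snapshot name

# create a read-only snapshot (required by btrfs send)
echo "btrfs subvolume snapshot -r $SRC_MNT $SNAP"
# stream it into the destination filesystem on the same machine
echo "btrfs send $SNAP | btrfs receive $DST_MNT"
# for a different machine, pipe through ssh instead (hostname is a placeholder)
echo "btrfs send $SNAP | ssh backuphost btrfs receive /mnt/backup-dst"
```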

Cool. As I mentioned, I have moved even further and am already running hardware RAID 1; it’s way better, everything happens automatically.
I could have switched to hardware RAID earlier, but I had other things to do. Now with RAID everything works smoothly! :grinning:

I’m just curious. Say you rsync everything from one disk to the other, where one is btrfs and the other ext4. If your btrfs drive fails, so you buy a new one, format it as btrfs and then rsync back from the ext4 drive, will it actually restore everything as functional for the UrBackup server?

I know a btrfs RAID would, obviously, since it’s an actual clone, bit for bit. But how will UrBackup react in this situation?

I came to the conclusion that running hardware RAID 1 is the best option available; in that case you don’t need to think too much about manual steps, because rsync can take a long time to copy everything and then copy it back.
Use RAID. I didn’t use it at first because, I admit, I was too lazy, but then I understood that it makes life easy: you set it up once and it runs continuously. Of course there are expenses in terms of hardware, but it’s really a small price for a very good solution.

Well, you do you, I guess. But to answer your first question, “Why does the UrBackup server prefer to use btrfs?”:

Because it’s free, fast and pretty reliable; UrBackup can use mechanisms on btrfs that are not available on “normal” filesystems (like deduplication, for example); it’s less CPU- and memory-heavy than ZFS; you can use the built-in RAID for “backup” of the storage drive, etc.
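To illustrate the kind of btrfs mechanics meant here (my understanding, with placeholder paths; the `echo` prefixes keep the sketch from executing): copy-on-write snapshots let a new backup start as a nearly free copy of the previous one, and reflink copies let two image files share the same on-disk blocks:

```shell
MNT=/mnt/backups   # placeholder: backup storage mount point

# a subvolume holding one full backup
echo "btrfs subvolume create $MNT/client1-full"
# a writable snapshot: the next backup starts as a free CoW copy,
# and only changed files consume new space
echo "btrfs subvolume snapshot $MNT/client1-full $MNT/client1-incr"
# reflink copy: a second image file sharing the first one's blocks
echo "cp --reflink=always $MNT/image1.raw $MNT/image2.raw"
```

None of this is possible on ext4, which is why the storage path matters for UrBackup.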


Btrfs is fast, I understand. Good, finally everything is resolved, thanks.

I did a test over the weekend with ZFS, trying to save some space on the storage. First off, I am not sure everything I say is right, so correct me if I am wrong.

In file backup mode the system uses links for every file, so there is not too much wasted space if the files don’t change too much.
In image backup mode I did some tests and, surprisingly, found that ZFS did not bring any advantages in my case!
I mounted an iSCSI disk and created a ZFS pool. Then I copied a 23 GB image backup from the UrBackup store into the ZFS pool and ran the simulation:

zdb -S urbackup

It turned out there would be only a few KB of space to save, so I deleted the files and switched on dedup for that pool. After that I copied the images into the zpool again, and it showed the same result as the simulation: only a few KB of space was saved. The dedup ratio was shown as just 1.
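For anyone repeating the experiment, the commands involved look roughly like this (pool name taken from the test above; `echo` prefixes keep the sketch safe to run as-is):

```shell
POOL=urbackup   # pool name used in the test above

# simulate dedup without enabling it: prints a block histogram
# and an estimated dedup ratio at the end
echo "zdb -S $POOL"
# enable dedup for real (affects newly written data only)
echo "zfs set dedup=on $POOL"
# check the actual dedup ratio after copying the data in
echo "zpool list -o name,alloc,dedup $POOL"
```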

I was surprised, because I expected a dedup ratio of at least 8 or 9; these are images where most of the blocks always stay unchanged, I thought.

EDIT: But then I saw that only the first one is a full image; the other ones are compressed incremental images, and that’s why they differ from the existing images!
So even in image backup mode ZFS has no space gain to offer, right?

If somebody can share their ZFS statistics and thoughts, I will appreciate it!