FSCTL_BTRFS_CREATE_SUBVOL failed on windows?

Hi,

I have a problem with the following setup:
Client: Win7 (2.4.11)
Server: WS2022 (2.5.31)

When I try to create a full image backup it says:
Error creating empty image subvolume. “FSCTL_BTRFS_CREATE_SUBVOL failed. Status=-1073741808 - Code 0”

That’s very strange since I am completely on Windows. O.o
I guess it has something to do with Storage Spaces, but btrfs?!

Hope for help.
Thanks.

It is trying to use winbtrfs.
Are you trying to use ReFS reflinking?

It might be a bug where the btrfs support broke that.

Anyway, perhaps change the image file format to VHDX(Z) instead of cow-raw. That should fix it.

Thanks uroni!

No, just a WS2022 mirror Storage Space formatted with NTFS. No ReFS, and as far as I have heard, btrfs is just a Linux thing.

I will try to change the file format tomorrow.

I don’t really understand what’s behind this but Compressed VHDX (beta) has solved it.
Currently it is creating the first image backup.

Thanks!

Would be interesting to find out how it got into the state where it thought btrfs would be available. If you can create info/debug-level logs, please post them.

I have no idea.

I have a history of several years now with urbackup but I have not seen something like this.

It was a clean WS2022 machine and I tried to replicate the configuration from a working setup I have been using for years now. Everything set manually via the GUI, of course.

I fear this workaround with VHDX is only half the solution. File backup is still messed up (I posted about it in a different thread) and may also be related to this btrfs issue. The file backup just blows up the storage space, so it seems the NTFS hardlink/symlink deduplication is not working: a file backup of ~400 GB fills up a 10 TB disk after a few days. Again, I double and triple checked all the urbackup settings.
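One quick diagnostic (a sketch of my own, not something urbackup itself runs): check whether the backup volume accepts hardlinks at all, since the dedup relies on them. The directory path is a placeholder for the real backup folder.

```python
import os
import tempfile

def hardlinks_work(directory: str) -> bool:
    """Create a file plus a hardlink to it and check the resulting link count."""
    src = os.path.join(directory, "hl_test_src.bin")
    dst = os.path.join(directory, "hl_test_dst.bin")
    try:
        with open(src, "wb") as f:
            f.write(b"test")
        os.link(src, dst)  # raises OSError if the filesystem refuses hardlinks
        return os.stat(src).st_nlink >= 2  # both names point at the same file record
    except OSError:
        return False
    finally:
        for p in (src, dst):
            if os.path.exists(p):
                os.remove(p)

# Example: run against a temp dir (replace with the backup volume, e.g. r"D:\backups")
with tempfile.TemporaryDirectory() as d:
    print(hardlinks_work(d))
```

If this returns False on the backup drive, every “deduplicated” file is actually a full copy, which would match the 400 GB → 10 TB blowup.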

I now also compared the backup disks.
The only difference I noticed is the NTFS cluster size. I reformatted the disk with the same cluster size I know to be working (262144 bytes = 256 kB).

We will see if this changes something.

To the logs:
Nothing special here. I already posted the error that appears with the image backup, related to the strange btrfs misdetection. For the file backup nothing special is detected, even at INFO log level.

Is there something special I could try out?

@uroni
First test was creating an image backup with the classic configuration (compressed VHD) and a newly formatted backup disk (NTFS, 256 kB cluster size).
Backup completed successfully. (that’s new)

So either the cluster size was the cause of the problem, since that is what I changed explicitly, or the old backup disk had an unknown issue that was fixed by the reformat.
By the way, before that I had 2097152 bytes, which should be a 2 MB cluster size.

I hope the other problem regarding deduplication with NTFS hard/symlinks will also be solved with that change. I am currently making the initial file backup.
Afterwards I will note the free backup disk space and create another differential file backup. That should show whether it is working now: if the free disk space does not shrink after the diff file backup, the linking works.

I will let you know.

@uroni : What do you say from the theory perspective? Is the cluster size somehow related to how urbackup decides its feature set and btrfs detection?

@uroni :
I can confirm that the problem with the file backup is now gone as well. Differential file backups need no disk space if nothing has changed. NTFS hardlink/symlink linking seems to work now.

The whole problem seems to have been solved by reformatting the backup drive.
Maybe changing the cluster size from 2 MB to 256 kB solved the issues, or something else that was fixed by the format.

VHD image backup and NTFS linking, both were fixed.

Unlikely. A debug-level log of the server startup would have been interesting. It tries to use some btrfs functionality and, based on the result, it switches to btrfs mode or not.

I wouldn’t use such a large cluster size… Storing a small file would always use up 2 MB then, no?

One theory I have is that you set the backup path to a ReFS volume and then later to NTFS. It automatically switched to the cow-raw format because that works on ReFS, but not on NTFS.

hi uroni,

I am 100% sure it was not on ReFS. I printed all the partition information via PowerShell and saw NTFS. The only difference was the cluster size. I am aware that ReFS won’t support the links. Configuring Storage Spaces and formatting volumes is something I have done many times, even from PowerShell.

Regarding cluster size, you caught me, I am not sure why I chose a cluster size that large…
I think there are two reasons to choose larger cluster sizes with NTFS:
1.) If the partition size is maxed out. The cluster address range in NTFS is limited, and the maximum volume size is essentially that limit multiplied by the cluster size, so you get different max partition sizes directly related to the cluster size. One finds these limits on the internet, but I don’t think that was my reason, and in this case going with 256k instead of 2M would be fine, you are right.
2.) Especially with Storage Spaces, parity volumes and NTFS, it turns out that the choice of cluster size is highly important for performance. Parity is also organized in blocks, and each disk contributes its block to a stripe. It makes a huge difference if you choose the NTFS cluster size to exactly match the data blocks of a stripe. I don’t want to go into details here since it is off topic, but I found it very interesting, the effect is real and huge, and only certain combinations of disk count and cluster size make sense. See here: https://www.youtube.com/watch?v=t2Z7NnguMxE&lc=UgzpjIaHx99USohU1bZ4AaABAg.9vCANRcIcCS9vIUqxUPfmI
That’s something MS is not telling you…
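The first point can be made concrete: NTFS addresses clusters with 32-bit numbers, so the theoretical maximum volume size is roughly 2^32 clusters times the cluster size (a sketch of the arithmetic; the exact supported limits vary by Windows version):

```python
def ntfs_max_volume_bytes(cluster_size: int) -> int:
    """NTFS uses 32-bit cluster numbers, giving ~2**32 - 1 addressable clusters."""
    return (2**32 - 1) * cluster_size

for cs in (4096, 65536, 262144, 2 * 1024 * 1024):
    tib = ntfs_max_volume_bytes(cs) / 2**40
    print(f"{cs:>8} B clusters -> max volume ~{tib:.0f} TiB")
```

So 4 kB clusters cap out around 16 TiB, while 2 MB clusters push the theoretical limit into the petabyte range, which is why large clusters come up for huge volumes at all.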

Regarding logs, I will see what I can do. Friend is not that happy to publish things about his data in the net but I will look into it and see through the logs and may send it to you or post it here if they are general enough. As far as I understood, you are interested about logs during the urbackup boot. Got it.

Here are my notes when creating a parity NTFS disk in storage spaces:
image
Trust me, choose one of these configurations and you will be happy with the results.
Ah, and to be precise: it is not about the number of disks, it’s about the configured number of columns when creating the Storage Spaces parity volume. You can still have 6 disks but use just 5 columns instead of 6 and go with a 256 kB NTFS cluster size.
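For illustration, the matching described above can be written down as: data stripe width = (columns − parity columns) × interleave, and the NTFS cluster size is chosen equal to that. This is my reading of the configurations; the 64 kB interleave is an assumed example value, not something stated in the thread.

```python
def aligned_cluster_size(columns: int, parity_columns: int, interleave: int) -> int:
    """NTFS cluster size that matches one full data stripe of a parity space."""
    return (columns - parity_columns) * interleave

# Example: 5 columns, single parity, 64 kB interleave -> 256 kB clusters
print(aligned_cluster_size(5, 1, 64 * 1024))
```

That reproduces the “5 columns with 256 kB clusters” combination: every full-cluster write then covers exactly one stripe of data blocks, avoiding read-modify-write cycles on the parity.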

BTW, with ReFS it won’t be such a big deal, since it goes more hand in hand with Storage Spaces and will optimize things in the background somehow.
But with UrBackup, ReFS is not an option!

@uroni
Regarding log files:
I have found 5 log files. All the information is from yesterday and newer so I am not sure if there is anything related to the problem. I’m not sure why I can’t find anything older. Maybe the “remove_unknown.bat” also removes the old logs?

The only strange thing I’ve found is this:

2023-10-30 10:57:24: Testing if backup destination can handle subvolumes and snapshots…
2023-10-30 10:57:24: Testing for winbtrfs…
2023-10-30 10:57:24: Btrfs test failed. Creating btrfs subvol failed: FSCTL_BTRFS_CREATE_SUBVOL failed. Status=-1073741808 - Code 0
2023-10-30 10:57:24: Backup destination cannot handle subvolumes and snapshots. Snapshots disabled.
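Side note on that log line: the status is a signed NTSTATUS value; converting it to hex (a quick decode sketch, not urbackup code) gives 0xC0000010, which as far as I know is STATUS_INVALID_DEVICE_REQUEST, i.e. plain NTFS simply rejecting the btrfs-specific FSCTL.

```python
def ntstatus_hex(signed_status: int) -> str:
    """Interpret a signed 32-bit status value as an unsigned NTSTATUS in hex."""
    return f"0x{signed_status & 0xFFFFFFFF:08X}"

print(ntstatus_hex(-1073741808))  # -> 0xC0000010
```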

I just learned that there is a Windows implementation of btrfs. Still, I am very surprised that winbtrfs is this prominent in the urbackup code.
For sure, I am not up to date, but I thought even on Linux the btrfs implementation is not the most stable thing in the world, so why use it in an even hackier way on Windows? However, having the choice is never bad.