Server 2.5.31 is saving Image Backups as VHDz instead of configured VHDX

There is a strange problem, similar to the fix mentioned in Image backup processing/mount problems - #8 by artyomtsybulkin, with 2.5.31 running as a QNAP plugin, while I test UrBackup to possibly use it permanently with commercial agents for Hyper-V / Windows servers.

Because we have HDDs > 2 TB and the dynamic variant is also better, I set VHDX as the default image format (image backups are activated in the group for Hyper-V VMs and servers):

But somehow only VHD(z) files are saved, except for one testing VM:

[/share/CACHEDEV2_DATA/urbackup] # find . -name "*.vhdx"
./hpv01[hpeoneview01 (10.30.2.23)]/230809-1706_Image_IDE_0_0/Image_IDE_0_0_230809-1706.vhdx.mbr
./hpv01[hpeoneview01 (10.30.2.23)]/230809-1706_Image_IDE_0_0/Image_IDE_0_0_230809-1706.vhdx
./hpv01[hpeoneview01 (10.30.2.23)]/230809-1706_Image_IDE_0_0/Image_IDE_0_0_230809-1706.vhdx.hash
./hpv01[hpeoneview01 (10.30.2.23)]/230809-1706_Image_IDE_0_0/Image_IDE_0_0_230809-1706.vhdx.cbitmap
./hpv01[hpeoneview01 (10.30.2.23)]/230809-1706_Image_IDE_0_0/Image_IDE_0_0_230809-1706.vhdx.sync
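
For reference, a quick way to tally which image formats were actually written across the whole storage (a sketch, reusing the storage path from the prompt above):

# count image files by extension across the backup storage
find /share/CACHEDEV2_DATA/urbackup \
  \( -name "*.vhd" -o -name "*.vhdz" -o -name "*.vhdx" \) \
  | sed 's/.*\.//' | sort | uniq -c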

So the problem is that 3 servers, each with one HDD > 2 TB, fail again and again on their biggest HDD; server and client show nearly the same logging:

Server:

Client:

Starting scheduled incremental image backup of volume "E:"...
Basing image backup on last full image backup
Error retrieving last image backup. Doing full image backup instead.
FATAL: Error writing to VHD-File. Cannot allocate memory (code: 12)
FATAL ERROR: Could not write to VHD-File
Transferred 387.78 GB - Average speed: 43.2387 MBit/s
(Before compression: 1.80675 TB ratio: 4.77103)
Time taken for backing up client bidev02: 21h 23m 58s
Backup failed
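
For context, my interpretation (not stated in the log itself): the legacy VHD format uses 32-bit sector offsets, which caps an image at 2^32 sectors × 512 B/sector = 2 TiB (Microsoft tools cap VHDs at 2040 GB). That would explain why exactly the volumes larger than 2 TB keep failing, while VHDX (with a 64 TB limit) would be fine.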

Filesystem is fine:

[/share/CACHEDEV2_DATA/urbackup] # df -h .
Filesystem                Size      Used Available Use% Mounted on
/dev/mapper/cachedev2
                         39.4T     10.5T     28.9T  27% /share/CACHEDEV2_DATA

Here is one example of the stored content:

[/share/CACHEDEV2_DATA/urbackup/bidev02] # find . -name "*.vhd*" -mtime -7 | grep "z$" | xargs ls -lh
-rwxrwx--- 1 admin administrators 211M 2023-08-17 01:39 ./230817-0138_Image_SYSVOL/Image_SYSVOL_230817-0138.vhdz
-rwxrwx--- 1 admin administrators 1.4G 2023-08-17 01:57 ./230817-0139_Image_C/Image_C_230817-0139.vhdz
-rwxrwx--- 1 admin administrators 17M 2023-08-17 01:39 ./230817-0139_Image_ESP/Image_ESP_230817-0139.vhdz
-rwxrwx--- 1 admin administrators 347M 2023-08-17 02:13 ./230817-0157_Image_D/Image_D_230817-0157.vhdz
-rwxrwx--- 1 admin administrators 8.0M 2023-08-18 23:37 ./230818-2335_Image_G/Image_G_230818-2335.vhdz
-rwxrwx--- 1 admin administrators 3.4G 2023-08-19 00:00 ./230818-2337_Image_H/Image_H_230818-2337.vhdz
-rwxrwx--- 1 admin administrators 81M 2023-08-19 00:46 ./230819-0000_Image_F/Image_F_230819-0000.vhdz
-rwxrwx--- 1 admin administrators 392G 2023-08-20 20:24 ./230819-2250_Image_E/Image_E_230819-2250.vhdz
-rwxrwx--- 1 admin administrators 55G 2023-08-21 00:32 ./230820-2144_Image_E/Image_E_230820-2144.vhdz

And I found out how to verify my settings. As shown, the setup is stored correctly in the database as intended from the WebGUI: all image backups should be done as VHDX and not VHDz:

[/share/CACHEDEV1_DATA/.qpkg/QUrBackup/var/urbackup] # ../../bin/sqlite3 backup_server_settings.db "SELECT * FROM settings WHERE value LIKE '%vhd%'"
image_file_format|vhdx|0||2|1691345479
image_file_format|vhdx|-1||1|0
image_file_format|vhdx|-2||1|0
image_file_format|vhdx|-3||1|0
image_file_format|vhdx|-4||1|0
image_file_format|vhdx|-5||1|0
image_file_format|vhdx|-6||1|0
image_file_format|vhdx|-7||1|0
image_file_format|vhdx|-8||1|0
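
To also rule out a per-client or per-group override, the same sqlite approach can select on the key directly (a sketch; the column names key/value/clientid are assumed from the output above and not verified against the schema):

# list the configured image format per client/group id
# (assumption: 0 is the global default, negative ids are groups)
../../bin/sqlite3 backup_server_settings.db \
  "SELECT clientid, value FROM settings WHERE key = 'image_file_format'"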

Is there maybe a bug caused by the fix mentioned in Image backup processing/mount problems - #8 by artyomtsybulkin, or do the already taken server backups perhaps need to be deleted completely to start correctly with VHDX, or something similar?

Thanks for any hints.

Reiner

Could you go to one of the clients to see what image file format the server web interface shows?

It is not clear what the “server web interface” on a client should show, or how it differs from the already screenshotted page. Do you have a documentation / forum reference for it?

The client settings also seem to be not what you asked about, because there is no format selection (after this post I limited the image backups to C and added file backups on these servers):

(screenshot: client settings)

Maybe of interest: if I open the Logs section from the tray icon, I get this error popup:

and the log itself contains the log I already posted?

Starting scheduled incremental image backup of volume "E:"...
Basing image backup on last full image backup
Error retrieving last image backup. Doing full image backup instead.
FATAL: Error writing to VHD-File. No such file or directory (code: 2)
FATAL ERROR: Could not write to VHD-File
Transferred 387.779 GB - Average speed: 44.1237 MBit/s
(Before compression: 1.56368 TB ratio: 4.12917)
Time taken for backing up client bidev02: 20h 58m 13s
Backup failed

When I go to “show backups/restore” I get a:

This site can’t be reached. 10.30.20.248 took too long to respond.
Try:

Checking the connection
Checking the proxy and the firewall
Running Windows Network Diagnostics
ERR_CONNECTION_TIMED_OUT

An additional interesting misbehavior between setup and running tasks is that I now get daily file backups:

even though they are disabled and supposed to run only once when manually triggered:

I found this bug while I was trying to change the runtime to 38 hours instead of the default of 24 hours.
I guess the system tries to fulfill the minimum of 40 copies taken over from the default settings?
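
One way to check whether such a minimum is really configured would be the same sqlite approach as above (a sketch; the key name pattern is a guess, not a verified setting name):

# look for minimal/maximal incremental file backup copy settings
# (hypothetical key pattern - adjust if the real keys differ)
../../bin/sqlite3 backup_server_settings.db \
  "SELECT key, value, clientid FROM settings WHERE key LIKE '%file_incr%'"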

How much free space do you have on your root filesystem? And what filesystem type and mount options do you use for both root and backup storage? Can you provide your image backup settings, and say if something was changed in the Advanced tab?

I am not sure that setting “ALL_NONUSB” for “Default directories to backup” is correct; check the documentation, there must be a list of paths there. Any incorrectly configured option can cause instability and errors. Even if you don't use file backups, the options will be transferred and processed by the client.

Thanks for the input, but I already showed in the first post that I have around 40 TB to play with and only 8 TB in use (now around 30 TB, caused by the unwanted 3x2 TB daily incremental backups).
Independently of that, this shouldn't cause any VHD/VHDX problems?
The root filesystem is not of interest on a QNAP, and the UrBackup package itself is also on a volume with around 7 TB (ah, “code” is better than “quote” here):

[~] # df -h . /share/CACHEDEV1_DATA/.qpkg/QUrBackup/var/urbackup /share/CACHEDEV2_DATA
Filesystem                Size      Used Available Use% Mounted on
none                    400.0M    325.0M     75.0M  81% /
/dev/mapper/cachedev1
                          6.7T     38.8G      6.7T   1% /share/CACHEDEV1_DATA
/dev/mapper/cachedev2
                         39.4T     30.3T      9.0T  77% /share/CACHEDEV2_DATA

It is a nearly “clean” QNAP instance which usually holds 2 iSCSI targets for Acronis Backup 12.5, but last month we had permanent problems with the iSCSI communication, and the Windows backup server totally lost the connection 3 weeks ago, so I could “play” and test with this NAS.

The mount options could be an interesting question, but they are set up by QNAP, so I have no chance to modify them.
And there is one instance that is backed up with a VHDX image file.
I also use ext4 for 80 TB and even some hundreds of TB of storage, so I see no reason for it to be the cause:

[~] # mount | grep /share/CACHEDEV2_DATA
/dev/mapper/cachedev2 on /share/CACHEDEV2_DATA type ext4 (rw,usrjquota=aquota.user,jqfmt=vfsv1,user_xattr,data=ordered,data_err=abort,nodelalloc,nopriv,nodiscard,acl)

But if this could be a problem, you should better ask for the block size of the filesystem, too ^^
(sadly the bold markup isn’t usable here:

Filesystem features:      has_journal ext_attr filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Block size:               4096

):

[~] # dumpe2fs /dev/mapper/cachedev2
dumpe2fs 1.45.5 (07-Jan-2020)
Filesystem volume name:   backups
Last mounted on:          /share/CACHEDEV2_DATA
Filesystem UUID:          d08608dc-a47d-497c-8985-bc243dccc4e6
Filesystem magic number:  0xEF53
Filesystem revision #:    1 (dynamic)
Filesystem features:      has_journal ext_attr filetype needs_recovery extent 64bit flex_bg sparse_super large_file huge_file uninit_bg dir_nlink extra_isize
Filesystem flags:         signed_directory_hash rehashed
Default mount options:    (none)
Filesystem state:         clean
Errors behavior:          Continue
Filesystem OS type:       Linux
Inode count:              2684354560
Block count:              10737418240
Reserved block count:     131072
Free blocks:              3673197370
Free inodes:              2664351830
First block:              0
Block size:               4096
Fragment size:            4096
Group descriptor size:    64
Blocks per group:         32768
Fragments per group:      32768
Inodes per group:         8192
Inode blocks per group:   512
RAID stride:              128
RAID stripe width:        512
Flex block group size:    16
Filesystem created:       Sat Aug  5 11:36:52 2023
Last mount time:          Sat Aug  5 11:37:10 2023
Last write time:          Sat Aug 26 01:01:07 2023
Mount count:              2
Maximum mount count:      -1
Last checked:             Sat Aug  5 11:36:52 2023
Check interval:           0 (<none>)
Lifetime writes:          51 TB
Reserved blocks uid:      0 (user admin)
Reserved blocks gid:      0 (group administrators)
First inode:              11
Inode size:               256
Required extra isize:     32
Desired extra isize:      32
Journal inode:            8
Default directory hash:   half_md4
Directory Hash Seed:      63177b4e-b0c1-44db-b59d-c70237b1df8e
Directory Hash Rev:       1
Directory Magic Number:   0x514E4150
Journal backup:           inode blocks
Journal features:         journal_incompat_revoke journal_64bit
Journal size:             1024M
Journal length:           262144
Journal sequence:         0x00053a28
Journal start:            173386

I have changed nothing in the Advanced tab because I think the settings are already optimized; in my local private trial, changed settings made the “server side” processing after the client push very, very slow:

The ALL_NONUSB value is referenced by the question mark next to this field:
http://10.30.2.248:55414/help.htm#default_dirs

How to define default backup locations?

Just enter the different locations separated by a semicolon (";") e.g.

C:\Users;C:\Program Files

If you want to give the backup locations a different name you can add one with the pipe symbol e.g:

C:\Users|User files;C:\Program Files|Programs

gives the "Users" directory the name "User files" and the "Program files" directory the name "Programs".

Those locations are only the default locations. Even if you check “Separate settings for this client” and disable “Allow client to change settings”, once the client modified the paths changes in this field are not used by the client.

How to specify the volumes to backup?

UrBackup backs up all the volumes you specify in the “Volumes to backup” field. You should specify volume letters (C,D,E,…) separated by ";" or "," there. With UrBackup Client and Server 1.4 you can also specify "ALL" to backup all volumes or "ALL_NONUSB" to backup all volumes except volumes connected via USB.

and it looks fine from the documentation at first view; the client always shows the 2nd block, about volumes, as the reference…
Only because I just rechecked after your hint did it turn out that the reference actually points to the previous block (with the red header).

But also: even if it's not supported, it should give an error and not constant file backups for a deactivated task? :sweat_smile:
I won't list C;D;E;F;G;H and mark the backup locations as “optional” => “Backup will not fail if the directory is unavailable”… (see the sketch below for what that would look like)
https://www.urbackup.org/administration_manual.html#x1-640008.3.4
But I can check which disks are on each client and override them per client to prove that this is not the cause :wink:
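
For completeness, a sketch of what such a per-client “Default directories to backup” value would look like, following the directory-flags syntax from manual section 8.3.4 (the drive letters and names are placeholders, not my real config):

# hypothetical example value only - drive letters are placeholders
C:\|C/optional;D:\|D/optional;E:\|E/optional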

Here is the proof that changing the path settings didn’t change the wrong backup behaviour:

An interesting “solution” is to use the commercial version, not the trial…
Without any changes it works out of the box as intended:

[/share/CACHEDEV2_DATA/urbackup/nu3hpv01[nu3webadm01 (10.30.2.20)]/240208-1715_Image_SCSI_0_0] # ls -lh
total 3.5G
-rwxrwx--- 1 admin administrators  17G 2024-02-08 17:21 Image_SCSI_0_0_240208-1715.vhdx*
-rwxrwx--- 1 admin administrators 513K 2024-02-08 17:15 Image_SCSI_0_0_240208-1715.vhdx.cbitmap*
-rwxrwx--- 1 admin administrators 1.1M 2024-02-08 17:19 Image_SCSI_0_0_240208-1715.vhdx.hash*
-rwxrwx--- 1 admin administrators  12K 2024-02-08 17:15 Image_SCSI_0_0_240208-1715.vhdx.mbr*
-rwxrwx--- 1 admin administrators    0 2024-02-08 17:21 Image_SCSI_0_0_240208-1715.vhdx.sync*