Server 2.5.22 and VHDX Volumes

Since the Testing topic for Server 2.5.22/Client 2.5.16 got auto-closed, I'm starting a new topic.

First of all, thank you for the great tool. After considering multiple options, I've decided to switch from Bareos to UrBackup as the solution for backing up the different types of systems under my administration.

I've set up UrBackup Server from source because I need Linux image backups, and those work for me in 2.5.22. So the server is 2.5.22 on CentOS 7. Everything looks good except for the new VHDX container for image backups, which is marked beta.

I'm trying to back up a Windows 10 client (a full image backup, for the first time), and whenever I choose Compressed VHDX I get a segfault in libzstd.so, like so:

image backup wr[14755]: segfault at 8 ip 00007f553f2d22d0 sp 00007f54c8c707a0 error 6 in libzstd.so.1.5.0[7f553f2bc000+105000]

The client is 2.5.16; the same issue occurs with a 2.4.x client.

The server dies silently (even with debug-level messages enabled); here is an extract from the log:

2022-01-03 17:49:18: Server started up successfully!
2022-01-03 17:49:18: Looking for old Sessions... 0 sessions
2022-01-03 17:49:18: Did not find backup suitable for archiving with backup_type=0 image=false letter=
2022-01-03 17:49:18: Did not find backup suitable for archiving with backup_type=0 image=false letter=
2022-01-03 17:49:19: Downloading server version info...
2022-01-03 17:49:25: HTTP: No available slots... starting new Worker
2022-01-03 17:49:49: New Backupclient: WIN10
2022-01-03 17:49:49: Sending Identity to client "WIN10" failed. Retrying soon...
2022-01-03 17:49:50: Encrypting with key gxDOvp+LbMFxy0TZhTzYyI6NuZ94DCu1+hbAyAQsNySEJHJCUe51POWxCd60LR+IPpL5eAf6eCmIfBh329WNXg-- (server)
2022-01-03 17:49:50: Getting client settings...
2022-01-03 17:49:50: Encrypting with key gxDOvp+LbMFxy0TZhTzYyI6NuZ94DCu1+hbAyAQsNySbVdgZwoPJnnDA1Y0vH4/+kLUrWod6SVXMm/HRAB3I6A-- (server)
2022-01-03 17:49:50: Encrypting with key gxDOvp+LbMFxy0TZhTzYyI6NuZ94DCu1+hbAyAQsNyQZnQWXc6g5RdUQ3n2rJSlOfmbFyel965odaUystBPuPw-- (server)
2022-01-03 17:49:50: Flushing FileClient...
2022-01-03 17:49:50: Channel message: STARTUP timestamp=1641227737
2022-01-03 17:49:50: Encrypting with key gxDOvp+LbMFxy0TZhTzYyI6NuZ94DCu1+hbAyAQsNySEVNNIuLj9HrtWt+V6rdsUpUhlDmCpVDbuZq2jIMZrvg-- (server)
2022-01-03 17:49:50: Sending backup incr interval...
2022-01-03 17:49:50: Encrypting with key gxDOvp+LbMFxy0TZhTzYyI6NuZ94DCu1+hbAyAQsNyT+GDEAt47RUp9gPzaawL2A0ig8iEstTGoH/wDrlTbRbg-- (server)
2022-01-03 17:49:50: Encrypting with key gxDOvp+LbMFxy0TZhTzYyI6NuZ94DCu1+hbAyAQsNyRp4yWSzkEYPYLg+8bW3YVBWPPDN26lYbtXL3WVfAFGSg-- (server)
2022-01-03 17:50:35: msg=START IMAGE FULL
2022-01-03 17:50:35: Encrypting with key gxDOvp+LbMFxy0TZhTzYyI6NuZ94DCu1+hbAyAQsNySDMPNypj+lB7ZOI929Y+sWbuV+eg8nF4sezCI3i8kZIQ-- (server)
2022-01-03 17:50:35: Starting unscheduled full image backup of volume "C:"...
2022-01-03 17:50:35: Backing up SYSVOL...
2022-01-03 17:50:35: Starting unscheduled full image backup of volume "SYSVOL"...
2022-01-03 17:50:35: Encrypting with key gxDOvp+LbMFxy0TZhTzYyI6NuZ94DCu1+hbAyAQsNyQTsxJCXinf/zmsh13IuYRLp9HJ8Bx1CXMmv7W32xG8sA-- (server)
2022-01-03 17:50:35: Encrypting with key gxDOvp+LbMFxy0TZhTzYyI6NuZ94DCu1+hbAyAQsNyR0W4hxSUgghz/PEnsiif5jAjGLtg49BxMed77OtEHO5g-- (server)
2022-01-03 17:50:35: Encrypting with key gxDOvp+LbMFxy0TZhTzYyI6NuZ94DCu1+hbAyAQsNyTlmgZeQf4PUAQkBm6nTbawloVkd/x7Yq8mn1ctFKta7Q-- (server)
2022-01-03 17:54:16: Starting HTTP-Server on port 55414

On the server side, all the files are created in the backup folder, i.e.:
Image_SYSVOL_220103-1754.vhdz.cbitmap
Image_SYSVOL_220103-1754.vhdz.hash
Image_SYSVOL_220103-1754.vhdz.mbr
Image_SYSVOL_220103-1754.vhdz.sync
Image_SYSVOL_220103-1754.vhdz

However, Image_SYSVOL_220103-1754.vhdz is only 160 bytes in all cases.

# rpm -qa | grep libzstd
libzstd-1.5.0-1.el7.x86_64

Since I was able to reproduce the issue in a virtual environment, I assume this is a known issue. If not, please let me know; I can provide more details to pin it down.

Switching the server-side container format to Compressed VHD works without issues.

Hence the question: should Compressed VHDX containers work, and if so, do you need more details on the matter?

Could you run the server as described here and post the stack traces? https://urbackup.atlassian.net/wiki/spaces/US/pages/8323075/Debugging+with+gdb+on+Linux Thanks!
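
That page boils down to roughly the following (a sketch; the exact and current steps are on the linked page):

# run the server under gdb instead of as a daemon
gdb --args urbackupsrv run --loglevel debug
(gdb) handle SIGPIPE nostop noprint pass
(gdb) run
# reproduce the backup, then after the crash:
(gdb) thread apply all bt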

Sorry for the long response time; I didn't have time to set up an environment until now:

#0  0x00007ffff6f8d850 in ZSTD_writeFrameHeader () from /lib64/libzstd.so.1
#1  0x00007ffff6f9551d in ZSTD_compressContinue_internal () from /lib64/libzstd.so.1
#2  0x00007ffff6f96fb7 in ZSTD_compressEnd () from /lib64/libzstd.so.1
#3  0x00007ffff6f9b939 in ZSTD_compressCCtx () from /lib64/libzstd.so.1
#4  0x00007ffff6f9b9e2 in ZSTD_compress () from /lib64/libzstd.so.1
#5  0x00000000005a18d9 in CompressedFile::evictFromLruCache (this=0x7fff98630fe0, item=...) at fsimageplugin/CompressedFile.cpp:505
#6  0x00000000005a46a6 in LRUMemCache::evict (this=this@entry=0x7fff986310a0, item=..., deleteBuffer=deleteBuffer@entry=false) at fsimageplugin/LRUMemCache.cpp:175
#7  0x00000000005a4a93 in LRUMemCache::createInt (this=this@entry=0x7fff986310a0, offset=offset@entry=10485760) at fsimageplugin/LRUMemCache.cpp:242
#8  0x00000000005a4b28 in LRUMemCache::put (this=0x7fff986310a0, offset=10485760, buffer=buffer@entry=0x7fff8b7ad010 "", bsize=bsize@entry=4096)
    at fsimageplugin/LRUMemCache.cpp:78
#9  0x00000000005a3b8e in CompressedFile::Write (this=0x7fff98630fe0, buffer=0x7fff8b7ad010 "", bsize=4096, has_error=<optimized out>)
    at fsimageplugin/CompressedFile.cpp:434
#10 0x0000000000593f5f in VHDXFile::Write (this=0x7fff9862fe20, spos=6291456, buffer=0x7fff8b7ad010 "", bsize=4096, has_error=0x0) at fsimageplugin/vhdxfile.cpp:1561
#11 0x000000000058c14c in VHDXFile::Write (this=0x7fff9862fe20, buffer=<optimized out>, bsize=<optimized out>, has_error=<optimized out>)
    at fsimageplugin/vhdxfile.cpp:870
#12 0x00000000006957ee in ServerVHDWriter::writeVHD (this=0x7fff98831410, pos=6291456, buf=0x7fff8b7ad010 "", bsize=4096) at urbackupserver/server_writer.cpp:263
#13 0x00000000006986c2 in ServerVHDWriter::operator() (this=0x7fff98831410) at urbackupserver/server_writer.cpp:116
#14 0x00000000004aae01 in CPoolThread::operator() (this=0x7fff9862dde0) at ThreadPool.cpp:73
#15 0x0000000000470c0c in thread_helper_f (t=0x7fff9862dde0) at Server.cpp:1487
#16 0x00007ffff6312ea5 in start_thread (arg=0x7fff8a475700) at pthread_create.c:307
#17 0x00007ffff603b9fd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

This is the stack trace with the updated libzstd-1.5.1-4.el7.x86_64 (the most recent version in EPEL).

I do not have symbols for libzstd because the debuginfo package is not available in EPEL; however, should you want to see them as well, let me know and I'll compile it manually.
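
Should the libzstd symbols become relevant, I'd build roughly like this (a sketch, not the EPEL package build; version tag assumed) and preload the result:

# build a symbolized zstd and preload it for a single run
git clone -b v1.5.1 https://github.com/facebook/zstd.git
cd zstd && make CFLAGS="-O2 -g"
LD_PRELOAD=$PWD/lib/libzstd.so urbackupsrv run --loglevel debug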

I guess you could compile with the --enable-embedded-zstd configure parameter. That way we could also rule out the issue being specific to a particular zstd version.

The next step would be to compile with ./configure --enable-embedded-zstd CPPFLAGS="-fsanitize=address" LDFLAGS="-fsanitize=address" and run it like that.
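
In full, that rebuild would look roughly like this (a sketch; standard autotools targets assumed):

make clean
./configure --enable-embedded-zstd CPPFLAGS="-fsanitize=address" LDFLAGS="-fsanitize=address"
make -j"$(nproc)"
make install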

Recompiled and reran:

(gdb) bt
#0  0x00000000012c814f in MEM_write32 (value=4247762216, memPtr=0x8) at external/zstd/common/mem.h:156
#1  MEM_writeLE32 (val32=4247762216, memPtr=0x8) at external/zstd/common/mem.h:288
#2  ZSTD_writeFrameHeader (dst=dst@entry=0x8, dstCapacity=dstCapacity@entry=4210688, pledgedSrcSize=2097152, dictID=dictID@entry=0, params=...)
    at external/zstd/compress/zstd_compress.c:2415
#3  0x00000000012d45fa in ZSTD_compressContinue_internal (cctx=cctx@entry=0x7fffd2ce2b70, dst=dst@entry=0x8, dstCapacity=dstCapacity@entry=4210688, 
    src=src@entry=0x7fffd467b800, srcSize=srcSize@entry=2097152, frame=frame@entry=1, lastFrameChunk=lastFrameChunk@entry=1)
    at external/zstd/compress/zstd_compress.c:2480
#4  0x00000000012d5339 in ZSTD_compressEnd (cctx=cctx@entry=0x7fffd2ce2b70, dst=dst@entry=0x8, dstCapacity=dstCapacity@entry=4210688, src=src@entry=0x7fffd467b800, 
    srcSize=srcSize@entry=2097152) at external/zstd/compress/zstd_compress.c:2895
#5  0x00000000012d5783 in ZSTD_compress_advanced_internal (cctx=cctx@entry=0x7fffd2ce2b70, dst=dst@entry=0x8, dstCapacity=dstCapacity@entry=4210688, 
    src=src@entry=0x7fffd467b800, srcSize=srcSize@entry=2097152, dict=dict@entry=0x0, dictSize=dictSize@entry=0, params=...)
    at external/zstd/compress/zstd_compress.c:2959
#6  0x00000000012db033 in ZSTD_compress_usingDict (cctx=cctx@entry=0x7fffd2ce2b70, dst=0x8, dstCapacity=4210688, src=src@entry=0x7fffd467b800, 
    srcSize=srcSize@entry=2097152, dict=dict@entry=0x0, dictSize=dictSize@entry=0, compressionLevel=compressionLevel@entry=7)
    at external/zstd/compress/zstd_compress.c:2971
#7  0x00000000012db16e in ZSTD_compressCCtx (compressionLevel=7, srcSize=2097152, src=0x7fffd467b800, dstCapacity=<optimized out>, dst=<optimized out>, 
    cctx=0x7fffd2ce2b70) at external/zstd/compress/zstd_compress.c:2981
#8  ZSTD_compress (dst=<optimized out>, dstCapacity=<optimized out>, src=0x7fffd467b800, srcSize=2097152, compressionLevel=7)
    at external/zstd/compress/zstd_compress.c:2991
#9  0x0000000000839503 in CompressedFile::evictFromLruCache (this=0x60200010f240, item=...) at fsimageplugin/CompressedFile.cpp:505
#10 0x00000000008423fc in LRUMemCache::evict (this=this@entry=0x601c000d1cc0, item=..., deleteBuffer=deleteBuffer@entry=false) at fsimageplugin/LRUMemCache.cpp:175
#11 0x00000000008430aa in LRUMemCache::createInt (this=this@entry=0x601c000d1cc0, offset=offset@entry=10485760) at fsimageplugin/LRUMemCache.cpp:242
#12 0x00000000008432c7 in LRUMemCache::put (this=0x601c000d1cc0, offset=10485760, buffer=buffer@entry=0x7fffd421a800 "", bsize=bsize@entry=4096)
    at fsimageplugin/LRUMemCache.cpp:78
#13 0x000000000083fbfa in CompressedFile::Write (this=0x60200010f240, buffer=0x7fffd421a800 "", bsize=4096, has_error=<optimized out>)
    at fsimageplugin/CompressedFile.cpp:434
#14 0x0000000000813e45 in VHDXFile::Write (this=0x60640001cd00, spos=6291456, buffer=0x7fffd421a800 "", bsize=4096, has_error=0x0) at fsimageplugin/vhdxfile.cpp:1501
#15 0x00000000007fc8e3 in VHDXFile::Write (this=0x60640001cd00, buffer=<optimized out>, bsize=<optimized out>, has_error=<optimized out>)
    at fsimageplugin/vhdxfile.cpp:870
#16 0x0000000000ab6223 in ServerVHDWriter::writeVHD (this=0x60240008e700, pos=6291456, buf=0x7fffd421a800 "", bsize=4096) at urbackupserver/server_writer.cpp:263
#17 0x0000000000abe10e in ServerVHDWriter::operator() (this=0x60240008e700) at urbackupserver/server_writer.cpp:116
#18 0x0000000000559c1e in CPoolThread::operator() (this=0x600600157780) at ThreadPool.cpp:73
#19 0x00000000004afe64 in thread_helper_f (t=0x600600157780) at Server.cpp:1487
#20 0x00007ffff4e64a88 in __asan::AsanThread::ThreadStart (this=0x7fffd2ce6000) at ../../../../libsanitizer/asan/asan_thread.cc:99
#21 0x00007ffff366bea5 in start_thread (arg=0x7fffd2ce4700) at pthread_create.c:307
#22 0x00007ffff33949fd in clone () at ../sysdeps/unix/sysv/linux/x86_64/clone.S:111

If possible, could you try this patch? It allocates one more compressed buffer (n_threads + 1 instead of n_threads); an undersized buffer pool would explain the bogus dst=0x8 destination pointer in the backtraces:

diff --git a/fsimageplugin/CompressedFile.cpp b/fsimageplugin/CompressedFile.cpp
index 69950553c..6a486d044 100644
--- a/fsimageplugin/CompressedFile.cpp
+++ b/fsimageplugin/CompressedFile.cpp
@@ -68,7 +68,7 @@ CompressedFile::CompressedFile( std::string pFilename, int pMode, size_t n_threa
 		blocksize = c_cacheBuffersize;
 		writeHeader();
 		hotCache.reset(new LRUMemCache(blocksize, c_ncacheItems, n_threads));
-		initCompressedBuffers(n_threads);
+		initCompressedBuffers(n_threads + 1);
 	}
 
 	if(hotCache.get())
@@ -93,7 +93,7 @@ CompressedFile::CompressedFile(IFile* file, bool openExisting, bool readOnly, si
 		blocksize = c_cacheBuffersize;
 		writeHeader();
 		hotCache.reset(new LRUMemCache(blocksize, c_ncacheItems, n_threads));
-		initCompressedBuffers(n_threads);
+		initCompressedBuffers(n_threads + 1);
 	}
 	if(hotCache.get()!=nullptr)
 	{
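
To apply it, something like this should work from the UrBackup source tree (assuming the diff is saved as compressedfile.patch; the file name is just an example):

patch -p1 < compressedfile.patch
make && make install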

I applied the patch, recompiled, and reran. The failure is a bit different from before. Here is the gdb output leading up to the failure:

[New Thread 0x7fffdda2f700 (LWP 17508)]
[New Thread 0x7fffdcf28700 (LWP 17509)]
2022-01-22 22:41:40: Starting scheduled full image backup of volume "C:"...
2022-01-22 22:41:40: Backing up SYSVOL...
[New Thread 0x7fffdc421700 (LWP 17510)]
2022-01-22 22:41:40: Starting scheduled full image backup of volume "SYSVOL"...
[New Thread 0x7fffdb91a700 (LWP 17511)]
2022-01-22 22:41:40: Encrypting with key Lh3eltN559qqMNKlfVFD9Pz/41TZTCvtN/mW1DYb3L8e5uQFBGlH5S/ZnwiwZ1L1rT0ziYddUEBwe6PP8DZX8g-- (server)
2022-01-22 22:41:40: Encrypting with key Lh3eltN559qqMNKlfVFD9Pz/41TZTCvtN/mW1DYb3L+oIKcHNezs6TdrNpa3z1A/vNP7D5pWglMxUpfcbPr5eQ-- (server)
2022-01-22 22:41:40: Encrypting with key Lh3eltN559qqMNKlfVFD9Pz/41TZTCvtN/mW1DYb3L994gB2HMOlEYDH242pZ7ilrfKYe0dqLH45ZDQuLOx0wQ-- (server)
[New Thread 0x7fffd94d5700 (LWP 17512)]
[New Thread 0x7fffd6af4700 (LWP 17513)]
2022-01-22 22:41:54: Encryption overhead: 66.6123 KB
=================================================================
==17468== ERROR: AddressSanitizer: heap-use-after-free on address 0x600600115870 at pc 0x80aa98 bp 0x7fffd6af3860 sp 0x7fffd6af3850
READ of size 8 at 0x600600115870 thread T30 (image backup wr)
    #0 0x80aa97 (/usr/local/bin/urbackupsrv+0x80aa97)
    #1 0x80acc8 (/usr/local/bin/urbackupsrv+0x80acc8)
    #2 0xabdf69 (/usr/local/bin/urbackupsrv+0xabdf69)
    #3 0x559c1d (/usr/local/bin/urbackupsrv+0x559c1d)
    #4 0x4afe63 (/usr/local/bin/urbackupsrv+0x4afe63)
    #5 0x7ffff4e64a87 (/usr/lib64/libasan.so.0.0.0+0x19a87)
    #6 0x7ffff366bea4 (/usr/lib64/libpthread-2.17.so+0x7ea4)
    #7 0x7ffff33949fc (/usr/lib64/libc-2.17.so+0xfe9fc)
0x600600115870 is located 0 bytes inside of 32-byte region [0x600600115870,0x600600115890)
freed by thread T30 (image backup wr) here:
    #0 0x7ffff4e5d379 (/usr/lib64/libasan.so.0.0.0+0x12379)
    #1 0x838abf (/usr/local/bin/urbackupsrv+0x838abf)
previously allocated by thread T25 (ibackup main) here:
    #0 0x7ffff4e5d139 (/usr/lib64/libasan.so.0.0.0+0x12139)
    #1 0x4ab249 (/usr/local/bin/urbackupsrv+0x4ab249)
Thread T30 (image backup wr) created by T25 (ibackup main) here:
    #0 0x7ffff4e55c2a (/usr/lib64/libasan.so.0.0.0+0xac2a)
    #1 0x4af6eb (/usr/local/bin/urbackupsrv+0x4af6eb)
Thread T25 (ibackup main) created by T23 (client main) here:
    #0 0x7ffff4e55c2a (/usr/lib64/libasan.so.0.0.0+0xac2a)
    #1 0x4af6eb (/usr/local/bin/urbackupsrv+0x4af6eb)
Thread T23 (client main) created by T4 here:
    #0 0x7ffff4e55c2a (/usr/lib64/libasan.so.0.0.0+0xac2a)
    #1 0x4af6eb (/usr/local/bin/urbackupsrv+0x4af6eb)
Thread T4 created by T0 here:
    #0 0x7ffff4e55c2a (/usr/lib64/libasan.so.0.0.0+0xac2a)
    #1 0x4af6eb (/usr/local/bin/urbackupsrv+0x4af6eb)
Shadow bytes around the buggy address:
  0x0c014001aab0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c014001aac0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c014001aad0: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c014001aae0: fa fa fa fa fa fa fa fa fa fa fd fd fd fd fa fa
  0x0c014001aaf0: 00 00 00 fa fa fa fa fa fa fa fa fa 00 00 00 00
=>0x0c014001ab00: fa fa fa fa fa fa fa fa fa fa fa fa fa fa[fd]fd
  0x0c014001ab10: fd fd fa fa fd fd fa fa fa fa fd fd fa fa fa fa
  0x0c014001ab20: 00 00 00 fa fa fa fa fa fa fa fa fa fa fa fa fa
  0x0c014001ab30: fa fa fd fd fd fd fa fa fd fd fa fa fa fa fd fd
  0x0c014001ab40: fa fa fa fa fa fa fa fa fa fa fd fd fd fd fa fa
  0x0c014001ab50: fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa fa
Shadow byte legend (one shadow byte represents 8 application bytes):
  Addressable:           00
  Partially addressable: 01 02 03 04 05 06 07 
  Heap left redzone:     fa
  Heap right redzone:    fb
  Freed Heap region:     fd
  Stack left redzone:    f1
  Stack mid redzone:     f2
  Stack right redzone:   f3
  Stack partial redzone: f4
  Stack after return:    f5
  Stack use after scope: f8
  Global redzone:        f9
  Global init order:     f6
  Poisoned by user:      f7
  ASan internal:         fe
==17468== ABORTING
[Thread 0x7fffd6af4700 (LWP 17513) exited]
[Thread 0x7fffd94d5700 (LWP 17512) exited]
[Thread 0x7fffdb91a700 (LWP 17511) exited]
[Thread 0x7fffdc421700 (LWP 17510) exited]
[Thread 0x7fffdcf28700 (LWP 17509) exited]
[Thread 0x7fffdda2f700 (LWP 17508) exited]
[Thread 0x7fffe1be4700 (LWP 17507) exited]
[Thread 0x7fffe32b6700 (LWP 17506) exited]
[Thread 0x7fffe3dbd700 (LWP 17492) exited]
[Thread 0x7fffe48c4700 (LWP 17491) exited]
[Thread 0x7fffe53cb700 (LWP 17490) exited]
[Thread 0x7fffe5ed2700 (LWP 17489) exited]
[Thread 0x7fffe74e0700 (LWP 17487) exited]
[Thread 0x7fffe7fe7700 (LWP 17486) exited]
[Thread 0x7fffe8aee700 (LWP 17485) exited]
[Thread 0x7fffe95f5700 (LWP 17484) exited]
[Thread 0x7fffea0fc700 (LWP 17483) exited]
[Thread 0x7fffeac03700 (LWP 17482) exited]
[Thread 0x7fffeb70a700 (LWP 17481) exited]
[Thread 0x7fffec211700 (LWP 17480) exited]
[Thread 0x7fffecd18700 (LWP 17475) exited]
[Thread 0x7fffed81f700 (LWP 17474) exited]
[Thread 0x7fffee628700 (LWP 17473) exited]
[Thread 0x7fffef12f700 (LWP 17472) exited]
[Thread 0x7ffff7fd9880 (LWP 17468) exited]
[Inferior 1 (process 17468) exited with code 01]

gdb has no stack for this case (the process exited). It looks like a race condition.

I hope this helps.

One more issue related to VHDX. I'm not sure whether it's related to the issue above, so I'm posting it here. I tried an uncompressed image today, and here is the outcome:

  1. The backup completed successfully.
  2. Recovery from the recovery image failed:
2022-02-02 09:48:03: New Backupclient: ##restore##16437916086241
2022-02-02 09:48:03: Channel message: UPDATE ACCESS KEY
2022-02-02 09:48:03: Channel message: STARTUP timestamp=1643791608
2022-02-02 09:48:03: Channel message: LOGIN username=&password=
2022-02-02 09:48:03: Channel message: GET BACKUPCLIENTS
2022-02-02 09:48:17: Channel message: GET BACKUPIMAGES Sandbox-test
2022-02-02 09:48:42: Channel message: DOWNLOAD IMAGE with_used_bytes=1&img_id=419161548&time=1643786677&mbr=true
2022-02-02 09:48:43: Channel message: DOWNLOAD IMAGE with_used_bytes=1&img_id=419161548&time=1643786677&mbr=true
2022-02-02 09:48:43: Lost channel connection to ##restore##16437916086241. has_error=true
2022-02-02 09:48:44: Channel message: DOWNLOAD IMAGE with_used_bytes=1&img_id=419161548&time=1643786677&mbr=false
2022-02-02 09:48:44: ERROR: Footer checksum wrong. Switching to header
2022-02-02 09:48:44: ERROR: Header and footer checksum wrong
2022-02-02 09:48:44: Lost channel connection to ##restore##16437916086241. has_error=true
2022-02-02 09:48:44: Lost channel connection to ##restore##16437916086241. has_error=true
2022-02-02 09:48:45: Channel message: UPDATE ACCESS KEY
2022-02-02 09:48:45: Channel message: STARTUP timestamp=1643791608

Switching to VHD fixes the issue. The recovery image was 2.4.0.