Server 2.5.23/Client 2.5.17 - Backups hang on server when client enters standby

As always, thank you for the great tool. This forum is mostly used to report issues or ask for help, so the happy users, who to my understanding are the majority, rarely show up here. I’m one of the happy users.

Description:
The server backs up multiple clients. If a client is switched off (standby or hibernate) during a backup, that backup hangs on the server. Even when the client resumes a few hours later, the server is unable to pick it up and either cancel or resume the backup; those backups hang on the server forever. The client reports no connection to the server.

Steps to reproduce:

  1. Set up a client and server to run backups.
  2. Start a backup. For me, the hanging type was an image backup; I have not seen this behaviour with file backups yet.
  3. After the backup has started, put the client into standby mode.
  4. Wait a few hours.
  5. Resume the client.

Expected behaviour:
The backup in progress is either resumed where it left off when the client wakes up, or considered failed after some timeout or once the client is back online.

Actual behaviour:
The server hangs in an endless loop trying to reconnect to the client. Backups never fail or resume; a server restart is required. Other clients are also not being backed up.

Logs on server:

Mar 11 10:40:59 apparel urbackupsrv[3560]: Socket has error: 113
Mar 11 10:40:59 apparel urbackupsrv[3560]: Socket has error: 113
Mar 11 10:40:59 apparel urbackupsrv[3560]: Connecting Channel to CLIENT1 failed - CONNECT error -55
Mar 11 10:40:59 apparel urbackupsrv[3560]: Connecting to ClientService of "CLIENT1" failed: Error sending 'running' (2) ping to client
Mar 11 10:40:59 apparel urbackupsrv[3560]: Error sending 'running' (3) ping to client
Mar 11 10:41:04 apparel urbackupsrv[3560]: Connecting Channel to CLIENT2 failed - CONNECT error -55
Mar 11 10:41:04 apparel urbackupsrv[3560]: Connecting Channel to CLIENT3 failed - CONNECT error -55
Mar 11 10:41:04 apparel urbackupsrv[3560]: Connecting Channel to CLIENT4 failed - CONNECT error -55
Mar 11 10:41:04 apparel urbackupsrv[3560]: Connecting to ClientService of "CLIENT4" failed: Error sending 'running' (2) ping to client
Mar 11 10:41:04 apparel urbackupsrv[3560]: Error sending 'running' (3) ping to client
Mar 11 10:41:06 apparel urbackupsrv[3560]: Socket has error: 113
Mar 11 10:41:06 apparel urbackupsrv[3560]: Connecting to ClientService of "CLIENT2" failed: Error sending 'running' (2) ping to client
Mar 11 10:41:06 apparel urbackupsrv[3560]: Socket has error: 113
Mar 11 10:41:06 apparel urbackupsrv[3560]: Connecting Channel to CLIENT2 failed - CONNECT error -55
Mar 11 10:41:06 apparel urbackupsrv[3560]: Error sending 'running' (3) ping to client

Debug logs on the client:

2022-03-10 17:42:49: ClientService cmd: STATUS DETAIL#pw=PASSWORD_REMOVED
2022-03-10 17:42:50: ClientService cmd: STATUS DETAIL#pw=PASSWORD_REMOVED
2022-03-10 17:42:51: ClientService cmd: STATUS DETAIL#pw=PASSWORD_REMOVED
2022-03-10 17:42:52: ClientService cmd: STATUS DETAIL#pw=PASSWORD_REMOVED
2022-03-10 17:42:53: ClientService cmd: STATUS DETAIL#pw=PASSWORD_REMOVED
2022-03-10 17:42:54: ClientService cmd: STATUS DETAIL#pw=PASSWORD_REMOVED
2022-03-10 17:42:55: ClientService cmd: STATUS DETAIL#pw=PASSWORD_REMOVED
2022-03-10 17:42:56: ClientService cmd: STATUS DETAIL#pw=PASSWORD_REMOVED
2022-03-10 17:42:57: ClientService cmd: STATUS DETAIL#pw=PASSWORD_REMOVED
2022-03-10 17:42:58: ClientService cmd: STATUS DETAIL#pw=PASSWORD_REMOVED
2022-03-10 17:42:59: ClientService cmd: STATUS DETAIL#pw=PASSWORD_REMOVED
2022-03-10 17:43:00: ClientService cmd: STATUS DETAIL#pw=PASSWORD_REMOVED
2022-03-10 17:43:01: ClientService cmd: STATUS DETAIL#pw=PASSWORD_REMOVED
2022-03-10 17:43:02: ClientService cmd: STATUS DETAIL#pw=PASSWORD_REMOVED
2022-03-10 17:43:03: ClientService cmd: STATUS DETAIL#pw=PASSWORD_REMOVED
2022-03-10 17:43:04: ClientService cmd: STATUS DETAIL#pw=PASSWORD_REMOVED
2022-03-10 17:43:05: ClientService cmd: STATUS DETAIL#pw=PASSWORD_REMOVED

Sorry, I cannot reproduce this issue.

It should log “CLIENTNAME: Trying to reconnect in doImage” at debug level every minute. I guess the log excerpt is too small to show that. When the client comes online again, the server should also discover it (and log something about that) and show it as online again, e.g. on the status page on the server.

It doesn’t hang; it tries to resume the imaging process. Yes, that takes up a backup slot, and if you only have a limited number of backup slots that may prevent other backups from starting. After 10 days of this, the image backup fails.

You’re right, those messages are there; I didn’t know they were meaningful. But please let me insert a bigger log chunk here:

2022-03-11 10:41:06: Error sending 'running' (3) ping to client
2022-03-11 10:41:12: Socket has error: 113
2022-03-11 10:41:12: Socket has error: 113
2022-03-11 10:41:12: Connecting Channel to CLIENT1 failed - CONNECT error -55
2022-03-11 10:41:12: Connecting to ClientService of "CLIENT1" failed: Error sending 'running' (2) ping to client
2022-03-11 10:41:12: Error sending 'running' (3) ping to client
2022-03-11 10:41:12: CLIENT2: Trying to reconnect in doImage
2022-03-11 10:41:14: Connecting Channel to CLIENT3 failed - CONNECT error -55
2022-03-11 10:41:14: Connecting Channel to CLIENT4 failed - CONNECT error -55
2022-03-11 10:41:14: Connecting Channel to CLIENT2 failed - CONNECT error -55
2022-03-11 10:41:14: Connecting to ClientService of "CLIENT2 " failed: Error sending 'running' (2) ping to client
2022-03-11 10:41:14: Error sending 'running' (3) ping to client
2022-03-11 10:41:19: Socket has error: 113
2022-03-11 10:41:19: Connecting Channel to CLIENT5 failed - CONNECT error -55
2022-03-11 10:41:19: Socket has error: 113
2022-03-11 10:41:19: Connecting to ClientService of "CLIENT5 " failed: Error sending 'running' (2) ping to client
2022-03-11 10:41:19: Error sending 'running' (3) ping to client
2022-03-11 10:41:20: CLIENT1 : Trying to reconnect in doImage
2022-03-11 10:41:23: Socket has error: 113
2022-03-11 10:41:23: Socket has error: 113
2022-03-11 10:41:23: Connecting to ClientService of "CLIENT1" failed: Error sending 'running' (2) ping to client
2022-03-11 10:41:23: Connecting Channel to CLIENT1 failed - CONNECT error -55
2022-03-11 10:41:23: Socket has error: 113
2022-03-11 10:41:23: Error sending 'running' (3) ping to client
2022-03-11 10:41:24: Connecting Channel to CLIENT3 failed - CONNECT error -55
2022-03-11 10:41:24: Connecting Channel to CLIENT4 failed - CONNECT error -55
2022-03-11 10:41:24: Connecting Channel to CLIENT2 failed - CONNECT error -55
2022-03-11 10:41:24: Connecting to ClientService of "CLIENT2 " failed: Error sending 'running' (2) ping to client
2022-03-11 10:41:24: Error sending 'running' (3) ping to client
2022-03-11 10:41:27: CLIENT5: Trying to reconnect in doImage

They are there, but this has no effect. The client appears online from the server’s perspective (I cannot see relevant messages on the server side, even in debug mode), i.e. the server detects the client, but from the client’s perspective there is no server.

As for steps to reproduce, please try keeping the client switched off for, let’s say, 8 hours, because in reality it was asleep overnight.

Perhaps you can check whether the server can reach the client when this happens? One method is to add the client to the client discovery hints. The other is to run Wireshark on the client and check whether it receives the broadcasts from the server.
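
If Wireshark isn’t handy on that machine, a rough alternative is a small Python listener like the one below. This is only a sketch, under the assumption that the server’s discovery broadcasts arrive on UDP port 35623 and that the local UrBackup client service is stopped first so the port is free:

# Quick check: print any UDP datagrams arriving on the UrBackup discovery port.
# Assumption: discovery broadcasts use UDP 35623; stop the local client service
# first, otherwise bind() may fail because the port is already taken.
import socket

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", 35623))  # listen on all interfaces

print("Waiting for UDP datagrams on port 35623 (Ctrl+C to stop)...")
while True:
    data, addr = sock.recvfrom(4096)
    print(f"{addr[0]}:{addr[1]} sent {len(data)} bytes")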

It seems that I was able to pin this down. Context:

  • I have an FQDN for the UrBackup server in the LAN.
  • The default setting for all clients was to connect via that FQDN.
  • In the LAN that FQDN resolved to the local IP; on the internet it resolved to the external IP.
  • My logic was the following: sometimes the server-client connection via broadcasting takes a while, so enabling the internet server on local clients would probably speed that up. And it did.

So what happened was:

  • Client and server connect via broadcast or via the active internet connection.
  • Backup starts.
  • Client goes to sleep.
  • Client wakes up, but is now connected via the internet connection using the FQDN instead of the broadcast group, or vice versa. For the server it is the same client, however continuing the backup over the new channel is not possible; it leads to the errors above.

I’ve removed the active internet connections for local clients, and this seems to have fixed the issue so far, i.e. after sleep the client successfully continues the backup and the server is able to reconnect at the last state.

A warning on the client/server side is probably needed should such a situation occur, i.e. when the broadcast IP and the active connection IP are the same.

But you probably have a better idea how to resolve this. The issue was intermittent and drove me nuts, as I was unable to catch it.

Unfortunately the issue has returned, even after disabling internet mode for local clients.

I’ve made a dump of UDP port 35623 from the client side while the issue occurs. Please find the PCAP file attached. I had to change its extension, as the forum doesn’t allow attaching a PCAP file, so please change the extension back to .pcap and you should be able to open it with Wireshark.

The client is on CentOS 7. The same issue is observed for Windows clients as well.

failed_urbackup.txt (19.0 KB)

Restarting the client fixes the issue.

I hope this is helpful.

Thx, this might fix it:

Thank you, but that doesn’t seem to be helping.

The client I’m testing on now reports errors like this:

2022-05-29 01:54:43: Connecting Channel to CLIENT1 failed - CONNECT error -55
2022-05-29 01:54:43: Connecting to ClientService of "CLIENT1 " failed: Error getting name of client

The same messages also happened previously. I need to mention a few things:

  1. Sometimes the client goes to sleep
  2. Sometimes the server (UrBackup) goes to sleep

I haven’t seen errors during backups or failures to recover (more testing needed), however the above-mentioned issue is pretty annoying.

Short update from my side: since the patch I haven’t seen the situation where a backup starts and then hangs even though the client is back online. Which means it fixes the mid-backup hangs. Yay!

The only remaining issue is the one described above: idle clients start getting CONNECT error -55 until the client is restarted. The clients are Windows/Linux/ARM, so it seems it’s not platform specific; both Wi-Fi and LAN connected.

I haven’t seen such behaviour for active (internet) clients, though.

Should you need more dumps to pin it down, please let me know.

Perhaps you could send me a debug level client log file of a client that has such issues?

Here are the logs:

2022-06-20 14:00:17: Looking for old Sessions... 0 sessions
2022-06-20 14:30:18: Looking for old Sessions... 0 sessions
2022-06-20 15:00:19: Looking for old Sessions... 0 sessions
2022-06-20 15:30:20: Looking for old Sessions... 0 sessions
2022-06-20 16:00:21: Looking for old Sessions... 0 sessions
2022-06-20 16:16:16: ClientService cmd: #Ivw2j87vtGxuYrTgpqbVG#ENC?keyadd=jq4O+EGIqThSY9x5riLY2g--&compress=zstd&compress_level=6
2022-06-20 16:16:16: ClientService cmd: #Ivw2j87vtGxuYrTgpqbVG#ENC?keyadd=b1jqU6wYKrtc0SKY0pG35g--&compress=zstd&compress_level=6
2022-06-20 16:16:16: rc=0 hasError=true state=0
2022-06-20 16:16:16: rc=0 hasError=true state=0
2022-06-20 16:16:26: ClientService cmd: #Ivw2j87vtGxuYrTgpqbVG#ENC?keyadd=V76MYpODKVbPPjLYK3xKXA--&compress=zstd&compress_level=6
2022-06-20 16:16:26: rc=0 hasError=true state=0
2022-06-20 16:16:26: ClientService cmd: #Ivw2j87vtGxuYrTgpqbVG#ENC?keyadd=7+O+P1+rUGm+lNvxQD2X+w--&compress=zstd&compress_level=6
2022-06-20 16:16:26: rc=0 hasError=true state=0
2022-06-20 16:16:36: ClientService cmd: #Ivw2j87vtGxuYrTgpqbVG#ENC?keyadd=RDMscW5jmDwztwU0EP0ifA--&compress=zstd&compress_level=6
2022-06-20 16:16:36: rc=0 hasError=true state=0
2022-06-20 16:16:36: ClientService cmd: #Ivw2j87vtGxuYrTgpqbVG#ENC?keyadd=JB/Zzs0kkdqHst2N8dI0og--&compress=zstd&compress_level=6
2022-06-20 16:16:36: rc=0 hasError=true state=0
2022-06-20 16:16:46: ClientService cmd: #Ivw2j87vtGxuYrTgpqbVG#ENC?keyadd=dh1m1/opMnSLjoMygY2wDQ--&compress=zstd&compress_level=6
2022-06-20 16:16:46: rc=0 hasError=true state=0
2022-06-20 16:16:46: ClientService cmd: #Ivw2j87vtGxuYrTgpqbVG#ENC?keyadd=FIZb5TEY3MHhjWbmWBvLhQ--&compress=zstd&compress_level=6
2022-06-20 16:16:46: rc=0 hasError=true state=0
2022-06-20 16:16:56: ClientService cmd: #Ivw2j87vtGxuYrTgpqbVG#ENC?keyadd=5gATK4MFzh5BStiXxezpxg--&compress=zstd&compress_level=6
2022-06-20 16:16:56: rc=0 hasError=true state=0
2022-06-20 16:16:56: ClientService cmd: #Ivw2j87vtGxuYrTgpqbVG#ENC?keyadd=Ayx5CqMnSrOOqZ7rlWB5nA--&compress=zstd&compress_level=6
2022-06-20 16:16:56: rc=0 hasError=true state=0
2022-06-20 16:17:06: ClientService cmd: #Ivw2j87vtGxuYrTgpqbVG#ENC?keyadd=7s9mtVPGuh59LRXQ2g137g--&compress=zstd&compress_level=6
2022-06-20 16:17:06: rc=0 hasError=true state=0
2022-06-20 16:17:06: ClientService cmd: #Ivw2j87vtGxuYrTgpqbVG#ENC?keyadd=kOQwZWR7yfF4wWrtJPQj7w--&compress=zstd&compress_level=6
2022-06-20 16:17:06: rc=0 hasError=true state=0
2022-06-20 16:17:16: ClientService cmd: #Ivw2j87vtGxuYrTgpqbVG#ENC?keyadd=jb/CsrQsueh4qGM7WMW58g--&compress=zstd&compress_level=6
2022-06-20 16:17:16: rc=0 hasError=true state=0
2022-06-20 16:17:16: ClientService cmd: #Ivw2j87vtGxuYrTgpqbVG#ENC?keyadd=xrKXqSTSZy+laJe9Pqtm4w--&compress=zstd&compress_level=6
2022-06-20 16:17:16: rc=0 hasError=true state=0
2022-06-20 16:17:26: ClientService cmd: #Ivw2j87vtGxuYrTgpqbVG#ENC?keyadd=9hJu/nfPwPT48xGVdb9V+Q--&compress=zstd&compress_level=6
2022-06-20 16:17:26: rc=0 hasError=true state=0
2022-06-20 16:17:26: ClientService cmd: #Ivw2j87vtGxuYrTgpqbVG#ENC?keyadd=qCwAQXrmr8UC3PxjMhZQhw--&compress=zstd&compress_level=6
2022-06-20 16:17:26: rc=0 hasError=true state=0
2022-06-20 16:17:36: ClientService cmd: #Ivw2j87vtGxuYrTgpqbVG#ENC?keyadd=LtcmysZRJYUDSOME/3bn5w--&compress=zstd&compress_level=6
2022-06-20 16:17:36: rc=0 hasError=true state=0
2022-06-20 16:17:46: ClientService cmd: #Ivw2j87vtGxuYrTgpqbVG#ENC?keyadd=LjL8x+XHhcIKFPhHeMGTFA--&compress=zstd&compress_level=6
2022-06-20 16:17:46: rc=0 hasError=true state=0
2022-06-20 16:17:56: ClientService cmd: #Ivw2j87vtGxuYrTgpqbVG#ENC?keyadd=enU4a+XxeY1nT1U3fJnXhw--&compress=zstd&compress_level=6
2022-06-20 16:17:56: rc=0 hasError=true state=0
2022-06-20 16:18:06: ClientService cmd: #Ivw2j87vtGxuYrTgpqbVG#ENC?keyadd=os9DzAk6nU4xL0pa8R6OCg--&compress=zstd&compress_level=6
2022-06-20 16:18:06: rc=0 hasError=true state=0
2022-06-20 16:18:16: ClientService cmd: #Ivw2j87vtGxuYrTgpqbVG#ENC?keyadd=+HiA/aU7AlQKWT1wDDvxBw--&compress=zstd&compress_level=6
2022-06-20 16:18:16: rc=0 hasError=true state=0
2022-06-20 16:18:26: ClientService cmd: #Ivw2j87vtGxuYrTgpqbVG#ENC?keyadd=q1J7RLwC54HEZjued05xuQ--&compress=zstd&compress_level=6
2022-06-20 16:18:26: rc=0 hasError=true state=0
2022-06-20 16:18:36: ClientService cmd: #Ivw2j87vtGxuYrTgpqbVG#ENC?keyadd=wvzo1oXnxr3r9RVeBLS/Kw--&compress=zstd&compress_level=6
2022-06-20 16:18:36: rc=0 hasError=true state=0
2022-06-20 16:18:46: ClientService cmd: #Ivw2j87vtGxuYrTgpqbVG#ENC?keyadd=VcL/voR+TP0dBJesjL4xMg--&compress=zstd&compress_level=6
2022-06-20 16:18:46: rc=0 hasError=true state=0

2022-06-20 16:16:16 is the timestamp when the server came back online.

BUT! The correct time on the server is Mon Jun 20 18:21:46 CEST 2022.

For some reason there is a 2-hour gap between the log timestamps and the system time (ntpd is running). Restarting the client fixes the connection issue, but the time in the logs stays the same.

Logs from the client after client restart:

2022-06-20 16:22:40: FileSrv: Binding UDP socket at port 35622...
2022-06-20 16:22:40: FileSrv: done.
2022-06-20 16:22:40: urbackupserver: Server started up successfully!
2022-06-20 16:22:40: Started UrBackupClient Backend...
2022-06-20 16:22:40: FileSrv: Binding ipv6 UDP socket at port 35622...
2022-06-20 16:22:40: FileSrv: done.
2022-06-20 16:22:40: ERROR: Error joining ipv6 multicast group ff12::f894:d:dd00:ef91
2022-06-20 16:22:40: FileSrv: Servername: -CLIENT_NAME-
2022-06-20 16:22:40: FileSrv: Server started up successfully
2022-06-20 16:22:40: FileSrv: UDP Thread started
2022-06-20 16:22:41: Looking for old Sessions... 0 sessions
2022-06-20 16:22:41: Internet mode not enabled
2022-06-20 16:22:41: Final path: /
2022-06-20 16:22:46: urbackupserver: No available slots... starting new Worker
2022-06-20 16:22:46: ClientService cmd: #Ivw2j87vtGxuYrTgpqbVG#ENC?keyadd=T7WxMF1RVpzvhAkIaH8Iuw--&compress=zstd&compress_level=6
2022-06-20 16:22:46: Encrypting with key Q2QgHFWNYtGfXkI0GKFfA9KZf1zHdKlyWaUxLa+Z3yxPtbEwXVFWnO+ECQhofwi7APgx02qq+48qmWPK3hvGHQ-- (client)
2022-06-20 16:22:46: ClientService cmd: #Ivw2j87vtGxuYrTgpqbVG#GET CLIENTNAME
2022-06-20 16:22:46: rc=0 hasError=true state=0
2022-06-20 16:22:46: ClientService cmd: #Ivw2j87vtGxuYrTgpqbVG#ENC?keyadd=WLVXkLQasz8o4VRnLcvhPg--&compress=zstd&compress_level=6
2022-06-20 16:22:46: Encrypting with key Q2QgHFWNYtGfXkI0GKFfA9KZf1zHdKlyWaUxLa+Z3yxYtVeQtBqzPyjhVGcty+E+3Fg9oRj8rg0OzJzunasNKg-- (client)
2022-06-20 16:22:47: ClientService cmd: #Ivw2j87vtGxuYrTgpqbVG#1CHANNEL capa=0&token=tMvcsnR8c4ReFTm6QnWI&restore_version=1&startup=1&virtual_client=
2022-06-20 16:22:47: New channel: Number of Channels: 1
2022-06-20 16:22:47: ClientService cmd: #Ivw2j87vtGxuYrTgpqbVG#ENC?keyadd=VEhd1rPc9AUOOPuMZTt4UA--&compress=zstd&compress_level=6
2022-06-20 16:22:47: Encrypting with key Q2QgHFWNYtGfXkI0GKFfA9KZf1zHdKlyWaUxLa+Z3yxUSF3Ws9z0BQ44+4xlO3hQbm+urW5bee4RNC15HGYkDA-- (client)
2022-06-20 16:22:47: ClientService cmd: #Ivw2j87vtGxuYrTgpqbVG#CAPA
2022-06-20 16:22:47: rc=0 hasError=true state=0
2022-06-20 16:22:48: ClientService cmd: #IBUvHIgBtz7QNS5NW7LpX#GET CHALLENGE with_enc=1
2022-06-20 16:22:48: rc=0 hasError=true state=0
2022-06-20 16:22:48: ClientService cmd: #IBUvHIgBtz7QNS5NW7LpX#SIGNATURE#pubkey=MIIBtjCCASsGByqGSM44BAEwggEeAoGBAMFCPbtkjT5HmphNTc5/ZgZ9zucjw48WW0iKuNlLSlYoUPuxMh3fFx+QhAkhMYgz8DoJ3n3npsM7m0ReWW1YpnwE+RYc/ue44bgfi5FdEfsd1VwWhJ5PcAOwDo5ugObGTlKsB382/bUoF3YgzwxGnEEPIYx6vu/BihusUehpPTiDAhUA0y9EMcpeR1CSY14IpftHhE1i4ysCgYAa9EBu9MYC1mJnA6kF8Vz3eXQbLWu2z7IG0/61s4d5ccp/+LmtJuSIOC5AEPh1nh2Ko48UI1jJpDbOhHRYy4UH2C+B5hkWRopE0GQzz+27rJTazg4y9/ZYZOnU6bdzfOYF8bxEz6SFfTuk1s+O1zhyxRv6Y8lFCrTQDAkSFjWg2QOBhAACgYB4VZT/Gva2+k1twoGXb0EYhj9KaYGKuH5HHObxFpWgJyz8J29Qv0tXvVsRULZ+v2Cz+TukuOaduMGB2d+VBWE8cxntmxg5esEeyL4gyX22NCy+COK0D6cAyzo2DUL/nSMpgZpTru4fvnm17pLSoE+mA2ZkJgKAiG6YGVNKyNiWrw--&pubkey_ecdsa409k1=MIIBsDCCAUAGByqGSM49AgEwggEzAgEBMB0GByqGSM49AQIwEgICAZkGCSqGSM49AQIDAgIBVzBsBDQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABDQAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAABBGkEAGDwX2WPScGtOrGJD3GEIQ79CYfjB8hMJ6zPuPn2fMLEYBietaqqYu4iLrGzVUDP6QI3RgHjaQULfE5CrLodrL8EKZw0YHgvkY6kJ+YyUWXp6hDj2l9sQunFUhWqnKJ6WGPsSNjgKGsCM3/////////////////////////////////+X4Oy1OogQA7EVX1e0+PnyltLXIO44B5fzwIBBANqAAQAwq3lCX1tpy0NK8JXaQyCmhHBZWAXGr0KK48L6sl5wetYrg9d8hIhzvLK/sqct6/m66ZMAYOvVJg8YTo0da8Yimjad454D3B986FjwxBveY3HiDM4bOBSDp/zF3L5b7ISW6L8SAku0Q--&signature=aZreO75uSAvCpNhLsq+D87AW6ep06Dfa/aN750EQbLcxv8AKuth75A--&signature_ecdsa409k1=NP9ArIW3Vu6uiYiDdr5chbsbBRrBwu/27DUrFetICgEND45xnUX+iaKEzcAY8l0qTmhMJKNaB3/pCxTmn5CprlkMLGJ0b6ViLF6od/0K5d2RcYG/woCkgnjwF0DIyYAuynZRs95C&session_identity=sa1DVNy58tBe5SP6jCZv&secret_session_key=oWCnFsnIJpflXNHw9sLMrU6dtP2rdXczr3BFLQUBtkTHiSnhtwKEBntIY0AXS3YthsmypvO/YE14z7SskiScCw--&pubkey_ecdh233k1=BAAD0L+ANhWjLtDnX81wKMUJUtcQHTdBo6CAN7tkFwCu9HZFmbiDrucX90N0DHviGuVmZQs4/lsN/DV7+g--&signature_ecdh233k1=XtvmnlaXq8EtA7JQuCejQwxNM/3zov8erpYtNYXgdv9NOcS/PmLpYMOgjLPlmIIu3JaDWiTyYcsPuP5kuciSBrIJ7BHZVwtVauZPS2BeLxN8J349bvC91UjMBs0/xEhMFsNfAeZ9
2022-06-20 16:22:48: rc=0 hasError=true state=0
2022-06-20 16:22:48: ClientService cmd: #Isa1DVNy58tBe5SP6jCZv#ENC?keyadd=q4mw3NhJSpUFn/LJfK9XWQ--&compress=zstd&compress_level=6
2022-06-20 16:22:48: Encrypting with key NAzq9JZzZt/kN1QioCYeKi8yuRJY6awEimVh5Vm6i7+ribDc2ElKlQWf8sl8r1dZvPcFFLYCej144EuuZGRdhA-- (client)
2022-06-20 16:22:48: ClientService cmd: #Isa1DVNy58tBe5SP6jCZv#GET CLIENTNAME
2022-06-20 16:22:48: rc=0 hasError=true state=0

I hope this helps.

Thx! This might fix it:

Will do a release of the changes soon.

Hi!

I have more than 30 UrBackup servers. Your UrBackup program is great!
I would love to help test the new version. The best way is to test it on real clients.

How can I run 2 different versions of the UrBackup client on one host at the same time?

  • UrBackup client 2.4 makes backups to the production server.
  • UrBackup client 2.5 makes backups to the test server.

This way I will not affect the production environment.

Thank you! Going to test it right away.

While I’m about to test the changes and am still testing the VHDX patch (which seems to be working), the issue with unexpected deletion of VHDX containers on a remove-unknown call is really annoying, so if there is a way to include a fix in the next release, that would be great. It is described here: Server 2.5.23/Client 2.5.17 - #13 by uroni. You’ve mentioned that you’ve found the issue, but haven’t shared the patch yet.