Block sent out of sequence. Expected block - image backup failing


Having an issue with one particular server: a Windows Server 2012 R2 VM that has trouble backing up a data drive.

The client keeps erroring out at random points during the full backup; sometimes it gets through 1 GB, sometimes 5 GB, etc.

Starting unscheduled full image backup of volume “E:”…
Basing image backup on last incremental or full image backup
Error retrieving last image backup. Doing full image backup instead.
Block sent out of sequence. Expected block >=2463216 got 0. Retrying…
Client disconnected before sending anything (Timeout: unavailable).
Transferred 9.22437 GB - Average speed: 91.9135 MBit/s
Time taken for backing up client TSR-VERIATO: 14m 23s
Backup failed

Debug log at the time shows:
2018-07-03 10:37:12: WARNING: Error deleting snapshot set {D61F6D7B-B229-4FE3-91C9-AE3FE5A65A7C}
2018-07-03 10:51:33: ERROR: Pipe broken -2
2018-07-03 10:51:33: ERROR: Pipe broken -4

The C: drive backs up without any issue, and incrementals on it also run fine.

The E: drive (the problem drive) has been wiped, deleted, removed, and rebuilt.

Running urbackupclientbackend.exe with high priority and high I/O priority.


Is there a way to adjust the timeout settings on the UrBackup client? Is a timeout even what's happening here?


Even though it’s a VM, have you verified it’s not an underlying disk problem?

Check Event Viewer (on both the hypervisor and the VM) for problems around the time of the backup error
Use smartmontools (or similar) on the hypervisor to check disk health
Try duplicating the vhd(x) to make sure there aren't read errors.
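The last check can be scripted: copy the image end-to-end and compare checksums. A minimal sketch, using a small stand-in file since I don't know your real image path; on the hypervisor, point SRC at the actual vhd(x)/qcow2 instead:

```shell
SRC=/tmp/demo-disk.img        # stand-in; replace with the real VM disk image
COPY=/tmp/demo-disk-copy.img

# Create the stand-in file (skip this step against a real image).
dd if=/dev/urandom of="$SRC" bs=1M count=8 2>/dev/null

# conv=noerror keeps dd going past read errors so you see every failure;
# any "Input/output error" messages here point at the underlying storage.
dd if="$SRC" of="$COPY" bs=4M conv=noerror 2>/dev/null

# A checksum mismatch means the image could not be read back faithfully.
if [ "$(sha256sum < "$SRC")" = "$(sha256sum < "$COPY")" ]; then
    echo "read check OK"
else
    echo "read check FAILED"
fi
```

If the copy succeeds and the checksums match, the disk can be read cleanly and the problem is more likely elsewhere (network, snapshot handling).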


This is usually a networking problem. Can you update some drivers or enable/disable something like TCP offloading?
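On a Linux hypervisor, offload settings can be inspected and toggled per interface with ethtool. A sketch of what that looks like; the interface name `eth0` is a placeholder for the host NIC or the VM's tap/bridge device (on a Windows guest, the equivalent settings live under the vNIC's Advanced properties):

```shell
# Show the current offload feature flags for the interface:
ethtool -k eth0

# Disable the offloads commonly implicated in corrupted or
# out-of-order transfers (segmentation and checksum offload):
ethtool -K eth0 tso off gso off gro off tx off rx off
```

Note these changes don't persist across reboots unless added to the interface configuration.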


Other VMs on the same hypervisor and storage run fine with no issues.

I just checked the TCP offloading settings, and it was indeed enabled. I've just finished disabling it and will let it run another backup attempt.


That was it. Stupid TCP Offloading!


You’re talking about this?



No, the TCP offloading settings on the virtual NIC of the VM itself. On all of my other VMs I have it disabled, and it improves network performance. This was a new VM built by another tech, and I assumed he had disabled it, but he never did:


We use Proxmox for our Hypervisor.