Way to continue image backup? (Data error (cyclic redundancy check))


Hey All,

I am running urbackup server 2.3.7 and client 2.3.4, at home and at work, backing up about 30 hosts in all.

I am running both file and image backups periodically.

Every now and then, I find the image backups fail for one or two machines. The error is:

Error on client occurred: Error while reading from shadow copy device (1). Data error (cyclic redundancy check).

Now if I replace the hard drive, re-install Windows, and restore files from the latest file backup, all is fine. The problem is that takes a lot of time. There is probably a bad spot on the drive, but it is not affecting client operations in any way. It would be great if UrBackup could still create a backup image of the drive even with the bad spot.

FYI, I have enabled the option in settings: Do not fail backups in case of hash mismatches or read errors

It would be great if this could be a warning instead of a fatal error. Not sure if it's even possible…



While possible, I’m not sure this would be sensible, because the backup would be clearly missing data.

The thing to do would be to force the disk to reallocate the sector, by identifying the bad sector and then overwriting it with random data (preferably after telling you which files are affected). I have googled before for a Windows utility that does this, but haven't found any. Perhaps you have more luck, or someone else knows of one? (On Linux I'd use the non-destructive mode of badblocks.)
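For reference, the non-destructive badblocks run on Linux would look something like this. /dev/sdX is a placeholder for the actual device; don't run it against a mounted filesystem:

```shell
# Non-destructive read-write test: badblocks reads each block, overwrites it
# with a test pattern, verifies it, then restores the original contents.
# The write is what gives the drive a chance to reallocate a failing sector.
sudo badblocks -nsv -o bad-blocks.txt /dev/sdX
# -n  non-destructive read-write mode
# -s  show progress
# -v  verbose: report the number of bad blocks found
# -o  write the list of bad block numbers to bad-blocks.txt
```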


True, but like I said, the client machine is running just fine; so yes, that bit of data is missing, but it is not important to the operation of the machine.

Also the bit of data that is missing is not part of the user data, or the file backups would return a warning or fail.

I'm pretty sure simply re-imaging the disk will cause it to reallocate the bad sector and avoid it from then on. In the case I am looking at now, the disk is an Intel SSD. I downloaded the Intel disk utility and it also shows there is a bad sector, but it doesn't provide a way to correct it.

So if urbackup would just complete an image of the disk, I could then write that same image to the same drive, causing the disk to avoid the bad sector. That bit of data would still be missing, but it doesn’t seem to matter in this case, and would save me time and money. (I wouldn’t replace the disk yet)


As another data point, I've seen another machine with this error, and because consumer disks keep retrying failed reads, it had Windows hangs of up to 30 minutes.

Yeah, like described here: https://superuser.com/a/688764/89138 If you find a utility that only writes zeroes to the bad blocks (as I said), you wouldn't have to re-image the whole thing.


Yes, a utility to write only zeros over bad sectors would be nice.

I suppose I can go into the settings and create an exception for this machine to skip image backups. It will only get file backups until I actually need to replace the disk.


You could always use the primitive old-school approach: CHKDSK /R, which should mark the bad spot as not to be used. I can't speak for UrBackup, but every other imaging solution I've tried copes after that.

Re-running CHKDSK afterwards with the /B switch (re-evaluate any sectors marked bad) may induce such a write.
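In case it helps, the two passes described above would look like this from an elevated command prompt (C: standing in for the affected volume):

```
:: Locate bad sectors and recover readable information (marks bad clusters)
chkdsk C: /R

:: NTFS only: clear the bad cluster list and re-scan all clusters
:: (/B includes the functionality of /R)
chkdsk C: /B
```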

Writing zeros to only one sector is doable on Linux: you'd use fsck or badblocks to identify the block, then use dd with appropriate values for bs (block size), seek (how far into the output drive to start writing), and count=1 (write only one block).
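A sketch of that dd step, demonstrated on a scratch file rather than a real device (in real use, substitute /dev/sdX for disk.img and the sector number badblocks reported; a 512-byte sector size is assumed here, which you can check with blockdev --getss):

```shell
# Build an 8-sector scratch "disk" filled with 'A' bytes, standing in for /dev/sdX.
head -c 4096 /dev/zero | tr '\0' 'A' > disk.img

BAD=3   # sector number reported by badblocks (hypothetical value for the demo)

# Overwrite exactly one 512-byte sector with zeros.
# conv=notrunc keeps the rest of the target intact.
dd if=/dev/zero of=disk.img bs=512 seek=$BAD count=1 conv=notrunc 2>/dev/null

# Inspect the zeroed sector; neighbouring sectors still hold their data.
dd if=disk.img bs=512 skip=$BAD count=1 2>/dev/null | od -An -tx1 | head -n 1
```

On a real drive the same write is what prompts the firmware to reallocate the failing sector from its spare pool.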

Personally, I'd avoid the chance of errors in the Linux operation above and simply sacrifice one write cycle per cell: use a drive-wipe utility (one pass) and restore the most recent backup. Yes, you write zeros to all sectors, but you get the one you need, assuming the two CHKDSK passes didn't sort it already.