CBT Output Buffer Too Small

Hi Everyone, (client 2.3.6 CBT, Server 2.3.8)
I am running the client with change block tracking, and it has been working well for me; I have seen backups shrink nicely and become feasible over the internet.

I use the CBT feature to back up a VM off site.
I described in another post how I set it up to reduce the backup size etc. (sorry, but I'm not sure how to link to it), as I have limited, or as it can be called, metered internet. I am still looking for ways to minimize the data transfer.

To that end I decided to set up a local test to see how each setting affects the data size.
I have a folder with 73 files, ranging in size from a few hundred MB up to 70GB (this is a forever-incremental Veeam test backup folder for a VM that is backed up, and changes, every hour). The 70GB file changes every hour as the oldest incremental backup (normally about 300-400MB) is rolled into it, a new incremental file is added, and an old one is deleted.
This set of files resides on a VHD drive.

Now, when I do an incremental file backup of these files I get a huge change of about 32GB (local incremental file backup transfer mode: Block Differences - Hashed).
When I do an incremental image backup of the VHD drive these files are on, the transfer amount is around 1.2GB.
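
As I understand it, the hashed block-difference mode splits each file into fixed-size blocks, hashes them, and resends every block whose hash changed. A rough Python sketch of the idea (the block size and hash choice are illustrative, not UrBackup's exact internals):

```python
import hashlib

BLOCK_SIZE = 512 * 1024  # illustrative block size, not UrBackup's exact value

def block_hashes(path):
    """Hash a file in fixed-size blocks."""
    with open(path, "rb") as f:
        return [hashlib.sha256(block).digest()
                for block in iter(lambda: f.read(BLOCK_SIZE), b"")]

def transfer_estimate(old_path, new_path):
    """Estimate what a hashed block-diff mode would resend:
    every block whose hash differs, plus any grown/shrunk tail."""
    old, new = block_hashes(old_path), block_hashes(new_path)
    changed = sum(1 for a, b in zip(old, new) if a != b)
    changed += abs(len(new) - len(old))
    return changed * BLOCK_SIZE
```

If the Veeam merge rewrites data all through the 70GB file, most of those block hashes change, which might explain why the file-level number is so much bigger than the image one.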

I thought I would check (though I am still very new to UrBackup), and on the client's info form, which shows what has CBT active, there is the message “output buffer too small”.

Does anyone know what it is, or know of any way I can reduce backup sizes further?

As an example, when I do an incremental file backup on another machine where the file changes are only a few hundred MB, my transfers are approx. 10MB.

Thanks for anyone's input; it is appreciated.

Sorry you are having issues. To solve the problem, the client (debug) log would be useful.
If possible, could you post it or send it?

This post describes how to change the client to debug logging, where it is stored and where to send it to if posting is not possible: Having problems with UrBackup? Please read before posting

If it is in the CBT info screen, please copy & paste the complete message here.
Thanks!

W.r.t. backup size: CBT doesn't really influence that. It just makes backups faster, since it doesn't have to look at the whole file/image content, and it can also give you a better progress bar.
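
Conceptually, the driver just keeps a bitmap of which blocks changed since the last backup, and the client only reads the marked blocks instead of scanning the whole volume. A minimal sketch of that idea (block size and data structures are illustrative, not the actual driver interface):

```python
CLUSTER = 4096  # illustrative tracking granularity

def incremental_read(volume_path, changed_bitmap):
    """Yield (offset, data) only for blocks the tracking bitmap marks dirty.

    The amount of changed data is the same either way; CBT only saves the
    time that would be spent reading and hashing the unchanged blocks."""
    with open(volume_path, "rb") as vol:
        for index, dirty in enumerate(changed_bitmap):
            if dirty:
                vol.seek(index * CLUSTER)
                yield index * CLUSTER, vol.read(CLUSTER)
```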

Identical files should not be loaded if you have client-side hashes enabled, run the clients as Internet clients and use a file system on the server where it can hard link (all setups except the ZFS snapshotting one).
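
The hard-link part works roughly like a content-addressed pool: the first copy of a given file content is stored once, and every identical file after that is just another link to it. A minimal sketch of the idea (the pool layout is made up, not UrBackup's actual storage format):

```python
import hashlib, os, shutil

def store(src, dest, pool_dir):
    """Store src at dest; files with identical content become hard links."""
    h = hashlib.sha256()
    with open(src, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    pooled = os.path.join(pool_dir, h.hexdigest())
    if not os.path.exists(pooled):
        shutil.copy2(src, pooled)  # first occurrence pays the full size
    os.link(pooled, dest)          # identical files after that cost no space
```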

Hi Uroni, thanks for getting back to me I appreciate it.
I have changed the args file to debug; I had to stop and restart the service to get a clean log.
I have attached 2 logs. One is the old log, which goes all the way back to the beginning and has a lot of misc stuff from when I was playing around and messing things up lol. It shows a lot of errors, but I have to admit I don't know what any of it means. I have also made a clean log file from the point the debug switch was turned on.

I have done 4 runs of UrBackup. Run one was an incremental image backup of the drive with almost no changes (UrBackup had been run previously, and Veeam only runs from dawn to dusk on this drive).
Then I ran an incremental file backup.
After these almost-no-change-to-source runs, I ran another incremental image backup and then another incremental file backup.

I hope that makes sense and gives a benchmark or two to reference.
If you need more information, please let me know.

Please forgive any errors, as I am still learning how to use UrBackup properly.
That said, it is a unique piece of software, and it is doing for us what a lot of very expensive software cannot: safe off-site backup transferred over the internet. Awesome.
thanks
John

CBT Status window text
Volumes with active change block tracking with crash persistence (v2.13):
N:

Volumes without change block tracking:
E:, V:, I:, C:, T:, D:, F:

urbackup - Copy.zip (96.9 KB)

Last driver log messages:
Output buffer too small (0) in IOCTL_URBCT_STATUS
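
From what I can tell, that kind of message means the caller handed the driver an output buffer smaller than the status data it wanted to return; the generic Windows pattern for this is to retry with a bigger buffer. A rough ctypes sketch of that generic pattern (the handle and IOCTL code are placeholders, not UrBackup's real ones):

```python
import ctypes
from ctypes import wintypes

kernel32 = ctypes.WinDLL("kernel32", use_last_error=True)
kernel32.DeviceIoControl.argtypes = [
    wintypes.HANDLE, wintypes.DWORD, wintypes.LPVOID, wintypes.DWORD,
    wintypes.LPVOID, wintypes.DWORD, ctypes.POINTER(wintypes.DWORD),
    wintypes.LPVOID]
ERROR_INSUFFICIENT_BUFFER = 122
ERROR_MORE_DATA = 234

def query_status(device_handle, ioctl_code):
    """DeviceIoControl with a grow-and-retry output buffer."""
    size = 256
    while True:
        buf = ctypes.create_string_buffer(size)
        returned = wintypes.DWORD(0)
        ok = kernel32.DeviceIoControl(device_handle, ioctl_code,
                                      None, 0,  # no input buffer
                                      buf, size,
                                      ctypes.byref(returned), None)
        if ok:
            return buf.raw[:returned.value]
        err = ctypes.get_last_error()
        if err not in (ERROR_INSUFFICIENT_BUFFER, ERROR_MORE_DATA):
            raise ctypes.WinError(err)
        size *= 2  # "output buffer too small": try again with a larger one
```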

It seems Fileserver is a local client? You could make it an Internet client (block the ports, or run enable_internet_only.bat on the client); that way it'll use client-side hashing and compress the transfer.

NTFS also cannot store differences efficiently. It patches URbackuptest.vbm with a ~600KB patch, but it has to copy the whole file to do that, because NTFS doesn't have reflink (unlike ReFS). So, sticking with Windows, you could use ReFS or Windows Server dedup.

Or you use Linux+btrfs.
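
To make the reflink point concrete: on a copy-on-write file system the server can clone the old file for free and then write only the changed ranges, whereas on NTFS that clone step degrades into a full copy of the file. A sketch using Linux's FICLONE ioctl, which is what reflink copies use on btrfs/XFS (the patch data is illustrative):

```python
import fcntl, shutil

FICLONE = 0x40049409  # Linux ioctl: share the source file's extents (reflink)

def patched_copy(base, target, patches):
    """Create target from base, then overwrite only the changed ranges."""
    with open(base, "rb") as src, open(target, "wb") as dst:
        try:
            fcntl.ioctl(dst.fileno(), FICLONE, src.fileno())  # instant clone
        except OSError:
            shutil.copyfileobj(src, dst)  # no reflink support: full copy
    with open(target, "r+b") as dst:
        for offset, data in patches:  # e.g. the ~600KB that actually changed
            dst.seek(offset)
            dst.write(data)
```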

Hi Uroni, sorry for the delay replying; so much to do and only one lifetime to do it in lol.
Correct, Fileserver is local. I am using it as a test base, with locally based computers, to try to find the most efficient method of transferring backups from off site.

I have limited data per month (1TB), and I have an off-site server backing up to Fileserver over the internet.
In a previous post I described how I set this up to minimise the transfer size (long story short: use Veeam to make a backup copy of its backup to a mounted VHD file, and do an image backup of that using the CBT client). This has stabilised at around 1-2GB per hourly backup; doing this 10 times per day comes to about 20GB. But some days it is bigger, and so at the month's end I am out of data.

My next move is to try a direct backup of the VM using the Hyper-V backup, but I am having a few issues, probably down to me being silly (changing clients from normal client > CBT client > Hyper-V client). But hey, nothing in life is easy if you want to do it well.

If I could get the changes per hour down to 400-600MB I would be very happy.
Will keep the community posted as to how it goes.
ta