Server:
Physical machine, quad-core i3-4160, 8 GB RAM, Debian Linux, 4× 6 TB Western Digital in raidz1
Client:
UrBackup 2.0.31
Windows Server 2012R2
Virtual machine, quad-core, 8 GB RAM, Virtio disks
It looks like the speed is somehow capped at 60 Mbit/s (about 6 MB/s) when doing a full image backup. Whatever transfer setting I try to change (raw, hashed, compressed, encrypted), performance does not improve.
Iperf tells me that I can push around 800 Mbit/s between the client and the server, and CrystalDiskMark shows good disk performance.
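If you want a quick cross-check without iperf, something like this toy loopback transfer in Python can confirm the TCP stack itself isn't the cap. This is just an illustration, not anything UrBackup-specific, and the chunk and transfer sizes are arbitrary:

```python
import socket
import threading
import time

CHUNK = 1 << 16          # 64 KiB per send (arbitrary)
TOTAL = 64 << 20         # transfer 64 MiB in total (arbitrary)

def sink(server_sock, done):
    """Accept one connection and drain TOTAL bytes from it."""
    conn, _ = server_sock.accept()
    received = 0
    while received < TOTAL:
        data = conn.recv(CHUNK)
        if not data:
            break
        received += len(data)
    conn.close()
    done.append(received)

server = socket.socket()
server.bind(("127.0.0.1", 0))   # port 0: let the OS pick a free port
server.listen(1)
port = server.getsockname()[1]

done = []
t = threading.Thread(target=sink, args=(server, done))
t.start()

client = socket.socket()
client.connect(("127.0.0.1", port))
payload = b"\0" * CHUNK
start = time.monotonic()
sent = 0
while sent < TOTAL:
    client.sendall(payload)
    sent += CHUNK
client.close()
t.join()
elapsed = time.monotonic() - start
server.close()

mb_per_sec = done[0] / elapsed / (1 << 20)
print(f"loopback throughput: {mb_per_sec:.0f} MB/s")
```

Loopback numbers will be far above any real LAN figure, of course; the point is only to separate "network path" from "backup software" as the suspect.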
I’m suspecting a software issue/setting/limitation somewhere, but I can’t figure out where…
You could try disabling "Run backups with background priority on the clients" in the advanced settings on the server. The setting should take effect after a client service restart.
I did a fresh install of Windows 7 last week.
The backup was running at about 56 kbit/s over a gigabit LAN!
It turned out the single-core CPU was maxed out by the Windows Update client.
I killed the update process and the speed immediately jumped to about 60 Mbit/s…
I’ve set up a different pair of machines to try to replicate the issue.
My setup reaches 30 Mbit/s instead of 60 when doing image backups. File backups, however, go as fast as you can reasonably expect.
My feeling is still that something is rate-limiting imaging, especially since the speed on this setup is about half of what my customer’s setup does…
It’s probably quite hard to properly pin this down. A first step would be to find out if it is the client or server that is slowing it down.
One thing you could try is having a look at UrBackupClientBackend.exe with Process Hacker (http://processhacker.sourceforge.net/) and then increasing the CPU priority of the image read-ahead thread.
I’m currently running an incremental image backup. There is almost no traffic between the client and the server; I can tcpdump the stream and keep track of it, and it’s mostly keep-alives.
The client keeps running at 3.something MB/s, even if I change the process priority to Realtime and the I/O priority to High.
By the way, I tried Windows Server Backup to see whether it is just as slow with VSS, but it clocks in at around 80 MB/s.
Well, I’m seeing exactly the same disk-read speed on the client during an incremental run and during a full backup. So don’t you agree that we can pretty safely assume it’s the client?
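To rule out raw sequential read speed on the client without UrBackup in the loop, a rough sketch like the one below can help. Be aware the result is inflated by the page cache (CrystalDiskMark is the proper tool on Windows), and the file and chunk sizes here are invented for illustration:

```python
import os
import tempfile
import time

CHUNK = 1 << 20   # read 1 MiB at a time (arbitrary)
SIZE = 32 << 20   # 32 MiB scratch file (arbitrary)

# Write a scratch file, then time a sequential chunked read of it.
fd, path = tempfile.mkstemp()
try:
    with os.fdopen(fd, "wb") as f:
        f.write(b"\0" * SIZE)

    start = time.monotonic()
    read = 0
    with open(path, "rb") as f:
        while True:
            data = f.read(CHUNK)
            if not data:
                break
            read += len(data)
    elapsed = time.monotonic() - start
    print(f"sequential read: {read / elapsed / (1 << 20):.0f} MB/s")
finally:
    os.remove(path)
```

If even a cache-warmed read like this came out near 6 MB/s, the client's whole read path would be suspect rather than UrBackup itself.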
Yeah, my guess is that the read-ahead needs improvement. I just need to make sure that it is actually the bottleneck and that it isn’t simply fixed by increasing the CPU priority of the read-ahead thread.
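For what it's worth, the read-ahead pattern in question is essentially a producer/consumer pipeline: one thread reads the disk image ahead of a consumer that hashes and sends the data. The sketch below is a simplified illustration of that shape, not UrBackup's actual code; the chunk size and queue depth are invented, and hashing stands in for the real hash-and-send work:

```python
import hashlib
import io
import queue
import threading
import time

CHUNK = 512 * 1024   # hypothetical read-ahead chunk size
DEPTH = 8            # how many chunks the reader may buffer ahead

def read_ahead(src, q):
    """Producer: read fixed-size chunks ahead of the consumer.

    If this thread runs at low priority and gets starved, the queue
    empties and the whole pipeline stalls at the reader's pace.
    """
    while True:
        chunk = src.read(CHUNK)
        q.put(chunk)          # blocks once DEPTH chunks are queued
        if not chunk:         # empty read = EOF sentinel
            return

def consume(q):
    """Consumer: hash each chunk, standing in for hash+send work."""
    total = 0
    digest = hashlib.sha256()
    while True:
        chunk = q.get()
        if not chunk:
            return total, digest.hexdigest()
        digest.update(chunk)
        total += len(chunk)

src = io.BytesIO(b"x" * (8 * 1024 * 1024))   # 8 MiB in-memory "disk image"
q = queue.Queue(maxsize=DEPTH)
t = threading.Thread(target=read_ahead, args=(src, q), daemon=True)
start = time.monotonic()
t.start()
total, _ = consume(q)
elapsed = time.monotonic() - start
print(f"processed {total >> 20} MiB in {elapsed:.2f}s")
```

The design point: the bounded queue lets the reader stay a few chunks ahead, so if the observed throughput doesn't change when the reader's priority is raised, the bottleneck is likely elsewhere in the pipeline.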