Backup speed is very slow

Hi,

I am running UrBackup Server v2.3.8.0 on Debian Buster. All clients are running UrBackup Client Controller v2.3.4.0 on Debian Buster as well, except for one Windows client.
The backup speed is extremely slow on the Debian clients. A full file backup job of 133 GB has currently been running for an hour and has transferred only 400 MB; at this rate it will need more than a month to complete. The ETA on the web interface:

I used to use rsnapshot, which is based on rsync, and a full backup of this client took around 2-3 hours.
The backup server is a Ganeti LVM instance, and the backup directory is an NFS mount coming directly from the Ganeti master (host). The disk is a single Western Digital Red 2 TB SATA 6 Gb/s drive with 128 MB cache at 5400 RPM. This is the same NFS mount rsnapshot wrote to.
I tested the network speed:

I have unchecked the "run with background priority" box, but it is not getting better.

Thank you for the help.

Is there anything new on this? Can I do anything to improve the speed?

If I understood this correctly, the speed refers to the write operation on the disk. If so, then this is really slow: dd can write a 1 GB file to the same directory in 10 seconds.
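
For reference, a sketch of the kind of test I mean (the file name is hypothetical, and note that without oflag=dsync the result can be inflated by the Linux page cache):

# write 1 GB into the backup directory; dd reports elapsed time and throughput
dd if=/dev/zero of=/backup/ddtest bs=1M count=1000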

If the speed refers to the network speed, it was constant at 17 Mbit/s the whole time without any drops.
The speed shown on the UrBackup server under Activities changes continuously between 10 Kbit/s and 14 Mbit/s.

I hope you can give me some hints to get this working.

Thanks

After further investigation I have the following.
Network speed between the urbackup LVM Ganeti VM and the Ganeti node (host):

  • iperf on urbackup as client:
iperf -c 10.0.10.10
------------------------------------------------------------
Client connecting to 10.0.10.10, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 10.0.10.11 port 42674 connected with 10.0.10.10 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.0 sec  22.6 GBytes  19.4 Gbits/sec
  • iperf on Ganeti server:
iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 10.0.10.10 port 5001 connected with 10.0.10.11 port 42674
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.0 sec  22.6 GBytes  19.4 Gbits/sec

Network speed between urbackup and a backup client:

  • iperf on urbackup as server:
iperf -s
------------------------------------------------------------
Server listening on TCP port 5001
TCP window size: 85.3 KByte (default)
------------------------------------------------------------
[  4] local 10.0.10.11 port 5001 connected with 10.0.10.150 port 44400
[ ID] Interval       Transfer     Bandwidth
[  4]  0.0-10.8 sec  13.1 MBytes  10.2 Mbits/sec
  • iperf on backup client:
iperf -c 10.0.10.11
------------------------------------------------------------
Client connecting to 10.0.10.11, TCP port 5001
TCP window size: 85.0 KByte (default)
------------------------------------------------------------
[  3] local 10.0.10.150 port 44400 connected with 10.0.10.11 port 5001
[ ID] Interval       Transfer     Bandwidth
[  3]  0.0-10.2 sec  13.1 MBytes  10.8 Mbits/sec

The speed varies between 10.8 Mbit/s and 14.9 Mbit/s across different tries.
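
To smooth out that variation, a longer run with parallel streams could be used (a sketch; the duration and stream count are arbitrary choices):

# 60-second iperf test, 4 parallel TCP streams, report every 10 seconds
iperf -c 10.0.10.11 -t 60 -P 4 -i 10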

dd from the urbackup server to the NFS share, which is the backup location:

dd if=/dev/zero of=/backup/test bs=1GB count=10 oflag=dsync
10+0 records in
10+0 records out
10000000000 bytes (10 GB, 9.3 GiB) copied, 250.428 s, 39.9 MB/s

So it is definitely not an I/O issue of the NFS share. It is obvious that the backup client 10.0.10.150 has a low bandwidth of at most 14.9 Mbit/s, but this doesn't explain the Kbit/s speeds in the Activities view on the urbackup server when backing up this client.
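
One caveat: dd with oflag=dsync only measures sequential writes. The random-write behaviour of the share could be checked separately, e.g. with fio (a sketch; the job parameters are assumptions):

# 4 KiB random writes into the NFS-backed backup directory
fio --name=randwrite --directory=/backup --rw=randwrite --bs=4k \
    --size=256M --ioengine=libaio --direct=1 --numjobs=1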

I hope this information helps in solving this issue.

Thanks

Unfortunately I don't have the iftop numbers any more, but it showed exactly the same speed as UrBackup while backing up to the NFS share.
I also tested backing up the same client to the local disk of the UrBackup server, and iftop again showed the ~14 Mbit/s that UrBackup reported.
So this seems to be related to the combination of backing up a client to the NFS share through UrBackup. The I/O of the NFS share without UrBackup and the network speed between client and UrBackup are each at their maximum; only the combination of the two shows this drop in backup speed.
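
For anyone who wants to reproduce the measurement, the iftop invocation could look like this (the interface name is an assumption):

# watch per-host traffic on the backup network during a running backup
iftop -i eth0 -F 10.0.10.0/24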

I know this is 19 days old, but I just have to comment on this. And 3x2b2ff5, I am not trying to single you out here; it's just, why do you guys think this will work well at all? Think about it. Your urbackup server is reaching out over its NIC connection to a server to back it up, transferring data to the urbackup server, and then you ask it to write that data over the same NIC to another server. There is obviously going to be contention on that link; the data in and the data out cannot travel simultaneously.

I use iSCSI storage for all my urbackup servers, but I see no reason why NFS would not work. I don't know how to force an NFS mount to use a different interface, but I'm guessing it is possible (my guess at how it might look follows below). The point is, I always use a separate interface to connect my storage, separate from the interface urbackup uses to access the LAN to pull backups. On physical servers some will say, "I don't have the option of adding another NIC." Well, you have to ask yourself: if that is not physically possible (not just financially), then adding backup storage directly to your server is the other option. You have to make choices. If this is for your business, you have to go to management and say, "Backups are taking too long, we are at risk, and here is how we mitigate that risk."
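
My guess at how that could look with NFS (all addresses and paths here are hypothetical): put the storage network on its own subnet and mount the export via the NFS server's address on that subnet, so the traffic is routed over the dedicated NIC.

# /etc/fstab on the urbackup server
# 10.99.0.10 = storage-network IP of the NFS server, reachable only via the dedicated NIC
10.99.0.10:/export/backup  /backup  nfs  rw,hard,vers=4  0  0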

Take this as you will, but I've been doing server backups for close to 30 years, and this has been the best solution for improving backup performance to remote storage.

BTW, urbackup is the most stable, reliable backup system I have ever used. Even better than those multi-$1K systems I've been forced to use in the past.

Usually UrBackup is slowed down by random I/O. NFS is worse than iSCSI (I guess) because iSCSI at least enables use of the local in-memory (writeback) Linux block cache. You can also use e.g. btrfs on top of iSCSI. The hard-linking operations are sequential (per backup) in UrBackup, and thus the network latency matters there. Of course, having other traffic on the same network interface increases the NFS latency.
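
To see the latency effect directly, one could time a burst of hard links on the NFS mount, since each ln is a synchronous round trip to the NFS server (a sketch; the file names are hypothetical):

# time 1000 hard-link operations against one source file
touch /backup/src
time for i in $(seq 1 1000); do ln /backup/src /backup/link$i; done
rm -f /backup/src /backup/link*

At, say, 2 ms per round trip, that is already ~2 seconds for 1000 files, regardless of the available bandwidth.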

This is not totally true. For the NIC, the direction of the traffic doesn't matter: a 1 Gbit/s NIC is full duplex, so data in and data out can travel simultaneously. The bottleneck depends on the CPU and the HDD, as long as the network latency is low.
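
A quick way to check that latency, using the addresses from the iperf tests above:

# 100 pings, summary only; the avg round-trip time is the interesting number
ping -c 100 -q 10.0.10.11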

I will replace the antenna, including the RP-SMA cables, on this wireless device, increase the vCPUs of the backup server, and test again.

Sorry, it is true: 1 Gb/s is not the be-all-end-all super speed. That is why storage vendors support 10 GbE and Fibre Channel. It is not about direction; it is about contention. This is why some commercial backup software vendors will not support this kind of configuration. Put in a support call to Unitrends or Symantec and complain about performance, and they will say "We don't support this"; believe me, I have done just that.

It is the same thing when I see a VMware setup running 20 VMs through a single NIC, with people wondering why their performance is slow.

I replaced the antenna, including the RP-SMA cables, and indeed the cables weren't in good shape. I now get between 40 Mbit/s and 70 Mbit/s. Unfortunately, I still have the speed drop: the backup starts at 30-40 Mbit/s and drops to ~100 Kbit/s. In general, the speed stays between 100 Kbit/s and 1.5 Mbit/s.

Just for the record, I really like UrBackup, and the code seems clean. I couldn't find anything related to this in the code yet, and since it works perfectly for the majority of people using it, I'll keep trying to find out why it isn't working for me.

Thank you very much for the great job.

Here is the iftop output for a backup job running with rsnapshot:

 Display paused               19.1Mb                        38.1Mb                        57.2Mb                        76.3Mb                  95.4Mb
└─────────────────────────────┴─────────────────────────────┴─────────────────────────────┴─────────────────────────────┴─────────────────────────────
debian.local.domain                                           => backup.local.domain                                           34.9Mb  29.5Mb  28.6Mb
                                                              <=                                                                345Kb   292Kb   295Kb

There is only one NIC on the backup server, and the backup directory is the NFS mount, yet the backup speed reaches up to 60 Mbit/s at times.
And by the way, I am not saying the 1 Gbit NIC is the ne plus ultra, but for home use it is more than enough.

Thanks for your effort.