'Files in Queue' has very large numbers that decrease very slowly

I have 3 separate UrBackup servers running Ubuntu 16.04.5. 2 of the 3 servers back up fast but build up a large number of files in the queue, and then it takes many hours for the queue to empty.

The servers are attached to a datastore, and each server has a 2.5 TB virtual disk presented for the OS and backup storage.

The 1 server that runs fast never builds up more than a couple thousand files in the queue, and it empties quickly when it does have files queued.

The servers are all VMs on VMware 6.5.

Any advice on how to make the queues flush out faster would be much appreciated.

I see that on servers with a lot of small files (e.g. >500 GB mail/SVN servers).
Maybe it's worth having a look at the stored files and setting up some exclusions; see the sketch below.

I also see it on AWS EFS storage, because EFS needs a specific usage pattern (parallel access with large blocks) to get decent performance out of it.
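If it helps, here is a rough sketch of how you could look for directories dominated by small files, to decide what to exclude. The root path and size threshold are placeholders for your own environment, and the "Excluded files (with wildcards)" setting mentioned in the comment is the client-side exclusion list:

#!/usr/bin/env python3
"""Rough sketch: find directories dominated by small files,
to decide what to exclude from file backups. ROOT and SMALL
are assumptions -- adjust for your environment."""
import os
from collections import Counter

ROOT = "/srv/data"          # hypothetical path to scan
SMALL = 64 * 1024           # files under 64 KiB count as "small"

small_counts = Counter()
for dirpath, _dirs, files in os.walk(ROOT):
    for name in files:
        try:
            if os.path.getsize(os.path.join(dirpath, name)) < SMALL:
                small_counts[dirpath] += 1
        except OSError:
            pass            # skip unreadable files

# Print the 20 worst offenders; these are candidates for the
# "Excluded files (with wildcards)" client setting.
for path, count in small_counts.most_common(20):
    print(f"{count:8d}  {path}")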

Most of this still applies: http://blog.urbackup.org/177/performance-considerations-for-larger-urbackup-server-instances


Having an SSD for the Program Files/UrBackup directory and another 256GB SSD for urbackup_tmp (Settings: Server: Nondefault temporary file directory) keeps the Files in queue close to zero at my sites.
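A quick way to check whether the move is worth it on your own hardware is to time a burst of small synced writes in each candidate directory. A minimal sketch (the mount paths, file count, and file size below are just placeholders, not UrBackup defaults):

#!/usr/bin/env python3
"""Rough benchmark sketch: time creating many small synced files in a
candidate temp directory. Run it once against the HDD path and once
against the SSD path to compare."""
import os
import time
import tempfile

def small_file_bench(directory: str, count: int = 5000, size: int = 8192) -> float:
    payload = b"x" * size
    start = time.perf_counter()
    with tempfile.TemporaryDirectory(dir=directory) as tmp:
        for i in range(count):
            path = os.path.join(tmp, f"f{i:05d}")
            with open(path, "wb") as fh:
                fh.write(payload)
                fh.flush()
                os.fsync(fh.fileno())   # force the write to the device
    return time.perf_counter() - start

for candidate in ("/mnt/hdd/urbackup_tmp", "/mnt/ssd/urbackup_tmp"):  # hypothetical mounts
    print(candidate, f"{small_file_bench(candidate):.1f}s")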

It’s interesting that you suggest keeping the urbackup_tmp_files on SSD storage. I keep my database files (/var/urbackup under Linux) on SSD storage, as recommended, but I never thought about urbackup_tmp_files.

Do you see a very big difference when you store this directory on SSD storage? Does anyone else have any experience with this change?

Hello! I have the same problem.

Server: Debian 10, UrBackup server 2.4.13;
Storage: 15TB btrfs file system.

Client: Debian 10, UrBackup client 2.4.10;

I have a client whose full backup size is around 7.5 TB.
The full size of the storage is 15 TB (btrfs), with 3 TB available.

Two days ago UrBackup started creating a second full file backup. The progress is now at 70%, and the storage is almost full (652G available; normally 3 TB is available):

Filesystem                          Size  Used Avail Use% Mounted on
/dev/mapper/vgurbackup-lvurbackup1   15T   14T  652G  96% /media/BACKUP/urbackup

Files in queue - 432111

If this continues, the disk space will run out and the backup will end with an error.
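In the meantime, something like the following could poll the free space and warn before the hard out-of-space error hits. A minimal watchdog sketch (the mount point is the one from the df output above; the 200 GiB threshold is an arbitrary choice, and free-space figures on btrfs are only approximate):

#!/usr/bin/env python3
"""Minimal watchdog sketch: poll free space on the backup volume and
warn before UrBackup hits its out-of-space error."""
import shutil
import time

MOUNT = "/media/BACKUP/urbackup"   # mount point from the df output above
MIN_FREE = 200 * 1024**3           # warn below 200 GiB free (assumption)

while True:
    usage = shutil.disk_usage(MOUNT)
    print(f"free: {usage.free / 1024**3:.0f} GiB")
    if usage.free < MIN_FREE:
        print("WARNING: backup volume nearly full -- consider stopping the backup")
    time.sleep(600)                # check every 10 minutes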

I have migrated the database to SSD

I have read this topic and migrated the /dev/sdc drive, which contains the root file system with the database, from a slow HDD to a very fast SSD (a live migration; the UrBackup server is a Proxmox virtual machine).

I hoped that after moving the root filesystem with the database to the SSD, the queue would shrink and the available space would grow, but that has not happened!

4 hours later:

There is even less free space now: it dropped from 652G to 495G.

Filesystem                          Size  Used Avail Use% Mounted on
/dev/mapper/vgurbackup-lvurbackup1   15T   14T  495G  97% /media/BACKUP/urbackup

The queue is getting bigger: files in queue went from 432111 to 441618.
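The two df snapshots are enough to estimate when the volume will fill, assuming the fill rate stays constant (which is only a rough assumption):

# Rough extrapolation from the two df snapshots above (pure arithmetic):
lost_gib = 652 - 495        # GiB consumed in the 4-hour window
rate = lost_gib / 4         # ~39 GiB per hour
hours_left = 495 / rate     # ~12.6 hours until the volume is full
print(f"{rate:.0f} GiB/h, ~{hours_left:.1f} h until full")

So at this rate the volume would be full in roughly half a day.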

Please help! I don't understand why the queue keeps growing.
As I understand it, the main factor is database performance, and the database is now on an SSD.

Thank you in advance!

Migrating the database to SSD didn't help. The queue kept growing, and finally I got this error:

FATAL: Could not free space. NOT ENOUGH FREE SPACE.
...
Backup failed

I added a feature request, Limit 'Files in Queue'. It would fix the problem with:

FATAL: Could not free space. NOT ENOUGH FREE SPACE.
...
Backup failed
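To illustrate what the request asks for: the idea is ordinary producer/consumer backpressure, where the indexer blocks once the queue hits a limit instead of letting the backlog grow without bound. A minimal sketch in Python, not actual UrBackup code; the limit, paths, and counts are made up:

#!/usr/bin/env python3
"""Sketch of the requested behaviour: stop feeding new files into the
queue once it exceeds a limit, and resume when the writer catches up."""
import queue
import threading

MAX_QUEUED = 100_000                       # proposed 'Files in Queue' limit
files = queue.Queue(maxsize=MAX_QUEUED)    # put() blocks when the queue is full

def producer(paths):
    for p in paths:
        files.put(p)       # blocks instead of letting the backlog grow unbounded
    files.put(None)        # sentinel: no more work

def consumer():
    while (p := files.get()) is not None:
        pass               # here the real server would hash/store the file

paths = (f"/srv/data/file{i}" for i in range(1_000_000))  # hypothetical workload
t = threading.Thread(target=consumer)
t.start()
producer(paths)
t.join()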