URBackup HELP - Need Clarification on Settings or Solution to Linux Problem

Hi, I am having a few issues with URBackup. We have Linux clients whose backups never finish: they sit at 100% complete until RAM is full and Linux kills URBackup.
I am looking for someone who can clearly explain all the different options and settings in URBackup in language my boss and I can understand, so we can tweak the settings until we get this thing working.

Is there anyone willing to help out? (my boss is willing to pay on an hourly basis)

Hello

Do you have more details, like logs and versions?
After consuming how much memory do you get the memory problem?
Is it a server or a client crash? Are the clients and server Windows or Linux?
Are you doing internet or LAN backups?
What's the volume of data to back up?
What's the backup destination filesystem: NTFS, ZFS, ext4?

If it's a server crash, in the advanced settings:
How large is the batch size set?
Could it be that /tmp is a smallish tmpfs and that you use the “Temporary files as file backup buffer” option?
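To check, you can run for example:

    findmnt -T /tmp    # if the FSTYPE column says tmpfs, /tmp lives in RAM
    df -h /tmp         # size and current usage of whatever backs /tmp

If /tmp is a tmpfs, the temporary backup files compete directly with memory.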

As for the settings, there are a lot of them, and the manual explains them at length.
Ask here, but you'd need to be more specific about what you want to achieve if you need help with something.

The server version is 2.1.19.
The client version is 2.1.16, I think; not 100% sure.
The server has 32 GB of RAM, and 99.9% of it is consumed before Linux kills URBackup.
There are no URBackup logs for this crash.
The Linux machine running the URBackup server continues to run after URBackup is killed.
It is an internet backup.
It is not doing image backups, only file backups of a different Linux computer; /usr/pub is the directory being backed up.
I think it is ext4 on both machines.

I believe “Temporary files as file backup buffer” is enabled.
Right now the server is offline, but when it comes back, I’ll get the rest of the settings.

As I understand it, it's a URBackup server crash.
Is it actually URBackup that consumes the whole memory, or something else?
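One way to check: the kernel logs OOM kills, so something like

    dmesg | grep -i -E 'out of memory|killed process'

should show whether urbackupsrv was the process the kernel killed, or whether something else was eating the memory.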
If you have “Temporary files as file backup buffer” enabled, chances are that's the issue, if /tmp is set up as a tmpfs and runs out of space/memory.
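You can also watch /tmp while a backup runs:

    watch -n 10 df -h /tmp

If usage climbs steadily toward 100% as the backup progresses, that would confirm it.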

Thank you. I will check into that. I really appreciate the help.

Would you like to be paid to help us troubleshoot our URBackup Issues?

Hello

You should ask @uroni first; maybe buy him some Windows CBT licences.

I am employed, so I can't just take money from another company. But if uroni can't, maybe one day I can look at it for free through a TeamViewer session when I get home, around 19:00-20:00 France/Paris time.

Thank you. I will try uroni first and let you know. I really appreciate the help.

We have a URBackup server running on Ubuntu 16.04.3 LTS. The server version currently installed and running is 2.1.19. We set up a RAID 5 on an external storage bay connected to the server and mounted it as /var/BACKUP.

The URBackup directory to back up to is /var/BACKUP/urbackup.

The first issue is:
We’ve tried this twice now and had the same problem both times.
We start a full file backup on a Linux (Debian GNU/Linux 7.7 (wheezy); Kernel 3.2.0-4-686-pae i686) Client (Version 2.1.16). Each time, the backup gets to 100% and never finishes. The estimated time to finish is 292487235 years 4 months 2 weeks 1 day 8 hours 10 minutes. The live log says:
08/22/17 10:46 DEBUG Loading “urbackup/FILE_METADATA|LXBVZkUL5N5uKtsRzHv1|2148”. Loaded 31.8682 KB at 0 Bit/s
08/22/17 10:47 DEBUG Loading “urbackup/FILE_METADATA|LXBVZkUL5N5uKtsRzHv1|2148”. Loaded 31.8691 KB at 0 Bit/s
08/22/17 10:48 DEBUG Loading “urbackup/FILE_METADATA|LXBVZkUL5N5uKtsRzHv1|2148”. Loaded 31.8701 KB at 0 Bit/s
08/22/17 10:49 DEBUG Loading “urbackup/FILE_METADATA|LXBVZkUL5N5uKtsRzHv1|2148”. Loaded 31.8711 KB at 0 Bit/s
08/22/17 10:50 DEBUG Loading “urbackup/FILE_METADATA|LXBVZkUL5N5uKtsRzHv1|2148”. Loaded 31.8721 KB at 0 Bit/s
08/22/17 10:51 DEBUG Loading “urbackup/FILE_METADATA|LXBVZkUL5N5uKtsRzHv1|2148”. Loaded 31.873 KB at 0 Bit/s
08/22/17 10:52 DEBUG Loading “urbackup/FILE_METADATA|LXBVZkUL5N5uKtsRzHv1|2148”. Loaded 31.874 KB at 0 Bit/s
08/22/17 10:53 DEBUG Loading “urbackup/FILE_METADATA|LXBVZkUL5N5uKtsRzHv1|2148”. Loaded 31.875 KB at 0 Bit/s
08/22/17 10:54 DEBUG Loading “urbackup/FILE_METADATA|LXBVZkUL5N5uKtsRzHv1|2148”. Loaded 31.876 KB at 0 Bit/s
08/22/17 10:55 DEBUG Loading “urbackup/FILE_METADATA|LXBVZkUL5N5uKtsRzHv1|2148”. Loaded 31.877 KB at 0 Bit/s
08/22/17 10:56 DEBUG Loading “urbackup/FILE_METADATA|LXBVZkUL5N5uKtsRzHv1|2148”. Loaded 31.8779 KB at 0 Bit/s
08/22/17 10:57 DEBUG Loading “urbackup/FILE_METADATA|LXBVZkUL5N5uKtsRzHv1|2148”. Loaded 31.8789 KB at 0 Bit/s
08/22/17 10:58 DEBUG Loading “urbackup/FILE_METADATA|LXBVZkUL5N5uKtsRzHv1|2148”. Loaded 31.8799 KB at 0 Bit/s
08/22/17 10:59 DEBUG Loading “urbackup/FILE_METADATA|LXBVZkUL5N5uKtsRzHv1|2148”. Loaded 31.8809 KB at 0 Bit/s
08/22/17 11:00 DEBUG Loading “urbackup/FILE_METADATA|LXBVZkUL5N5uKtsRzHv1|2148”. Loaded 31.8818 KB at 0 Bit/s
08/22/17 11:01 DEBUG Loading “urbackup/FILE_METADATA|LXBVZkUL5N5uKtsRzHv1|2148”. Loaded 31.8828 KB at 0 Bit/s
08/22/17 11:02 DEBUG Loading “urbackup/FILE_METADATA|LXBVZkUL5N5uKtsRzHv1|2148”. Loaded 31.8838 KB at 0 Bit/s
08/22/17 11:03 DEBUG Loading “urbackup/FILE_METADATA|LXBVZkUL5N5uKtsRzHv1|2148”. Loaded 31.8848 KB at 0 Bit/s
08/22/17 11:04 DEBUG Loading “urbackup/FILE_METADATA|LXBVZkUL5N5uKtsRzHv1|2148”. Loaded 31.8857 KB at 0 Bit/s
08/22/17 11:06 DEBUG Loading “urbackup/FILE_METADATA|LXBVZkUL5N5uKtsRzHv1|2148”. Loaded 31.8877 KB at 0 Bit/s

The error from the failed backup is as follows:
Error getting complete file “LXBVZkUL5N5uKtsRzHv1|pub/public/computerplace/PC - Windows Easy Transfer - Items from old computer.MIG” from LINUX CLIENT. Errorcode: TIMEOUT (2)

We tried deleting all the backups and starting over from the beginning, but it did the same thing.

The second issue is:

We start a full file backup on a Linux (Debian GNU/Linux 7.6 (wheezy); Kernel 3.2.0-4-686-pae i686) Client (Version 2.1.16) and it never finishes indexing. It will index for days, but never actually do anything.

As far as I am aware all the computers in question are ext4, but the clients may be ext3.

Any help with this would be greatly appreciated.

I wonder if the never-ending indexing is due to symbolic links…?
They could be causing the indexing to get stuck in an endless loop…
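One quick check: GNU find can follow symlinks and reports loops on stderr. Substitute whatever directory that client backs up for /usr/pub:

    find -L /usr/pub -printf '' 2>&1 | grep -i loop

Any “File system loop detected” warnings name the directories involved.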

Yes, maybe. The size is almost the same, but with a slight increase every time.
Maybe it's recursing over a log file that keeps growing; hard to tell.
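If it is a growing file, files modified in the last few minutes are a good lead, for example:

    find /usr/pub -type f -mmin -5 -exec ls -lh {} + 2>/dev/null

Run it twice a few minutes apart; whatever keeps changing size is a candidate to exclude.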

A few suggestions

FOR ISSUE TWO:
Alright. I tried /symlinks_optional and added *.log, *.tmp, and *.temp to the excluded files, and the client was still indexing for over 12 hours. I added ,share_hashes this morning; I will see if that changes anything.
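In case I've mangled the syntax: going by the manual's directory-flag format, the backup-path entry now looks something like

    /usr/pub|pub/symlinks_optional,share_hashes

with the path and share name here as illustrative placeholders.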
I will look into the virtual client backup.

FOR ISSUE ONE:
For the backup that gets to 100% and never finishes, are there any suggestions?

FOR ISSUE ONE:
What does ‘files in queue’ on the activities page say? Zero?

‘Files in queue’ on the activities page says zero, yes.

Is there a list of files that should be excluded for Linux as well?
I’ve googled it and I can’t find anything…

issue 1
“pub/public/computerplace/PC - Windows Easy Transfer - Items from old computer.MIG”
Is it always that file that's causing the problem? Can you try to exclude it? What if you try to do “ls -l” on it from the URBackup server?
Maybe it's very large, or maybe it's on an NFS share that's not working.
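For example, on the machine that holds the file (assuming the “pub” share maps to /usr/pub on the client; the quotes matter because of the spaces):

    ls -lh "/usr/pub/public/computerplace/PC - Windows Easy Transfer - Items from old computer.MIG"
    stat "/usr/pub/public/computerplace/PC - Windows Easy Transfer - Items from old computer.MIG"

If either command hangs, that points to a dead network mount; if the size is huge, a transfer timeout would also make sense.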

issue 2 and 1
Well, there are tons of files that can be excluded.
On Linux there's no need to back up things like /var/log, /var/tmp, /proc, /run, /dev, /tmp.
Typically you back up only /etc, /home, and additional specific folders.
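If I remember the format right, the “Excluded files (with wildcards)” setting takes semicolon-separated glob patterns, so a minimal Linux exclude list might look something like:

    /proc/*;/sys/*;/dev/*;/run/*;/tmp/*;/var/tmp/*;/var/log/*

Treat that as a sketch to adapt, not a definitive list.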

If you're not a Linux admin, I'll try to explain: for example, /dev/random will let you read an infinite amount of random numbers. And /dev actually contains only special files that are re-created when you boot.
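A quick demonstration, using /dev/urandom (the non-blocking sibling of /dev/random):

    ls -l /dev/urandom                    # the leading “c” marks a character device, not a regular file
    timeout 1 cat /dev/urandom | wc -c    # counts how many random bytes one second of reading yields

A backup tool that blindly walked into /dev would never reach the end of such a file.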

My guess is that you have one specific file that's causing the issue.
That's where a virtual client can help: by not backing up everything at once, it lets you isolate the bad file.
Another option is to back up different folders every day until you find the problematic one.

issue 2
Usually the beta is quite stable and has no specific issues. So one quick thing to try is to upgrade the client to the beta and see if the problem is still there; maybe you hit a bug that was fixed in the beta.