Cache recommendations for server

Hi Martin,

We have 6 file servers with roughly 2 TB apiece and are looking for some performance recommendations to get the most out of UrBackup. The UrBackup server is a Windows 2008 R2 VM on a fast SAS array with 4 cores and 32 GB of RAM dedicated to it, and it is backing up to an SMB share on a fast NAS over Gigabit Ethernet. There are roughly 8 million files total and we are just doing file-level backups. We are using the 1.4 beta for the symlinks feature. Which settings, as far as database type and cache size, should we be using? Is it better to limit the number of simultaneous backups? As of now I am using SQLite, but maybe LMDB would be better?

Thanks

First, I would find out whether file deduplication is the bottleneck. This is usually the case if a lot of files are queued during file backups, which you can see on the progress screen.

By increasing the interval of full file backups you can take load off the database. UrBackup does not actually need full file backups; they are there just for double checking in case the UrBackup client misses a file change (download and recheck just in case).

Then you can increase the database cache size in the advanced settings tab, since you have so much RAM.

If you have further performance problems in this area, enabling the file cache might help. LMDB uses the operating system's file cache, so I'd use it if the dataset fits into RAM, which is probably the case.
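
If you want a rough sanity check of whether the dataset fits into RAM, you can sum up the sizes of the server's database files and compare that with the installed memory. Here is a minimal sketch; the install path is only an assumed default location, so adjust it to wherever your UrBackup server actually keeps its database files.

```python
# Rough check: do the server database files fit into the 32 GB of RAM?
# The directory below is an assumed default install path, not confirmed for this setup.
import os

db_dir = r"C:\Program Files\UrBackupServer\urbackup"  # hypothetical database directory
ram_gb = 32

db_gb = sum(
    os.path.getsize(os.path.join(db_dir, name))
    for name in os.listdir(db_dir)
    if name.endswith(".db")
) / 1024**3

print(f"Database files: {db_gb:.1f} GB, RAM: {ram_gb} GB")
# If the total is comfortably below the installed RAM, the OS file cache can hold it.
```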

I’d set the number of simultaneous backups such that the backups finish as fast as possible. You have probably set a per-client rate limit, so you can just divide 1 GBit by it to get that number (provided the storage and database keep up); see the small worked example after this paragraph.
The client creates a snapshot while backing up, which causes an extra write for each write to disk on the client PC. You’d want the time this snapshot exists to be as short as possible (on the C drive there may be system restore points anyway, so this argument would be moot in that case).
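
As a worked example of that division (the per-client limit below is only a placeholder value, not one taken from your settings):

```python
# Sketch: divide the link speed by the per-client rate limit to get the
# number of simultaneous backups that would roughly saturate the link.
link_mbit = 1000              # Gigabit Ethernet link to the NAS
per_client_limit_mbit = 200   # hypothetical per-client rate limit

simultaneous_backups = link_mbit // per_client_limit_mbit
print(simultaneous_backups)   # -> 5, if the storage and database keep up
```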

Also: Be aware of this issue here: