Is this normal resource usage?


Hi Guys,
I’m running 2.1.20 on Ubuntu Server 16.04. UrBackup is using ~12GB out of the 16GB of RAM.

Is this normal usage? Or do I need to start thinking about upgrading?



I guess it depends on how large your dataset is; some people run UrBackup at home on a Raspberry Pi or their NAS.

Do you get that much within 24h of a service restart? When was the application started?

In the advanced tab, what did you set for "Database cache size during batch processing"?
Can you show the memory usage as you see it?

From the Windows Task Manager (detailed view: click a column title and check the memory-related columns), or better, the detail view of the Sysinternals Process Explorer.
Or on Linux: cat /proc/$(pidof urbackupsrv)/status
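If it helps, here is a minimal sketch of that Linux check. The urbackupsrv process name is assumed from a default install; the snippet falls back to this shell's own status file so it runs even where UrBackup isn't installed:

```shell
# Grab the UrBackup server PID; fall back to "self" so the sketch
# still runs on a machine without UrBackup.
pid=$(pidof urbackupsrv || echo self)
# Print only the memory-related fields from the process status file.
grep -E '^Vm(Peak|Size|HWM|RSS|Swap)' "/proc/$pid/status"
```

VmRSS (the resident set) is usually the number to compare against what top reports.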

Also, if possible, please don't restart the service right now; uroni might want you to run memleax on the process.

For info, this is what I get with a few TB of backups and the cache size set to 1GB for batch.
123456 123456 20 0 39,313g 0,012t 1,3 20,2 11303:56 S urbackupsrv

VmPeak: 42296524 kB
VmSize: 41222780 kB
VmLck: 0 kB
VmPin: 0 kB
VmHWM: 13806148 kB
VmRSS: 13305608 kB
RssAnon: 12420752 kB
RssFile: 884856 kB
RssShmem: 0 kB
VmData: 15893908 kB
VmStk: 132 kB
VmExe: 5192 kB
VmLib: 15500 kB
VmPTE: 28732 kB
VmPMD: 176 kB
VmSwap: 0 kB
HugetlbPages: 0 kB


Best post a top screenshot or something. You might be interpreting it incorrectly.


Here is a screenshot of the first part. I should note that I’m using ZFS on Ubuntu, and my storage pool is currently at about 2TB. Dedup and compression are enabled.

“Database cache size during batch processing” is at 200MB, which I believe is the default; I didn’t change it.


So the process itself really only consumes ~1.5GB of RAM.
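That reading can be double-checked straight from the status output: VmRSS is reported in kB, so a small awk sketch converts it to GB (using /proc/self here as a stand-in for the urbackupsrv PID):

```shell
# VmRSS is the resident set size in kB; divide by 1024 twice to get GB.
awk '/^VmRSS/ {printf "%.2f GB\n", $2 / 1024 / 1024}' /proc/self/status
```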


For the record, ZFS dedup uses a lot of resources and isn't really needed if you use ZFS compression,
especially since UrBackup does its own deduplication.
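If you do decide to drop dedup, here is a hedged sketch of the ZFS side. The dataset name tank/urbackup is a placeholder; note that dedup=off only affects blocks written afterwards, existing deduped blocks keep their dedup-table entries until rewritten:

```shell
# Placeholder dataset name; adjust to your pool. Requires root.
zfs set dedup=off tank/urbackup
zfs set compression=lz4 tank/urbackup    # lightweight compression instead
# Verify the properties and the achieved compression ratio.
zfs get dedup,compression,compressratio tank/urbackup
```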


Hmmmm, I hear you. But why does the manual say to enable dedup if using ZFS…

I’ll do some more research. Thanks for your reply!