I have been using UrBackup for a few weeks now and it looks pretty good so far.
Here are two ideas that would further improve the user experience. I could not find them in the documentation, so feel free to correct me if they already exist.

Allow metrics to be scraped from an HTTP page. This should be relatively easy to implement. A growing number of organisations, mine included, are starting to use Prometheus. One useful metric besides used volume would be 'time since last backup'.
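As a sketch of what such a page could expose: Prometheus only needs plain text in its exposition format, so a first version would not even need a client library. The metric names and values below are hypothetical, purely to illustrate the idea:

```python
import time

def render_metrics(last_backup_ts, used_bytes, now=None):
    """Render hypothetical UrBackup metrics in Prometheus text exposition format."""
    now = time.time() if now is None else now
    lines = [
        "# HELP urbackup_seconds_since_last_backup Time since the last successful backup.",
        "# TYPE urbackup_seconds_since_last_backup gauge",
        "urbackup_seconds_since_last_backup %d" % (now - last_backup_ts),
        "# HELP urbackup_used_bytes Storage used by backups.",
        "# TYPE urbackup_used_bytes gauge",
        "urbackup_used_bytes %d" % used_bytes,
    ]
    return "\n".join(lines) + "\n"

# Serving this string at a /metrics URL is all Prometheus needs to scrape it.
```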

Daily or weekly reports on failed/successful backups, so you can see everything at a glance, would be nice.
I am currently using the mail feature, but it feels outdated and sends me a message for every single backup.

I’ll see if I can free up some time to help with this, although I cannot make any promises.

Would love to see both of these implemented. I wonder if the same sort of information is written to the log on the server side. If so, I was also thinking that a Splunk query would help surface this information.


Currently you have to use a separate system like Nagios, yes. You can get the information by directly accessing the database (though you should then use 2.0.35 because of a bug) or by using e.g. the Python wrapper uroni/urbackup-server-python-web-api-wrapper on GitHub, which can access and control an UrBackup server.

You can see in the example how to get the time since last backup. I think it would be very easy to push that metric to something like Prometheus.
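A rough sketch of that approach, under the assumption that the wrapper's `urbackup_server` class and its `get_status()` call return per-client status entries carrying a `lastbackup` unix timestamp, as in the repository's example. The host, credentials, and field names here are assumptions, not a tested integration:

```python
import time

def seconds_since_last_backup(clients, now=None):
    """Map client name -> seconds since its last backup.

    `clients` is assumed to be the list returned by get_status(); each
    entry is assumed to carry a 'name' and a 'lastbackup' unix timestamp
    (0 meaning no backup yet, reported here as None)."""
    now = time.time() if now is None else now
    return {c["name"]: (now - c["lastbackup"]) if c["lastbackup"] else None
            for c in clients}

# Pulling live data from a server (untested sketch, placeholder credentials):
# import urbackup_api
# server = urbackup_api.urbackup_server("http://127.0.0.1:55414/x", "admin", "secret")
# ages = seconds_since_last_backup(server.get_status())
```

From there, pushing each age as a gauge to Prometheus (e.g. via a pushgateway or a small exporter) is a few more lines.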

Alerts and reports are the next big item on the roadmap (targeted for 2.2), though connecting it with Nagios might be better anyway.

Thank you for the feedback.

A first implementation should indeed be pretty simple, based on the example uroni provided.
One additional remark: if we provide a metrics page that Prometheus can scrape, Nagios can scrape the same page too, with just a curl and a grep, without installing additional Python modules that have to be kept up to date on each monitoring server. Just noting that it can benefit both.
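To illustrate the Nagios side, a minimal check could consume such a page with nothing but the standard library. The URL, metric name, and thresholds below are placeholders:

```python
import urllib.request

def check_metric(text, metric, warn, crit):
    """Parse Prometheus exposition text; return a Nagios exit code for one metric."""
    for line in text.splitlines():
        if line.startswith(metric + " "):
            value = float(line.split()[1])
            if value >= crit:
                return 2  # CRITICAL
            if value >= warn:
                return 1  # WARNING
            return 0      # OK
    return 3              # UNKNOWN: metric not found on the page

# Live usage (untested sketch; placeholder URL and thresholds of 1 and 2 days):
# body = urllib.request.urlopen("http://backupserver:9123/metrics").read().decode()
# raise SystemExit(check_metric(body, "urbackup_seconds_since_last_backup",
#                               warn=86400, crit=172800))
```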