Server 2.5.22/Client 2.5.16

New settings handling

The major change in server 2.5.x and client 2.5.x is a complete redesign of how settings are handled. The goal is to make settings handling more explicit and consistent.

  • Each setting has a button next to it that switches between inheriting the setting from the group, using the value from the configuration page, and using the value from the client; for some settings, client and server settings are merged together
  • File backup paths are now a normal setting like any other. There is no longer a separation between backup paths configured on the client and default directories to back up

This is a critical area of UrBackup and as such needs a lot of testing. Testing should also focus on whether the 2.4.x client still more or less works with the 2.5.x server (and, especially, doesn’t lose its connection settings for Internet servers).

Connect via web sockets

With 2.5.x, clients can connect to the server via web sockets. Setup should be easy: instead of an “Internet server name/IP” of e.g. “” use “ws://”. The real advantage comes into play if a real web server with SSL is set up in front. Either proxy the whole page, or serve the files and only proxy “/x” via FastCGI (see the manual); additionally, proxy “/socket” via web socket to “ws://”. Then set up the clients to connect with “Internet server name” “wss://”. Basic authentication, a different name and a different port should also work, e.g. “wss://”.
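For nginx, the reverse-proxy setup described above could look roughly like the sketch below. This is a hedged example, not a tested configuration: the server name, certificate paths, and the backend address (55414 is UrBackup’s default web interface port) are assumptions to adapt to your environment; only the “/socket” path comes from the announcement itself.

```nginx
server {
    listen 443 ssl;
    server_name backup.example.com;           # hypothetical host name

    ssl_certificate     /etc/ssl/certs/backup.example.com.pem;
    ssl_certificate_key /etc/ssl/private/backup.example.com.key;

    # Proxy the whole UrBackup web interface
    location / {
        proxy_pass http://127.0.0.1:55414;
    }

    # Upgrade /socket connections to web sockets and pass them through
    location /socket {
        proxy_pass http://127.0.0.1:55414/socket;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 1d;   # keep long-lived client connections open
    }
}
```

With such a proxy in place, the clients would then be pointed at wss:// plus the proxy’s host name instead of the plain ws:// address.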

Comeback of the macOS client

A test version of the macOS client is here. Please add your feedback!

Linux image backups

2.5.x can make image backups of Linux clients (via dattobd or Linux device mapper snapshots). The use case is e.g. backing up a DigitalOcean instance. The goal is explicitly not to create image backups of all possible configurations, e.g. LVM, mdraid, btrfs, …
It can “live restore” a root volume, i.e., overwrite the root volume of a currently running machine. So one can, for example, make an image backup of a DigitalOcean droplet. To restore, create a new instance (tested with Debian 10/buster), then run the restore script, which overwrites the current instance with the backed-up one, using a trick that puts the currently running system files into (compressed) memory.
Dattobd and the Linux device mapper support Changed Block Tracking (CBT), so image (and file) backups are done with CBT.

The Linux device mapper snapshot scripts are currently only tested on Debian 10 and with the root volume. They need to hook into the boot sequence (initramfs) to back up the root volume; for other volumes they would probably need to hook into udev. Help getting them running on other distributions or with non-root volumes would be welcome!

Linux image backup howto


  • To only back up used space, install partclone (e.g. apt install partclone). This is currently supported for ext* and xfs. If partclone is not present, unused space is backed up as well.
  • Install the (binary) client and select either dattobd or the Linux device mapper for snapshotting
  • If you are using dattobd: install it from the datto/dattobd repository on GitHub
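The prerequisites above can be checked with a small shell snippet. The binary names are assumptions about typical installs (partclone.ext4 from partclone, dbdctl from dattobd, dmsetup from the device mapper tools); see the respective projects for authoritative install instructions.

```shell
# Quick check of which Linux image backup prerequisites are present.
for tool in partclone.ext4 dbdctl dmsetup; do
    if command -v "$tool" >/dev/null 2>&1; then
        echo "found:   $tool"
    else
        echo "missing: $tool"
    fi
done
```

Each line reports one tool as found or missing; only the snapshot mechanism you actually selected (dattobd or device mapper) needs to be present.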


  • On the server browse to the image backup to restore
  • Click on “Restore Linux image”
  • Copy & paste the command into a new instance and follow the instructions (the command contains a token with a timeout, so after a while it stops working)

Testing (image backup + file backup with CBT)

To test whether CBT works correctly with file backups, please enable “Debugging: Verify file backups using client side hashes” (for Internet clients with client-side hashing) or “Debugging: End-to-end verification of all file backups” in the advanced settings.

To test image backups, enable md5sum creation on the client via touch /usr/local/var/create_md5sums_imagebackup. The client will then create a file (with date+time in the name) at /usr/local/var/md5sums-* for each image backup, listing all files in the image plus their md5sum. Copy that file to the server, mount the image backup, then verify the md5sums with cd /backup/client/image_mnt; md5sum --quiet -c /path/to/md5sums-file.txt. The only files with checksum errors should be the .datto-*/.overlay-* files in the root directory and swap files (those are automatically excluded from the backup).
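The md5sum -c verification step can be tried out in isolation with a self-contained toy example. The file names here are made up; on a real server you would run the final command inside the mounted image directory (e.g. /backup/client/image_mnt) against the md5sums file copied from the client.

```shell
# Toy illustration of the md5sum verification workflow.
mkdir -p /tmp/md5demo
cd /tmp/md5demo
echo "hello" > file1.txt
echo "world" > file2.txt
# Create a checksum list, as the client does for each image backup
md5sum file1.txt file2.txt > md5sums-file.txt
# --quiet prints only the files whose checksum does NOT match
md5sum --quiet -c md5sums-file.txt && echo "all checksums OK"
```

If a file were modified between the two steps, md5sum -c would print it and exit non-zero, which is exactly how checksum mismatches show up when verifying a mounted image backup.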

Thanks for any help with testing and feedback!

Upgrade process

As always: replace the executables (via the installers), and the database of the server/client will be updated on first run.

Downgrade process (server)

Stop the UrBackup server, restore C:\Program Files\UrBackupServer\urbackup or /var/urbackup from a backup taken before the upgrade, and then install the previous version over the beta release.

On testing

Please see “Having problems with UrBackup? Please read before posting” for what kind of information to add when reporting issues. Reports such as “X does not work” are useless. In general, at the very least, think about what kind of information one would need to reproduce the problem in a different environment. In the best case, you debug the problem yourself and then post the fix.

Changes with client 2.5.16

  • Disable accepting new server via clicking on balloon
  • Fix updating settings via post install wizard
  • Base client side paths completely off settings file

Changes with client 2.5.15

  • Start filesrv before accepting commands
  • Don’t update settings on upgrade if no paths are configured and setting file doesn’t exist yet
  • Fix waiting for connection to server when restoring image
  • Cleanup snapshot later if it cannot be deleted because it is in use
  • Build glibc client with older glibc

Changes with server 2.5.22

  • Fix mail retries
  • Fix raw image backup trim issue (Do not trim if trim range is narrowed down to zero)
  • Fix local encryption issues that caused local image backups to fail: Continue receiving as long as pipe is readable if using compressed/encrypted pipe
  • Don’t require correct client uid for internet/active clients
  • Fix internet server url validation if empty
  • Fix ZFS incremental file backups

Changes with server 2.5.21

  • Make vhdx support compile with older compilers
  • Build Debian package on older Debian so it works on more versions
  • Handle merge settings which cannot be configured on client
  • Select correct use value for old setting config

Changes with server 2.5.20 beta

  • Add vhdx support
  • Use GPT partitioning for larger volume images



Linux image backup doesn’t work…

Have tried on several Ubuntu 18.04 boxes.

dattobd and partclone are installed.

Am using mdadm (a simple mirror of the boot drive) and trying to image the boot drive.
Wondering if mdadm is the problem??

For the volume to image I have tried C, sda, md0, etc…

Error log:

Info 08/23/21 12:24 Starting unscheduled full image backup of volume “C:”…
Errors 08/23/21 12:24 Could not read MBR. Error message: Error opening Device . No such file or directory (code: 2)
Errors 08/23/21 12:24 Cannot retrieve master boot record (MBR) for the disk from the client.
Info 08/23/21 12:24 Time taken for backing up client backup3: 2s
Errors 08/23/21 12:24 Backup failed

@durcom Just saw that you already described what I experienced on software raid. Had opened a topic for that today: Linux image backup on software raid

I have nothing against widening that scope, given enough help. The problem is also restoring it once it is backed up successfully.

Yes, agreed and understood. It is perfectly suited for “stock” cloud instances, e.g. on Hetzner, too. The problem only affects dedicated servers without hardware RAID, so I won’t complain if it does not get implemented :+1:

Hi all,
when running remove_unknown on my Windows-based server, the 8 GB of installed memory fills up rapidly, paging starts, and after about 10-15 minutes the whole system becomes unresponsive: no more RDP possible, all services halt. I am not aware of the details behind the process, but I’m happy to help. The log files show nothing special; it comes to a stop at “deleting incomplete file backup”

Could you look at RAMMap while this is happening? Also perhaps at the Task Manager memory usage (+ handles)?

Then perhaps make a memory dump of the process when its memory usage is still reasonable and send me that? Perhaps I can find the issue this way.

Sorry, I’m used to tracking down memory leaks on Linux, and it has been a while since I have done that on Windows… (this also looks useful: Exercise 2 - Track User Mode Process Allocations | Microsoft Docs)

I have taken a few RAMMap dumps throughout the process, and it’s fairly apparent that the memory (over-)consumption is in metafile usage. Side note: it’s a ReFS volume with offline (Windows built-in) deduplication.
Let me try to undeduplicate the urbackup folder and see if this changes the behaviour.
Here’s the RAMMap archive…:

will keep you posted on any updates.

Can this be installed in the Docker container of my current UrBackup instance to essentially update to this newer build? Or have paths changed that require extra labor?

I’d like to try out the websocket support for Internet clients and try to get it working behind my NGINX reverse proxy (if it’s even doable).

When attempting to mount an image backup via the web interface it fails with error:

Mounting image failed. Please see server log file for details.

However, the log in /var/log/urbackup.log contains nothing related to this, even when set to debug level. Moreover, the error in the web interface is incorrect, as it IS succeeding at mounting the image:

ps aux | grep -i urbackup
root      475622  0.2  0.1 2105736 50308 ?       Sl   10:05   0:00 /usr/bin/urbackupsrv run --config /etc/default/urbackupsrv --daemon --pidfile /var/run/
root      476641  0.0  0.0  65804  5596 ?        Ss   10:05   0:00 guestmount -r -n --format=raw -a /dev/loop0 -o kernel_cache -o uid=114 -o gid=121 -o allow_root -m /dev/sda /ssdraid/backups/urbackup/DESKTOP-BES576G/210906-0947_Image_C_mnt0

If you then refresh the UrBackup web page, it shows the correctly mounted image file, and I’m able to browse and interact with the backup just fine.

This is on a Proxmox 7 system (Debian 11) with ZFS set up using the recommended settings in the UrBackup documentation for copy-on-write raw image backups with ZFS.

File backups are failing when “Beta: Number of parallel file*” is set to anything more than 1. I noticed a significant speed increase when testing with 2-10 parallel threads; however, anything more than 1 fails with an error. This is on a 10 GbE network with all-SSD storage on the client (Windows 10) and host (Debian 11, Proxmox 7), with ZFS configured for copy-on-write file backups. I’m trying to achieve the best performance possible; the default settings only average ~500 Mbit/s on a 10 Gbit network, even though the SSD storage on client and host is capable of significantly more.

|09/06/21 14:09  |DEBUG  |Saved metadata of 5867 files and directories. 95% done...|
|09/06/21 14:09  |WARNING  |Not all folder metadata could be applied. Metadata was inconsistent.|
|09/06/21 14:09  |INFO  |Writing new file list...|
|09/06/21 14:09  |DEBUG  |Some metadata was missing|
|09/06/21 14:09  |ERROR  |Fatal error during backup. Backup not completed|
|09/06/21 14:09  |INFO  |Transferred 6.57034 GB - Average speed: 399.05 MBit/s|
|09/06/21 14:09  |INFO  |(Before compression: 7.19828 GB ratio: 1.09557)|
|09/06/21 14:09  |DEBUG  |Script does not exist urbackup/post_incr_filebackup|
|09/06/21 14:09  |INFO  |Time taken for backing up client DESKTOP-BES576G: 2m 50s|
|09/06/21 14:09  |ERROR  |Backup failed|

Unfortunately, I couldn’t find any other relevant information in the logs. I’m more than happy to provide more information, but I’ll need a bit of guidance on locating it. Thanks!

Update: it might have been an “underlying”/OS problem, but I cannot confirm. I ran a cleanup of backups and the DB after marking a good amount of older backups for deletion, re-ran remove_unknown, and “tada”. Maybe that provides an additional hint.
I’ll monitor the behaviour over the next few months/versions.

A fresh Bullseye (Debian) install with 2.5.22 ends up with

ERROR: No permission to access “/urbackup/urbackup_tmp_files”
WARNING: Shutthing down

(PS: in the config dialog I changed the backup location to /urbackup)