Server 2.5.17 beta/Client 2.5.11 beta (Updated 3x)

New settings handling

The major change with server 2.5.x and client 2.5.x is a complete redesign of how settings are handled. The goal is to make settings handling more explicit and consistent.

  • Each setting has a button next to it that switches between inheriting the setting from the group, using the value from the configuration page, and using the value from the client; for some settings, client and server values are merged together
  • File backup paths are now a normal setting just like any other. There is no more separation between paths to back up configured on the client and default directories to back up

This is a critical area of UrBackup and as such needs a lot of testing. The focus should be on whether the 2.4.x client still more or less works with the 2.5.x server (and especially doesn’t lose its connection settings for Internet servers).


  • Implement new URL scheme for connection settings in the client settings GUI

Connect via web sockets

With 2.5.x, clients can connect to the server via web sockets. Setup should be easy: instead of an “Internet server name/IP” of e.g. “” use “ws://”. The real advantage comes into play if a real web server with SSL is set up in front. Either proxy the whole page, or serve the files and only proxy “/x” via FastCGI (see the manual); additionally, proxy “/socket” via web socket to “ws://”. Then set up the clients to connect with “Internet server name” “wss://”. Basic authentication, a different name and a different port should also work, e.g. “wss://”.
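As a sketch, the reverse-proxy variant described above could look like this in nginx. All names, ports, certificate paths and the backend address are placeholders for your environment; only the “/socket” path comes from the description above:

```nginx
# Hypothetical nginx front-end (placeholder names, ports and paths).
server {
    listen 443 ssl;
    server_name backup.example.com;            # placeholder

    ssl_certificate     /etc/ssl/backup.pem;   # placeholder
    ssl_certificate_key /etc/ssl/backup.key;   # placeholder

    # Proxy the whole web interface to the UrBackup server.
    location / {
        proxy_pass http://127.0.0.1:55414;     # placeholder backend
    }

    # Proxy the client connections as web sockets.
    location /socket {
        proxy_pass http://127.0.0.1:55414;     # placeholder backend
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 1h;                 # keep long-lived client connections open
    }
}
```

Clients would then use e.g. “wss://backup.example.com/socket” (with the placeholder name replaced) as the Internet server name.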

Comeback of the macOS client

A test version of the macOS client is here. Please add your feedback!

Linux image backups

2.5.x can create image backups of Linux clients (via dattobd or Linux device mapper snapshots). The use case is e.g. backing up a DigitalOcean instance. It is explicitly not meant to create image backups of all possible configurations, e.g. LVM, mdraid, btrfs, …
It can “live restore” a root volume, i.e., overwrite the root volume of a currently running machine, so one can image backup e.g. a DigitalOcean droplet. To restore, create a new instance (tested with Debian 10/buster), then run the restore script, which overwrites the current instance with the backed-up instance, using a trick that puts the currently running system files into (compressed) memory.
Dattobd and the Linux device mapper support Changed Block Tracking (CBT), so image (and file) backups are done with CBT.

The Linux device mapper snapshot scripts are currently only tested on Debian 10 and with the root volume. The mechanism needs to hook into the boot sequence (initramfs) to back up the root volume; for other volumes it probably needs to hook into udev. Help getting it running on other distributions/with non-root volumes would be welcome!

Linux image backup howto


  • To only back up used space, install partclone (e.g. apt install partclone). This is currently supported for ext* and xfs. If partclone is not present, unused space will be backed up as well.
  • Install the (binary) client and select either dattobd or the Linux device mapper for snapshotting
  • If you are using dattobd: Install dattobd


  • On the server browse to the image backup to restore
  • Click on “Restore Linux image”
  • Copy & paste the command into a new instance and follow the instructions (the command contains a token with a timeout, so after a while it will no longer work)

Testing (image backup + file backup with CBT)

To test if CBT works correctly with file backups, please enable “Debugging: Verify file backups using client side hashes” (for Internet clients with client-side hashing) or “Debugging: End-to-end verification of all file backups” in the advanced settings.

To test image backups, enable md5sums creation on the client via touch /usr/local/var/create_md5sums_imagebackup. The client will then create a file (with date+time in the name) at /usr/local/var/md5sums-* for each image backup, listing all files in the image plus their md5sums. Copy that file to the server, mount the image backup, then verify the md5sums with cd /backup/client/image_mnt; md5sum --quiet -c /path/to/md5sums-file.txt. The only files with checksum errors should be the .datto-*/.overlay-* files in the root directory and swap files (those get automatically excluded from backup).
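The verification flow above can be sketched end-to-end with throwaway files. Here a temp directory stands in for the mounted image backup and the file names are made up for the demo; in real use the checksum file comes from the client:

```shell
#!/bin/sh
# Demo of the md5sum verification flow (all paths/files are
# stand-ins, not the real backup mount).
set -e
mnt=$(mktemp -d)    # stand-in for /backup/client/image_mnt
sums=$(mktemp)      # stand-in for the copied md5sums-* file
echo "hello" > "$mnt/etc_hostname"
echo "world" > "$mnt/etc_hosts"

# What the client writes: relative paths plus their md5sums.
( cd "$mnt" && md5sum etc_hostname etc_hosts ) > "$sums"

# What you run on the server after mounting the image backup;
# --quiet only prints files whose checksum does not match.
cd "$mnt"
md5sum --quiet -c "$sums"
```

A non-zero exit status (or any printed file name) indicates a checksum mismatch.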

Thanks for any help with testing and feedback!

Upgrade process

As always: replace the executables (via the installers) and the database of the server/client will be updated the first time it runs.

Place the files from the update directory into C:\Program Files\UrBackupServer\urbackup or /var/urbackup to auto-update clients. Disable “Download client from update server” in the server settings to prevent the server from downloading the current version.

On Linux e.g. with this update script:
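The original script is not reproduced here; as a hypothetical sketch, such a script stages the client update files into the directory the server serves auto-updates from. The demo defaults below use temp paths so the sketch runs standalone; point the variables at the real directories in actual use:

```shell
#!/bin/sh
# Hypothetical sketch only -- not the original update script.
# Stages client update files into the server's urbackup directory
# (/var/urbackup on Linux) so connected clients auto-update.
set -e
UPDATE_SRC="${UPDATE_SRC:-$(mktemp -d)}"           # demo default
URBACKUP_DIR="${URBACKUP_DIR:-$(mktemp -d)/urbackup}"  # demo default

# In real use UPDATE_SRC would already contain the downloaded
# update files; this demo file just makes the sketch runnable.
touch "$UPDATE_SRC/UrBackupUpdate.demo"

mkdir -p "$URBACKUP_DIR"
cp "$UPDATE_SRC"/* "$URBACKUP_DIR"/
```

In real use, set URBACKUP_DIR=/var/urbackup and point UPDATE_SRC at the downloaded update directory.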

Downgrade process (server)

Stop the UrBackup server, restore C:\Program Files\UrBackupServer\urbackup or /var/urbackup from a backup before upgrade and then install the previous version over the beta release.

On testing beta/WIP versions

Please see “Having problems with UrBackup? Please read before posting” for what kind of information to include when reporting issues with the beta/WIP version. Reports such as “X does not work” are useless. In general, at the very least, think about what kind of information one would need to reproduce the problem in a different environment. Also a reminder that this is a community project which depends on your contribution and help. Testing is a way to contribute, but without detailed reports on the issues you are finding, you might as well wait for the final release. Best case, you debug the problem yourself and then post the fix.

I’ll ignore any issue reports that don’t follow minimum reporting standards.

Changes with server 2.5.17 beta

  • Fix compression fallback to zlib
  • Fix skipping small files in parallel hashing

Changes with server 2.5.16 beta

  • Fix crash bugs in new client encryption
  • Fix compilation issue with clang
  • Fix settings display
  • Remove extensive debug message

Changes with server 2.5.15 beta

  • Make amount of download, hash, image compress and client parallel hash threads configurable
  • Fix settings validation issues

Changes with server 2.5.14 beta

  • Add local encryption
  • Fix UTF decoding path parameter when accessing files/backups
  • Don’t log failed token authentication attempts into auth log

Changes with server 2.5.13 beta

  • Fix SQL issue with updating client settings
  • Correctly initialize global settings if client is in a group

Changes with client 2.5.11 beta

  • Add missing list_incr script to tar bundle
  • Improve handling of default paths to backup on the client
  • Don’t escape generated default dirs setting
  • Fix skipping small files in parallel hashing
  • Don’t quit parallel file hashing too early

Changes with client 2.5.10 beta

  • Fix compression fallback to zlib
  • Enable ZSTD in ARMv6 build
  • Fix parallel hash extra thread crash
  • Start correct number of hash threads

Changes with client 2.5.9 beta

  • Support multiple parallel hash threads for a single file backup
  • Add post backup script call on file backup failure
  • Add error info to the postindex call as parameter
  • Handle unwritten extents in Linux file CBT

Changes with client 2.5.8 beta

  • Add local encryption
  • Use “rootfs” instead of “root” as default name for “/”
  • Fix “Infos” menu entry
  • Disable wxWidgets asserts in release version

Changes with client 2.5.7 beta

  • Add new Linux device mapper based snapshot method



Saving settings works again, however the last two tab pages (in general settings and client specific settings) still will not load.

A minor typo “Periodically readd” on the General --> local/passive clients tab.

-edit- The latest client immediately crashes when a backup is started: “2020-10-15 10:31:10: ERROR: Thread exit with unhandled std::exception basic_string::_M_construct null not valid”. Downgrading to the previous version worked.

Server beta 2.5.15 will not compile on TrueNAS 12:

urbackupserver/FileBackup.cpp:57:56: error: no matching constructor for
initialization of ‘std::vector<BackupServerHash *>’
backupid(-1), hashpipe(NULL), hashpipe_prepare(NULL), bsh(NULL), bsh_pre…

Updated (server 2.5.16 beta).

Could you narrow it down please?

Might have worded it a bit badly.


So I updated, and the backups were running in a loop like previously (maybe one every 5 minutes).
Tried to re-save all the client settings and the client crashed.

2020-10-27 09:52:20: AESGCMEncryption::put ���m�9�Eܧ��
2020-10-27 09:52:20: AESGCMEncryption::put �F*�����Lp�n�
2020-10-27 09:52:20: AESGCMEncryption::put (�/�X��ho�b�˖,z�

2020-10-27 09:52:20: Started connection to SERVICE_COMMANDS
2020-10-27 09:52:20: ClientService cmd: #ggg#3START FULL BACKUP group=200&clientsubname=gentoo_backup&running_jobs=0&sha=528&with_permissions=1&with_scripts=1&with_orig_path=1&with_sequence=1&with_proper_symlinks=1&status_id=267&phash=1&async=1#token=ggg
2020-10-27 09:52:20: Async index ggg
2020-10-27 09:52:20: Removing VSS log data…
2020-10-27 09:52:20: AESGCMEncryption::put ǂ��۱Ѥ�Yuٜ5�Z/ASYNC-async_id=ggg
2020-10-27 09:52:20: ERROR: Thread exit with unhandled std::exception basic_string::_M_construct null not valid

Weird, I removed that debug message in the last client version (2.5.9)

It seems it’s still there. I tried to run it in gdb; I get a “raise.c: No such file or directory”, not sure you can still get the info you need out of it.

/usr/local/sbin/urbackupclientbackend --version
UrBackup Client Backend v2.5.9.0

gdb --args /usr/local/sbin/urbackupclientbackend -v debug
(gdb) thread apply all bt
(gdb) run
Starting program: /usr/local/sbin/urbackupclientbackend -v debug
[Thread debugging using libthread_db enabled]
Using host libthread_db library “/lib/x86_64-linux-gnu/”.
[New Thread 0x7ffff7c03700 (LWP 136243)]
[New Thread 0x7ffff7402700 (LWP 136244)]

2020-10-27 21:21:03: AESGCMEncryption::put �̵a�22A��uEg��p/ASYNC-async_id=ggg
[New Thread 0x7fffbe7fc700 (LWP 137771)]
[New Thread 0x7fffbdffb700 (LWP 137772)]
2020-10-27 21:21:03: ERROR: Thread exit with unhandled std::exception basic_string::_M_construct null not valid
terminate called after throwing an instance of ‘std::logic_error’
what(): basic_string::_M_construct null not valid

Thread 20 “phash e0” received signal SIGABRT, Aborted.
[Switching to Thread 0x7fffbdffb700 (LWP 137772)]
__GI_raise (sig=sig@entry=6) at …/sysdeps/unix/sysv/linux/raise.c:50
50 …/sysdeps/unix/sysv/linux/raise.c: No such file or directory.
(gdb) bt
#0 __GI_raise (sig=sig@entry=6) at …/sysdeps/unix/sysv/linux/raise.c:50
#1 0x00007ffff7d9f859 in __GI_abort () at abort.c:79
#2 0x00005555555fb238 in __gnu_cxx::__verbose_terminate_handler() [clone .cold] ()
#3 0x00005555559b7ce6 in __cxxabiv1::__terminate(void (*)()) ()
#4 0x00005555559b7d21 in std::terminate() ()
#5 0x00005555559b7e99 in __cxa_rethrow ()
#6 0x00005555557e2e52 in thread_helper_f (t=) at Server.cpp:1493
#7 0x00007ffff7f75609 in start_thread (arg=) at pthread_create.c:477
#8 0x00007ffff7e9c293 in clone () at …/sysdeps/unix/sysv/linux/x86_64/clone.S:95

Thanks. Maybe it helps if you run “catch throw” in gdb before “run”?

Thread 16 “phash e0” hit Catchpoint 1 (exception thrown), 0x00005555559b7e0e in __cxa_throw ()
(gdb) bt
#0 0x00005555559b7e0e in __cxa_throw ()
#1 0x00005555555fc445 in std::__throw_logic_error(char const*) ()
#2 0x00005555557eb087 in std::__cxx11::basic_string<char, std::char_traits, std::allocator >::_M_construct<char const*> (this=0x7fffbcdfdaa0, __beg=0x0,
__end=0x636e692f7364616f <error: Cannot access memory at address 0x636e692f7364616f>) at /usr/include/c++/9/bits/basic_string.tcc:206
#3 0x00005555556a8aaf in std::pair<long long, std::__cxx11::basic_string<char, std::char_traits, std::allocator > >::pair (this=0x7fffbcdfda98)
at /usr/include/c++/9/bits/stl_pair.h:303
#4 ParallelHash::runExtraThread (this=0x7fffe403ee30) at urbackupclient/ParallelHash.cpp:534
#5 ParallelHash::operator() (this=0x7fffe403ee30) at urbackupclient/ParallelHash.cpp:104
#6 0x00005555557d45b6 in CPoolThread::operator() (this=0x7fffc4000bc0) at ThreadPool.cpp:73
#7 0x00005555557e2da7 in thread_helper_f (t=0x7fffc4000bc0) at Server.cpp:1487
#8 0x00007ffff7f75609 in start_thread (arg=) at pthread_create.c:477
#9 0x00007ffff7e9c293 in clone () at …/sysdeps/unix/sysv/linux/x86_64/clone.S:95

Did you configure multiple parallel hash threads? I haven’t tested this at all – just wanted to know if it still works with one of them. But I guess now I also know it doesn’t work with multiple (as expected).

Beta: Calculate file hashes on client in parallel as Internet/active client: yes
Beta: Number of parallel file download threads per file backup 1
Beta: Number of parallel server file hash threads per file backup 1
Beta: Number of parallel client file hash threads per file backup 1
Maximum number of simultaneous jobs per client 2

Thanks. Updated OP. Should be fixed now with client 2.5.10 beta.


Are image backups supported on Ubuntu, or only Debian 10 atm? If so, how can I enable this for Ubuntu 20.04? I have dattobd installed and it’s failing on 2 different clients, saying that the client doesn’t have IMAGE caps (this was on the server, iirc).

Both Server and Client are running latest beta (from this post)

Thanks !

Did you upgrade the client? It needs a create_volume_snapshot setting in /usr/local/etc/urbackup/snapshot.cfg and it might only set this up when installing a new client.

That’s better, now it doesn’t crash, but backups aren’t successful.
Should I disable parallelism, since you want to solve that later?
Also, as it doesn’t actually crash, I don’t know which gdb options would be useful on it.

31/10/20 22:14 DEBUG zzz[home]: Doing backup with hashed transfer…
31/10/20 22:14 INFO zzz[home]: Loading file list…
31/10/20 22:15 DEBUG Loading “urbackup/filelist_100.ub”. 49% finished 15.1558 MB/30.4683 MB at 541.176 KBit/s
31/10/20 22:16 DEBUG zzz[home] Starting incremental backup…
31/10/20 22:16 INFO zzz[home]: Calculating file tree differences…
31/10/20 22:16 INFO zzz[home]: Indexing file entries from last backup…
31/10/20 22:16 INFO zzz[home]: Calculating tree difference size…
31/10/20 22:16 INFO zzz[home]: Linking unchanged and loading new files…
31/10/20 22:16 ERROR Current parallel hash position greated than requested. 570 > 3
31/10/20 22:16 ERROR Error getting parallel hash for file “.bash_history” line 3 (2)
31/10/20 22:16 INFO Referencing snapshot on “zzz[home]” for path “home/clientsubname=home” failed: FAILED
31/10/20 22:16 INFO Waiting for parallel hash load stream to finish
31/10/20 22:16 ERROR Error during parallel hash load: TIMEOUT
31/10/20 22:16 INFO Waiting for file transfers…
31/10/20 22:16 INFO Waiting for file hashing and copying threads…
31/10/20 22:16 INFO Waiting for metadata download stream to finish
31/10/20 22:16 INFO Writing new file list…
31/10/20 22:16 DEBUG Some metadata was missing
31/10/20 22:16 DEBUG Client disconnected while backing up. Copying partial file…
31/10/20 22:16 DEBUG Syncing file system…
31/10/20 22:16 INFO Transferred 7.73277 MB - Average speed: 431.464 KBit/s
31/10/20 22:16 INFO (Before compression: 30.4629 MB ratio: 3.93946)
31/10/20 22:16 DEBUG Script does not exist urbackup/post_incr_filebackup
31/10/20 22:16 INFO Time taken for backing up client zzz[home]: 3m 39s
31/10/20 22:16 ERROR Backup failed

Yip, client is updated and:

cat /usr/local/etc/urbackup/snapshot.cfg

#This is a key=value config file for determining the scripts/programs to create snapshots


Here is manually running the scripts:

/usr/local/share/urbackup/dattobd_create_filesystem_snapshot 2 /

Snapshotting device /dev/sda2 via dattobd…
Using /dev/datto0…
Mounting /dev/mapper/wsnap-2…

/usr/local/share/urbackup/dattobd_remove_filesystem_snapshot 2 /mnt/urbackup_snaps/2

Unmounting /dev/datto0 at /mnt/urbackup_snaps/2…
Removing devicemapper snapshot…
Destroying dattobd snapshot /dev/datto0…