The major change in server 2.5.x and client 2.5.x is a complete redesign of how settings are handled. The goal is to make them more explicit and consistent.
- Each setting has a button next to it that switches between inheriting the setting from the group, using the value from the settings page on the server, and using the value from the client; for some settings it can also merge client and server values together
- File backup paths are now a normal setting just like any other. There is no longer a separation between paths to back up configured on the client and default directories to back up
This is a critical area of UrBackup and as such needs a lot of testing. Focus should also be on whether the 2.4.x client still more or less works with the 2.5.x server (and especially does not lose its connection settings for Internet servers).
With 2.5.x, clients can connect to the server via web sockets. Setup should be easy: instead of an “Internet server name/IP” of e.g. “example.com”, use “ws://example.com:55414/socket”. The real advantage comes into play if a real web server with SSL is set up in front. Either proxy the whole page, or serve the files and proxy only “/x” via FastCGI (see the manual); additionally proxy “/socket” as a web socket to “ws://127.0.0.1:55414/socket”. Then set the clients’ “Internet server name” to “wss://example.com/socket”. Basic authentication, a different name and a different port should also work, e.g. “wss://username:password@example.com/socket”.
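As an illustration, a reverse proxy in front of the UrBackup web server could look roughly like the following nginx sketch. This is an assumption-laden example, not part of the release: the server name, certificate paths and the choice to proxy the whole page (rather than only “/x” via FastCGI) are all placeholders to adapt.

```nginx
server {
    listen 443 ssl;
    server_name example.com;

    # Placeholder certificate paths
    ssl_certificate     /etc/ssl/example.com.crt;
    ssl_certificate_key /etc/ssl/example.com.key;

    # Proxy the whole web interface to the built-in UrBackup web server
    location / {
        proxy_pass http://127.0.0.1:55414;
    }

    # Upgrade /socket to a web socket connection for Internet clients
    location /socket {
        proxy_pass http://127.0.0.1:55414/socket;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 1d;
    }
}
```

With such a proxy in place, clients would use “wss://example.com/socket” as the “Internet server name”.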
A test version of the macOS client is here. Please add your feedback!
2.5.x can back up Linux clients (via dattobd or Linux device mapper snapshots). The use case is e.g. backing up a DigitalOcean instance. It is explicitly not meant to create image backups of all possible configurations, e.g. LVM, mdraid, btrfs, …
It can “live restore” a root volume, i.e., overwrite the root volume of a currently running machine. So one can make an image backup of e.g. a DigitalOcean droplet. To restore, create a new instance (tested with Debian 10/buster), then run the restore script, which overwrites the current instance with the backed-up one, using a trick that puts the currently running system files into (compressed) memory.
Dattobd and the Linux device mapper support changed block tracking (CBT), so image (and file) backups are done with CBT.
The Linux device mapper snapshot scripts are currently only tested on Debian 10 and with the root volume. They need to hook into the boot sequence (initramfs) to back up the root volume. For other volumes they would probably need to hook into udev. Help getting them running on other distributions or with non-root volumes would be welcome!
- To back up only used space, install partclone (e.g. via “apt install partclone”). This is currently supported for ext* and xfs. If partclone is not present, unused space will be backed up as well
- Install the (binary) client and select either dattobd or the Linux device mapper for snapshotting
- If you are using dattobd: install it first (see INSTALL.md in the datto/dattobd repository on GitHub)
- On the server browse to the image backup to restore
- Click on “Restore Linux image”
- Copy & paste the command into a new instance and follow the instructions (the command contains a token with a timeout, so after a while it will no longer work)
To test whether CBT works correctly with file backups, please enable “Debugging: Verify file backups using client side hashes” (for Internet clients with client-side hashing) or “Debugging: End-to-end verification of all file backups” in the advanced settings.
To test image backups, enable md5sum creation on the client via “touch /usr/local/var/create_md5sums_imagebackup”. The client will then create a file (with date and time in the name) at “/usr/local/var/md5sums-*” for each image backup, listing all files in the image plus their md5sums. Copy that file to the server, mount the image backup, then verify the md5sums with “cd /backup/client/image_mnt; md5sum --quiet -c /path/to/md5sums-file.txt”. The only files with checksum errors should be the “.overlay-*” files in the root directory and swap files (those are automatically excluded from the backup).
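The verification step can be dry-run without a real image backup. The sketch below uses made-up paths under /tmp to stand in for the mounted image backup and for the md5sums-* file copied from the client; only the final “md5sum --quiet -c” invocation is the same as for a real backup.

```shell
# Toy stand-ins for the mounted image backup and the copied md5sums file
mkdir -p /tmp/md5demo/image_mnt
echo "demo contents" > /tmp/md5demo/image_mnt/etc-hostname

# Create a checksum file with paths relative to the mount point,
# like the client-generated md5sums-* file
( cd /tmp/md5demo/image_mnt && md5sum etc-hostname ) > /tmp/md5demo/md5sums-demo.txt

# Verify from inside the mounted image; --quiet prints only mismatches
( cd /tmp/md5demo/image_mnt && md5sum --quiet -c /tmp/md5demo/md5sums-demo.txt ) \
  && echo "verification OK"
```

With a real backup, the mount point would be the image mounted on the server and the checksum file the one the client wrote to /usr/local/var.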
Thanks for any help with testing and feedback!
As always: replace the executables (via the installers), and the server/client database will be updated on first run.
To downgrade, stop the UrBackup server, restore “C:\Program Files\UrBackupServer\urbackup” or “/var/urbackup” from a backup made before the upgrade, and then install the previous version over the beta release.
Please see “Having problems with UrBackup? Please read before posting” for what kind of information to add when reporting issues. Reports such as “X does not work” are useless. In general, at the very least, think about what information one would need to reproduce the problem in a different environment. In the best case, you debug the problem yourself and then post the fix.
- Disable accepting new servers via clicking on the balloon
- Fix updating settings via post install wizard
- Base client side paths completely off settings file
- Start filesrv before accepting commands
- Don’t update settings on upgrade if no paths are configured and setting file doesn’t exist yet
- Fix waiting for connection to server when restoring image
- Cleanup snapshot later if it cannot be deleted because it is in use
- Build glibc client with older glibc
- Fix mail retries
- Fix raw image backup trim issue (Do not trim if trim range is narrowed down to zero)
- Fix local encryption issues that caused local image backups to fail: Continue receiving as long as pipe is readable if using compressed/encrypted pipe
- Don’t require correct client uid for internet/active clients
- Fix internet server url validation if empty
- Fix ZFS incremental file backups
- Make vhdx support compile with older compilers
- Build Debian package on older Debian so it works on more versions
- Handle merge settings which cannot be configured on client
- Select correct use value for old setting config
- Add vhdx support
- Use GPT partitioning for larger volume images