Parallel client-side file hashing is behind a feature flag on the server’s Internet settings tab and needs to be enabled there first.
Changes with server 2.2.4 beta
Fix server crash: Copy clientname instead of reference
Update translations
More error information when volume/snapshot creation fails
Add async index error hint
Treat async index not found error as timeout error
Add internet-only argument to settings and command line
Improve web interface hostname check regex to allow multiple servers
Parallel hash: Don’t set min downloaded if item is moved from chunked to full
Show on status screen if no backup directories are configured
Show file/image backups as disabled on status screen
Remove extra space in report mail subject
Decrease restore internet client ping interval
Do not shut down socket during image download
Start/stop single file shadow copy references
Major changes with server 2.2.x beta
Client-side file hashing in parallel with file backup
Image backup restore via Internet client
Simultaneous file metadata application with file backups
Scriptable (Lua) alerts and reports
Major changes with client 2.2.x beta
Client-side file hashing in parallel with file backup
Changes with Restore CD 2.1.1 beta
Fix: Restore to correct partition with newer kernel
Log in again if restore client times out
Fix restore progress indicator after reconnect
Do not lock backup_mutex during image download to prevent restore client timeout
Changes with Restore CD 2.1.x beta
Image backup restore via Internet client
Upgrade process
As always: replace the executables (via the installers), and the server/client database will be updated on first run.
To auto-update clients, place the files from the update directory into C:\Program Files\UrBackupServer\urbackup or /var/urbackup. Disable “Download client from update server” in the server settings to prevent the server from downloading the current release version over them.
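For anyone who wants to script that copy step, a minimal sketch might look like the following (Python; the “update” source directory and the /var/urbackup target are assumptions based on a default Linux install, so adjust both to your setup):

```python
# Minimal sketch of the client auto-update copy step.
# Assumptions: "update" is the extracted update directory from this
# release, and /var/urbackup is the server's default data directory
# on Linux. Adjust both paths to your installation.
import shutil
from pathlib import Path

update_dir = Path("update")          # hypothetical source directory
target_dir = Path("/var/urbackup")   # default Linux data directory

for f in update_dir.iterdir():
    if f.is_file():
        shutil.copy2(f, target_dir / f.name)  # copy2 keeps timestamps
        print(f"copied {f.name}")
```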
To downgrade: stop the UrBackup server, restore C:\Program Files\UrBackupServer\urbackup or /var/urbackup from a backup made before the upgrade, and then install the previous version over the beta release.
Inconsistent backup status in web interface? I’m not sure if this is specific to this version, but I have machines in a group where either file backups or image backups are disabled, and the status of a specific client will occasionally say “Disabled” and be green, but occasionally say “No recent backup” and be red. It fluctuates. I have 2.2.4 beta installed on Ubuntu 16.04.
The server crashed this morning with the following message:
“2017-10-05 09:36:20: PT: Hashing file ‘emalware.002’
Bus error”
I have the log file if you want it but it’s a big one.
Does the database have a size limit, or does it just keep growing as long as the disk has space available?
-rwxrwxrwx 1 root urbackup 21967667200 Oct 5 13:24 backup_server_files.db
-rwxrwxrwx 1 root urbackup 52428800 Oct 5 03:45 backup_server_link_journal.db
-rwxrwxrwx 1 root urbackup 1258291200 Oct 5 13:24 backup_server_links.db
-rwxrwxrwx 1 root urbackup 52428800 Oct 5 13:23 backup_server_settings.db
-rwxrwxrwx 1 root urbackup 2202009600 Oct 5 13:24 backup_server.db
The database(s) are getting rather large. Maybe I should run a cleanup?
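By “cleanup” I mean something along the lines of the urbackupsrv cleanup action (assuming the command-line cleanup with an amount parameter as documented in the UrBackup administration manual), e.g. wrapped in a small maintenance script like this, where the 20% is only an illustrative value, not a recommendation:

```python
# Hedged sketch: invoke UrBackup's command-line cleanup to free space by
# removing old backups. Assumes the "cleanup" action with an amount
# parameter as documented in the UrBackup administration manual; the
# 20% below is only an illustrative value.
import subprocess

subprocess.run(["urbackupsrv", "cleanup", "-a", "20%"], check=True)
```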
My log file is 73.5GB, yes GB!! Should it not roll over automatically?
I’m going to delete the log and restart the urbackup server process. Will upload the log next time it crashes.
It’s crashed again already: “Oct 5 13:24:46 freenas kernel: pid 26020 (urbackupsrv), uid 1001: exited on signal 11”
The log file shows nothing unusual that I can see. It’s 9.5 MB this time. I can upload or attach it if you like.
Same again :o(
2017-10-05 14:06:06: Connecting Channel to RogerHaigh failed - CONNECT error -55
Bus error
It’s happening every few minutes now…
So I just spent the last few hours running a cleanup etc. and restarted.
Less than half an hour later I get:
“2017-10-05 20:58:22: Connecting to target service…
2017-10-05 20:58:22: Established internet connection. Service=1
2017-10-05 20:58:22: Reconnected successfully,
Bus error”
Update: The system has now been stable for a few days and hasn’t crashed at all. I’m not really sure what changed, apart from removing an unrelated 4 TB drive and pool from the ZFS system. I’m just happy it seems stable now.