UrBackup Server/Client 2.0.4 beta (with Mac OS X client); Restore 2.0.1

This problem seems to have sorted itself out, so it might not have been an issue; the client download is now working.

Hi, it seems a little bug introduced itself in ‘urbackup_snapshot_helper’ in the betas after 2.0.0. I straced it and traced it back to an erroneous ‘-c’ parameter being passed to ‘btrfs subvolume delete’:

 [pid  8227] execve("/sbin/btrfs", ["/sbin/btrfs", "subvolume", "delete", "-c", "/media/BACKUP/urbackup/testA54hj"...], [/* 0 vars */]) = 0

This makes the snapshot helper fail every btrfs test.

root@icts-q-dcvm-4:/usr/bin# ./urbackup_snapshot_helper test
Create subvolume '/media/BACKUP/urbackup/testA54hj5luZtlorr494/A'
Create a snapshot of '/media/BACKUP/urbackup/testA54hj5luZtlorr494/A' in '/media/BACKUP/urbackup/testA54hj5luZtlorr494/B'
ERROR: error accessing '-c'
Delete subvolume '/media/BACKUP/urbackup/testA54hj5luZtlorr494/A'
TEST FAILED: Removing subvolume A failed
ERROR: error accessing '-c'
Delete subvolume '/media/BACKUP/urbackup/testA54hj5luZtlorr494/B'
TEST FAILED: Removing subvolume B failed

‘urbackup_snapshot_helper’ from beta 2.0.0.0 works fine:

root@icts-q-dcvm-4:/tmp/unpack/usr/bin# ./urbackup_snapshot_helper test
Create subvolume '/media/BACKUP/urbackup/testA54hj5luZtlorr494/A'
Create a snapshot of '/media/BACKUP/urbackup/testA54hj5luZtlorr494/A' in '/media/BACKUP/urbackup/testA54hj5luZtlorr494/B'
Delete subvolume '/media/BACKUP/urbackup/testA54hj5luZtlorr494/A'
Delete subvolume '/media/BACKUP/urbackup/testA54hj5luZtlorr494/B'
TEST OK

Update: this happens on Ubuntu 14.04, which apparently uses an older ‘btrfs’ where the ‘-c’ option to subvolume delete does not yet exist.
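
A fallback for older btrfs-progs could be as simple as retrying the delete without ‘-c’ when the option is rejected. Just a sketch of the idea, not UrBackup’s actual code (SUBVOL is a placeholder):

    # try the newer syntax first, fall back for btrfs-progs without 'subvolume delete -c'
    if ! btrfs subvolume delete -c "$SUBVOL" 2>/dev/null; then
        btrfs subvolume delete "$SUBVOL"
    fi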

Hi Uroni,

Testing on FreeNAS with 2.0.4 clients and server.

Seems to be working mostly OK, so thank you very much for your hard work.
I did have issues initially where files would only very slowly be removed from the file queue, because UrBackup was searching for the file hashes of the previous full file backup and couldn’t read them.

I got around this by deleting the clients and re-adding them, since they were test machines.

Other than that, I am getting “error setting filetime” warnings at the end of backups. Does this have anything to do with atime not being set?

Server:

Installed on FreeNAS and seems to work properly.

Linux Client
Testing on
DISTRIB_ID=Ubuntu
DISTRIB_RELEASE=15.10
DISTRIB_CODENAME=wily
DISTRIB_DESCRIPTION=“Ubuntu 15.10”

When installing, the client gives this error:

root@erick-Latitude-E6320:/home/erick/Downloads# ./beta.sh
Verifying archive integrity… All good.
Uncompressing UrBackup Client Installer for Linux 100%
Installation of UrBackup Client 2.0.4 beta… Proceed ? [Y/n]
y
Uncompressing install data…
Detected Debian (derivative) system
Detected systemd
Detected architecture x86_64-linux-eng
Installing systemd unit…
./install_client_linux.sh: 1: ./install_client_linux.sh: pkg-config: not found
root@erick-Latitude-E6320:/home/erick/Downloads#

Fixed by installing pkg-config: apt-get install pkg-config

When selecting option 1 for snapshots, it says:

W: Failed to fetch https://cpkg.datto.com/repositories/dists/wily/main/binary-amd64/Packages HttpError404

W: Failed to fetch https://cpkg.datto.com/repositories/dists/wily/main/binary-i386/Packages HttpError404

E: Some index files failed to download. They have been ignored, or old ones used instead.

When selecting option 4 for no snapshots, it says:

Detected Debian (derivative) system
Detected systemd
Stopping currently running client service…
Detected architecture x86_64-linux-eng
Installing systemd unit…
Starting UrBackup Client service…
Successfully started client service. Installation complete.
+Detected Ubuntu. Dattobd supported
-Detected no btrfs filesystem
-LVM not installed
Please select the snapshot mechanism to be used for backups:

  1. dattobd volume snapshot kernel module from https://github.com/datto/dattobd
  2. Use no snapshot mechanism
    4
    Configured no snapshot mechanism

Worked on second try

Verifying archive integrity… All good.
Uncompressing UrBackup Client Installer for Linux 100%
Installation of UrBackup Client 2.0.4 beta… Proceed ? [Y/n]
y
Uncompressing install data…
Detected Debian (derivative) system
Detected systemd
Stopping currently running client service…
Detected architecture x86_64-linux-eng
Installing systemd unit…
Starting UrBackup Client service…
Successfully started client service. Installation complete.

Full file backups work

I LOVE THE SPEED METER!!!

This is something that needs improvement. Thanks for the hint!

It seems it needs a fallback for older btrfs-tools. Thanks for the hint!

Your server probably downloaded it while I was uploading or your download got interrupted.

Ok, will have a look at this.

No, I’m getting this sporadically as well but haven’t looked at it yet. This is only a cosmetic thing. It sets the file times of the files in the backup storage to the file times of the files being backed up. Those file times are then not used again; for restores and the web interface, the internal metadata is read and used directly.
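
Just to illustrate what “setting filetime” amounts to on the storage side (illustration only, not UrBackup’s actual code; the path and timestamp are made up, and the server really takes the times from the metadata the client sends), it is conceptually the equivalent of:

    # set the stored copy's modification time to the client-reported timestamp
    touch -m -d "2016-02-13 04:13:47" /media/BACKUP/urbackup/client1/160213-0413/home/user/file.txt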

Hello,
with 2.0.4, I’m noticing one (blocking) problem: the backup starts indexing, then hangs. The client log shows a lot of:
[…]

2016-02-13 04:13:47: SQLITE_BUSY in CQuery::Execute Stmt: [END;]
2016-02-13 04:13:47: Active query(0): END;
2016-02-13 04:13:57: SQLITE_BUSY in CQuery::Execute Stmt: [END;]
2016-02-13 04:13:57: Active query(0): END;
2016-02-13 04:14:07: SQLITE_BUSY in CQuery::Execute Stmt: [END;]
2016-02-13 04:14:07: Active query(0): END;
2016-02-13 04:14:17: SQLITE_BUSY in CQuery::Execute Stmt: [END;]
2016-02-13 04:14:17: Active query(0): END;
2016-02-13 04:14:27: SQLITE_BUSY in CQuery::Execute Stmt: [END;]
2016-02-13 04:14:27: Active query(0): END;
2016-02-13 04:14:37: SQLITE_BUSY in CQuery::Execute Stmt: [END;]
2016-02-13 04:14:37: Active query(0): END;
2016-02-13 04:14:47: SQLITE_BUSY in CQuery::Execute Stmt: [END;]
2016-02-13 04:14:47: Active query(0): END;
2016-02-13 04:14:57: SQLITE_BUSY in CQuery::Execute Stmt: [END;]
2016-02-13 04:14:57: Active query(0): END;
2016-02-13 04:15:07: SQLITE_BUSY in CQuery::Execute Stmt: [END;]
2016-02-13 04:15:07: Active query(0): END;
2016-02-13 04:15:17: SQLITE_BUSY in CQuery::Execute Stmt: [END;]
2016-02-13 04:15:17: Active query(0): END;
2016-02-13 04:15:27: SQLITE_BUSY in CQuery::Execute Stmt: [END;]
2016-02-13 04:15:27: Active query(0): END;
2016-02-13 04:15:37: ERROR: SQLITE: Long running query Stmt: [END;]
2016-02-13 04:15:37: ERROR: Active query(0): END;

The server stays in the “Indexing” state, and the client continues with:

2016-02-13 04:19:06: Looking for old Sessions… 0 sessions
2016-02-13 04:49:07: Looking for old Sessions… 0 sessions
2016-02-13 05:19:08: Looking for old Sessions… 0 sessions
2016-02-13 05:49:09: Looking for old Sessions… 0 sessions
2016-02-13 06:19:10: Looking for old Sessions… 0 sessions
2016-02-13 06:49:11: Looking for old Sessions… 0 sessions
2016-02-13 07:19:12: Looking for old Sessions… 0 sessions
2016-02-13 07:49:13: Looking for old Sessions… 0 sessions
2016-02-13 08:19:14: Looking for old Sessions… 0 sessions
2016-02-13 08:49:15: Looking for old Sessions… 0 sessions
2016-02-13 09:19:16: Looking for old Sessions… 0 sessions
2016-02-13 09:49:17: Looking for old Sessions… 0 sessions

and everything is stuck this way.

This seems to happen only on larger servers, and not on the first (full or incremental) backup, but on the second or third one after restarting the client.
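
For reference, SQLITE_BUSY means a statement could not get the database lock because another connection is holding it. A generic reproduction with the sqlite3 shell, nothing to do with UrBackup’s own database, would be:

    # hold a write lock on a throwaway database for a few seconds in the background
    (echo "BEGIN IMMEDIATE; CREATE TABLE IF NOT EXISTS t(x);"; sleep 5; echo "COMMIT;") | sqlite3 /tmp/busy_demo.db &
    sleep 1
    # this second connection fails with "database is locked", i.e. SQLITE_BUSY
    sqlite3 /tmp/busy_demo.db "CREATE TABLE IF NOT EXISTS u(x);"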

Could you create a memory dump of the UrBackupClientBackend.exe process and send that to me (best done with the task manager http://blogs.msdn.com/b/debugger/archive/2009/12/30/what-is-a-dump-and-how-do-i-create-one.aspx )?

This is happening on Linux. I will do it as soon as it happens again.
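
On Linux, a dump of the running process can be grabbed with gcore from the gdb package; the process name below is an assumption, so adjust it to whatever the client binary is actually called:

    # dump the running client process to a core file
    gcore -o /tmp/urbackup_client $(pidof urbackupclientbackend)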

Assuming you are using the client installer: I haven’t saved the debug info for that yet (before stripping it).

Can you perhaps give me steps to reproduce the problem? I could also build one with debug info.

Actually, it happens after one or two backups, without restarting the client. When indexing, that is the error message that shows up, and everything hangs there. Restarting the client solves the problem, at least for the first backup. It usually happens with big or slow servers. It happens both with the snapshot modes (dattobd and LVM) and without snapshots.

When installing on OS X, I get two UrBackup icons showing up in the menu bar. Is that by design? Or is it a bug in the GUI?

Same here

Can you tell me which kind of distribution + Linux kernel version? Thanks!

It surely happened on: Debian 8 (stock kernel: 3.16.0-4-amd64)
Ubuntu 14.04: 3.13.0-77-generic
Centos 7: 3.10.0-327.4.5.el7.x86_64

Yes, I have many combinations here; that’s why I’m doing so many tests.

Hello.

I’m new to this nice thing called “UrBackup”, and I’m starting to love it :smile:

I’ve installed this beta version (2.0.4) and I have a question:

When the backup activity (the first full backup) stops suddenly because of an unplanned shutdown of the server where the UrBackup server is installed, the backup process won’t resume from the place where it stopped; it will begin a new full backup directory… Is that right, or is there something that can be done about it?

Thanks in advance… great work!!

Hi Uroni, something else I noticed, but I don’t have logs to back my story up.
First point:
I have a machine with symbolic links in the backup location that point to very large folders (Time Machine backup folders) outside the backup location. Even with the symbolic links set as excluded in the backup location settings, the indexing will try to index the very large Time Machine backup folders. For some reason the indexing never finished. The only way to get it to back up was to delete the symbolic links.
Second point:
After the initial full file backup, I recreated the link, thinking the issue might only have been with the initial file indexing. I let UrBackup do its thing for a couple of days, until I got warnings that my backup space was low. I think the attempted backup of the Time Machine folders caused the server to write large amounts of temporary files.

I realise this is not much information to go on, especially for point 2. Point 1 is pretty clear, though, I hope.

Thanks for this brilliant backup system!

Yes, that is by design. The server is assumed not to shut down suddenly so regularly that it would make sense to optimize for that case.

Is that Time Machine link always there? If yes, we should compile in an exclusion for that…

There is no such setting in UrBackup 2.0.x. If you are using the client with server 1.4.x you should upgrade, as 1.4.x doesn’t back up the file metadata and has only rudimentary symlink handling.

With UrBackup 2.0.x server + client you can disable following symlinks via flags on the backup directories.

I.e. if you set no flags, the default flags are follow_symlinks,symlinks_optional,share_hashes.
So if you do not want to follow symlinks outside of the backup dir, you set e.g. the default directories to back up to /|root/symlinks_optional,share_hashes (or add /symlinks_optional,share_hashes to the name in the select directories dialog). I’ll have to add this to the GUI at some point.
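
For example (the path and name here are just placeholders), an entry that backs up /home under the name “home” without following symlinks out of it would look like:

    /home|home/symlinks_optional,share_hashes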

Dear Mr. Uroni,
is there a function in the client that provides the possibility to shut down the computer when the backup is finished? If not, have you planned to develop it?
Best regards.

Hi Stefan,

I’m no expert, but there is an option to use postfilebackup.bat to run a shutdown script after the backup completes.
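
For example (just a sketch, and the 60-second delay is arbitrary), a postfilebackup.bat on a Windows client could contain:

    @echo off
    rem shut the machine down 60 seconds after the file backup finishes
    shutdown /s /t 60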

Hi Uroni,

I’m having a similar problem to Draga, with Windows clients backing up to a Windows Server 2012 server. I haven’t checked the logs yet, but I will the next time I pick up the problem. In short, it seems that if the clients and server are restarted, backups run fine, but after a couple of backups some clients seem to hang (not all of them, and not always the same ones) during indexing, or even at a certain percentage of completion during the backup, and they never complete. In some instances I’ve just restarted the server and, when the clients came back online, they finished their backups without a problem, but in most instances I need to restart the client and restart the backup process for it to finish.

It’s pretty much a daily occurrence, so I will get some logs for you.