UrBackup Server/Client 2.0.4 beta (with Mac OS X client); Restore 2.0.1

Hi, I’m getting the following error in some of my Windows clients:

I’m backing up to a CIFS share on a FreeNAS box.

@uroni, I am switching over 8-10 clients to 2.0.4 beta this week. Are there any particular metrics I can supply to you to help out? I am going to roll them in on a gradual basis to test the waters. But if there is anything specific I can supply from general usage, I would be more than happy to further the cause.

Getting the following errors on the client with the 2.0.4 server and the 2.0.4 Windows client whenever I start a full or incremental backup.
The client is set to Internet Only mode.
Image settings are set to raw… confused

2016-02-24 12:48:51: Message end. Size: 41
2016-02-24 12:48:51: Message end. Size: 41
2016-02-24 12:48:51: Message end. Size: 41
2016-02-24 12:48:51: Message end. Size: 41
2016-02-24 12:48:51: Message end. Size: 41
2016-02-24 12:48:51: Message end. Size: 41
2016-02-24 12:48:51: Message end. Size: 41
2016-02-24 12:48:51: Message end. Size: 41
2016-02-24 12:48:51: Message end. Size: 41
2016-02-24 12:48:51: Message end. Size: 41
2016-02-24 12:48:51: ERROR: Too much hashdata - error
2016-02-24 12:48:51: ERROR: Too much hashdata - error
2016-02-24 12:48:51: ERROR: Too much hashdata - error
2016-02-24 12:48:51: ERROR: Too much hashdata - error
2016-02-24 12:48:51: ERROR: Too much hashdata - error
2016-02-24 12:48:51: ERROR: Too much hashdata - error
2016-02-24 12:48:51: ERROR: Too much hashdata - error
2016-02-24 12:48:51: ERROR: Too much hashdata - error
2016-02-24 12:48:51: ERROR: Too much hashdata - error
2016-02-24 12:48:51: ERROR: Too much hashdata - error
2016-02-24 12:48:51: ERROR: Too much hashdata - error
2016-02-24 12:48:51: ERROR: Too much hashdata - error
2016-02-24 12:48:51: ERROR: Too much hashdata - error

The log keeps repeating those messages and no data is saved to the server.

Does this happen only for image backups?

Yes, image backups only, although I can’t test with file backups because I don’t run any.

It should not occur for full image backups, at least. Are you sure they are real fulls and not synthetic fulls? If you use btrfs, it always uses synthetic fulls automatically…
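For anyone not familiar with the term: a synthetic full is assembled on the server from the previous image plus the blocks the client reports as changed, which is presumably why it can hit the same hash-data path as an incremental. A minimal Python sketch of the idea (block size, names and the changed-block dict are hypothetical; the real VHD/.hash/.cbitmap handling is more involved):

BLOCK_SIZE = 512 * 1024  # hypothetical block size

def synthesize_full(prev_image, new_image, changed_blocks):
    # changed_blocks: {block_index: bytes} with the data the client
    # reported as changed since the previous backup.
    with open(prev_image, "rb") as prev, open(new_image, "wb") as out:
        index = 0
        while True:
            block = prev.read(BLOCK_SIZE)
            if not block:
                break
            # Reuse the old block unless the client sent a new one.
            out.write(changed_blocks.get(index, block))
            index += 1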

Can you post the size of the .hash file in the backup storage of the last full or incremental backup (on which the incremental backup is based)?

-rwx------  1 urbackup urbackup  70G Feb 19 10:34 Image_E_160219-0927.vhd
-rwx------  1 urbackup urbackup 7,5M Feb 19 09:27 Image_E_160219-0927.vhd.cbitmap
-rwx------  1 urbackup urbackup  15M Feb 19 10:27 Image_E_160219-0927.vhd.hash
-rwx------  1 urbackup urbackup  562 Feb 19 09:27 Image_E_160219-0927.vhd.mbr

The problem seems to be fixed. I have deleted the old backups, and now the full backup is running.
PS: Internet Only is set to true only because the client sits behind a local NAT… everything is on the LAN

I was happy too early… the full backup ran without problems, but the next incremental is broken again with the same errors :frowning:
Update: Deleted the backup again… now it seems to work. Incremental images are running again…

I have seen my NAS server, built with OpenMediaVault (Debian 7) and running the 2.0.4 beta client, get stuck indexing the daily file backup two days in a row now.
So far I have just restarted the backend service, and then everything was fine again.

/etc/init.d/urbackupclientbackend restart
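If your client build ships the urbackupclientctl helper, it can also be worth checking whether indexing is actually still progressing before restarting:

urbackupclientctl status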

Has anybody seen something like this yet?

Could be this issue: UrBackup Server/Client 2.0.4 beta (with Mac OS X client); Restore 2.0.1

I managed to track it down and will release a fixed version soon.

I started migrating over my first remote client and it has been indexing for two days. I just enabled debug logging on client and server. How long should I let them run before sending them off to you? Do you want memory dumps also?

Could you have a look at the debug log first? Maybe it is just taking a long time. It is hashing the files twice with different functions (to verify both have the same result). That will be removed soon and then it will be faster.
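To illustrate what “hashing twice and comparing” means (and why indexing currently costs roughly double), here is a rough Python sketch. It runs the same file through two independent code paths and checks that they agree; these just stand in for the two different hash implementations the client compares:

import hashlib

def hash_streamed(path):
    # Code path 1: hash the file in 1 MiB chunks.
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.digest()

def hash_whole(path):
    # Code path 2: hash the file read in one piece.
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).digest()

def verified_hash(path):
    # Each file is read and hashed twice, hence the slowdown.
    a = hash_streamed(path)
    b = hash_whole(path)
    if a != b:
        raise RuntimeError("hash mismatch for " + path)
    return a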

I suppose it is possible, but this is a client with only about 350 GB of backups, and it has been stuck at indexing for 4-5 days now. How long do I let that run before reverting back to the prior release version?

If you put it into debug logging mode, you’ll see whether it makes forward progress on a per-file basis. Also, could you try the just-released 2.0.5 client? Thanks!
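On a Linux client, one way to get a debug-level log (exact flags can differ per version; check urbackupclientbackend --help) is to stop the service and run the backend in the foreground:

/etc/init.d/urbackupclientbackend stop
urbackupclientbackend --loglevel debug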

Dear Mr. Uroni,
I am using UrBackup Server 2.0.4 beta on a Linux server and trying to back up a Mac OS X 10.7.5 machine with UrBackup Client 2.0.4 beta. The file backups work, but when I try an image backup I get the following errors in the server log:
01/03/16 18:13 DEBUG Getting client settings…
01/03/16 18:13 DEBUG Cannot do image backup because can_backup_images=false
01/03/16 18:13 DEBUG Cannot do image backup because internet_no_images=true
but in the settings of the client, under the Internet tab, “Do image backups over internet” is checked.
Is this function already implemented for the Mac client?
Best regards.

Dear all,
is it possible to limit the amount of space dedicated to a client?
I use the client and server version 2.0.4 beta.
I noticed that I can use the soft client quota, but in that case the client can exceed the threshold and I do not receive any alert by e-mail.
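In the meantime, a cron-driven check might work as a stopgap; a sketch below (path, addresses and quota are placeholders, and the total will overcount if the storage deduplicates via hard links):

import os, smtplib
from email.message import EmailMessage

CLIENT_DIR = "/media/backup/urbackup/CLIENTNAME"  # placeholder path
QUOTA_BYTES = 200 * 1024**3                       # placeholder: 200 GiB

def dir_size(path):
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            try:
                total += os.lstat(os.path.join(root, name)).st_size
            except OSError:
                pass  # file vanished mid-walk
    return total

used = dir_size(CLIENT_DIR)
if used > QUOTA_BYTES:
    msg = EmailMessage()
    msg["Subject"] = "UrBackup client over quota"
    msg["From"] = "urbackup@example.com"
    msg["To"] = "admin@example.com"
    msg.set_content("%s uses %d bytes; quota is %d" % (CLIENT_DIR, used, QUOTA_BYTES))
    with smtplib.SMTP("localhost") as s:
        s.send_message(msg)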
Thanks!

As far as I can see, Mac OS X does not have a built-in snapshot mechanism, so image backups cannot be supported.

Even Time Machine backs up without snapshots, so perhaps Apple thinks the user’s data is not important? (Correct me if I am wrong there…)

Dear Mr. marveltec,
it works! Thanks!
Do you know if there is something like that for image backups?
I had tried to create a postimagebackup.bat, but it doesn’t work.
Best regards.
