Error creating hardlink

Both server and client run Ubuntu 16.04.

Server: UrBackup 2.3.8
Client: 2.3.4

Clients are installed with the backup method option "Use no snapshot mechanism".
Filesystems are stock; the backup target is an NFS share served by a Windows server.

I keep getting this error for one of the clients. For several other clients everything works as smoothly as it gets with this lovely backup solution, which deserves great praise!

Please advise, thanks! I have been trying to get this backup running for days. How can I get rid of this without disturbing anything?

It always tries to hardlink the recent backup to the new one. I can ls the new file, which means it does exist; unfortunately this still makes the whole backup fail. Please let me know if I can add more information.

16.09.19 23:55
HT: Error creating hardlink from "/backups/path/190916-1648/PATH/path.path/path/Path/path/path/path/Pretty_Long_File_name.pdf" to "/backups/path/190916-2139/PATH/path.path/path/Path/path/path/path/Pretty_Long_File_name.pdf"
16.09.19 23:55
Writing metadata to /backups/path/190916-2139/.hashes/PATH/path.path/path/Path/path/path/path/Pretty_Long_File_name.pdf failed
17.09.19 00:16
FATAL: Backup failed because of disk problems (see previous messages)

The problem seems to be caused by a rename of part of the filename extension from upper to lower case (so no letters changed, only the case).
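A case-only rename failing like this suggests the backup target is not case-sensitive. Below is a minimal probe (my own sketch, not part of UrBackup) that checks whether the filesystem behind a directory distinguishes case:

```python
import os
import tempfile

def is_case_sensitive(directory):
    """Probe whether `directory` sits on a case-sensitive filesystem.

    Creates a lowercase marker file, then checks whether the same name
    in upper case resolves to it. On a case-insensitive mount (e.g. an
    NTFS-backed share) both spellings hit the same file.
    """
    probe = os.path.join(directory, "casetest.tmp")
    with open(probe, "w") as f:
        f.write("probe")
    try:
        return not os.path.exists(os.path.join(directory, "CASETEST.TMP"))
    finally:
        os.remove(probe)

# Demo against a local temporary directory (ext4/tmpfs on Linux
# is case-sensitive; run it against the NFS mount to test the target).
with tempfile.TemporaryDirectory() as d:
    local_is_sensitive = is_case_sensitive(d)
```

Running the same probe with the backup mount point as the directory would show whether the Windows-backed share folds case.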

  • please advise on how to proceed with the folders that are created on the server when a backup fails
    it seems that although a new full backup is started, the failed folder from the last attempt is still referenced somehow

Sorry you are having issues. To solve the problem, the server (debug) log would be useful.
If possible, could you post it or send it?

This post describes how to change the server to debug logging, where it is stored and where to send it to if posting is not possible: Having problems with UrBackup? Please read before posting


Is it normal that the case differs between the .hashes path and the normal path?
Also note that in 190912-1852 both are in uppercase.

find . -iname "*Filename_BLA.PDF"


It’s not normal. What file system are you using on the client and on the server?

Thanks for checking in on this!
The backup target is a Windows share mounted via NFS; all filesystems are stock (ext4).

Making sure I don’t misunderstand… the backup storage is ext4? The client on Linux backs up NTFS on Windows via NFS mount?

Yes, the Linux server has an NFS mount coming from a Windows server as the backup target.
The Linux server is running UrBackup; the Windows machine is just a file server.
So UrBackup is writing into the NFS share directly.
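The mount type backing the backup path can be double-checked from /proc/self/mounts. A minimal Linux-only sketch (the helper name is mine, not a UrBackup tool):

```python
import os

def mount_type(path):
    """Return the filesystem type of the mount backing `path`.

    Scans /proc/self/mounts (Linux-only) and picks the longest
    mount point that is a prefix of the resolved path.
    """
    path = os.path.realpath(path)
    best, fstype = "", None
    with open("/proc/self/mounts") as f:
        for line in f:
            _device, mnt, typ = line.split()[:3]
            if (path == mnt or path.startswith(mnt.rstrip("/") + "/")) \
                    and len(mnt) > len(best):
                best, fstype = mnt, typ
    return fstype

# Demo: the root filesystem always has some type (ext4, overlay, ...).
root_type = mount_type("/")
```

On the affected server, calling `mount_type("/backups")` should report nfs or nfs4, confirming that writes go straight to the Windows export.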

what is the best way to get a full backup from scratch for this client - without losing old backups preferably?

The server log shows a lot of these lines:
ERROR: Error opening file "/backups/servername/190917-0936/.b68xO+K9SCOF35cLk4Bf9Q/35849" from pipe for reading ec=2
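Assuming the `ec=` value in the log is a raw errno (an assumption, not confirmed from the UrBackup source), ec=2 decodes to ENOENT, "No such file or directory":

```python
import errno
import os

# Decode errno 2 the way a Unix program would report it.
name = errno.errorcode[2]   # symbolic name of errno 2
message = os.strerror(2)    # human-readable description
print(name, "-", message)   # → ENOENT - No such file or directory
```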

Maybe the entries about hardlinks failing are not important after all, and the real reason is the open-file error?

Ok, that’s not what I said… This could cause issues, because UrBackup Server assumes the file system is case sensitive on Linux.

Yes, hard link errors get ignored and it'll just copy the file. NTFS has a low hard link limit, which UrBackup usually doesn't ERROR about when the server is running on Windows.
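The ignore-and-copy behaviour described above can be sketched roughly like this (a simplified illustration, not UrBackup's actual code; NTFS's per-file hard link limit is documented at 1024):

```python
import os
import shutil
import tempfile

def link_or_copy(src, dst):
    """Try to hard-link dst to src; on failure, fall back to a copy.

    A failed hard link (e.g. EMLINK at NTFS's per-file limit, EXDEV
    across filesystems, or EPERM on some network mounts) is treated
    as non-fatal: the file is simply copied instead.
    """
    try:
        os.link(src, dst)
        return "linked"
    except OSError:
        shutil.copy2(src, dst)
        return "copied"

# Demo on a throwaway directory: a same-filesystem link succeeds,
# leaving the source with a link count of 2.
with tempfile.TemporaryDirectory() as d:
    src = os.path.join(d, "a.pdf")
    with open(src, "w") as f:
        f.write("x")
    result = link_or_copy(src, os.path.join(d, "b.pdf"))
    links = os.stat(src).st_nlink
```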

I have also found a lot of these errors: the server fails to delete old directories, and not only for this client. Yet I can add and delete files in the share myself.

2019-09-17 06:16:25: ERROR: Error deleting directory "/backups/servername/190906-1518/servername"
2019-09-17 06:16:25: ERROR: Error deleting directory "/backups/servername/190906-1518"
2019-09-17 06:16:25: ERROR: Directory still exists. Deleting backup failed. Path: "/backups/servername/190906-1518"
2019-09-17 06:16:25: WARNING: Error deleting file backup

Thanks a lot for your help so far! It has already given me some insight into what is going on.
In yesterday's server log I see the following, more specific reason why writing the metadata fails for this file:

2019-09-17 15:56:08: WARNING: HT: Error creating hardlink from "/backups/path/190917-0842/PATH/path.path/path/Path/Path/Path/Path/Pretty_long_Filename.pdf" to "/backups/path-path/190917-1531/PATH/path.path/path/Path/Path/Path/Path/Pretty_long_Filename.pdf"
2019-09-17 15:56:08: WARNING: File "/backups/path-path/190917-1531/.hashes//PATH/path.path/path/Path/Path/Path/Path/Pretty_long_Filename.pdf" has wrong size. Should=8 is=230. Error writing metadata to file. -1

2019-09-17 15:56:08: ERROR: Writing metadata to /backups/path-path/190917-1531/.hashes//PATH/path.path/path/Path/Path/Path/Path/Pretty_long_Filename.pdf failed

Any ideas how to fix the wrong size?

What is the best way to get a full backup "from scratch"? Can we somehow ignore the problems of the past? :slight_smile:

The problem was caused by two versions of the file existing at once. It went away after one of the files was removed.

Mixing case-sensitive filesystems with Windows storage is tricky; especially in our case, where the UrBackup client is fooled into thinking it is working in an ext4 environment but is actually writing out to an NTFS target.

Thanks again for your help, the problem is solved.

Thanks for confirming it. I already added a check for it to the next server version, so it should then warn you that there might be errors…

It also tests whether it can create a file named CON, and whether it can access files via DOS 8.3 names, e.g. testfo~1. The CON problem is worked around in the UrBackup Windows server, and it shows the commands for disabling DOS name generation.

(So don't let your users create a file named CON, or both a directory named testfolder and one named testfo~1 :wink: )
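The names that check guards against are the reserved Windows device names documented in Microsoft's file naming rules. A small helper to flag them (my own sketch, not part of UrBackup):

```python
# Reserved Windows device names, per Microsoft's file naming rules.
# A file named e.g. CON (or even CON.txt) normally cannot be created
# on an NTFS volume, which is why the server tests for it.
RESERVED = (
    {"CON", "PRN", "AUX", "NUL"}
    | {f"COM{i}" for i in range(1, 10)}
    | {f"LPT{i}" for i in range(1, 10)}
)

def is_reserved_name(filename):
    """True if `filename` collides with a reserved DOS device name.

    Only the stem before the first dot matters: "CON.txt" is just
    as problematic as "CON".
    """
    return filename.split(".")[0].upper() in RESERVED
```

On the Windows side, 8.3 short-name generation can typically be turned off with the `fsutil 8dot3name` command; check the exact syntax for your Windows version.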