Error writing metadata to file. -1

How can I repair this? The original source PDF is 1,433,689 bytes.

2025-03-19 01:50:24: WARNING: File “/zstore/urbackup/SERVER231/250318-1615/.hashes/F/Lims/LimsDaten/KN3086/BN54447/PN123183/PBs/PB215571_V0.pdf” has wrong size. Should=8 is=384. Error writing metadata to file. -1

Run a full file backup

I started it for the third time; it continues but never finishes. The problem is there are a lot of files and it takes a long time.

Info 20.03.25 02:54 Number of copied file entries from last backup is 1018915
Error 20.03.25 02:54 Fatal error during backup. Backup not completed
Info 20.03.25 02:54 Transferred 1.44819 TB - Average speed: 102.926 MBit/s
Error 20.03.25 02:54 FATAL: Backup failed because of disk problems (see previous messages)
Info 20.03.25 02:54 Time taken for backing up client SERVER231: 1 days 10h 39m 37s
Error 20.03.25 02:54 Backup failed

There should be an error causing this in the log. Is it the “Error writing metadata to file” error?

Sorry, I did not post the full log. Here is all of it:

Info 18.03.25 16:15 Starting unscheduled incremental file backup…
Info 18.03.25 16:30 Component caption=Tasks Store name=
Info 18.03.25 16:30 Component caption= name=C:_Windows_system32_wins
Info 18.03.25 16:30 Component caption= name=
Info 18.03.25 16:30 Component caption=Windows Managment Instrumentation name=
Info 18.03.25 16:30 Component caption= name=C:_Windows_NTDS
Info 18.03.25 16:30 Scanning for changed hard links on volume of "c:"…
Info 18.03.25 16:30 Indexing of “C” done. 1071 filesystem lookups 58191 db lookups and 178 db updates
Info 18.03.25 16:30 Scanning for changed hard links on volume of "D:"…
Info 18.03.25 16:30 Indexing of “D” done. 5 filesystem lookups 20 db lookups and 4 db updates
Info 18.03.25 16:30 Scanning for changed hard links on volume of "F:"…
Info 18.03.25 16:30 Indexing of “F” done. 896 filesystem lookups 822258 db lookups and 417 db updates
Info 18.03.25 16:30 SERVER231: Loading file list…
Info 18.03.25 16:30 SERVER231: Calculating file tree differences…
Info 18.03.25 16:30 SERVER231: Creating snapshot…
Info 18.03.25 16:30 SERVER231: Deleting files in snapshot… (1432)
Info 18.03.25 16:30 SERVER231: Deleting files in hash snapshot…(1432)
Info 18.03.25 16:30 SERVER231: Indexing file entries from last backup…
Info 18.03.25 16:40 SERVER231: Calculating tree difference size…
Info 18.03.25 16:40 SERVER231: Linking unchanged and loading new files…
Error 18.03.25 17:19 Writing metadata to /zstore/urbackup/SERVER231/250318-1615/.hashes/D/SQL_Server/MSSQL12.MSSQLSERVER/MSSQL/DATA/ProbenbuchDev_20221214_1.ldf failed
Error 18.03.25 17:49 Writing metadata to /zstore/urbackup/SERVER231/250318-1615/.hashes/D/SQL_Server/MSSQL12.MSSQLSERVER/MSSQL/DATA/tempdb.mdf failed
Error 18.03.25 18:20 Error writing file metadata -1
Error 18.03.25 23:08 Error writing file metadata -1
Error 19.03.25 01:50 Error writing file metadata -1
Error 19.03.25 01:50 Error writing file metadata -1
Error 19.03.25 09:59 Error writing file metadata -1
Error 19.03.25 10:22 Error writing file metadata -1
Error 19.03.25 10:44 Error writing file metadata -1
Error 19.03.25 10:49 Error writing file metadata -1
Error 19.03.25 10:55 Error writing file metadata -1
Error 19.03.25 10:55 Error writing file metadata -1
Error 19.03.25 14:34 Error writing file metadata -1
Error 19.03.25 15:28 Error writing file metadata -1
Error 19.03.25 16:28 Error writing file metadata -1
Error 19.03.25 16:38 Error writing file metadata -1
Error 19.03.25 18:22 Error writing file metadata -1
Error 19.03.25 19:17 Error writing file metadata -1
Info 20.03.25 02:45 Waiting for file transfers…
Info 20.03.25 02:53 Waiting for file hashing and copying threads…
Info 20.03.25 02:54 Writing new file list…
Info 20.03.25 02:54 All metadata was present
Info 20.03.25 02:54 Number of copied file entries from last backup is 1018915
Error 20.03.25 02:54 Fatal error during backup. Backup not completed
Info 20.03.25 02:54 Transferred 1.44819 TB - Average speed: 102.926 MBit/s
Error 20.03.25 02:54 FATAL: Backup failed because of disk problems (see previous messages)
Info 20.03.25 02:54 Time taken for backing up client SERVER231: 1 days 10h 39m 37s
Error 20.03.25 02:54 Backup failed

And this is the relevant part of the log file:

WARNING: File “/zstore/urbackup/SERVER231/250318-1615/.hashes/D/SQL_Server/MSSQL12.MSSQLSERVER/MSSQL/DATA/ProbenbuchDev_20221214_1.ldf” has wrong size. Should=8 is=290. Error writing metadata to file. -1
2025-03-18 17:19:47: ERROR: Writing metadata to /zstore/urbackup/SERVER231/250318-1615/.hashes/D/SQL_Server/MSSQL12.MSSQLSERVER/MSSQL/DATA/ProbenbuchDev_20221214_1.ldf failed
2025-03-18 17:49:09: WARNING: File “/zstore/urbackup/SERVER231/250318-1615/.hashes/D/SQL_Server/MSSQL12.MSSQLSERVER/MSSQL/DATA/tempdb.mdf” has wrong size. Should=8 is=290. Error writing metadata to file. -1
2025-03-18 17:49:09: ERROR: Writing metadata to /zstore/urbackup/SERVER231/250318-1615/.hashes/D/SQL_Server/MSSQL12.MSSQLSERVER/MSSQL/DATA/tempdb.mdf failed

Yeah, that is an incremental backup. What keeps the full backup from finishing?

ERROR: Writing metadata

It says:

|SERVER231|Resumed full file backup|

Because it's the first backup.

Ah, ok. Could you take a look at the log of the last non-failed (resumed) full backup that it is resuming?

There is no non-failed backup on the new NAS system for this server. It's the first backup and it failed; all other servers are OK, and the clients are OK via imaging as well. I see it runs until it is nearly finished and then it fails. On the storage it uses 1.9 TB every time it tries, and it does not delete the failed backup.

That's another server (230) that does backups without problems.

Info 19.03.25 21:02 Starting scheduled incremental file backup…
Info 19.03.25 21:10 Component caption=Tasks Store name=
Info 19.03.25 21:10 Component caption= name=C:_Windows_NTDS
Info 19.03.25 21:10 Component caption=Windows Managment Instrumentation name=
Info 19.03.25 21:10 Component caption= name=
Info 19.03.25 21:10 Component caption= name=C:_Windows_system32_wins
Info 19.03.25 21:10 Component caption= name=
Info 19.03.25 21:10 Scanning for changed hard links on volume of "c:"…
Info 19.03.25 21:10 Indexing of “C” done. 357 filesystem lookups 10212 db lookups and 95 db updates
Info 19.03.25 21:10 Backing up “D” without snapshot.
Info 19.03.25 21:10 Indexing of “D” done. 0 filesystem lookups 0 db lookups and 0 db updates
Info 19.03.25 21:10 Scanning for changed hard links on volume of "F:"…
Info 19.03.25 21:10 Indexing of “F” done. 69 filesystem lookups 690651 db lookups and 48 db updates
Info 19.03.25 21:10 SERVER230: Loading file list…
Info 19.03.25 21:10 SERVER230: Calculating file tree differences…
Info 19.03.25 21:10 SERVER230: Creating snapshot…
Info 19.03.25 21:10 SERVER230: Deleting files in snapshot… (495)
Info 19.03.25 21:11 SERVER230: Deleting files in hash snapshot…(495)
Info 19.03.25 21:11 SERVER230: Calculating tree difference size…
Info 19.03.25 21:11 SERVER230: Linking unchanged and loading new files…
Info 19.03.25 21:15 Waiting for file transfers…
Info 19.03.25 21:15 Waiting for file hashing and copying threads…
Info 19.03.25 21:16 Writing new file list…
Info 19.03.25 21:17 All metadata was present
Info 19.03.25 21:17 Transferred 2.16732 GB - Average speed: 60.2914 MBit/s
Info 19.03.25 21:17 Time taken for backing up client SERVER230: 15m
Info 19.03.25 21:17 Backup succeeded

If it is resuming a backup, it has a non-fatally failed backup (e.g. because the client went offline). Perhaps the issue occurred there, and now all backups trying to resume that backup fail.

I started a new attempt 2 hours ago with debug mode enabled in UrBackup. We will have a better log tomorrow.

It says it will take 18 hours… :frowning:

1.3 TB left…

Sounds good (except that it takes so long). I hope the debug logging is on the server?

Yes, it is filling the disk… :slight_smile:

Sorry for the late response, caught a cold. :frowning:

The next backup went through without errors…??
But it takes a long time. Will the change block tracking (CBT) client help bring down the backup time? We will move from Windows servers to Linux servers in the near future, but the problem of 1 million PDF, XML, and JPG files will stay the same.

Info 20.03.25 20:15 SERVER231: Deleting files in snapshot… (7385)
Info 20.03.25 20:15 SERVER231: Deleting files in hash snapshot…(7385)
Info 20.03.25 20:16 SERVER231: Indexing file entries from last backup…
Info 20.03.25 20:21 SERVER231: Calculating tree difference size…
Info 20.03.25 20:21 SERVER231: Linking unchanged and loading new files…
Info 22.03.25 08:01 Waiting for file transfers…
Info 22.03.25 08:13 Waiting for file hashing and copying threads…
Info 22.03.25 08:14 Writing new file list…
Info 22.03.25 08:14 All metadata was present
Info 22.03.25 08:14 Number of copied file entries from last backup is 1014556
Info 22.03.25 08:15 Transferred 1.46279 TB - Average speed: 99.3652 MBit/s
Info 22.03.25 08:15 Time taken for backing up client SERVER231: 1 days 12h 23m 8s
Info 22.03.25 08:15 Backup succeeded

I did a scrub on the ZFS pool; it found no problems.

scan: scrub repaired 0B in 1 days 18:43:22 with 0 errors on Sat Mar 22 14:18:00 2025


It takes a long time, even at 500 MB/s…

So I don't know where the problems came from; they seem to be gone now…

Do you want to see anything from the log file?

I am thinking about dedup, but I am not sure I want new problems with something like metadata…

Thanks for the update. Yeah, as expected, it was a problem with an interrupted backup. Unfortunately there is not much to look into without a debug log of the interrupted backup that caused the problem.

I think your bottleneck is network speed.
CBT for files mainly helps if you have large files which only change a bit in incremental backups.
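To make that concrete, here is a simplified sketch of the block-tracking idea (not UrBackup's actual CBT implementation; the function names and the 4 KiB block size are assumptions for illustration): split a file into fixed-size blocks and only transfer the blocks whose hashes changed.

```python
import hashlib

BLOCK_SIZE = 4096  # assumed block size for this sketch


def block_hashes(data: bytes) -> list[str]:
    """Hash each fixed-size block of a file's contents."""
    return [hashlib.sha256(data[i:i + BLOCK_SIZE]).hexdigest()
            for i in range(0, len(data), BLOCK_SIZE)]


def changed_blocks(old: bytes, new: bytes) -> list[int]:
    """Indices of blocks that differ between two file versions."""
    old_h, new_h = block_hashes(old), block_hashes(new)
    # Blocks past the old end count as changed (the file grew).
    return [i for i, h in enumerate(new_h)
            if i >= len(old_h) or h != old_h[i]]


# A 50 GB database where only a few pages change transfers only those
# blocks; a brand-new small file has every block "changed", so block
# tracking saves nothing for it.
old = b"a" * BLOCK_SIZE * 4
new = old[:BLOCK_SIZE] + b"b" * BLOCK_SIZE + old[2 * BLOCK_SIZE:]
print(changed_blocks(old, new))  # → [1]
```

This is why it helps a 50 GB SQL data file a lot but does nothing for a million new small PDFs: a new file consists entirely of changed blocks.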

I'm not sure about that; the system read from the Ceph data storage is at the moment 198 MB/s with 3219 op/s:

health HEALTH_OK
osdmap e137219: 30 osds: 30 up, 30 in
client io 198 MB/s rd, 108 kB/s wr, 3219 op/s

And the ZFS scrub reads from the iSCSI backup disk at about 800 MB/s.

As I see it, the problem is the small files, which take time on Windows to open, read, and close. So if the client knew which files are changed/new, it could be faster…?

Unfortunately my files are all small and mostly new; the files do not get changed after they are written. Only the SQL database is around 50 GB; the rest are in the KB range.
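For a rough sense of where the time goes, a back-of-the-envelope model (the 10 ms per-file overhead is an assumption for illustration only; the size and speed figures are taken from the log above):

```python
# Rough model: total backup time ≈ per-file overhead + bulk transfer.
# The 10 ms per-file cost (open/stat/read/close) is assumed, not measured.
n_files = 1_000_000
per_file_overhead_s = 0.010          # assumed per-file cost on Windows
total_bytes = 1.45e12                # ~1.45 TB transferred (from the log)
net_bytes_per_s = 100e6 / 8          # ~100 MBit/s average (from the log)

overhead_hours = n_files * per_file_overhead_s / 3600
transfer_hours = total_bytes / net_bytes_per_s / 3600
print(f"per-file overhead: {overhead_hours:.1f} h")   # ~2.8 h
print(f"raw transfer:      {transfer_hours:.1f} h")   # ~32.2 h
```

Under these assumed numbers the transfer term dominates the ~36 h total, though the real per-file cost for cold metadata on Windows can be far larger than 10 ms, which would shift the balance.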

Anyway, thanks for the help! Much appreciated!

Or let me ask it this way:

Indexing of “F” done. 86 filesystem lookups 690649 db lookups and 62 db updates

Does that mean it looked into the filesystem 86 times, into the database 690649 times, and did 62 record updates?

Or the same on the other system:

Indexing of “F” done. 576 filesystem lookups 823621 db lookups and 265 db updates

Is there no archive bit (like in the good old DOS days) so the client can recognize which files need to be backed up?
How does CBT work? Does it go over all blocks, or only the changed ones? Because most blocks in my data storage never change; only new files turn blocks from empty to data.
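For illustration, the archive-bit idea is roughly what a modification-time index gives you. Below is a hypothetical sketch (not UrBackup's code; `files_needing_backup` and the index layout are invented here): unchanged files are skipped after one cheap stat and index lookup, and only new or changed files would actually be read and transferred.

```python
import os


def files_needing_backup(root: str, index: dict) -> list[str]:
    """Walk `root` and return paths whose (mtime, size) differ from the
    stored index; unchanged files are skipped without reading their
    contents. `index` plays the role of the server-side database."""
    changed = []
    for dirpath, _dirs, names in os.walk(root):
        for name in names:
            path = os.path.join(dirpath, name)
            st = os.stat(path)
            key = (int(st.st_mtime), st.st_size)
            if index.get(path) != key:   # lookup miss → new or changed
                changed.append(path)
                index[path] = key        # record the new state
            # else: unchanged, skip
    return changed
```

How UrBackup maps its "filesystem lookups" / "db lookups" / "db updates" counters onto steps like these is a guess on my part; on Windows the client can also use the NTFS USN change journal to learn which files changed without walking and stat-ing everything, which is essentially the modern replacement for the archive bit.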