There is no non-failed backup on the new NAS system for this server. It's the first backup and it failed; all other servers are OK, and the clients are OK via imaging. I see it runs until it's nearly finished and then fails. On the storage it uses 1.9 TB every time it tries, and it does not delete the failed backup.
If it is resuming a backup, it has a non-fatally failed backup (e.g. because the client went offline). Perhaps the issue occurred there, and now all backups trying to resume this backup fail.
The next backup went through without errors…??
But it takes a long time. Will the block tracking client help bring down the backup time? We will move from Windows servers to Linux servers in the near future, but the problem of 1 million PDF, XML, and JPG files will be the same.
Info  20.03.25 20:15  SERVER231: Deleting files in snapshot… (7385)
Info  20.03.25 20:15  SERVER231: Deleting files in hash snapshot… (7385)
Info  20.03.25 20:16  SERVER231: Indexing file entries from last backup…
Info  20.03.25 20:21  SERVER231: Calculating tree difference size…
Info  20.03.25 20:21  SERVER231: Linking unchanged and loading new files…
Info  22.03.25 08:01  Waiting for file transfers…
Info  22.03.25 08:13  Waiting for file hashing and copying threads…
Info  22.03.25 08:14  Writing new file list…
Info  22.03.25 08:14  All metadata was present
Info  22.03.25 08:14  Number of copied file entries from last backup is 1014556
Info  22.03.25 08:15  Transferred 1.46279 TB - Average speed: 99.3652 MBit/s
Info  22.03.25 08:15  Time taken for backing up client SERVER231: 1 days 12h 23m 8s
Thanks for the update. Yeah, as expected it would be a problem with an interrupted backup. Unfortunately not much to look into without a debug log of the interrupted backup that is causing the problem.
I think your bottleneck is network speed.
CBT for files mainly helps if you have large files which only change a bit in incremental backups.
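For what it's worth, the logged average speed can be roughly reproduced from the transferred volume and the elapsed time. A back-of-envelope sketch (assuming the log's "1.46279 TB" actually means TiB, i.e. 2**40 bytes; the small remaining gap would come from the transfer window being slightly shorter than the total backup time):

```python
# Back-of-envelope check of the logged average transfer speed.
# Assumption: the log's "1.46279 TB" is binary (TiB = 2**40 bytes).
seconds = 1 * 86400 + 12 * 3600 + 23 * 60 + 8   # "1 days 12h 23m 8s"
transferred_bytes = 1.46279 * 2**40

mbit_per_s = transferred_bytes * 8 / seconds / 1e6
print(f"{mbit_per_s:.1f} MBit/s")  # in the same ballpark as the logged ~99 MBit/s
```

That figure is well below what a gigabit link can carry, which fits the small-file pattern: per-file latency, not raw bandwidth, dominates.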
Not sure about that; system read from the Ceph data storage is at the moment 198 MB/s with 3219 op/s:
health HEALTH_OK
osdmap e137219: 30 osds: 30 up, 30 in
client io 198 MB/s rd, 108 kB/s wr, 3219 op/s
And the ZFS scrub reads from the iSCSI backup disk at about 800 MB/s.
As I see it, the problem is the small files, which take time on Windows to open, read, and close. So if the client knew which files are changed/new, it could be faster…?
Unfortunately my files are all small and mostly new; the files don't get changed after they are written. Only the SQL database is around 50 GB, the rest are in the KB range.
Indexing of “F” done. 86 filesystem lookups 690649 db lookups and 62 db updates
Does that mean it looked into the filesystem 86 times, into the database 690649 times, and did 62 record updates?
On the other system it's similar:
Indexing of “F” done. 576 filesystem lookups 823621 db lookups and 265 db updates
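The way I read those counters (a guess, not a statement about UrBackup's internals): every known file entry costs a database lookup, but a filesystem lookup only happens for entries the change journal has flagged, which is why filesystem lookups are so much rarer than database lookups. A toy model, with the dicts standing in for the real file-entry database and for `os.stat()`:

```python
def index_with_journal(all_paths, changed_paths, fs_stat, db):
    """Toy model of change-journal-driven indexing: every entry costs a
    db lookup, but only paths the journal flags cost a filesystem
    lookup. Counter names mirror the log but are illustrative only."""
    fs_lookups = db_lookups = db_updates = 0
    for path in all_paths:
        db_lookups += 1                   # consult stored metadata
        if path in changed_paths:
            fs_lookups += 1               # stat only journal-flagged files
            meta = fs_stat[path]          # stand-in for os.stat()
            if db.get(path) != meta:
                db[path] = meta           # changed or new -> update record
                db_updates += 1
    return fs_lookups, db_lookups, db_updates

# Example: 5 files, the journal flags 2, only one of which really changed.
db = {f"f{i}": ("mtime0",) for i in range(5)}
fs = dict(db, f1=("mtime1",))
print(index_with_journal(list(db), {"f1", "f2"}, fs, dict(db)))
# -> (2, 5, 1): 2 filesystem lookups, 5 db lookups, 1 db update
```

That ratio (few filesystem lookups, many db lookups, few updates) matches both log lines above.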
Is there no archive bit (like in the good old DOS days) from which the client can recognise which files need to be backed up?
How does CBT work? Does it go over all blocks or only the changed ones? Many blocks in my data storage never change; only new files change blocks from empty to data.
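My understanding of block tracking in general (not a claim about UrBackup's exact implementation): a driver records which blocks were written between backups in a bitmap, and the next incremental reads only the flagged blocks, so writing a new file flips its blocks from "untouched" to "dirty" while everything else is skipped. A minimal sketch:

```python
class ChangedBlockTracker:
    """Toy changed-block tracking: a write marks the blocks it touches,
    an incremental backup reads only the marked blocks and then clears
    the bitmap. Illustrative only, not UrBackup's implementation."""

    def __init__(self, num_blocks, block_size=4096):
        self.block_size = block_size
        self.dirty = [False] * num_blocks

    def on_write(self, offset, length):
        # Mark every block the write touches as dirty.
        first = offset // self.block_size
        last = (offset + length - 1) // self.block_size
        for b in range(first, last + 1):
            self.dirty[b] = True

    def blocks_to_back_up(self):
        # The incremental backup reads only these, then resets the bitmap.
        changed = [i for i, d in enumerate(self.dirty) if d]
        self.dirty = [False] * len(self.dirty)
        return changed

tracker = ChangedBlockTracker(num_blocks=1000)
tracker.on_write(offset=0, length=8192)        # new file occupies blocks 0-1
tracker.on_write(offset=500_000, length=100)   # small update within block 122
print(tracker.blocks_to_back_up())             # -> [0, 1, 122]
```

So for a workload that mostly appends new files, only the newly written blocks (plus metadata blocks) would be flagged; unchanged blocks are never read.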