While backing up in Internet mode with backups stored on Btrfs, the UrBackup server reads all files in the “.hashes” directory, which significantly slows down backups of large directory trees (in our case by a few hours, compared with a “local” backup over VPN within the internal network) and burdens the disk with a large number of I/O operations. Is this a bug or a feature? In my opinion, only the hash files that have changed since the last backup should be loaded.
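To illustrate the suggested behaviour, here is a minimal sketch (a hypothetical helper, not UrBackup code) of loading only the metadata files whose modification time is newer than the last backup, instead of reading the whole “.hashes” tree:

```python
import os


def changed_hash_files(hashes_dir, last_backup_ts):
    """Yield paths under hashes_dir whose mtime is newer than last_backup_ts.

    hashes_dir      -- directory containing per-file hash metadata (hypothetical layout)
    last_backup_ts  -- Unix timestamp of the previous backup run
    """
    for root, _dirs, files in os.walk(hashes_dir):
        for name in files:
            path = os.path.join(root, name)
            # Skip metadata files untouched since the last backup.
            if os.path.getmtime(path) > last_backup_ts:
                yield path
```

This is only a sketch of the idea; whether mtime is a reliable change signal for UrBackup's metadata files on Btrfs is an open question.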
At worst, we can use the local IPsec-encrypted backup mode, which does not require these disk operations, but then there is no compression and no deduplication …
It’s a feature. You can decrease the number of metadata files read during incremental file backups by increasing the “minimal number of incremental file backups” setting. Or you can comment out the code in IncrFileBackup::addSparseFileEntry.
Thanks, it seems that increasing the “minimal number of incremental file backups” value has helped.
Have a nice day.