Idea for accelerating incremental file backups

For incremental file backups, most files are unchanged since the last backup. At the start of a backup, the client waits while the server creates the destination directory with all the soft/hard links; in my case this takes almost 10 minutes.

What if the server created this scaffold directory in advance? There are some logistical issues to address, like cleanup of unused scaffolds, but once those are ironed out, it would accelerate file backups significantly. It should be introduced as an option that is off by default.
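To make the idea concrete, here is a minimal sketch of what pre-creating the scaffold could look like in hardlink-only mode: walk the previous backup and hardlink every file into the next backup's directory ahead of time. The `build_scaffold` name and the whole approach are illustrative, not the server's actual code.

```python
import os

def build_scaffold(prev_backup: str, scaffold: str) -> None:
    """Pre-create the next backup's directory as a hardlink farm of the
    previous backup, so the client doesn't have to wait for this step."""
    for dirpath, _dirnames, filenames in os.walk(prev_backup):
        rel = os.path.relpath(dirpath, prev_backup)
        dest_dir = os.path.join(scaffold, rel)
        os.makedirs(dest_dir, exist_ok=True)
        for name in filenames:
            # Hardlink every file now; files that turn out to have changed
            # would later be replaced in the scaffold, breaking their link.
            os.link(os.path.join(dirpath, name),
                    os.path.join(dest_dir, name))
```

This is essentially what `cp -al old new` does on Linux, done during idle time between backups.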

What do you think?

That would be pretty difficult to implement with symlinks. The server kind of needs to know what changed before it can create the correct symlinks (since it symlinks entire unchanged directory trees). When using only hardlinks (no directory symlinks) it would help, I guess, provided there is idle I/O time between backups.
But I don’t know how many people are running it in hardlink-only mode…

Best performance is with btrfs, of course :wink:

Sure, it’s just an idea to keep in mind. A nice-to-have feature.

I tried btrfs (RAID) for a few months, but it wasn’t stable enough, so I ditched it. Although I do miss the incremental-forever image backups :wink: