Best Practice for Successful Initial File Backup of Large Dataset

Hello all:

I have a large dataset on a laptop, and the machine is struggling to complete its initial file backup.

Unfortunately I’m not in one place long enough for an entire backup pass to complete. What I was hoping would happen is that the file backup would resume and pick up where it left off, but that doesn’t appear to be the case, despite the client status showing ‘Resumed incremental file backup’.

Instead, the client regularly re-hashes all the files in the dataset and appears to re-transfer lots of files that have previously been transferred to the server. The server claims to currently have 738 GB of files stored for this client, so I’m rather mystified as to why things seem to regularly (but not always…) start from scratch.

Should this be working as I’d expect, or would I be better off excluding parts of my dataset until a backup completes successfully, then iteratively reducing the exclusion list (roughly as sketched below)?
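
To illustrate what I mean by iterative exclusion (assuming the usual ‘Excluded files (with wildcards)’ setting with semicolon-separated patterns; the paths here are purely hypothetical), I’d start with something like:

C:\Data\Archive\*;C:\Data\Media\*

so that only a smaller subset gets backed up first, then remove one pattern after each successful run until nothing is excluded.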

Running server 2.4.15 and client 2.4.12.

Thanks!