Seeding initial file backup with data/files already present?

Hoping someone can help me out here with my specific issue. I created a full file backup of a massive media server I have (64TBs of usable space and 36TBs of total consumed space). The backup got all the way to about 33TBs (95ish percent complete) and then the backup server froze and needed a hard reset (an OS issue; the backup had been running for nearly 12 days at that point). When I tried to start the full file backup again, urbackup began indexing and copying all the files over again from scratch, which I obviously want to avoid.


  1. Since urbackup nearly finished the original full file backup, I now have a folder on the backup server containing roughly 33TBs of the 36TBs worth of files that need to be backed up.

  2. Can I have urbackup somehow check/read/index this folder so that it realizes these files are already present and doesn’t back them up on a subsequent new full file backup? Essentially, the first full file backup did not complete but got so close that I really just need urbackup to check what is missing and then back up only those files (essentially as if it were an incremental backup).
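To make the ask concrete: what I want amounts to a hash comparison between the source tree and the partial backup folder. A minimal sketch of that idea (not urbackup's internal mechanism, just a manual way to enumerate the gap; the paths would be your Server A volume and the existing backup folder on Server B):

```python
import hashlib
import os

def file_sha256(path, chunk_size=1 << 20):
    """Hash a file in 1 MB chunks so large media files don't exhaust memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def missing_from_backup(source_root, backup_root):
    """List relative paths that exist under source_root but are absent
    from backup_root, or whose content hash differs (partial copy)."""
    missing = []
    for dirpath, _dirnames, filenames in os.walk(source_root):
        for name in filenames:
            src = os.path.join(dirpath, name)
            rel = os.path.relpath(src, source_root)
            dst = os.path.join(backup_root, rel)
            if not os.path.exists(dst) or file_sha256(src) != file_sha256(dst):
                missing.append(rel)
    return sorted(missing)
```

This is exactly the "treat it like an incremental" behavior I'm hoping urbackup can do natively against the existing folder, rather than me scripting around it.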

Here’s a view of the overall infrastructure:

Server A - Media Server:

  • 36TBs of consumed space that needs to be backed up on a RAID volume.

  • This server is running the urbackup client and is set to back up the above-listed RAID volume.

Server B - Backup Server:

  • This server houses a separate RAID volume used as a backup repository, with the same usable space as Server A (64TBs).

  • This server runs the urbackup server software.

  • A previous urbackup full file backup, which was set to back up the data on the RAID volume of Server A, failed to finish at roughly 33TBs out of 36TBs (or 95ish percent).

  • I now have a folder with about 95% of the overall files I need backed up sitting on the backup RAID volume of this server, which urbackup is using as its backup repository.

  • I need someone to walk me through the steps to run another full file backup in urbackup that recognizes/checks this existing folder, so that it does not try to copy over another 33TBs of data it already has. I’ve tried this on my own already, but it begins to hash, index, and copy everything over again when I do.

Thank you to anyone who can assist!!

Any suggestions??? I’d hate to be stuck using robocopy’s /MIR option or something similar instead of a true backup solution like urbackup offers. Thanks!

Theoretically, the files would be indexed again, but the server should detect that they already exist, and they won’t be transferred again.

Hey, thanks so much for the response. The indexing does occur, but unfortunately the files are being transferred over again as though they don’t exist. I can see the network traffic on both Server A & B, along with the total written data on the backup server (Server B), increasing. I let it run for about an hour just to confirm, and it transferred around 100GBs. When I checked the “new” backup, I confirmed by looking at the files that they are all duplicates.
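For context on why I can't just let it re-copy everything: a rough back-of-the-envelope estimate from the observed rate above (~100GBs per hour) shows the difference between a full re-copy and a delta-only copy. The rate is just what I measured in that one hour, so treat these numbers as ballpark:

```python
# Observed rate from the test run described above (assumption: it holds steady).
observed_rate_gb_per_hour = 100
already_backed_up_tb = 33
total_tb = 36

# Re-copying everything from scratch at that rate (1 TB = 1000 GB here):
full_hours = total_tb * 1000 / observed_rate_gb_per_hour
print(f"Full re-copy: ~{full_hours:.0f} hours (~{full_hours / 24:.0f} days)")

# Copying only the missing ~3 TB:
delta_hours = (total_tb - already_backed_up_tb) * 1000 / observed_rate_gb_per_hour
print(f"Delta only:   ~{delta_hours:.0f} hours (~{delta_hours / 24:.1f} days)")
```

So a full re-copy is roughly two more weeks of transfer, versus about a day if only the missing files moved.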

Any ideas?