Just wanted to confirm my understanding: I'm backing up approx 3 TB worth of files currently, growing daily (stupid raw photo format…). My offsite server doesn't have a great connection and my current ETA is 7 days for the full file backup.
And to my understanding, urbackup doesn't use the classic 'incremental' scheme where each incremental file backup is a true delta, so that restoring requires a full backup plus every incremental taken after it.
Therefore:
1/ Am I correct that the next full backup will take another 7 days - since it will re-send all files?
2/ Is it a good idea to turn off full backups? Or should I just let it churn away on a full backup every (say) 30 days?
tldr: I don’t really see the benefit of doing full backups for files. Am I correct?
No, it will not send all files.
If you use a btrfs file system on the server, then each incremental will APPEAR on the server as a full backup. But btrfs uses snapshots and copy-on-write, so only the changed data actually takes up new space.
I believe that if you use another file system, the server will still use smart linking (hard links to unchanged files).
I know that the client and server negotiate what has changed for an incremental backup and only transfer that.
It is true that your understanding of an incremental backup made of deltas is NOT what urbackup does. IMHO what you describe is a somewhat outdated (circa 1980) method. File systems such as ZFS and btrfs (and others) can link data in such a way that each incremental contains ALL files, as though it were a full backup, so a restore doesn't require applying deltas in the correct order.
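To make the idea concrete, here is a rough sketch of the hard-link approach (this is not UrBackup's actual code, the paths are made up, and the size+mtime comparison is a simplification): each "incremental" run produces a directory that looks like a complete full backup, but unchanged files cost no extra space.

```python
import os
import shutil
from pathlib import Path

def synthetic_full(source: Path, previous: Path, target: Path) -> None:
    """Build a backup that looks like a full but only stores what changed.

    Unchanged files are hard-linked to the previous backup (no extra space);
    changed or new files are copied. Comparing size + mtime is a shortcut;
    a real tool would also compare content hashes.
    """
    for src in source.rglob("*"):
        rel = src.relative_to(source)
        dst = target / rel
        if src.is_dir():
            dst.mkdir(parents=True, exist_ok=True)
            continue
        prev = previous / rel
        dst.parent.mkdir(parents=True, exist_ok=True)
        if (prev.is_file()
                and prev.stat().st_size == src.stat().st_size
                and int(prev.stat().st_mtime) == int(src.stat().st_mtime)):
            os.link(prev, dst)       # unchanged: hard link into the new "full"
        else:
            shutil.copy2(src, dst)   # changed or new: store the new content
```

rsync's --link-dest option works on the same principle per file, and btrfs/ZFS snapshots achieve the same effect at the block level.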
Maybe this is not what you want? Maybe you always want to be able to just restore the most recent backup? But you can still obtain the same result with some simple date-based copying.
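For example, a minimal sketch (assuming each backup sits in its own dated directory; the names here are made up):

```python
import shutil
from pathlib import Path

def restore_latest(backup_root: Path, restore_to: Path) -> Path:
    """Copy the newest backup, assuming directory names that sort
    chronologically, e.g. 2024-05-01_03-00."""
    backups = sorted(p for p in backup_root.iterdir() if p.is_dir())
    if not backups:
        raise FileNotFoundError(f"no backups found under {backup_root}")
    latest = backups[-1]
    shutil.copytree(latest, restore_to, dirs_exist_ok=True)
    return latest
```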
I use incrementals every 2 days and a full every 60 days. But I am using btrfs on the server, so the storage cost/increase is minimal.
You can use a container version of urbackup on the server (that's what I do). If your server OS is 'doze (Windows) based, you can even use btrfs (WinBtrfs, open source, XP upwards).
You don't specify the OS for either server or client, how much new data you add on average, or your schedules, so I am unable to give more helpful information.
However, should you supply more detail, I will be happy to try and help.
Dave
Huh, I just noticed I basically asked the same question twice. But yes, I'm somewhat limited by my setup: I have a really old Synology server parked at my dad's that I barely managed to get urbackup set up on (had to hand-compile a package of a slightly older urbackup, etc.). But the main point of the backup, of course, is that hopefully only one or the other fails at a time - the backup dies or the source dies (and my source also has RAID 6).
The thing my server does not have is oodles of space beyond what I've agreed with the aforementioned dad, so I wonder if a full backup will just run up a huge storage bill with only a little bit of satisfaction for Mr. Justin.
Is there some way I can test whether a full backup will just make me occupy 2x the 'required' space of my single backup so far? Other than hitting 'backup', of course…
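The best I've come up with is to check how much the hard links are already saving by comparing the apparent size of the backup tree with the unique-inode size, something like this rough sketch (path made up; it also won't see btrfs snapshot/reflink sharing, for which something like btrfs filesystem du would be needed):

```python
from pathlib import Path

def apparent_vs_unique(backup_root: Path) -> tuple[int, int]:
    """Return (apparent_bytes, unique_bytes) for a backup tree.

    apparent_bytes sums every file as if it were stored separately;
    unique_bytes counts each inode once, so files hard-linked between
    backups are not double-counted.
    """
    seen: set[tuple[int, int]] = set()
    apparent = unique = 0
    for path in backup_root.rglob("*"):
        if path.is_file():
            st = path.stat()
            apparent += st.st_size
            key = (st.st_dev, st.st_ino)   # same inode = same physical data
            if key not in seen:
                seen.add(key)
                unique += st.st_size
    return apparent, unique

# Hypothetical mount point for the urbackup storage folder
a, u = apparent_vs_unique(Path("/mnt/backups"))
print(f"apparent {a / 1e9:.1f} GB vs actually occupied {u / 1e9:.1f} GB")
```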
It's amazing how many people on these forums think that backups only need to go on obsolete old tech because it's only there 'if the main system fails'. Then, when the main system fails, they post on here complaining that the backup system is corrupt or broken and they cannot recover their data… and it's the fault of urbackup.
I don't think that's how the math works here. It's all about risk. If you are diligent about checking that a backup can actually be restored (and I am, I check this manually), then being able to recover from one catastrophic failure is reasonable. If my main system dies, I have my dad's backup - yes, it's an old Syno box and it can fail too, but it's a standard Linux disk setup with modern drives, not to mention RAID 1. So for me to lose all my stuff, my main system and my dad's system would both have to die at the same time (and even then I have a few extra safeguards).
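Back-of-the-envelope, with completely made-up numbers and assuming the two boxes fail independently:

```python
# All numbers here are guesses purely for illustration.
p_main = 0.05        # chance the primary RAID 6 box dies in a given year
p_backup = 0.10      # chance the old Synology dies in a given year
repair_days = 14     # time to replace/rebuild whichever side died

# Data is only lost if the second box dies inside the repair window of the first.
p_lose_both = 2 * p_main * p_backup * (repair_days / 365)
print(f"Rough chance of losing both copies in a year: {p_lose_both:.3%}")  # ~0.038%
```

Even with fairly pessimistic guesses, the combined risk is tiny compared with either box failing on its own.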
You don't always have to have fancy gear to be in a decent state.