Best advice for backing up large (500GB) containers (e.g. VeraCrypt) over the internet

I’ve been extensively researching open-source backup solutions with incremental/diff remote backups, and UrBackup is definitely the best (apart from the lack of at-rest encryption).

Apart from my system partition and regular files, I also have a few VMs and a large VeraCrypt container.

What are the drawbacks in UrBackup of backing up one huge “file” (the container) as opposed to backing up its contents? Is it roughly as fast as the sector-by-sector whole-system-partition backup method?

I’ve tried other backup software in the past (like Acronis), and its incremental backups of a somewhat active 500GB VeraCrypt container were very efficient (space-wise, and indexing time was fine too).

What if I try to “solve” the lack of at-rest encryption by doing something really dumb: make a 2TB VeraCrypt volume to which I copy all my files, then tell UrBackup to back up only this one 2TB “file”? How’d that work out for me? :slight_smile:

Cheers!

I might be missing something really obvious, but what are you trying to achieve here? Are you trying to encrypt the backups before they are taken?

If so, any level of incremental backup will go out the window, since there isn’t a good way to track what is where. IMHO, you would be better off storing the backups on an encrypted volume, but without knowing what you are trying to do it’s hard to suggest anything :slight_smile:

I would not encrypt the backups; I would just be trying to back up a VeraCrypt volume instead of its contents file by file.

If you can back up an entire system partition with UrBackup, and do it incrementally, then I figured that if you give it a large file with parts of it changed (i.e. a VeraCrypt volume), it would also be able to find diffs in that file.

Please note - I am NOT part of the development team :slight_smile:

This, IMHO, would be quite hard to track. I suspect that paid-for backup solutions do this to work around not having block-level dedup or tracking.

To implement this, the client would need to know each block and what its hash was, then scan the file and only send the changed blocks. I understand how this would make your life easier, but the whole system would need to be changed for a gain in a narrow case. At the moment I suspect that the client simply tracks hashes, modification date/time and the like, sees if they are different, and marks the file as changed. The whole file is sent across, resulting in the bigger backups that you see.
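To make that concrete, here is a rough Python sketch of the per-block idea: keep a manifest of block hashes from the last run, rescan the file, and report which blocks changed. The block size and file names are placeholders I made up, and this is only the general technique, not how UrBackup itself is implemented:

```python
import hashlib
import json
import os

BLOCK_SIZE = 512 * 1024  # placeholder block size (512 KiB)

def block_hashes(path, block_size=BLOCK_SIZE):
    """Return one SHA-256 digest per fixed-size block of the file."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            block = f.read(block_size)
            if not block:
                break
            hashes.append(hashlib.sha256(block).hexdigest())
    return hashes

def changed_blocks(path, manifest_path):
    """Compare current block hashes against the stored manifest and
    return the indices of blocks that would need to be re-sent."""
    current = block_hashes(path)
    previous = []
    if os.path.exists(manifest_path):
        with open(manifest_path) as f:
            previous = json.load(f)
    changed = [i for i, h in enumerate(current)
               if i >= len(previous) or previous[i] != h]
    with open(manifest_path, "w") as f:
        json.dump(current, f)  # save the new manifest for the next run
    return changed

if __name__ == "__main__":
    # Hypothetical paths, for illustration only.
    print(changed_blocks("container.hc", "container.hc.manifest.json"))
```

Even this naive version still has to read and hash the whole container each run; the win is that only the changed block data would actually go over the wire.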

I wonder if snapshots will make things easier for you…

BTW - are you running a Linux host?

Should work fine if you back up to btrfs or ZFS (with snapshots).

If you don’t have the CBT driver it will read the whole file during each incremental backup and transfer the differences.

Sparse handling is implemented as well. It even works around VeraCrypt not changing the last modified time etc. during operation.
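In principle, that CBT-less scan looks like this simplified sketch (not the actual code; the block size here is just a placeholder): read the whole file once, treat all-zero blocks as sparse, and hash the rest so only changed data blocks need transferring.

```python
import hashlib

BLOCK_SIZE = 512 * 1024  # placeholder block size (512 KiB)

def scan_blocks(path, block_size=BLOCK_SIZE):
    """Read the whole file once, classifying each block as sparse
    (all zeros) or data, yielding (index, kind, digest) tuples."""
    with open(path, "rb") as f:
        index = 0
        while True:
            block = f.read(block_size)
            if not block:
                break
            if block.count(0) == len(block):
                # All-zero block: record it as sparse instead of storing/sending it.
                yield index, "sparse", None
            else:
                yield index, "data", hashlib.sha256(block).hexdigest()
            index += 1

if __name__ == "__main__":
    # Hypothetical container path, for illustration only.
    sparse = data = 0
    for _, kind, _ in scan_blocks("container.hc"):
        if kind == "sparse":
            sparse += 1
        else:
            data += 1
    print(f"{data} data blocks, {sparse} sparse blocks")
```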

I’m still trying to figure out what kind of NAS to use. I was planning on getting an ODROID-XU4 with OpenMediaVault as the OS and installing UrBackup there. I would then be able to format the NAS’s HDDs however it’s supported. Didn’t ZFS need a lot of RAM per TB of storage or something? Or was that just for maintaining redundancy drives?

My client is Windows 10. I will also have some Linux clients (like SteamOS / Ubuntu), but that’ll come later.

What server are you using? Including the one with the VMs on it?

Sorry, I wasn’t clear. My server is this NAS that I haven’t built yet :wink: which will, if it can handle it, be a Raspberry-Pi-class ODROID-XU4 with 2GB of RAM running Debian Stretch (OpenMediaVault).

The VMs are just small Linux stuff running on my Windows PC (the client) that I’m trying to back up.

Right - then yes, as uroni said, get CBT on the Windows box. On the server side of things (the NAS), make sure the file system is btrfs (or ZFS, but personally I wouldn’t).

Make sure that both the temp and the final storage (if you choose to use temp storage) are btrfs; I found some odd things happen if they are not :slight_smile:
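If you want to sanity-check that before pointing the server at the storage, here is a small Linux-only Python sketch (with made-up directory paths) that looks up which filesystem a given directory lives on via /proc/mounts:

```python
import os

def filesystem_type(path):
    """Return the filesystem type of the mount containing `path`,
    by matching the longest mount-point prefix in /proc/mounts (Linux only)."""
    path = os.path.realpath(path)
    best_mount, best_fstype = "", "unknown"
    with open("/proc/mounts") as f:
        for line in f:
            _, mount_point, fstype = line.split()[:3]
            if path == mount_point or path.startswith(mount_point.rstrip("/") + "/"):
                if len(mount_point) > len(best_mount):
                    best_mount, best_fstype = mount_point, fstype
    return best_fstype

if __name__ == "__main__":
    # Made-up example locations; point these at your temp and final backup storage.
    for directory in ("/media/backup_tmp", "/media/backups"):
        print(directory, "->", filesystem_type(directory))
```

If either one prints something other than btrfs, move it before you let backups pile up.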

OK, I’ll start looking into the Changed Block Tracking (CBT) driver and into making snapshots with btrfs :+1:

You don’t need to do the snapshots - the awesome product does it already :slight_smile:


So the persistent write-back cache is a cache on the local client. How reliable is that as a local backup copy? I mean, if I want my files backed up both remotely and on an HDD inside my client machine, should I just rely on the persistent write-back cache (if I have enough space) and not worry about needing to recover while I have no (internet) access to my server? Or should I set up a second backup server that runs on the client machine itself, in addition to a remote server on a remote machine?

How do I set up this local write-back cache? The UrBackup interface is very spartan and doesn’t let me intuit how the application works. I don’t see any setting where I can paste in the path to my local backup HDD so it can be used as the “cache”.

What you linked is an appliance (operating system + UrBackup Server). I doubt that you’d want that.

If you want to have local backups you do indeed need a … local backup server. Some people back up to both a local and an internet server; some replicate the local backups to an offsite server.

I see. I had read about the tricks to set up an internet server + a local server. I’ll do that. But it sucks because it’ll basically have to run the backup / indexing once for each server…

Replicating the local backups to an offsite server isn’t feasible if you have terabytes of data, images, containers, etc. I would have just used Syncthing if it were doable.

The only option is to do incremental backups over the internet. I was really hoping that UrBackup would do the “sane” thing and take care of both a local “cache” backup and an offsite internet backup incrementally, without needing two servers… And nobody else is doing this either; it’s so annoying that there’s this gap in the market.

I mean, who would want to either back up only to the internet and then wait eons to download a large image, or do the same backup job twice every time? :slight_smile: And backing up only locally is not sane.

E.g. Windows has that built in (and macOS as well now). Just make sure you have enough free space and make VSS snapshots regularly. Then you have “previous versions” of a file, which protects against accidental deletion. If you are concerned about a hard disk breaking, put it into a (software) RAID.
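If you want to script the “regular VSS snapshots” part, one way is a minimal Python wrapper around the built-in `wmic shadowcopy call create` command, run from an elevated prompt and triggered however you like (e.g. Task Scheduler); treat it as a sketch rather than a polished tool:

```python
import subprocess

def create_vss_snapshot(volume="C:\\"):
    """Create a Volume Shadow Copy of the given volume via
    `wmic shadowcopy call create` (needs an elevated prompt)."""
    result = subprocess.run(
        ["wmic", "shadowcopy", "call", "create", f"Volume={volume}"],
        capture_output=True, text=True,
    )
    print(result.stdout or result.stderr)
    return result.returncode == 0

if __name__ == "__main__":
    # Schedule this to run regularly so "Previous Versions" stays populated.
    create_vss_snapshot("C:\\")
```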

Thanks, but it should be noted that accidental-deletion “prevention” and same-drive shadow copies of files are not a real backup solution (e.g. for image versioning, ransomware, external NAS storage setups, etc.).

  • If I have a laptop with a 1TB drive, 500GB of which are in a container,
  • and I want to back it up to an external drive,
  • and need it to stay encrypted (for example so my parents don’t see my memes backup while saving family photos),
  • while also backing it up remotely and incrementally (not bandwidth-feasible otherwise),

then as of today I have no choice on the market but to do backup jobs twice, for no raisin as far as the user is concerned. :wink:

A software RAID would not help with the HDD/laptop catching fire. Nobody can afford hardware RAID, or any RAID at all, on a laptop. :slight_smile:

Notes on shadow copies not being a backup:

Can volume shadow copy replace backups?

Windows 10 shadow copies cannot replace backups, for a few reasons:

  1. The shadow copies are stored on the original volume, so if the volume crashes, no shadow copies are available.
  2. Shadow copies may not be created regularly with the default settings.
  3. Your valuable shadow copies may be deleted without warning when disk space is low.
  4. Shadow copies may not correctly record all file changes.

(source)