UrBackup + Offsite Cloud Service?

I recently began running a UrBackup server on a Windows 7 machine, backing up several local PCs’ data. We would also like to use a low-cost paid offsite backup service on this machine for the just-in-case/what-if scenario of total building destruction.

I’m currently trying CrashPlan, but it appears to be having some major trouble with UrBackup’s symbolic links. My UrBackup folder has ~250k files that are ~140GB (based on “Properties” in Windows Explorer); however, CrashPlan seems to think it’s backing up 3M files totaling 1.8TB. I’m really loving UrBackup compared to everything else I’ve tried, so I want to make it work for us…

Have any of you successfully integrated UrBackup with an offsite solution like CrashPlan, Backblaze, IDrive, SOS, etc.? We’d prefer a flat-rate unlimited service rather than pay-per-GB or pay-per-transaction.

A couple of additional points: we do not / will not run a Linux server, and we want a low-maintenance, set-it-and-forget-it solution for the offsite backup (i.e. we’re not going to run our own server at some other location, nor do we want to replicate drives and physically carry them offsite on a regular basis).

Thanks in advance for the suggestions!

I’ve been using CrashPlan, but I’m not sure it’s suitable. The indexing process is so slow it takes days, which means the offsite copy is never really up to date.
Although I do have over 14M files totalling 17+TB.

Has anyone else had a better experience?

Within the UrBackup directories, it looks like the “clients” directory contains only junctions (NTFS directory links, similar to symlinks for folders) to the most recent file backup sets. I’m experimenting with CrashPlan set to back up only this “clients” directory, hoping it will pick up all of the current files. My idea is that after the initial backup, as UrBackup updates the current data sets, CrashPlan should be able to see the changes and back them up. So far CrashPlan has scanned and found a file count and total size on par with the data on the 3 PCs I’m currently UrBacking-up.

My only fear is that deeper inside the “clients” directory structure there are symbolic links that CrashPlan won’t follow beyond those initial junctions. I’ll let it run for a couple of days, then restore a tree and compare it to the original PC.

If the above doesn’t work, I suppose a program such as Cobian Backup could maintain a mirror copy of the “clients” directory (with copies of all the real files; no junctions or links), and CrashPlan would then send the mirror offsite.

What backup target will you use with Cobian? It really needs to be offsite. You could use a host offering FTP access with Cobian.

The problem I have is the amount of data. After deduplication I’m using ~12TB and climbing.

If anyone else is running a similar setup I may be interested in backing up to each other’s servers…

UrBackup uses hard linking, which causes your tertiary backup to ‘see’ each file as a full file in every backup set. It can’t tell that two links refer to the same underlying file.

The “links” are really pointers to the same data on disk; they appear and function as full files in each backup set. So yes, when ‘indexing’ your hard drive you really do appear to have many times more data than your disk actually holds.

For your situation, I’d recommend a block-based backup of your disk to a third party, e.g. ZFS send/receive or LVM snapshots. If you want to be really wasteful, UrBackup your UrBackup server; it does file-based deduplication (by using hard links), although it will take even longer now to index everything.