Import/Seed/Export a backup, and parent/child UrBackup servers

Hi there,

I’m new to UrBackup, but it came highly recommended by a colleague and so far I’m very impressed!

I’d like to echo the desire for importing/seeding/exporting backups from a remote location to a central one, or from a central location to a remote one. See the bottom of this message for others requesting a similar feature set.

Our situation is this: We have about 30 sites we’d like to back up to a centralized server and storage array that live in a central data center. The bandwidth to the data center is limited to 100 Mbps up and down, and each site has somewhere between 1 and 5 Mbps upstream. So I’m limited to that 1-5 Mbps to send a backup through, and I’m limited to sending these backups between 6 PM and 7 AM the following morning. In some cases the window is smaller than that because they run a 2nd shift until midnight. So, if each site has between 250 GB and 1 TB of data to send to the centralized server, it could be months before the first full backup is done. This simply isn’t acceptable.

However, if I took a micro-workstation with a 3 TB drive loaded with UrBackup Server to each site and did the initial backup over a weekend to that, I could then transport that workstation to the datacenter and upload the data into the storage enclosure. I would then set up the server at each site as an “internet” client and use the suggested “import” option to import the full backup into the database. Suddenly the amount of data I need to send over the internet nightly is small and manageable.

There are cloud-based services that offer this option, but the process is tedious. For instance… without naming names…

If I need to seed a full backup, the one we currently subscribe to has us ship them a SATA hard drive with the initial image on it, and they charge us $200-$500 (USD) for the import. If I want my drive back, I have to send them a prepaid return label in the box with the drive, and they keep the drive for a week or more before the import is online. Occasionally I’m told the backup is “invalid” and I have to repeat the process.

If I need a full restore from them because a server has a catastrophic failure, I have to pay them $250 for a hard drive + $200 for the restore to the drive + overnight shipping. All in all, this is $500 per incident to either send them a lot of data or receive a lot of data back, and I have a location that sends workers home for 2 days unpaid because they cannot operate. So this means not only do we lose 2 days of production and get angry customers, we have workers who are angry that they cannot work and who lose 2 days of pay. In the US they can submit this to unemployment, and that’s another set of nightmares for the worker and for the company.

When the network monitoring software notifies me that a site is down, I could remote into the server using iDRAC or iLO, diagnose the problem remotely, then run to the centralized data center, export the last full backup from UrBackup to the micro-workstation, run it to the site that is down, and restore the data. It may take me 12 hours and be a long day for me, but I just saved an entire day for the company and the workers.

For those wondering, the last time this happened it was a RAID controller that went bad, and the identical replacement RAID controller refused to import the old array configuration. So my server was down, and when I got the replacement part from the server manufacturer (4-hour turnaround) the RAID array had to be rebuilt. Then I needed to do a full restore, so I was waiting for the drive to arrive from [offsite cloud backup vendor]. When it arrived the following day I did a full restore of the data (6 hours). So the site was down for two days.

Additionally, I’d love to see a parent/child UrBackup architecture. This way a site could have a local backup as well as the centralized backup. Others have suggested ways to make this happen where you back up a UrBackup server over the internet as a client of another server, and that seems like a great idea. My additional suggestion is that the child server be able to pass backup status information to the parent server, so you can see on one screen all the sites and all the servers, and their last backup information.

Lastly, hats off to the UrBackup Development team. I’m genuinely impressed with the product!

As a new user I’m not allowed to post these links (the forum says no more than 2 links, even though there are only 2 links here)… but here’s what comes after the base URL:
/t/first-seeding/161
/t/import-backups-from-another-server-site/1936

Hello

Sorry, but your post is a little too long; I am not sure I understood everything.

My understanding is that:

If you go to a datacenter, copy the file data onto an external drive.
Go back to your central location, and attach that drive to a temporary/fake server.
Then add that server in UrBackup, and back up this server with the data from the external drive.
From this moment, UrBackup will know the files and the corresponding hashes.
Then when you back up the servers you originally wanted to back up, as the hashes are already known, almost no data will be copied.
Then you should be able to remove the temporary server. (Don’t do it before, or the hashes/files will be orphaned and removed.)
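
To picture why the later internet backups stay small, here is a toy Python sketch of the general idea (purely illustrative, not UrBackup’s actual code; the chunk size and hash choice are placeholders): once the server already knows a content hash from the seeded backup, that data never has to cross the slow link again.

    import hashlib
    import os

    CHUNK_SIZE = 512 * 1024  # placeholder size; UrBackup's real chunking differs

    def file_chunks(path):
        """Yield fixed-size chunks of one file."""
        with open(path, "rb") as f:
            while True:
                chunk = f.read(CHUNK_SIZE)
                if not chunk:
                    return
                yield chunk

    def plan_transfer(client_root, known_hashes):
        """Return (bytes to send, bytes already known) for a client folder."""
        to_send, already_known = 0, 0
        for dirpath, _dirs, files in os.walk(client_root):
            for name in files:
                for chunk in file_chunks(os.path.join(dirpath, name)):
                    digest = hashlib.sha256(chunk).hexdigest()
                    if digest in known_hashes:
                        already_known += len(chunk)  # seeded earlier, nothing to copy
                    else:
                        to_send += len(chunk)        # new/changed data, goes over the WAN
                        known_hashes.add(digest)
        return to_send, already_known

With the temporary server already backed up, known_hashes is large and to_send stays close to the size of what actually changed since the seed.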

I would also like a master server option for multi-site… it would basically only need to know the metadata the other servers know.
I was also thinking this option could be used to manage an archive server. It would store old backups on a different set of drives/filesystems, as currently UrBackup needs to have all the data on the same volume for dedup to work.

Hi,
The seeding does not work for Windows Components. It will re-upload all SQL & Exchange DB files, which in my case is over 100 GB over a 10 Mbps ADSL line. When is this going to be addressed? Is there another way we are supposed to do Windows Components?

I have tried everything with no luck. See below, it always wants to upload the entire file. PLEASE tell me this is going to be sorted out soon or that a seed load utility will be made.

Hello

Personally I cannot help with this, and I cannot say whether @uroni will help with it.
As far as I understand, the issue is that at least Exchange and MSSQL cannot be seeded in a useful way, because there’s no easy way to diff them. My understanding is that it is at least difficult to make a differential of these databases (the Exchange store reacts like a database to backups).

I think what would maybe work is to find a way to back these up in an incremental mode using an API or whatever, then suggest implementing this to @uroni. Maybe if you kickstart him with a rough understanding of how it could be done, it will be implemented faster.

Maybe along the way you will find a way to back up the main database as a dump file, then a set of incremental dumps. The virtual client would then just back up these DB dumps. Then maybe uroni will develop something like backup to a named pipe, so you wouldn’t even need to dump to an actual file.
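
To make the dump idea a bit more concrete, here is a rough sketch of a nightly dump script, under assumptions of mine (the instance name, database name and dump folder are placeholders; it just shells out to sqlcmd with standard T-SQL BACKUP statements, and the resulting .bak files would be what the UrBackup client then backs up as plain files):

    import datetime
    import subprocess

    SQL_SERVER = r".\SQLEXPRESS"   # assumed instance name
    DATABASE = "MyAppDb"           # assumed database name
    DUMP_DIR = r"D:\sql_dumps"     # folder the UrBackup client backs up

    def run_tsql(statement):
        """Run one T-SQL statement through sqlcmd with Windows authentication."""
        subprocess.run(["sqlcmd", "-S", SQL_SERVER, "-E", "-b", "-Q", statement],
                       check=True)

    def dump_database(full=False):
        """Write a full or differential backup file into DUMP_DIR."""
        stamp = datetime.date.today().isoformat()
        kind = "full" if full else "diff"
        target = rf"{DUMP_DIR}\{DATABASE}_{kind}_{stamp}.bak"
        options = "WITH INIT" if full else "WITH DIFFERENTIAL, INIT"
        run_tsql(f"BACKUP DATABASE [{DATABASE}] TO DISK = N'{target}' {options}")

    if __name__ == "__main__":
        # Full dump on Sundays, differential dumps the other nights.
        dump_database(full=(datetime.date.today().weekday() == 6))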

That’s a lot of maybes, I know!

Hi Orogor,

Thank you for the response and suggestions. I am testing a few more ideas to get this working.

Hi,

After further testing, I have found the following.

I need a way to re-index the database on UrBackup so it can re-read all the files placed in the client folder. This task needs to be done on the UrBackup server side, so that the UrBackup server database knows what files are in the client folder after copying new files into it, specifically the folders and files created by the Component Backup option. Someone I know told me he renamed a client folder and, to his surprise, the backup completed fast but ran as an incremental, which tells me the file/backup and hash information is most likely stored in the UrBackup database.

If I back up the SQL data folder without using “Component Backup”, then the incrementals work perfectly and I can do my seed load easily and 100% successfully. I noticed that the 1.4.x client does not have the “Component Backup” option, which is where I got the idea from.

@uroni - Your direction here would be greatly appreciated.

Many thanks.

It doesn’t really need the file entries if you copy it over as a previous backup. If you really want to do that, it would go something like this:

  • Add the client to the offsite backup server (via add client)
  • Manually copy the backup to the offsite server somehow
  • Copy the row in the backups table in backup_server.db (e.g. with SQLite browser) to the db on the offsite server. Correct the clientid and the backupid if necessary (a script sketch follows below this post)
  • Copy the clientlist_b_{backupid}.ub file to the offsite server while adjusting the backupid to the new one

It could be that it works without Component Backup because it doesn’t cause the Exchange server to change the database file by applying logs.
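
For anyone who would rather script step 3 than use the SQLite browser, here is a minimal sketch of one way it might look, assuming the backups table is keyed by an id column plus the clientid column mentioned above (the paths and ids are placeholders, and it presumes both databases are reachable from one machine):

    import sqlite3

    SRC_DB = r"C:\onsite\backup_server.db"    # placeholder paths
    DST_DB = r"C:\offsite\backup_server.db"
    OLD_BACKUPID = 42   # id of the seeded backup in the source database
    NEW_CLIENTID = 7    # clientid the client received on the offsite server

    # Read the backup row from the source database.
    src = sqlite3.connect(SRC_DB)
    src.row_factory = sqlite3.Row
    row = dict(src.execute("SELECT * FROM backups WHERE id = ?",
                           (OLD_BACKUPID,)).fetchone())
    src.close()

    # Correct the clientid; drop the old id so the offsite db assigns a new backupid.
    row["clientid"] = NEW_CLIENTID
    row.pop("id")

    dst = sqlite3.connect(DST_DB)
    cols = ", ".join(row.keys())
    marks = ", ".join("?" for _ in row)
    cur = dst.execute(f"INSERT INTO backups ({cols}) VALUES ({marks})",
                      tuple(row.values()))
    print("new backupid:", cur.lastrowid)  # needed for the clientlist_b_{backupid}.ub rename
    dst.commit()
    dst.close()

It is probably safest to stop the offsite UrBackup server service before touching backup_server.db.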

Hi @uroni,

Thank you very much, that worked perfectly. Below are the steps I followed.

UrBackup Seed load procedure (Files & Windows Components)
Not tested with image backups.

  1. Install the UrBackup server & client on the customer’s server.
  2. Configure the UrBackup server to back up onto a USB hard drive attached to the customer’s server.
  3. Open the UrBackup client on the client server and select the folders and Windows Components you wish to back up.
  4. Run the full backup only. Once complete, remove the USB drive from the client server.
  5. Set up the client on the offsite backup server. (See point 1 of @uroni’s notes below.)
  6. Copy the data from the USB drive to the off-site server using FastCopy or ln.
  7. Follow @uroni’s notes below.

@uroni Notes:

  1. Add the client to the offsite backup server (via add client)
  2. Manually copy the backup to the offsite server somehow
  3. Copy the row in the backups table in backup_server.db (e.g. with SQLite browser) to the db on the offsite server. Correct the clientid and the backupid if necessary.
  4. Copy the clientlist_b_{backupid}.ub file to the offsite server while adjusting the backupid to the new one.
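
For step 4 of the notes, a tiny sketch of the clientlist rename, assuming the .ub files sit in the urbackup data directory next to backup_server.db (the path and ids below are placeholders):

    import shutil

    URBACKUP_DIR = r"C:\Program Files\UrBackupServer\urbackup"  # assumed data directory
    OLD_BACKUPID = 42   # backupid from the onsite server
    NEW_BACKUPID = 57   # backupid the inserted row received on the offsite server

    # Copy the clientlist file under the new backupid expected by the offsite server.
    shutil.copy2(rf"{URBACKUP_DIR}\clientlist_b_{OLD_BACKUPID}.ub",
                 rf"{URBACKUP_DIR}\clientlist_b_{NEW_BACKUPID}.ub")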

Thank you for the step-by-step, this is going to be very useful for me.