Hi, I’m testing UrBackup and I hope to keep using it.
I’ve got an RDX backup library with 5 disks which are, in fact, simply hard disks that Windows recognizes as drive P: (even when I change the cartridge).
My goal is to use it as the destination for a nightly FULL backup, so the next day I can change the cartridge and keep it with me at home.
In other words, every night I do a FULL backup to one disk.
Mon --> FULL BKP in disk1 (and take home Disk5…)
Tue --> FULL BKP in disk2 (and take home Disk1…)
Fri --> FULL BKP in disk5 (and take home Disk4…)
and so on with those 5 disks (always seen as P: by Windows Server)
Is this possible with UrBackup without problems?
uroni, can you clarify whether I can use UrBackup with removable media like RDX or USB disks?
Thanks in advance.
You can change the UrBackup storage directory. UrBackup will complain a lot, but it will re-download the files and produce a consistent backup. It would not be very efficient, though.
I would propose a different strategy:
- Use one or two disks as normal UrBackup storage, making mostly incremental backups.
- Schedule a nightly job to copy the current client snapshots on the UrBackup storage to your backup library. Every file backup looks like a full backup on disk, so you won’t have to care whether it is incremental or not. To copy the data you can use any file-copying tool that can handle the symbolic links (if you don’t disable them).
- If one disk is enough, you can clone the whole disk; then you take home the whole backup history.
This strategy would be more efficient, because you only transfer changes to the backup server.
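As a runnable sketch of that copy step: a file copy that dereferences UrBackup’s symlinks turns the “current” snapshot into a plain full backup on the removable disk. The directories below are throwaway demo paths, not real UrBackup paths; on a real Linux server you would point the copy at something like /media/urbackup/clients/CLIENTNAME and your RDX mount instead.

```shell
# Demo: copy a snapshot directory while dereferencing symlinks,
# so the copy on the removable disk is a plain, self-contained full backup.
# All paths here are temporary demo directories (assumptions).
tmp=$(mktemp -d)
mkdir -p "$tmp/storage/real" "$tmp/rdx"
echo "payload" > "$tmp/storage/real/file.txt"
ln -s "$tmp/storage/real" "$tmp/storage/current"   # UrBackup-style "current" symlink
cp -rL "$tmp/storage/current" "$tmp/rdx/backup"    # -L follows symlinks while copying
cat "$tmp/rdx/backup/file.txt"
```

The copy on the “RDX” side is a real directory with real files, not a symlink, which is exactly what you want on a cartridge you take off-site.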
I don’t understand why I have to change the backup storage dir. My RDX library is always recognized as P: by Windows Server.
And what do you mean by “UrBackup will complain a lot”?
So, in the end, I have to say that UrBackup is not ideal for backing up to a tape/RDX/USB drive or similar library.
Changing the unit every day causes too many errors in the DB, and I can’t get a consistent full backup on my cartridges during the week.
I will keep trying for a few days; if I’m wrong, I will post results…
RDX is removable storage. Because of this, many backup clients (Symantec, Acronis, and others) want to back up to a fixed location to keep their database clean and know what was backed up yesterday.
To this end, uroni seems to be suggesting that the permanent repository be kept in place without swapping, and that the results in the repository be copied to the RDX (like Symantec’s “off-site” option).
If I understand the post right, you can just create a simple xcopy/robocopy batch script to copy the files from the permanent storage to the off-site storage (P:). This gives you the benefit of always having the database whole, and still lets you copy your files to the off-site storage.
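To illustrate what a mirror copy does (on Windows, `robocopy <storage> P:\ /MIR` makes the destination identical to the source and removes stale files), here is a minimal shell demonstration using temporary directories. The paths are demo assumptions; the crude wipe-and-recopy stands in for robocopy’s smarter incremental mirroring.

```shell
# Demo of a "mirror" copy: after it runs, the destination matches the
# source exactly, and last week's leftover files are gone.
# Temp dirs stand in for the UrBackup storage and the RDX drive (P:).
tmp=$(mktemp -d)
mkdir -p "$tmp/storage" "$tmp/rdx"
echo "today"     > "$tmp/storage/keep.txt"
echo "last week" > "$tmp/rdx/stale.txt"    # leftover from this cartridge's last rotation
rm -rf "$tmp/rdx"                          # crude mirror: wipe, then recopy
cp -rL "$tmp/storage" "$tmp/rdx"           # -L dereferences UrBackup's symlinks
ls "$tmp/rdx"
```

A real tool (robocopy /MIR, rsync --delete) would only transfer the differences instead of recopying everything, but the end state on the cartridge is the same.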
I love my RDX drives, but they are best used as secondary or off-site storage, not primary. (In some cases I have thrown a cheap USB3 drive on the system to be used as permanent)
Hope this helps,
But now my considerations are:
- WHEN (at what time) can I schedule the xcopy/robocopy job to my RDX? I don’t know exactly when UrBackup finishes its backup operations.
- Doing this, my backups will take much longer.
- In your opinion, what would be the most appropriate UrBackup configuration to operate this way (save to a fixed-disk repository with a second copy on RDX)?
thank you very very much in advance
postfilebackup.bat, in the UrBackup folder, executes at the end of the job on the client. Maybe we can use that to trigger the file sync based on client folders? I haven’t done that, but it seems very plausible.
The extra “off-site” copy can use SyncToy, or some other file-synchronization program, to speed things up where it can. If there’s already a backup for that folder on the RDX drive, just copy the new/changed stuff.
Since I am just starting to use UrBackup, I don’t have all the scenarios worked out with it, but based on my use of other backup systems, the off-siting of a second copy is the only technique that has consistently worked with my RDX drives.
Windows design considerations:
1. Keep the primary backup off the main RAID array (if on a server). Use a separate controller if possible (USB 3 would do).
2. Run the off-site script as a scheduled task on the server, or by remote trigger from a client.
   2a) PsExec (Windows) can execute it through PostFileBackup.bat, running SyncToy to the RDX.
3. Use eventcreate in the script to drop an event log entry so that your RMM (if you have one) can pick up on a failure or success.
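The logging step in point 3 can be sketched as follows. The echo lines stand in for the Windows eventcreate calls (shown in comments) so the sketch runs anywhere; the paths and the event source name are assumptions.

```shell
# Sketch of an off-site copy step that reports success or failure.
# On Windows you would log with eventcreate instead of echo, e.g.:
#   eventcreate /T SUCCESS /ID 100 /L APPLICATION /SO Offsite /D "copy OK"
#   eventcreate /T ERROR   /ID 101 /L APPLICATION /SO Offsite /D "copy failed"
offsite_copy() {
  src="$1"; dst="$2"
  if cp -rL "$src" "$dst" 2>/dev/null; then
    echo "offsite copy OK"          # -> eventcreate /T SUCCESS ... on Windows
  else
    echo "offsite copy FAILED"      # -> eventcreate /T ERROR ... on Windows
    return 1
  fi
}
tmp=$(mktemp -d); mkdir -p "$tmp/src"; echo hi > "$tmp/src/f"
offsite_copy "$tmp/src" "$tmp/dst"
offsite_copy "$tmp/missing" "$tmp/dst2" || true
```

With the event log entries in place, any RMM that watches the server’s Application log can alert on the ERROR event.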
By using a separate channel, the off-site backup doesn’t affect the HDD throughput of the main server and can run in the middle of the day. You’re only taking this home at night, right?
By using the client triggered script you can start the backup sooner.
By using the scheduled task script, you can run it on a more consistent timeframe.
That’s all I can think of at the moment.
The server runs post_full_filebackup.bat and post_incr_filebackup.bat; both files need to be created. The first parameter is the path to the finished backup. Currently this is also run for failed backups, so if a second parameter with the failure state is needed, I can add that.
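Assuming the hook is an ordinary script that receives the finished backup’s path as its first parameter (as described above), a minimal sketch might look like this. On a Windows server it would be a .bat using %1 instead of $1; the commented-out copy target and the example path are assumptions.

```shell
# Sketch of a post_full_filebackup hook (shell version of the .bat).
# UrBackup passes the path of the finished backup as the first parameter.
post_full_filebackup() {
  backup_path="$1"
  echo "Finished full backup at: $backup_path"
  # cp -rL "$backup_path" /mnt/rdx/offsite/   # hypothetical RDX target
}

# Simulated invocation with a made-up backup path:
post_full_filebackup "/media/urbackup/PC1/190815-0200"
```

The real hook would replace the echo with the actual off-site copy (and, per the discussion above, an event log entry for your monitoring).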
I myself just copy/back up the target of readlink /media/urbackup/clients/CLIENTNAME. This symlink always points at the last successfully completed file backup.
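A small self-contained demonstration of that symlink lookup, using temp directories in place of /media/urbackup (the client name PC1 and the timestamp folder are made up):

```shell
# Demo: resolve the per-client "last successful backup" symlink,
# as readlink does on a real UrBackup server.
tmp=$(mktemp -d)
mkdir -p "$tmp/PC1/190815-0200"            # pretend this is the latest backup
mkdir -p "$tmp/clients"
ln -s "$tmp/PC1/190815-0200" "$tmp/clients/PC1"
readlink "$tmp/clients/PC1"                # prints the path of the latest backup
```

A nightly script can feed the printed path straight into the copy step, so the off-site job always grabs the most recent completed backup without guessing folder names.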
thanks guys, very interesting scenario,
I’d need more info about post_full_filebackup.bat and post_incr_filebackup.bat. Can I create them however I prefer? Do I have to follow some standard? My doubt is whether I have to create them on the server or on all the clients…
The only solution I see is creating a robocopy (/MIR option) script that starts AFTER all clients have finished copying their data.
Did not know about the other batch files!! I would say a resounding “yes” to a second parameter. This would let me run event drops at the server. For those who can only monitor the server’s event logs, this would be a great boon!
I think SyncToy is faster than robocopy for this purpose, and it can be scripted to run from the command line. I would assume the batch script is just that, a standard batch script. If you know which backup runs last, you can use the first parameter, %1, to figure out which backup job just ran, and then trigger the copy after the job you specify.
Otherwise, you could run it after every job. Since only the changes get copied, the sync jobs would be shorter, though there would be more resource utilization in total because of the comparisons.
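One way to “trigger the copy after the job you specify” is to inspect the first parameter and only sync when the finished backup’s path belongs to the last-scheduled client. A sketch (the client name LASTPC and the example paths are assumptions; a real hook would receive the path as %1 / $1):

```shell
# Decide whether this hook invocation should kick off the RDX sync:
# only when the backup that just finished belongs to the last client
# of the night (hypothetical name "LASTPC").
should_sync() {
  case "$1" in
    */LASTPC/*) echo "sync" ;;   # last job of the night -> start the off-site copy
    *)          echo "skip" ;;   # earlier jobs -> do nothing
  esac
}

should_sync "/media/urbackup/LASTPC/190815-0200"   # last client's backup
should_sync "/media/urbackup/PC1/190815-0200"      # some earlier client
```

In the real hook, the “sync” branch would launch the robocopy/SyncToy job instead of echoing.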