Suggestion for "Image Backups"

I need to do a full backup every 90 days, and incremental backups every day in between.
I can’t figure out how to set this up so that the backups aren’t deleted the day after they are made.
Currently, my settings look like the screenshot below.
Any suggestions?

Change the maximum number of image backups to a higher number.

To do what you want, set the values as follows (top to bottom in the Image Backups settings):

  • Interval for incremental image backups: 1
  • Interval for full image backups: 90
  • Maximal number of incremental image backups: 90
  • Minimal number of incremental image backups: 20
  • Maximal number of full image backups: 4
  • Minimal number of full image backups: 2
  • Volumes to backup: C

This will mean you get your “full backup every 90 days”. It will always keep at least two of those fulls, or up to four if there is space. It will back up every day, trying to keep 90 days of incrementals but always keeping at least 20 of them.

The number of incrementals can likely be shrunk a bit. How many potential versions would you need to roll back through?

I still have no idea how many versions I might have to restore. I have a cloud data centre with servers holding many TB of disk, and since it is remote I do not want to run backups too often, because each one leaves me uncovered for a few days, and I CANNOT afford that.

If you are running a data centre, then you will mainly worry about “yesterday”. So even a week of snapshots would be enough.

If you have a full every 90 days, then every day an incremental tops that up. Having the ability to roll back to any point in those seven days is handy.

Having 7-14 days of incrementals means if you got an infection you could still go back a week or two for a cleaner version.

Really your limit comes down to how much hard disk space you want to dedicate to your backups. Your clients are not paying for a backup service to allow them to go hunting for file changes. You are just covering yourself in case of flood or infection.

In reality, we work in the local healthcare sector, so even 1 hour can be important for us because we have thousands of transactions in that time frame.

However, we consider 1 day for the snapshot sufficient. If it were possible I would even do one every 8 hours, like other tools allow, but I don’t know how to do that here.

The point is you will rarely want to go back to last week. If you get a failure, you want the “latest”. So as long as you have a few spare you should be fine.

Really the people who need loads of extra incremental backups sitting there are those who may want to go digging back to catch when something changed. I don’t think this is what you need.

Just setting “maximal incremental backups” to 10 should be enough. You’ll need less space on the backup server then. But you do need a few extras in case of a virus\corruption, so you have different days to choose from when going back.

But do remember I am only a user like you. So I am only describing my personal policy for when I look after my clients.

Just remember to test the system out once it is in place. Recover to a test disk now and then to see how long a recovery is. Hopefully if you ever have to do a recovery you can sit next to the server to recover to a local disk, and then drive\fly to the destination to install. I’d hate to think how long an internet recovery would take…

You’d use File Backups to get a faster rate… then you can be more targeted at what actually changes more often.

Basically, I need to back up SQL Server or PostgreSQL databases.
We have a lot of them, about 3,000 databases on 4 machines with 6 TB of data. I don’t know if I can back up individual MDF and LDF files, for example. I’ve been using UrBackup for a while now.

My problem is that if a customer accidentally deletes a patient record, I have to go and recover it, and I usually start from a safe day to then figure out where the problem was.

Another thing is, if the cloud data center explodes, I have to set up a virtual machine there as soon as possible to provide a basic service, and then rebuild the state from the backups up to the latest incremental.

Of course, if there were the option of differentials instead of incrementals, it would be much better and faster.

Your real answer comes down to resources. How good your bandwidth is to the backup server. How much space you dedicate on that backup server for versions. How much the running backup itself affects the live machine.

What you are saying is not really along the lines of “I need to know what the records said last month”, so you really don’t need loads of incrementals to scroll back through. Just enough to go back far enough to spot where the problem was. Only you will know how long it usually takes to spot a problem has occurred.

With the “exploding data centre” scenario I really don’t know what the difference is in rebuild time. But logic says to me that if you have 20 incrementals to add to your last full backup it will take a lot longer to rebuild than if there were only a week of incrementals.

What you say about restore times is true, but with many TB to back up, it takes me several days via the Internet, even with a nominal bandwidth of 1 Gbps, which in reality never goes beyond 200-250 Mbps.
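
As a rough sanity check with our ~6 TB: that is about 48,000,000 Mb, and at a sustained 250 Mbps that is 48,000,000 / 250 = 192,000 s, roughly 53 hours, so over 2 days for a single full transfer before any protocol overhead or interruptions.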

I’m looking into it, analyzing the restore times, and then adjusting everything to find the most appropriate solution possible.

If you have any suggestions, I’m happy to read and evaluate them.

I don’t have experience of internet backup or restore. I can just see the restore time always being quicker if it is done local to the backup server, and then you drive\fly\courier the disk to the original location.

Personally, if I was ever to do offsite backup like this, I would buy a second server. I’d back up onsite, and then sync that backup machine offsite. Having the main backup onsite allows faster backups, and in most situations it is an onsite recovery you will want to do.

In the rare situation of a flood or onsite explosion, that offsite sync will cover you.
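
A minimal sketch of the offsite sync, assuming plain rsync over SSH (the paths and host name are made up). One caveat: UrBackup’s storage dedupes with hard links unless you use btrfs, so -H matters and the sync can still be slow:

  # nightly sync of the local UrBackup storage to an offsite box
  # -a archive mode, -H preserve hard links (UrBackup dedupes with them),
  # --delete mirrors deletions so the offsite copy matches exactly
  rsync -aH --delete /media/backups/urbackup/ offsite-box:/media/backups/urbackup/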

Just an FYI,

Not sure of the fine details of your backup process; however, you can’t do a file-level backup of a database while it is in use. You have to:

  1. Stop the database service
  2. Backup the files
  3. Start the database service
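
In script form, that naive approach looks something like this (a sketch assuming systemd and a PostgreSQL data directory; adjust names and paths for your setup):

  # cold backup: only safe because the service is fully stopped
  systemctl stop postgresql                                        # 1. stop the service
  tar -czf /backup/pgdata-$(date +%F).tar.gz /var/lib/postgresql   # 2. back up the files
  systemctl start postgresql                                       # 3. start the service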

Failure to do so can (and will) lead to unusable backups. Each database has its own high availability backup recommendations. For Postgres use ‘Continuous Archiving’. See: PostgreSQL: Documentation: 17: Chapter 25. Backup and Restore

Chapter 25 covers three methods for backups; ‘Continuous Archiving’ is the one for large datasets that require high availability.

You will have to research SQL Server’s recommendations for large datasets that require high availability.
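
That said, SQL Server does have native full, differential, and transaction log backups, which would also cover the differential wish above. A minimal T-SQL sketch, with made-up database and path names:

  -- full backup (e.g. weekly, or on your 90-day cycle)
  BACKUP DATABASE [PatientDb] TO DISK = N'D:\Backups\PatientDb_full.bak' WITH INIT;
  -- differential since the last full (e.g. daily)
  BACKUP DATABASE [PatientDb] TO DISK = N'D:\Backups\PatientDb_diff.bak' WITH DIFFERENTIAL;
  -- transaction log backup (e.g. hourly; needs the full recovery model,
  -- and enables point-in-time restore)
  BACKUP LOG [PatientDb] TO DISK = N'D:\Backups\PatientDb_log.trn';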

An example of use: use ‘Continuous Archiving’ for your hourly point-in-time coverage, and store the resulting WAL files in a directory. That directory is then backed up daily using UrBackup.
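
For reference, a minimal sketch of the PostgreSQL side of that (the directories are made up; see Chapter 25 linked above for details). The archive_command is the docs’ standard example:

  # postgresql.conf – enable WAL archiving
  wal_level = replica
  archive_mode = on
  archive_command = 'test ! -f /srv/wal_archive/%f && cp %p /srv/wal_archive/%f'

  # plus a periodic base backup to pair the WAL archive with
  pg_basebackup -D /srv/base_backup -Ft -z -P

UrBackup then picks up /srv/wal_archive and /srv/base_backup in its daily file backup, as described above.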