Create ZIP download from CLI on server?

Hi,

I know I can restore the latest files from the CLI with urbackupsrv, but what I want to do is create a ZIP file for a specific host, using the latest files. It would create the ZIP, and then I could download it once it’s completed. I know I can do this via the web GUI for the admin interface, but it’s too slow (I’m downloading a 60 GB file, and so far it’s only at 30 GB after 2 hours of running).

So:

  1. Is there a faster way to create the ZIP (directly on the server)?
  2. Would a faster-spec machine make much of a difference? (The current backup server has 2 GB RAM and 1 CPU; the next one up has 4 GB RAM and 2 CPUs, but at double the price.)

Thanks in advance

Andy

I have to say the following:
– Creating a 60 GB zip file is never going to be trivial.
– With 1 CPU and only 2 GB of RAM, the issue above is compounded significantly.

I just ran a test on two of my local servers (one physical, one virtual), and here’s what I ended up with when compressing roughly 10-20 GB of data.

Physical server: 32 CPUs @ 2.6 GHz w/ 96 GB RAM = 20 minutes to compress 20 GB (5 files)
Virtual server: 2 CPUs @ 2 GHz w/ 4 GB RAM = 13 minutes to compress 10 GB (12,863 files)
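For reference, here’s a minimal sketch of the kind of timing run described above; the zip utility (Info-ZIP’s zip), the archive name, and /path/to/data are all assumptions, not the exact commands from my test:

# Time a recursive zip of the test folder (standard Info-ZIP zip is single-threaded).
# test-archive.zip and /path/to/data are placeholders.
time zip -r -q test-archive.zip /path/to/data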

While I’m pretty certain that my CLI-based zip utility is not multithreaded, having multiple cores does allow other activities to carry on without too much issue while the archive is being created.
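If you do want the archiving step itself to spread across cores, one general shell-level option (not something UrBackup provides, and assuming pigz is installed) is to pipe tar through a parallel compressor:

# Stream an uncompressed tar to pigz, which compresses on multiple threads.
# -p sets the thread count; /path/to/data and the output name are placeholders.
tar -cf - /path/to/data | pigz -p 2 > backup.tar.gz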

Mind you, this was only 10 GB and 20 GB of data in one relatively simple folder tree, so definitely less involved than what UrBackup might be expected to accomplish. But the numbers should be good for a general comparison, at least.

Since you appear to be using cloud servers, you will have to weigh what config you can afford at what price. CPU speed will also be a factor. But 2 hours is not abnormal for what you are trying to do on that hardware.

Thanks for the reply. The time isn’t so much an issue; it’s the fact that the download sometimes drops out midway. I’ve got around it in a bit of a dirty way: over SSH, I find the latest backed-up folder and then run:

tar --dereference -cvvf backup.tar *

This packages the files up in pretty good time, even at 100 GB. The issue I have with it is that it dereferences everything (symlinks), so what I’m having to do on the origin server is keep track of the symlinked files/folders and then fix them up manually with new symlinks once the restore is done on the new server. Not ideal, but it works. It would still be nice to have a proper CLI command we could run that would do this more cleanly :slight_smile:
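For anyone wanting to script that bookkeeping, here’s a minimal sketch of one way to record the symlinks before archiving and recreate them after the restore; it assumes GNU find, and symlinks.txt is just a made-up file name:

# Before archiving: record each symlink and its target, tab-separated.
find . -type l -printf '%p\t%l\n' > symlinks.txt

# Create the archive as before, following the links so the data is included.
tar --dereference -cvvf backup.tar *

# After restoring on the new server: recreate the links from the list.
while IFS=$'\t' read -r link target; do
    ln -sfn "$target" "$link"
done < symlinks.txt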

Cheers

Andy
