I have a t2.small EC2 instance on AWS with a 50GB volume for the OS (Ubuntu 14.04.3). I only use about 20% of that, but I need to keep the rest free for the websites I host and a few other web services I run for my business and clients.
I mounted an S3 bucket and use it for my UrBackup storage, so I end up with a cloud backup that I can offer to clients. I have one question about my setup: how can I reduce the number of requests made to S3 so my bill doesn't go through the roof? Making only about 20GB worth of backups over a 5-day period generated over 1.6 million requests, costing me $9; before UrBackup it was around 230,000 requests, costing $1.50. The 20GB of storage itself was less than $1, so it's not the storage that's going to cost me. It's the number of requests that adds up quickly. Is there a way to reduce the requests made? Maybe by using my EC2 local storage and then offloading to S3 once the backup completes?
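For what it's worth, if the bucket is mounted with s3fs-fuse, a big share of the request volume comes from repeated stat/list calls on all the small files UrBackup writes. Enabling the local disk cache and enlarging the stat cache can cut that down. A sketch of an fstab entry (the bucket name, mount point, and cache path are placeholders, and exact option names vary by s3fs version, so check `man s3fs` on your box):

```
# /etc/fstab entry (sketch): mount the bucket with caching enabled.
# mybucket, /mnt/backups, and /var/cache/s3fs are placeholder names.
mybucket /mnt/backups fuse.s3fs _netdev,allow_other,use_cache=/var/cache/s3fs,stat_cache_expire=900,max_stat_cache_size=200000,multipart_size=64 0 0
```

The `use_cache` option keeps recently touched objects on local disk so repeated reads don't become GET requests, and a larger, longer-lived stat cache avoids re-issuing HEAD/LIST requests for metadata s3fs has already seen.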
None of the full backups of the PCs are more than 5GB and they only run every 5 days, but it's important for me to be able to use AWS and keep the costs down.
I've done some more testing, and as you probably already know, backups run a lot faster to the local EBS storage on the EC2 instance, so I am going to stick with that for now and expand when I need to. Maybe in the future it would be possible to archive/store large parts of the backups on another volume (S3) and then reference, delete, or download them when needed. That way storage usage on the "other volume" is limited, and the higher-performance volume is used on a regular basis. Maybe there could be an option to store full backups on another volume.
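One way to approximate that today, without any UrBackup feature, is to keep the backup storage on EBS and offload finished backups to S3 on a schedule with the AWS CLI. A single `aws s3 sync` of a few large files makes far fewer requests than writing every small file through an S3 mount during the backup itself. A sketch of a crontab entry (the path and bucket name are made up; adjust the schedule to land after your 5-day backup window):

```
# crontab entry (sketch): offload completed backups to S3 every 5 days at 3am.
# /backups/urbackup and s3://my-backup-bucket are placeholder names.
# STANDARD_IA lowers the storage cost for data that is rarely read back.
0 3 */5 * * aws s3 sync /backups/urbackup s3://my-backup-bucket/urbackup --storage-class STANDARD_IA
```

Restoring would then mean pulling the backup set back down with `aws s3 sync` in the other direction before UrBackup can reference it, which matches the "archive, then download when needed" idea above.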