We limit ourselves to 10 backups at a time, but they run very quickly since we only do file backups and the daily diff is pretty small. We are fortunate that our customers are spread all across the country, so we stagger them by time zone to get them all to complete.
I would like to try to change the check-in interval somehow, because I think that accounts for a lot of the overhead for very little benefit. If clients checked in every 5 minutes, that would be a 5x reduction in chatter. Maybe there is a reason for checking in so often, but I don't see it.
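To put rough numbers on the chatter, here's a back-of-envelope sketch. The fleet size and the current 1-minute interval are my assumptions for illustration, not known values:

```python
# Back-of-envelope check-in volume per hour.
# Both numbers below are assumptions, not measured values.
clients = 1000             # hypothetical fleet size
current_interval_min = 1   # assumed current check-in interval
proposed_interval_min = 5  # proposed interval

current_per_hour = clients * (60 // current_interval_min)
proposed_per_hour = clients * (60 // proposed_interval_min)

print(current_per_hour)    # check-ins/hour today: 60000
print(proposed_per_hour)   # check-ins/hour at 5 min: 12000
print(current_per_hour / proposed_per_hour)  # reduction factor: 5.0
```

Whatever the real client count is, the reduction factor only depends on the ratio of the two intervals, which is why I keep calling it a 5x cut.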
I understand splitting the customers onto another server, but that just doesn't seem efficient for support or long-term growth. I'd rather see some type of shared storage, with a couple of servers in front to handle the requests and buffer the data waiting to be written to disk. We have the database on SSDs, and that helped greatly, but the database is getting really large, so I'd like to know: what is the impact of the "Database cache size during batch processing" setting? What should it be set to? Is there some ratio based on database size, amount of memory, etc.?
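For the sake of discussion, here is the kind of sizing rule I'm imagining the answer might look like. The 25%-of-RAM and 10%-of-database fractions are pure guesses on my part, not vendor guidance, and the function name is made up:

```python
# Hypothetical sizing rule for a batch-processing cache:
# cap it at the smaller of 25% of system RAM or 10% of the
# database size. Both fractions are guesses for discussion only.
def suggest_cache_mb(ram_gb: float, db_size_gb: float,
                     ram_fraction: float = 0.25,
                     db_fraction: float = 0.10) -> int:
    ram_cap_mb = ram_gb * 1024 * ram_fraction
    db_cap_mb = db_size_gb * 1024 * db_fraction
    return int(min(ram_cap_mb, db_cap_mb))

# Example: 64 GB of RAM and a 500 GB database.
print(suggest_cache_mb(64, 500))  # min(16384, 51200) -> 16384 MB
```

If someone knows what the setting actually controls internally (page cache vs. batch buffers), a concrete rule like this is what I'm hoping for.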
We can manage without the clustering, but freeing up the resources spent just on client check-ins would, I think, help, as it seems like a waste.