I’m using UrBackup to create backups for 7 developer client computers (up to one backup every half hour).
We have a huge number of small files (mostly under 4 KB, many under 2 KB).
Every individual backup, as well as the cleanup task, only reaches about 50 disk IOPS (very high latency).
Because of that, some backups take hours (we work with Git).
Currently the server is also unable to delete enough old backups to be self-sustaining.
Is there any way to increase the IO queue depth?
Or would it help to use a higher file size threshold for when deduplication kicks in?
We do get the full IO throughput when we test with a tool like ATTO.
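One possible explanation for the gap: benchmark tools like ATTO keep several requests outstanding, while a backup or cleanup that processes many tiny files tends to issue one small synchronous IO at a time, so throughput is capped by per-operation latency rather than the disk's IOPS limit (Little's law). A minimal sketch of that arithmetic, assuming roughly 20 ms per operation on these disks (the latency figure is an assumption, not a measured value):

```python
def effective_iops(latency_ms, queue_depth, device_cap):
    # Little's law: sustained IOPS ~= outstanding ops / per-op latency,
    # capped by the device's provisioned IOPS limit.
    return min(queue_depth * 1000 // latency_ms, device_cap)

# Serial access (queue depth 1) at an assumed 20 ms/op on a 500-IOPS disk:
print(effective_iops(20, 1, 500))   # → 50

# The same disk with 32 requests in flight hits the 500-IOPS cap:
print(effective_iops(20, 32, 500))  # → 500
```

If that model matches your setup, raising the queue depth on the server side (or lowering per-op latency, e.g. faster disks for the database) would matter more than raw sequential throughput.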
Is there anything planned to allow multiple servers to connect to a single client with different backup strategies?
E.g. one local server doing backups every half hour, and one in the Azure cloud doing one backup per day.
Server: 2-core low-CPU Azure VM (possibly a B2m).
Database on C:, backup data on D:.
Both drives are capped at a maximum of 500 IOPS.
Backup data: 15 GB.