Intermediate storage on SSD?

Is there any way to achieve (with reasonable levels of trickery) a setup where files for backup from clients are stored on one disk and then once a day migrated to a second disk? My reason for asking is that I don’t want to buy an SSD large enough to hold all my backups but also don’t want a HD spinning constantly (in particular as it may end up in my bedroom).

I know that the server only supports a single storage location, but can this be done anyway with some background script and symlinks? Or can the caching of files before they are written to storage (which, from some posts, I understand is already being done) be extended so that all writes happen only once or twice a day, with the data held somewhere else in the meantime?


For me, I wasn’t worried about a spinning disk’s noise so much as I was about optimizing network throughput on my homeserver, but this is what I did…

I installed a 1TB NVMe drive in my homeserver and use PrimoCache to get everything in or out as quickly as possible. I carved a small partition out of the SSD for the OS (Win10Pro), and the remainder is a very fast cache for my storage array. Basically, PrimoCache sits in front of my storage, catches all the writes, puts them onto the SSD, and then writes them to my primary storage later. If any of the cached data is needed afterwards, it’s right there on the SSD.

While UrBackup and Plex are thrashing about on the filesystem, the machine doesn’t have to wait for the “slow” spinning disks to read + write, as the operations happen very quickly on the SSD. Exactly how much is cached, whether it’s a read + write cache or either alone, when to write to disk, etc is configurable, but I don’t recall the particulars. You can even have it cache only some of the disks in your storage array, as it doesn’t cache by “drive letter”, but rather by physical disk.

I could have used a similar function built into my storage system (Stablebit DrivePool) without PrimoCache, but I’d have had to install two SSDs to get DrivePool’s caching to work. Since I only had one NVMe slot, the decision was easy.

By the way… if you’ve got spare RAM, you can let PrimoCache use that for caching as well, but you’d better be on a UPS or you risk losing cached data between it landing in RAM and being written to the SSD or spinning rust. Since my network is only 1GbE, I didn’t try the RAM cache.

Perhaps I should also mention that you can “sort of” do this just by setting your UrBackup server to use space on your SSD directly for caching while backing up, but I don’t think you can configure a wait time before writing to disk.

I don’t worry about disks spinning all the time. Two years ago I repurposed a 3TB drive out of my homeserver into a UrBackup server for my cousin. It now has about 8 years of power-on time, zero reallocated sectors, zero pending reallocations, zero uncorrectable errors… you get the picture. I don’t want to turn this into a religious debate, but I’ve had exactly one WD drive die in the last decade or more, and that was only because I flashed the wrong firmware update to it. When drives of other brands fail, I replace them with WD drives and never look back.

Sounds nice, though I was hoping to run this on a Raspberry Pi since it’s only for home use (basically my parents back up to me, while I’m currently using a cloud service for my own computers after the disk I used for the reverse-direction backups crashed). Maybe I’ll have to think again (or just set the backups to run while I’m not sleeping).

But how (where) do I control the caching location as you suggest in the last paragraph? Is it “Nondefault temporary file directory” in Settings->General->Server? Or maybe “Temporary files as file backup buffer” in Settings->General->Advanced?

Yes, you can set the location with the “non-default” setting, but you must also have “temporary files as * buffer” set.

Since you mention this is on a Pi…
I wouldn’t use eMMC/MMC/SD as cache, as it’s slow and fragile compared to an SSD.

Maybe try using something like bcache in write-back mode. Then configure a large writeback_percent, set sequential_cutoff to zero and writeback_rate to zero. When write-back is wanted, decrease writeback_percent and increase writeback_rate to flush the write-back cache.
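To make that concrete, here’s a rough sketch of the sysfs knobs involved. It assumes the bcache device shows up as bcache0 and that the values shown are reasonable for your setup; treat it as an illustration of where the tunables live, not as tested settings.

```shell
# Hedged sketch: bcache write-back tuning via sysfs.
# Assumes the cached device is /dev/bcache0; adjust paths and
# values for your own setup.
BDEV=/sys/block/bcache0/bcache

# Put the device in write-back mode so writes land on the SSD first
echo writeback > "$BDEV/cache_mode"

# Hold dirty data on the SSD: allow a large dirty percentage and
# cache even sequential writes (sequential_cutoff of 0 disables
# the bypass for sequential I/O)
echo 40 > "$BDEV/writeback_percent"
echo 0  > "$BDEV/sequential_cutoff"

# Later (e.g. once a day from cron): lower the dirty target so
# bcache starts flushing to the spinning disk
echo 0 > "$BDEV/writeback_percent"
```

The daily flush could be driven by a cron job that drops writeback_percent, waits for the dirty data to drain, and then raises it again.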

The problem is that it might read data that is not in cache during backups (cache miss). Then your disk will spin up.

Also, the probability of losing the backup storage is now the probability of the SSD failing OR the disk failing.
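A quick back-of-the-envelope calculation shows how the two risks combine (the failure rates below are made-up illustrative numbers, not measurements for any real drive):

```shell
# Illustrative (made-up) annual failure probabilities
P_SSD=0.01   # cache SSD
P_HDD=0.02   # backing spinning disk

# If a failure of either device loses data, the combined probability
# is 1 - (1 - p_ssd) * (1 - p_hdd)
awk -v s="$P_SSD" -v h="$P_HDD" \
    'BEGIN { printf "combined failure probability: %.4f\n", 1-(1-s)*(1-h) }'
# prints: combined failure probability: 0.0298
```

So with these example numbers the combined risk (about 3%) is only slightly worse than the sum of the individual risks, but it is strictly higher than either device alone.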

My slant on this.
I am running backups for about 5 computers on my home network, part personal and part business use. I find it works well enough to back up everything straight to a WD 4TB Red spinning hard drive in a Windows 10 Pro machine that is, to all intents and purposes, my server. If I were concerned about disks always spinning I could change the advanced power settings to power them down after X minutes, though I don’t know how that would affect UrBackup accessing the drive.
I am, like Jason_Marsh, happy to just let them run. I mostly use WD drives, and although they do fail, some after 3 to 4 years, I have many that have lasted much longer under continuous use. BTW, I have also had the odd one fail soon after being powered back up, having sat on the shelf with archived data on it!

No, I wouldn’t use an SD card for anything that involves more than occasional writes (I think it will be OK for the Raspberry Pi I’ll use as a firewall, since that software is written to reside in the internal flash of consumer-grade routers, but other than that, no…).

That said, getting SSDs to work reliably isn’t trivial either. I’ve already wasted more than a day figuring out which USB-SATA chipsets (and their firmwares) work with the Raspberry Pi. Some outright don’t (they detect properly but then fail, sometimes silently), others work but take forever to boot, etc…

This sounds really interesting.

Can’t I avoid the cache-miss problem by keeping the UrBackup database (and the OS) on a plain SSD and then using another SSD (or a partition of one) as the cache for the mechanical disk holding the UrBackup file storage?

As for failure, wouldn’t it depend on the bcache implementation? It could very well (at least if I have it cache all writes as you suggest) be done in such a fashion that the only data lost if the SSD fails is the pending writes, which would be equivalent to a day’s backups not having happened. Regardless, the only scenario I’m really trying to avoid is a client computer crashing and the backup server’s data then being lost before I can restore it, and that probability will still be minimal (one could even back up the backup server to guard against that, and against any inconsistent disk state caused by the caching).

This is probably the solution (or as close as I can get).