I have elected to use /var/tmp as my temporary file folder, so it's just the Ubuntu server that has access to that folder.
The server is a fresh install of Ubuntu Server 16.10, and I'm pretty sure Ubuntu does not ship with a virus scanner by default. Is there something else that could be throwing up this error?
Hey, I had the same message, and I do not have any real-time/on-access AV solution running on my server. Luckily I found this thread, so I found out the server had run out of space. Why that happened is my "mistake": I moved some hardware around and changed the setup; when I duplicated it to other hardware, the root fs ended up significantly smaller. Fixing that now.
However, the error message is misleading, obviously. Maybe that can be changed?
By the way, I occasionally have very slow backups. In that case I would stop the backups, stop urbackupsrv, run remove-unknown (most of the time), restart urbackupsrv, and things would be OK again. Because I moved hardware around, I was paying close attention to how things would go with the new setup, and I noticed the same behavior: some running backups would take weeks, even years (not all, just 2 out of the 4 running backups). Sometimes I read people complaining about that; could it be that all that time the /tmp directory was simply filling up? In that case there should be some message indicating: "Hey, your temp folder is running out of space!" or something like that.
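The stop / clean up / restart cycle described above could be scripted roughly like this. A sketch, assuming a systemd-based install and the stock `urbackupsrv` command-line tool; run it as root:

```shell
# Sketch of the maintenance cycle: stop the server, clean up, restart.
# Assumes systemd and the urbackupsrv CLI; must be run as root.
urbackup_maintenance() {
    systemctl stop urbackupsrv     # stop the server so the database is quiescent
    urbackupsrv remove-unknown     # remove backups no longer known to the database
    systemctl start urbackupsrv    # bring the server back up
}
```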
And I know, I should find some way of monitoring disk usage and reporting on it, but my current ISP is a hassle since mails aren't going out at the moment. I'm moving ISPs within the month, so I didn't bother. I'd love to use Pushover for notifications, but I saw I'd need to write a template for that... but that's all a whole other subject.
Agreed. But it is also hard to distinguish e.g. access-denied errors that come from AV from real access-denied errors.
If you get into such a situation again, maybe you can look closer at what the issue is? Look at the temp folder contents to confirm your suspicions, maybe even compile it with debug symbols or install them, then attach gdb and run `thread apply all bt` to see where the backups are stuck?
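The gdb step could look something like this; a sketch, assuming gdb and debug symbols are installed and you have permission to ptrace the server process:

```shell
# Attach to a running urbackupsrv non-interactively and print every thread's
# stack. Assumes gdb, debug symbols, and ptrace permission; run as root.
dump_backtraces() {
    pid=$(pidof urbackupsrv) || return 1            # find the server process
    gdb -batch -p "$pid" -ex 'thread apply all bt'  # dump all thread backtraces
}
```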
Maybe a message just indicating that UrBackup is not able to write to its configured temp storage (at blabla), possibly due to a real-time AV solution, would be more than enough for server admins? I think it would at least have made me check the usage, the permissions, and the suggested AV solution. If the test (from the server's perspective) is just a check whether it can write there, then maybe a message indicating that it can't is enough. Stating outright that an AV solution is the problem is confusing.
rant incoming…
It happens very occasionally, and there are a lot of customizations I made; it's not a production environment, just personal. But in the meantime I think I have confirmed the issue was indeed the temp storage: the server was running from an NVMe SSD only 128 GB large, and /tmp was just a folder on the root fs, which was probably around 120 GB. Then, because I needed to move hardware around (long story, won't bother you with that), I thought: hey, if this setup will not allow me to use the NVMe anymore (the new board doesn't have on-board M.2 slots and has only one PCIe slot, which is used by a storage controller), why don't I move it to 2 SATA SSDs and set it up with mdadm and LVM?
That took me a while, but I was able to do it, with separate LVs for /var/urbackup and /tmp. Because the SSDs were only 64 GB, /tmp was much smaller than before, and then I noticed the backups were stalling. But this time there was that error message, which confused me. I found this post and checked `df /tmp`: completely full! Ouch! In the meantime I fixed it by adding another SATA SSD, 128 GB large, which is used solely for /tmp.
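For reference, the kind of check that revealed the problem is a one-liner; here is a small runnable variant (`/tmp` is just an example path, substitute whatever your UrBackup temp folder is configured to):

```shell
# Show available space (in KiB) on the filesystem holding the temp folder.
# /tmp is an example; substitute your configured UrBackup temp path.
tmp_path=/tmp
avail_kb=$(df --output=avail "$tmp_path" | tail -n 1 | tr -d ' ')
echo "free space on ${tmp_path}: ${avail_kb} KiB"
```

A zero or near-zero number here would explain stalling backups.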
I read in the documentation that the temp storage should be at least as large as the single biggest file you will have to back up. Which made me think: of course, that explains it. Since /tmp is now bigger than it was before, I will need to see if that changes anything, but I will surely have another look when a backup stalls. By the way, there is one remote client (a Windows laptop) which sometimes just stalls, but I think that's because it goes into standby. I have used the pre- and post-backup scripts to temporarily put it in performance mode, but that doesn't seem to work all the time, or at least that's what I think happens. Anyway, it's my dad's laptop, and I can see the backups resume later; enough backups are made, so I don't think it's that important.
Another thing to note is that I was able to wake up the remote laptop at a set time in the night. And I was able to configure my backup server to do something similar: it resumes from hibernation 3 times a day, performs the backups it can, and then hibernates again. It's a bit risky because storage doesn't like that kind of thing, but it seems to work fine. Also, I was able to encrypt /var/urbackup and the backup storage with LUKS, and because I use hibernation and resume, I don't need to SSH into it to unlock the LUKS devices. But I am deviating from the point.
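The timed resume from hibernation can be done with `rtcwake` from util-linux; a hedged sketch (the helper name is mine, and it assumes root privileges and working suspend-to-disk on the machine):

```shell
# Hibernate now and program the RTC alarm to wake the machine after $1 seconds.
# Assumes util-linux rtcwake, root privileges, and working suspend-to-disk.
hibernate_for() {
    rtcwake -m disk -s "$1"
}
```

For example, `hibernate_for 28800` would bring the machine back up after 8 hours, which could be driven from a post-backup script to get the 3-times-a-day cycle.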
Maybe I should write a cookbook on the wiki about hibernation/resume and LUKS, but at the moment I'm too busy just trying things out "because I can".
…rant done
TL;DR:
The point is: it's not a big problem. I've got a lot of customization going on, so some things are expected to break, but when that happens I will check and investigate, and maybe even compile it with debug symbols if needed. For now, everything seems to work nicely.