One of my clients has stopped backing up successfully. When I looked at the server log to investigate, I saw a flood of the same messages.
2026-03-24 14:37:36: WARNING: Failed to write to file… waiting…
2026-03-24 14:37:37: Write failed. errno=28
2026-03-24 14:37:37: WARNING: Failed to write to file… waiting…
2026-03-24 14:37:46: Write failed. errno=28
2026-03-24 14:37:46: WARNING: Failed to write to file… waiting…
2026-03-24 14:37:46: Write failed. errno=28
2026-03-24 14:37:46: WARNING: Failed to write to file… waiting…
2026-03-24 14:37:47: Write failed. errno=28
2026-03-24 14:37:47: WARNING: Failed to write to file… waiting…
2026-03-24 14:37:56: Write failed. errno=28
2026-03-24 14:37:56: WARNING: Failed to write to file… waiting…
My log is in debug mode but I can’t see where it’s actually failing. The error starts right in the middle of server initialization.
2026-03-24 14:33:36: Connecting to target service…
2026-03-24 14:33:36: Connecting to target service…
2026-03-24 14:33:36: Established internet connection. Service=0
2026-03-24 14:33:36: Established internet connection. Service=0
2026-03-24 14:33:36: Authed+capa for client ‘********’ (encrypted-v2, compressed-zstd, token auth) - 1 spare connections
2026-03-24 14:33:36: Getting client settings…
2026-03-24 14:33:36: Encrypting with key /[redacted redacted redacted]/ (server)
2026-03-24 14:33:36: Flushing FileClient…
2026-03-24 14:33:36: Encrypting with key /[redacted redacted redacted]/ (server)
2026-03-24 14:33:36: Write failed. errno=28
2026-03-24 14:33:36: WARNING: Failed to write to file… waiting…
2026-03-24 14:33:36: Authed+capa for client ‘********’ (encrypted-v2, compressed-zstd, token auth) - 1 spare connections
2026-03-24 14:33:36: Channel message: STARTUP timestamp=1774042197
2026-03-24 14:33:36: Connecting to target service…
2026-03-24 14:33:36: Connecting to target service…
2026-03-24 14:33:36: Established internet connection. Service=0
2026-03-24 14:33:36: Established internet connection. Service=0
My storage seems fine: other clients can back up, and both verify-hashes and remove-unknown complete without error.
Any ideas as to what this error might be?
uroni
March 24, 2026, 11:26pm
Sry, but here is the ChatGPT answer… which is good enough for such a common problem.
That error is pretty straightforward once you decode the errno:
errno=28 is ENOSPC: “No space left on device.”
Your system (or the filesystem the program is writing to) has run out of available space, so every write attempt fails and the app keeps retrying.
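As a quick sanity check, you can decode any errno number yourself rather than memorizing them (a one-liner sketch, assuming python3 is on the box):

```shell
# Decode errno 28 into its human-readable message
python3 -c 'import os; print(os.strerror(28))'
# prints: No space left on device
```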
What’s happening
The program tries to write to a file
The OS returns error 28
The program logs a warning and retries
It keeps failing because the underlying issue (disk full) isn’t resolved
Common causes
Disk/partition is completely full
Temporary directory (e.g. /tmp) is full
A mounted volume (Docker volume, network drive, etc.) is full
Disk quotas exceeded (even if overall disk has space)
Too many inodes used (rare, but possible with tons of small files)
How to fix it
Try these checks:
1. Check disk space
df -h
2. Check inode usage
df -i
3. Find large files
du -sh /* 2>/dev/null
4. Clean up large or old files (rotated logs, caches, stale temp files)
5. Check quotas (if on shared system)
quota -v
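Put together, the checks above can be run as one short script (the mount points below are examples; substitute your backup storage path):

```shell
#!/bin/sh
# Disk-pressure triage: free space, inodes, and the biggest offenders.
df -h /tmp /               # free space (look for Use% = 100%)
df -i /tmp /               # inode exhaustion shows up as IUse% = 100%
# Five largest entries under /tmp (errors from unreadable dirs suppressed)
du -sh /tmp/* 2>/dev/null | sort -h | tail -n 5
```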
If you tell me what program produced this log (database, backup tool, etc.), I can point you to the exact place it’s likely filling up.
Ok, we got somewhere!
My NAS (where backup data is stored) still has ~4TB available, so I was certain that it wasn’t a disk space issue.
BUT, it looks like /tmp is full
root@urbackup:/tmp# df -h /tmp
Filesystem Size Used Avail Use% Mounted on
tmpfs 2.0G 2.0G 0 100% /tmp
root@urbackup:/tmp# ls -lah
total 2.0G
drwxrwxrwt 9 root root 320 Mar 27 21:10 .
drwxr-xr-x 17 root root 314 Feb 17 22:56 ..
-rw------- 1 urbackup urbackup 0 Mar 27 21:15 cps.0JAX0e
-rw------- 1 urbackup urbackup 9.9M Mar 27 21:15 cps.CNC97x
-rw------- 1 urbackup urbackup 4.7M Mar 24 16:54 cps.cxM6KX
-rw------- 1 urbackup urbackup 2.0G Mar 27 21:15 cps.fGzMes
-rw------- 1 urbackup urbackup 0 Mar 27 21:15 cps.jTbLSK
-rw------- 1 urbackup urbackup 0 Mar 27 21:15 cps.nykfQs
-rw------- 1 urbackup urbackup 0 Mar 27 21:15 cps.yRmOLz
drwxrwxrwt 2 root root 40 Mar 24 16:08 .font-unix
drwxrwxrwt 2 root root 40 Mar 24 16:08 .ICE-unix
drwx------ 3 root root 60 Mar 24 16:08 systemd-private-3f1d67d3ce27450ebefffe07bd5bdc7a-exim4.service-mFj9sW
drwx------ 3 root root 60 Mar 24 16:08 systemd-private-3f1d67d3ce27450ebefffe07bd5bdc7a-rsyslog.service-DduVBj
drwx------ 3 root root 60 Mar 24 16:08 systemd-private-3f1d67d3ce27450ebefffe07bd5bdc7a-systemd-logind.service-hf38kk
drwxrwxrwt 2 root root 40 Mar 24 16:08 .X11-unix
drwxrwxrwt 2 root root 40 Mar 24 16:08 .XIM-unix
Does /tmp need to be large enough to contain the largest backed up file? I’m positive I’ve backed up files larger than 2GB before.
Even with the urbackupsrv service stopped, there is an insane number of open file handles in /tmp
root@urbackup:/tmp# lsof | grep /tmp/cps | wc -l
480
Is there any way to inspect what that 2 GB file consuming all the space in /tmp actually is?
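To see which process holds a given temp file open, `lsof /tmp/cps.fGzMes` would name it directly; a dependency-free sketch using /proc does the same (the `cps.*` pattern matches the files in the listing above):

```shell
#!/bin/sh
# Walk every process's open file descriptors and report any that point
# at cps.* temp files in /tmp. Works without lsof installed.
for fd in /proc/[0-9]*/fd/*; do
  target=$(readlink "$fd" 2>/dev/null) || continue
  case "$target" in
    /tmp/cps.*)
      pid=${fd#/proc/}; pid=${pid%%/*}
      printf 'pid %s -> %s\n' "$pid" "$target"
      ;;
  esac
done
```

A temp file that was removed while still open (a common pattern for scratch files) shows up with a “(deleted)” suffix in the link target, which is why space can stay consumed even after the file disappears from `ls`.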
Might be coincidental, but this started right after I upgraded to 2.5.36.
Running Urbackup from .deb package on Debian 13 VM, no docker.
Looks like the culprit was the upgrade to Debian 13. It switches /tmp from a folder (or separate partition) to a tmpfs volume with a maximum size of 50% of memory.
# systemctl mask tmp.mount
After a reboot, /tmp is no longer a ramdisk.
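For reference, an alternative to masking is keeping the tmpfs but raising its size cap with a systemd drop-in override (a sketch; size=8G is an example value, and the other mount options mirror the stock tmp.mount unit):

```shell
# Create a drop-in for tmp.mount (opens an editor):
#   systemctl edit tmp.mount
# Drop-in contents:
#   [Mount]
#   Options=mode=1777,strictatime,nosuid,nodev,size=8G
# Then apply it (restarting tmp.mount while /tmp is in use may disrupt
# running services, so prefer doing this in a maintenance window):
systemctl daemon-reload
systemctl restart tmp.mount
```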