My backup storage is ballooning in size for a particular host, and I’m trying to troubleshoot this issue. After looking through the files, I discovered a bunch of older files that I can’t imagine UrBackup is still using, but given its sophisticated incremental mechanism, the last thing I want to do is take a chainsaw to the file structure and break everything.
As such, I’m trying to use urbackupsrv remove-unknown to have the system do this for me. However, when I run the command, I get an error:
# urbackupsrv remove-unknown
2024-01-06 19:00:29: Going to remove all unknown files and directories in the urbackup storage directory. Waiting 20 seconds...
2024-01-06 19:00:49: Shutting down all database instances...
2024-01-06 19:00:49: Opening urbackup server database...
2024-01-06 19:00:49: Testing if backup destination can handle subvolumes and snapshots...
Testing for btrfs...
TEST FAILED: Creating test btrfs subvolume failed
Testing for zfs...
TEST FAILED: Dataset is not set via /etc/urbackup/dataset
2024-01-06 19:00:49: Backup destination cannot handle subvolumes and snapshots. Snapshots disabled.
2024-01-06 19:00:49: ERROR: Error getting free space
2024-01-06 19:00:49: ERROR: Error cleaning up.
I have my backup directory on an NFS share, and I’m not sure if that is the cause of this error, or if it’s entirely unrelated. It’s been working reliably for me over the last few years in this configuration, apart from the aforementioned space issue.
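In case it helps, the “Error getting free space” line is what made me suspect the mount in the first place. A quick sanity check would be something like the following, with /mnt/backups standing in as a placeholder for whatever your UrBackup storage path actually is:

# /mnt/backups is a placeholder for the UrBackup storage path on the NFS share
df -h /mnt/backups     # free space as reported to userland
stat -f /mnt/backups   # filesystem-level block and free counts

If those report sensible numbers, the mount itself can at least answer free-space queries, which is the step the cleanup appears to trip over.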
I think I fixed the problem, though it seems like a weird one.
While searching around for other troubleshooting steps, I came across this post, which suggested deleting the fileIndex folder and letting the system rebuild it. I deleted the folder back on Jan 6, and sure enough, the system rebuilt it as expected.
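If anyone wants to try the same thing, the rough procedure would look like this. The fileIndex path and service name here are just what they look like on a stock install, so double-check them on yours; moving the folder aside instead of deleting it outright also leaves you an easy way back:

# stop the server so nothing touches the index while it's moved
systemctl stop urbackupsrv
# set the fileIndex folder aside rather than deleting it outright
mv /var/urbackup/fileindex /var/urbackup/fileindex.old
# on the next start the server rebuilds the index
systemctl start urbackupsrv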
One unexpected side effect, though, was that running remove-unknown then produced an access error:
# urbackupsrv remove-unknown
2024-01-08 17:22:24: Going to remove all unknown files and directories in the urbackup storage directory. Waiting 20 seconds...
2024-01-08 17:22:44: Shutting down all database instances...
2024-01-08 17:22:44: Opening urbackup server database...
2024-01-08 17:22:44: WARNING: SQLite: cannot open file at line 39363 of [3bfa9cc97d] errorcode: 14
2024-01-08 17:22:44: WARNING: SQLite: os_unix.c:39363: (13) open(/var/urbackup/backup_server_files.db) - errorcode: 14
2024-01-08 17:22:44: Could not open db [urbackup/backup_server_files.db]
2024-01-08 17:22:44: ERROR: Database "urbackup/backup_server_files.db" couldn't be opened
2024-01-08 17:22:44: ERROR: Couldn't open backup server database. Exiting. Expecting database at "/var/urbackup/backup_server_files.db"
When I checked permissions in the /var/urbackup/ folder, everything looked OK at first, but then I realized that the newly created .db files were owned by root, not urbackup.
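For anyone checking the same thing, the ownership is easy to see directly (changing ownership is the obvious alternative fix, though as explained below it wouldn’t have helped in my case):

# list ownership of the server databases in the data directory
ls -l /var/urbackup/*.db
# handing them back to the urbackup user is one option, but only useful if
# that user can actually reach the backup storage (mine can't, see below)
chown -R urbackup:urbackup /var/urbackup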
I run UrBackup as root since it’s on a containerized system, but it wasn’t obvious until I looked at the strace output that urbackupsrv drops to the urbackup user unless it’s explicitly told to run as a different user with the -u flag. I had assumed it either used the launching user (which was root) or whichever user was specified in the config file (also root).
Once I explicitly told it to run as root, everything worked.
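For reference, the working invocation was something along the lines of:

urbackupsrv remove-unknown -u root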
Since the urbackup user does not have permission to access the NFS mount, it makes sense that I was getting the previous errors.