UrBackup performance collapse: DB size 130GB+, 100% Disk IO, and "Unknown mode: -1" errors

Hi everyone,

I’ve been using UrBackup for about 3 years, but I’ve recently hit a wall where backups are failing or taking days to complete. I would appreciate any advice on whether this is salvageable or if a complete reinstall is required.

My Setup:

OS: Ubuntu 22.04 (Server installed from package).

Storage: Hardware RAID 5 (56TB total).

File System: Btrfs with deduplication enabled.

Clients: 70+ clients (including several virtual sub-clients).

Heavy Load: 3 file servers (~10TB total, millions of small Office files).

Retention: 25 incremental / 10 full file backups per server.

The Symptoms:

Extreme Slowdown: File backups used to take 3–6 hours. Now they don’t finish even after 48 hours. I previously split file servers into sub-clients thinking it would be faster, but it didn’t help.

Disk IO Saturation: During backups, the array is at 100% load. Write speeds drop to 5–6 Mbps even with only 2-3 parallel tasks.

Stuck Processes: Backups often hang for 5–6 hours at the “Syncing file system…” stage. In the “Activities” tab, “Files in queue” often stays at 10,000+ without decreasing.
Btrfs usage:

/usr/bin/btrfs device usage /media/backup/
/dev/mapper/vgbk-lv-0, ID: 1
    Device size:    69.12TiB
    Device slack:      0.00B
    Data,single:    50.97TiB
    Metadata,DUP:    1.21TiB
    System,DUP:     64.00MiB
    Unallocated:    16.94TiB

Database Issues: My database files have grown significantly:

backup_server_files.db: 132 GB
backup_server.db: 100 MB
backup_server_links.db-wal: 409 MB

When running cleanup, the logs are filled with warnings about the database file being truncated due to its size.
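Before deciding on a wipe, it may be worth checking whether the file-entry database itself is damaged, using SQLite's built-in integrity check. The path and service name below are assumptions based on the stock Ubuntu package layout; the demo lines run the same check on a throwaway database so you can see what a healthy result looks like:

```shell
# On the real server (paths are assumptions; stop the service first so the
# database is not being written to while we read it):
#   sudo systemctl stop urbackupsrv
#   sqlite3 /var/urbackup/backup_server_files.db "PRAGMA integrity_check;"
#   sudo systemctl start urbackupsrv
# Demo of the same check on a throwaway database:
DB=$(mktemp)
sqlite3 "$DB" "CREATE TABLE t(x INTEGER);"
result=$(sqlite3 "$DB" "PRAGMA integrity_check;")
echo "$result"    # a healthy database prints: ok
rm -f "$DB"
```

On a 132 GB file the check itself can take a long time, but anything other than "ok" would explain truncation warnings better than size alone.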

Image Backup Failures: Image backups are now failing with the following error:
Errors: Error creating empty image subvolume. “Unknown mode: -1”
Errors: Error creating image backup destination.

Questions:

Is there any way to optimize this existing installation without a full wipe? (e.g., DB vacuuming, Btrfs tuning).

If I reinstall, what is the recommended storage stack for this scale? Should I stick with Btrfs, move to ZFS, or abandon deduplication and use EXT4?

Is there a better way to handle millions of small files? I prefer file backups over images because mounting images via the Web UI to restore a single Word doc has been unreliable for us.
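On the vacuuming part of the first question: a minimal sketch of what an offline VACUUM could look like, assuming the stock Ubuntu package paths and service name. Note that VACUUM rewrites the entire file, so it needs free space at least equal to the database size and can take hours on 132 GB. The demo lines show the effect on a throwaway database:

```shell
# On the real server (paths and service name are assumptions):
#   sudo systemctl stop urbackupsrv
#   sqlite3 /var/urbackup/backup_server_files.db "VACUUM;"
#   sudo systemctl start urbackupsrv
# Demo: insert ~1 MB, delete it, and watch VACUUM reclaim the free pages.
DB=$(mktemp)
sqlite3 "$DB" "CREATE TABLE t(x); INSERT INTO t VALUES (randomblob(1000000)); DELETE FROM t;"
before=$(stat -c%s "$DB")
sqlite3 "$DB" "VACUUM;"
after=$(stat -c%s "$DB")
echo "before=${before} bytes, after=${after} bytes"
rm -f "$DB"
```

Whether this helps depends on how much of the 132 GB is actually free pages; if the database is densely packed with live rows, VACUUM will not shrink it much.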

Thanks in advance for any help!

It is probably the backup storage (btrfs) that is the bottleneck and not the database. You should be able to find that out easily.
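One low-effort way to find that out, assuming a Linux server: sample /proc/diskstats while a backup runs and compute the real write throughput, then compare it against utilization (iostat -x 5 from the sysstat package shows %util directly). A minimal sketch:

```shell
# Pick the block device backing the backup array; the auto-pick below just
# grabs the first entry in /proc/diskstats, so replace it with the dm device
# behind /media/backup on a real system.
DEV=$(awk 'NR==1 {print $3}' /proc/diskstats)
# Field 10 of /proc/diskstats is sectors written (512-byte sectors).
w1=$(awk -v d="$DEV" '$3==d {print $10; exit}' /proc/diskstats)
sleep 2
w2=$(awk -v d="$DEV" '$3==d {print $10; exit}' /proc/diskstats)
kibps=$(( (w2 - w1) * 512 / 2 / 1024 ))
echo "$DEV: ${kibps} KiB/s written"
```

If utilization sits at 100% while the measured throughput is only a few MB/s, the array (or the filesystem on top of it) is the bottleneck, not the UrBackup database.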

That sounds like more than a performance issue?

The main fix for such a bottleneck is to make sure the storage has enough provisioned IOPS. The appliance runs various management tasks that keep it performing closer to expectations when the storage is overloaded.
Btrfs can also be bottlenecked by other things, such as CPU. I fixed this for a customer recently (also for the appliance).
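A rough check for the CPU-bound case, again assuming a Linux host: look at the accumulated CPU time of the btrfs kernel worker threads while a backup runs. Large TIME values on btrfs-transaction or delayed-ref workers point at metadata processing rather than raw disk bandwidth:

```shell
# List btrfs kernel worker threads and their accumulated CPU time.
# (ps truncates thread names to 15 characters, so btrfs-transaction
# appears as "btrfs-transacti".)
report=$(ps -eo comm,time | grep -i btrfs || echo "no btrfs threads on this host")
echo "$report"
```

Running this twice, an hour apart during a stuck backup, shows how fast those counters grow; a worker burning whole CPU-hours is a strong hint the filesystem, not the disk, is the limit.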

I meant the messages that appear when cleaning up lost files from the system.

I see that this is a performance issue with the disk subsystem. I increased the memory on the server from 12 to 22 GB, but it didn’t really help.
I marked the images that the system was complaining about as deleted, ran the lost file cleanup, and the error seemed to disappear.

Info 02/25/26 04:11 Starting scheduled full image backup of volume “C:”…
Info 02/25/26 04:11 Basing image backup on last incremental or full image backup
Info 02/25/26 04:11 Creating writable snapshot of previous image backup…
Errors 02/25/26 04:11 Could not create snapshot of previous image backup at 260112-2322_Image_C “Unknown mode: -1”
Errors 02/25/26 04:11 Error creating image backup destination.
Info 02/25/26 04:11 Time taken for backing up client CT-RT: 2s
Errors 02/25/26 04:11 Backup failed

But it turns out that this needs to be done for each server separately, and it only helps with images.

I looked at your messages. I understand you’ve patched the system. Is there any way to obtain these patches?
If you need any logs or data, I’m happy to provide them. I really want to understand and fix the system. I like, or rather, liked, how BTRFS works, especially the deduplication, but now my file backups have practically stopped.

This is just normal operation.

I linked to the patches… The problem is that your btrfs (likely) is bottlenecked somewhere else. For example in the delayed ref processing. You can search in the archive above about me talking about that :wink:

I deleted some of the old copies that the server was complaining about during backup, ran the process of cleaning up lost files, and the backups seemed to work, but the joy was short-lived.
Now I’m getting these errors:
Errors 28.02.26 21:08 Could not create snapshot of previous image backup at 260226-1222_Image_C “Unknown mode: -1”
Errors 28.02.26 21:08 Error creating image backup destination.
Errors 28.02.26 21:08 Backup failed

And on the new client, I got this:
Errors 28.02.26 22:06 Error creating empty image subvolume. “Unknown mode: -1”
Errors 28.02.26 22:06 Error creating image backup destination.
Errors 28.02.26 22:06 Backup failed
I tried to create a 7 TB disk image in compressed VHD format.
When I switched to plain VHD, the server refused to create a VHD image of that size (fair enough, since VHD does not allow more than 2 TB).
I've now switched to the VHDX option and will see what happens.

By the way, I just noticed today that the option to create a RAW image has disappeared for some reason. I thought it was there before?

This got me thinking: could it be that during an OS update or something else (I don't think I changed anything myself), the image format in my settings changed at some point, and now the system can't continue creating the existing images? It can't report that as the actual problem, so it just keeps giving me the error "Could not create snapshot of previous image".