Backup to S3 queues but does not write after some time

Hello,

We want to use the Infscape UrBackup Appliance for our customers and are currently testing it on a few test systems, but we are running into issues when backing up to S3 storage.

The backup starts and does its job, but stops at around 1 TB and then just does nothing. It keeps queuing, but no longer writes to the S3 storage.

I don’t know what the issue is or how to troubleshoot it, and I would appreciate any help. If you need additional details, please let me know and I will provide them.

The S3 storage is from contabo.com.

Server OS: Debian 11, kernel 5.10.206
Infscape UrBackup Appliance 1.11.3
UrBackup Server: v2.5.32.0

The logs show the following:

2024-03-18 22:33:28: ERROR: Error starting backup of internal database: disabled
2024-03-19 01:40:46: ERROR: Error starting backup of internal database: disabled
2024-03-19 03:01:26: ERROR: Error while downloading version info from http://appupdate2.urbackup.org/appcbtclient/2.5.x/update/version.txt: HTTP response code said error(ec=22), The requested URL returned error: 404 Not Found
2024-03-19 03:01:26: ERROR: Error while downloading version info from http://cbt.urbackup.com/autoupdate/89535264-f18b-4c6e-a292-f0a82af40b8e/2.5.x/version_osx.txt: HTTP response code said error(ec=22), The requested URL returned error: 404 NOT FOUND
2024-03-19 03:01:26: ERROR: Error while downloading version info from http://appupdate2.urbackup.org/appcbtclient/2.5.x/update/version_linux.txt: HTTP response code said error(ec=22), The requested URL returned error: 404 Not Found
2024-03-19 03:01:27: ERROR: Error downloading dataplan database: HTTP response code said error(ec=22), The requested URL returned error: 404 Not Found
2024-03-19 06:41:47: ERROR: Error starting backup of internal database: disabled

journalctl for the service shows:

Mar 19 03:01:27 UrBackup urbackupsrv[3549722]: Delete subvolume (commit): '/media/backup/backups/exchange02.aaa.com/240212-1837_Image_C'
Mar 19 03:01:27 UrBackup urbackupsrv[3549730]: Delete subvolume (commit): '/media/backup/backups/exchange02.aaa.com/240212-1730_Image_D'
Mar 19 03:01:27 UrBackup urbackupsrv[3549738]: Delete subvolume (commit): '/media/backup/backups/exchange02.aaa.com/240212-1730_Image_L'
Mar 19 03:01:27 UrBackup urbackupsrv[3549767]: Delete subvolume (commit): '/media/backup/backups/exchange02.aaa.com/240212-1730_Image_P'
Mar 19 03:01:27 UrBackup urbackupsrv[3549775]: Delete subvolume (commit): '/media/backup/backups/daten.aaa.com/240213-0643'
Mar 19 03:01:28 UrBackup urbackupsrv[3549794]: Delete subvolume (commit): '/media/backup/backups/daten.aaa.com/240213-1144'
Mar 19 03:01:28 UrBackup urbackupsrv[3549808]: Delete subvolume (commit): '/media/backup/backups/daten.aaa.com/240213-1645'
Mar 19 03:01:28 UrBackup urbackupsrv[3549830]: Delete subvolume (commit): '/media/backup/backups/daten.aaa.com/240213-2146'
Mar 19 03:01:28 UrBackup urbackupsrv[3549842]: Delete subvolume (commit): '/media/backup/backups/daten.aaa.com/240214-0247'
Mar 19 03:01:28 UrBackup urbackupsrv[3549855]: Delete subvolume (commit): '/media/backup/backups/daten.aaa.com/240214-0749'
Mar 19 03:01:28 UrBackup urbackupsrv[3549869]: Delete subvolume (commit): '/media/backup/backups/daten.aaa.com/240214-1250'
Mar 19 03:01:29 UrBackup urbackupsrv[3549885]: Delete subvolume (commit): '/media/backup/backups/daten.aaa.com/240214-1753'
Mar 19 03:01:29 UrBackup urbackupsrv[3549907]: Delete subvolume (commit): '/media/backup/backups/daten.aaa.com/240214-2256'
Mar 19 03:01:29 UrBackup urbackupsrv[3549920]: Delete subvolume (commit): '/media/backup/backups/daten.aaa.com/240215-0357'
Mar 19 03:01:29 UrBackup urbackupsrv[3549932]: Delete subvolume (commit): '/media/backup/backups/daten.aaa.com/240215-0858'
Mar 19 03:01:29 UrBackup urbackupsrv[3549946]: Delete subvolume (commit): '/media/backup/backups/daten.aaa.com/240215-1359'
Mar 19 03:01:29 UrBackup urbackupsrv[3549958]: Delete subvolume (commit): '/media/backup/backups/daten.aaa.com/240215-1902'
Mar 19 03:01:30 UrBackup urbackupsrv[3549971]: Delete subvolume (commit): '/media/backup/backups/daten.aaa.com/240216-0003'
Mar 19 03:01:30 UrBackup urbackupsrv[3549988]: Delete subvolume (commit): '/media/backup/backups/daten.aaa.com/240216-0504'
Mar 19 03:01:30 UrBackup urbackupsrv[3550001]: Delete subvolume (commit): '/media/backup/backups/daten.aaa.com/240216-1005'
Mar 19 03:01:30 UrBackup urbackupsrv[3550018]: Delete subvolume (commit): '/media/backup/backups/daten.aaa.com/240216-1506'
Mar 19 03:01:30 UrBackup urbackupsrv[3550031]: Delete subvolume (commit): '/media/backup/backups/daten.aaa.com/240216-2009'
Mar 19 03:01:30 UrBackup urbackupsrv[3550043]: Delete subvolume (commit): '/media/backup/backups/daten.aaa.com/240217-0110'
Mar 19 03:01:31 UrBackup urbackupsrv[3550059]: Delete subvolume (commit): '/media/backup/backups/daten.aaa.com/240217-0611'
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 15514 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 15511 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 15512 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 15515 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 15510 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 15708 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 15706 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 15707 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 15710 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 15705 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 15886 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 15883 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 15884 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 15885 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 15887 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 16067 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 16084 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 16116 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 16124 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 16125 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 16211 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 16227 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 16256 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 16259 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 16400 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 16401 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 16422 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 16441 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 16442 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 16582 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 16585 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 16622 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 16634 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 16635 is gone
Mar 19 03:01:43 UrBackup urbackupsrv[3550067]: Subvolume id 16734 is gone
Mar 19 06:41:47 UrBackup urbackupsrv[681]: ERROR: Error starting backup of internal database: disabled

The relevant log file for S3 storage would be at /var/log/clouddrive.log.

You can run

echo "a=set_loglevel&loglevel=debug" > /media/clouddrive_vol/cmd

to increase verbosity.
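
For example, to watch the S3 activity while a backup is queued, following the log with standard shell tools should be enough (log path as above):

tail -f /var/log/clouddrive.log

or, to look back for recent errors:

grep -iE "error|warn" /var/log/clouddrive.log | tail -n 50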

At the bottom of the web interface there is a “Report problem” link with which you can upload the log files.

Thank you for your quick answer.

I checked clouddrive.log, but there is nothing of interest in there:

2023-12-07 10:13:34: WARNING: AWS-Client: Retry Strategy will use the default max attempts.
2023-12-07 10:40:30: WARNING: AWS-Client: Retry Strategy will use the default max attempts.
2023-12-07 11:42:32: WARNING: AWS-Client: Retry Strategy will use the default max attempts.
2023-12-07 14:52:14: WARNING: AWS-Client: Retry Strategy will use the default max attempts.
2023-12-15 12:18:32: WARNING: AWS-Client: Retry Strategy will use the default max attempts.
2024-01-16 03:07:01: WARNING: AWS-Client: Retry Strategy will use the default max attempts.
2024-02-29 10:48:25: WARNING: AWS-Client: Retry Strategy will use the default max attempts.
2024-03-05 07:25:42: WARNING: AWS-Client: Retry Strategy will use the default max attempts.
2024-03-05 21:44:50: WARNING: AWS-Client: Retry Strategy will use the default max attempts.
2024-03-05 22:09:25: WARNING: AWS-Client: Retry Strategy will use the default max attempts.
2024-03-08 10:04:54: WARNING: AWS-Client: Retry Strategy will use the default max attempts.
2024-03-13 07:45:57: WARNING: AWS-Client: Retry Strategy will use the default max attempts.
2024-03-13 07:57:17: WARNING: AWS-Client: Retry Strategy will use the default max attempts.

I tried setting the debug level, but the directory /media/clouddrive_vol does not exist. I created the folder and executed the command anyway.
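
For reference, I looked for the control directory with a plain directory listing:

ls /media/

and clouddrive_vol was not listed there, which is why I created it by hand before writing to the cmd file.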

I also used “Report problem” to upload the details.

I hope you can assist me further with the provided details.

OK, in your setup it is the RAID that is synced to S3, so the relevant log file would be /var/log/raiddrive.log. There you’ll find some QuotaExceeded error messages.

Increasing log verbosity would be done with echo "a=set_loglevel&loglevel=debug" > /media/raiddrive_vol/cmd, but that would not be necessary since the error is as above…
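
To confirm without raising the log level, something along these lines should list the quota errors (path as above; the exact wording of the message may differ):

grep -i quotaexceeded /var/log/raiddrive.log | tail -n 20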

From my side I guess this needs proper error events.

The number of objects is limited to 3 million per customer by default.
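
To check how close the bucket is to that limit, a summary listing against the S3-compatible endpoint should work; the bucket name and endpoint URL below are placeholders for the actual Contabo values:

aws s3 ls s3://YOUR-BUCKET --recursive --summarize --endpoint-url https://YOUR-CONTABO-ENDPOINT | tail -n 2

The "Total Objects:" line at the end is the current object count.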

It is quite hard to build a scalable, robust storage system like S3, as far as I know. Using Ceph is a good start, though…

That was the issue.
We contacted Contabo to increase the limit and now it syncs.

Thank you for the help!