Error between client backup and cloud storage

I have a question.
I have configured a new server in cloud storage mode with a cache disk, and a single client that totals 1.6TB of data to back up.
I forgot to set the cloud storage size correctly (it was 1TB), so the backup stopped and asked for a backup reset.

Now that I have enlarged the cloud storage the backup works correctly, but there is an error: there should not be that much data on the cloud storage. I think the backup reset did not delete things properly. The cloud storage shows 1.94TB.

How can I correct this problem?

Thanks in advance

Here are the screenshots.




I guess you are using appliance version 10.5 and have waited for one (nightly) cleanup?

If you are up for some command-line work, please post the output of the following (after sudo -i):

btrfs fi usage /media/backup
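
For example, to capture the output to a text file that is easy to paste here (the /tmp path is just a suggestion):

btrfs fi usage /media/backup | tee /tmp/btrfs-usage.txt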

Also, a screenshot of the output of the following would help (it shows fragmentation):

apt update
apt install btrfs-heatmap
btrfs-heatmap /media/backup
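
Note that btrfs-heatmap writes a PNG image rather than printing to the terminal; as far as I know it drops the file into the current working directory, so running it from somewhere convenient makes the image easier to find (the /tmp path is just a suggestion):

cd /tmp
btrfs-heatmap /media/backup
ls /tmp/*.png    # the generated heatmap image to attach here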

I don’t know exactly what you mean by this:

I guess it stopped the backup with error messages, then deleted it later?

btrfs fi usage /media/backup:
[Screenshot from 2021-05-04 17-27-35: btrfs fi usage output]

btrfs-heatmap /media/backup

Regarding “stopped asking for a backup reset”:

When the backup had filled the 1TB, a message appeared on the status page asking me to reset the space error.

My S3 provider reports 1.94TB of data. The version is 10.5 and yes, there have been several cleanups.

Looks good (unfortunately, as far as narrowing down the issue goes).

Could you post the output of

echo "SELECT * FROM tasks;" | sqlite3 /media/clouddrive_cache/objects.db

Plus perhaps send me the file(s) /var/log/clouddrive.log*
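
If the full dump is too much to post, a quick row count also tells us whether deletion tasks are still queued (assuming, as I read it, that pending deletion work sits in the tasks table):

echo "SELECT COUNT(*) FROM tasks;" | sqlite3 /media/clouddrive_cache/objects.db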

echo "SELECT * FROM tasks;" | sqlite3 /media/clouddrive_cache/objects.db

[Screenshot from 2021-05-04 18-21-39: output of the tasks query]

How can I send you the log file?

Via PM here, via email to support@infscape.com, or by using the “Report problem” link at the bottom of the appliance web interface.

You could also take a look at /var/log/clouddrive.log yourself. It may show why the deletion tasks fail…
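
If you go the email route, something like this bundles the rotated log files into one archive (the file name and path are just a suggestion):

tar czf /tmp/clouddrive-logs.tar.gz /var/log/clouddrive.log*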

It could have something to do with the sharding, but unfortunately I can’t confirm that via the log files…

Could you run

echo "info" > /media/clouddrive_cache/log_level_override

reboot, wait a while, and then resubmit the log files? Thanks!
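
Roughly this sequence, as root (the tar step and file names are only my suggestion for gathering the logs):

echo "info" > /media/clouddrive_cache/log_level_override
reboot
# once the appliance is back up, let it run for a while,
# then collect the logs again, e.g.:
tar czf /tmp/clouddrive-logs.tar.gz /var/log/clouddrive.log*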

From the log you sent me, my current hypothesis is that it simply takes a long time to delete. This might be exacerbated by the non-shard-optimized delete (because of https://tracker.ceph.com/issues/41642 ).

You can verify it is still deleting by going to Settings -> System -> Advanced: Access server statistics (netdata) (you might need to do this twice). Then go to storage -> clouddrive and look at “del ops/s” under “Asynchronous IO operations”.
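
If you prefer the command line to netdata, you could also watch the task queue shrink over time (again assuming the tasks table is where pending deletions are queued; that is my assumption, not documented behaviour):

watch -n 60 'echo "SELECT COUNT(*) FROM tasks;" | sqlite3 /media/clouddrive_cache/objects.db'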

If I’m not mistaken, the objects are being removed, but slowly, as you can see in the picture?