HTTP request rate limit at Scaleway

Hello,

I need to lower the request rate; it’s limited to 250 per second at Scaleway. How can I do that?

thank you in advance

Could you describe what problems you are experiencing with the request rate?

It automatically backs off on error and retries…
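The retry pattern is essentially exponential backoff with jitter. A minimal sketch of the idea in Python (the delays, cap and exception name are illustrative assumptions, not the appliance’s actual code):

```python
import random
import time

class ThrottledError(Exception):
    """Stands in for an HTTP 503 'Please reduce your request rate' reply."""

def with_backoff(request, max_attempts=8, base_delay=0.8, max_delay=60.0):
    # Retry on throttling, doubling the wait each time (plus jitter so
    # that many clients don't all retry in lockstep).
    for attempt in range(max_attempts):
        try:
            return request()
        except ThrottledError:
            delay = min(max_delay, base_delay * 2 ** attempt)
            time.sleep(delay + random.uniform(0, delay / 2))
    raise RuntimeError(f"still throttled after {max_attempts} attempts")
```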

Since Friday the backups haven’t been working anymore, and I see this in the web GUI log:

Problem with S3 backend: BackupUSER: Error during S3 upload - 01/08/20 10:39 (BackupUSER)
S3 upload of object cd_magic_file failed.
Last errors:
2020-08-01 08:38:37: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2020-08-01 08:38:37: WARNING: AWS-Client: Request failed, now waiting 12800 ms before attempting again.
2020-08-01 08:39:30: WARNING: AWS-Client: Encountered AWSError ‘ServiceUnavailable’: Please reduce your request rate.
2020-08-01 08:39:30: ERROR: AWS-Client: HTTP response code: 503
Exception name: ServiceUnavailable
Error message: Please reduce your request rate. with address : 62.210.*******
5 response headers:
content-type : application/xml
date : Sat, 01 Aug 2020 08:39:30 GMT
transfer-encoding : chunked
x-amz-id-2 : tx19620a8cdfe649b7961ce-005f252a1a
x-amz-request-id : tx19620a8cdfe649b7961ce-005f252a1a
2020-08-01 08:39:30: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.

At this point the operation has been retried a few times. Nevertheless the problem might be transient and get fixed without any action on your side. Backups might hang during that time.
If the problem persists please contact support of your S3 provider.
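(Aside on the “time skew” warnings in the log above: request signatures embed a timestamp, and the SDK compares the local clock against the Date header of the server’s reply. You can measure the skew yourself with a sketch like this; the Scaleway endpoint URL is just an example, adjust it to your region:)

```python
from datetime import datetime, timezone
from email.utils import parsedate_to_datetime
import urllib.error
import urllib.request

req = urllib.request.Request("https://s3.fr-par.scw.cloud", method="HEAD")
try:
    resp = urllib.request.urlopen(req)
except urllib.error.HTTPError as e:
    resp = e  # even an error reply carries a Date header

server_time = parsedate_to_datetime(resp.headers["Date"])
skew = datetime.now(timezone.utc) - server_time
print(f"clock skew vs. server: {skew.total_seconds():.1f} s")
# A skew of more than a few minutes makes signature checks fail.
```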

and:

BackupUSER: Error starting up cloud drive - 01/08/20 07:46 (BackupUSER)
Error starting up cloud drive: Error setting active transactions.
Last errors:
2020-08-01 05:45:59: ERROR: AWS-Client: HTTP response code: 503
Exception name: ServiceUnavailable
Error message: Please reduce your request rate. with address : 62.210.******
5 response headers:
content-type : application/xml
date : Sat, 01 Aug 2020 05:45:59 GMT
transfer-encoding : chunked
x-amz-id-2 : tx84d629a60f2340ecad238-005f250151
x-amz-request-id : tx84d629a60f2340ecad238-005f250151

At this point the operation has been retried a few times. The appliance will continue to retry this operation infinitely.

I’ve seen that there are currently problems with the S3 storage at Scaleway.
I think my problem is related to that.

I’m waiting for a response from technical support.

I sent you a problem report via Infscape: id 138.

I’ll post the tech support answer here as soon as I get it.

My problem is still ongoing; it’s the same error as before.

What is Scaleway support saying?

As said, it backs off if it encounters this error. Also, I doubt it does 250 requests/s, unless Scaleway counts bulk deletes as more than one request.

Other than that, if you reduce the number of CPUs of the VM, it also makes fewer (parallel) requests.
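The idea being that the appliance sizes its pool of parallel upload workers from the CPU count, so fewer CPUs mean fewer concurrent requests. Schematically (a sketch; the one-worker-per-CPU ratio is an assumption, not the appliance’s real tuning):

```python
import os
from concurrent.futures import ThreadPoolExecutor

workers = os.cpu_count() or 1  # assumed: one upload worker per CPU

def upload(key):
    print(f"PUT {key}")  # placeholder for the real S3 upload

with ThreadPoolExecutor(max_workers=workers) as pool:
    # At most `workers` requests are in flight at any moment.
    list(pool.map(upload, (f"000/00/obj_{i}" for i in range(16))))
```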

I don’t have a definitive answer yet. Scaleway had a technical problem on the S3 infrastructure that is now resolved.

Unfortunately my problem persists, but I think it’s a network issue. The IP address in the log is not mine; I think it’s a gateway address, so it aggregates requests from multiple sources.
I have just resubmitted my support request.

I am waiting for their answer.

I just reduced the number of CPUs from 6 to 4 and restarted the VM.

With 4 CPUs -> same problem

but

for the last hour everything has been working fine. Scaleway solved their problem. They made a comment about the number of objects in the bucket: too many.

This is the first time I’ve used S3 storage. What are the best practices with UrBackup?

Usually S3 scales well and there is no limit on the number of objects in a bucket. No idea about the Scaleway implementation.

There is a trick to use multiple buckets (this is not well tested so avoid if possible). You can enter multiple bucket names like bucket1/bucket2 (separated by /) and it will write to the first bucket (bucket1 in the example) while still reading/deleting from all of them.
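So, schematically, writes pin to the first bucket while reads and deletes fan out over all of them. A hedged boto3 sketch of that behaviour (bucket names and the endpoint are examples; this is not the appliance’s actual code):

```python
import boto3

s3 = boto3.client("s3", endpoint_url="https://s3.fr-par.scw.cloud")
buckets = ["bucket1", "bucket2"]  # as entered: "bucket1/bucket2"

def put_object(key, data):
    s3.put_object(Bucket=buckets[0], Key=key, Body=data)  # writes: first bucket only

def get_object(key):
    for bucket in buckets:  # reads: try every bucket in order
        try:
            return s3.get_object(Bucket=bucket, Key=key)["Body"].read()
        except s3.exceptions.NoSuchKey:
            continue
    raise KeyError(key)

def delete_object(key):
    for bucket in buckets:  # deletes: issued against all buckets
        s3.delete_object(Bucket=bucket, Key=key)
```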

I feel like testing a few buckets.

How is the data distributed across the buckets, randomly, by client…?

When I try to add a bucket, I get this error:

Error verifying cloud storage settings:

2020-08-05 16:53:01: AES key length=32
2020-08-05 16:53:01: All depenencies available. Starting up…
2020-08-05 16:53:01: Total available RAM: 31.4112 GB
2020-08-05 16:53:01: Database with identifier “40” couldn’t be opened
2020-08-05 16:53:01: Retrieving object cd_magic_file
2020-08-05 16:53:01: WARNING: AWS-Client: Retry Strategy will use the default max attempts.
2020-08-05 16:53:02: ERROR: Error decompressing zstd: Unknown frame descriptor
2020-08-05 16:53:02: ERROR: Error while decompressing (code: -10)
2020-08-05 16:53:02: ERROR: Error decrypting and decompressing
2020-08-05 16:53:02: Retrieving object 000/00/146_734de90800
2020-08-05 16:53:02: WARNING: Locally stored md5sum of object 000/00/146_734de90800 empty. Using online md5sum from now (ff6a33b9103fa0afb32892249c8434b5)
2020-08-05 16:53:02: ERROR: Error decompressing zstd: Unknown frame descriptor
2020-08-05 16:53:02: ERROR: Error while decompressing (code: -10)
2020-08-05 16:53:02: ERROR: EVENT s3_backend Subj: Error decrypting and compressing msg: Error decrypting and decompressing object 000/00/146_734de90800. Last errors: prio: 2 extra:
2020-08-05 16:53:02: ERROR: Error decrypting and decompressing
2020-08-05 16:53:02: ERROR: Error getting item 000/00/146_734de90800 from cloud drive. Encryption key may be wrong.
2020-08-05 16:53:02: WARNING: Exception: Error getting item 000/00/146_734de90800 from cloud drive. Encryption key may be wrong.

Are you sure the cloud drive encryption key is correct?
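(For context on why a wrong key shows up as a zstd error: objects appear to be compressed and then encrypted, so decrypting with the wrong key yields random bytes instead of a valid zstd frame, and the first thing to fail is the decompressor’s magic-number check, hence “Unknown frame descriptor”. A toy sketch of the effect, with XOR standing in for the real cipher:)

```python
import zstandard

def xor(blob: bytes, key: bytes) -> bytes:
    # Toy stand-in for the real encryption; XOR is its own inverse.
    return bytes(b ^ key[i % len(key)] for i, b in enumerate(blob))

payload = zstandard.ZstdCompressor().compress(b"backup payload")
stored = xor(payload, b"correct-key")  # compress, then "encrypt"

# Right key: decrypt, then decompress successfully.
print(zstandard.ZstdDecompressor().decompress(xor(stored, b"correct-key")))

# Wrong key: decryption yields garbage, so zstd rejects the frame.
try:
    zstandard.ZstdDecompressor().decompress(xor(stored, b"wrong-key!!"))
except zstandard.ZstdError as err:
    print("wrong key ->", err)  # e.g. "Unknown frame descriptor"
```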

How do we verify this?

With only one bucket it works.

The error occurs when I save the settings (2 buckets).

Is the added bucket empty?

The first one has objects in it (the original) and the second is empty (to spread the objects over multiple buckets).

I will run some tests. Could be it doesn’t set the key correctly when testing for connectivity after changing the settings.

As said, it’ll only write to the first bucket, so you’d want to put the empty one first…

When I change the order of the buckets, I get a dialog box:

Cloud storage migration currently not supported. Settings not changed.

You need to check that you migrated the data…