AWS Appliance lost S3 bucket after reboot

I rebooted my appliance and now the S3 bucket will not attach to my EC2 instance. I tried to report the problem from the web UI, but I get ‘Error reporting problem: Connection issue with service’. Below is the error I am seeing.

Error starting up cloud drive - 10/20/20 02:18
Error starting up cloud drive: Error listing objects on cloud drive.
Last errors:
2020-10-20 02:18:15: ERROR: Error listing objects on cloud drive

At this point the operation has been retried a few times, and the appliance will continue to retry it indefinitely.

I tried to recreate the storage in the web UI, but it did not take effect. I only had 4 clients on this server at the time of the reboot.

Sorry you are having problems.

Have you/could you retry that?

Alternatively, enter the web shell (Settings → System → web shell), log in as admin (with the admin password), then run

sudo -i
tail -n 100 /var/log/clouddrive.log

The problem is likely that it somehow lost permission to access the bucket. Is, for example, the IAM policy still correctly attached to the instance?
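
If you have the AWS CLI available, something along these lines should show whether a role is still attached to the instance and whether it can reach the bucket (the instance ID, bucket name and region are placeholders, not your real ones):

# from inside the instance: name of the attached IAM role, if any
# (with IMDSv2 you would need to fetch a session token first)
curl -s http://169.254.169.254/latest/meta-data/iam/security-credentials/

# from inside the instance, if the aws CLI is installed: can the instance role list the bucket?
aws s3 ls s3://your-backup-bucket

# from an admin workstation: is an instance profile associated with the instance?
aws ec2 describe-iam-instance-profile-associations \
    --filters Name=instance-id,Values=i-0123456789abcdef0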

Same issue when reporting a problem: the UI gets to ‘creating issue 0%’, hangs, and then shows the ‘connection issue with service’ error.

I am able to connect to the server web shell by logging in with a user I created, but the admin account will not work. I am unsure whether a character in the password is the issue, so I will try changing that. I also tried to connect with PuTTY but cannot: I am prompted for credentials, and after entering a user, PuTTY errors with ‘no supported authentication methods available’.

You need to log in as admin with the credentials you set up on AWS (the SSH key). Then log in via “machinectl login app” with the username admin and your web interface password.
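
For example, something like this (the key path and address are placeholders from your own AWS setup):

ssh -i ~/.ssh/your-aws-keypair.pem admin@<appliance-address>
# then, on the appliance:
sudo machinectl login app
# at the container login prompt, use the username admin and your web interface password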

So even via the web SSH I need the AWS SSH key to log in as admin? That would explain why it is not working. When trying just admin via the web, it asks for a password, but I just get ‘login incorrect’.

I am SSHed into the appliance now. There is no log by that name; below is a listing of that path:
alternatives.log btmp daemon.log.1 faillog messages syslog.3.gz wtmp
apt chrony debug install_packages.list messages.1 syslog.4.gz
auth.log cloud-init.log debug.1 kern.log syslog unattended-upgrades
auth.log.1 cloud-init-output.log dpkg.log kern.log.1 syslog.1 user.log
bootstrap.log daemon.log fai lastlog syslog.2.gz user.log.1

Below is the cloud-init.log
2020-10-20 14:56:10,734 - helpers.py[DEBUG]: config-timezone already ran (freq=once-per-instance)
2020-10-20 14:56:10,734 - handlers.py[DEBUG]: finish: modules-config/config-timezone: SUCCESS: config-timezone previously ran
2020-10-20 14:56:10,734 - stages.py[DEBUG]: Running module disable-ec2-metadata (<module ‘cloudinit.config.cc_disable_ec2_metadata’ from ‘/usr/lib/python3/dist-packages/cloudinit/config/cc_disable_ec2_metadata.py’>) with frequency always
2020-10-20 14:56:10,735 - handlers.py[DEBUG]: start: modules-config/config-disable-ec2-metadata: running config-disable-ec2-metadata with frequency always
2020-10-20 14:56:10,735 - helpers.py[DEBUG]: Running config-disable-ec2-metadata using lock (<cloudinit.helpers.DummyLock object at 0x7fbbb909ecc0>)
2020-10-20 14:56:10,735 - cc_disable_ec2_metadata.py[DEBUG]: Skipping module named disable-ec2-metadata, disabling the ec2 route not enabled
2020-10-20 14:56:10,735 - handlers.py[DEBUG]: finish: modules-config/config-disable-ec2-metadata: SUCCESS: config-disable-ec2-metadata ran successfully
2020-10-20 14:56:10,735 - stages.py[DEBUG]: Running module runcmd (<module ‘cloudinit.config.cc_runcmd’ from ‘/usr/lib/python3/dist-packages/cloudinit/config/cc_runcmd.py’>) with frequency once-per-instance
2020-10-20 14:56:10,735 - handlers.py[DEBUG]: start: modules-config/config-runcmd: running config-runcmd with frequency once-per-instance
2020-10-20 14:56:10,735 - helpers.py[DEBUG]: config-runcmd already ran (freq=once-per-instance)
2020-10-20 14:56:10,735 - handlers.py[DEBUG]: finish: modules-config/config-runcmd: SUCCESS: config-runcmd previously ran
2020-10-20 14:56:10,735 - stages.py[DEBUG]: Running module byobu (<module ‘cloudinit.config.cc_byobu’ from ‘/usr/lib/python3/dist-packages/cloudinit/config/cc_byobu.py’>) with frequency once-per-instance
2020-10-20 14:56:10,735 - handlers.py[DEBUG]: start: modules-config/config-byobu: running config-byobu with frequency once-per-instance
2020-10-20 14:56:10,735 - helpers.py[DEBUG]: config-byobu already ran (freq=once-per-instance)
2020-10-20 14:56:10,735 - handlers.py[DEBUG]: finish: modules-config/config-byobu: SUCCESS: config-byobu previously ran
2020-10-20 14:56:10,735 - main.py[DEBUG]: Ran 12 modules with 0 failures
2020-10-20 14:56:10,736 - util.py[DEBUG]: Reading from /proc/uptime (quiet=False)
2020-10-20 14:56:10,736 - util.py[DEBUG]: Read 10 bytes from /proc/uptime
2020-10-20 14:56:10,736 - util.py[DEBUG]: cloud-init mode ‘modules’ took 0.064 seconds (0.06)
2020-10-20 14:56:10,736 - handlers.py[DEBUG]: finish: modules-config: SUCCESS: running modules for config
2020-10-20 14:56:11,233 - util.py[DEBUG]: Cloud-init v. 0.7.9 running ‘modules:final’ at Tue, 20 Oct 2020 14:56:11 +0000. Up 7.78 seconds.
2020-10-20 14:56:11,267 - stages.py[DEBUG]: Using distro class <class ‘cloudinit.distros.debian.Distro’>
2020-10-20 14:56:11,268 - stages.py[DEBUG]: Running module package-update-upgrade-install (<module ‘cloudinit.config.cc_package_update_upgrade_install’ from ‘/usr/lib/python3/dist-packages/cloudinit/config/cc_package_update_upgrade_install.py’>) with frequency once-per-instance
2020-10-20 14:56:11,268 - handlers.py[DEBUG]: start: modules-final/config-package-update-upgrade-install: running config-package-update-upgrade-install with frequency once-per-instance
2020-10-20 14:56:11,268 - helpers.py[DEBUG]: config-package-update-upgrade-install already ran (freq=once-per-instance)
2020-10-20 14:56:11,269 - handlers.py[DEBUG]: finish: modules-final/config-package-update-upgrade-install: SUCCESS: config-package-update-upgrade-install previously ran
2020-10-20 14:56:11,269 - stages.py[DEBUG]: Running module fan (<module ‘cloudinit.config.cc_fan’ from ‘/usr/lib/python3/dist-packages/cloudinit/config/cc_fan.py’>) with frequency once-per-instance
2020-10-20 14:56:11,269 - handlers.py[DEBUG]: start: modules-final/config-fan: running config-fan with frequency once-per-instance
2020-10-20 14:56:11,269 - helpers.py[DEBUG]: config-fan already ran (freq=once-per-instance)
2020-10-20 14:56:11,269 - handlers.py[DEBUG]: finish: modules-final/config-fan: SUCCESS: config-fan previously ran
2020-10-20 14:56:11,269 - stages.py[DEBUG]: Running module puppet (<module ‘cloudinit.config.cc_puppet’ from ‘/usr/lib/python3/dist-packages/cloudinit/config/cc_puppet.py’>) with frequency once-per-instance
2020-10-20 14:56:11,269 - handlers.py[DEBUG]: start: modules-final/config-puppet: running config-puppet with frequency once-per-instance
2020-10-20 14:56:11,269 - helpers.py[DEBUG]: config-puppet already ran (freq=once-per-instance)
2020-10-20 14:56:11,269 - handlers.py[DEBUG]: finish: modules-final/config-puppet: SUCCESS: config-puppet previously ran
2020-10-20 14:56:11,269 - stages.py[DEBUG]: Running module chef (<module ‘cloudinit.config.cc_chef’ from ‘/usr/lib/python3/dist-packages/cloudinit/config/cc_chef.py’>) with frequency once-per-instance
2020-10-20 14:56:11,269 - handlers.py[DEBUG]: start: modules-final/config-chef: running config-chef with frequency once-per-instance
2020-10-20 14:56:11,269 - helpers.py[DEBUG]: config-chef already ran (freq=once-per-instance)
2020-10-20 14:56:11,270 - handlers.py[DEBUG]: finish: modules-final/config-chef: SUCCESS: config-chef previously ran
2020-10-20 14:56:11,270 - stages.py[DEBUG]: Running module salt-minion (<module ‘cloudinit.config.cc_salt_minion’ from ‘/usr/lib/python3/dist-packages/cloudinit/config/cc_salt_minion.py’>) with frequency once-per-instance
2020-10-20 14:56:11,270 - handlers.py[DEBUG]: start: modules-final/config-salt-minion: running config-salt-minion with frequency once-per-instance
2020-10-20 14:56:11,270 - helpers.py[DEBUG]: config-salt-minion already ran (freq=once-per-instance)
2020-10-20 14:56:11,270 - handlers.py[DEBUG]: finish: modules-final/config-salt-minion: SUCCESS: config-salt-minion previously ran
2020-10-20 14:56:11,270 - stages.py[DEBUG]: Running module mcollective (<module ‘cloudinit.config.cc_mcollective’ from ‘/usr/lib/python3/dist-packages/cloudinit/config/cc_mcollective.py’>) with frequency once-per-instance
2020-10-20 14:56:11,270 - handlers.py[DEBUG]: start: modules-final/config-mcollective: running config-mcollective with frequency once-per-instance
2020-10-20 14:56:11,270 - helpers.py[DEBUG]: config-mcollective already ran (freq=once-per-instance)
2020-10-20 14:56:11,270 - handlers.py[DEBUG]: finish: modules-final/config-mcollective: SUCCESS: config-mcollective previously ran
2020-10-20 14:56:11,270 - stages.py[DEBUG]: Running module rightscale_userdata (<module ‘cloudinit.config.cc_rightscale_userdata’ from ‘/usr/lib/python3/dist-packages/cloudinit/config/cc_rightscale_userdata.py’>) with frequency once-per-instance
2020-10-20 14:56:11,271 - handlers.py[DEBUG]: start: modules-final/config-rightscale_userdata: running config-rightscale_userdata with frequency once-per-instance
2020-10-20 14:56:11,271 - helpers.py[DEBUG]: config-rightscale_userdata already ran (freq=once-per-instance)
2020-10-20 14:56:11,271 - handlers.py[DEBUG]: finish: modules-final/config-rightscale_userdata: SUCCESS: config-rightscale_userdata previously ran
2020-10-20 14:56:11,271 - stages.py[DEBUG]: Running module scripts-vendor (<module ‘cloudinit.config.cc_scripts_vendor’ from ‘/usr/lib/python3/dist-packages/cloudinit/config/cc_scripts_vendor.py’>) with frequency once-per-instance
2020-10-20 14:56:11,271 - handlers.py[DEBUG]: start: modules-final/config-scripts-vendor: running config-scripts-vendor with frequency once-per-instance
2020-10-20 14:56:11,271 - helpers.py[DEBUG]: config-scripts-vendor already ran (freq=once-per-instance)
2020-10-20 14:56:11,271 - handlers.py[DEBUG]: finish: modules-final/config-scripts-vendor: SUCCESS: config-scripts-vendor previously ran
2020-10-20 14:56:11,271 - stages.py[DEBUG]: Running module scripts-per-once (<module ‘cloudinit.config.cc_scripts_per_once’ from ‘/usr/lib/python3/dist-packages/cloudinit/config/cc_scripts_per_once.py’>) with frequency once
2020-10-20 14:56:11,271 - handlers.py[DEBUG]: start: modules-final/config-scripts-per-once: running config-scripts-per-once with frequency once
2020-10-20 14:56:11,271 - helpers.py[DEBUG]: config-scripts-per-once already ran (freq=once)
2020-10-20 14:56:11,272 - handlers.py[DEBUG]: finish: modules-final/config-scripts-per-once: SUCCESS: config-scripts-per-once previously ran
2020-10-20 14:56:11,272 - stages.py[DEBUG]: Running module scripts-per-boot (<module ‘cloudinit.config.cc_scripts_per_boot’ from ‘/usr/lib/python3/dist-packages/cloudinit/config/cc_scripts_per_boot.py’>) with frequency always
2020-10-20 14:56:11,272 - handlers.py[DEBUG]: start: modules-final/config-scripts-per-boot: running config-scripts-per-boot with frequency always
2020-10-20 14:56:11,272 - helpers.py[DEBUG]: Running config-scripts-per-boot using lock (<cloudinit.helpers.DummyLock object at 0x7fdc0d918e48>)
2020-10-20 14:56:11,273 - handlers.py[DEBUG]: finish: modules-final/config-scripts-per-boot: SUCCESS: config-scripts-per-boot ran successfully
2020-10-20 14:56:11,273 - stages.py[DEBUG]: Running module scripts-per-instance (<module ‘cloudinit.config.cc_scripts_per_instance’ from ‘/usr/lib/python3/dist-packages/cloudinit/config/cc_scripts_per_instance.py’>) with frequency once-per-instance
2020-10-20 14:56:11,273 - handlers.py[DEBUG]: start: modules-final/config-scripts-per-instance: running config-scripts-per-instance with frequency once-per-instance
2020-10-20 14:56:11,274 - helpers.py[DEBUG]: config-scripts-per-instance already ran (freq=once-per-instance)
2020-10-20 14:56:11,274 - handlers.py[DEBUG]: finish: modules-final/config-scripts-per-instance: SUCCESS: config-scripts-per-instance previously ran
2020-10-20 14:56:11,274 - stages.py[DEBUG]: Running module scripts-user (<module ‘cloudinit.config.cc_scripts_user’ from ‘/usr/lib/python3/dist-packages/cloudinit/config/cc_scripts_user.py’>) with frequency once-per-instance
2020-10-20 14:56:11,274 - handlers.py[DEBUG]: start: modules-final/config-scripts-user: running config-scripts-user with frequency once-per-instance
2020-10-20 14:56:11,274 - helpers.py[DEBUG]: config-scripts-user already ran (freq=once-per-instance)
2020-10-20 14:56:11,274 - handlers.py[DEBUG]: finish: modules-final/config-scripts-user: SUCCESS: config-scripts-user previously ran
2020-10-20 14:56:11,274 - stages.py[DEBUG]: Running module ssh-authkey-fingerprints (<module ‘cloudinit.config.cc_ssh_authkey_fingerprints’ from ‘/usr/lib/python3/dist-packages/cloudinit/config/cc_ssh_authkey_fingerprints.py’>) with frequency once-per-instance
2020-10-20 14:56:11,274 - handlers.py[DEBUG]: start: modules-final/config-ssh-authkey-fingerprints: running config-ssh-authkey-fingerprints with frequency once-per-instance
2020-10-20 14:56:11,274 - helpers.py[DEBUG]: config-ssh-authkey-fingerprints already ran (freq=once-per-instance)
2020-10-20 14:56:11,274 - handlers.py[DEBUG]: finish: modules-final/config-ssh-authkey-fingerprints: SUCCESS: config-ssh-authkey-fingerprints previously ran
2020-10-20 14:56:11,274 - stages.py[DEBUG]: Running module keys-to-console (<module ‘cloudinit.config.cc_keys_to_console’ from ‘/usr/lib/python3/dist-packages/cloudinit/config/cc_keys_to_console.py’>) with frequency once-per-instance
2020-10-20 14:56:11,274 - handlers.py[DEBUG]: start: modules-final/config-keys-to-console: running config-keys-to-console with frequency once-per-instance
2020-10-20 14:56:11,275 - helpers.py[DEBUG]: config-keys-to-console already ran (freq=once-per-instance)
2020-10-20 14:56:11,275 - handlers.py[DEBUG]: finish: modules-final/config-keys-to-console: SUCCESS: config-keys-to-console previously ran
2020-10-20 14:56:11,275 - stages.py[DEBUG]: Running module phone-home (<module ‘cloudinit.config.cc_phone_home’ from ‘/usr/lib/python3/dist-packages/cloudinit/config/cc_phone_home.py’>) with frequency once-per-instance
2020-10-20 14:56:11,275 - handlers.py[DEBUG]: start: modules-final/config-phone-home: running config-phone-home with frequency once-per-instance
2020-10-20 14:56:11,275 - helpers.py[DEBUG]: config-phone-home already ran (freq=once-per-instance)
2020-10-20 14:56:11,275 - handlers.py[DEBUG]: finish: modules-final/config-phone-home: SUCCESS: config-phone-home previously ran
2020-10-20 14:56:11,275 - stages.py[DEBUG]: Running module final-message (<module ‘cloudinit.config.cc_final_message’ from ‘/usr/lib/python3/dist-packages/cloudinit/config/cc_final_message.py’>) with frequency always
2020-10-20 14:56:11,275 - handlers.py[DEBUG]: start: modules-final/config-final-message: running config-final-message with frequency always
2020-10-20 14:56:11,275 - helpers.py[DEBUG]: Running config-final-message using lock (<cloudinit.helpers.DummyLock object at 0x7fdc0d86a9b0>)
2020-10-20 14:56:11,276 - util.py[DEBUG]: Reading from /proc/uptime (quiet=False)
2020-10-20 14:56:11,276 - util.py[DEBUG]: Read 10 bytes from /proc/uptime
2020-10-20 14:56:11,284 - util.py[DEBUG]: Cloud-init v. 0.7.9 finished at Tue, 20 Oct 2020 14:56:11 +0000. Datasource DataSourceEc2. Up 7.89 seconds
2020-10-20 14:56:11,284 - util.py[DEBUG]: Writing to /var/lib/cloud/instance/boot-finished - wb: [420] 50 bytes
2020-10-20 14:56:11,285 - handlers.py[DEBUG]: finish: modules-final/config-final-message: SUCCESS: config-final-message ran successfully
2020-10-20 14:56:11,285 - stages.py[DEBUG]: Running module power-state-change (<module ‘cloudinit.config.cc_power_state_change’ from ‘/usr/lib/python3/dist-packages/cloudinit/config/cc_power_state_change.py’>) with frequency once-per-instance
2020-10-20 14:56:11,286 - handlers.py[DEBUG]: start: modules-final/config-power-state-change: running config-power-state-change with frequency once-per-instance
2020-10-20 14:56:11,286 - helpers.py[DEBUG]: config-power-state-change already ran (freq=once-per-instance)
2020-10-20 14:56:11,286 - handlers.py[DEBUG]: finish: modules-final/config-power-state-change: SUCCESS: config-power-state-change previously ran
2020-10-20 14:56:11,286 - main.py[DEBUG]: Ran 17 modules with 0 failures
2020-10-20 14:56:11,287 - util.py[DEBUG]: Creating symbolic link from ‘/run/cloud-init/result.json’ => ‘../../var/lib/cloud/data/result.json’
2020-10-20 14:56:11,287 - util.py[DEBUG]: Reading from /proc/uptime (quiet=False)
2020-10-20 14:56:11,287 - util.py[DEBUG]: Read 10 bytes from /proc/uptime
2020-10-20 14:56:11,287 - util.py[DEBUG]: cloud-init mode ‘modules’ took 0.127 seconds (0.13)
2020-10-20 14:56:11,287 - handlers.py[DEBUG]: finish: modules-final: SUCCESS: running modules for final

As said:

Alternatively, I guess you can look at the file directly, e.g.:

tail -n 100 /mnt/root_app/var/log/clouddrive.log
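
Or, to pull just the recent warnings and errors out of that file, something like:

grep -iE 'error|warning' /mnt/root_app/var/log/clouddrive.log | tail -n 50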

Funny, I did a find for all the log files and found it just before you posted. The information is the same as what was already shown in the UI.

2020-10-20 13:22:55: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2020-10-20 13:22:55: WARNING: AWS-Client: Request failed, now waiting 200 ms before attempting again.
2020-10-20 13:23:26: ERROR: AWS-Client: Curl returned error code 28 - Timeout was reached
2020-10-20 13:23:26: ERROR: AWS-Client: HTTP response code: -1
Exception name:
Error message: curlCode: 28, Timeout was reached
0 response headers:
2020-10-20 13:23:26: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2020-10-20 13:23:26: WARNING: AWS-Client: Request failed, now waiting 400 ms before attempting again.
2020-10-20 13:23:56: ERROR: AWS-Client: Curl returned error code 28 - Timeout was reached
2020-10-20 13:23:56: ERROR: AWS-Client: HTTP response code: -1
Exception name:
Error message: curlCode: 28, Timeout was reached
0 response headers:
2020-10-20 13:23:56: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2020-10-20 13:23:56: WARNING: AWS-Client: Request failed, now waiting 800 ms before attempting again.
2020-10-20 13:24:27: ERROR: AWS-Client: Curl returned error code 28 - Timeout was reached
2020-10-20 13:24:27: ERROR: AWS-Client: HTTP response code: -1
Exception name:
Error message: curlCode: 28, Timeout was reached
0 response headers:
2020-10-20 13:24:27: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2020-10-20 13:24:27: WARNING: AWS-Client: Request failed, now waiting 1600 ms before attempting again.
2020-10-20 13:24:59: ERROR: AWS-Client: Curl returned error code 28 - Timeout was reached
2020-10-20 13:24:59: ERROR: AWS-Client: HTTP response code: -1
Exception name:
Error message: curlCode: 28, Timeout was reached
0 response headers:
2020-10-20 13:24:59: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2020-10-20 13:24:59: WARNING: AWS-Client: Request failed, now waiting 3200 ms before attempting again.
2020-10-20 13:25:32: ERROR: AWS-Client: Curl returned error code 28 - Timeout was reached
2020-10-20 13:25:32: ERROR: AWS-Client: HTTP response code: -1
Exception name:
Error message: curlCode: 28, Timeout was reached
0 response headers:
2020-10-20 13:25:32: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2020-10-20 13:25:32: WARNING: AWS-Client: Request failed, now waiting 6400 ms before attempting again.
2020-10-20 13:26:09: ERROR: AWS-Client: Curl returned error code 28 - Timeout was reached
2020-10-20 13:26:09: ERROR: AWS-Client: HTTP response code: -1
Exception name:
Error message: curlCode: 28, Timeout was reached
0 response headers:
2020-10-20 13:26:09: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2020-10-20 13:26:09: WARNING: AWS-Client: Request failed, now waiting 12800 ms before attempting again.
2020-10-20 13:26:52: ERROR: AWS-Client: Curl returned error code 28 - Timeout was reached
2020-10-20 13:26:52: ERROR: AWS-Client: HTTP response code: -1
Exception name:
Error message: curlCode: 28, Timeout was reached
0 response headers:
2020-10-20 13:26:52: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2020-10-20 13:28:24: WARNING: AWS-Client: Retry Strategy will use the default max attempts.
2020-10-20 13:28:54: ERROR: AWS-Client: Curl returned error code 28 - Timeout was reached
2020-10-20 13:28:54: ERROR: AWS-Client: HTTP response code: -1
Exception name:
Error message: curlCode: 28, Timeout was reached
0 response headers:
2020-10-20 13:28:54: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2020-10-20 13:28:54: WARNING: AWS-Client: Request failed, now waiting 0 ms before attempting again.
2020-10-20 13:29:25: ERROR: AWS-Client: Curl returned error code 28 - Timeout was reached
2020-10-20 13:29:25: ERROR: AWS-Client: HTTP response code: -1
Exception name:
Error message: curlCode: 28, Timeout was reached
0 response headers:
2020-10-20 13:29:25: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2020-10-20 13:29:25: WARNING: AWS-Client: Request failed, now waiting 50 ms before attempting again.
2020-10-20 13:29:55: ERROR: AWS-Client: Curl returned error code 28 - Timeout was reached
2020-10-20 13:29:55: ERROR: AWS-Client: HTTP response code: -1
Exception name:
Error message: curlCode: 28, Timeout was reached
0 response headers:
2020-10-20 13:29:55: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2020-10-20 13:29:55: WARNING: AWS-Client: Request failed, now waiting 100 ms before attempting again.
2020-10-20 13:30:25: ERROR: AWS-Client: Curl returned error code 28 - Timeout was reached
2020-10-20 13:30:25: ERROR: AWS-Client: HTTP response code: -1
Exception name:
Error message: curlCode: 28, Timeout was reached
0 response headers:
2020-10-20 13:30:25: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2020-10-20 13:30:25: WARNING: AWS-Client: Request failed, now waiting 200 ms before attempting again.
2020-10-20 13:30:55: ERROR: AWS-Client: Curl returned error code 28 - Timeout was reached
2020-10-20 13:30:55: ERROR: AWS-Client: HTTP response code: -1
Exception name:
Error message: curlCode: 28, Timeout was reached
0 response headers:
2020-10-20 13:30:55: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2020-10-20 13:30:55: WARNING: AWS-Client: Request failed, now waiting 400 ms before attempting again.
2020-10-20 13:31:26: ERROR: AWS-Client: Curl returned error code 28 - Timeout was reached
2020-10-20 13:31:26: ERROR: AWS-Client: HTTP response code: -1
Exception name:
Error message: curlCode: 28, Timeout was reached
0 response headers:
2020-10-20 13:31:26: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2020-10-20 13:31:26: WARNING: AWS-Client: Request failed, now waiting 800 ms before attempting again.
2020-10-20 13:31:57: ERROR: AWS-Client: Curl returned error code 28 - Timeout was reached
2020-10-20 13:31:57: ERROR: AWS-Client: HTTP response code: -1
Exception name:
Error message: curlCode: 28, Timeout was reached
0 response headers:
2020-10-20 13:31:57: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2020-10-20 13:31:57: WARNING: AWS-Client: Request failed, now waiting 1600 ms before attempting again.

That gives a bit more information: it somehow cannot connect to S3, probably the same network issue that causes the problem reporting to fail.

There’s really only one AWS admin at this time, and he has not made any changes. The S3 bucket was working and then stopped after a reboot. Within the policy section, all outbound traffic is allowed. We followed the admin guide for every portion except assigning the Elastic IP. At this point we are not sure how to proceed.

Diagnose the network issue, e.g. by running the following (a few S3-specific checks are sketched after the list):

ip addr
route
ping example.com
curl http://example.com
curl https://www.google.com
curl https://app.urbackup.com
traceroute app.urbackup.com
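
A couple of S3-specific checks might also help (the bucket name and region below are placeholders):

getent hosts s3.amazonaws.com                                     # does DNS resolve the S3 endpoint?
curl -sI https://s3.us-east-1.amazonaws.com | head -n 1           # is the regional S3 endpoint reachable?
curl -sI https://your-backup-bucket.s3.us-east-1.amazonaws.com | head -n 1
timedatectl                                                       # a large clock skew would match the signature warnings in the posted log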

Alternatively start a new instance, point it at the existing bucket and terminate the old instance.

Thanks, we had the same thought about recreating the instance and using the same S3 bucket. We will do that, and then I will run those network troubleshooting steps.

A new instance had the same issue with the existing bucket. We spoke with our AWS support team; there seemed to have been some issues with the IAM policy. We created a new policy that allowed all access to the storage bucket. From an SSH console we were then able to ls the S3 bucket, but the UrBackup application kept hitting the same repeated failures in the log. Since this was a test instance, we decided to purge the bucket and instance and start over.
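
For context, a bucket-scoped allow-all policy looks roughly like the sketch below (our AWS support may have set it up differently; the role, policy, and bucket names are placeholders). The repeated failures we were still seeing follow after it.

aws iam put-role-policy \
    --role-name appliance-role \
    --policy-name s3-bucket-full-access \
    --policy-document '{
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": [
                "arn:aws:s3:::your-backup-bucket",
                "arn:aws:s3:::your-backup-bucket/*"
            ]
        }]
    }'

# roughly the check we ran from the SSH console
aws s3 ls s3://your-backup-bucket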

2020-10-20 15:53:00: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2020-10-20 15:53:00: WARNING: AWS-Client: Request failed, now waiting 400 ms before attempting again.
2020-10-20 15:53:30: ERROR: AWS-Client: Curl returned error code 28 - Timeout was reached
2020-10-20 15:53:30: ERROR: AWS-Client: HTTP response code: -1
Exception name:
Error message: curlCode: 28, Timeout was reached
0 response headers:
2020-10-20 15:53:30: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2020-10-20 15:53:30: WARNING: AWS-Client: Request failed, now waiting 800 ms before attempting again.
2020-10-20 15:54:01: ERROR: AWS-Client: Curl returned error code 28 - Timeout was reached
2020-10-20 15:54:01: ERROR: AWS-Client: HTTP response code: -1
Exception name:
Error message: curlCode: 28, Timeout was reached
0 response headers:
2020-10-20 15:54:01: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2020-10-20 15:54:01: WARNING: AWS-Client: Request failed, now waiting 1600 ms before attempting again.
2020-10-20 15:54:33: ERROR: AWS-Client: Curl returned error code 28 - Timeout was reached
2020-10-20 15:54:33: ERROR: AWS-Client: HTTP response code: -1
Exception name:
Error message: curlCode: 28, Timeout was reached
0 response headers:
2020-10-20 15:54:33: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2020-10-20 15:54:33: WARNING: AWS-Client: Request failed, now waiting 3200 ms before attempting again.
2020-10-20 15:55:06: ERROR: AWS-Client: Curl returned error code 28 - Timeout was reached
2020-10-20 15:55:06: ERROR: AWS-Client: HTTP response code: -1
Exception name:
Error message: curlCode: 28, Timeout was reached
0 response headers:
2020-10-20 15:55:06: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2020-10-20 15:55:06: WARNING: AWS-Client: Request failed, now waiting 6400 ms before attempting again.
2020-10-20 15:55:43: ERROR: AWS-Client: Curl returned error code 28 - Timeout was reached
2020-10-20 15:55:43: ERROR: AWS-Client: HTTP response code: -1
Exception name:
Error message: curlCode: 28, Timeout was reached
0 response headers:
2020-10-20 15:55:43: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2020-10-20 15:55:43: WARNING: AWS-Client: Request failed, now waiting 12800 ms before attempting again.
2020-10-20 15:56:26: ERROR: AWS-Client: Curl returned error code 28 - Timeout was reached
2020-10-20 15:56:26: ERROR: AWS-Client: HTTP response code: -1
Exception name:
Error message: curlCode: 28, Timeout was reached
0 response headers:
2020-10-20 15:56:26: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2020-10-20 15:56:56: ERROR: AWS-Client: Curl returned error code 28 - Timeout was reached
2020-10-20 15:56:56: ERROR: AWS-Client: HTTP response code: -1
Exception name:
Error message: curlCode: 28, Timeout was reached
0 response headers:
2020-10-20 15:56:56: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2020-10-20 15:56:56: WARNING: AWS-Client: Request failed, now waiting 0 ms before attempting again.
2020-10-20 15:57:26: ERROR: AWS-Client: Curl returned error code 28 - Timeout was reached
2020-10-20 15:57:26: ERROR: AWS-Client: HTTP response code: -1
Exception name:
Error message: curlCode: 28, Timeout was reached
0 response headers:
2020-10-20 15:57:26: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2020-10-20 15:57:26: WARNING: AWS-Client: Request failed, now waiting 50 ms before attempting again.
2020-10-20 15:57:56: ERROR: AWS-Client: Curl returned error code 28 - Timeout was reached
2020-10-20 15:57:56: ERROR: AWS-Client: HTTP response code: -1
Exception name:
Error message: curlCode: 28, Timeout was reached
0 response headers:
2020-10-20 15:57:56: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2020-10-20 15:57:56: WARNING: AWS-Client: Request failed, now waiting 100 ms before attempting again.
2020-10-20 15:58:27: ERROR: AWS-Client: Curl returned error code 28 - Timeout was reached
2020-10-20 15:58:27: ERROR: AWS-Client: HTTP response code: -1
Exception name:
Error message: curlCode: 28, Timeout was reached
0 response headers:
2020-10-20 15:58:27: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2020-10-20 15:58:27: WARNING: AWS-Client: Request failed, now waiting 200 ms before attempting again.
2020-10-20 15:58:57: ERROR: AWS-Client: Curl returned error code 28 - Timeout was reached
2020-10-20 15:58:57: ERROR: AWS-Client: HTTP response code: -1
Exception name:
Error message: curlCode: 28, Timeout was reached
0 response headers:
2020-10-20 15:58:57: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2020-10-20 15:58:57: WARNING: AWS-Client: Request failed, now waiting 400 ms before attempting again.
2020-10-20 15:59:27: ERROR: AWS-Client: Curl returned error code 28 - Timeout was reached
2020-10-20 15:59:27: ERROR: AWS-Client: HTTP response code: -1
Exception name:
Error message: curlCode: 28, Timeout was reached
0 response headers:
2020-10-20 15:59:27: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2020-10-20 15:59:27: WARNING: AWS-Client: Request failed, now waiting 800 ms before attempting again.
2020-10-20 15:59:58: ERROR: AWS-Client: Curl returned error code 28 - Timeout was reached
2020-10-20 15:59:58: ERROR: AWS-Client: HTTP response code: -1
Exception name:
Error message: curlCode: 28, Timeout was reached
0 response headers:
2020-10-20 15:59:58: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2020-10-20 15:59:58: WARNING: AWS-Client: Request failed, now waiting 1600 ms before attempting again.
2020-10-20 16:00:30: ERROR: AWS-Client: Curl returned error code 28 - Timeout was reached
2020-10-20 16:00:30: ERROR: AWS-Client: HTTP response code: -1
Exception name:
Error message: curlCode: 28, Timeout was reached
0 response headers:
2020-10-20 16:00:30: WARNING: AWS-Client: If the signature check failed. This could be because of a time skew. Attempting to adjust the signer.
2020-10-20 16:00:30: WARNING: AWS-Client: Request failed, now waiting 3200 ms before attempting again.