I have a VPS on Amazon EC2, a t3.medium (2 vCPU, 4 GB RAM), with the following volumes:
xvda 128 GB gp3
sdb 4 GB gp3
sdc 256 GB gp3
The server ran for years without any problems. For the past few months, however, I have been facing problems like the one in the attached image. A restart partially resolves it: the server then works fine for about 10 days before the problem recurs.
Where should I start to try to resolve this issue?
uroni (August 15, 2024, 8:51pm):
If that's okay with you, you could use the report-problem link at the bottom to upload log files. Otherwise, looking at /var/log/clouddrive.log might give some hints.
A wild guess is that this is a performance problem, though that should cause a message on the status page. It might be bottlenecked on vCPU while compressing the S3 objects before upload.
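Since a t3.medium is a burstable instance, CPU credit exhaustion could produce similar symptoms, and persistent %steal in iostat output would point the same way. The CPUCreditBalance metric can be pulled with the AWS CLI. A sketch; the instance ID below is a placeholder:

# Check the instance's CPU credit balance over the last 6 hours.
# i-0123456789abcdef0 is a placeholder; substitute the real instance ID.
aws cloudwatch get-metric-statistics \
    --namespace AWS/EC2 \
    --metric-name CPUCreditBalance \
    --dimensions Name=InstanceId,Value=i-0123456789abcdef0 \
    --start-time "$(date -u -d '6 hours ago' +%Y-%m-%dT%H:%M:%SZ)" \
    --end-time "$(date -u +%Y-%m-%dT%H:%M:%SZ)" \
    --period 300 \
    --statistics Average

A balance that is persistently at or near zero would mean the instance is being throttled rather than the software misbehaving.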
uroni (August 21, 2024, 10:11pm):
I had a look at the logs but unfortunately can't find any hints at the cause. Could you perhaps run a few diagnostic commands the next time it goes into this state?
sudo -i
cat /media/clouddrive_vol/num_dirty_items
cat /media/clouddrive_vol/meminfo
echo "a=set_loglevel&loglevel=debug" > /media/clouddrive_vol/cmd
sleep 30
tail -n 100 /var/log/clouddrive.log
iostat -x -y 5 5
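For convenience, these can be bundled into a single script that writes everything to one file (a sketch using the same paths as above):

#!/bin/bash
# Collect clouddrive diagnostics into one timestamped file (run as root).
out=/root/clouddrive-diag-$(date +%Y%m%d-%H%M%S).txt
{
    echo "== dirty items =="
    cat /media/clouddrive_vol/num_dirty_items
    echo "== internal memory accounting =="
    cat /media/clouddrive_vol/meminfo
    # Raise the log level and give debug output time to accumulate.
    echo "a=set_loglevel&loglevel=debug" > /media/clouddrive_vol/cmd
    sleep 30
    echo "== last 100 log lines =="
    tail -n 100 /var/log/clouddrive.log
    echo "== disk utilisation, 5 samples of 5s =="
    iostat -x -y 5 5
} > "$out" 2>&1
echo "diagnostics written to $out"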
Yes, I can. As soon as it crashes again, I will run the tests.
Dear @uroni,
Here is the output of the commands.
cat /media/clouddrive_vol/num_dirty_items
14537: 84379
cat /media/clouddrive_vol/meminfo
##CloudFile:
locked_extents: 128 * 32 bytes
bitmap: 66.125 MB/2.06641 MB
big_blocks_bitmap: 128 KB/400 KB
old_big_blocks_bitmap: 62.5 KB
new_big_blocks_bitmap: 62.5 KB
fracture_big_blogs: 0 * 16 bytes
in_write_retrieval: 0 * 32 bytes
fs_chunks: 6921 * 24 bytes = 162.211 KB
##TransactionalKvStore:
lru_cache: 271934 * 57 bytes = 14.7822 MB
lru_cache items with more chances: 81039 (29%)
compressed_items: 0 * 57 bytes = 0 bytes
compressed_items items with more chances: 0
open_files: 14 * 56 bytes = 784 bytes
read_only_open_files: 4 * 32 bytes = 128 bytes
preload_once_items: 0 * 36 bytes = 0 bytes
preload_once_delayed_removal: 0 * 40 bytes = 0 bytes
submission_queue: 0 * 88 bytes = 0 bytes
submission_items: 0 * 48 bytes = 0 bytes
dirty_evicted_items: 0 * 32 bytes = 0 bytes
nosubmit_dirty_items: 1 * 56 bytes = 56 bytes
nosubmit_dirty_items[14536]: 81388 * 32 bytes = 2.48376 MB
nosubmit_untouched_items: 65288 * 32 bytes = 1.99243 MB
num_dirty_items: 1 * 16 bytes = 16 bytes
num_delete_items: 0 * 16 bytes = 0 bytes
fd_cache: 472/472 * 80 bytes = 36.875 KB
queued_dels: 1116 * 32 bytes = 34.875 KB
in_retrieval: 0 * 32 bytes = 0 bytes
transactions: 7 * 88 bytes = 616 bytes
memfiles: 0 * 152 bytes = 0 bytes
memfile used size: 0 bytes
memfile_size_check: 0 bytes
memfile_size_non_dirty: 0 bytes
memfile_size_dirty: 0 bytes
memfile_stat_bitmaps: 0 * 56 bytes
num_mem_files: 0 * 16 bytes = 0 bytes
submit_bundle: 0 * 89 bytes = 0 bytes
submit_bundle_items_a: 0 * 40 bytes = 0 bytes
submit_bundle_items_b: 0 * 40 bytes = 0 bytes
del_file_queue: 0 * 32 bytes = 0 bytes
##KvStoreFrontend:
unsynced_keys_a: 0 * 72 bytes = 0 bytes
unsynced_keys_b: 0 * 72 bytes = 0 bytes
##KvStoreFrontend::PutDbWorker:
items_a: 16 * 128 bytes
items_b: 16 * 128 bytes
##KvStoreBackendS3:
s3_clients[0]: 30 * 24 bytes
##KvStoreFrontend::BackgroundWorker:
object_collector_size: 0 = 0 bytes
object_collector_size_uncompressed: 0 = 0 bytes
##fusemain:
cached_dbs: 0/944 * 33 bytes
memory_reserve: 0
##sqlite3:
memory used: current=13.0011 MB high=115.258 MB
malloc count: current=7906 high=36001
##jemalloc:
jemalloc.stats.resident: 274.504 MB
jemalloc.stats.active: 256.465 MB
jemalloc.stats.allocated: 209.036 MB
##fuseuring:
fuse_io_buf: 464.906 KB
interface_cont: 0 entries 0 bytes
# tail -n 100 /var/log/clouddrive.log
2024-08-27 13:24:53: Incr dirty item 736965d300 transid 14537
2024-08-27 13:25:23: Incr dirty item 736a77d200 transid 14537
2024-08-27 13:25:23: Incr dirty item 736c61ca00 transid 14537
2024-08-27 13:25:23: Incr dirty item 73df4ed200 transid 14537
2024-08-27 13:25:39: Free metadata space: 22.572 GB
2024-08-27 13:25:52: Incr dirty item 739107d200 transid 14537
2024-08-27 13:25:53: Incr dirty item 739363d400 transid 14537
2024-08-27 13:25:53: Incr dirty item 73e04ed200 transid 14537
2024-08-27 13:25:53: Incr dirty item 739407d200 transid 14537
2024-08-27 13:25:53: Incr dirty item 734a31d400 transid 14537
2024-08-27 13:25:53: Incr dirty item 734931d400 transid 14537
2024-08-27 13:25:53: Incr dirty item 73d0e8d200 transid 14537
2024-08-27 13:25:53: Incr dirty item 734c31d400 transid 14537
2024-08-27 13:26:24: Incr dirty item 732e580400 transid 14537
2024-08-27 13:26:24: Incr dirty item 734e31d400 transid 14537
2024-08-27 13:26:34: Incr dirty item 736e56d200 transid 14537
2024-08-27 13:26:34: Incr dirty item 73c14fd200 transid 14537
2024-08-27 13:26:34: Incr dirty item 73b050d200 transid 14537
2024-08-27 13:26:34: Incr dirty item 73de931900 transid 14537
2024-08-27 13:26:38: Incr dirty item 736a46d200 transid 14537
2024-08-27 13:26:40: Free metadata space: 22.562 GB
2024-08-27 13:26:54: Incr dirty item 73fd05d200 transid 14537
2024-08-27 13:26:54: Incr dirty item 737040d200 transid 14537
2024-08-27 13:27:08: Incr dirty item 7351ebd200 transid 14537
2024-08-27 13:27:08: Incr dirty item 73c449d200 transid 14537
2024-08-27 13:27:08: Incr dirty item 737d55d200 transid 14537
2024-08-27 13:27:09: Incr dirty item 737f55d200 transid 14537
2024-08-27 13:27:09: Incr dirty item 738055d200 transid 14537
2024-08-27 13:27:09: Incr dirty item 73a1edd200 transid 14537
2024-08-27 13:27:24: Incr dirty item 7333580400 transid 14537
2024-08-27 13:27:24: Incr dirty item 7335580400 transid 14537
2024-08-27 13:27:24: Incr dirty item 73e0ecd200 transid 14537
2024-08-27 13:27:24: Incr dirty item 738255d200 transid 14537
2024-08-27 13:27:24: Incr dirty item 738355d200 transid 14537
2024-08-27 13:27:24: Incr dirty item 737856d200 transid 14537
2024-08-27 13:27:24: Incr dirty item 73796fd400 transid 14537
2024-08-27 13:27:24: Incr dirty item 73856fd400 transid 14537
2024-08-27 13:27:24: Incr dirty item 737b6fd400 transid 14537
2024-08-27 13:27:32: Incr dirty item 739907d200 transid 14537
2024-08-27 13:27:32: Incr dirty item 737177d200 transid 14537
2024-08-27 13:27:32: Incr dirty item 736760ca00 transid 14537
2024-08-27 13:27:33: Incr dirty item 738a47d200 transid 14537
2024-08-27 13:27:33: Incr dirty item 73c24fd200 transid 14537
2024-08-27 13:27:33: Incr dirty item 73b150d200 transid 14537
2024-08-27 13:27:40: Free metadata space: 22.562 GB
2024-08-27 13:27:50: Incr dirty item 737277d200 transid 14537
2024-08-27 13:27:50: Incr dirty item 73716bd300 transid 14537
2024-08-27 13:27:50: Incr dirty item 73706bd300 transid 14537
2024-08-27 13:27:50: Incr dirty item 732900d300 transid 14537
2024-08-27 13:28:04: Incr dirty item 739306d200 transid 14537
2024-08-27 13:28:04: Incr dirty item 73e14ed200 transid 14537
2024-08-27 13:28:04: Incr dirty item 737140d200 transid 14537
2024-08-27 13:28:05: Incr dirty item 730e20d300 transid 14537
2024-08-27 13:28:19: Incr dirty item 737256d200 transid 14537
2024-08-27 13:28:19: Incr dirty item 73c44fd200 transid 14537
2024-08-27 13:28:19: Incr dirty item 736461ca00 transid 14537
2024-08-27 13:28:28: Incr dirty item 7339580400 transid 14537
2024-08-27 13:28:28: Incr dirty item 737161ca00 transid 14537
2024-08-27 13:28:28: Incr dirty item 7360a2d200 transid 14537
2024-08-27 13:28:40: Free metadata space: 22.562 GB
2024-08-27 13:28:59: Incr dirty item 73e5931900 transid 14537
2024-08-27 13:28:59: Incr dirty item 73b250d200 transid 14537
2024-08-27 13:28:59: Incr dirty item 7361a2d200 transid 14537
2024-08-27 13:29:00: Incr dirty item 737577d200 transid 14537
2024-08-27 13:29:00: Incr dirty item 73fe05d200 transid 14537
2024-08-27 13:29:00: Incr dirty item 739b14d300 transid 14537
2024-08-27 13:29:00: Incr dirty item 739506d200 transid 14537
2024-08-27 13:29:00: Incr dirty item 73c978d200 transid 14537
2024-08-27 13:29:00: Incr dirty item 73d778d200 transid 14537
2024-08-27 13:29:00: Incr dirty item 737356d200 transid 14537
2024-08-27 13:29:20: Incr dirty item 73d85c0400 transid 14537
2024-08-27 13:29:20: Incr dirty item 73d678d200 transid 14537
2024-08-27 13:29:20: Incr dirty item 737456d200 transid 14537
2024-08-27 13:29:20: Incr dirty item 737240d200 transid 14537
2024-08-27 13:29:20: Incr dirty item 737677d200 transid 14537
2024-08-27 13:29:20: Incr dirty item 737556d200 transid 14537
2024-08-27 13:29:21: Incr dirty item 737656d200 transid 14537
2024-08-27 13:29:21: Incr dirty item 738455d200 transid 14537
2024-08-27 13:29:21: Incr dirty item 73a879d200 transid 14537
2024-08-27 13:29:21: Incr dirty item 73b079d200 transid 14537
2024-08-27 13:29:21: Incr dirty item 73b179d200 transid 14537
2024-08-27 13:29:21: Incr dirty item 737956d200 transid 14537
2024-08-27 13:29:21: Incr dirty item 73b579d200 transid 14537
2024-08-27 13:29:21: Incr dirty item 73b679d200 transid 14537
2024-08-27 13:29:24: Incr dirty item 737777d200 transid 14537
2024-08-27 13:29:33: Incr dirty item 73a75e0400 transid 14537
2024-08-27 13:29:33: Incr dirty item 7337e1d200 transid 14537
2024-08-27 13:29:40: Free metadata space: 22.562 GB
2024-08-27 13:29:42: Incr dirty item 735ca3d200 transid 14537
2024-08-27 13:29:42: Incr dirty item 733041d200 transid 14537
2024-08-27 13:29:42: Incr dirty item 738f7ad200 transid 14537
2024-08-27 13:30:05: Incr dirty item 734019d300 transid 14537
2024-08-27 13:30:06: Incr dirty item 73907ad200 transid 14537
2024-08-27 13:30:06: Incr dirty item 73b36dd200 transid 14537
2024-08-27 13:30:06: Incr dirty item 7361a3d200 transid 14537
2024-08-27 13:30:06: Incr dirty item 736562ca00 transid 14537
2024-08-27 13:30:06: Incr dirty item 735c7bd200 transid 14537
2024-08-27 13:30:14: Incr dirty item 73a85e0400 transid 14537
2024-08-27 13:30:14: Incr dirty item 73927ad200 transid 14537
2024-08-27 13:30:14: Incr dirty item 73be22d300 transid 14537
~# iostat -x -y 5 5
Linux 5.10.213 (ip-172-31-87-8) 08/27/24 _x86_64_ (2 CPU)
avg-cpu: %user %nice %system %iowait %steal %idle
1.42 0.00 9.86 79.53 8.72 0.47
Device r/s rkB/s rrqm/s %rrqm r_await rareq-sz w/s wkB/s wrqm/s %wrqm w_await wareq-sz d/s dkB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
dm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.40 1.60 0.00 0.00 2.00 4.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.16
dm-1 280.60 4499.20 0.00 0.00 1.18 16.03 64.20 7576.00 0.00 0.00 22.09 118.01 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.75 47.12
loop0 4.20 85.60 0.00 0.00 2.90 20.38 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.01 1.84
nvme0n1 1643.60 8536.80 19.00 1.14 33.98 5.19 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 55.86 100.00
nvme1n1 0.00 0.00 0.00 0.00 0.00 0.00 0.40 1.60 0.00 0.00 2.50 4.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.16
nvme2n1 280.60 4499.20 0.00 0.00 1.08 16.03 75.60 7576.00 0.00 0.00 20.52 100.21 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 1.85 47.12
avg-cpu: %user %nice %system %iowait %steal %idle
1.14 0.00 12.24 76.66 8.92 1.04
Device r/s rkB/s rrqm/s %rrqm r_await rareq-sz w/s wkB/s wrqm/s %wrqm w_await wareq-sz d/s dkB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
dm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
dm-1 48.60 769.60 0.00 0.00 6.98 15.84 60.20 485.60 0.00 0.00 6.95 8.07 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.76 12.24
loop0 8.60 137.60 0.00 0.00 16.77 16.00 55.60 425.60 1.80 3.14 44.00 7.65 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 2.59 11.92
nvme0n1 1835.00 10875.20 16.80 0.91 26.58 5.93 57.00 2640.80 0.00 0.00 14.87 46.33 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 49.62 99.84
nvme1n1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
nvme2n1 48.60 769.60 0.00 0.00 6.81 15.84 60.20 485.60 0.00 0.00 5.55 8.07 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.67 12.24
avg-cpu: %user %nice %system %iowait %steal %idle
1.49 0.00 8.73 77.16 12.53 0.09
Device r/s rkB/s rrqm/s %rrqm r_await rareq-sz w/s wkB/s wrqm/s %wrqm w_await wareq-sz d/s dkB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
dm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
dm-1 23.40 390.40 0.00 0.00 1.30 16.68 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.03 4.88
loop0 12.60 220.80 0.00 0.00 3.16 17.52 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.04 4.96
nvme0n1 1976.00 11475.20 20.80 1.04 26.79 5.81 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 52.93 100.16
nvme1n1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
nvme2n1 23.40 390.40 0.00 0.00 1.21 16.68 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.03 4.80
avg-cpu: %user %nice %system %iowait %steal %idle
1.13 0.00 7.93 80.45 9.63 0.85
Device r/s rkB/s rrqm/s %rrqm r_await rareq-sz w/s wkB/s wrqm/s %wrqm w_await wareq-sz d/s dkB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
dm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
dm-1 19.60 329.60 0.00 0.00 1.80 16.82 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.04 4.80
loop0 10.20 182.40 0.00 0.00 4.73 17.88 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.05 5.04
nvme0n1 1980.20 10539.20 19.00 0.95 27.52 5.32 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 54.50 100.00
nvme1n1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
nvme2n1 19.60 329.60 0.00 0.00 1.66 16.82 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.03 4.80
avg-cpu: %user %nice %system %iowait %steal %idle
1.44 0.00 7.08 82.11 8.04 1.34
Device r/s rkB/s rrqm/s %rrqm r_await rareq-sz w/s wkB/s wrqm/s %wrqm w_await wareq-sz d/s dkB/s drqm/s %drqm d_await dareq-sz f/s f_await aqu-sz %util
dm-0 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
dm-1 10.00 154.40 0.00 0.00 0.72 15.44 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.01 2.08
loop0 4.20 64.80 0.00 0.00 3.05 15.43 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.01 2.16
nvme0n1 1990.00 10411.20 17.20 0.86 27.28 5.23 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 54.28 100.00
nvme1n1 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00
nvme2n1 10.00 154.40 0.00 0.00 1.06 15.44 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.00 0.01 2.08
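A note on the device names: on Nitro instances like the t3, each EBS volume appears as an NVMe device whose serial number is the volume ID, so the device pegged at 100% util above (nvme0n1) can be mapped back to one of the volumes from the first post with:

# The SERIAL column holds the EBS volume ID (vol0abc... corresponds to vol-0abc...).
lsblk -o NAME,SIZE,TYPE,SERIAL,MOUNTPOINT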
uroni (August 27, 2024, 6:33pm):
Cloud storage doesn't seem to be the issue. Could you run
cat /proc/meminfo
iotop -b --only -n 5
as well?
After sending the last post, the server was restarted to resume the backup routines.
The data below is from after the reboot.
~# cat /proc/meminfo
MemTotal: 3963376 kB
MemFree: 507480 kB
MemAvailable: 2111088 kB
Buffers: 64 kB
Cached: 2143788 kB
SwapCached: 0 kB
Active: 906912 kB
Inactive: 1825872 kB
Active(anon): 1644 kB
Inactive(anon): 611788 kB
Active(file): 905268 kB
Inactive(file): 1214084 kB
Unevictable: 11852 kB
Mlocked: 11852 kB
SwapTotal: 0 kB
SwapFree: 0 kB
Dirty: 61368 kB
Writeback: 0 kB
AnonPages: 600760 kB
Mapped: 268892 kB
Shmem: 14408 kB
KReclaimable: 131376 kB
Slab: 622728 kB
SReclaimable: 131376 kB
SUnreclaim: 491352 kB
KernelStack: 11232 kB
PageTables: 12188 kB
NFS_Unstable: 0 kB
Bounce: 0 kB
WritebackTmp: 0 kB
CommitLimit: 1981688 kB
Committed_AS: 3726968 kB
VmallocTotal: 34359738367 kB
VmallocUsed: 23504 kB
VmallocChunk: 0 kB
Percpu: 1336 kB
HardwareCorrupted: 0 kB
AnonHugePages: 0 kB
ShmemHugePages: 0 kB
ShmemPmdMapped: 0 kB
FileHugePages: 0 kB
FilePmdMapped: 0 kB
HugePages_Total: 0
HugePages_Free: 0
HugePages_Rsvd: 0
HugePages_Surp: 0
Hugepagesize: 2048 kB
Hugetlb: 0 kB
DirectMap4k: 143272 kB
DirectMap2M: 3989504 kB
DirectMap1G: 0 kB
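Worth flagging in this output, assuming default vm.overcommit settings: Committed_AS (~3.7 GB) already exceeds CommitLimit (~1.9 GB), which is what a swapless 4 GB host under memory pressure tends to look like. The two values are easy to watch together:

# Compare the commit charge against the commit limit.
awk '/^CommitLimit|^Committed_AS/ {print}' /proc/meminfo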
Attachment: iotop.txt (78.6 KB)
uroni (August 27, 2024, 7:48pm):
So I found the sysstat folder, and the current guess is that the server runs out of memory, becomes inefficient with respect to cache usage, and maybe even starts paging in/out.
It doesn't have swap space, which probably makes this worse. It is unclear to me why it did not automatically create one on the 4 GB disk; maybe because that disk is too small (I will have to run some tests).
The memory usage might be hard to pin down because it could be a combination of factors:
- the backup server's memory usage due to the active backups (~700 MiB)
- Linux memory usage, e.g. for file system data structures (the logs show 1.35 GiB for this)
- another ~300 MiB for the cloud storage
Additional data collection might confirm this:
cat /proc/meminfo
top -bn1
slabtop -o
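In the meantime, memory pressure can be watched cheaply with a loop like this (a sketch; it logs MemAvailable as a percentage of MemTotal once a minute):

# Append a timestamped MemAvailable percentage to a log file every 60s.
while sleep 60; do
    awk -v d="$(date '+%F %T')" \
        '/^MemTotal/{t=$2} /^MemAvailable/{a=$2} END{printf "%s available: %.1f%%\n", d, 100*a/t}' \
        /proc/meminfo
done >> /var/log/memavail.log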
A short-term fix would be to assign more memory or manually add swap.
delcain (September 3, 2024, 11:28am):
Good morning @uroni,
This morning I noticed that the server was about to crash again, so I tried to add a new gp2 volume to use as swap. But something different from what I'm used to happened: as soon as I add the partition, it gets automatically encrypted.
nvme4n1                                          259:8    0   2G  0 disk
└─nvme4n1p1                                      259:11   0   2G  0 part
  └─LUKS-CC-28bb98ff523247dbb0ef31461aca29d2     253:3    0   2G  0 crypt
To prevent this from happening, should I stop the urbackup service?
I tried stopping it via systemctl, init.d, and even urbackupsrv, but I couldn't find any way to stop the service.
uroni (September 3, 2024, 5:22pm):
I'd recommend using a file as swap instead, e.g. via
cd /media/cloudcache
touch swap.file
chattr +C swap.file        # disable copy-on-write (relevant if the cache file system is btrfs)
chmod 0600 swap.file       # swap files must not be readable by other users
fallocate -l 2G swap.file  # preallocate 2 GB
mkswap swap.file           # format it as swap
swapon swap.file           # activate it immediately
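To make the swap file survive a reboot as well, an fstab entry can be added (adjust the path if the cache is mounted elsewhere):

echo '/media/cloudcache/swap.file none swap sw 0 0' >> /etc/fstab
swapon --show   # verify it is active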
delcain (September 4, 2024, 8:49pm):
Thanks a lot.
I will wait a few days and get back with feedback.
Done
root@ip-172-31-87-8:/media/cloudcache# swapon -s
Filename Type Size Used Priority
/media/cloudcache/swap.file file 2097148 60264 -2
@uroni
It still hasn't worked, and I'm thinking about setting up a new server. What do you think?
Is there something else we can do to solve this problem, or is reinstallation the best way to go?
If so, is there a guide to follow? Could you send me the link?
uroni (October 1, 2024, 6:00pm):
I think at this point I'd have to take a look at an instance in this state, if possible. The problem with reinstallation is that there's no guarantee it fixes anything as long as the underlying problem isn't diagnosed.