Btrfs filesystem becomes read-only

I have UrBackup running on a Debian-based NAS with a BTRFS volume to store the backups. It worked fine for a long time, but two days ago the BTRFS volume suddenly became read-only. I can't run any backups, and everything I have tried to get it back to read-write mode has failed.
I have read a lot on the net but haven't found a solution.
I know this is not directly an UrBackup problem, but any help will be appreciated.

Thank you so much.

This sounds like it could be filesystem corruption. Is there anything in dmesg or /var/log/messages that indicates a problem with BTRFS?

Also, please post the output of “btrfs device stats /btrfs-mountpoint”
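
For example (the mount point /mnt/backups below is just a placeholder for wherever your btrfs volume is mounted):

dmesg | grep -i btrfs             # kernel messages mentioning btrfs
btrfs device stats /mnt/backups   # per-device read/write/corruption error counters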

Hello,
This is what dmesg shows:

[ 5274.869841] BTRFS error (device md127): parent transid verify failed on 543433687040 wanted 18201 found 63912
[ 5274.872192] BTRFS error (device md127): parent transid verify failed on 543433687040 wanted 18201 found 63912
[ 5274.872285] BTRFS warning (device md127): Skipping commit of aborted transaction.
[ 5274.872316] BTRFS: error (device md127) in cleanup_transaction:1864: errno=-5 IO failure
[ 5274.872324] BTRFS info (device md127): forced readonly
[ 5274.872330] BTRFS: error (device md127) in btrfs_drop_snapshot:9231: errno=-5 IO failure
[ 5274.873153] BTRFS info (device md127): delayed_refs has NO entry

What can I do?

The above looks suspiciously like a hard drive problem… you could replace the failing drive with a good one. Fortunately, losing backups is less of a disaster than losing the original data… current backups can be recreated.
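
If you want to double-check the drives before replacing anything, something like this would show the SMART health and the RAID state (device names are placeholders; smartmontools needs to be installed):

smartctl -a /dev/sda        # full SMART report for the first member disk
smartctl -a /dev/sdb        # same for the second disk
mdadm --detail /dev/md127   # mdraid array state and member status
cat /proc/mdstat            # quick overview of all md arrays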

It's very strange, because I had not seen any disk errors or RAID errors (it is a RAID1) before, and the two disks are new, but anything is possible.
I will try to save some of the backups, since I can still access them in read-only mode, and after that try to repair the btrfs filesystem… If that doesn't solve the problem, I will delete the whole RAID and recreate it.
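
For reference, roughly the commands I plan to try, with placeholder paths for my setup (the filesystem has to be unmounted first for the restore and the remount):

btrfs restore -v /dev/md127 /mnt/usbdisk/salvage    # copy files off the unmounted filesystem without writing to it
mount -o ro,usebackuproot /dev/md127 /mnt/backups   # try mounting read-only from an older tree root
btrfs check --readonly /dev/md127                   # report filesystem errors without modifying anything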
Thank you so much.

If you have two disks, put them into a btrfs raid1, not an mdraid raid1. That way btrfs would (perhaps) have been able to correct the error by reading from the second disk.
Maybe you could simulate that by removing one of the disks from the mdraid and then mounting (and then doing the reverse with the other disk)?
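
A rough sketch of both ideas, with placeholder device names (/dev/sda1, /dev/sdb1, /mnt/backups) that will differ on your system:

# Native btrfs raid1 instead of mdraid (this wipes the existing data on both devices):
mkfs.btrfs -m raid1 -d raid1 /dev/sda1 /dev/sdb1

# Drop one member from the existing mdraid and try mounting the remaining copy:
mdadm /dev/md127 --fail /dev/sdb1 --remove /dev/sdb1
mount -o ro /dev/md127 /mnt/backups
# ...later re-add the disk so the array resyncs (repeat with the other disk if needed):
mdadm /dev/md127 --add /dev/sdb1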

Also with btrfs the kernel version is always important.
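
For example:

uname -r          # running kernel version
btrfs --version   # btrfs-progs version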

If you’re using non-ECC RAM, you may want to run a memory test and check for bad memory. Faulty RAM can also damage filesystems.
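
For example, you could boot memtest86+ from the GRUB menu, or run a quick in-system check (assuming the memtester package is installed; the size and pass count are just examples):

memtester 1024M 3   # test 1 GiB of RAM for 3 passes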

In the end I had no luck, so I destroyed the whole RAID and created it again. The disks seem to be OK, so I really don't understand why this happened.
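
For reference, the rebuild was roughly the following (device names and mount point are from my setup and may differ):

mdadm --create /dev/md127 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # recreate the RAID1 array
mkfs.btrfs /dev/md127                                                      # fresh btrfs filesystem on top
mount /dev/md127 /mnt/backups                                              # remount for UrBackup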
Thanks all