Optimizing backup storage size

Hello,
I wonder how to optimize the consumption on the backup storage while also keeping up speed of backups. Currently I’m using a btrfs volume in raid1 with 5 disks, another HDD for the OS (omv5). The backup volume is mount without compression, because my server’s CPU is old (Xeon X3220 from 2007) and I fear to sacrifice backup speed when activating online compression. Also the images written during backups are uncompressed (no vhdz, but vhd) to make use of btrfs’s deduplication. Compressed images would destroy the deduplication feature of btrfs, correct?

So I wonder if I can save storage space by optimizing the volume during non-backup times. One thing I see is that I can compress the extents on the btrfs filesystem. There seem to be two options: either run btrfs fi defrag -r -czstd <volume>, which should compress all existing files but might take a while, or use btrfs property set <folder> compression zstd to compress individual folders. What happens to btrfs's deduplication then? Will it be harmed? What are the downsides of this?
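To make it concrete, this is what I have in mind (paths are placeholders, I haven't run either yet):

# option 1: recompress files already on the volume; recursive, so it will probably run for a long time
btrfs filesystem defragment -r -czstd /srv/backup
# option 2: set the compression property on a folder, so files written into it afterwards get compressed
btrfs property set /srv/backup/images compression zstd
# verify what is set
btrfs property get /srv/backup/images compression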

Another thing might be to make use of btrfs deduplication: if I have duplicate extents, I understand dduper can find those blocks and deduplicate them. What's the disadvantage of this?
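For illustration, this is the kind of offline dedupe pass I mean; I'm using duperemove here only because I know its flags, and I assume dduper would be run in a similar spirit (path is a placeholder):

# scan recursively (-r), print human-readable sizes (-h) and actually submit duplicate extents for deduplication (-d)
duperemove -dhr /srv/backup/images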