Do I need to reset the urbackup DB (and how), if I want to redo my backup folder completely?

  1. what should I do if I need to completely wipe out my backup folder? (i.e. do I need to delete my urbackup DB completely? run the cleanup script? reinstall from scratch and add the ident keys back…)
  2. what’s the concrete impact of running with dedup=off on ZFS?
  3. what would be better?
    3.1) ZFS with raidz, ssd_cache, WITHOUT dedup, controller_setup_as_JBOD
    3.2) BTRFS on a single drive, with dedup, on top of hardware controller RAID5
  4. is there a way to leverage a small SSD with BTRFS, similar to ZFS’s L2ARC cache mode?

background:

  • I have been struggling to set up urbackup storage on ZFS, as dedup requires so much RAM/CPU that the server ends up frozen and performance is too abysmal to be useful.
  • without dedup, ZFS seems amazingly fast… (raidz + cache on SSD + cache on controller)
  • since my RAID controller is capable of RAID5, I am considering giving BTRFS a try (originally I discarded the idea due to BTRFS’s buggy raid5/6 implementation).
  • in my specific case, I would like to use RAID5 over 6x3TB HDs, a small SSD partition (up to 64GB), and I have an old but trusted Adaptec RAID controller with 512MB of cache; RAM=16GB.
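For reference, the dedup-off ZFS layout described above could be created roughly like this (device names and the pool name `backup` are hypothetical placeholders, not from my actual setup):

```shell
# Six 3TB disks in a raidz vdev; ashift=12 for 4K-sector drives
zpool create -o ashift=12 backup raidz \
    /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg

# Small SSD partition as L2ARC read cache
zpool add backup cache /dev/sdh1

# dedup is off by default, but set it explicitly to be safe;
# lz4 compression recovers some space without dedup's RAM cost
zfs set dedup=off backup
zfs set compression=lz4 backup
```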

If your Adaptec controller is a 6 series or older controller, these controllers have a known (to me) firmware bug which corrupts data on >2TB disks during a RAID rebuild operation when using RAID6 while writing to the array. I haven’t tested this on RAID5, but you may want to do some testing before putting valuable data on the array.

I suspect this is a 2TB limitation bug in the RAID rebuild portion of the firmware and may affect all parity RAID rebuild operations. This bug also existed on 7 series controllers, but was later fixed in a firmware update. Unfortunately, Adaptec didn’t apply the fix to older controllers as far as I’m aware.

The BTRFS RAID5/6 issues do not apply if you are using hardware RAID. If you have very little RAM, hardware RAID and BTRFS on top will probably work fine in most cases.

I use the following BTRFS mount options with good performance results when using hardware RAID underneath BTRFS:

defaults,noatime,nodiratime,compress=zlib,commit=3600
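As an fstab entry, those options would look something like this (device path and mount point are hypothetical):

```shell
# /etc/fstab — BTRFS on top of the hardware RAID device
# noatime/nodiratime avoid metadata writes on every read;
# commit=3600 batches writes at the cost of up to an hour of data on power loss
/dev/sda1  /mnt/backup  btrfs  defaults,noatime,nodiratime,compress=zlib,commit=3600  0  0
```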

I’ve not tried this, but bcache or LVM cache, I believe, should be able to do something similar to ZFS’s L2ARC.
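Untested, but an lvmcache setup along these lines should work; all device names, sizes, and the VG/LV names are assumptions for illustration (here `/dev/sda` stands for the hardware RAID5 array and `/dev/sdb1` for the SSD partition):

```shell
# Put both the RAID array and the SSD partition under LVM
pvcreate /dev/sda /dev/sdb1
vgcreate backupvg /dev/sda /dev/sdb1

# Data LV occupies the whole RAID array
lvcreate -n backup -l 100%PVS backupvg /dev/sda

# Attach a cache LV on the SSD; writethrough is safer than writeback
# if the SSD dies
lvcreate --type cache --cachemode writethrough -L 60G \
    -n backupcache backupvg/backup /dev/sdb1

# BTRFS on top of the cached LV
mkfs.btrfs /dev/backupvg/backup
```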