My storage location is on an NTFS partition whose capacity maxes out at 16TB. What happens if the space fills up?
Huh, didn’t know it was this low. One can buy 16TB hard disks nowadays… Seems you’d need to increase the cluster size when initially formatting the volume…
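For context, the 16TB ceiling comes from NTFS addressing clusters with 32-bit numbers, so the maximum volume size scales with the cluster size. A rough sketch of the arithmetic (Python just for the math):

```python
# Sketch: why the default 4 KiB cluster size caps an NTFS volume at ~16 TiB.
# NTFS uses 32-bit cluster numbers, so a volume can hold at most
# 2**32 - 1 clusters; max volume size = cluster count * cluster size.

MAX_CLUSTERS = 2**32 - 1

def max_volume_bytes(cluster_size: int) -> int:
    """Largest volume NTFS can address at the given cluster size."""
    return MAX_CLUSTERS * cluster_size

for cluster in (4 * 1024, 64 * 1024):
    tib = max_volume_bytes(cluster) / 2**40
    print(f"{cluster // 1024:>3} KiB clusters -> ~{tib:,.0f} TiB max volume")
```

So reformatting with 64 KiB clusters raises the ceiling to roughly 256 TiB, at the cost of more slack per file.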
Or use better file systems like btrfs or ZFS.
Do you know whether btrfs or ZFS could be used on Windows Server 2016 x64?
Here the volume size is most probably NOT the problem; the cluster size seems to be too small. I ran into this issue before and was able to fix it using free partition manager software (though not free for server OSes).
Recreating the volume with a larger cluster size is another option.
Hope this is helpful for you.
Cheers
Ironiff
With an increased cluster size there’ll be more overhead when storing small files. Does NTFS compression work with a non-4K cluster size? (edit: just tested: it doesn’t)
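To put a number on the small-file overhead: every file occupies whole clusters, so on average up to one cluster minus one byte is wasted per file. A quick sketch with some made-up file sizes (and ignoring that very tiny files can live resident in the MFT):

```python
# Rough sketch of slack-space overhead at different cluster sizes.
# The file sizes below are hypothetical examples, not real measurements.
import math

def allocated(size: int, cluster: int) -> int:
    """Bytes actually allocated on disk for a file of `size` bytes."""
    return max(1, math.ceil(size / cluster)) * cluster

files = [500, 3_000, 10_000, 70_000]  # hypothetical small files, in bytes

for cluster in (4 * 1024, 64 * 1024):
    used = sum(allocated(f, cluster) for f in files)
    data = sum(files)
    print(f"{cluster // 1024} KiB clusters: {used} bytes allocated "
          f"for {data} bytes of data ({used / data:.1f}x)")
```

For this example set, 64 KiB clusters allocate roughly 4x the actual data size, versus about 1.1x with 4 KiB clusters, so a backup store full of small files pays a real price.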
There’s a btrfs driver for Windows that works for XP and newer… But it’s a re-write from scratch, not based on the Linux one, is a lot newer, and will have a heck of a lot less people using it to find bugs… I wouldn’t use it on a backup server.
ZFS is a similar story, although it’s being ported over (from the BSD one I think) rather than re-written so it’s likely a little bit more stable and reliable.
So to summarize on Windows:
NTFS:
- Can compress files, but compression requires 4K (or smaller) clusters, which limits the volume to 16TB
- Will have larger overhead for small files if the cluster size is increased (which large volumes require)
- Windows Server dedup will probably also compress on larger volumes (the small-file disadvantage remains, though, unless small files are deduped?)
ReFS:
- Supports large volume sizes (1 yobibyte …)
- Doesn’t have file compression
- Windows Server dedup will compress data, but breaks UrBackup’s file-level deduplication (so those file-level copies put strain on the server dedup system and cause temporary space usage spikes)