Noob question - backups of 3TB and bigger


I have a Windows PC with four NTFS hard discs in it. 160GB, 4TB, 4TB, 8TB. And this thing is full of films and music.

I am backing up to an UNRAID server (v6.9.2) with UrBackup Server v2.4.13 running in a Docker container.

I set up the client for the first time on this PC, with fairly default settings, and then got a warning in the server logs because my images are too big:

Data on volume is too large for VHD files with 2.8054 TB. VHD files have a maximum size of 2040GB. Please use another image file format.

What should I do to back up 3.5TB of data from a drive? Is this too big for an image backup? Can I only use file mode? Or is there a way of creating multiple VHD files?
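For context on the numbers in that warning: the commonly cited reason for the limit is that the VHD format addresses 512-byte sectors with 32-bit offsets, which caps an image just under 2 TiB, and UrBackup enforces a slightly lower 2040GB ceiling. A quick sanity check of the arithmetic:

```shell
# VHD's block allocation table stores 32-bit sector offsets (512-byte
# sectors), so an image can never exceed 2^32 * 512 bytes = 2 TiB.
max_vhd_gib=$(( 4294967296 * 512 / 1024 / 1024 / 1024 ))
echo "VHD hard ceiling: ${max_vhd_gib} GiB"      # 2048 GiB

# The 2.8054 TB volume from the log, in GiB (integer arithmetic):
volume_gib=$(( 28054 * 1024 / 10000 ))
echo "Volume size:      ${volume_gib} GiB"       # well past the limit
```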

The nature of the data means it doesn’t change too often. It just gets added to, and the music gets its tags updated.

Previously I have only backed up office PCs with small hard disks. Didn’t know there was a limit on the images.

Note: The UNRAID server is freshly built specifically to run urbackup.

See if RAW or something else is an option for the image file format rather than VHD; 2TB is a limitation of the VHD format. You should have some options if you’re using a Linux file system for your backup storage.

Thanks for your help @Bearded_Blunder. The destination is an XFS file system. The settings on the server for the backup only offer VHD or Compressed VHD. No mention of RAW or anything else. I was kinda hoping it would just span multiple VHDs.

I guess I am just doing things wrong. Probably no reason to use Image Backups on Media files.

XFS works with raw image files, but you need a recent Linux kernel plus XFS formatted with the cow (copy-on-write/reflink) feature.
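For anyone checking their own storage, here is a minimal sketch of verifying whether an XFS volume was formatted with reflink (the cow feature mentioned here). The mount point and device name are placeholders, and reflink can only be chosen at format time, so enabling it means reformatting, which wipes the disk.

```shell
# On a mounted XFS volume, xfs_info reports the reflink flag:
xfs_info /mnt/backups | grep -o 'reflink=[01]'
# reflink=1 -> CoW available; reflink=0 -> formatted without it.

# Enabling it requires a reformat (DESTROYS all data on the device):
# mkfs.xfs -m reflink=1 /dev/sdX1
```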

As I am using UNRAID I am not sure what cow features it has. I know it uses a subset of Linux, so I guess I am stuck with what I have. Though I’ll keep that in mind for the next build I do, as I may just flip this all around and put the live network files onto the UNRAID server and send the backups to a different machine.

Seeing how long the initial backup has taken I may yet reorganise the whole plan of network storage and backup.

Would btrfs have the same limitation?

Thanks for your interest, @BBit, and for waking this old thread up. I have no idea if btrfs would work. When I was initially setting up the UNRAID system I picked XFS due to reports that btrfs was unstable and not to be trusted. Hard to know when you are going into a new system.

Also, is it possible to convert XFS to BTRFS without data loss? I don’t want to have to start the backups again from zero.

To be honest, I was hoping for a more active community on the UNRAID side with experience of UrBackup. I never got any replies to this question, so things were left as they are. Which is frustrating, as I had a hard disk start to fail and an image recovery would have been useful.

The important bit is that UrBackup is brilliant, and it did save my butt while I rebuilt the damaged source machine. Not only did UrBackup allow me to rescue my data, it is also an easy backup system to dip into and pull single documents or whole folders from while the drama is ongoing.

@BatterPudding Thank you for taking the time to respond.

I have also posted in the UNRAID forums. I initially set up my RAID array with BTRFS as it has a snapshot feature. Without a doubt, when UNRAID supports ZFS I will be migrating to that. Unfortunately I’m experiencing your issue as well with BTRFS, which makes me wonder whether or not it’s an issue with the container provided by the UNRAID community.

I put a little more trust in BTRFS as it’s what Synology NAS units use for their file system.

Originally I was trying to use TrueNAS Scale. However, their community app ecosystem is underdeveloped and based on Kubernetes. Kubernetes adds a lot of needless complexity on top of the Docker system, which is already somewhat complex in a home hosting environment. I’ve received grumpy support for very basic questions that are not in their documentation. Due to that combination of factors I think I’ll stick with UNRAID.

Quick unrelated question: is it possible to pick single documents/folders out of an image backup?

Doesn’t really matter what Synology do. It is about the UNRAID version of BTRFS. There was a lot to read when setting up the UNRAID system, and it pushed me more to the XFS side due to “buggy” comments about BTRFS. So I avoided it. I think they also said XFS fitted their UNRAID system better, or something (sorry, it was a while back that I set it up).

I tried posting questions on the UNRAID forums about large files and UrBackup and got no reply. I assume no one else was using it. I think most UNRAID people have their files on UNRAID and back up elsewhere. So I gave up and stuck to a file-only system for my home backup.

Pretty certain you can… I have an NTFS-based Windows PC backing up to UNRAID for a small business I help out. There it does both file and image backups. I only had it in place as an experiment, but it has saved their butts a couple of times already. (Can’t check at the moment as that PC is sitting behind me in the middle of a rebuild… so I’ll update this post later in the weekend.)

Pretty certain you can open a backup image and walk around the files, then pull folders or files as you need them. I kept both styles of backup, as a file backup is quicker since there is less to compare each time.
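On pulling single files from an image: a sketch of one way to browse an uncompressed UrBackup .vhd outside the web interface, using qemu-nbd on Linux (the image path is an example; on a Windows machine you can also attach a VHD directly through Disk Management).

```shell
# Attach the VHD read-only as a network block device, then mount a partition.
sudo modprobe nbd max_part=16
sudo qemu-nbd --read-only -f vpc --connect=/dev/nbd0 /backups/pc1/Image_C.vhd
sudo mount -o ro /dev/nbd0p1 /mnt/restore

# ...copy out the documents or folders you need, then tidy up:
sudo umount /mnt/restore
sudo qemu-nbd --disconnect /dev/nbd0
```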

Thanks for the future update and the clarifying thoughts on BTRFS! If there’s no response to my question in a reasonable amount of time I will reach out via binhex’s urbackup GitHub repository directly.

I just checked whether this is possible in UNRAID. Unfortunately it is not, without reformatting the drive with the new file system. You would have to transfer the data to a different storage medium/drives, reformat the existing drives in the array, and migrate the data back.
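Outside UNRAID’s own GUI workflow, the shuffle described here looks roughly like this as a sketch (device and paths are placeholders; mkfs.btrfs erases the disk, so only run it after the copy-off has been verified):

```shell
# 1. Copy everything off the XFS disk to spare storage, preserving attributes.
rsync -aHAX /mnt/disk1/ /mnt/spare/

# 2. Reformat the emptied disk as btrfs (DESTROYS its contents).
umount /mnt/disk1
mkfs.btrfs -f /dev/sdX1
mount /dev/sdX1 /mnt/disk1

# 3. Copy the data back.
rsync -aHAX /mnt/spare/ /mnt/disk1/
```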

Haha - looks up sleeve for spare 16TB of storage to do that shuffle… maybe not.

This is part of my problem, though. I am seriously considering flipping my storage setup. If I use UNRAID for my file system it benefits from the parity of UNRAID if problems occur. I would then back up the UNRAID server to an NTFS PC running Windows, as that file system is more flexible.

My original plan was to back up a server and a couple of running PCs. It is those PCs that I really wanted image backups for, and for now they stay on Macrium.

If you want to help testing 2.5.x (see the “2.5.x Testing” thread on this forum), that version supports VHDX, which allows image backups > 2TB.

Otherwise, these file systems support the raw cow (copy-on-write) image format:

  • btrfs
  • XFS (with cow enabled)
  • ReFS
  • winbtrfs (will be in the next 2.5.x release)

Thanks @uroni, I know my issue is not with UrBackup; it is the UNRAID limitations on file size.

Interesting that with UNRAID it doesn’t show cow despite being formatted as btrfs.