UrBackup Server on MS Azure

Dear Community Members,

Has anyone successfully deployed the UrBackup server on Microsoft Azure? If so, what were some of the general settings and options that you used? Windows or Linux server? What storage type did you use to store the backups? What file system did you employ? I would like to get some details from anyone who has done it successfully, or some caveats from those who tried unsuccessfully.

I know that there is an official AWS image for UrBackup, but currently I have access to Azure at a reduced cost and would like to set this up for a specific project/client.

Thank you in advance for your feedback.


Any of the supported operating systems should work on Azure, that is, Windows, Linux and FreeBSD. You’ll probably be able to save costs if you go with Linux or FreeBSD.

You’ll probably have to store the backups on an Azure block device or page blob, using one of the file systems supported by the operating system, i.e. NTFS/XFS/btrfs/ZFS.

Btw, I’d say Azure is the priciest per unit of performance of the big three (Google, Amazon, Microsoft), at least as of the last time I tried it. So maybe the reduced cost is not worth it.

I’ve deployed in Azure on Windows Server 2012 R2.

I used SSD+HDD with deduplication in a datastore. Additionally, I formatted with a 64K allocation unit size out of habit, but I’m not sure I needed to do that.
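For what it’s worth, the main trade-off with a 64K allocation unit size is slack space on small files, which matters little for large backup files. A quick back-of-the-envelope sketch (the file sizes here are hypothetical):

```python
def on_disk_size(file_bytes: int, cluster: int) -> int:
    """Bytes actually consumed: file size rounded up to a whole cluster."""
    return -(-file_bytes // cluster) * cluster  # ceiling division

# A 1 KB file wastes most of a 64K cluster, but almost none of a 4K one
print(on_disk_size(1024, 64 * 1024))  # 65536
print(on_disk_size(1024, 4 * 1024))   # 4096

# For a 10 MB backup file the 64K overhead is negligible
print(on_disk_size(10_000_000, 64 * 1024))  # 10027008
```

Since UrBackup storage is dominated by large files, the per-file slack from 64K clusters should be insignificant either way.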

I used Azure because my client gets $5,000 a year in free credits; they are a non-profit org.

So far so good. I’m curious to see how it works under pressure.

> I used Azure because my client gets $5,000 a year in free credits. They are a non-profit org.

Same situation here. That’s why it’s worth experimenting.

How are you doing deduplication?

I have deduplication enabled on the volume (in my case F:\):

# Install the Windows data deduplication feature
Import-Module ServerManager
Add-WindowsFeature -Name FS-Data-Deduplication

# Enable dedup on the backup volume and kick off an optimization job
Import-Module Deduplication
Enable-DedupVolume F: -UsageType HyperV
Start-DedupJob -Volume F: -Type Optimization

You can also do it from the GUI on a schedule.



Currently testing with Ubuntu 16.04 and ZFS in Azure.

For this type of workload, it seems like Windows dedup did a better job than ZFS. That might just be on my server, though.

Are you doing image backups or file backups to the deduplicated volume? I would imagine that deduplication would be more effective with file backups than with image backups. I tend to think of image backups as sealed containers. I could be wrong.

I’m doing both. File backups with 365 day archives and image backups with 60 day archives.

With Windows dedup I’m seeing 65%+ savings; with ZFS I saw about 14%.
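To put those savings rates in concrete terms, here is a rough illustration with a hypothetical 1 TB of logical backup data (interpreting the rate as percent of space saved):

```python
def gb_on_disk(logical_gb: int, savings_pct: int) -> float:
    """Physical GB consumed after dedup, given percent of space saved."""
    return logical_gb * (100 - savings_pct) / 100

# Hypothetical 1 TB (1000 GB) of logical backup data
print(gb_on_disk(1000, 65))  # 350.0 GB on disk with Windows dedup at 65%
print(gb_on_disk(1000, 14))  # 860.0 GB on disk with ZFS at 14%
```

That gap (roughly 2.5x less physical storage in this example) is what makes the dedup scheme choice significant for backup storage costs.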

Windows includes a new dedup scheme that sets up the volume to deal with VHD(X) files. It’s really nice. It was meant to be used for VDI deployments, but I’ve found it works nicely for image backups :wink:

The uncompressed VHD has a block size of 512 KB with a 512-byte bitmap in front. I’d say this throws off ZFS’s fixed-block dedup. If you can, use copy-on-write raw image files with ZFS instead of VHD. But I’d guess the “content-aware” offline Microsoft dedup will still win.
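The alignment problem can be sketched with a toy fixed-block hasher. This is only an illustration of the principle, not how ZFS actually deduplicates: the block size is scaled down, and a 512-byte zero header stands in for the VHD bitmap. Identical payload data produces no matching blocks once a header shifts the block boundaries:

```python
import hashlib
import os

BLOCK = 4096  # toy dedup block size (real systems use larger blocks)

def block_hashes(data: bytes) -> set:
    """Hash each fixed-size block, ZFS-fixed-block-dedup style."""
    return {hashlib.sha256(data[i:i + BLOCK]).digest()
            for i in range(0, len(data), BLOCK)}

payload = os.urandom(BLOCK * 64)                   # identical disk contents
raw = block_hashes(payload)                        # raw image: blocks line up
shifted = block_hashes(b"\x00" * 512 + payload)    # 512-byte header shifts all boundaries

shared = raw & shifted
print(len(shared))  # 0: no block dedups against the raw copy
```

Every 4K block in the shifted copy straddles two blocks of the raw copy, so the hashes never match even though the underlying data is byte-for-byte identical. Content-aware (variable-block) dedup sidesteps this by choosing block boundaries from the data itself.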


Should I do a write up on setting up dedup in Windows for UrBackup? Would that be helpful to anyone?

If there are special considerations to be taken for UrBackup, maybe put that in the wiki?
If it’s generic dedup setup info, I guess people can google for that (and maybe find more up-to-date articles).

I would appreciate that. I want to test this setup, and having something to reference would be useful.