UrBackup and OpenDedup

Hi Uroni,

First of all, I wanted to thank you for this great piece of software. For me it does exactly what it should: replace Windows Home Server as the central backup server in my home environment.

But there is one question I wanted to ask:
On my backup storage (Ubuntu Server 12.04) I have deduplication working with OpenDedup (SDFS), which works fine if I restart UrBackup after mounting the dedup volume with mount.sdfs. Since mount.sdfs is a Java application, I cannot start the SDFS volume before UrBackup starts (e.g. via /etc/fstab).
Is there a way to delay the start of UrBackup after restarting the server? Just to be sure that deduplication is running first; otherwise UrBackup starts with error messages and a manual restart is necessary…

Keep up the great work, and thank you very much,


P.S.: OpenDedup reports that my backups contain 73 GB of data, 55 GB unique and 18 GB deduplicated… Just FYI.

Interesting. I never tried it because it is a filesystem written in Java. How is the performance? (Consider switching to ZFSonLinux if it is bad :).)

With regards to the delay: in your case I would just remove UrBackup from the init system via

update-rc.d urbackup_srv remove

and then add start_urbackup_server to /etc/rc.local, perhaps with a
sleep 60
or something before it.
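For illustration, a minimal /etc/rc.local along those lines might look like the following sketch. The volume name pool0 and the mount point /media/dedup are assumptions here; adjust them to your own SDFS setup:

```shell
#!/bin/sh -e
# /etc/rc.local -- executed at the end of each multiuser runlevel.

# Mount the OpenDedup volume first ("pool0" and the mount point are examples).
mount.sdfs pool0 /media/dedup

# Give the Java-based SDFS mount some time to come up completely.
sleep 60

# Only then start the UrBackup server daemon.
start_urbackup_server

exit 0
```

Make sure the file stays executable (chmod +x /etc/rc.local), otherwise it will not run at boot.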

Hello Uroni,

Thank you for the answer. I don't know why I didn't figure out the /etc/rc.local solution myself… OpenDedup already starts via /etc/rc.local… Thank you very much…

Regarding my weird system setup:
Of course you are absolutely right with your suggestion, but I have very limited hardware on my home storage (E-350, 4 GB RAM). I tried btrfs as you mention in your manual. Although I know that videos don't dedup very well, I copied my video library to the deduplication storage before I used it for backups.
ZFS with dedup consumes in theory about 2 GB of RAM per TB of deduplicated data, in practice more like 5 GB per TB, so I didn't try it.
With btrfs and its deduplication patches I had some problems, and after two days of playing around I didn't feel comfortable using this filesystem.
Next I tried OpenDedup, which consumes about 1 GB (in practice 3 GB) of memory when deduplicating 4 TB of data.
This means my memory fits well for my 4 TB backup storage.
Performance is more or less OK with 128k chunks, but not the best (45 MB/s at nearly 100% CPU load).
It isn't really a filesystem; it is a Java application, as you mentioned in your post. So the chunks are stored on a (reliable) ext4 filesystem, and OpenDedup manages their storage.
But OpenDedup has another issue: obviously it can't store hard links, so UrBackup has to copy the same data again and again without hard-linking, and with warnings in the backup logs. This means more network traffic and no "clean" backup logs, but so far this is no showstopper for me. (OpenDedup deduplicates this duplicated data anyway, so hard-linking doesn't make much sense in my setup, but that isn't UrBackup's problem.)

Do you plan to implement a kind of file-level deduplication?
Hard-linking across all backups instead of only within one backup set would be good enough in most cases. If file X is stored on three PCs, you would have to copy it only once and hard-link it between the backups on the server. Perhaps this could be an opt-in…
If this already works and I have missed this functionality, then shame on me :-D

Sorry for this looooong post…
Just my 2 cents :)

Greetz, N3Tc0MM

Yes, it hard-links across clients. That is, if your file system can do hard links ;)
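As a quick sanity check, something like the following sketch verifies whether a given filesystem supports hard links: create a file, hard-link it, and compare inode numbers. The directory and file names are arbitrary placeholders; run it on your backup storage mount point instead of /tmp:

```shell
#!/bin/sh
# Test hard-link support: two names sharing one inode means hard links work.
cd /tmp                      # replace with your backup storage mount point
echo test > hl_a
ln hl_a hl_b                 # fails outright on filesystems without hard links
ino_a=$(stat -c %i hl_a)     # inode number of the original
ino_b=$(stat -c %i hl_b)     # inode number of the link
if [ "$ino_a" = "$ino_b" ]; then
    echo "hard links supported"
else
    echo "hard links NOT supported"
fi
rm -f hl_a hl_b              # clean up the test files
```

On ext4 or btrfs this should print "hard links supported"; on an SDFS mount the ln step would be the point of failure.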

Hello Uroni,

OK… message received ;-)
I will give btrfs a chance…
Since I have messed up my system by playing around, I will reinstall the whole thing from scratch as soon as my backup is done :-)

Again: thank you for this great backup solution…

Greetz, N3Tc0mm