UrBackup cluster - backing up a client to multiple servers at the same time

I've tried to get an UrBackup cluster working, but it fails when the client connects to the other server.

What I've done:

  1. From the first server I copied the ident .priv/.pub files to the second server, but the client still connects
    only to the first one. If I shut down the first server, the client does not connect to the second either.
    What I would like is for the client to back up to both servers, so that on restore I can choose which one
    to restore from, since the server initiates the restore.
  2. Then I created a client on the second server with an internet connection only; both clients have the same authkey.
  3. The two servers use different ports.
  4. My settings.cfg

In /etc/default/urbackup I set internet_only = true.
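For reference, a minimal sketch of what that client-side setting might look like. The file name `/etc/default/urbackupclient` and the `INTERNET_ONLY` variable are assumptions based on the default Debian/Ubuntu client package, so check the paths on your own install:

```
# /etc/default/urbackupclient   (assumed Debian/Ubuntu client location)
# Restrict the client to internet-mode connections only, so it does not
# answer local-broadcast discovery and always uses the configured server:
INTERNET_ONLY=true
```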

But the client did not connect to the second server.
I think my mistake is something general.
What is the right way to build an HA UrBackup cluster in master-master mode?
Thank you!
Sincerely Bonkersdeluxe

I am really interested in this as well. I ran into an issue with MBR limits on Ubuntu Server 20.04, so my disk is capped at 2 TB. I want to spin up another Ubuntu Server VM with 2 TB and run them as a cluster, so my network clients can choose whichever server has less wait time and back up to whichever is ready first.

Linux has supported GPT for larger hard disks for a long time, and you can use RAID to create even larger volumes. For example, I just added a 4TB drive to my Ubuntu 18.04 “bionic” server, and created this test partition with fdisk and mkfs.ext4.

>sudo fdisk -l /dev/sdd
Disk /dev/sdd: 3.7 TiB, 4000787030016 bytes, 7814037168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 8FD44348-2D63-F041-AD6C-1D914852CC5E

Device     Start        End    Sectors  Size Type
/dev/sdd1   2048 7814037134 7814035087  3.7T Linux filesystem

>df -h
Filesystem      Size  Used Avail Use% Mounted on
/dev/sdd1       3.6T   89M  3.4T   1% /bigboy

As you can see, Ubuntu supports this 3.6T partition with an EXT4 filesystem just fine.
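The numbers in the fdisk output above line up: the sector count times the 512-byte logical sector size gives the byte size, and dividing by 1024^4 gives the TiB figure. A quick sanity check of the arithmetic:

```python
# Sanity-check the fdisk figures for the 4 TB drive shown above.
sectors = 7814037168      # from "7814037168 sectors"
sector_size = 512         # logical sector size in bytes

size_bytes = sectors * sector_size
print(size_bytes)                       # 4000787030016, matching fdisk
print(size_bytes / 1024**4)             # ~3.64 TiB (fdisk displays 3.7 TiB)
```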

I misspoke: I am trying to convert my /dev/vda from DOS (MBR) to GPT, since MBR is limited to just over 2 TB.
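That cap comes from MBR storing partition start and length as 32-bit sector counts (LBA), so with 512-byte sectors at most 2^32 sectors are addressable. Plain arithmetic, nothing UrBackup-specific:

```python
# MBR partition entries hold 32-bit sector counts, so the largest
# addressable region with 512-byte sectors is:
max_sectors = 2**32
sector_size = 512

limit_bytes = max_sectors * sector_size
print(limit_bytes)                 # 2199023255552 bytes
print(limit_bytes / 1024**4)       # exactly 2.0 TiB (~2.2 TB decimal)
```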

I kick myself because I added an extra 1 TB before knowing about the sector limit, so now I have a virtual 1 TB that's not really usable unless I want to create another mount point for something…

But it's looking like the only way to fix that is to wipe the VM and rebuild it.

I have looked into doing a RAID setup, but my physical server is already RAID 0 with 8 TB, running 3 VMs plus a cloud server and a web server. So I don't really see the benefit of creating a virtual-disk RAID, especially when I back up the qcow2 images every week.

Home labs are really awesome, and this backup system is nearly bulletproof… Great for home computers when the wife downloads sketchy stuff lol and I need to reimage after she crashes her PC with 86 Chrome tabs across 2 windows (150+ tabs total).

My second home server runs 4x 4 TB in RAID 0; since it has 12 cores, I can assign them to VMs. I haven't migrated over to it yet since the 4-core 8 TB server is doing just fine.