Issue with Urbackup, docker, and Pacemaker cluster

I have a 2-node Pacemaker cluster running a bundle resource with the urbackup Docker container, and this works fine. If I fail over the cluster (hard shutdown of a node in my test environment), the urbackup container successfully restarts on the other node. The Docker volumes live on a DRBD-synced folder managed by the Pacemaker cluster, and this works as well, i.e. the filesystem also comes up on the other node. I run the container with "--network host" so the urbackup web GUI uses the cluster's IPaddr2 resource properly: whichever node is active, the IPaddr2 address gets me to the urbackup GUI.
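For context, the setup looks roughly like this. This is only a sketch: the resource and bundle names, image tag, VIP address, and DRBD mount path are illustrative placeholders, not my exact config.

```shell
# Floating VIP for the web GUI (address/netmask are placeholders)
pcs resource create urbackup_vip ocf:heartbeat:IPaddr2 \
    ip=192.168.1.50 cidr_netmask=24 op monitor interval=30s

# Bundle running the urbackup container with host networking;
# the DRBD-backed folder is mapped to /var/urbackup inside the container
pcs resource bundle create urbackup-bundle \
    container docker image=uroni/urbackup-server:latest \
        options="--network host" \
    storage-map id=urbackup-data \
        source-dir=/mnt/drbd/urbackup target-dir=/var/urbackup

# Keep the bundle on the same node as the VIP
pcs constraint colocation add urbackup-bundle with urbackup_vip INFINITY
```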

However, the urbackup server fails to see the clients when it runs on the second node (after originally running on the first). What triggers this? I am using the same shared folders, so I need a complete list of everything that goes into detecting a "new" server.

I obviously want my client list and backups to be detected on whichever node urbackup runs, so if I have a grocery list of things that need to match, I can identify which one is off and fix it to get the result I want. I am so close; once this issue is solved, my redundant urbackup cluster will be up and running.

I just read that with the "--network host" flag, the container's hostname is automatically set to the host's. Is a different hostname (it will differ depending on which node is active) what triggers the detection of a new server?

This hostname cannot be overridden with the -h flag when "--network host" is set…
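A quick way to confirm this behavior is to compare the hostname on the host with the one the container sees ("urbackup" here is an assumed container name):

```shell
# On the host
hostname

# Inside the container; with --network host the container shares the
# host's network namespace, so this should print the same value
docker exec urbackup hostname
```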

So… can I get a list of all the attributes used to generate the server ID? Again, my goal is for the two containers that Pacemaker brings up to be seen as the same server.

I have read the FAQ, which lists a number of files in /var/urbackup/ to restore so that a new install registers as the same server. But in my case I map that entire folder to storage shared between the two nodes, so all of those files are identical on whichever node is running the container. So there must be other factors determining server identity than just those files… Is it the hostname, the NIC MAC address, …
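One way to rule those files out is to checksum them before a failover and verify them on the node that takes over. A sketch; the mount path is a placeholder, and the `server_ident.*` glob is an assumption about which identity files the FAQ refers to:

```shell
# On the currently active node: record checksums of the identity files
# into the DRBD-backed folder itself, so the record follows the failover
sha256sum /mnt/drbd/urbackup/server_ident.* \
    > /mnt/drbd/urbackup/ident.sha256

# ...trigger the failover, then on the node that took over:
sha256sum -c /mnt/drbd/urbackup/ident.sha256
# All lines reporting "OK" means identity files are not what changed
```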

Fixed it: I needed to add a cluster resource for the source IP address.
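For anyone hitting the same problem: the clients were seeing connections coming from the active node's own address rather than the cluster VIP. In Pacemaker this is typically handled by the ocf:heartbeat:IPsrcaddr agent, colocated with and ordered after the IPaddr2 resource. A sketch; resource names, the VIP, and the netmask are placeholders:

```shell
# Force outbound packets (e.g. server-to-client connections) to use the
# floating VIP as their source address
pcs resource create urbackup_srcaddr ocf:heartbeat:IPsrcaddr \
    ipaddress=192.168.1.50 cidr_netmask=24 op monitor interval=30s

# The source-address rule only makes sense on the node holding the VIP,
# and only after the VIP is up
pcs constraint colocation add urbackup_srcaddr with urbackup_vip INFINITY
pcs constraint order urbackup_vip then urbackup_srcaddr
```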