Webinterface difficulties

Hi there,

I’m looking into UrBackup for a small (ca. 40 clients) MS Windows-only network I’ve been drafted to tend to. To test it, I set up a small subnet with a server and a client. On the server I created a fresh OpenVZ CT and pulled a basic Debian Squeeze template. I figured I need layer 2 for the auto-detection to work, so I set up a virtual ethernet device (veth) with direct routing; thus I have full networking over all layers. I installed UrBackup from the provided Debian packages and it seems to be up and running. I can see the processes, and netstat tells me that urbackup_srv is listening on port 55414. However, I can’t access the webserver from the client; the connection just times out.

To check my setup I installed nginx (my HTTP server of choice) inside the CT and, sure enough, there it is. To be extra careful I configured nginx to listen on ports 55414 and 55413, and of course nginx gladly serves my test page over these ports. So, networking IS fine. Next I tried to set up an SSL-enabled connection as explained in section 3.2 of the documentation. As far as I understand lighttpd’s configuration, you are saying “pass /urbackup/* on to the CGI”. I assume UrBackup should provide a handler here, but it doesn’t on my box. So I poked into your code and couldn’t find any hint of what kind of handler I would need for CGI, so I assumed it’s Perl or PHP. After setting up CGI handlers for PHP5 and Perl, both do nothing and I just get more timeouts.

For now I will be wasting the rest of my day looking into other solutions, but I will be back, so any help would really be appreciated.

Best regards!

Okay. I don’t know what could be wrong with the webserver either. You can change the loglevel in /etc/default/urbackup_srv to debug. Then the webserver will log every access into /var/log/urbackup.log.
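If you want to script that change, here is a minimal sketch. The `LOGLEVEL` variable name is an assumption — check your copy of the defaults file for what it actually uses:

```shell
# Sketch only: the LOGLEVEL variable name is an assumption --
# look inside /etc/default/urbackup_srv for the real setting.
set_debug_loglevel() {
  defaults_file="$1"
  sed -i 's/^LOGLEVEL=.*/LOGLEVEL="debug"/' "$defaults_file"
}

# Usage (as root):
#   set_debug_loglevel /etc/default/urbackup_srv
#   /etc/init.d/urbackup_srv restart
#   tail -f /var/log/urbackup.log   # watch for accesses while hitting the web interface
```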

UrBackup uses FastCGI. This is an extension of CGI for long-running processes. This literally means that you just have to point /urbackup/x to backupserver:55413 and everything should be fine… The examples for webservers with PHP etc. are much too complicated, because there the webserver has to start multiple instances of PHP (every instance can only handle one request at a time) and you have to associate the specific file extension with these instances (for UrBackup it’s only one file).

location /urbackup/x {
  include /etc/nginx/fastcgi_params;
  fastcgi_pass backupserver:55413;  # point the location at the FastCGI port
}

worked for me. (You have to symlink/copy the urbackup files from …/urbackup/www to /urbackup as well).
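The symlink step can be sketched like this. The source path /usr/share/urbackup/www is an assumption about where the Debian package puts the files — locate the www directory of your installation first:

```shell
# Sketch: make the UrBackup web files reachable as /urbackup under the docroot.
# The source path below is an assumption; check where your package installed them.
link_www_files() {
  src="$1"   # e.g. the package's urbackup/www directory
  dest="$2"  # e.g. /var/www/urbackup
  ln -sfn "$src" "$dest"
}

# Usage:
#   link_www_files /usr/share/urbackup/www /var/www/urbackup
```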

Hope you get it to work!

Thanks for the pointer!

You need to do this:

location /urbackup/x {
   root /var/www/urbackup;
   index index.htm;
   include fastcgi_params;
   fastcgi_pass backupserver:55413;
}

I’m totally stumped by the “x”. It’s certainly not adhering to any nginx convention. The rewrite to / is not working yet either, but maybe I’ll figure it out; I’ll post then. For now I will test how the system handles load.
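For the rewrite, something along these lines might work — an untested sketch, with backupserver:55413 taken from the earlier FastCGI example:

```nginx
# Sketch: redirect requests for / to the UrBackup entry point.
location = / {
    rewrite ^ /urbackup/x permanent;
}

location /urbackup/x {
    include fastcgi_params;
    fastcgi_pass backupserver:55413;
}
```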

Thank you again!