UrBackup Server/Client/Restore 2.0.0 beta

Unfortunately I can’t… I’m not a programmer…

I just googled for the btrfs syscall for hole punching (FALLOC_FL_PUNCH_HOLE) and then googled whether ZFS supports the same syscall…
But if I have read the blog entry correctly, cp --reflink does not use that, but more likely the BTRFS_IOC_CLONE ioctl.
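
If I understand it right, the two operations look roughly like this from the command line (untested, the paths are just examples):

# punch a hole into an existing file: the fallocate syscall with FALLOC_FL_PUNCH_HOLE deallocates the range, the file size stays the same
fallocate --punch-hole --offset 1M --length 4M /mnt/btrfs/backups/image.vhd
# reflink copy: cp asks the filesystem to clone the extents (the BTRFS_IOC_CLONE ioctl), so no data blocks are duplicated
cp --reflink=always /mnt/btrfs/backups/image.vhd /mnt/btrfs/backups/image_copy.vhd

Both only work if the filesystem implements the corresponding operation, which seems to be the open question for ZFS.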

I think https://github.com/zfsonlinux/zfs/issues/405 would be the right one, but it seems it is not implemented yet :frowning:

Actually I changed it so that images are put into subvolumes, so FALLOC_FL_PUNCH_HOLE support suffices. That is this issue https://github.com/zfsonlinux/zfs/issues/326 and, though it is still open, it seems to be supported in 0.6.4 (see the last comment).

I think the biggest problem would be that zfs does not have any implementation that works (as easy) like btrfs subvolumes…
zfs snapshots are always read only and you cant reference between different datasets. but…

# zfs snapshot tank/ws/gate@yesterday
# zfs clone tank/ws/gate@yesterday tank/home/ahrens/bug123

That could be the same as copying a btrfs subvolume.
So you would have to create a new dataset, write the full backup to it, snapshot the dataset and then clone that snapshot into a new one…
As clones rely on snapshots, you can’t delete any snapshot that has a clone. If you do that multiple times for incremental backups, you create chains of clones and snapshots that rely on each other, and you can’t free up that space.
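
With made-up dataset names, one full backup plus two incrementals would look roughly like this:

# dataset that receives the first full backup
zfs create -p tank/urbackup/client1_full
# (the full backup gets written into the mounted dataset here)
zfs snapshot tank/urbackup/client1_full@full1
# the clone becomes the writable starting point for the next, incremental backup
zfs clone tank/urbackup/client1_full@full1 tank/urbackup/client1_incr1
# and the same again for the backup after that
zfs snapshot tank/urbackup/client1_incr1@incr1
zfs clone tank/urbackup/client1_incr1@incr1 tank/urbackup/client1_incr2

Every clone keeps its origin snapshot (and with it the whole chain before it) from being destroyed, so deleting an old backup in the middle does not free any space. As far as I know, zfs promote can swap a single clone/origin dependency, but it does not make such chains go away.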

Hi,

I’m getting endless "Waiting for metadata download stream to finish" messages in the log as clients reach 100% of the backup.

16/01/23 09:33 DEBUG Waiting for metadata download stream to finish
16/01/23 09:34 DEBUG Waiting for metadata download stream to finish
16/01/23 09:34 DEBUG Waiting for metadata download stream to finish
16/01/23 09:34 DEBUG Loading “SCRIPT|urbackup/FILE_METADATA|bDP9Sijri7SLoddXhJUm|2235”. 59% finished 9 MB/15.1818 MB at 440 Bit/s
16/01/23 09:34 DEBUG Waiting for metadata download stream to finish
16/01/23 09:35 DEBUG Waiting for metadata download stream to finish
16/01/23 09:35 DEBUG Waiting for metadata download stream to finish
16/01/23 09:35 DEBUG Loading “SCRIPT|urbackup/FILE_METADATA|bDP9Sijri7SLoddXhJUm|2235”. 59% finished 9 MB/15.1818 MB at 336 Bit/s
16/01/23 09:35 DEBUG Waiting for metadata download stream to finish
16/01/23 09:36 DEBUG Waiting for metadata download stream to finish
16/01/23 09:36 DEBUG Waiting for metadata download stream to finish
16/01/23 09:36 DEBUG Loading “SCRIPT|urbackup/FILE_METADATA|bDP9Sijri7SLoddXhJUm|2235”. 59% finished 9 MB/15.1818 MB at 336 Bit/s
16/01/23 09:36 DEBUG Waiting for metadata download stream to finish
16/01/23 09:37 DEBUG Waiting for metadata download stream to finish
16/01/23 09:37 DEBUG Waiting for metadata download stream to finish
16/01/23 09:37 DEBUG Waiting for metadata download stream to finish
16/01/23 09:37 DEBUG Loading “SCRIPT|urbackup/FILE_METADATA|bDP9Sijri7SLoddXhJUm|2235”. 59% finished 9 MB/15.1818 MB at 376 Bit/s
16/01/23 09:38 DEBUG Waiting for metadata download stream to finish
16/01/23 09:38 DEBUG Waiting for metadata download stream to finish
16/01/23 09:38 DEBUG Waiting for metadata download stream to finish
16/01/23 09:38 DEBUG Waiting for metadata download stream to finish
16/01/23 09:38 DEBUG Loading “SCRIPT|urbackup/FILE_METADATA|bDP9Sijri7SLoddXhJUm|2235”. 59% finished 9 MB/15.1818 MB at 376 Bit/s

I played around a little bit with ZFS snapshots and clones…
This isn’t as easy as I thought.
For a snapshot/clone you have to specify the pool/dataset, and that has nothing to do with the actual filesystem hierarchy as it does with btrfs…
There would have to be a new field in the web interface and config in which you can specify the ZFS pool in which the datasets are stored, or you have to find out the pool from the mtab… I don’t know which one is easier…
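
Finding the dataset from the backup storage path might work like this (untested, the path is just an example):

# ask ZFS which dataset backs the backup storage path
zfs list -H -o name /media/backups
# or read it out of the mount table (mtab / /proc/mounts)
awk '$3 == "zfs" && $2 == "/media/backups" { print $1 }' /proc/mounts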

Hi Uroni

I tried to install the beta in a FreeNAS jail.
I managed to get around the
cc1plus: error: unrecognized command line option “-fstack-protector-strong”
error initially encountered by installing gcc49:
pkg install gcc49
and then modifying the Makefile (commented lines are the originals):
#CC = gcc
CC = gcc49
#CXX = g++
CXX = g++49
#CXXCPP = g++ -E
CXXCPP = g++49 -E
#ac_ct_CC = gcc
#ac_ct_CXX = g++
ac_ct_CC = gcc49
ac_ct_CXX = g++49

This allowed the make to get much further, until it hit the BLKGETSIZE64 ioctl in file_linux.cpp.
Apparently this ioctl does not exist in the FreeNAS (FreeBSD) kernel. Is there any way around this?

Thanks for your help

Hi,

I keep getting these messages while doing backups over the internet using the v2 beta server and client. The backup just hangs and never completes, sometimes at 100% or lower.

16/01/27 20:56 DEBUG Loading “SCRIPT|urbackup/FILE_METADATA|bDP9Sijri7SLoddXhJUm|2276”. Loaded 14 MB at 56 Bit/s
16/01/27 20:58 DEBUG Loading “SCRIPT|urbackup/FILE_METADATA|bDP9Sijri7SLoddXhJUm|2276”. Loaded 14 MB at 56 Bit/s
16/01/27 21:00 DEBUG Loading “SCRIPT|urbackup/FILE_METADATA|bDP9Sijri7SLoddXhJUm|2276”. Loaded 14 MB at 56 Bit/s
16/01/27 21:02 DEBUG Loading “SCRIPT|urbackup/FILE_METADATA|bDP9Sijri7SLoddXhJUm|2276”. Loaded 14 MB at 56 Bit/s
16/01/27 21:04 DEBUG Loading “SCRIPT|urbackup/FILE_METADATA|bDP9Sijri7SLoddXhJUm|2276”. Loaded 14 MB at 56 Bit/s
16/01/27 21:06 DEBUG Loading “SCRIPT|urbackup/FILE_METADATA|bDP9Sijri7SLoddXhJUm|2276”. Loaded 14 MB at 56 Bit/s
16/01/27 21:08 DEBUG Loading “SCRIPT|urbackup/FILE_METADATA|bDP9Sijri7SLoddXhJUm|2276”. Loaded 14 MB at 56 Bit/s
16/01/27 21:10 DEBUG Loading “SCRIPT|urbackup/FILE_METADATA|bDP9Sijri7SLoddXhJUm|2276”. Loaded 14 MB at 56 Bit/s

If I check the client-side debug log, I’m getting these messages:

2016-01-27 20:57:56: ERROR: FileSrv: Error: Seeking in file failed (5044) to 14680064 file size is -1
2016-01-27 20:59:56: ERROR: FileSrv: Error: Seeking in file failed (5044) to 14680064 file size is -1
2016-01-27 21:01:57: ERROR: FileSrv: Error: Seeking in file failed (5044) to 14680064 file size is -1
2016-01-27 21:03:57: ERROR: FileSrv: Error: Seeking in file failed (5044) to 14680064 file size is -1
2016-01-27 21:05:57: ERROR: FileSrv: Error: Seeking in file failed (5044) to 14680064 file size is -1
2016-01-27 21:07:57: ERROR: FileSrv: Error: Seeking in file failed (5044) to 14680064 file size is -1

It does this for both full and incremental backups.

Thanks for the client log. It seems the client removed the download stream before the server acknowledged that it had received it.

I’ve already improved this area and will be releasing a new beta version soon.

OK, thank you for the feedback. I look forward to the next beta.

I upgraded the server to the 2.0.0 beta and am getting a never-ending progress bar in the web interface. I tried uninstalling and installing fresh, but the problem still occurs. Here is the log:

2016-01-28 09:26:27: ERROR: Loading urlplugin.dll failed
2016-01-28 09:26:29: ERROR: Loading urbackupserver_prevista.dll failed
2016-01-28 09:26:50: WARNING: Error: Unknown action [login]

The system is 32-bit Windows Server 2003 Standard, if that helps.

Note: Server 2003 has been EOL since July 2015… Please consider upgrading to at least Server 2008 R2…

It’s a private lab network. The server is just used for backups. The machines don’t even access the internet. 3 of the 4 clients run XP. Is there any helpful information someone can offer?

Has anyone noticed a speed difference between a client with 2.0.1 installed and 1.4.10? In a virtualized environment, with all machines assigned the same amount of hardware resources, I get 45 Kbps for clients using 2.0.1 and 560 Mbps for machines running the 1.4.10 client. Or was there a setting I missed?

Yeah, EOL does not simply mean that it is no longer receiving updates and that your only worries come from the internet. It also means that there will not be any bug fixes and that you are several years behind on features.

I know precisely what End Of Life means; I work in IT. I was running 1.4.12 on the server quite well. I decided to test the beta as the new features are quite enticing. I found it is not working and posted to this forum, as that is what this thread is for. If what you are saying is that the new version is specifically designed not to run on Server 2003, then I will either reinstall 1.4.12 or look for different software for my purposes.

It means that few people will be interested in running it on Windows Server 2003. Since this is an open-source project, somebody could of course put in the work to support Windows Server 2003.

There is nothing wrong with staying on 1.4.12 either, if it works for you. After all you are staying with Windows Server 2003 because that still works.

Hi Uroni,

Just double-checking: I’m guessing this means the new client will also not support Windows XP?

To be quite frank, we are approaching a year that XP has been EOL; why would anyone target it? At this point, any project will continue building for it only as long as it does not block anything else. As soon as it causes any difficulty in development, you drop all support. There is no sense in supporting a market that has completely faded.

Yeah, I haven’t done anything on purpose to remove the Windows XP support, but I haven’t tested it either (so it probably won’t work). This area is entirely complaint-driven.
If it does not load, please use Dependency Walker to tell me which function it cannot load.

Hi Uroni,

I haven’t done any major testing. I just noticed that the lone XP client I was running the test on (the rest are Windows Vista/7/8/10 and Server 2008/2012) automatically updated to the 2.0.x beta from the server and then stopped connecting to the server (internet-based), so it is no longer backing up.