So I had a client which was badly configured at the beginning and was backing up too much.
A few months ago I added an exclusion for about 1.4TB of data, and was expecting to regain that room over time as backups expired.
That wasn't going fast enough, so I removed the oldest backup, and today all the backups for that client.
So now the client has 0 backups, but the directory pool still uses 1.4TB.
Can I just delete all the files in that client's folder, or would that mess up inter-client deduplication?
Do you need some data to help fix this issue?
I ran remove-unknown and cleanup -a 1G a few times; it didn't help.
I manually deleted a few backups from another virtual client, then ran remove-unknown again.
Since I always get this message when I run remove-unknown, I wasn't paying too much attention to it. But maybe the link count stays unchanged, so the real data folder is never actually deleted.
It's actually about 1500 lines, not just the 4 shown here as an example:
2018-01-15 15:26:43: WARNING: Directory link “/var/docker/data/urbackup-server/datas//ct01.dev.sss.com[jenkins]/171030-1710/.hashes/jenkins.dev.sss.com/data/jenkins/workspace/build/resalys/01_create_version/7.8/.svn/pristine/80” with pool path “/var/docker/data/urbackup-server/datas//ct01.dev.sss.com[jenkins]/.directory_pool/Bn/BnYQYGu1w51509448359439618489” not found in database. Deleting symlink only.
2018-01-15 15:26:45: WARNING: Directory link “/var/docker/data/urbackup-server/datas//ct01.dev.sss.com[jenkins]/171030-1710/.hashes/jenkins.dev.sss.com/data/jenkins/workspace/build/resalys/01_create_version/7.8/.svn/pristine/4c” with pool path “/var/docker/data/urbackup-server/datas//ct01.dev.sss.com[jenkins]/.directory_pool/Si/SizJxIPd8x1509448312439570756” not found in database. Deleting symlink only.
2018-01-15 15:26:45: WARNING: Directory link “/var/docker/data/urbackup-server/datas//ct01.dev.sss.com[jenkins]/171030-1710/.hashes/jenkins.dev.sss.com/data/jenkins/workspace/build/resalys/01_create_version/7.8/.svn/pristine/ea” with pool path “/var/docker/data/urbackup-server/datas//ct01.dev.sss.com[jenkins]/.directory_pool/E8/E8HlDulQuD1509448454439713035” not found in database. Deleting symlink only.
2018-01-15 15:26:45: WARNING: Directory link “/var/docker/data/urbackup-server/datas//ct01.dev.sss.com[jenkins]/171030-1710/.hashes/jenkins.dev.sss.com/data/jenkins/workspace/build/resalys/01_create_version/7.8/.svn/pristine/43” with pool path “/var/docker/data/urbackup-server/datas//ct01.dev.sss.com[jenkins]/.directory_pool/P3/P3qblaZjA51509448304439563201” not found in database. Deleting symlink only.
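A minimal sketch of what "Deleting symlink only" implies, on a scratch layout (the paths and pool entry name are made up, purely illustrative): removing the directory link frees nothing, because the data lives in the pool entry the link pointed at.

```shell
# Illustrative scratch layout mimicking the pool structure (names are made up)
tmp=$(mktemp -d)
mkdir -p "$tmp/.directory_pool/Bn/BnExample"
echo "payload" > "$tmp/.directory_pool/Bn/BnExample/file"
mkdir -p "$tmp/171030-1710/.hashes"
ln -s "$tmp/.directory_pool/Bn/BnExample" "$tmp/171030-1710/.hashes/80"

# "Deleting symlink only." removes the link...
rm "$tmp/171030-1710/.hashes/80"

# ...but the pool entry and its payload remain, so no space is reclaimed
ls "$tmp/.directory_pool/Bn/BnExample"   # -> file
```

This is consistent with the pool never shrinking: as long as the pool entries themselves are not deleted, the 1.4TB stays used no matter how many symlinks go away.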
Did you restore a backup of the UrBackup database at some point btw?
By manual deletion you mean deletion from the web interface? (not simply deleting the folder)
I had to delete the remaining .directory_pool folder today; all the backups were already deleted, only the client configuration was left. The backups were deleted via the UI.
The warning can also be caused by an interrupted delete (e.g. a power failure) and then be expected behaviour.
It wouldn't surprise me if a cleanup operation was interrupted or the server was shut down too fast at some point in the past.
But then I am >90% sure that the last sequence (stop, cleanup, start, delete backup, stop, cleanup, collect the logs) was done without interruption. Could that be the source of the orphaned .directory_pool entries?
I'm having the same kind of issue. I recently changed server machines and transferred everything to the new server.
All clients can see the new server and back up properly, but after the change the clients suddenly started taking about 25% more space than before (I think), as the drive is filling up.
Some clients show only one last backup of, say, 10GB, but when I go to the .directory_pool for that client it shows 50GB (for example). The web control panel for that client shows all the history and only that one last backup (which I can delete from the web control panel).
I tried running remove_unknown a couple of times, but the data size doesn't seem to shrink.
My question is: how can I reduce the backup size?
Is it okay if I remove the .directory_pool folder manually?
It's a Windows environment.
If you have a single backup, do not delete it… yet.
How large do you estimate a single backup should be? (On Linux ncdu will help, on Windows WinDirStat.)
How large is the full client folder? (The folder one level above .directory_pool, named after the client.)
As you have a single backup, both should be a similar size; the UrBackup size may be smaller if there are duplicate files, but not bigger. (Well, actually, because of the .hashes folder it may be bigger if you have a lot of small files, but you'd need to really have a lot of files.)
If the UrBackup folder is bigger, then I guess you also have some kind of cleanup issue.
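On Linux, a rough way to compare the two sizes is `du`; the sketch below runs on a scratch layout, and the storage root and client name are placeholders you would substitute with your real ones.

```shell
# Scratch layout standing in for the real storage; replace $ROOT/myclient with
# your actual backup folder and client name
ROOT=$(mktemp -d)
mkdir -p "$ROOT/myclient/.directory_pool"
dd if=/dev/zero of="$ROOT/myclient/.directory_pool/blob" bs=1024 count=64 2>/dev/null

# Whole client folder vs. the pool alone; with a single backup the two numbers
# should be of the same order -- a much larger pool suggests leftover data
du -sk "$ROOT/myclient"
du -sk "$ROOT/myclient/.directory_pool"
```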
From inside the docker
No, the paths from inside and outside are different.
These are the paths from inside the UrBackup server docker container.
We had some issues with btrfs, hence why we aren't running it. But I guess we will try again soon.
We try to use btrfs about once a year, but we always encounter some issue; sometimes it's not the btrfs team's fault, but it's mostly btrfs that is affected.
Hence why we use a lot of ZFS/ext4, with which we never had problems.
Yes, this is a “new” issue, in the sense that it has been spotted in this forum post, so investigation is needed.
One of the things that is needed is the smallest test case that can reproduce this issue; then I guess the fix can be created.
Maybe it happens when backups are deleted manually, or when they expire, or only with virtual clients.
Or maybe under more special conditions, like if the server crashes at some point in the processing, or if the server runs out of space and didn't finish a previous backup.
Or maybe it comes from the folders having specific properties, like being hard- or soft-linked.
Or a combination.
If I have some motivation after the office day, I'll try to write a script to reproduce all these cases in a VM.
If you have some scripting skills you can also try to do so.
Testing shows that remove_unknown does not remove directory links in .directory_pool with no references in the database. This can happen if you e.g. use a database backup while having new backups in backup storage. This will be fixed.
Unresolved is why those unreferenced directories exist after normal operation (if they do; obviously they should not). They should be removed during normal file backup removal.
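A hedged sketch (not the announced fix, and without any database check) of how such unreferenced pool entries could be listed from the filesystem alone: collect all pool entries, collect all resolved symlink targets, and diff the two lists. It runs here on a scratch layout; for a real check you would point `$CLIENTDIR` at the client's storage folder.

```shell
# Scratch client folder; substitute your real one for $CLIENTDIR
CLIENTDIR=$(readlink -f "$(mktemp -d)")   # resolve so paths compare cleanly
mkdir -p "$CLIENTDIR/.directory_pool/Aa/AaLinked" \
         "$CLIENTDIR/.directory_pool/Bb/BbOrphan" \
         "$CLIENTDIR/180101-0000/.hashes"
ln -s "$CLIENTDIR/.directory_pool/Aa/AaLinked" "$CLIENTDIR/180101-0000/.hashes/dir"

# All pool entries (two-level Xx/XxYYYY layout)
find "$CLIENTDIR/.directory_pool" -mindepth 2 -maxdepth 2 -type d \
  | sort > "$CLIENTDIR/pool.txt"

# All resolved targets of symlinks in the backups (skipping the pool itself)
find "$CLIENTDIR" -path "$CLIENTDIR/.directory_pool" -prune -o -type l -print \
  | xargs -r -n1 readlink -f | sort -u > "$CLIENTDIR/targets.txt"

# Pool entries that no symlink references -> orphan candidates
comm -23 "$CLIENTDIR/pool.txt" "$CLIENTDIR/targets.txt"
```

On the scratch layout this prints only the `Bb/BbOrphan` entry; on a real client folder the output would be the candidates that remove_unknown currently leaves behind.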
remove_unknown ran for about 10 hours.
It removed a lot of directory pool entries, maybe 1000-10000.
Running urbackupsrv verify-hashes.
It has been running for about 10 hours and is at 15%; it's I/O-bound at about 20% iowait, at maybe 50MB/s.
The saved backups should be around 150-200GB.
That's maybe not significant as this server is very slow in general.
It's not there:
sudo ls -l “/data/urbackup2/pascalou[home]/170911-0850/home/orogor/.config/google-chrome/Default/Pepper Data/Shockwave Flash/WritableRoot/#SharedObjects/L98PDDXW/macromedia.com”
ls: cannot access ‘/data/urbackup2/pascalou[home]/170911-0850/home/orogor/.config/google-chrome/Default/Pepper Data/Shockwave Flash/WritableRoot/#SharedObjects/L98PDDXW/macromedia.com’: No such file or directory
But the whole backup isn’t missing.
sudo ls -ld “/data/urbackup2/pascalou[home]/170911-0850/home/orogor/.config/google-chrome/Default”
drwxr-x— 23 urbackup urbackup 79 sept. 15 20:20 /data/urbackup2/pascalou[home]/170911-0850/home/orogor/.config/google-chrome/Default
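To see where exactly such a path breaks off, a small hypothetical helper (the function name is made up, and it assumes absolute paths; demonstrated here on a scratch path) can trim the missing path upward to the deepest component that still exists:

```shell
# Hypothetical helper: trim an absolute path until an existing component is found
deepest_existing() {
  p=$1
  while [ -n "$p" ] && [ ! -e "$p" ]; do
    p=${p%/*}   # drop the last path component
  done
  printf '%s\n' "$p"
}

d=$(mktemp -d)
mkdir -p "$d/backup/home/orogor"
deepest_existing "$d/backup/home/orogor/.config/missing/dir"   # -> $d/backup/home/orogor
```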
I forgot to get more serious about this and run it in a screen with redirected output. Could these be run from the GUI, from the advanced tab or so, with a progress bar? Or just add a progress bar for remove-unknown?
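A sketch of the detached-run idea (log path and session name are arbitrary; a placeholder `echo` stands in for the real urbackupsrv command so the sketch runs anywhere):

```shell
# Capture the long-running cleanup's output to a log file; placeholder command
LOG=$(mktemp)
sh -c 'echo "remove-unknown finished"' > "$LOG" 2>&1 &
wait
cat "$LOG"   # -> remove-unknown finished

# The real thing, detached in screen so it survives the SSH session:
#   screen -dmS cleanup sh -c 'urbackupsrv remove-unknown > /var/log/ru.log 2>&1'
#   screen -r cleanup    # reattach later to check on it
```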