Leftover UrBackup LVM snapshots

When I run lsblk, I notice there is a heap of snapshots. Are they supposed to be removed when a backup completes? How do I get rid of them, and can I safely delete them without losing any data?

lsblk

NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sr0 11:0 1 1024M 0 rom
xvda 202:0 0 10G 0 disk
├─xvda1 202:1 0 487M 0 part /boot
├─xvda2 202:2 0 1K 0 part
└─xvda5 202:5 0 9.5G 0 part
├─cloud--vg-root-real 252:0 0 5.2G 0 lvm
│ ├─cloud--vg-root 252:1 0 5.2G 0 lvm /
│ ├─cloud--vg-urbackup_snap_1d70425ab7eb949ca70d35588354296ef4013c41117306c3 252:3 0 5.2G 0 lvm
│ ├─cloud--vg-urbackup_snap_cfcaeeaeccb7c26b5b21d8f9981f8ebd9737dfc31218b937 252:5 0 5.2G 0 lvm
│ ├─cloud--vg-urbackup_snap_4a7c9bdc3f624354a9330dc1571891edc8f33f8d4df2cf0c 252:7 0 5.2G 0 lvm
│ ├─cloud--vg-urbackup_snap_d933dacd5a6687f5b26f0e22c9d695f46cfe253770358360 252:9 0 5.2G 0 lvm
│ ├─cloud--vg-urbackup_snap_74baee1c1d63c34bb356ac86b9cbc6f0a93a4ea77eae5468 252:11 0 5.2G 0 lvm
│ └─cloud--vg-urbackup_snap_8c70f3cf19515c29073ee6133597b965580b6d81c469752b 252:13 0 5.2G 0 lvm
├─cloud--vg-urbackup_snap_1d70425ab7eb949ca70d35588354296ef4013c41117306c3-cow 252:2 0 108M 0 lvm
│ └─cloud--vg-urbackup_snap_1d70425ab7eb949ca70d35588354296ef4013c41117306c3 252:3 0 5.2G 0 lvm
├─cloud--vg-urbackup_snap_cfcaeeaeccb7c26b5b21d8f9981f8ebd9737dfc31218b937-cow 252:4 0 52M 0 lvm
│ └─cloud--vg-urbackup_snap_cfcaeeaeccb7c26b5b21d8f9981f8ebd9737dfc31218b937 252:5 0 5.2G 0 lvm
├─cloud--vg-urbackup_snap_4a7c9bdc3f624354a9330dc1571891edc8f33f8d4df2cf0c-cow 252:6 0 28M 0 lvm
│ └─cloud--vg-urbackup_snap_4a7c9bdc3f624354a9330dc1571891edc8f33f8d4df2cf0c 252:7 0 5.2G 0 lvm
├─cloud--vg-urbackup_snap_d933dacd5a6687f5b26f0e22c9d695f46cfe253770358360-cow 252:8 0 12M 0 lvm
│ └─cloud--vg-urbackup_snap_d933dacd5a6687f5b26f0e22c9d695f46cfe253770358360 252:9 0 5.2G 0 lvm
├─cloud--vg-urbackup_snap_74baee1c1d63c34bb356ac86b9cbc6f0a93a4ea77eae5468-cow 252:10 0 8M 0 lvm
│ └─cloud--vg-urbackup_snap_74baee1c1d63c34bb356ac86b9cbc6f0a93a4ea77eae5468 252:11 0 5.2G 0 lvm
├─cloud--vg-urbackup_snap_8c70f3cf19515c29073ee6133597b965580b6d81c469752b-cow 252:12 0 4M 0 lvm
│ └─cloud--vg-urbackup_snap_8c70f3cf19515c29073ee6133597b965580b6d81c469752b 252:13 0 5.2G 0 lvm
└─cloud--vg-swap_1 252:14 0 4.1G 0 lvm [SWAP]
xvdc 202:32 0 80G 0 disk
├─data--vg-data--lv-real 252:15 0 57.2G 0 lvm
│ ├─data--vg-data--lv 252:16 0 57.2G 0 lvm /var/www/owncloud/data
│ └─data--vg-urbackup_snap_db1616f24fd857e01d453687b32dd19c27f69660206e1c7c 252:18 0 39.2G 0 lvm
└─data--vg-urbackup_snap_db1616f24fd857e01d453687b32dd19c27f69660206e1c7c-cow 252:17 0 408M 0 lvm
  └─data--vg-urbackup_snap_db1616f24fd857e01d453687b32dd19c27f69660206e1c7c 252:18 0 39.2G 0 lvm

You can safely remove them. They are annoying indeed, because they take up space in the LVM volume group, and because of that new backups fail…

Help tracking down the cause would be appreciated (e.g. with log files). This should not occur.

How would I go about deleting these snapshots?

Use at your own risk!
lvremove -f /dev/cloud-vg/urbackup_snap_1d70425ab7eb949ca70d35588354296ef4013c41117306c3

That should do the trick.
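
If there are many leftovers, a small loop saves typing. A rough sketch, with the same warning attached: make sure no backup is currently running and nothing is mounted from the snapshots.

# List the leftover UrBackup snapshot LVs first (read-only check)
lvs --noheadings -o vg_name,lv_name | grep urbackup_snap_

# Unmount (if mounted) and remove each leftover snapshot
lvs --noheadings -o vg_name,lv_name | grep urbackup_snap_ | \
while read -r vg lv; do
    umount "/dev/$vg/$lv" 2>/dev/null  # ignore "not mounted" errors
    lvremove -f "$vg/$lv"
done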

Hello!

I have the same problem on many Linux clients. Here is one of them.
The client's logs are attached, but I didn't find anything relevant in them.

urbackup_client_logs.zip (525.5 KB)

I'm ready to help track down this issue. Please tell me what I should do (which logs or tests you need).

Client: 2.4.10, Debian 10;
Server: 2.4.13, Debian 10;

root@nas:~#
root@nas:~# lsblk
NAME                                                                         MAJ:MIN RM  SIZE RO TYPE MOUNTPOINT
sda                                                                            8:0    0   20G  0 disk
├─sda1                                                                         8:1    0 19.1G  0 part /
├─sda2                                                                         8:2    0    1K  0 part
└─sda5                                                                         8:5    0  880M  0 part [SWAP]
sdb                                                                            8:16   0  300G  0 disk
├─vgdata2-lvoutlook-real                                                     253:16   0  290G  0 lvm
│ ├─vgdata2-lvoutlook                                                        253:17   0  290G  0 lvm  /sharedfolders/OutlookPST/NewHdd
│ ├─vgdata2-urbackup_snap_a9bbfdc7d2b0c35b225edbeb4e0078c50556f1fc48e0bba7   253:19   0  290G  0 lvm
│ └─vgdata2-urbackup_snap_b28b09e8ee6466d2507b81bf07ceb443be59cfc49d3649ba   253:21   0  290G  0 lvm
├─vgdata2-urbackup_snap_a9bbfdc7d2b0c35b225edbeb4e0078c50556f1fc48e0bba7-cow 253:18   0    5G  0 lvm
│ └─vgdata2-urbackup_snap_a9bbfdc7d2b0c35b225edbeb4e0078c50556f1fc48e0bba7   253:19   0  290G  0 lvm
└─vgdata2-urbackup_snap_b28b09e8ee6466d2507b81bf07ceb443be59cfc49d3649ba-cow 253:20   0  2.5G  0 lvm
  └─vgdata2-urbackup_snap_b28b09e8ee6466d2507b81bf07ceb443be59cfc49d3649ba   253:21   0  290G  0 lvm
sdc                                                                            8:32   0  120G  0 disk
├─vgssd-1cbases-real                                                         253:10   0  105G  0 lvm
│ ├─vgssd-1cbases                                                            253:11   0  105G  0 lvm  /srv/dev-disk-by-id-dm-name-vgssd-1cbases
│ ├─vgssd-urbackup_snap_4f55ae8d10ce93b7afca6966a0060a6ecdc9c083f7cf0c3c     253:13   0  105G  0 lvm
│ └─vgssd-urbackup_snap_01ef954641313f16cfcfe5f760798dc0f2a6f95a41da3520     253:15   0  105G  0 lvm
├─vgssd-urbackup_snap_4f55ae8d10ce93b7afca6966a0060a6ecdc9c083f7cf0c3c-cow   253:12   0  7.5G  0 lvm
│ └─vgssd-urbackup_snap_4f55ae8d10ce93b7afca6966a0060a6ecdc9c083f7cf0c3c     253:13   0  105G  0 lvm
└─vgssd-urbackup_snap_01ef954641313f16cfcfe5f760798dc0f2a6f95a41da3520-cow   253:14   0  3.8G  0 lvm
  └─vgssd-urbackup_snap_01ef954641313f16cfcfe5f760798dc0f2a6f95a41da3520     253:15   0  105G  0 lvm
sdd                                                                            8:48   0  901G  0 disk
├─vgdata-data1-real                                                          253:0    0  795G  0 lvm
│ ├─vgdata-data1                                                             253:1    0  795G  0 lvm  /srv/dev-disk-by-id-dm-name-vgdata-data1
│ ├─vgdata-urbackup_snap_244dd0e95b2985628f5952014a9bc5999983a6bf5f572775    253:3    0  795G  0 lvm
│ ├─vgdata-urbackup_snap_90240a32e4453e2869d5acca415b3fc96c326293d23b309b    253:5    0  795G  0 lvm
│ ├─vgdata-urbackup_snap_934a9039809862425f3fe9195da89fe784562be8cf40fe8f    253:7    0  795G  0 lvm
│ └─vgdata-urbackup_snap_14ad373a7e0505e464e6685f3d6a9791e9fdd7b01096c550    253:9    0  795G  0 lvm
├─vgdata-urbackup_snap_244dd0e95b2985628f5952014a9bc5999983a6bf5f572775-cow  253:2    0   53G  0 lvm
│ └─vgdata-urbackup_snap_244dd0e95b2985628f5952014a9bc5999983a6bf5f572775    253:3    0  795G  0 lvm
├─vgdata-urbackup_snap_90240a32e4453e2869d5acca415b3fc96c326293d23b309b-cow  253:4    0 26.5G  0 lvm
│ └─vgdata-urbackup_snap_90240a32e4453e2869d5acca415b3fc96c326293d23b309b    253:5    0  795G  0 lvm
├─vgdata-urbackup_snap_934a9039809862425f3fe9195da89fe784562be8cf40fe8f-cow  253:6    0 13.3G  0 lvm
│ └─vgdata-urbackup_snap_934a9039809862425f3fe9195da89fe784562be8cf40fe8f    253:7    0  795G  0 lvm
└─vgdata-urbackup_snap_14ad373a7e0505e464e6685f3d6a9791e9fdd7b01096c550-cow  253:8    0  6.6G  0 lvm
  └─vgdata-urbackup_snap_14ad373a7e0505e464e6685f3d6a9791e9fdd7b01096c550    253:9    0  795G  0 lvm
sr0                                                                           11:0    1 1024M  0 rom

root@nas:/mnt/urbackup_snaps# ls -l /mnt/urbackup_snaps/
total 32
drwxr-xr-x 2 root root 4096 Mar  6 04:34 01ef954641313f16cfcfe5f760798dc0f2a6f95a41da3520
drwxr-xr-x 2 root root 4096 Mar 17 02:31 14ad373a7e0505e464e6685f3d6a9791e9fdd7b01096c550
drwxr-xr-x 2 root root 4096 Feb  2 00:51 244dd0e95b2985628f5952014a9bc5999983a6bf5f572775
drwxr-xr-x 2 root root 4096 Feb  2 00:53 4f55ae8d10ce93b7afca6966a0060a6ecdc9c083f7cf0c3c
drwxr-xr-x 2 root root 4096 Feb 13 06:23 90240a32e4453e2869d5acca415b3fc96c326293d23b309b
drwxr-xr-x 2 root root 4096 Mar  5 04:34 934a9039809862425f3fe9195da89fe784562be8cf40fe8f
drwxr-xr-x 2 root root 4096 Feb 18 04:36 a9bbfdc7d2b0c35b225edbeb4e0078c50556f1fc48e0bba7
drwxr-xr-x 2 root root 4096 Mar  3 04:33 b28b09e8ee6466d2507b81bf07ceb443be59cfc49d3649ba

Thanks for posting the log, it has some useful information such as:

2021-02-18 04:45:20: ERROR: Error removing snapshot to /mnt/urbackup_snaps/a9bbfdc7d2b0c35b225edbeb4e0078c50556f1fc48e0bba7/
2021-02-18 04:45:20: ERROR: Unmounting /dev/mapper/vgdata2-urbackup_snap_a9bbfdc7d2b0c35b225edbeb4e0078c50556f1fc48e0bba7 at /mnt/urbackup_snaps/a9bbfdc7d2b0c35b225edbeb4e0078c50556f1fc48e0bba7…
2021-02-18 04:45:20: ERROR: umount: /mnt/urbackup_snaps/a9bbfdc7d2b0c35b225edbeb4e0078c50556f1fc48e0bba7: target is busy
2021-02-18 04:45:20: ERROR: (In some cases useful info about processes that
2021-02-18 04:45:20: ERROR: use the device is found by lsof(8) or fuser(1).)

It also contains some file names, so you might want to send it by mail instead… So UrBackup has (in some cases) files in the snapshot open, and therefore the unmount fails. Unfortunately it is not a debug log, so I cannot see why. Perhaps if you look at the log of the backup prior to the 2021-02-18 04:45 backup you can see why it still has a file open (a timeout at the file it later complains about?). Could you maybe enable debug logging?
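
For anyone hitting the same "target is busy" error, the tools hinted at in that message show which process still holds the mount, e.g.:

# Show processes with open files on the snapshot mount
fuser -vm /mnt/urbackup_snaps/a9bbfdc7d2b0c35b225edbeb4e0078c50556f1fc48e0bba7

# Alternatively, list open files under the mountpoint
lsof +D /mnt/urbackup_snaps/a9bbfdc7d2b0c35b225edbeb4e0078c50556f1fc48e0bba7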

If possible, could you check whether this patch fixes it:

0001-Don-t-delete-old-snapshots-on-Linux-as-long-as-they-.patch (1.9 KB)

The scenario being:

  • File backup is running
  • Server is restarted
  • New file backup is started before the file download from the previous file backup times out

If this is the cause, the following work-arounds apply:

  • Don't restart the server (so quickly) :wink:
  • Configure the server so that it doesn’t start the backup immediately after the client comes online

Hello!
Of course I will try this patch. I’m happy to be able to make a small contribution to help develop UrBackup.

But I don't know how to use this .patch file. As I understand it, I need to compile the client with the patch applied.

I know the question is stupid, but could you please send short instructions on how to apply it? Thank you in advance!

Answers to your questions:

It also contains some file names, so you might want to send it by mail instead

I didn’t understand. What do you mean?

Could you maybe enable debug logging?

I have enabled debug logging. When the bug happens again I will send the logs.
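
(For reference, one way to enable debug logging on a Debian client; the defaults file, variable, and service name below are assumptions based on the packaged client, so adjust for a source build:)

# Raise the client log level to debug and restart the backend
sed -i 's/^LOGLEVEL=.*/LOGLEVEL="debug"/' /etc/default/urbackupclient
systemctl restart urbackupclientbackend

# Debug messages then appear in the client log
tail -f /var/log/urbackup.log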

Perhaps if you look at the log of the backup prior to the 2021-02-18 04:45 backup you can see why it still has a file open (a timeout at the file it later complains about?)

This client backs up to 3 UrBackup servers: 2 on the local LAN and 1 over the internet. The client has the same settings on all servers.
But this is not the cause of this issue, because the same issue occurs on clients that back up to only one server.

The logs prior to the 2021-02-18 04:45 backup are attached.
Logs at servers.zip (185.6 KB)

cd UNPACKED_CLIENT_SOURCE
wget "PATCH_URL"
patch -p1 < PATCH_FILE
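
After applying the patch, rebuild and reinstall the client. A typical sequence for the client source tarball (exact options may vary with your environment; the tarball's build instructions are authoritative):

./configure
make -j"$(nproc)"
sudo make install
sudo systemctl restart urbackupclientbackend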

I edited my previous post and answered your questions.

I guess the backup log would contain a line like Logical volume "urbackup_snap_a9bbfdc7d2b0c35b225edbeb4e0078c50556f1fc48e0bba7" created., and I guess that if the server was restarted you'll only find it in /var/log/urbackup.log.
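
A quick way to check for those lines, using the log location mentioned above:

# Find snapshot creation/removal events in the persistent client log
grep -n 'urbackup_snap_' /var/log/urbackup.log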

I deployed a test UrBackup server and test client (Debian 10) and reproduced the problem.
The client is version 2.4.11, patched with your fix, but it didn't help.

Full logs attached.
logs_debug_mode.zip (1.1 MB)

The report turned out to be very large; I spent a whole day on these tests. I hope this helps fix the problem.

Snapshots are not deleted if:

  • the client is reset during a file transfer, or
  • the server is reset during a file transfer.

It’s easy to reproduce.

2 reports:

  • Reproduction 1: reset the client. Step by step.
  • Reproduction 2: reset the server. Step by step.

Reproduction 1: reset the client. Step by step.

Before starting the backup:

root@snap-leftover-nas-original:~# lvs
  LV          VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv7zbackups vg7zbackups -wi-ao----   1.76g
  data1       vgdata      -wi-ao---- 500.02g
  
root@snap-leftover-nas-original:~# df -h
Filesystem                           Size  Used Avail Use% Mounted on
udev                                 977M     0  977M   0% /dev
tmpfs                                200M  4.5M  196M   3% /run
/dev/sda1                             19G  4.6G   14G  26% /
tmpfs                                998M     0  998M   0% /dev/shm
tmpfs                                5.0M     0  5.0M   0% /run/lock
tmpfs                                998M     0  998M   0% /sys/fs/cgroup
tmpfs                                998M  4.0K  998M   1% /tmp
/dev/mapper/vg7zbackups-lv7zbackups  1.7G   17M  1.6G   1% /backups/nas7zbackups
/dev/mapper/vgdata-data1             493G  5.8G  487G   2% /srv/dev-disk-by-id-dm-name-vgdata-data1

Start backup

root@snap-leftover-nas-original:~# date
Sun 28 Mar 2021 03:55:27 AM MSK

root@snap-leftover-nas-original:~# lvs
  LV                                                             VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv7zbackups                                                    vg7zbackups -wi-ao----   1.76g
  data1                                                          vgdata      owi-aos--- 500.02g
  urbackup_snap_4b6e0767301c4499e7a082451a783a0aec1a649f5d9de638 vgdata      swi-aos---  99.99g      data1  0.01

root@snap-leftover-nas-original:~# df -h
Filesystem                           Size  Used Avail Use% Mounted on
udev                                 977M     0  977M   0% /dev
tmpfs                                200M  4.5M  195M   3% /run
/dev/sda1                             19G  4.6G   14G  26% /
tmpfs                                998M     0  998M   0% /dev/shm
tmpfs                                5.0M     0  5.0M   0% /run/lock
tmpfs                                998M     0  998M   0% /sys/fs/cgroup
tmpfs                                998M  4.0K  998M   1% /tmp
/dev/mapper/vg7zbackups-lv7zbackups  1.7G   17M  1.6G   1% /backups/nas7zbackups
/dev/mapper/vgdata-data1             493G  5.8G  487G   2% /srv/dev-disk-by-id-dm-name-vgdata-data1
/dev/dm-3                            493G  5.8G  487G   2% /mnt/urbackup_snaps/4b6e0767301c4499e7a082451a783a0aec1a649f5d9de638
root@snap-leftover-nas-original:~#

Reset the client during the backup (hard reset, not reboot).
The client started again after the reset. The snapshot was not deleted.

root@snap-leftover-nas-original:~# lvs
  LV                                                             VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv7zbackups                                                    vg7zbackups -wi-ao----   1.76g
  data1                                                          vgdata      owi-aos--- 500.02g
  urbackup_snap_4b6e0767301c4499e7a082451a783a0aec1a649f5d9de638 vgdata      swi-a-s---  99.99g      data1  0.01
root@snap-leftover-nas-original:~#

root@snap-leftover-nas-original:~# urbackupclientctl status
{
"capability_bits": 65548,
"finished_processes": [],
"internet_connected": false,
"internet_status": "wait_local",
"last_backup_time": 1616892841,
"running_processes": [],
"servers": [],
"time_since_last_lan_connection": 263
}

Start backup 2. The old snapshot was not deleted. A new snapshot was created.

root@snap-leftover-nas-original:~# date
Sun 28 Mar 2021 03:58:51 AM MSK

root@snap-leftover-nas-original:~# lvs
  LV                                                             VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv7zbackups                                                    vg7zbackups -wi-ao----   1.76g
  data1                                                          vgdata      owi-aos--- 500.02g
  urbackup_snap_4b6e0767301c4499e7a082451a783a0aec1a649f5d9de638 vgdata      swi-a-s---  99.99g      data1  0.01
  urbackup_snap_effdd4b50618add29a38a940a15bb70e631fb2cad9b8de0e vgdata      swi-aos---  50.00g      data1  0.01

root@snap-leftover-nas-original:~# df -h
Filesystem                           Size  Used Avail Use% Mounted on
udev                                 977M     0  977M   0% /dev
tmpfs                                200M  7.0M  193M   4% /run
/dev/sda1                             19G  4.6G   14G  26% /
tmpfs                                998M     0  998M   0% /dev/shm
tmpfs                                5.0M     0  5.0M   0% /run/lock
tmpfs                                998M     0  998M   0% /sys/fs/cgroup
tmpfs                                998M  4.0K  998M   1% /tmp
/dev/mapper/vg7zbackups-lv7zbackups  1.7G   17M  1.6G   1% /backups/nas7zbackups
/dev/mapper/vgdata-data1             493G  5.8G  487G   2% /srv/dev-disk-by-id-dm-name-vgdata-data1
/dev/dm-6                            493G  5.8G  487G   2% /mnt/urbackup_snaps/effdd4b50618add29a38a940a15bb70e631fb2cad9b8de0e
root@snap-leftover-nas-original:~#

The backup succeeded without errors. The last snapshot was deleted; the old snapshot was not.

root@snap-leftover-nas-original:~# lvs
  LV                                                             VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv7zbackups                                                    vg7zbackups -wi-ao----   1.76g
  data1                                                          vgdata      owi-aos--- 500.02g
  urbackup_snap_4b6e0767301c4499e7a082451a783a0aec1a649f5d9de638 vgdata      swi-a-s---  99.99g      data1  0.01

Client logs. The old snapshot was not deleted. The important log lines follow.
Old snapshot: urbackup_snap_4b6e0767301c4499e7a082451a783a0aec1a649f5d9de638
As far as I can see, the program thinks that "Current root backup path does not exist anymore" and did not try to delete it.

2021-03-28 03:58:41: ClientService cmd: #IVdUis5D0GHp1gRhSLTAp#2LOGDATA 1616893120 0-1616892939-Starting unscheduled full file backup...
0-1616892940-Logical volume "urbackup_snap_4b6e0767301c4499e7a082451a783a0aec1a649f5d9de638" created.
0-1616892940-Indexing of "dev-disk-by-id-dm-name-vgdata-data1" done. 45 filesystem lookups 0 db lookups and 0 db updates
0-1616892940-snap-leftover-nas-original.local: Loading file list...
0-1616892940-snap-leftover-nas-original.local: Started loading files...
0-1616892940-Waiting for file transfers...
2-1616893060-Error getting file metadata. Errorcode: CANNOT_OPEN_FILE (3)
2-1616893078-Error getting complete file "Iq6thsabqfnQjZhD45YT|dev-disk-by-id-dm-name-vgdata-data1/cloud1/Обмен/f2f1" from snap-leftover-nas-original.local. Errorcode: BASE_DIR_LOST (7)
0-1616893078-Hint: Current root backup path does not exist anymore. Usually this is because the current file system snapshot it was backing up was deleted because it ran out of snapshot storage space. See https://www.urbackup.org/faq.html$rbase_dir_lost for details and for how to fix this issue
2-1616893078-Client snap-leftover-nas-original.local went offline.
0-1616893086-Waiting for file hashing and copying threads...
0-1616893120-Writing new file list...
0-1616893120-Transferred 191.765 MB - Average speed: 11.6169 MBit/s
0-1616893120-Time taken for backing up client snap-leftover-nas-original.local: 3m
2-1616893120-Backup failed

2021-03-28 03:58:41: rc=0 hasError=true state=0
2021-03-28 03:58:48: FileSrv: ClientThread deleted. 1024 KB Memory freed.
2021-03-28 03:58:48: FileSrv: ClientThread deleted. 1024 KB Memory freed.
2021-03-28 03:59:00: ClientService cmd: #IVdUis5D0GHp1gRhSLTAp#2PING RUNNING pc_done=&status_id=5&speed_bpms=0&total_bytes=-1&done_bytes=0&paused_fb=1#token=Iq6thsabqfnQjZhD45YT
2021-03-28 03:59:00: ClientService cmd: #IVdUis5D0GHp1gRhSLTAp#3START FULL BACKUP group=0&running_jobs=1&sha=528&with_permissions=1&with_scripts=1&with_orig_path=1&with_sequence=1&with_proper_symlinks=1&status_id=5&async=1#token=Iq6thsabqfnQjZhD45YT
2021-03-28 03:59:00: Async index 0138baec1fae96862a59d00aa7c96048
2021-03-28 03:59:00: Removing VSS log data...
2021-03-28 03:59:00: rc=0 hasError=true state=0
2021-03-28 03:59:00: Final path: /srv/dev-disk-by-id-dm-name-vgdata-data1
2021-03-28 03:59:00: ClientService cmd: #IVdUis5D0GHp1gRhSLTAp#WAIT FOR INDEX async_id=0138baec1fae96862a59d00aa7c96048#token=Iq6thsabqfnQjZhD45YT
2021-03-28 03:59:00: Wait for async index 0138baec1fae96862a59d00aa7c96048
2021-03-28 03:59:00: Script list at "/usr/local/etc/urbackup/scripts/list" does not exist. Skipping.
2021-03-28 03:59:00: Deleting files... doing full index...
2021-03-28 03:59:00: Script "/usr/local/etc/urbackup/prefilebackup" does not exist
2021-03-28 03:59:00: Final path: /srv/dev-disk-by-id-dm-name-vgdata-data1
2021-03-28 03:59:00: Creating shadowcopy of "dev-disk-by-id-dm-name-vgdata-data1" in indexDirs()
2021-03-28 03:59:01: Logical volume "urbackup_snap_effdd4b50618add29a38a940a15bb70e631fb2cad9b8de0e" created.
2021-03-28 03:59:01: Shadowcopy path: /mnt/urbackup_snaps/effdd4b50618add29a38a940a15bb70e631fb2cad9b8de0e
2021-03-28 03:59:01: done.

Start backup 3

root@snap-leftover-nas-original:~# lvs
  LV                                                             VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv7zbackups                                                    vg7zbackups -wi-ao----   1.76g
  data1                                                          vgdata      owi-aos--- 500.02g
  urbackup_snap_4b6e0767301c4499e7a082451a783a0aec1a649f5d9de638 vgdata      swi-a-s---  99.99g      data1  0.01
  urbackup_snap_e0e368a4cba7f8a6e284cc1e83e7e3c5c182cdf6b5b8582b vgdata      swi-aos---  50.00g      data1  0.01

root@snap-leftover-nas-original:~# df -h
Filesystem                           Size  Used Avail Use% Mounted on
udev                                 977M     0  977M   0% /dev
tmpfs                                200M  7.0M  193M   4% /run
/dev/sda1                             19G  4.6G   14G  26% /
tmpfs                                998M     0  998M   0% /dev/shm
tmpfs                                5.0M     0  5.0M   0% /run/lock
tmpfs                                998M     0  998M   0% /sys/fs/cgroup
tmpfs                                998M  8.0K  998M   1% /tmp
/dev/mapper/vg7zbackups-lv7zbackups  1.7G   17M  1.6G   1% /backups/nas7zbackups
/dev/mapper/vgdata-data1             493G  5.8G  487G   2% /srv/dev-disk-by-id-dm-name-vgdata-data1
/dev/dm-6                            493G  5.8G  487G   2% /mnt/urbackup_snaps/e0e368a4cba7f8a6e284cc1e83e7e3c5c182cdf6b5b8582b

Reset the client again during the backup (hard reset, not reboot).
The client started again after the reset. Now there are 2 old snapshots.

root@snap-leftover-nas-original:~# lvs
  LV                                                             VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv7zbackups                                                    vg7zbackups -wi-ao----   1.76g
  data1                                                          vgdata      owi-aos--- 500.02g
  urbackup_snap_4b6e0767301c4499e7a082451a783a0aec1a649f5d9de638 vgdata      swi-a-s---  99.99g      data1  0.01
  urbackup_snap_e0e368a4cba7f8a6e284cc1e83e7e3c5c182cdf6b5b8582b vgdata      swi-a-s---  50.00g      data1  0.01


root@snap-leftover-nas-original:~# df -h
Filesystem                           Size  Used Avail Use% Mounted on
udev                                 977M     0  977M   0% /dev
tmpfs                                200M  4.5M  195M   3% /run
/dev/sda1                             19G  4.6G   14G  26% /
tmpfs                                998M     0  998M   0% /dev/shm
tmpfs                                5.0M     0  5.0M   0% /run/lock
tmpfs                                998M     0  998M   0% /sys/fs/cgroup
tmpfs                                998M  4.0K  998M   1% /tmp
/dev/mapper/vg7zbackups-lv7zbackups  1.7G   17M  1.6G   1% /backups/nas7zbackups
/dev/mapper/vgdata-data1             493G  5.8G  487G   2% /srv/dev-disk-by-id-dm-name-vgdata-data1

root@snap-leftover-nas-original:~# urbackupclientctl status
{
"capability_bits": 65548,
"finished_processes": [],
"internet_connected": false,
"internet_status": "wait_local",
"last_backup_time": 1616893376,
"running_processes": [],
"servers": [],
"time_since_last_lan_connection": 9302
}

Start backup 4. The last old snapshot was deleted, but the first old snapshot was not.

root@snap-leftover-nas-original:~# date
Sun 28 Mar 2021 04:07:44 AM MSK

root@snap-leftover-nas-original:~# lvs
  LV                                                             VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv7zbackups                                                    vg7zbackups -wi-ao----   1.76g
  data1                                                          vgdata      owi-aos--- 500.02g
  urbackup_snap_33c380424685af3830e5101c9eb36d3de035f368fc6ae5f9 vgdata      swi-aos---  50.00g      data1  0.01
  urbackup_snap_4b6e0767301c4499e7a082451a783a0aec1a649f5d9de638 vgdata      swi-a-s---  99.99g      data1  0.01

The backup succeeded without errors. The current snapshot (used for this backup) was deleted; the first old snapshot was not.

root@snap-leftover-nas-original:~# lvs
  LV                                                             VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv7zbackups                                                    vg7zbackups -wi-ao----   1.76g
  data1                                                          vgdata      owi-aos--- 500.02g
  urbackup_snap_4b6e0767301c4499e7a082451a783a0aec1a649f5d9de638 vgdata      swi-a-s---  99.99g      data1  0.01

root@snap-leftover-nas-original:~# df -h
Filesystem                           Size  Used Avail Use% Mounted on
udev                                 977M     0  977M   0% /dev
tmpfs                                200M  7.0M  193M   4% /run
/dev/sda1                             19G  4.6G   14G  26% /
tmpfs                                998M     0  998M   0% /dev/shm
tmpfs                                5.0M     0  5.0M   0% /run/lock
tmpfs                                998M     0  998M   0% /sys/fs/cgroup
tmpfs                                998M  4.0K  998M   1% /tmp
/dev/mapper/vg7zbackups-lv7zbackups  1.7G   17M  1.6G   1% /backups/nas7zbackups
/dev/mapper/vgdata-data1             493G  5.8G  487G   2% /srv/dev-disk-by-id-dm-name-vgdata-data1

Client logs. The last old snapshot was deleted. The important log lines follow.
Last old snapshot: urbackup_snap_e0e368a4cba7f8a6e284cc1e83e7e3c5c182cdf6b5b8582b

2021-03-28 04:07:48: ClientService cmd: #IVdUis5D0GHp1gRhSLTAp#WAIT FOR INDEX async_id=bb9fc67986e1eeb679cbc7ba42ca0d03#token=Iq6thsabqfnQjZhD45YT
2021-03-28 04:07:48: Wait for async index bb9fc67986e1eeb679cbc7ba42ca0d03
2021-03-28 04:07:48: Snapshot is not mounted. Already removed
2021-03-28 04:07:48: LVM snapshot still exists. Destroying LVM snapshot /dev/mapper/vgdata-urbackup_snap_e0e368a4cba7f8a6e284cc1e83e7e3c5c182cdf6b5b8582b...
2021-03-28 04:07:48: Logical volume "urbackup_snap_e0e368a4cba7f8a6e284cc1e83e7e3c5c182cdf6b5b8582b" successfully removed
2021-03-28 04:07:49: Logical volume "urbackup_snap_33c380424685af3830e5101c9eb36d3de035f368fc6ae5f9" created.
2021-03-28 04:07:49: Shadowcopy path: /mnt/urbackup_snaps/33c380424685af3830e5101c9eb36d3de035f368fc6ae5f9
2021-03-28 04:07:49: done.

Reproduction 2: reset the server. Step by step.

Start backup:

root@snap-leftover-nas-original:~# date
Sun 28 Mar 2021 03:35:01 AM MSK

root@snap-leftover-nas-original:~# lvs
  LV                                                             VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv7zbackups                                                    vg7zbackups -wi-ao----   1.76g
  data1                                                          vgdata      owi-aos--- 500.02g
  urbackup_snap_9fb79ab169ebdb5176becd1b80f8a1c1d8a9b947dce51f3b vgdata      swi-aos---  99.99g      data1  0.01

root@snap-leftover-nas-original:~# df -h
Filesystem                           Size  Used Avail Use% Mounted on
udev                                 977M     0  977M   0% /dev
tmpfs                                200M  4.5M  195M   3% /run
/dev/sda1                             19G  4.6G   14G  26% /
tmpfs                                998M     0  998M   0% /dev/shm
tmpfs                                5.0M     0  5.0M   0% /run/lock
tmpfs                                998M     0  998M   0% /sys/fs/cgroup
tmpfs                                998M  4.0K  998M   1% /tmp
/dev/mapper/vg7zbackups-lv7zbackups  1.7G   17M  1.6G   1% /backups/nas7zbackups
/dev/mapper/vgdata-data1             493G  5.8G  487G   2% /srv/dev-disk-by-id-dm-name-vgdata-data1
/dev/dm-4                            493G  5.8G  487G   2% /mnt/urbackup_snaps/9fb79ab169ebdb5176becd1b80f8a1c1d8a9b947dce51f3b

Reset the server. As we can see, the UrBackup client is no longer running a backup, but the snapshot was not deleted.

root@snap-leftover-nas-original:~# urbackupclientctl status
{
"capability_bits": 257823,
"finished_processes": [{
"process_id": 1,
"success": true
}
],
"internet_connected": false,
"internet_status": "connected_local",
"last_backup_time": 1616891587,
"running_processes": [{
"action": "INCR",
"done_bytes": 209180896,
"eta_ms": 66558,
"percent_done": 7,
"process_id": 2,
"server_status_id": 2,
"speed_bpms": 0,
"total_bytes": 3065278464
}
,{
"action": "INCR",
"eta_ms": 76556,
"percent_done": -1,
"process_id": 3,
"server_status_id": 1,
"speed_bpms": 0
}
],
"servers": [{
"internet_connection": false,
"name": "30.30.30.233"
}
],
"time_since_last_lan_connection": 24135
}


root@snap-leftover-nas-original:~# lvs
  LV                                                             VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv7zbackups                                                    vg7zbackups -wi-ao----   1.76g
  data1                                                          vgdata      owi-aos--- 500.02g
  urbackup_snap_9fb79ab169ebdb5176becd1b80f8a1c1d8a9b947dce51f3b vgdata      swi-aos---  99.99g      data1  0.01
root@snap-leftover-nas-original:~#

root@snap-leftover-nas-original:~# df -h
Filesystem                           Size  Used Avail Use% Mounted on
udev                                 977M     0  977M   0% /dev
tmpfs                                200M  4.5M  195M   3% /run
/dev/sda1                             19G  4.6G   14G  26% /
tmpfs                                998M     0  998M   0% /dev/shm
tmpfs                                5.0M     0  5.0M   0% /run/lock
tmpfs                                998M     0  998M   0% /sys/fs/cgroup
tmpfs                                998M  8.0K  998M   1% /tmp
/dev/mapper/vg7zbackups-lv7zbackups  1.7G   17M  1.6G   1% /backups/nas7zbackups
/dev/mapper/vgdata-data1             493G  5.8G  487G   2% /srv/dev-disk-by-id-dm-name-vgdata-data1
/dev/dm-4                            493G  5.8G  487G   2% /mnt/urbackup_snaps/9fb79ab169ebdb5176becd1b80f8a1c1d8a9b947dce51f3b
root@snap-leftover-nas-original:~#

Start backup 2 (after the server reset). The old snapshot was not deleted; a new one was created. Both of them are mounted.

root@snap-leftover-nas-original:~# lvs
  LV                                                             VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv7zbackups                                                    vg7zbackups -wi-ao----   1.76g
  data1                                                          vgdata      owi-aos--- 500.02g
  urbackup_snap_9fb79ab169ebdb5176becd1b80f8a1c1d8a9b947dce51f3b vgdata      swi-aos---  99.99g      data1  0.01
  urbackup_snap_a7fcdc06b2747bdd0211e1672272fbfdd224c6485c34456c vgdata      swi-aos---  50.00g      data1  0.01

# df -h
Filesystem                           Size  Used Avail Use% Mounted on
udev                                 977M     0  977M   0% /dev
tmpfs                                200M  7.0M  193M   4% /run
/dev/sda1                             19G  4.6G   14G  26% /
tmpfs                                998M     0  998M   0% /dev/shm
tmpfs                                5.0M     0  5.0M   0% /run/lock
tmpfs                                998M     0  998M   0% /sys/fs/cgroup
tmpfs                                998M  8.0K  998M   1% /tmp
/dev/mapper/vg7zbackups-lv7zbackups  1.7G   17M  1.6G   1% /backups/nas7zbackups
/dev/mapper/vgdata-data1             493G  5.8G  487G   2% /srv/dev-disk-by-id-dm-name-vgdata-data1
/dev/dm-4                            493G  5.8G  487G   2% /mnt/urbackup_snaps/9fb79ab169ebdb5176becd1b80f8a1c1d8a9b947dce51f3b
/dev/dm-6                            493G  5.8G  487G   2% /mnt/urbackup_snaps/a7fcdc06b2747bdd0211e1672272fbfdd224c6485c34456c

Client logs. The last old snapshot was not deleted. The important log lines follow.
Last old snapshot: urbackup_snap_9fb79ab169ebdb5176becd1b80f8a1c1d8a9b947dce51f3b

As I understand it, the snapshot was not deleted because of: WARNING: Old shadow copy of /srv/dev-disk-by-id-dm-name-vgdata-data1 still in use. Not deleting or using it.

2021-03-28 03:38:16: Removing VSS log data...
2021-03-28 03:38:16: rc=0 hasError=true state=0
2021-03-28 03:38:16: Final path: /srv/dev-disk-by-id-dm-name-vgdata-data1
2021-03-28 03:38:16: Script list at "/usr/local/etc/urbackup/scripts/list" does not exist. Skipping.
2021-03-28 03:38:16: Script "/usr/local/etc/urbackup/prefilebackup" does not exist
2021-03-28 03:38:16: Final path: /srv/dev-disk-by-id-dm-name-vgdata-data1
2021-03-28 03:38:16: Creating shadowcopy of "dev-disk-by-id-dm-name-vgdata-data1" in indexDirs()
2021-03-28 03:38:16: ClientService cmd: #IB3M4IkNuEU2p995SOLph#WAIT FOR INDEX async_id=faad847fc82dbe0961a41b9dc7877e3e#token=Iq6thsabqfnQjZhD45YT
2021-03-28 03:38:16: Wait for async index faad847fc82dbe0961a41b9dc7877e3e
2021-03-28 03:38:21: WARNING: Old shadow copy of /srv/dev-disk-by-id-dm-name-vgdata-data1 still in use. Not deleting or using it.
2021-03-28 03:38:26: ClientService cmd: #IB3M4IkNuEU2p995SOLph#2PING RUNNING pc_done=&eta_ms=76554&status_id=2&speed_bpms=0&total_bytes=-1&done_bytes=0&paused_fb=1#token=Iq6thsabqfnQjZhD45YT
2021-03-28 03:38:26: WARNING: Old shadow copy of /srv/dev-disk-by-id-dm-name-vgdata-data1 still in use. Not deleting or using it.
2021-03-28 03:38:26: Logical volume "urbackup_snap_a7fcdc06b2747bdd0211e1672272fbfdd224c6485c34456c" created.
2021-03-28 03:38:26: Shadowcopy path: /mnt/urbackup_snaps/a7fcdc06b2747bdd0211e1672272fbfdd224c6485c34456c
2021-03-28 03:38:26: done.

Reset the server again. Neither snapshot was deleted.

root@snap-leftover-nas-original:~# lvs
  LV                                                             VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv7zbackups                                                    vg7zbackups -wi-ao----   1.76g
  data1                                                          vgdata      owi-aos--- 500.02g
  urbackup_snap_9fb79ab169ebdb5176becd1b80f8a1c1d8a9b947dce51f3b vgdata      swi-aos---  99.99g      data1  0.01
  urbackup_snap_a7fcdc06b2747bdd0211e1672272fbfdd224c6485c34456c vgdata      swi-aos---  50.00g      data1  0.01

Start backup 3 (after the server reset). The 2 old snapshots were not deleted; a new one was created. All of them are mounted.

root@snap-leftover-nas-original:~# lvs
  LV                                                             VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv7zbackups                                                    vg7zbackups -wi-ao----   1.76g
  data1                                                          vgdata      owi-aos--- 500.02g
  urbackup_snap_5ea384ac045d4a022dbe261f95e93f57e9def0ce13586c53 vgdata      swi-aos---  25.00g      data1  0.01
  urbackup_snap_9fb79ab169ebdb5176becd1b80f8a1c1d8a9b947dce51f3b vgdata      swi-aos---  99.99g      data1  0.01
  urbackup_snap_a7fcdc06b2747bdd0211e1672272fbfdd224c6485c34456c vgdata      swi-aos---  50.00g      data1  0.01

Client logs. The 2 old snapshots were not deleted. The important log lines follow.

2021-03-28 03:40:33: ClientService cmd: #IVdUis5D0GHp1gRhSLTAp#2LOGDATA 1616892032 0-1616892002-Starting unscheduled incremental file backup...
1-1616892024-Old shadow copy of /srv/dev-disk-by-id-dm-name-vgdata-data1 still in use. Not deleting or using it.
1-1616892024-Old shadow copy of /srv/dev-disk-by-id-dm-name-vgdata-data1 still in use. Not deleting or using it.
1-1616892024-Old shadow copy of /srv/dev-disk-by-id-dm-name-vgdata-data1 still in use. Not deleting or using it.
1-1616892024-Old shadow copy of /srv/dev-disk-by-id-dm-name-vgdata-data1 still in use. Not deleting or using it.

Thanks for the detailed analysis!

The second scenario seems to work as intended now (with the patch). It doesn't delete the snapshots while they are still in use (by the way, how are you managing to prevent TCP resets? I don't seem to manage it). The snapshots are deleted later in this case. It could try to do that more often, though.

The first happens only if you hard reset the whole machine rather than reboot, correct?

Answers to your questions:

by the way, how are you managing to prevent TCP resets? I don't seem to manage it

I use Proxmox VE as the hypervisor.
I just reset the UrBackup server, that's all. I demonstrate the process in this short video: https://youtu.be/dX9nVT_B17o.

The first happens only if you hard reset the whole machine rather than reboot, correct?

Honestly, I don't remember, because I tried a lot of things to find a way to reproduce the problem easily.

I'll run tests with reboots and write to you.

The snapshots are deleted later in this case. It could try to do that more often, though.

That will be very good! Snapshots slow down disk I/O performance a great deal, so it is very important that they are deleted as soon as possible.

As I understand it, the UrBackup client checks for old snapshots only when it starts creating a backup, and backups are usually made once a day. I use 7200 RPM SATA HDDs in the NAS servers; nobody can use the NAS while snapshots exist.
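
As a stop-gap until the client retries on its own, a cron job could clear leftovers between backups. A rough sketch; the running-backup check and the mount check below are best-effort assumptions, not something UrBackup provides, so use with care:

#!/bin/sh
# Hypothetical hourly cleanup: remove leftover UrBackup snapshot LVs.
# WARNING: removing a snapshot a backup still needs will break that backup.

# Bail out if the client reports any running backup process.
urbackupclientctl status | grep -q '"running_processes": \[\]' || exit 0

lvs --noheadings -o vg_name,lv_name | grep urbackup_snap_ | \
while read -r vg lv; do
    # Skip snapshots that are still mounted; the mountpoint carries the
    # hash part of the LV name, e.g. /mnt/urbackup_snaps/<hash>.
    grep -q "${lv#urbackup_snap_}" /proc/mounts && continue
    lvremove -f "$vg/$lv"
done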

This version here retries deletion every 60s and syncs while adding the snapshot info to the database. Untested on my end for now.

urbackup-client-2.4.11.0.tar.gz (4.5 MB)

Awesome! I'll check it and write you a detailed report with logs.

I'll run tests with reboots (with this version and the previously patched version).

I ran a few tests with your urbackup-client-2.4.11.0.tar.gz (retries deletion every 60s), but it didn't help.

Strangely, the web interface shows client version 2.0.36 instead of 2.4.11. Is that OK?

Full client and server logs attached:
urbackup2.4.11-uronifix2_del_snap_every60sec_logs.zip (957.6 KB)

2 tests:

  • Reset the server, then wait 10 hours (the snapshot still exists).
  • Reset the server (the snapshot still exists), then start a new backup (error), start the backup again (urbackupclientbackend frozen at 100% CPU), reset the client (snapshot deleted).

The post turned out too big, so I sent it in 2 messages.

Reproduction 1: reset the server, then wait 10 hours (the snapshot still exists). Step by step.

Before starting the backup:

root@snap-leftover-nas-original:~# lvs
  LV          VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv7zbackups vg7zbackups -wi-ao----   1.76g
  data1       vgdata      -wi-ao---- 500.02g

Start backup:

root@snap-leftover-nas-original:~# date
Tue 30 Mar 2021 03:24:31 AM MSK

root@snap-leftover-nas-original:~# lvs
  LV                                                             VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv7zbackups                                                    vg7zbackups -wi-ao----   1.76g
  data1                                                          vgdata      owi-aos--- 500.02g
  urbackup_snap_fb2df9f384cdf4dac3ca29577607d29ad90c5b29f0e13680 vgdata      swi-aos---  99.99g      data1  0.01


root@snap-leftover-nas-original:~# df -h
Filesystem                           Size  Used Avail Use% Mounted on
udev                                 977M     0  977M   0% /dev
tmpfs                                200M  4.5M  196M   3% /run
/dev/sda1                             19G  4.6G   14G  26% /
tmpfs                                998M     0  998M   0% /dev/shm
tmpfs                                5.0M     0  5.0M   0% /run/lock
tmpfs                                998M     0  998M   0% /sys/fs/cgroup
tmpfs                                998M  4.0K  998M   1% /tmp
/dev/mapper/vg7zbackups-lv7zbackups  1.7G   17M  1.6G   1% /backups/nas7zbackups
/dev/mapper/vgdata-data1             493G  4.6G  488G   1% /srv/dev-disk-by-id-dm-name-vgdata-data1
/dev/dm-3                            493G  4.6G  488G   1% /mnt/urbackup_snaps/fb2df9f384cdf4dac3ca29577607d29ad90c5b29f0e13680

Reset the UrBackup server. The snapshot still exists.

root@snap-leftover-nas-original:~# date
Tue 30 Mar 2021 03:25:10 AM MSK

root@snap-leftover-nas-original:~# lvs
  LV                                                             VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv7zbackups                                                    vg7zbackups -wi-ao----   1.76g
  data1                                                          vgdata      owi-aos--- 500.02g
  urbackup_snap_fb2df9f384cdf4dac3ca29577607d29ad90c5b29f0e13680 vgdata      swi-aos---  99.99g      data1  0.01


root@snap-leftover-nas-original:~# df -h
Filesystem                           Size  Used Avail Use% Mounted on
udev                                 977M     0  977M   0% /dev
tmpfs                                200M  4.5M  195M   3% /run
/dev/sda1                             19G  4.6G   14G  26% /
tmpfs                                998M     0  998M   0% /dev/shm
tmpfs                                5.0M     0  5.0M   0% /run/lock
tmpfs                                998M     0  998M   0% /sys/fs/cgroup
tmpfs                                998M  4.0K  998M   1% /tmp
/dev/mapper/vg7zbackups-lv7zbackups  1.7G   17M  1.6G   1% /backups/nas7zbackups
/dev/mapper/vgdata-data1             493G  4.6G  488G   1% /srv/dev-disk-by-id-dm-name-vgdata-data1
/dev/dm-3                            493G  4.6G  488G   1% /mnt/urbackup_snaps/fb2df9f384cdf4dac3ca29577607d29ad90c5b29f0e13680


root@snap-leftover-nas-original:~# urbackupclientctl status
{
"capability_bits": 257823,
"finished_processes": [],
"internet_connected": false,
"internet_status": "connected_local",
"last_backup_time": 1617059800,
"running_processes": [{
"action": "FULL",
"done_bytes": 313400656,
"eta_ms": 185762,
"percent_done": 6,
"process_id": 1,
"server_status_id": 1,
"speed_bpms": 0,
"total_bytes": 4840245248
}
],
"servers": [{
"internet_connection": false,
"name": "30.30.30.233"
}
],
"time_since_last_lan_connection": 51973
}

Wait a few hours…
10 hours later, the snapshot still exists.

root@snap-leftover-nas-original:~# date
Tue 30 Mar 2021 01:36:07 PM MSK


root@snap-leftover-nas-original:~# lvs
  LV                                                             VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv7zbackups                                                    vg7zbackups -wi-ao----   1.76g
  data1                                                          vgdata      owi-aos--- 500.02g
  urbackup_snap_fb2df9f384cdf4dac3ca29577607d29ad90c5b29f0e13680 vgdata      swi-aos---  99.99g      data1  0.01


root@snap-leftover-nas-original:~# df -h
Filesystem                           Size  Used Avail Use% Mounted on
udev                                 977M     0  977M   0% /dev
tmpfs                                200M   22M  178M  11% /run
/dev/sda1                             19G  4.6G   14G  26% /
tmpfs                                998M     0  998M   0% /dev/shm
tmpfs                                5.0M     0  5.0M   0% /run/lock
tmpfs                                998M     0  998M   0% /sys/fs/cgroup
tmpfs                                998M  4.0K  998M   1% /tmp
/dev/mapper/vg7zbackups-lv7zbackups  1.7G   17M  1.6G   1% /backups/nas7zbackups
/dev/mapper/vgdata-data1             493G  4.6G  488G   1% /srv/dev-disk-by-id-dm-name-vgdata-data1
/dev/dm-3                            493G  4.6G  488G   1% /mnt/urbackup_snaps/fb2df9f384cdf4dac3ca29577607d29ad90c5b29f0e13680


root@snap-leftover-nas-original:~# urbackupclientctl status
{
"capability_bits": 257823,
"finished_processes": [],
"internet_connected": false,
"internet_status": "connected_local",
"last_backup_time": 1617059800,
"running_processes": [],
"servers": [{
"internet_connection": false,
"name": "30.30.30.233"
}
],
"time_since_last_lan_connection": 53200
}

As we can see, no process is using it, but the snapshot still exists.

root@snap-leftover-nas-original:~# fuser -m /dev/dm-3


root@snap-leftover-nas-original:~# lsof +D /mnt/urbackup_snaps/fb2df9f384cdf4dac3ca29577607d29ad90c5b29f0e13680/

Part 2. The first part is in the previous post.

Reproduction 2: reset the server (the snapshot still exists), then start a new backup (error), start the backup again (urbackupclientbackend frozen at 100% CPU), reset the client (snapshot deleted). Step by step.

Before starting the backup:

root@snap-leftover-nas-original:~# date
Tue 30 Mar 2021 02:37:20 PM MSK


root@snap-leftover-nas-original:~# lvs
  LV          VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv7zbackups vg7zbackups -wi-ao----   1.76g
  data1       vgdata      -wi-ao---- 500.02g


root@snap-leftover-nas-original:~# df -h
Filesystem                           Size  Used Avail Use% Mounted on
udev                                 977M     0  977M   0% /dev
tmpfs                                200M   22M  178M  11% /run
/dev/sda1                             19G  4.6G   14G  26% /
tmpfs                                998M     0  998M   0% /dev/shm
tmpfs                                5.0M     0  5.0M   0% /run/lock
tmpfs                                998M     0  998M   0% /sys/fs/cgroup
tmpfs                                998M  8.0K  998M   1% /tmp
/dev/mapper/vg7zbackups-lv7zbackups  1.7G   17M  1.6G   1% /backups/nas7zbackups
/dev/mapper/vgdata-data1             493G  4.6G  488G   1% /srv/dev-disk-by-id-dm-name-vgdata-data1
root@snap-leftover-nas-original:~#

Start backup:

root@snap-leftover-nas-original:~# date
Tue 30 Mar 2021 02:38:12 PM MSK


root@snap-leftover-nas-original:~# lvs
  LV                                                             VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv7zbackups                                                    vg7zbackups -wi-ao----   1.76g
  data1                                                          vgdata      owi-aos--- 500.02g
  urbackup_snap_9a69c272ecd750b2aac4486584c99959c66da02039c4c65a vgdata      swi-aos---  99.99g      data1  0.01
root@snap-leftover-nas-original:~#
root@snap-leftover-nas-original:~# df -h
Filesystem                           Size  Used Avail Use% Mounted on
udev                                 977M     0  977M   0% /dev
tmpfs                                200M   22M  178M  11% /run
/dev/sda1                             19G  4.6G   14G  26% /
tmpfs                                998M     0  998M   0% /dev/shm
tmpfs                                5.0M     0  5.0M   0% /run/lock
tmpfs                                998M     0  998M   0% /sys/fs/cgroup
tmpfs                                998M  8.0K  998M   1% /tmp
/dev/mapper/vg7zbackups-lv7zbackups  1.7G   17M  1.6G   1% /backups/nas7zbackups
/dev/mapper/vgdata-data1             493G  4.6G  488G   1% /srv/dev-disk-by-id-dm-name-vgdata-data1
/dev/dm-3                            493G  4.6G  488G   1% /mnt/urbackup_snaps/9a69c272ecd750b2aac4486584c99959c66da02039c4c65a


root@snap-leftover-nas-original:~# urbackupclientctl status
{
"capability_bits": 257823,
"finished_processes": [{
"process_id": 1,
"success": true
}
],
"internet_connected": false,
"internet_status": "connected_local",
"last_backup_time": 1617104009,
"running_processes": [{
"action": "FULL",
"done_bytes": 313924880,
"eta_ms": 144091,
"percent_done": 6,
"process_id": 3,
"server_status_id": 2,
"speed_bpms": 0,
"total_bytes": 4840245248
}
],
"servers": [{
"internet_connection": false,
"name": "30.30.30.233"
}
],
"time_since_last_lan_connection": 9746
}

Reset the UrBackup server. The snapshot still exists.

root@snap-leftover-nas-original:~# date
Tue 30 Mar 2021 02:40:00 PM MSK
root@snap-leftover-nas-original:~#
root@snap-leftover-nas-original:~#
root@snap-leftover-nas-original:~# lvs
  LV                                                             VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv7zbackups                                                    vg7zbackups -wi-ao----   1.76g
  data1                                                          vgdata      owi-aos--- 500.02g
  urbackup_snap_9a69c272ecd750b2aac4486584c99959c66da02039c4c65a vgdata      swi-aos---  99.99g      data1  0.01

root@snap-leftover-nas-original:~# df -h
Filesystem                           Size  Used Avail Use% Mounted on
udev                                 977M     0  977M   0% /dev
tmpfs                                200M   22M  178M  11% /run
/dev/sda1                             19G  4.6G   14G  26% /
tmpfs                                998M     0  998M   0% /dev/shm
tmpfs                                5.0M     0  5.0M   0% /run/lock
tmpfs                                998M     0  998M   0% /sys/fs/cgroup
tmpfs                                998M  8.0K  998M   1% /tmp
/dev/mapper/vg7zbackups-lv7zbackups  1.7G   17M  1.6G   1% /backups/nas7zbackups
/dev/mapper/vgdata-data1             493G  4.6G  488G   1% /srv/dev-disk-by-id-dm-name-vgdata-data1
/dev/dm-3                            493G  4.6G  488G   1% /mnt/urbackup_snaps/9a69c272ecd750b2aac4486584c99959c66da02039c4c65a


root@snap-leftover-nas-original:~# urbackupclientctl status
{
"capability_bits": 257823,
"finished_processes": [{
"process_id": 1,
"success": true
}
],
"internet_connected": false,
"internet_status": "connected_local",
"last_backup_time": 1617104009,
"running_processes": [{
"action": "FULL",
"done_bytes": 1257147664,
"eta_ms": 114090,
"percent_done": 26,
"process_id": 3,
"server_status_id": 2,
"speed_bpms": 31458.2,
"total_bytes": 4840245248
}
],
"servers": [{
"internet_connection": false,
"name": "30.30.30.233"
}
],
"time_since_last_lan_connection": 41402
}

Run the backup again (attempt 1).

UrBackup server logs: the backup had an early error and the partial backup was deleted.

|Info|30.03.21 14:41|Starting unscheduled full file backup...|
| --- | --- | --- |
|Warnings|30.03.21 14:41|Old shadow copy of /srv/dev-disk-by-id-dm-name-vgdata-data1 still in use. Not deleting or using it.|
|Warnings|30.03.21 14:41|Old shadow copy of /srv/dev-disk-by-id-dm-name-vgdata-data1 still in use. Not deleting or using it.|
|Info|30.03.21 14:41|Logical volume "urbackup_snap_d46315946d62af7198e510102e9ede506d25b9df2205a928" created.|
|Errors|30.03.21 14:41|Cannot access path to backup: "/mnt/urbackup_snaps/d46315946d62af7198e510102e9ede506d25b9df2205a928/486584c99959c66da02039c4c65a" Errorcode: 2 - No such file or directory|
|Info|30.03.21 14:41|Unmounting /dev/mapper/vgdata-urbackup_snap_d46315946d62af7198e510102e9ede506d25b9df2205a928 at /mnt/urbackup_snaps/d46315946d62af7198e510102e9ede506d25b9df2205a928...|
|Info|30.03.21 14:41|Destroying LVM snapshot /dev/mapper/vgdata-urbackup_snap_d46315946d62af7198e510102e9ede506d25b9df2205a928...|
|Info|30.03.21 14:41|Logical volume "urbackup_snap_d46315946d62af7198e510102e9ede506d25b9df2205a928" successfully removed|
|Errors|30.03.21 14:41|Indexing files failed, because of error|
|Errors|30.03.21 14:41|Constructing of filelist of "snap-leftover-nas-original.local" failed: error - index error|
|Errors|30.03.21 14:42|Backup had an early error. Deleting partial backup.|

UrBackup client logs.
Cannot access path to backup: "/mnt/urbackup_snaps/d46315946d62af7198e510102e9ede506d25b9df2205a928/486584c99959c66da02039c4c65a" Errorcode: 2 - No such file or directory
This path cannot exist in this snapshot. Where does UrBackup get this path from?

2021-03-30 14:41:26: ClientService cmd: PONG
2021-03-30 14:41:36: ClientService cmd: STATUS DETAIL#pw=wC2sgTNu6V3HCu6pCUQ20JM6whrP3e
2021-03-30 14:41:36: rc=0 hasError=true state=0
2021-03-30 14:41:47: ClientService cmd: #IjhGQdDfX6Nxfr2GmLsE2#2PING RUNNING pc_done=&status_id=1&speed_bpms=0&total_bytes=-1&done_bytes=0&paused_fb=1#token=Iq6thsabqfnQjZhD45YT
2021-03-30 14:41:47: ClientService cmd: #IjhGQdDfX6Nxfr2GmLsE2#3START FULL BACKUP group=0&running_jobs=1&sha=528&with_permissions=1&with_scripts=1&with_orig_path=1&with_sequence=1&with_proper_symlinks=1&status_id=1&async=1#token=Iq6thsabqfnQjZhD45YT
2021-03-30 14:41:47: Removing running process (timeout) id 3 server_id 2 token Iq6thsabqfnQjZhD45YT action 2
2021-03-30 14:41:47: Async index a883647266b162b78916835768458a28
2021-03-30 14:41:47: Removing VSS log data...
2021-03-30 14:41:47: rc=0 hasError=true state=0
2021-03-30 14:41:47: Final path: /srv/dev-disk-by-id-dm-name-vgdata-data1
2021-03-30 14:41:47: Script list at "/usr/local/etc/urbackup/scripts/list" does not exist. Skipping.
2021-03-30 14:41:47: Deleting files... doing full index...
2021-03-30 14:41:47: Script "/usr/local/etc/urbackup/prefilebackup" does not exist
2021-03-30 14:41:47: Final path: /srv/dev-disk-by-id-dm-name-vgdata-data1
2021-03-30 14:41:47: Creating shadowcopy of "dev-disk-by-id-dm-name-vgdata-data1" in indexDirs()
2021-03-30 14:41:47: ClientService cmd: #IjhGQdDfX6Nxfr2GmLsE2#WAIT FOR INDEX async_id=a883647266b162b78916835768458a28#token=Iq6thsabqfnQjZhD45YT
2021-03-30 14:41:47: Wait for async index a883647266b162b78916835768458a28
2021-03-30 14:41:52: WARNING: Old shadow copy of /srv/dev-disk-by-id-dm-name-vgdata-data1 still in use. Not deleting or using it.
2021-03-30 14:41:57: ClientService cmd: #IjhGQdDfX6Nxfr2GmLsE2#2PING RUNNING pc_done=&eta_ms=144090&status_id=1&speed_bpms=0&total_bytes=-1&done_bytes=0&paused_fb=1#token=Iq6thsabqfnQjZhD45YT
2021-03-30 14:41:57: WARNING: Old shadow copy of /srv/dev-disk-by-id-dm-name-vgdata-data1 still in use. Not deleting or using it.
2021-03-30 14:41:57: Logical volume "urbackup_snap_d46315946d62af7198e510102e9ede506d25b9df2205a928" created.
2021-03-30 14:41:57: Shadowcopy path: /mnt/urbackup_snaps/d46315946d62af7198e510102e9ede506d25b9df2205a928
2021-03-30 14:41:57: done.
2021-03-30 14:41:57: Indexing "dev-disk-by-id-dm-name-vgdata-data1"...
2021-03-30 14:41:57: ERROR: Cannot access path to backup: "/mnt/urbackup_snaps/d46315946d62af7198e510102e9ede506d25b9df2205a928/486584c99959c66da02039c4c65a" Errorcode: 2 - No such file or directory
2021-03-30 14:41:57: Deleting shadowcopy for path "/mnt/urbackup_snaps/d46315946d62af7198e510102e9ede506d25b9df2205a928/486584c99959c66da02039c4c65a" -2
2021-03-30 14:41:58: Unmounting /dev/mapper/vgdata-urbackup_snap_d46315946d62af7198e510102e9ede506d25b9df2205a928 at /mnt/urbackup_snaps/d46315946d62af7198e510102e9ede506d25b9df2205a928...
2021-03-30 14:41:58: Destroying LVM snapshot /dev/mapper/vgdata-urbackup_snap_d46315946d62af7198e510102e9ede506d25b9df2205a928...
2021-03-30 14:41:58: Logical volume "urbackup_snap_d46315946d62af7198e510102e9ede506d25b9df2205a928" successfully removed
2021-03-30 14:41:58: Deleting Shadowcopy for dir "/srv/dev-disk-by-id-dm-name-vgdata-data1"
2021-03-30 14:41:58: ERROR: Indexing files failed, because of error
2021-03-30 14:41:58: Script "/usr/local/etc/urbackup/postfileindex" does not exist
2021-03-30 14:41:58: Async index a883647266b162b78916835768458a28 finished with "error - index error"
2021-03-30 14:41:58: ClientService cmd: #IjhGQdDfX6Nxfr2GmLsE2#GET VSSLOG
2021-03-30 14:41:58: VSS logdata - 996 bytes
2021-03-30 14:41:58: rc=0 hasError=true state=0
2021-03-30 14:41:58: rc=0 hasError=true state=0
2021-03-30 14:42:07: rc=0 hasError=true state=0
2021-03-30 14:42:08: ClientService cmd: #IjhGQdDfX6Nxfr2GmLsE2#2LOGDATA 1617104527 0-1617104507-Starting unscheduled full file backup...
1-1617104518-Old shadow copy of /srv/dev-disk-by-id-dm-name-vgdata-data1 still in use. Not deleting or using it.
1-1617104518-Old shadow copy of /srv/dev-disk-by-id-dm-name-vgdata-data1 still in use. Not deleting or using it.
0-1617104518-Logical volume "urbackup_snap_d46315946d62af7198e510102e9ede506d25b9df2205a928" created.
2-1617104518-Cannot access path to backup: "/mnt/urbackup_snaps/d46315946d62af7198e510102e9ede506d25b9df2205a928/486584c99959c66da02039c4c65a" Errorcode: 2 - No such file or directory
0-1617104518-Unmounting /dev/mapper/vgdata-urbackup_snap_d46315946d62af7198e510102e9ede506d25b9df2205a928 at /mnt/urbackup_snaps/d46315946d62af7198e510102e9ede506d25b9df2205a928...
0-1617104518-Destroying LVM snapshot /dev/mapper/vgdata-urbackup_snap_d46315946d62af7198e510102e9ede506d25b9df2205a928...
0-1617104518-Logical volume "urbackup_snap_d46315946d62af7198e510102e9ede506d25b9df2205a928" successfully removed
2-1617104518-Indexing files failed, because of error
2-1617104518-Constructing of filelist of "snap-leftover-nas-original.local" failed: error - index error
2-1617104527-Backup had an early error. Deleting partial backup.

The old snapshot still exists (a manual cleanup sketch follows the lvs output below).

root@snap-leftover-nas-original:~# date
Tue 30 Mar 2021 02:42:46 PM MSK


root@snap-leftover-nas-original:~# df -h
Filesystem                           Size  Used Avail Use% Mounted on
udev                                 977M     0  977M   0% /dev
tmpfs                                200M   22M  178M  11% /run
/dev/sda1                             19G  4.6G   14G  26% /
tmpfs                                998M     0  998M   0% /dev/shm
tmpfs                                5.0M     0  5.0M   0% /run/lock
tmpfs                                998M     0  998M   0% /sys/fs/cgroup
tmpfs                                998M  8.0K  998M   1% /tmp
/dev/mapper/vg7zbackups-lv7zbackups  1.7G   17M  1.6G   1% /backups/nas7zbackups
/dev/mapper/vgdata-data1             493G  4.6G  488G   1% /srv/dev-disk-by-id-dm-name-vgdata-data1
/dev/dm-3                            493G  4.6G  488G   1% /mnt/urbackup_snaps/9a69c272ecd750b2aac4486584c99959c66da02039c4c65a


root@snap-leftover-nas-original:~# lvs
  LV                                                             VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv7zbackups                                                    vg7zbackups -wi-ao----   1.76g
  data1                                                          vgdata      owi-aos--- 500.02g
  urbackup_snap_9a69c272ecd750b2aac4486584c99959c66da02039c4c65a vgdata      swi-aos---  99.99g      data1  0.01
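
For manual cleanup, here is a minimal sketch, assuming the leftover LVs always start with urbackup_snap_ and are mounted under /mnt/urbackup_snaps/<hash> as shown above. Use at your own risk, and only while no backup is running:

lvs --noheadings -o vg_name,lv_name | while read -r vg lv; do
  case "$lv" in urbackup_snap_*)
    mnt="/mnt/urbackup_snaps/${lv#urbackup_snap_}"   # mount point is the hash part of the LV name
    mountpoint -q "$mnt" && umount "$mnt"            # unmount first if the snapshot is still mounted
    lvremove -f "/dev/$vg/$lv"
    ;;
  esac
done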

Ran the backup again (attempt 2). No new snapshot was created.
urbackupclientbackend is frozen. In a previous test I waited more than an hour; the process was still using 100% CPU and nothing happened.
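
To help track down where it spins, one option (a sketch, assuming gdb and strace are installed, and that urbackupclientbackend is the PID shown in top below) is to capture thread backtraces and a short syscall trace from the stuck process:

gdb -p "$(pidof urbackupclientbackend)" -batch -ex "thread apply all bt" > /tmp/urbackup-backtrace.txt
strace -f -p "$(pidof urbackupclientbackend)" -o /tmp/urbackup-strace.txt   # stop with Ctrl-C after a few seconds

A busy loop usually shows up as the same frames in every backtrace, and either no syscalls at all or the same one repeating in the trace.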

root@snap-leftover-nas-original:~# date
Tue 30 Mar 2021 02:58:10 PM MSK

# top

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
  561 root      20   0 1105044  66476   8248 S 100.3   3.3  18:40.85 urbackupclientb


root@snap-leftover-nas-original:~# lvs
  LV                                                             VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv7zbackups                                                    vg7zbackups -wi-ao----   1.76g
  data1                                                          vgdata      owi-aos--- 500.02g
  urbackup_snap_9a69c272ecd750b2aac4486584c99959c66da02039c4c65a vgdata      swi-aos---  99.99g      data1  0.01


root@snap-leftover-nas-original:~# df -h
Filesystem                           Size  Used Avail Use% Mounted on
udev                                 977M     0  977M   0% /dev
tmpfs                                200M   22M  178M  11% /run
/dev/sda1                             19G  4.6G   14G  26% /
tmpfs                                998M     0  998M   0% /dev/shm
tmpfs                                5.0M     0  5.0M   0% /run/lock
tmpfs                                998M     0  998M   0% /sys/fs/cgroup
tmpfs                                998M  8.0K  998M   1% /tmp
/dev/mapper/vg7zbackups-lv7zbackups  1.7G   17M  1.6G   1% /backups/nas7zbackups
/dev/mapper/vgdata-data1             493G  4.6G  488G   1% /srv/dev-disk-by-id-dm-name-vgdata-data1
/dev/dm-3                            493G  4.6G  488G   1% /mnt/urbackup_snaps/9a69c272ecd750b2aac4486584c99959c66da02039c4c65a

Server logs (from web):

|Date|Level|Message|
|---|---|---|
|30.03.21 14:57|DEBUG|Reflink copying is enabled|
|30.03.21 14:57|INFO|Starting unscheduled full file backup...|
|30.03.21 14:57|DEBUG|snap-leftover-nas-original.local: Connecting for filelist...|
|30.03.21 14:57|DEBUG|snap-leftover-nas-original.local: Waiting for filelist|
|30.03.21 14:57|DEBUG|snap-leftover-nas-original.local: Connecting for filelist (async)...|
…and then nothing happens.

Client logs:

2021-03-30 14:57:56: ClientService cmd: #IjhGQdDfX6Nxfr2GmLsE2#3START FULL BACKUP group=0&running_jobs=1&sha=528&with_permissions=1&with_scripts=1&with_orig_path=1&with_sequence=1&with_proper_symlinks=1&status_id=2&async=1#token=Iq6thsabqfnQjZhD45YT
2021-03-30 14:57:56: Removing running process (timeout) id 4 server_id 1 token Iq6thsabqfnQjZhD45YT action 2
2021-03-30 14:57:56: Async index 76ac0d2f8510912ad4b895cdf1093688
2021-03-30 14:57:56: rc=0 hasError=true state=0
2021-03-30 14:57:56: ClientService cmd: #IjhGQdDfX6Nxfr2GmLsE2#WAIT FOR INDEX async_id=76ac0d2f8510912ad4b895cdf1093688#token=Iq6thsabqfnQjZhD45YT
2021-03-30 14:57:56: Wait for async index 76ac0d2f8510912ad4b895cdf1093688
2021-03-30 14:58:06: ClientService cmd: #IjhGQdDfX6Nxfr2GmLsE2#2PING RUNNING pc_done=&eta_ms=144095&status_id=2&speed_bpms=0&total_bytes=-1&done_bytes=0&paused_fb=1#token=Iq6thsabqfnQjZhD45YT
2021-03-30 14:58:16: ClientService cmd: #IjhGQdDfX6Nxfr2GmLsE2#2PING RUNNING pc_done=&eta_ms=134095&status_id=2&speed_bpms=0&total_bytes=-1&done_bytes=0&paused_fb=1#token=Iq6thsabqfnQjZhD45YT
2021-03-30 14:58:26: ClientService cmd: PONG
2021-03-30 14:58:26: ClientService cmd: #IjhGQdDfX6Nxfr2GmLsE2#2PING RUNNING pc_done=&eta_ms=124094&status_id=2&speed_bpms=0&total_bytes=-1&done_bytes=0&paused_fb=1#token=Iq6thsabqfnQjZhD45YT
2021-03-30 14:58:36: ClientService cmd: #IjhGQdDfX6Nxfr2GmLsE2#2PING RUNNING pc_done=&eta_ms=114094&status_id=2&speed_bpms=0&total_bytes=-1&done_bytes=0&paused_fb=1#token=Iq6thsabqfnQjZhD45YT
2021-03-30 14:58:46: ClientService cmd: #IjhGQdDfX6Nxfr2GmLsE2#2PING RUNNING pc_done=&eta_ms=104094&status_id=2&speed_bpms=0&total_bytes=-1&done_bytes=0&paused_fb=1#token=Iq6thsabqfnQjZhD45YT
2021-03-30 14:58:56: ClientService cmd: #IjhGQdDfX6Nxfr2GmLsE2#2PING RUNNING pc_done=&eta_ms=94093&status_id=2&speed_bpms=0&total_bytes=-1&done_bytes=0&paused_fb=1#token=Iq6thsabqfnQjZhD45YT
2021-03-30 14:59:06: ClientService cmd: #IjhGQdDfX6Nxfr2GmLsE2#2PING RUNNING pc_done=&eta_ms=84093&status_id=2&speed_bpms=0&total_bytes=-1&done_bytes=0&paused_fb=1#token=Iq6thsabqfnQjZhD45YT
2021-03-30 14:59:16: ClientService cmd: #IjhGQdDfX6Nxfr2GmLsE2#2PING RUNNING pc_done=&eta_ms=74092&status_id=2&speed_bpms=0&total_bytes=-1&done_bytes=0&paused_fb=1#token=Iq6thsabqfnQjZhD45YT
2021-03-30 14:59:26: ClientService cmd: PONG
2021-03-30 14:59:26: ClientService cmd: #IjhGQdDfX6Nxfr2GmLsE2#2PING RUNNING pc_done=&eta_ms=64092&status_id=2&speed_bpms=0&total_bytes=-1&done_bytes=0&paused_fb=1#token=Iq6thsabqfnQjZhD45YT
2021-03-30 14:59:36: ClientService cmd: #IjhGQdDfX6Nxfr2GmLsE2#2PING RUNNING pc_done=&eta_ms=61000&status_id=2&speed_bpms=0&total_bytes=-1&done_bytes=0&paused_fb=1#token=Iq6thsabqfnQjZhD45YT
2021-03-30 14:59:46: ClientService cmd: #IjhGQdDfX6Nxfr2GmLsE2#2PING RUNNING pc_done=&eta_ms=61000&status_id=2&speed_bpms=0&total_bytes=-1&done_bytes=0&paused_fb=1#token=Iq6thsabqfnQjZhD45YT
2021-03-30 14:59:56: ClientService cmd: #IjhGQdDfX6Nxfr2GmLsE2#2PING RUNNING pc_done=&eta_ms=61000&status_id=2&speed_bpms=0&total_bytes=-1&done_bytes=0&paused_fb=1#token=Iq6thsabqfnQjZhD45YT
2021-03-30 15:00:06: ClientService cmd: #IjhGQdDfX6Nxfr2GmLsE2#2PING RUNNING pc_done=&eta_ms=61000&status_id=2&speed_bpms=0&total_bytes=-1&done_bytes=0&paused_fb=1#token=Iq6thsabqfnQjZhD45YT
2021-03-30 15:00:16: ClientService cmd: #IjhGQdDfX6Nxfr2GmLsE2#2PING RUNNING pc_done=&eta_ms=61000&status_id=2&speed_bpms=0&total_bytes=-1&done_bytes=0&paused_fb=1#token=Iq6thsabqfnQjZhD45YT
2021-03-30 15:00:26: ClientService cmd: PONG

urbackupclientbackend is still frozen, at 100% CPU:

# top
 PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+ COMMAND
  561 root      20   0 1105044  66476   8248 S 100.0   3.3  21:29.37 urbackupclientb

Resetting the client (i.e. restarting the client service; a restart sketch follows the "before" output below).
Before the reset:

root@snap-leftover-nas-original:~# date
Tue 30 Mar 2021 03:12:37 PM MSK


root@snap-leftover-nas-original:~# lvs
  LV                                                             VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv7zbackups                                                    vg7zbackups -wi-ao----   1.76g
  data1                                                          vgdata      owi-aos--- 500.02g
  urbackup_snap_9a69c272ecd750b2aac4486584c99959c66da02039c4c65a vgdata      swi-aos---  99.99g      data1  0.01


root@snap-leftover-nas-original:~# df -h
Filesystem                           Size  Used Avail Use% Mounted on
udev                                 977M     0  977M   0% /dev
tmpfs                                200M   22M  178M  11% /run
/dev/sda1                             19G  4.6G   14G  26% /
tmpfs                                998M     0  998M   0% /dev/shm
tmpfs                                5.0M     0  5.0M   0% /run/lock
tmpfs                                998M     0  998M   0% /sys/fs/cgroup
tmpfs                                998M  8.0K  998M   1% /tmp
/dev/mapper/vg7zbackups-lv7zbackups  1.7G   17M  1.6G   1% /backups/nas7zbackups
/dev/mapper/vgdata-data1             493G  4.6G  488G   1% /srv/dev-disk-by-id-dm-name-vgdata-data1
/dev/dm-3                            493G  4.6G  488G   1% /mnt/urbackup_snaps/9a69c272ecd750b2aac4486584c99959c66da02039c4c65a
root@snap-leftover-nas-original:~#
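
The reset itself is just a restart of the client service. On a standard Debian install that would be something like the following (the unit name is an assumption and may differ per install):

systemctl restart urbackupclientbackend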

After resetting the client, the snapshot is deleted:

root@snap-leftover-nas-original:~# date
Tue 30 Mar 2021 03:14:00 PM MSK
root@snap-leftover-nas-original:~#
root@snap-leftover-nas-original:~# lvs
  LV          VG          Attr       LSize   Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  lv7zbackups vg7zbackups -wi-ao----   1.76g
  data1       vgdata      -wi-ao---- 500.02g

root@snap-leftover-nas-original:~# df -h
Filesystem                           Size  Used Avail Use% Mounted on
udev                                 977M     0  977M   0% /dev
tmpfs                                200M  7.0M  193M   4% /run
/dev/sda1                             19G  4.6G   14G  26% /
tmpfs                                998M     0  998M   0% /dev/shm
tmpfs                                5.0M     0  5.0M   0% /run/lock
tmpfs                                998M     0  998M   0% /sys/fs/cgroup
tmpfs                                998M  4.0K  998M   1% /tmp
/dev/mapper/vg7zbackups-lv7zbackups  1.7G   17M  1.6G   1% /backups/nas7zbackups
/dev/mapper/vgdata-data1             493G  4.6G  488G   1% /srv/dev-disk-by-id-dm-name-vgdata-data1

Client logs:

2021-03-30 15:13:24: SQLite: recovered 126 frames from WAL file /usr/local/var/urbackup/backup_client.db-wal code: 283
2021-03-30 15:13:24: urbackupserver: Server started up successfully!
2021-03-30 15:13:24: Started UrBackupClient Backend...
2021-03-30 15:13:24: FileSrv: Binding UDP socket at port 35622...
2021-03-30 15:13:24: FileSrv: done.
2021-03-30 15:13:24: FileSrv: Binding ipv6 UDP socket at port 35622...
2021-03-30 15:13:24: FileSrv: done.
2021-03-30 15:13:24: ERROR: Error joining ipv6 multicast group ff12::f894:d:dd00:ef91
2021-03-30 15:13:24: FileSrv: Servername: -snap-leftover-nas-original.local-
2021-03-30 15:13:24: FileSrv: Server started up successfully
2021-03-30 15:13:24: FileSrv: UDP Thread started
2021-03-30 15:13:25: Looking for old Sessions... 0 sessions
2021-03-30 15:13:25: Internet only mode not enabled. Waiting for local server for 3:00...
2021-03-30 15:13:26: Snapshot is not mounted. Already removed
2021-03-30 15:13:26: LVM snapshot still exists. Destroying LVM snapshot /dev/mapper/vgdata-urbackup_snap_9a69c272ecd750b2aac4486584c99959c66da02039c4c65a...
2021-03-30 15:13:26: Logical volume "urbackup_snap_9a69c272ecd750b2aac4486584c99959c66da02039c4c65a" successfully removed
2021-03-30 15:13:26: Final path: /srv/dev-disk-by-id-dm-name-vgdata-data1
2021-03-30 15:13:43: urbackupserver: No available slots... starting new Worker

Thx! It is a bit more complicated than I thought. Here is the next one. I tested this one a bit, so the chance is higher that it works: urbackup-client-2.4.12.0.tar.gz (4.5 MB)
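
For anyone else who wants to test that build: the client source tarball should build the usual autotools way. A sketch, assuming the standard source layout (configure options may vary per distro):

tar xf urbackup-client-2.4.12.0.tar.gz
cd urbackup-client-2.4.12.0
./configure
make -j"$(nproc)"
sudo make install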