migrate_storage_to for backup storage move

The latest beta gives the same error as the 2.1.9 server. Running on the Windows Server 2012 R2 platform.

I created the migrate_storage_to file in C:\Program Files\UrBackupServer\urbackup containing the path to the new storage location, U:\urbackup.
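For reference, the trigger is just a plain text file containing the target path; a minimal sketch (POSIX shell syntax is used here so it is self-contained — on the Windows server you would create the file in the directory above with Notepad or `echo` instead):

```shell
# Minimal sketch: the migration trigger is a text file named
# migrate_storage_to whose content is the target path. On the server it
# lives in "C:\Program Files\UrBackupServer\urbackup"; adjust for your
# install.
printf '%s' 'U:\urbackup' > migrate_storage_to
cat migrate_storage_to   # U:\urbackup
```

UrBackup reads this file on service start and begins the migration.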

Starting the service results in the following:

  • Nothing shows under Activities.

  • Get this error in urbackup.log
    2017-08-20 04:25:35: ERROR: LMDB: Failed to open LMDB database file (There is not enough space on the disk.)
    2017-08-20 04:25:35: ERROR: Error opening inode db

  • The folder U:\urbackup\inode_db is created, along with the files inode_db.lmdb and inode_db.lmdb-lock inside it.

  • U:\ is an empty 1TB NTFS volume.

  • ACLs: SYSTEM has Full Control on the folder and disk.

Any solution to this problem?
The background: I need to move all backups to a new volume. I found the migrate_storage_to function, which looked like a very elegant solution, but as you can see, I can’t get it working. Any takers?

The inode_db size seems to be set to 1 TB. I assumed it would be a sparse file and that it wouldn’t actually need that much space…


Lack of imagination on my side; I should have tried creating a larger volume. The urbackup folder on the storage I’m migrating wasn’t more than 300 GB, so I presumed 1 TB was fine.

Created a 2TB volume and it’s now on its way migrating the backups.

The story behind the need to move my storage: I have a major performance problem with the source volume. It’s a 12 TB RAID5 (which has lived through many firmware updates, expansions, block conversions and RAID migrations, which I suspect made its performance go bonkers) hosted by an HP P420 Smart Array. Any read from the volume is really slow, pushing Active Time to almost 100% and Average Response Time to >500 ms (sometimes up to 2–3 s). Right now read speeds are around 1-100MB/s … So, I’ll get back in a week or so to report whether the migration was successful :slight_smile:

And out of context questions:

  • Now running beta 2.2.4. In the status bar I get a prompt asking “Upgrade to 2.1.9?”. Is this just normal beta behavior?
  • While investigating how to migrate the storage, I found some comments about using HardLinkShellExtension to do a smart copy, since it should preserve all linking. Is this a viable approach, or should one have a “no touch” policy and let UrBackup handle the migration, as I’m doing here?

Many thanks for this awesome software solution! The best I’ve used in its category.


Personally, I let UrBackup handle all migrations.

Check the battery on your array: an empty battery causes the cache not to be used and the array to switch out of write-back mode, which has a large impact on performance (Google for details).

I don’t think it can have an impact on UrBackup, so I guess it’s OK to use cp/dd/rsync to switch a file back to sparse.
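For what it’s worth, re-sparsifying with GNU cp looks roughly like this (a Linux-side sketch with illustrative file names, not the Windows setup from this thread):

```shell
# Sketch: make a sparse copy of a file with GNU cp.
dd if=/dev/zero of=dense.img bs=1M count=8 status=none  # 8 MiB of zeros, fully allocated
cp --sparse=always dense.img sparse.img                 # punch holes where blocks are all zero
du -k dense.img sparse.img                              # allocated size: sparse.img is (almost) nothing
```

Both files have the same apparent size, but `du` shows the sparse copy allocates far fewer blocks.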

The upgrade-to-version prompt in the status bar is normal when you switch versions (not only betas); it happens sometimes, and the upgrade can take more or less time. There’s info on the site about changing some kernel parameters to make the upgrade faster.

OK, I’ll let UrBackup have its way with this migration stuff then. I’m in no hurry.

There’s a 2 GB flash-backed cache installed (cache set to 10% read / 90% write). The report from HP Smart Storage Administrator says everything is in working order. I’ve also tried enabling the cache even when it thinks the battery is not charged; no difference for this bonky volume. What speaks against a fault in the controller, and points to something fishy with the RAID volume itself, is that I’ve got another 12 TB RAID5 (4×4 TB) attached to channel 2 which performs as it should, with great read/write speeds.

Regarding the upgrade notification: 2.1.9 feels like a downgrade since I’m now running 2.2.4 BETA, so I’ll just ignore it.

Status: 28% migrated …

100% done

The migration went well, as far as I can tell. The post-migration cleanup_unknown job took a while, though.

Backing up clients now: under Activities, the Used Storage column reports Unknown, and the clients’ Status reports “No recent backup” even after a backup is made. But maybe this is a beta issue. Spot-checking the backed-up files, everything looks OK.


Should inode_db.lmdb be so big during migration? I had the same issue and used the same workaround, but I’m wondering if the code uses the wrong units for a default value, as I can’t see why this file needs to be so big. It’s 1024 × 1024 × 1024 × 1024 bytes. Shouldn’t it be only 1024 × 1024 bytes?
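To spell out the suspected units mix-up (my arithmetic, not anything from the UrBackup source):

```shell
# 1024^4 bytes is 1 TiB -- which matches the ~1 TB file observed above --
# while 1024^2 bytes is only 1 MiB.
echo $((1024 * 1024 * 1024 * 1024))  # 1099511627776 (1 TiB)
echo $((1024 * 1024))                # 1048576 (1 MiB)
```

So if a default meant to be in MiB were computed in bytes-of-TiB, you’d get exactly the 1 TB file seen here.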

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.