Restore size limitation

I ran into an unpleasant limitation: restores from the web UI are always truncated at a 4 GB zip file.
I tried different software versions (matching on server and client), different operating systems, and many browsers on several client OSes; the result is always the same.
Is this a limitation of the embedded web server, or of the current zip format's size limit?
We have 50-60 user directories totalling about 2 TB, and restoring a group/department (containing many users) very often exceeds this limit. Selecting subdirectories and restoring them step by step is a terrible workaround.

UrBackup 1.4.4-1.4.10; Windows 2012, Windows 2008 R2, Synology NAS (as the server); Windows 2003, 2008 R2, 2012 (as clients); latest versions of Mozilla Firefox/Google Chrome/Internet Explorer.

I am aware of this issue and have been in contact with the zip library author. However, when (or whether) he will get around to releasing Zip64 support is an open question.

The UrBackup storage layout is designed to not require any tools, including UrBackup itself, for restores. It mirrors the client's file layout as closely as possible and stores files as-is. As such, you should be able to use any file transfer program to restore the files.
I don't use a Synology NAS, but I am sure it can share files via Windows file sharing, FTP, SCP, WebDAV, AFP, etc.
The only caveat is that you have to configure it to follow symlinks if you did not disable "Use symlinks during incremental file backups". Here is how to do that for Windows file sharing on Linux:
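A minimal Samba share sketch along those lines (the share name and path here are hypothetical; adjust them to your storage location). Note that `wide links` only takes effect when `unix extensions` is disabled:

```
[global]
   # "wide links" is ignored unless SMB1 unix extensions are off
   unix extensions = no

[backups]
   path = /mnt/backups
   read only = yes
   # Follow symlinks, including ones pointing outside the share,
   # so symlinked incremental backups are browsable
   follow symlinks = yes
   wide links = yes
```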

Thanks for your reply.
Symlinks are very useful for saving storage, so I have that feature turned on (on the Synology).
Some years ago I ran a Linux UrBackup server (Ubuntu), but it had to be switched to Windows 2012, since the backup hardware now handles additional tasks that require Windows. Additionally, Windows 2012 Storage Server's NTFS deduplication + compression saves more storage than Ext3/4 with symlinks. But I'm sure that creating SMB shares with access to the backup directory is not the best approach security-wise, and furthermore there are routed (internet) clients who can't use shares (though I have to admit they restore large amounts of data very seldom).

OK, I see; I'll wait for the new zip format support.
Thanks once again, and sorry for my English.

Do you have any sort of timeframe for this? Are there other libraries that could be used that would not have this limitation?

Hey uroni, any news from library author on this?

No, but you can now use the built-in restore to restore large amounts of data. It also reconnects after link failures and uses existing data to transfer only the differences.
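For reference, a hedged sketch of driving the built-in restore from a client shell. The command and flag names here are assumptions; verify them with `urbackupclientctl --help` on your installation, and the backup id is hypothetical. By default the sketch only prints what it would run; set `DRY_RUN=0` on a machine with the UrBackup client installed.

```shell
#!/bin/sh
# Dry-run wrapper: print the command instead of executing it unless
# DRY_RUN=0, so the sketch is readable without a running client.
DRY_RUN="${DRY_RUN:-1}"

run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

# List the backups available for this client (assumed subcommand)
run urbackupclientctl browse

# Start restoring file backup id 42 (hypothetical id from the listing);
# the built-in restore reconnects on link failure and transfers only
# differences against data already on disk.
run urbackupclientctl restore-start -b 42
```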