Eattry key 0 too large

Hey there,
I’m running into a problem where I get log errors and I don’t really understand why.
So here is the setup:
Server and client run on the same machine, because a Samba share has to be backed up.
The machine is a Raspberry Pi 4B running Raspbian, with the server in Docker (official image) and the client on Raspbian directly. The Samba share is mounted at /mnt/nas; /backups is a mounted external WD drive (1.5 TB of 12 TB used). The root filesystem is on a 64 GB SD card with 40+ GB available.
In every file backup of that one Samba share I get errors: always a single one, always on a different hash file, even in different folders. Sometimes on .ini files, .doc files, or .url files, whatever that is…
Errors look like this:

  • Eattry key 0 for “/backups/silentmary/211214-1908/.hashes/nas/memory/03-PRIVAT/SONSTIGES/Favoriten/xxx, (…).url” too large with size 1.52236 GB in “/backups/urbackup_tmp_files/66”
  • Eattry key 0 for “/backups/silentmary/211213-1905/.hashes/nas/memory/03-PRIVAT/SONSTIGES/Favoriten/xxx (…).url” too large with size 1.52236 GB in “/backups/urbackup_tmp_files/62”
  • Eattry key 0 for “/backups/silentmary/211212-1902/.hashes/nas/memory/01-BAUPLAN/xxx_ebk/_fvpNm._pa” too large with size 1.52236 GB in “/backups/urbackup_tmp_files/58”

The backups themselves look fine, but I’m afraid something will bite me later if I don’t solve this. The files are “real”, no symlinks, and they are only kilobytes in size, nothing special as far as I can see.
How can I investigate this further?

The error is thrown from FileMetadataDownloadThread::applyUnixMetadata here

There were issues in this area before, e.g.:

As a first step I’d check whether those files have any unusual extended attributes (getfattr -d FILE). Or is it a mounted samba/cifs share?
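If getfattr isn’t installed, the same check can be scripted with Python’s stdlib xattr functions. This is a minimal sketch under the assumption that the share is mounted at a path like /mnt/nas (the paths, the helper name, and the 4 KB threshold are all just illustrative); it reports any attribute whose value is suspiciously large:

```python
import os

# Walk a tree and report every extended attribute larger than `limit` bytes.
# A stdlib stand-in for `getfattr -d`; an oversized or garbage xattr here
# would be a candidate for the "too large" metadata entry.
def scan_xattrs(root, limit=4096):
    oversized = []
    for dirpath, _dirs, files in os.walk(root):
        for name in files:
            path = os.path.join(dirpath, name)
            try:
                for attr in os.listxattr(path, follow_symlinks=False):
                    size = len(os.getxattr(path, attr, follow_symlinks=False))
                    if size > limit:
                        oversized.append((path, attr, size))
            except OSError:
                pass  # unreadable file, or filesystem without xattr support
    return oversized

# Example: scan_xattrs("/mnt/nas") should normally return an empty list.
```

On most files this returns nothing at all; on a cifs mount you may see attributes like user.DOSATTRIB, which is where the client reads Unix/DOS metadata from.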

Another method is to look at /backups/urbackup_tmp_files/58 with a hex editor to see where the data starts going wrong, but that requires an understanding of the data structures in there.
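If no hex editor is at hand, a few lines of Python can dump the head of the temp file. The on-disk layout is UrBackup-internal, so this sketch (function name and 256-byte default are mine) only shows offsets, hex bytes, and printable ASCII so you can eyeball where the stream stops looking like plausible metadata:

```python
# Print a classic offset / hex / ASCII dump of the first `length` bytes,
# similar to `xxd FILE | head`.
def hexdump(path, length=256):
    with open(path, "rb") as f:
        data = f.read(length)
    lines = []
    for off in range(0, len(data), 16):
        chunk = data[off:off + 16]
        hexpart = " ".join(f"{b:02x}" for b in chunk)
        asciipart = "".join(chr(b) if 32 <= b < 127 else "." for b in chunk)
        lines.append(f"{off:08x}  {hexpart:<47}  {asciipart}")
    return "\n".join(lines)

# e.g. print(hexdump("/backups/urbackup_tmp_files/58"))
```

A length field decoded as roughly 1.5 GB would show up as an implausible 4- or 8-byte integer somewhere near where the dump stops making sense.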

Finally, you could attach strace to the client to see if there is any unusual return value from the list/get xattr syscalls.
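The resulting trace can get long, so here is a small sketch that filters it for implausible return values. It assumes the usual strace text format (syscall name, arguments in parentheses, then "= N"); the function name, the sample command in the comment, and the 64 KB threshold are illustrative:

```python
import re

# Capture the log first, e.g.:
#   strace -f -e trace=getxattr,lgetxattr,listxattr,llistxattr \
#       -p <client pid> -o xattr.log
# getxattr/listxattr return the attribute (list) size on success, so an
# enormous return value would match the bogus ~1.5 GB size in the error.
PAT = re.compile(r'(l?get|l?list)xattr\(.*\)\s*=\s*(-?\d+)')

def suspicious(lines, limit=64 * 1024):
    hits = []
    for line in lines:
        m = PAT.search(line)
        if m and int(m.group(2)) > limit:
            hits.append(line.strip())
    return hits

# e.g. print(suspicious(open("xattr.log")))
```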