SIGSEGV in TreeHash::allAdlerTo64byteHash - buffer overflow when adler_idx exceeds array bounds

Version: urbackup-server-2.5.37

GCC 15, glibc 2.42
OS: Fedora Linux 43 (Workstation Edition) x86_64
Host: SJRC-ADLN-6L
Kernel: Linux 6.19.12-200.fc43.x86_64
CPU: Intel(R) N100 (4) @ 3.40 GHz
Memory: 1.71 GiB / 31.11 GiB (5%)
Swap: 0 B / 8.00 GiB (0%)
Disk (/): 87.71 GiB / 349.83 GiB (25%) - btrfs
Disk (/mnt/urbackup): 3.19 TiB / 4.55 TiB (70%) - btrfs
Locale: es_ES.UTF-8

Description

The server crashes with SIGSEGV in TreeHash::allAdlerTo64byteHash while applying
a file patch. adler_idx reaches 8, but the n_adlers array holds only 8 elements
(valid indices 0-7), so the assignment writes one element past the end of the array.
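To illustrate the failure mode (names mirror the crashing line, but this is a minimal sketch, not the actual TreeHash layout): an 8-slot array has valid indices 0..7, so the adler_idx == 8 observed in gdb writes one element past the end. Bounds-checked access turns the silent stack corruption into a deterministic rejection:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>
#include <stdexcept>

// Illustration only (not the real TreeHash code). std::array::at() throws
// std::out_of_range for adler_idx >= 8, where operator[] would silently
// corrupt adjacent stack memory.
inline bool write_adler_slot(std::array<std::uint32_t, 8>& n_adlers,
                             std::size_t adler_idx, std::uint32_t value)
{
    try {
        n_adlers.at(adler_idx) = value;  // throws if adler_idx >= 8
        return true;
    } catch (const std::out_of_range&) {
        return false;  // the crashing write (adler_idx == 8) is rejected here
    }
}
```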

Backtrace

Core was generated by `/usr/bin/urbackupsrv run --config /etc/sysconfig/urbackup-server --no-consoletime'.
Program terminated with signal SIGSEGV, Segmentation fault.
#0 TreeHash::allAdlerTo64byteHash (h=0x7fb52fffdc20 "W\334\377/\265\177", size=0, hashed_size=13, byteout=0x7fb52fffdb40 "W\334\377/\265\177") at urbackupcommon/TreeHash.cpp:192

192 n_adlers[adler_idx] = urb_adler32_combine(n_adlers[adler_idx], input_adler[i], curr_len2);
[Current thread is 1 (Thread 0x7fb52ffff6c0 (LWP 1956587))]

(gdb) p adler_idx
$1 = 8

(gdb) x/9bx byteout
0x7fb52fffdb40: 0x57 0xdc 0xff 0x2f 0xb5 0x7f 0x00 0x00
0x7fb52fffdb48: 0x36
(gdb)

(gdb) bt
#0 TreeHash::allAdlerTo64byteHash (h=0x7fb52fffdc20 "W\334\377/\265\177", size=0, hashed_size=13, byteout=0x7fb52fffdb40 "W\334\377/\265\177") at urbackupcommon/TreeHash.cpp:192
#1 0x00000000006683b5 in TreeHash::addHashAllAdler (this=0x7fb52fffe650, h=0x7fb52fffdc20 "W\334\377/\265\177", size=0, hashed_size=13) at urbackupcommon/TreeHash.cpp:155
#2 0x00000000007029ab in BackupServerPrepareHash::addUnchangedHashes (this=0x7fb538068c20, start=0, size=13, is_sparse=0x0) at urbackupserver/server_prepare_hash.cpp:476
#3 0x0000000000702b95 in BackupServerPrepareHash::next_chunk_patcher_bytes (this=0x7fb538068c20, buf=0x0, bsize=13, changed=false, is_sparse=0x0)
at urbackupserver/server_prepare_hash.cpp:498
#4 0x000000000081e1a6 in ChunkPatcher::ApplyPatch (this=0x7fb538068c68, file=0x7fb524025020, patch=0x7fb5240262c0, extent_iterator=0x0) at urbackupserver/ChunkPatcher.cpp:360
#5 0x000000000070267b in BackupServerPrepareHash::hash_with_patch (this=0x7fb538068c20, f=0x7fb524025020, patch=0x7fb5240262c0, extent_iterator=0x0, hash_with_sparse=true)
at urbackupserver/server_prepare_hash.cpp:437
#6 0x0000000000700dec in BackupServerPrepareHash::operator() (this=0x7fb538068c20) at urbackupserver/server_prepare_hash.cpp:223
#7 0x000000000048a524 in CPoolThread::operator() (this=0x7fb5380695c0) at ThreadPool.cpp:73
#8 0x00000000004299ca in thread_helper_f (t=0x7fb5380695c0) at Server.cpp:1499
#9 0x00007fb5ba07f464 in start_thread () from /lib64/libc.so.6
#10 0x00007fb5ba1025ec in __clone3 () from /lib64/libc.so.6

(gdb) frame 2
#2 0x00000000007029ab in BackupServerPrepareHash::addUnchangedHashes (this=0x7fb538068c20, start=0, size=13, is_sparse=0x0) at urbackupserver/server_prepare_hash.cpp:476
476 reinterpret_cast<TreeHash*>(hashf)->addHashAllAdler(chunkhashes, r, size);
(gdb) print start
$3 = 0
(gdb) p size
$4 = 13
(gdb) p is_sparse
$5 = (bool *) 0x0

(gdb) frame 3
#3 0x0000000000702b95 in BackupServerPrepareHash::next_chunk_patcher_bytes (this=0x7fb538068c20, buf=0x0, bsize=13, changed=false, is_sparse=0x0)
at urbackupserver/server_prepare_hash.cpp:498
498 addUnchangedHashes(file_pos, bsize, is_sparse);
(gdb) p bsize
$6 = 13
(gdb) p changed
$7 = false
(gdb) p (TreeHash)(hashf)
$8 = { = { = {_vptr.IObject = 0xccbf68 <vtable for TreeHash+16>}, }, has_sparse = false, sparse_ctx = {sha = }, md5sum = {
md5 = {<CryptoPP::IteratedHashWithStaticTransform<unsigned int, CryptoPP::EnumToType<CryptoPP::ByteOrder, 0>, 64, 16, CryptoPP::Weak1::MD5, 0, false>> = {<CryptoPP::ClonableImpl<CryptoPP::Weak1::MD5, CryptoPP::AlgorithmImpl<CryptoPP::IteratedHash<unsigned int, CryptoPP::EnumToType<CryptoPP::ByteOrder, 0>, 64, CryptoPP::HashTransformation>, CryptoPP::Weak1::MD5> >> = {<CryptoPP::AlgorithmImpl<CryptoPP::IteratedHash<unsigned int, CryptoPP::EnumToType<CryptoPP::ByteOrder, 0>, 64, CryptoPP::HashTransformation>, CryptoPP::Weak1::MD5>> = {<CryptoPP::IteratedHash<unsigned int, CryptoPP::EnumToType<CryptoPP::ByteOrder, 0>, 64, CryptoPP::HashTransformation>> = {<CryptoPP::IteratedHashBase<unsigned int, CryptoPP::HashTransformation>> = {CryptoPP::HashTransformation = {}, m_countLo = 0, m_countHi = 0},
m_data = {<CryptoPP::SecBlock<unsigned int, CryptoPP::FixedSizeAllocatorWithCleanup<unsigned int, 16, CryptoPP::NullAllocator, false> >> …

Root cause

The call arrives with size=0 but hashed_size=13, an invalid combination. The
adler_idx derived from these values is never validated against the bounds of
n_adlers before the write at TreeHash.cpp:192, so the out-of-range index 8
writes past the array.
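A bounds guard before the write would at least contain the damage. Below is a minimal sketch of such a guard, assuming n_adlers is a fixed 8-entry array as the backtrace suggests; the loop context and the stand-in combine function are hypothetical, reconstructed only from the crashing line:

```cpp
#include <array>
#include <cstddef>
#include <cstdint>

// Trivial stand-in so the sketch compiles; the real urb_adler32_combine()
// merges two Adler-32 checksums over concatenated input. This is NOT the
// real algorithm.
static std::uint32_t urb_adler32_combine(std::uint32_t a, std::uint32_t b,
                                         std::int64_t len2)
{
    return a ^ b ^ static_cast<std::uint32_t>(len2);
}

// Sketch of a possible guard for TreeHash.cpp:192: validate adler_idx
// before the write, so an invalid state (e.g. size==0 with hashed_size==13)
// is rejected instead of corrupting the stack.
static bool combine_into_slot(std::array<std::uint32_t, 8>& n_adlers,
                              std::size_t adler_idx,
                              std::uint32_t input_adler,
                              std::int64_t curr_len2)
{
    if (adler_idx >= n_adlers.size())
        return false;  // out of range; caller should fail the chunk cleanly
    n_adlers[adler_idx] =
        urb_adler32_combine(n_adlers[adler_idx], input_adler, curr_len2);
    return true;
}
```

Returning an error (or asserting) here keeps the corruption from propagating; the real fix probably also needs to reject the size=0/hashed_size=13 combination earlier, in addHashAllAdler or its callers.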

Reproducing

The crash occurs during an incremental file backup when the client goes offline
mid-transfer (the server reports a BASE_DIR_LOST error). It may be triggered by
zero-size chunks or sparse files. Reproduced on an Intel N100 CPU.

Thanks!