CVE ID : CVE-2025-46560
Published : April 30, 2025, 1:15 a.m.
Description : vLLM is a high-throughput and memory-efficient inference and serving engine for LLMs. Versions starting from 0.8.0 and prior to 0.8.5 are affected by a critical performance vulnerability in the input preprocessing logic of the multimodal tokenizer. The code dynamically replaces placeholder tokens (e.g., <|audio_1|>, <|image_1|>) with repeated tokens based on precomputed lengths. Due to inefficient list concatenation operations, the algorithm exhibits quadratic time complexity (O(n²)), allowing malicious actors to trigger resource exhaustion via specially crafted inputs. This issue has been patched in version 0.8.5.
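The quadratic blow-up described above comes from rebuilding the token list on every placeholder replacement. The following is a minimal illustrative sketch, not vLLM's actual code (the function names, sentinel token id, and sizes are assumptions): expand_quadratic mimics the slice-and-concatenate pattern that copies the whole list per placeholder, while expand_linear shows the usual fix of appending expansions to a single output list.

```python
import time

def expand_quadratic(input_ids: list[int], placeholder: int, length: int) -> list[int]:
    """Expand each placeholder by rebuilding the whole list: O(n) copy per hit, O(n^2) total."""
    i = 0
    while i < len(input_ids):
        if input_ids[i] == placeholder:
            # Slice + concatenation allocates and copies the entire list on every replacement.
            input_ids = input_ids[:i] + [placeholder] * length + input_ids[i + 1:]
            i += length
        else:
            i += 1
    return input_ids

def expand_linear(input_ids: list[int], placeholder: int, length: int) -> list[int]:
    """Expand placeholders into one output list: amortized O(1) per emitted token, O(n) total."""
    out: list[int] = []
    for tok in input_ids:
        if tok == placeholder:
            out.extend([placeholder] * length)
        else:
            out.append(tok)
    return out

if __name__ == "__main__":
    PLACEHOLDER = -1  # hypothetical sentinel token id, for illustration only
    ids = [PLACEHOLDER] * 2_000  # an input packed with placeholders, as an attacker would craft
    t0 = time.perf_counter()
    expand_quadratic(ids, PLACEHOLDER, 50)
    t1 = time.perf_counter()
    expand_linear(ids, PLACEHOLDER, 50)
    t2 = time.perf_counter()
    print(f"quadratic: {t1 - t0:.3f}s  linear: {t2 - t1:.3f}s")
```

Run with a placeholder-heavy input, the quadratic variant takes seconds while the linear one finishes in milliseconds, which is the resource-exhaustion asymmetry the advisory describes.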
Severity: 6.5 | MEDIUM
Visit the link for more details, such as CVSS details, affected products, timeline, and more…