CVE Alert: CVE-2025-46570

Vulnerability Summary: CVE-2025-46570
vLLM is an inference and serving engine for large language models (LLMs). Prior to version 0.9.0, when a new prompt is processed and the PagedAttention mechanism finds a matching prefix chunk in its cache, the prefill phase speeds up, and this speedup is visible in the TTFT (Time to First Token). The timing difference between cached and uncached prefixes is large enough to be measured and exploited as a side channel, letting an attacker infer whether a given prefix is already in the shared cache. This issue has been patched in version 0.9.0.
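To make the side channel concrete, the sketch below measures TTFT for the same long prompt twice against an OpenAI-compatible vLLM server; a markedly faster second request suggests a prefix-cache hit. This is an illustration of the measurement technique, not the reported proof of concept: the endpoint URL, model name, and prompt length are placeholder assumptions.

```python
# Illustrative TTFT probe (placeholder endpoint and model): streams a
# completion and times how long the first token takes to arrive.
import time
import requests

BASE_URL = "http://localhost:8000/v1/completions"  # placeholder endpoint
PROMPT = "A" * 2048  # a long shared prefix makes the cache effect visible

def time_to_first_token(prompt: str) -> float:
    """Send a streaming completion request and return the seconds
    elapsed until the first streamed chunk (approximating TTFT)."""
    start = time.monotonic()
    with requests.post(
        BASE_URL,
        json={"model": "my-model", "prompt": prompt,
              "max_tokens": 1, "stream": True},
        stream=True,
        timeout=60,
    ) as resp:
        resp.raise_for_status()
        for line in resp.iter_lines():
            if line:  # first non-empty SSE line ~= first token
                return time.monotonic() - start
    return float("inf")

cold = time_to_first_token(PROMPT)  # prefix not yet cached
warm = time_to_first_token(PROMPT)  # prefix likely cached now
print(f"cold TTFT: {cold:.4f}s, warm TTFT: {warm:.4f}s")
```

A warm TTFT consistently well below the cold one is the signal the advisory describes: the prefill work for the matching prefix was skipped.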
Affected Endpoints:
No affected endpoints listed.
Published Date:
5/29/2025, 5:15:21 PM
CVSS Score:
No score listed.
Exploit Status:
Not Exploited
References:
- https://github.com/vllm-project/vllm/commit/77073c77bc2006eb80ea6d5128f076f5e6c6f54f
- https://github.com/vllm-project/vllm/pull/17045
- https://github.com/vllm-project/vllm/security/advisories/GHSA-4qjh-9fv9-r85r
Recommended Action:
Upgrade vLLM to version 0.9.0 or later, which contains the fix (see the patch commit and GitHub advisory referenced above).
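If an immediate upgrade is not feasible, disabling prefix caching removes the shared-cache timing signal at the cost of prefill performance. A minimal sketch using vLLM's offline API, assuming the enable_prefix_caching engine argument; the model name is a placeholder:

```python
# Sketch: run vLLM with prefix caching disabled as an interim
# mitigation (assumed engine argument; model name is a placeholder).
# Without cross-request prefix reuse, TTFT no longer reveals whether
# another request shared a prefix.
from vllm import LLM, SamplingParams

llm = LLM(
    model="facebook/opt-125m",    # placeholder model
    enable_prefix_caching=False,  # no cross-request prefix reuse
)
params = SamplingParams(max_tokens=16)
print(llm.generate(["Hello, world"], params)[0].outputs[0].text)
```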