CVE Alert: CVE-2025-48944

Vulnerability Summary: CVE-2025-48944
vLLM is an inference and serving engine for large language models (LLMs). In versions 0.8.0 up to but excluding 0.9.0, the vLLM backend behind the /v1/chat/completions OpenAPI endpoint fails to validate unexpected or malformed input in the "pattern" and "type" fields when the tools functionality is invoked. Because these fields are compiled and parsed without prior validation, a single malicious or malformed request can crash the inference worker, and the worker stays down until it is restarted. Version 0.9.0 fixes the issue.
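The root cause is a missing fail-fast check: the "pattern" value reaches a regex compiler and the "type" value reaches a schema parser without any validation, so one bad field takes down the whole worker. As an illustration only (this is not the actual fix from the referenced pull request; validate_param_schema and ALLOWED_TYPES are hypothetical names), a server-side guard of this kind would reject such input with a client error before anything is compiled:

```python
import re

# Hypothetical allow-list of JSON Schema primitive types.
ALLOWED_TYPES = {"string", "number", "integer", "boolean", "object", "array", "null"}

def validate_param_schema(schema: dict) -> None:
    """Reject malformed 'type'/'pattern' fields with a ValueError
    (mappable to HTTP 400) instead of letting them crash the worker."""
    ptype = schema.get("type")
    if ptype is not None and ptype not in ALLOWED_TYPES:
        raise ValueError(f"unsupported JSON Schema type: {ptype!r}")

    pattern = schema.get("pattern")
    if pattern is not None:
        if not isinstance(pattern, str):
            raise ValueError("'pattern' must be a string")
        try:
            re.compile(pattern)  # fail fast on an invalid regex
        except re.error as exc:
            raise ValueError(f"invalid 'pattern' regex: {exc}") from exc
```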
Affected Endpoints:
- /v1/chat/completions (when the tools functionality is invoked)
Published Date:
5/30/2025, 7:15:30 PM
⚠️ CVSS Score:
Not provided.
Exploit Status:
Not Exploited
References:
- https://github.com/vllm-project/vllm/pull/17623
- https://github.com/vllm-project/vllm/security/advisories/GHSA-vrq3-r879-7m65
Recommended Action:
Upgrade to vLLM 0.9.0 or later, which fixes this issue; see the GitHub advisory and pull request referenced above for details.
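To check exposure quickly, the snippet below compares the installed vLLM version against the affected range from this advisory. It is a minimal sketch that assumes the third-party packaging library is available:

```python
from importlib.metadata import version, PackageNotFoundError
from packaging.version import Version  # assumes 'packaging' is installed

try:
    installed = Version(version("vllm"))
except PackageNotFoundError:
    print("vLLM is not installed in this environment.")
else:
    # Affected range per the advisory: >=0.8.0 and <0.9.0.
    if Version("0.8.0") <= installed < Version("0.9.0"):
        print(f"vLLM {installed} is affected by CVE-2025-48944; upgrade to 0.9.0 or later.")
    else:
        print(f"vLLM {installed} is outside the affected range.")
```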