Denial of Service Vulnerability in vLLM Engine for Large Language Models
CVE-2025-62426

6.5 MEDIUM

Key Information:

Vendor:
vLLM
CVE Published:
21 November 2025

What is CVE-2025-62426?

The vulnerability in vLLM, an inference and serving engine for large language models, arises from API endpoints that accept insufficiently validated request parameters. Specifically, the /v1/chat/completions and /tokenize endpoints can be manipulated through the chat_template_kwargs request parameter. A crafted value can cause prolonged server-side processing, blocking the API server from handling other requests and resulting in a denial of service. The issue is patched in version 0.11.1, which validates chat_template_kwargs before processing.
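As a defense-in-depth measure on unpatched deployments, a gateway or middleware in front of the API server can reject requests whose chat_template_kwargs contain unexpected keys or oversized values. The sketch below is illustrative only: the whitelist key, size limit, and function name are assumptions, not vLLM's actual patched logic.

```python
# Hypothetical pre-validation for incoming request payloads.
# ALLOWED_KWARG_KEYS and MAX_VALUE_LENGTH are example policy choices,
# not values taken from the vLLM 0.11.1 patch.
ALLOWED_KWARG_KEYS = {"enable_thinking"}  # assumed whitelist for illustration
MAX_VALUE_LENGTH = 256


def validate_chat_template_kwargs(payload: dict) -> None:
    """Reject payloads whose chat_template_kwargs could stall template rendering."""
    kwargs = payload.get("chat_template_kwargs")
    if kwargs is None:
        return  # parameter absent: nothing to check
    if not isinstance(kwargs, dict):
        raise ValueError("chat_template_kwargs must be a JSON object")
    for key, value in kwargs.items():
        if key not in ALLOWED_KWARG_KEYS:
            raise ValueError(f"unexpected chat_template_kwargs key: {key!r}")
        if isinstance(value, str) and len(value) > MAX_VALUE_LENGTH:
            raise ValueError(f"value for {key!r} exceeds {MAX_VALUE_LENGTH} chars")
```

A request that passes this check is then forwarded to the inference engine; anything else is rejected before it can tie up the server. Upgrading to vLLM 0.11.1 or later remains the proper fix.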

Affected Version(s)

vllm >= 0.5.5, < 0.11.1

CVSS V3.1

Score:
6.5
Severity:
MEDIUM
Confidentiality:
None
Integrity:
None
Availability:
High
Attack Vector:
Network
Attack Complexity:
Low
Privileges Required:
Low
User Interaction:
None
Scope:
Unchanged

Timeline

  • Vulnerability published: 21 November 2025

  • Vulnerability reserved
