Remote Code Execution Vulnerability in vLLM Inference Engine from vllm Project
CVE-2026-27893

8.8 HIGH

Key Information:

  • Vendor: vllm
  • CVE Published: 26 March 2026

What is CVE-2026-27893?

The vLLM inference and serving engine for large language models has a security flaw affecting versions 0.10.1 through 0.17.9. Two model implementation files hardcode trust_remote_code=True, overriding the user's security setting (--trust-remote-code=False). As a result, a malicious model repository can achieve remote code execution even when the user has explicitly opted out of trusting remote code. Users are advised to upgrade to version 0.18.0, which addresses this vulnerability.
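The flawed pattern can be sketched in a few lines of illustrative Python. The function and field names below are assumptions for illustration, not the actual vLLM internals: a model loader passes a hardcoded trust_remote_code=True keyword downstream, silently discarding the user's --trust-remote-code=False choice.

```python
# Illustrative sketch only -- names are assumptions, not vLLM code.

def load_config_vulnerable(model_id: str, trust_remote_code: bool) -> dict:
    # Vulnerable pattern: a hardcoded True wins, so code from the
    # model repository is always trusted, whatever the user asked for.
    return {"model": model_id, "trust_remote_code": True}

def load_config_fixed(model_id: str, trust_remote_code: bool) -> dict:
    # Fixed pattern: propagate the user's setting unchanged.
    return {"model": model_id, "trust_remote_code": trust_remote_code}

# Even with --trust-remote-code=False, the vulnerable loader
# trusts remote code anyway:
print(load_config_vulnerable("some/model", False)["trust_remote_code"])  # True
print(load_config_fixed("some/model", False)["trust_remote_code"])       # False
```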

Affected Version(s)

vllm >= 0.10.1, < 0.18.0
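To check whether an installed vLLM version falls inside the affected range, a plain tuple comparison suffices. This is a minimal sketch that assumes simple dotted numeric versions (no pre-release or dev suffixes):

```python
def is_affected(version: str) -> bool:
    # Affected range from the advisory: >= 0.10.1, < 0.18.0
    parts = tuple(int(p) for p in version.split("."))
    return (0, 10, 1) <= parts < (0, 18, 0)

print(is_affected("0.17.9"))  # True  (last affected release)
print(is_affected("0.18.0"))  # False (patched)
print(is_affected("0.10.0"))  # False (before the affected range)
```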


CVSS V3.1

Score:
8.8
Severity:
HIGH
Confidentiality:
High
Integrity:
High
Availability:
High
Attack Vector:
Network
Attack Complexity:
Low
Privileges Required:
None
User Interaction:
Required
Scope:
Unchanged
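The metrics above correspond to the CVSS v3.1 vector AV:N/AC:L/PR:N/UI:R/S:U/C:H/I:H/A:H. Plugging the weights from the CVSS v3.1 specification into the base-score equations reproduces the listed 8.8 score:

```python
import math

def roundup(x: float) -> float:
    # CVSS v3.1 Roundup: smallest number with one decimal place >= x.
    i = int(round(x * 100000))
    return i / 100000.0 if i % 10000 == 0 else (math.floor(i / 10000) + 1) / 10.0

# Weights from the CVSS v3.1 specification for this vector:
AV, AC, PR, UI = 0.85, 0.77, 0.85, 0.62   # Network / Low / None / Required
C = I = A = 0.56                           # Confidentiality/Integrity/Availability: High

iss = 1 - (1 - C) * (1 - I) * (1 - A)
impact = 6.42 * iss                        # Scope: Unchanged
exploitability = 8.22 * AV * AC * PR * UI
base = 0.0 if impact <= 0 else roundup(min(impact + exploitability, 10))
print(base)  # 8.8
```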

Timeline

  • Vulnerability published (26 March 2026)

  • Vulnerability reserved