Changelog

vllm

A high-throughput and memory-efficient inference and serving engine for LLMs

vllm-project/vllm
75k stars · 15k forks · Python · Apache-2.0

Tags: amd, blackwell, cuda, deepseek, deepseek-v3, gpt, +14 more

Last updated about 6 hours ago