
Changelog

vllm

A high-throughput and memory-efficient inference and serving engine for LLMs

vllm-project/vllm
70k stars · 13k forks · Python · Apache-2.0
Website
Tags: amd, blackwell, cuda, deepseek, deepseek-v3, gpt, +14 more

Last updated about 2 months ago