vllm-project/vllm: A high-throughput and memory-efficient inference and serving engine for LLMs

Apr 17, 2024 | news
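
vLLM ships a Python API for offline batched inference in addition to its serving front end. Below is a minimal sketch of that offline API; the model name, prompts, and sampling values are illustrative placeholders, not taken from this post.

```python
# Minimal vLLM offline-inference sketch; model and sampling values are illustrative.
from vllm import LLM, SamplingParams

prompts = ["The capital of France is", "vLLM is"]
sampling_params = SamplingParams(temperature=0.8, top_p=0.95, max_tokens=64)

# Load the model once; vLLM batches requests and manages KV-cache memory
# to deliver high throughput during generation.
llm = LLM(model="facebook/opt-125m")

outputs = llm.generate(prompts, sampling_params)
for output in outputs:
    print(output.prompt, "->", output.outputs[0].text)
```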