[Bug] running requests low #4022

Open
luhairong11 opened this issue Mar 3, 2025 · 0 comments
Labels: deepseek, help wanted

luhairong11 commented Mar 3, 2025

Checklist

  • 1. I have searched related issues but cannot get the expected help.
  • 2. The bug has not been fixed in the latest version.
  • 3. Please note that if the bug-related issue you submitted lacks corresponding environment info and a minimal reproducible demo, it will be challenging for us to reproduce and resolve the issue, reducing the likelihood of receiving feedback.
  • 4. If the issue you raised is not a bug but a question, please raise a discussion at https://github.com/sgl-project/sglang/discussions/new/choose. Otherwise, it will be closed.
  • 5. Please use English, otherwise it will be closed.

Describe the bug

When running the evaluation script with 100 concurrent requests, sglang's "running" count stays relatively low and a large number of requests sit in the "queue", whereas the vllm service reaches a "running" count of 100.

Reproduction

evaluation script

evalscope perf \
    --url "http://192.168.8.**:31008/v1/chat/completions" \
    --parallel 100 \
    --model DeepSeek-R1 \
    --number 500 \
    --api openai \
    --dataset openqa \
    --stream
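
For anyone without evalscope installed, the sketch below is a minimal load generator with roughly the same shape as the command above (100 concurrent streamed chat completions, 500 requests in total). It is only a sketch: it assumes the standard openai Python client (1.x) is available, the prompt text is a placeholder, and the masked host must be replaced with the real address.

# Minimal load generator roughly equivalent to the evalscope command above.
# Assumptions: the `openai` Python package (1.x) is installed and the server
# accepts the served model name "DeepSeek-R1"; replace the masked host first.
import asyncio
from openai import AsyncOpenAI

BASE_URL = "http://192.168.8.**:31008/v1"  # masked host kept as in the report
PARALLEL = 100                             # mirrors --parallel 100
TOTAL = 500                                # mirrors --number 500

client = AsyncOpenAI(base_url=BASE_URL, api_key="EMPTY")
sem = asyncio.Semaphore(PARALLEL)

async def one_request(i: int) -> None:
    async with sem:
        stream = await client.chat.completions.create(
            model="DeepSeek-R1",
            messages=[{"role": "user", "content": f"Open question #{i}: explain how KV cache paging works."}],
            stream=True,                   # mirrors --stream
        )
        async for _chunk in stream:        # drain the streamed tokens
            pass

async def main() -> None:
    await asyncio.gather(*(one_request(i) for i in range(TOTAL)))

if __name__ == "__main__":
    asyncio.run(main())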

sglang

service start command:

CUDA_VISIBLE_DEVICES=6 python3 -m sglang.launch_server --model-path /data/chdmx/models/LLM_models/deepseek_ai/DeepSeek-R1-Distill-Qwen-32B --tp 1 --mem-fraction-static 0.9 --context-length 20000 --trust-remote-code --host 0.0.0.0 --port 31008

result:

[Screenshot: sglang benchmark output showing a low "running" count and a large "queue"]
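
One way to check which limits the launched sglang server actually applied (for example, a cap on concurrently running requests, or the KV-cache budget implied by --mem-fraction-static and --context-length) is to ask the server itself. This is only a hedged sketch: the /get_server_info route and the field names are assumptions and may differ between sglang versions.

# Hedged sketch: dump the scheduler-related settings of the running sglang server.
# Assumption: this sglang build exposes a /get_server_info endpoint that returns
# the server arguments as JSON; the route and field names may vary by version.
import requests

info = requests.get("http://127.0.0.1:31008/get_server_info", timeout=10).json()

# Print any setting that could cap the number of simultaneously running requests.
for key in sorted(info):
    if any(token in key for token in ("running", "mem", "token", "chunk", "schedule")):
        print(f"{key}: {info[key]}")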

vllm

service start command:

CUDA_VISIBLE_DEVICES=6 vllm serve /data/chdmx/models/LLM_models/deepseek_ai/DeepSeek-R1-Distill-Qwen-32B --tensor-parallel-size 1 --gpu-memory-utilization 0.9 --max-model-len 20000 --trust-remote-code --served-model-name DeepSeek-R1 --port 31008    

result:

[Screenshot: vllm benchmark output showing the "running" count reaching 100]
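
For the vllm side, the running/waiting numbers can also be sampled directly from its Prometheus endpoint while the benchmark runs, which makes the comparison with the screenshots above easier to reproduce. A hedged sketch: the metric names vllm:num_requests_running and vllm:num_requests_waiting are taken from recent vllm releases and may differ in other builds.

# Hedged sketch: sample vllm's Prometheus metrics during the benchmark.
# Assumption: vllm 0.7.x exposes /metrics with vllm:num_requests_running and
# vllm:num_requests_waiting; adjust the names if your build reports differently.
import time
import requests

METRICS_URL = "http://127.0.0.1:31008/metrics"

for _ in range(30):  # sample for about a minute while the load test is running
    lines = requests.get(METRICS_URL, timeout=5).text.splitlines()
    counts = [line for line in lines
              if line.startswith(("vllm:num_requests_running",
                                  "vllm:num_requests_waiting"))]
    print(time.strftime("%H:%M:%S"), *counts)
    time.sleep(2)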

Environment

root@bms-schyjdmx06:/sgl-workspace# python3 -m sglang.check_env
INFO 03-03 01:33:01 __init__.py:190] Automatically detected platform cuda.
/usr/local/lib/python3.10/dist-packages/pydantic/_internal/_config.py:345: UserWarning: Valid config keys have changed in V2:
* 'fields' has been removed
  warnings.warn(message, UserWarning)
Python: 3.10.16 (main, Dec 4 2024, 08:53:37) [GCC 9.4.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA A100 80GB PCIe
GPU 0,1,2,3,4,5,6,7 Compute Capability: 8.0
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.4, V12.4.131
CUDA Driver Version: 550.54.15
PyTorch: 2.5.1+cu124
sglang: 0.4.2.post4
sgl_kernel: 0.0.3.post4
flashinfer: 0.2.0.post2+cu124torch2.5
triton: 3.1.0
transformers: 4.48.3
torchao: 0.8.0
numpy: 1.26.4
aiohttp: 3.11.11
fastapi: 0.115.6
hf_transfer: 0.1.9
huggingface_hub: 0.27.1
interegular: 0.3.3
modelscope: 1.22.3
orjson: 3.10.15
packaging: 24.2
psutil: 6.1.1
pydantic: 2.10.5
multipart: 0.0.20
zmq: 26.2.0
uvicorn: 0.34.0
uvloop: 0.21.0
vllm: 0.7.2
openai: 1.59.8
anthropic: 0.43.1
decord: 0.6.0
NVIDIA Topology:
      GPU0  GPU1  GPU2  GPU3  GPU4  GPU5  GPU6  GPU7  NIC0  NIC1  CPU Affinity  NUMA Affinity  GPU NUMA ID
GPU0   X    PIX   PIX   PIX   SYS   SYS   SYS   SYS   SYS   SYS   0-27,56-83    0              N/A
GPU1  PIX    X    PIX   PIX   SYS   SYS   SYS   SYS   SYS   SYS   0-27,56-83    0              N/A
GPU2  PIX   PIX    X    PIX   SYS   SYS   SYS   SYS   SYS   SYS   0-27,56-83    0              N/A
GPU3  PIX   PIX   PIX    X    SYS   SYS   SYS   SYS   SYS   SYS   0-27,56-83    0              N/A
GPU4  SYS   SYS   SYS   SYS    X    PIX   PIX   PIX   SYS   SYS   28-55,84-111  1              N/A
GPU5  SYS   SYS   SYS   SYS   PIX    X    PIX   PIX   SYS   SYS   28-55,84-111  1              N/A
GPU6  SYS   SYS   SYS   SYS   PIX   PIX    X    PIX   SYS   SYS   28-55,84-111  1              N/A
GPU7  SYS   SYS   SYS   SYS   PIX   PIX   PIX    X    SYS   SYS   28-55,84-111  1              N/A
NIC0  SYS   SYS   SYS   SYS   SYS   SYS   SYS   SYS    X    SYS
NIC1  SYS   SYS   SYS   SYS   SYS   SYS   SYS   SYS   SYS    X

Legend:

X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks

NIC Legend:

NIC0: mlx5_0
NIC1: mlx5_1

ulimit soft: 1048576

minleminzui self-assigned this on Mar 3, 2025
minleminzui added the deepseek and help wanted (Extra attention is needed) labels on Mar 3, 2025