add doc
qinxuye committed Nov 28, 2024
1 parent 9aa1ed7 commit b2d6a5b
Showing 3 changed files with 97 additions and 0 deletions.
95 changes: 95 additions & 0 deletions doc/source/models/builtin/llm/qwq-32b-preview.rst
@@ -0,0 +1,95 @@
.. _models_llm_qwq-32b-preview:

========================================
QwQ-32B-Preview
========================================

- **Context Length:** 32768
- **Model Name:** QwQ-32B-Preview
- **Languages:** en, zh
- **Abilities:** chat
- **Description:** QwQ-32B-Preview is an experimental research model developed by the Qwen Team, focused on advancing AI reasoning capabilities.

Specifications
^^^^^^^^^^^^^^


Model Spec 1 (pytorch, 32 Billion)
++++++++++++++++++++++++++++++++++++++++

- **Model Format:** pytorch
- **Model Size (in billions):** 32
- **Quantizations:** 4-bit, 8-bit, none
- **Engines**: Transformers
- **Model ID:** Qwen/QwQ-32B-Preview
- **Model Hubs**: `Hugging Face <https://huggingface.co/Qwen/QwQ-32B-Preview>`__, `ModelScope <https://modelscope.cn/models/Qwen/QwQ-32B-Preview>`__

Execute the following command to launch the model. Remember to replace ``${engine}`` with an engine listed above
and ``${quantization}`` with a quantization from the options listed above::

   xinference launch --model-engine ${engine} --model-name QwQ-32B-Preview --size-in-billions 32 --model-format pytorch --quantization ${quantization}
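
The same launch can also be driven from Python through Xinference's client API. The sketch below is illustrative
only: it assumes a local server on the default endpoint, the Transformers engine, and the ``4-bit`` quantization,
and client signatures vary across Xinference versions, so verify the calls against your installed release::

   from xinference.client import Client

   # Assumption: a local Xinference server at the default endpoint.
   client = Client("http://localhost:9997")

   # Launch the pytorch spec described above; returns the model UID.
   model_uid = client.launch_model(
       model_name="QwQ-32B-Preview",
       model_engine="transformers",
       model_format="pytorch",
       size_in_billions=32,
       quantization="4-bit",
   )

   # Chat with the launched model using an OpenAI-style message list.
   model = client.get_model(model_uid)
   response = model.chat(
       messages=[{"role": "user", "content": "How many r's are in 'strawberry'?"}]
   )
   print(response["choices"][0]["message"]["content"])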


Model Spec 2 (ggufv2, 32 Billion)
++++++++++++++++++++++++++++++++++++++++

- **Model Format:** ggufv2
- **Model Size (in billions):** 32
- **Quantizations:** Q3_K_L, Q4_K_M, Q6_K, Q8_0
- **Engines**: llama.cpp
- **Model ID:** lmstudio-community/QwQ-32B-Preview-GGUF
- **Model Hubs**: `Hugging Face <https://huggingface.co/lmstudio-community/QwQ-32B-Preview-GGUF>`__, `ModelScope <https://modelscope.cn/models/AI-ModelScope/QwQ-32B-Preview-GGUF>`__

Execute the following command to launch the model. Remember to replace ``${engine}`` with an engine listed above
and ``${quantization}`` with a quantization from the options listed above::

   xinference launch --model-engine ${engine} --model-name QwQ-32B-Preview --size-in-billions 32 --model-format ggufv2 --quantization ${quantization}


Model Spec 3 (mlx, 32 Billion)
++++++++++++++++++++++++++++++++++++++++

- **Model Format:** mlx
- **Model Size (in billions):** 32
- **Quantizations:** 4-bit
- **Engines**: MLX
- **Model ID:** mlx-community/Qwen_QwQ-32B-Preview_MLX-4bit
- **Model Hubs**: `Hugging Face <https://huggingface.co/mlx-community/Qwen_QwQ-32B-Preview_MLX-4bit>`__

Execute the following command to launch the model. Remember to replace ``${engine}`` with an engine listed above
and ``${quantization}`` with a quantization from the options listed above::

   xinference launch --model-engine ${engine} --model-name QwQ-32B-Preview --size-in-billions 32 --model-format mlx --quantization ${quantization}


Model Spec 4 (mlx, 32 Billion)
++++++++++++++++++++++++++++++++++++++++

- **Model Format:** mlx
- **Model Size (in billions):** 32
- **Quantizations:** 8-bit
- **Engines**: MLX
- **Model ID:** mlx-community/Qwen_QwQ-32B-Preview_MLX-8bit
- **Model Hubs**: `Hugging Face <https://huggingface.co/mlx-community/Qwen_QwQ-32B-Preview_MLX-8bit>`__

Execute the following command to launch the model. Remember to replace ``${engine}`` with an engine listed above
and ``${quantization}`` with a quantization from the options listed above::

   xinference launch --model-engine ${engine} --model-name QwQ-32B-Preview --size-in-billions 32 --model-format mlx --quantization ${quantization}


Model Spec 5 (mlx, 32 Billion)
++++++++++++++++++++++++++++++++++++++++

- **Model Format:** mlx
- **Model Size (in billions):** 32
- **Quantizations:** none
- **Engines**: MLX
- **Model ID:** mlx-community/QwQ-32B-Preview-bf16
- **Model Hubs**: `Hugging Face <https://huggingface.co/mlx-community/QwQ-32B-Preview-bf16>`__

Execute the following command to launch the model. Remember to replace ``${engine}`` with an engine listed above
and ``${quantization}`` with a quantization from the options listed above::

   xinference launch --model-engine ${engine} --model-name QwQ-32B-Preview --size-in-billions 32 --model-format mlx --quantization ${quantization}
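
Once any of the specs above is running, the model can also be queried through Xinference's OpenAI-compatible REST
API. A minimal sketch with the official ``openai`` Python package, assuming the default local endpoint and that the
model's UID equals the model name (both assumptions to check against your deployment)::

   from openai import OpenAI

   # Assumption: Xinference serving OpenAI-compatible routes at /v1 locally;
   # the api_key is unused by a default local deployment.
   client = OpenAI(base_url="http://localhost:9997/v1", api_key="not-needed")

   completion = client.chat.completions.create(
       model="QwQ-32B-Preview",  # assumed model UID; use the UID returned at launch
       messages=[{"role": "user", "content": "Briefly explain step-by-step reasoning."}],
   )
   print(completion.choices[0].message.content)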

1 change: 1 addition & 0 deletions xinference/model/llm/sglang/core.py
@@ -89,6 +89,7 @@ class SGLANGGenerateConfig(TypedDict, total=False):
"deepseek-v2-chat-0628",
"qwen2.5-instruct",
"qwen2.5-coder-instruct",
"QwQ-32B-Preview",
]


1 change: 1 addition & 0 deletions xinference/model/llm/vllm/core.py
@@ -144,6 +144,7 @@ class VLLMGenerateConfig(TypedDict, total=False):
VLLM_SUPPORTED_CHAT_MODELS.append("qwen2.5-instruct")
VLLM_SUPPORTED_MODELS.append("qwen2.5-coder")
VLLM_SUPPORTED_CHAT_MODELS.append("qwen2.5-coder-instruct")
VLLM_SUPPORTED_CHAT_MODELS.append("QwQ-32B-Preview")


if VLLM_INSTALLED and vllm.__version__ >= "0.3.2":
