
support multi-modal llamapro #2738

Merged: 1 commit into modelscope:main, Dec 23, 2024

Conversation

tastelikefeet (Collaborator)

PR type

  • Bug Fix
  • New Feature
  • Document Updates
  • More Models or Datasets Support

PR information

Support LLaMA Pro (llamapro) tuning for multi-modal models.
Tested OK with qwen-vl, qwen2-vl, internvl, and llama-vision.
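
For context, LLaMA Pro tuning expands the base model with newly inserted transformer blocks and trains only those blocks. A minimal sketch of how this might be invoked for one of the tested multi-modal models is shown below; it assumes the ms-swift 3.x Python entry points (`sft_main`, `TrainArguments`) and that `train_type='llamapro'` is accepted for multi-modal models after this PR. The dataset name is a placeholder, and argument names should be checked against the installed version (e.g. via `swift sft --help`).

```python
# Minimal sketch: LLaMA Pro tuning of a multi-modal model via ms-swift.
# Assumptions: 3.x entry points `sft_main`/`TrainArguments`, and that
# `train_type='llamapro'` is supported for multi-modal models after this PR.
from swift.llm import TrainArguments, sft_main

args = TrainArguments(
    model='Qwen/Qwen2-VL-7B-Instruct',      # one of the model families tested in this PR
    train_type='llamapro',                  # train only the newly inserted blocks
    dataset=['<your-multimodal-dataset>'],  # placeholder: any image-text SFT dataset
    output_dir='output',
)
sft_main(args)
```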

Experiment results

Paste your experiment result here (if needed).

@tastelikefeet tastelikefeet merged commit f17ca92 into modelscope:main Dec 23, 2024
1 of 2 checks passed
tastelikefeet added a commit to tastelikefeet/swift that referenced this pull request Dec 26, 2024
…ui-1226

* commit '6542c5455424f25e1d443682d78b1f1d51201001':
  fix alpaca (modelscope#2771)
  support modern_bert & support bert deploy (modelscope#2767)
  fix app-ui (modelscope#2765)
  fix shell (modelscope#2764)
  fix bugs (modelscope#2761)
  fix web-ui (modelscope#2758)
  support SequenceClassification & update QVQ-72B-Preview (modelscope#2747)
  fix docs multimodal; fix pretrain mllm (modelscope#2742)
  Fix windows encoding gbk (modelscope#2741)
  support AI-ModelScope/Skywork-o1-Open-Llama-3.1-8B (modelscope#2739)
  support mm llamapro (modelscope#2738)
  fix windows (modelscope#2733)
  support paligemma2 (modelscope#2735)

# Conflicts:
#	swift/ui/llm_infer/llm_infer.py