Model Gallery

17 models from 1 repository

qwen3-vl-embedding-8b
**Model Name:** Qwen3-VL-Embedding-8B
**Base Model:** Qwen/Qwen3-VL-8B-Instruct

**Description:** The **Qwen3-VL-Embedding** and **Qwen3-VL-Reranker** model series are the latest additions to the Qwen family, built upon the recently open-sourced Qwen3-VL foundation model. Designed for multimodal information retrieval and cross-modal understanding, the suite accepts diverse inputs including text, images, screenshots, and videos, as well as mixtures of these modalities.

**Key Features:**
- Model Type: Multimodal Embedding
- Supported Languages: 30+
- Supported Input Modalities: Text, images, screenshots, videos, and arbitrary multimodal combinations (e.g., text + image, text + video)
- Number of Parameters: 8B
- Context Length: 32K
- Embedding Dimension: Up to 4096, with user-defined output dimensions from 64 to 4096

**Downloads:** [GGUF files](https://huggingface.co/Qwen/Qwen3-VL-Embedding-8B) (e.g., `Qwen3-VL-Embedding-8B-Q8_0.gguf`)

**Usage:** Requires `transformers`, `qwen-vl-utils`, and `torch`. Example: `from scripts.qwen3_vl_embedding import Qwen3VLEmbedder; model = Qwen3VLEmbedder(...)`

**Citation:** @article{qwen3vlembedding, ...}
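The card above lists user-selectable output dimensions (64 to 4096) but does not say how they are produced. A common mechanism in such embedders is Matryoshka-style truncation followed by re-normalization; the sketch below illustrates that approach with NumPy. The random vector stands in for a real model embedding, and whether Qwen3-VL-Embedding uses exactly this scheme is an assumption.

```python
import numpy as np

def truncate_embedding(vec: np.ndarray, dim: int) -> np.ndarray:
    """Reduce a full-size embedding to a user-chosen dimension.

    Truncate-then-renormalize is the usual Matryoshka-style scheme;
    that Qwen3-VL-Embedding does exactly this is an assumption here.
    """
    if not 64 <= dim <= vec.shape[-1]:
        raise ValueError(f"dim must be in [64, {vec.shape[-1]}]")
    out = vec[..., :dim]
    norm = np.linalg.norm(out, axis=-1, keepdims=True)
    return out / norm

# Stand-in for a full 4096-d embedding from the 8B model.
full = np.random.default_rng(0).normal(size=4096)
small = truncate_embedding(full, 256)
print(small.shape, round(float(np.linalg.norm(small)), 3))  # (256,) 1.0
```

Cosine similarities computed on the truncated vectors approximate those of the full vectors, trading a little retrieval quality for smaller indexes.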

Repository: localai · License: apache-2.0

qwen3-vl-embedding-2b
**Model Name:** Qwen3-VL-Embedding-2B
**Base Model:** Qwen/Qwen3-VL-2B-Instruct

**Description:** The **Qwen3-VL-Embedding** and **Qwen3-VL-Reranker** model series are the latest additions to the Qwen family, built upon the recently open-sourced Qwen3-VL foundation model. Designed for multimodal information retrieval and cross-modal understanding, the suite accepts diverse inputs including text, images, screenshots, and videos, as well as mixtures of these modalities.

**Key Features:**
- Model Type: Multimodal Embedding
- Supported Languages: 30+
- Supported Input Modalities: Text, images, screenshots, videos, and arbitrary multimodal combinations (e.g., text + image, text + video)
- Number of Parameters: 2B
- Context Length: 32K
- Embedding Dimension: Up to 2048, with user-defined output dimensions from 64 to 2048

**Downloads:** [GGUF files](https://huggingface.co/Qwen/Qwen3-VL-Embedding-2B) (e.g., `Qwen3-VL-Embedding-2B-Q8_0.gguf`)

**Usage:** Requires `transformers`, `qwen-vl-utils`, and `torch`. Example: `from scripts.qwen3_vl_embedding import Qwen3VLEmbedder; model = Qwen3VLEmbedder(...)`

**Citation:** @article{qwen3vlembedding, ...}

Repository: localai · License: apache-2.0

qwen3-vl-reranker-8b
**Model Name:** Qwen3-VL-Reranker-8B
**Base Model:** Qwen/Qwen3-VL-Reranker-8B

**Description:** A high-performance multimodal reranking model for state-of-the-art cross-modal search. It supports 30+ languages and handles text, images, screenshots, videos, and mixed modalities. With 8B parameters and a 32K context length, it refines retrieval results by combining embedding vectors with precise relevance scores. Optimized for efficiency, it is available in quantized versions (e.g., Q8_0, Q4_K_M) and is well suited to applications requiring accurate multimodal content matching.

**Key Features:**
- **Multimodal**: Text, images, videos, and mixed content.
- **Language Support**: 30+ languages.
- **Quantization**: Available in Q8_0 (best quality), Q4_K_M (fast, recommended), and lower-precision options.
- **Performance**: Outperforms base models in retrieval tasks (e.g., JinaVDR, ViDoRe v3).
- **Use Case**: Enhances search pipelines by refining embedding-based retrieval with precise relevance scores.

**Downloads:** [GGUF files](https://huggingface.co/mradermacher/Qwen3-VL-Reranker-8B-GGUF) (e.g., `Qwen3-VL-Reranker-8B.Q8_0.gguf`)

**Usage:** Requires `transformers`, `qwen-vl-utils`, and `torch`. Example: `from scripts.qwen3_vl_reranker import Qwen3VLReranker; model = Qwen3VLReranker(...)`

**Citation:** @article{qwen3vlembedding, ...}
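The two-stage role described above (embedding retrieval followed by reranking) can be sketched as a small pipeline. Here `score_fn` is a placeholder for the actual reranker call, which this card does not specify; the toy Jaccard scorer below only demonstrates the control flow, not the model.

```python
from typing import Callable, Sequence

def rerank(query: str,
           candidates: Sequence[str],
           score_fn: Callable[[str, str], float],
           top_k: int = 3) -> list[tuple[str, float]]:
    """Second-stage reranking: rescore embedding-retrieval candidates
    with a (query, document) relevance function and keep the best top_k."""
    scored = [(doc, score_fn(query, doc)) for doc in candidates]
    scored.sort(key=lambda pair: pair[1], reverse=True)
    return scored[:top_k]

# Toy scorer standing in for Qwen3VLReranker; a real pipeline would
# call the model once per (query, candidate) pair instead.
def toy_score(query: str, doc: str) -> float:
    q, d = set(query.lower().split()), set(doc.lower().split())
    return len(q & d) / max(len(q | d), 1)

docs = ["qwen vl reranker model", "unrelated cooking recipe", "vl search reranker"]
best = rerank("vl reranker", docs, toy_score, top_k=2)
print([d for d, _ in best])  # ['vl search reranker', 'qwen vl reranker model']
```

In practice the candidate list comes from a fast embedding index (e.g., Qwen3-VL-Embedding) and the reranker is applied only to the top few dozen hits, since per-pair scoring is the expensive step.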

Repository: localai · License: apache-2.0

qwen3-vl-reranker-2b-i1
**Model Name:** Qwen3-VL-Reranker-2B-i1
**Base Model:** Qwen/Qwen3-VL-Reranker-2B

**Description:** A high-performance multimodal reranking model for state-of-the-art cross-modal search. It supports 30+ languages and handles text, images, screenshots, videos, and mixed modalities. With 2B parameters and a 32K context length, it refines retrieval results by combining embedding vectors with precise relevance scores. Optimized for efficiency, it is available in quantized versions (e.g., Q8_0, Q4_K_M) and is well suited to applications requiring accurate multimodal content matching.

**Key Features:**
- **Multimodal**: Text, images, videos, and mixed content.
- **Language Support**: 30+ languages.
- **Quantization**: Available in Q8_0 (best quality), Q4_K_M (fast, recommended), and lower-precision options.
- **Performance**: Outperforms base models in retrieval tasks (e.g., JinaVDR, ViDoRe v3).
- **Use Case**: Enhances search pipelines by refining embedding-based retrieval with precise relevance scores.

**Downloads:** [GGUF files](https://huggingface.co/mradermacher/Qwen3-VL-Reranker-2B-i1-GGUF) (e.g., `Qwen3-VL-Reranker-2B.i1-Q4_K_M.gguf`)

**Usage:** Requires `transformers`, `qwen-vl-utils`, and `torch`. Example: `from scripts.qwen3_vl_reranker import Qwen3VLReranker; model = Qwen3VLReranker(...)`

**Citation:** @article{qwen3vlembedding, ...}

Repository: localai · License: apache-2.0

qwen3-vl-30b-a3b-instruct
Meet Qwen3-VL, the most powerful vision-language model in the Qwen series to date. This generation delivers comprehensive upgrades across the board: superior text understanding and generation, deeper visual perception and reasoning, extended context length, enhanced spatial and video-dynamics comprehension, and stronger agent interaction capabilities. Available in Dense and MoE architectures that scale from edge to cloud, with Instruct and reasoning-enhanced Thinking editions for flexible, on-demand deployment.

#### Key Enhancements:
* **Visual Agent**: Operates PC/mobile GUIs: recognizes elements, understands functions, invokes tools, completes tasks.
* **Visual Coding Boost**: Generates Draw.io/HTML/CSS/JS from images/videos.
* **Advanced Spatial Perception**: Judges object positions, viewpoints, and occlusions; provides stronger 2D grounding and enables 3D grounding for spatial reasoning and embodied AI.
* **Long Context & Video Understanding**: Native 256K context, expandable to 1M; handles books and hours-long video with full recall and second-level indexing.
* **Enhanced Multimodal Reasoning**: Excels in STEM/math with causal analysis and logical, evidence-based answers.
* **Upgraded Visual Recognition**: Broader, higher-quality pretraining lets it "recognize everything": celebrities, anime, products, landmarks, flora/fauna, etc.
* **Expanded OCR**: Supports 32 languages (up from 19); robust in low light, blur, and tilt; better with rare/ancient characters and jargon; improved long-document structure parsing.
* **Text Understanding on par with pure LLMs**: Seamless text-vision fusion for lossless, unified comprehension.

#### Model Architecture Updates:
1. **Interleaved-MRoPE**: Full-frequency allocation over time, width, and height via robust positional embeddings, enhancing long-horizon video reasoning.
2. **DeepStack**: Fuses multi-level ViT features to capture fine-grained details and sharpen image-text alignment.
3. **Text-Timestamp Alignment**: Moves beyond T-RoPE to precise, timestamp-grounded event localization for stronger video temporal modeling.

This is the weight repository for Qwen3-VL-30B-A3B-Instruct.
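For context, instruct-tuned Qwen-VL-family models are usually prompted through a chat-style message list that interleaves image and text parts. The helper below only builds that structure (model loading and generation are omitted); the field names follow the common Qwen-VL messaging convention and are not taken from this card.

```python
def build_messages(image_url: str, question: str) -> list[dict]:
    """Build a single-turn multimodal chat message in the interleaved
    image+text format commonly used by Qwen-VL instruct models."""
    return [
        {
            "role": "user",
            "content": [
                {"type": "image", "image": image_url},  # image part first
                {"type": "text", "text": question},     # then the question
            ],
        }
    ]

msgs = build_messages("https://example.com/chart.png", "Summarize this chart.")
print(msgs[0]["role"], len(msgs[0]["content"]))  # user 2
```

A processor's chat template would then turn this list into model inputs; multiple image or video parts can be appended to the same `content` list for mixed-modality prompts.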

Repository: localai · License: apache-2.0

qwen3-vl-30b-a3b-thinking
Qwen3-VL-30B-A3B-Thinking is the 30B parameter reasoning ("Thinking") edition of the Qwen3-VL series.

Repository: localai · License: apache-2.0

qwen3-vl-4b-instruct
Qwen3-VL-4B-Instruct is the 4B parameter model of the Qwen3-VL series.

Repository: localai · License: apache-2.0

qwen3-vl-32b-instruct
Qwen3-VL-32B-Instruct is the 32B parameter model of the Qwen3-VL series.

Repository: localai · License: apache-2.0

qwen3-vl-4b-thinking
Qwen3-VL-4B-Thinking is the 4B parameter reasoning ("Thinking") edition of the Qwen3-VL series.

Repository: localai · License: apache-2.0

qwen3-vl-2b-thinking
Qwen3-VL-2B-Thinking is the 2B parameter reasoning ("Thinking") edition of the Qwen3-VL series.

Repository: localai · License: apache-2.0

qwen3-vl-2b-instruct
Qwen3-VL-2B-Instruct is the 2B parameter model of the Qwen3-VL series.

Repository: localai · License: apache-2.0

huihui-qwen3-vl-30b-a3b-instruct-abliterated
These are GGUF quantizations of the model Huihui-Qwen3-VL-30B-A3B-Instruct-abliterated.

Repository: localai · License: apache-2.0

qwen3-vl-8b-instruct
Qwen3-VL-8B-Instruct is the 8B parameter model of the Qwen3-VL series. Uses the recommended default parameters from the Unsloth documentation for Qwen3-VL.

Repository: localai · License: apache-2.0

qwen3-vl-8b-thinking
Qwen3-VL-8B-Thinking is the 8B parameter reasoning ("Thinking") edition of the Qwen3-VL series. Uses the recommended default parameters from the Unsloth documentation for Qwen3-VL.

Repository: localai · License: apache-2.0

huihui-qwen3-vl-30b-a3b-instruct-abliterated-mxfp4_moe
**Model Name:** Huihui-Qwen3-VL-30B-A3B-Instruct-abliterated
**Base Model:** Qwen3-VL-30B-A3B-Instruct (a large multimodal language model)
**Repository:** [huihui-ai/Huihui-Qwen3-VL-30B-A3B-Instruct-abliterated](https://huggingface.co/huihui-ai/Huihui-Qwen3-VL-30B-A3B-Instruct-abliterated)
**Quantization:** MXFP4_MOE (GGUF format, optimized for inference on consumer hardware)
**Model Type:** Instruction-tuned, multimodal (text + vision)
**Size:** 30 billion parameters (MoE architecture with roughly 3B active parameters per token)
**License:** Apache 2.0

**Description:** Huihui-Qwen3-VL-30B-A3B-Instruct-abliterated is an instruction-tuned multimodal large language model based on Qwen3-VL-30B-A3B, using a mixture-of-experts (MoE) architecture and tuned for strong reasoning, visual understanding, and dialogue. It supports both text and image inputs, making it suitable for tasks such as image captioning, visual question answering, and complex instruction following. This version is quantized with MXFP4_MOE for efficient inference while preserving performance, and is aimed at developers and researchers who want an efficient, open-source multimodal model for real-world applications.

> 🔍 *Note: This is a text-only version.*

Repository: localai · License: apache-2.0

qwen3-vlto-32b-instruct-i1
**Model Name:** Qwen3-VLTO-32B-Instruct (text-only variant of Qwen3-VL-32B-Instruct)
**Base Model:** Qwen/Qwen3-VL-32B-Instruct
**Repository:** [mradermacher/Qwen3-VLTO-32B-Instruct-i1-GGUF](https://huggingface.co/mradermacher/Qwen3-VLTO-32B-Instruct-i1-GGUF)
**Type:** Large Language Model (LLM), text-only (vision-language model stripped of vision components)
**Architecture:** Qwen3-VL, adapted for pure text generation
**Size:** 32 billion parameters
**License:** Apache 2.0
**Framework:** Hugging Face Transformers

### 🔍 Description
This is a **text-only variant** of the powerful **Qwen3-VL-32B-Instruct** multimodal model, stripped of its vision components to function as a high-performance pure language model. It retains the full text understanding and generation capabilities of its parent, including strong reasoning, long-context handling (up to 32K+ tokens), and the coherence derived from multimodal training, while being optimized for text-only tasks. It was created by loading the weights of the full Qwen3-VL-32B-Instruct model into a text-only Qwen3 architecture, preserving its linguistic and reasoning strengths without the need for image input. Well suited to deep reasoning, long-form content generation, code synthesis, and dialogue, with the benefits of the Qwen3 series in a lightweight, text-focused form.

### 📌 Key Features
- ✅ **High-Performance Text Generation**: built on the state-of-the-art Qwen3-VL architecture
- ✅ **Extended Context Length**: supports up to 32,768 tokens, ideal for long documents and complex tasks
- ✅ **Strong Reasoning & Planning**: excels at logic, math, coding, and multi-step reasoning
- ✅ **Optimized for GGUF**: available in multiple quantized versions (IQ3_M, Q2_K, etc.) for efficient inference on consumer hardware
- ✅ **Free to Use & Modify**: Apache 2.0 license

### 📦 Use Case Suggestions
- Long-form writing, summarization, and editing
- Code generation and debugging
- AI agents and task automation
- High-quality chat and dialogue systems
- Research and experimentation with large-scale LLMs on local devices

### 📚 References
- Original model: [Qwen/Qwen3-VL-32B-Instruct](https://huggingface.co/Qwen/Qwen3-VL-32B-Instruct)
- Technical report: [Qwen3 Technical Report (arXiv)](https://arxiv.org/abs/2505.09388)
- Quantization by: [mradermacher](https://huggingface.co/mradermacher)

> ✅ **Note**: This is **not the original vision-language model** but a **text-only conversion** of Qwen3-VL-32B-Instruct, intended for pure language tasks.
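To choose among the quantized versions listed above, a rough size estimate helps: GGUF file size scales with bits per weight. The bits-per-weight figures below are nominal approximations (real files vary because different tensors use different quant types), so treat the output as a ballpark, not an exact download size.

```python
# Nominal bits-per-weight for some common GGUF quant types (approximate;
# actual files mix quant types across tensors, so real sizes differ a bit).
BPW = {"Q8_0": 8.5, "Q4_K_M": 4.8, "IQ3_M": 3.7, "Q2_K": 2.6}

def approx_gguf_gb(n_params_b: float, quant: str) -> float:
    """Estimate GGUF file size in GB for a model with n_params_b billion weights."""
    bits = n_params_b * 1e9 * BPW[quant]
    return round(bits / 8 / 1e9, 1)

for q in ("Q8_0", "Q4_K_M", "Q2_K"):
    print(q, approx_gguf_gb(32, q), "GB")  # 34.0 / 19.2 / 10.4 GB for 32B
```

The same estimator applies to the other GGUF models in this gallery; add roughly 1 to 2 GB of headroom for the KV cache and runtime overhead when checking whether a file fits in RAM or VRAM.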

Repository: localai · License: apache-2.0

qwen3-vlto-32b-thinking
**Model Name:** Qwen3-VLTO-32B-Thinking
**Model Type:** Large Language Model (text-only)
**Base Model:** Qwen/Qwen3-VL-32B-Thinking (vanilla Qwen3-VL-32B-Thinking with vision components removed)
**Architecture:** Transformer-based, 32-billion-parameter model optimized for reasoning and complex text generation

### Description
Qwen3-VLTO-32B-Thinking is a pure text-only variant of the Qwen3-VL-32B-Thinking model, stripped of its vision capabilities while preserving its full reasoning and language understanding power. It is derived by transferring the weights of the vision-language model into a text-only transformer architecture, maintaining the same high-quality behavior for tasks such as logical reasoning, code generation, and dialogue. The model is well suited to applications requiring deep linguistic reasoning and long-context understanding without image input, and retains the reasoning style of its multimodal parent in text form, for research, chatbots, and content generation.

### Key Features
- ✅ 32B parameters, high reasoning capability
- ✅ No vision components; fully text-only
- ✅ Trained for complex thinking and step-by-step reasoning
- ✅ Compatible with Hugging Face Transformers and GGUF inference tools
- ✅ Available in multiple quantization levels (Q2_K to Q8_0) for efficient deployment

### Use Case
Ideal for advanced text generation, logical inference, coding, and conversational AI where vision is not needed.

> 🔗 **Base Model**: [Qwen/Qwen3-VL-32B-Thinking](https://huggingface.co/Qwen/Qwen3-VL-32B-Thinking)
> 📦 **Quantized Versions**: [mradermacher/Qwen3-VLTO-32B-Thinking-GGUF](https://huggingface.co/mradermacher/Qwen3-VLTO-32B-Thinking-GGUF)

*Note: The original model was created by Alibaba's Qwen team. This variant was adapted by qingy2024 and quantized by mradermacher.*

Repository: localai · License: apache-2.0