Repository: localai
License: apache-2.0

### **Qwen3-42B-A3B-Stranger-Thoughts-Deep20x-Abliterated-Uncensored**

WARNING: NSFW. Vivid prose. INTENSE. Visceral details. Violence. HORROR. GORE. Swearing. UNCENSORED... humor, romance, fun.

This repo contains the full-precision source weights, in "safetensors" format, for generating GGUF, GPTQ, EXL2, AWQ, HQQ and other quant formats. The source weights can also be used directly.

**ABOUT:**

Qwen's excellent "Qwen3-30B-A3B", abliterated by "huihui-ai", then combined with Brainstorm 20x (tech notes at the bottom of the page) in a MOE (128 experts) at 42B parameters (up from 30B). This pushes Qwen's abliterated/uncensored model to the absolute limit for creative use cases.

Prose (all), reasoning, thinking... all will be very different from regular "Qwen 3s". This model will generate horror, fiction, erotica - you name it - in vivid, stark detail. It will NOT hold back. Likewise, regens of the same prompt - even at the same settings - will produce very different versions. See the four examples below.

The model retains the full reasoning and output generation of a Qwen3 MOE, but has not been tested for "non-creative" use cases.

The model ships with Qwen's default config:
- 40K context
- 8 of 128 experts activated
- ChatML or Jinja template (embedded)

IMPORTANT: See the usage guide / repo below to get the most out of this model, as settings are very specific.

**USAGE GUIDE:**

Please refer to this model card for specific usage, suggested settings, changing ACTIVE EXPERTS, templates, and the like - how to maximize this model in "uncensored" form, with specific notes on "abliterated" models, and rep pen / temp settings specific to getting the model to perform strongly:
https://huggingface.co/DavidAU/Qwen3-18B-A3B-Stranger-Thoughts-Abliterated-Uncensored-GGUF

**GGUF / QUANTS / SPECIAL SHOUTOUT:**

Special thanks to team Mradermacher for making the quants!
https://huggingface.co/mradermacher/Qwen3-42B-A3B-Stranger-Thoughts-Deep20x-Abliterated-Uncensored-GGUF

**KNOWN ISSUES:**
- The model may occasionally "mis-capitalize" words (lowercase where uppercase should be).
- The model may occasionally add an extra space before a word.
- An incorrect template and/or settings will result in poor performance.
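The embedded ChatML template mentioned above wraps each turn in `<|im_start|>`/`<|im_end|>` markers. As a generic ChatML sketch (not copied from this repo's embedded template), a prompt looks like:

```
<|im_start|>system
You are a vivid creative-fiction writer.<|im_end|>
<|im_start|>user
Write the opening scene of a horror story.<|im_end|>
<|im_start|>assistant
```

Most inference frontends apply this wrapping automatically when the template is read from the GGUF metadata; the fragment above is only relevant if you build prompts by hand.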
### **Qwen3-Yoyo-V4-42B-A3B-Thinking-TOTAL-RECALL-PKDick-V**

**Base Model:** Qwen3-Coder-30B-A3B-Instruct (Mixture of Experts)
**Size:** 42B parameters (finetuned version)
**Context Length:** 256K tokens native, extendable up to 1 million tokens with YaRN
**Architecture:** Mixture of Experts (MoE) - 128 experts, 8 activated per forward pass
**Fine-tuned For:** Advanced coding, agentic workflows, creative writing, and long-context reasoning

**Key Features:**
- Enhanced with **Brainstorm 20x** fine-tuning for deeper reasoning, richer prose, and improved coherence
- Optimized for **coding in multiple languages**, tool use, and long-form creative tasks
- Includes an optional **"thinking" mode** via system prompt for structured internal reasoning
- Trained on a **PK Dick dataset** (inspired by Philip K. Dick's works) for narrative depth and conceptual richness
- Supports **GGUF, GPTQ, AWQ, EXL2, and HQQ quantizations** for efficient local inference
- Recommended settings: 6-10 active experts, temperature 0.3-0.7, repetition penalty 1.05-1.1

**Best For:** Developers, creative writers, and researchers seeking a powerful, expressive, and highly customizable model with exceptional long-context and coding performance.

> 🌟 *Note: This is a fine-tune of the original Qwen3-Coder-30B-A3B-Instruct by DavidAU, with GGUF conversions provided by mradermacher. The base model remains the authoritative version.*
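As a minimal sketch (not taken from the model card), the recommended settings above can be applied when running a GGUF quant with llama.cpp's `llama-cli`. The quant filename is hypothetical, and the `--override-kv` metadata key for the active-expert count is an assumption based on llama.cpp's arch-prefixed GGUF key naming; inspect the file's metadata to confirm the exact key before relying on it.

```shell
# temp 0.6 and repeat-penalty 1.08 sit inside the recommended
# 0.3-0.7 and 1.05-1.1 ranges; expert_used_count=10 is the top of
# the suggested 6-10 active-expert range.
# NOTE: the .gguf filename and the qwen3moe.* key are assumptions.
llama-cli \
  -m ./Qwen3-Yoyo-V4-42B.Q4_K_M.gguf \
  --temp 0.6 \
  --repeat-penalty 1.08 \
  --override-kv qwen3moe.expert_used_count=int:10 \
  -p "Write an opening scene in the style of Philip K. Dick."
```

Raising the active-expert count generally trades speed for output quality, which is why the card gives a range rather than a single value.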