Model Gallery

121 models from 1 repository

q3.5-bluestar-27b

Repository: localai | License: mit

aurore-reveil_koto-small-7b-it
Koto-Small-7B-IT is an instruct-tuned version of Koto-Small-7B-PT, which was trained on MiMo-7B-Base for almost a billion tokens of creative-writing data. This model is meant for roleplaying and instruct use cases.

Repository: localai | License: mit

allura-org_remnant-qwen3-8b
There's a wisp of dust in the air. It feels like it's from a bygone era, but you don't know where from. It lands on your tongue. It tastes nice. Remnant is a series of finetuned LLMs focused on SFW and NSFW roleplaying and conversation.

Repository: localai | License: apache-2.0

gryphe_pantheon-proto-rp-1.8-30b-a3b
Note: This model is a Qwen 30B MoE prototype and can be considered a sidegrade from my Small release some time ago. It did not receive extensive testing beyond a couple of benchmarks to determine its sanity, so feel free to let me know what you think of it!

Welcome to the next iteration of my Pantheon model series, in which I strive to introduce a whole collection of diverse personas that can be summoned with a simple activation phrase. Pantheon's purpose is two-fold: these personalities enhance the general roleplay experience, helping to encompass personality traits, accents, and mannerisms that language models might otherwise find difficult to convey well. GGUF quants are available here. Your user feedback is critical to me, so don't hesitate to tell me whether my model is 1. terrible, 2. awesome, or 3. somewhere in between.

Model details: Ever since Qwen 3 released, I've been trying to get MoE finetuning to work. After countless frustrating days and much code hacking, I finally got a full finetune to complete with reasonable loss values. I picked the base model for this since I didn't feel like trying to fight a reasoning model's training; maybe someday I'll make a model which uses thinking tags for the character's thoughts or something. This time the recipe focused on combining as many data sources as I possibly could, featuring synthetic data from Sonnet 3.5 + 3.7, ChatGPT 4o, and Deepseek. These then went through an extensive rewriting pipeline to eliminate common AI cliches, with the hopeful intent of providing you a fresh experience.

Repository: localai | License: apache-2.0

allura-org_q3-30b-a3b-pentiment
Triple-stage RP/general tune of Qwen3-30B-A3B Base (finetuned, merged for stabilization, aligned).

Repository: localai | License: apache-2.0

allura-org_q3-30b-a3b-designant
Intended as a direct upgrade to Pentiment, Q3-30B-A3B-Designant is a roleplaying model finetuned from Qwen3-30B-A3B-Base. During testing, Designant punched well above its weight class in terms of active parameters, demonstrating the potential for well-made lightweight Mixture of Experts models in the roleplay scene. While one tester observed looping behavior, repetition in general was minimal.

Repository: localai | License: apache-2.0

qwen3-the-xiaolong-omega-directive-22b-uncensored-abliterated-i1
WARNING: NSFW. Vivid prose. INTENSE. Visceral details. Violence. HORROR. GORE. Swearing. UNCENSORED... humor, romance, fun. A massive 22B, 62-layer merge of the fantastic "The-Omega-Directive-Qwen3-14B-v1.1" (by ReadyArt) and the off-the-scale "Xiaolong-Qwen3-14B" (by nbeerbower) in Qwen3, with full reasoning (can be turned on or off); the model is completely uncensored/abliterated as well.

Repository: localai | License: apache-2.0

allura-org_q3-8b-kintsugi
Q3-8B-Kintsugi is a roleplaying model finetuned from Qwen3-8B-Base. During testing, Kintsugi punched well above its weight class in terms of parameters, especially for 1-on-1 roleplaying and general storywriting.

Repository: localai | License: apache-2.0

ds-r1-qwen3-8b-arliai-rpr-v4-small-iq-imatrix
The best RP/creative model series from ArliAI yet again, this time based on DS-R1-0528-Qwen3-8B-Fast for a smaller memory footprint.

Reduced repetitions and impersonation: To add to the creativity and out-of-the-box thinking of RpR v3, a more advanced filtering method was used to remove examples where the LLM repeated similar phrases or talked for the user. Any repetition or impersonation cases that happen will be due to how the base QwQ model was trained, not the RpR dataset.

Increased training sequence length: The training sequence length was increased to 16K to help awareness and memory even in longer chats.

Repository: localai | License: apache-2.0

qwen3-55b-a3b-total-recall-deep-40x
WARNING: MADNESS - UNHINGED and... NSFW. Vivid prose. INTENSE. Visceral details. Violence. HORROR. GORE. Swearing. UNCENSORED... humor, romance, fun.

Qwen3-55B-A3B-TOTAL-RECALL-Deep-40X-GGUF is a highly experimental model ("tamer" versions below) based on Qwen3-30B-A3B (MoE, 128 experts, 8 activated), with Brainstorm 40X (by DavidAU - details at the bottom of this page). These modifications blow the model (V1) out to 87 layers, 1046 tensors, and 55B parameters. Note that some versions are smaller than this, with fewer layers/tensors and smaller parameter counts. The adapter extensively alters performance, reasoning, and output generation, with exceptional changes in creative, prose, and general performance. Regens of the same prompt - even with the same settings - will be very different.

Three example generations below are creative, and one example generation (#4) is non-creative (all generated with Q3_K_M, V1 model). You can run this model on CPU and/or GPU due to its unique construction: with 3B activated parameters per pass (8 experts), roughly 6B parameters are effectively active in this version.

Two quants are uploaded for testing: Q3_K_M and Q4_K_M. V3, V4, and V5 are also available in these two quants; V2 and V6 in Q3_K_M only, as are V1.3, 1.4, 1.5, 1.7, and V7 (newest). NOTE: V2 and up are from source model 2; V1, 1.3, 1.4, 1.5, and 1.7 are from source model 1.

Repository: localai | License: apache-2.0

qwen3-22b-a3b-the-harley-quinn
WARNING: MADNESS - UNHINGED and... NSFW. Vivid prose. INTENSE. Visceral details. Violence. HORROR. GORE. Swearing. UNCENSORED... humor, romance, fun.

Qwen3-22B-A3B-The-Harley-Quinn - This repo contains the full-precision source code, in "safetensors" format, to generate GGUFs, GPTQ, EXL2, AWQ, HQQ, and other formats. The source code can also be used directly.

ABOUT: A stranger, yet radically different version of Kalmaze's "Qwen/Qwen3-16B-A3B" with the experts pruned to 64 (from 128 in the Qwen3 30B-A3B version), plus 19 added layers (Brainstorm 20x by DavidAU, info at the bottom of this page) expanding the model to 22B total parameters. The goal: slightly alter the model to address some odd creative-thinking and output choices. Then... Harley Quinn showed up, and then it was a party! A wild, sometimes out-of-control, but never boring party.

Please note that the modifications affect the entire model's operation; roughly, I adjusted the model to think a little "deeper" and "ponder" a bit, but this is a very rough description. That being said, reasoning and output generation will be altered regardless of your use case(s). These modifications push Qwen's model to the absolute limit for creative use cases: detail, vividness, and creativity all get a boost, and prose (all of it) will be very different from "default" Qwen3. Likewise, regens of the same prompt - even at the same settings - will produce very different versions. Brainstorm 20x has also lightly de-censored the model under some conditions. However, this model can be prone to bouts of madness; it will not always behave, and it will sometimes go -wildly- off script. See the four examples below.

The model retains the full reasoning and output generation of a Qwen3 MoE, but has not been tested for "non-creative" use cases. The model ships with Qwen's default config: 40k context, 8 of 64 experts activated, ChatML or Jinja template (embedded). Four example generations below.

IMPORTANT: See the usage guide / repo below to get the most out of this model, as settings are very specific. If not set correctly, this model will not work the way it should. Critical settings:
- ChatML or Jinja template (embedded, but an updated version is at the repo below).
- Rep pen of 1.01 or 1.02; higher (1.04, 1.05) will result in "Harley Mode".
- Temp range of 0.6 to 1.2; at higher temps you may need to prompt the model to "output" after thinking.
- Experts set at 8-10; higher will result in "odder" output, BUT it might be better.
That being said, "Harley Quinn" may make her presence known at any moment.

USAGE GUIDE: Please refer to this model card for specific usage, suggested settings, changing ACTIVE EXPERTS, templates, and the like, including how to maximize this model in "uncensored" form (with specific notes on "abliterated" models) and the rep pen / temp settings needed to get the model to perform strongly: https://huggingface.co/DavidAU/Qwen3-18B-A3B-Stranger-Thoughts-Abliterated-Uncensored-GGUF

GGUF / QUANTS / SPECIAL SHOUTOUT: Special thanks to team Mradermacher for making the quants! https://huggingface.co/mradermacher/Qwen3-22B-A3B-The-Harley-Quinn-GGUF

KNOWN ISSUES: The model may "mis-capitalize" words (lowercase where uppercase should be) from time to time, and may occasionally add an extra space before a word. An incorrect template and/or settings will result in a drop in performance / poor performance. It can rant or repeat at the end; most of the time it will stop on its own.

Looking for the abliterated / uncensored version? https://huggingface.co/DavidAU/Qwen3-23B-A3B-The-Harley-Quinn-PUDDIN-Abliterated-Uncensored In some cases this "abliterated/uncensored" version may work better than this one.

EXAMPLES: Standard system prompt, rep pen 1.01-1.02, top-k 100, top-p 0.95, min-p 0.05, rep pen range 64. Tested in LM Studio, quant Q4_K_S, GPU (CPU output will differ slightly). As this is a mid-range quant, expect better results from higher quants and/or with more experts activated. NOTE: Some formatting was lost on copy/paste.
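Since this gallery is served from a LocalAI repository, the critical settings above could be pinned in a model config file so they survive restarts. The following is only a minimal sketch: the GGUF file name is hypothetical, and the exact field names are my assumption based on typical LocalAI model-config conventions, so verify them against your LocalAI version's documentation.

```yaml
# Hypothetical LocalAI model config sketch - field names and the
# GGUF file name are assumptions, not taken from this model card.
name: qwen3-22b-a3b-the-harley-quinn
context_size: 40960            # Qwen's default 40k context
parameters:
  model: Qwen3-22B-A3B-The-Harley-Quinn.Q4_K_S.gguf
  temperature: 0.8             # stay inside the suggested 0.6-1.2 range
  repeat_penalty: 1.02         # 1.04+ reportedly triggers "Harley Mode"
  top_k: 100
  top_p: 0.95
```

The values mirror the card's "critical settings" and example-generation parameters; anything the card does not specify is left at its default.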

Repository: localai | License: apache-2.0

qwen3-33b-a3b-stranger-thoughts-abliterated-uncensored
WARNING: NSFW. Vivid prose. INTENSE. Visceral details. Violence. HORROR. GORE. Swearing. UNCENSORED... humor, romance, fun.

Qwen3-33B-A3B-Stranger-Thoughts-Abliterated-Uncensored - This repo contains the full-precision source code, in "safetensors" format, to generate GGUFs, GPTQ, EXL2, AWQ, HQQ, and other formats. The source code can also be used directly.

ABOUT: A stranger, yet radically different version of "Qwen/Qwen3-30B-A3B", abliterated by "huihui-ai", with 4 added layers expanding the model to 33B total parameters. The goal: slightly alter the model to address some odd creative-thinking and output choices AND de-censor it. Please note that the modifications affect the entire model's operation; roughly, I adjusted the model to think a little "deeper" and "ponder" a bit, but this is a very rough description. I also ran (non-creative) reasoning tests to ensure the model was not damaged; it roughly matched the original model's performance. That being said, reasoning and output generation will be altered regardless of your use case(s).

Repository: localai | License: apache-2.0

soob3123_amoral-gemma3-12b
A fine-tuned version of Google's Gemma 3 12B instruction-tuned model, optimized for creative freedom and reduced content restrictions. This variant maintains strong reasoning capabilities while excelling in roleplaying scenarios and open-ended content generation.

Key modifications:
- Reduced refusal mechanisms compared to the base model
- Enhanced character consistency in dialogues
- Improved narrative flow control
- Optimized for multi-turn interactions

Intended use (primary applications):
- Interactive fiction and storytelling
- Character-driven roleplaying scenarios
- Creative writing assistance
- Experimental AI interactions
- Content generation for mature audiences

Repository: localai | License: apache-2.0

gemma-3-4b-it-uncensored-dbl-x-i1
Google's newest Gemma 3 model, uncensored by David_AU (it maintains instruction following / model performance and adds 4 layers to the model) and reinforced with an optional system prompt - see below.

Repository: localai | License: apache-2.0

daichi-12b
A merge between my Gemma finetune of Pascal-12B and Omega-Directive-G-12B, meant to give it more NSFW knowledge. This model has short, sweet prose and is uncensored in roleplay. The model is suited for traditional RP. All thanks to Tav for funding the training run.

Repository: localai | License: gemma

readyart_amoral-fallen-omega-gemma3-12b
This model represents the forbidden merger of:
- 🧬 The-Omega-Directive-Gemma3-12B-v1.0 - unprecedented coherent depravity, well-rounded ERP, low repetition even at maximum length
- ⚡ Fallen-Gemma3-12B-v1 - an evil tune
- 💎 amoral-gemma3-12B-v2 - improved natural intelligence and fewer refusals

This model maintains Forgotten-Safeword's complete lack of alignment:
- 🚫 No moral or ethical constraints on generated content
- 🔞 Will enthusiastically engage with any NSFW scenario
- 💀 May generate content that requires industrial-grade brain bleach
- ⚖️ Perfectly balanced... as all things should be
- 🔥 Maintains signature intensity with improved narrative flow
- 📖 Handles multi-character scenarios with improved consistency
- 🧠 Excels at long-form storytelling without losing track of plot threads
- ⚡ Noticeably better at following complex instructions than previous versions
- 🎭 Responds to subtle prompt nuances like a mind reader

Repository: localai | License: gemma

planetoid_27b_v.2
This is a merge of pre-trained Gemma 3 language models. The goal of this merge was to create a good uncensored Gemma 3 model for assistant use and roleplay, with uncensored vision.

First, vision: I don't know if this is normal, but it hallucinates slightly (maybe Q3 is too low?); it lacks any refusals and otherwise works fine. I used the default Gemma 3 27B mmproj.

Second, text: it is slow on my hardware - slower than 24B Mistral, with speed close to 32B QwQ. The model is smart even at Q3; responses are adequate in length and interesting to read. The model is quite attentive to context, tested up to 8K with no problems or degradation spotted (beware of your typos - it will copy your mistakes). Creative capabilities are good too; the model will build a good plot for you if you let it. It follows instructions fine and is really good with "adventure"-type cards. Russian is supported but not great; it may be better at higher quants. No refusals were encountered. However, I find this model not unbiased enough: it is close to neutrality, but I want it "darker". Positivity depends highly on prompts; with good enough cards the model can do wonders. Tested on Q3_K_L, temp 1.04.

Repository: localai | License: gemma

genericrpv3-4b
Part of the GRP / GenericRP series; this is V3, based on Gemma 3 4B and licensed accordingly. It's a simple merge. For the intended behaviour, see V2 or earlier releases; those cards are more detailed. Merge components:
- allura-org/Gemma-3-Glitter-4B: w0.5
- huihui-ai/gemma-3-4b-it-abliterated: w0.25
- Danielbrdz/Barcenas-4b: w0.25
Happy chatting or whatever.
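The weight notation above (w0.5 / w0.25 / w0.25) reads like a linear weighted merge. As a hypothetical illustration only - the author does not state their tooling, and the merge method and dtype below are my assumptions - such a recipe could be expressed as a mergekit config:

```yaml
# Hypothetical mergekit recipe reconstructing the stated weights.
# merge_method and dtype are assumptions, not the author's actual config.
merge_method: linear
models:
  - model: allura-org/Gemma-3-Glitter-4B
    parameters:
      weight: 0.5
  - model: huihui-ai/gemma-3-4b-it-abliterated
    parameters:
      weight: 0.25
  - model: Danielbrdz/Barcenas-4b
    parameters:
      weight: 0.25
dtype: bfloat16
```

With linear merging, each tensor of the result is the weighted sum of the corresponding tensors of the three source models, so the weights sum to 1 here.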

Repository: localai | License: gemma

comet_12b_v.5-i1
This is a merge of pre-trained language models. V.4 wasn't stable enough for me, so here V.5 is: more stable, better at SFW, richer NSFW. I find the best "AIO" settings for RP on Gemma 3 are sleepdeprived3/Gemma3-T4 with small tweaks (temp 1.04, top-p 0.95).

Repository: localai | License: gemma

mira-v1.7-27b-i1
Model name: Mira-v1.7-27B. Base model: Lambent/Mira-v1.6a-27B. Size: 27 billion parameters. License: Gemma. Type: large language model (vision-capable).

Mira-v1.7-27B is a creatively driven, locally running language model trained on self-development sessions, high-quality synthesized roleplay data, and prior training data. It was fine-tuned with preference alignment to emphasize authentic, expressive, and narrative-driven output, balancing creative expression as "Mira" against its role as an AI assistant. The model exhibits strong poetic and stylistic capabilities, producing rich, emotionally resonant text across various prompts. It supports vision via MMProjection (separate files available in the static repo). Designed for local deployment, it excels in imaginative writing, introspective storytelling, and expressive dialogue.

Note: the GGUF quantized versions (e.g., mradermacher/Mira-v1.7-27B-i1-GGUF) are community-quantized variants; the original base model remains hosted at https://huggingface.co/Lambent/Mira-v1.7-27B.

Repository: localai | License: gemma

negative_llama_70b
- Strong roleplay & creative writing abilities.
- Less positivity bias.
- Very smart assistant with low refusals.
- Exceptionally good at following the character card.
- Characters feel more 'alive' and will occasionally initiate things on their own (without being prompted to, but fitting their character).
- Strong ability to comprehend and roleplay uncommon physical and mental characteristics.

Repository: localai | License: apache-2.0
