Model Gallery

68 models from 1 repository

mistral-nemo-instruct-2407-12b-thinking-m-claude-opus-high-reasoning-i1
The model described in this repository is the **Mistral-Nemo-Instruct-2407-12B** (12 billion parameters), a large language model optimized for instruction tuning and high-level reasoning tasks. It is a **quantized version** of the original model, compressed for efficiency while retaining key capabilities. The model is designed to generate human-like text, perform complex reasoning, and support multi-modal tasks, making it suitable for applications requiring strong language understanding and generation.

Repository: localai

arcee-ai_homunculus
Homunculus is a 12 billion-parameter instruction model distilled from Qwen3-235B onto the Mistral-Nemo backbone. It was purpose-built to preserve Qwen’s two-mode interaction style—/think (deliberate chain-of-thought) and /nothink (concise answers)—while running on a single consumer GPU.

Repository: localai · License: apache-2.0

gemma-3-12b-it
google/gemma-3-12b-it is an open-source, state-of-the-art, lightweight, multimodal model built from the same research and technology used to create the Gemini models. It is capable of handling text and image input and generating text output. It has a large context window of 128K tokens and supports over 140 languages. The 12B variant has been fine-tuned using the instruction-tuning approach. Gemma 3 models are suitable for a variety of text generation and image understanding tasks, including question answering, summarization, and reasoning. Their relatively small size makes them deployable in environments with limited resources such as laptops, desktops, or your own cloud infrastructure.

Repository: localai · License: gemma

gemma-3-12b-it-qat
This model corresponds to the 12B instruction-tuned version of Gemma 3 in GGUF format, produced using Quantization Aware Training (QAT); the GGUF uses Q4_0 quantization. Thanks to QAT, the model preserves quality similar to bfloat16 while significantly reducing the memory required to load it. A half-precision version is also available.
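
As a rough illustration of running such a Q4_0 GGUF locally, here is a minimal sketch using llama-cpp-python; the file name, context size, and prompt are assumptions, not taken from this card:

```python
# Minimal sketch: loading a Q4_0 QAT GGUF with llama-cpp-python.
# The model path is a placeholder; point it at your downloaded file.
from llama_cpp import Llama

llm = Llama(
    model_path="gemma-3-12b-it-qat-q4_0.gguf",  # hypothetical file name
    n_ctx=8192,        # modest context window; Gemma 3 supports far more
    n_gpu_layers=-1,   # offload all layers to the GPU if one is available
)

out = llm.create_chat_completion(
    messages=[{"role": "user", "content": "Explain QAT in one sentence."}]
)
print(out["choices"][0]["message"]["content"])
```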

Repository: localai · License: gemma

mlabonne_gemma-3-12b-it-abliterated
This is an uncensored version of google/gemma-3-12b-it created with a new abliteration technique. See the author's article on abliteration to learn more.

Repository: localai · License: gemma

soob3123_amoral-gemma3-12b
A fine-tuned version of Google's Gemma 3 12B instruction-tuned model, optimized for creative freedom and reduced content restrictions. This variant maintains strong reasoning capabilities while excelling in roleplaying scenarios and open-ended content generation.

Key modifications:
- Reduced refusal mechanisms compared to the base model
- Enhanced character consistency in dialogues
- Improved narrative flow control
- Optimized for multi-turn interactions

Intended use (primary applications):
- Interactive fiction and storytelling
- Character-driven roleplaying scenarios
- Creative writing assistance
- Experimental AI interactions
- Content generation for mature audiences

Repository: localai · License: apache-2.0

thedrummer_fallen-gemma3-12b-v1
Fallen Gemma3 12B v1 is an evil tune of Gemma 3 12B but it is not a complete decensor. Evil tunes knock out the positivity and may enjoy torturing you and humanity. Vision still works and it has something to say about the crap you feed it.

Repository: localai · License: gemma

gemma-3-glitter-12b-i1
A creative writing model based on Gemma 3 12B IT. This is a 50/50 merge of two separate trains:
- ToastyPigeon/g3-12b-rp-system-v0.1: ~13.5M tokens of instruct-based training related to RP (2:1 human to synthetic) and examples using a system prompt.
- ToastyPigeon/g3-12b-storyteller-v0.2-textonly: ~20M tokens of completion training on long-form creative writing; 1.6M tokens synthetic from R1, the rest human-created.

Repository: localai · License: gemma

soob3123_amoral-gemma3-12b-v2
Core function:
- Produces analytically neutral responses to sensitive queries
- Maintains factual integrity on controversial subjects
- Avoids value-judgment phrasing patterns

Response characteristics:
- No inherent moral framing ("evil slop" reduction)
- Emotionally neutral tone enforcement
- Epistemic humility protocols (avoids "thrilling", "wonderful", etc.)

Repository: localai · License: gemma

gemma-3-starshine-12b-i1
A creative writing model based on a merge of fine-tunes on Gemma 3 12B IT and Gemma 3 12B PT. This is the Story Focused merge. This version works better for storytelling and scenarios, as the prose is more novel-like, and it has a tendency to impersonate the user character. See the alternate RP Focused version as well.

This is a merge of two G3 models, one trained on instruct and one trained on base:
- allura-org/Gemma-3-Glitter-12B: itself a merge of a storywriting and an RP train (both also by ToastyPigeon), on instruct.
- ToastyPigeon/Gemma-3-Confetti-12B: an experimental application of the Glitter data using base instead of instruct; additionally includes some adventure data in the form of SpringDragon.

The result is a lovely blend of Glitter's ability to follow instructions and Confetti's free-spirited prose, effectively 'loosening up' much of the hesitancy that was left in Glitter.

Repository: localai · License: gemma

burtenshaw_gemmacoder3-12b
This model is a fine-tuned version of google/gemma-3-12b-it on the open-r1/codeforces-cots dataset. It has been trained using TRL.

Repository: localai · License: gemma

daichi-12b
A merge between my Gemma fine-tune of Pascal-12B and Omega-Directive-G-12B, meant to give it more NSFW knowledge. This model has short, sweet prose and is uncensored in roleplay; it is suited for traditional RP. All thanks to Tav for funding the train.

Repository: localai · License: gemma

readyart_amoral-fallen-omega-gemma3-12b
This model represents the forbidden merger of:
🧬 The-Omega-Directive-Gemma3-12B-v1.0: unprecedented coherent depravity, well-rounded ERP, low repetition even at maximum length
⚡ Fallen-Gemma3-12B-v1: an evil tune of Gemma 3 12B
💎 amoral-gemma3-12B-v2: improved natural intelligence and fewer refusals

This model maintains Forgotten-Safeword's complete lack of alignment:
🚫 No moral or ethical constraints on generated content
🔞 Will enthusiastically engage with any NSFW scenario
💀 May generate content that requires industrial-grade brain bleach
⚖️ Perfectly balanced... as all things should be
🔥 Maintains signature intensity with improved narrative flow
📖 Handles multi-character scenarios with improved consistency
🧠 Excels at long-form storytelling without losing track of plot threads
⚡ Noticeably better at following complex instructions than previous versions
🎭 Responds to subtle prompt nuances like a mind reader

Repository: localai · License: gemma

soob3123_veritas-12b
Veritas-12B emerges as a model forged in the pursuit of intellectual clarity and logical rigor. This 12B parameter model possesses superior philosophical reasoning capabilities and analytical depth, ideal for exploring complex ethical dilemmas, deconstructing arguments, and engaging in structured philosophical dialogue. Veritas-12B excels at articulating nuanced positions, identifying logical fallacies, and constructing coherent arguments grounded in reason. Expect discussions characterized by intellectual honesty, critical analysis, and a commitment to exploring ideas with precision.

Repository: localai · License: gemma

comet_12b_v.5-i1
This is a merge of pre-trained language models. V.4 wasn't stable enough for me, so here V.5 is: more stable, better at SFW, richer NSFW. I find that the best "AIO" settings for RP on Gemma 3 are sleepdeprived3/Gemma3-T4 with little tweaks (temperature 1.04, top-p 0.95).
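
For reference, a minimal sketch of applying those sampler settings through an OpenAI-compatible chat endpoint such as the one LocalAI exposes; the base URL and model name are placeholders:

```python
# Minimal sketch: the suggested RP sampler settings (temperature 1.04,
# top-p 0.95) sent to an OpenAI-compatible endpoint. Placeholders below.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

resp = client.chat.completions.create(
    model="comet_12b_v.5-i1",  # placeholder; use your local model name
    messages=[{"role": "user", "content": "Describe the tavern scene."}],
    temperature=1.04,
    top_p=0.95,
)
print(resp.choices[0].message.content)
```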

Repository: localai · License: gemma

gemma-3-12b-fornaxv.2-qat-cot
This model is an experiment in producing a strong, smaller thinking model with generalizable reasoning capabilities that can fit on an 8GiB consumer graphics card. Most other open-source thinking models, especially smaller ones, fail to generalize their reasoning to tasks other than coding or math, due to an overly large focus on GRPO-zero for CoT, which is only applicable to coding and math. Instead of using GRPO, this model applies SFT on a wide variety of high-quality, diverse reasoning traces from DeepSeek R1 onto Gemma 3, forcing the model to learn to generalize its reasoning capabilities across a large number of tasks, as an extension of the LiMO paper's approach to math/coding CoT. A subset of V3 0324 non-thinking data was also included for improved creativity and to allow the model to retain its non-thinking capabilities. Training off the QAT checkpoint allows this model to be used without a drop in quality at Q4_0, requiring only ~6GiB of memory.

Thinking mode: similar to the Qwen 3 model line, Gemma Fornax can be used with or without thinking mode enabled. To enable thinking, place /think in the system prompt and prefill <think>\n for thinking mode. To disable thinking, put /no_think in the system prompt.
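
A minimal sketch of that toggle via an OpenAI-compatible endpoint; the base URL and model name are placeholders, and the assistant-side <think> prefill is omitted since support for prefilled assistant turns varies by server:

```python
# Minimal sketch: switching Fornax between thinking and non-thinking mode
# through the system prompt, per the card above. Placeholders throughout.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

def ask(question: str, thinking: bool) -> str:
    # "/think" enables deliberate chain-of-thought; "/no_think" disables it.
    system = "/think" if thinking else "/no_think"
    resp = client.chat.completions.create(
        model="gemma-3-12b-fornaxv.2-qat-cot",  # placeholder model name
        messages=[
            {"role": "system", "content": system},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(ask("How many primes are there below 30?", thinking=True))
```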

Repository: localai · License: gemma

thedrummer_tiger-gemma-12b-v3
Gemma 3 12B tune that unlocks more capabilities and less positivity! Should be vision capable. More neutral tone, especially when dealing with harder topics. No em-dashes just for the heck of it. Less markdown responses, more paragraphs. Better steerability to harder themes.

Repository: localai · License: gemma

thedrummer_gemma-3-r1-12b-v1
Gemma 3 12B reasoning tune that unlocks more capabilities and less positivity! Should be vision capable.

Repository: localai · License: gemma

yanolja_yanoljanext-rosetta-12b-2510
This model is a fine-tuned version of google/gemma-3-12b-pt. As it is intended solely for text generation, we have extracted and utilized only the Gemma3ForCausalLM component from the original architecture. Unlike our previous EEVE models, this model does not feature an expanded tokenizer.

Base model: google/gemma-3-12b-pt. This is a 12-billion-parameter, decoder-only language model built on the Gemma3 architecture and fine-tuned by Yanolja NEXT. It is specifically designed to translate structured data (JSON format) while preserving the original data structure. The model was trained on a multilingual dataset covering the following languages equally: Arabic, Bulgarian, Chinese, Czech, Danish, Dutch, English, Finnish, French, German, Greek, Gujarati, Hebrew, Hindi, Hungarian, Indonesian, Italian, Japanese, Korean, Persian, Polish, Portuguese, Romanian, Russian, Slovak, Spanish, Swedish, Tagalog, Thai, Turkish, Ukrainian, Vietnamese. While optimized for these languages, it may also perform effectively on other languages supported by the base Gemma3 model.
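
To illustrate the structure-preserving translation task described above, a minimal sketch against an OpenAI-compatible endpoint; the prompt wording and model name are assumptions, as the card does not specify a prompt template:

```python
# Minimal sketch: translating JSON values while keeping keys and structure
# intact. Prompt wording and model name are assumptions, not from the card.
import json
from openai import OpenAI

client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed")

payload = {"title": "Ocean view room", "description": "Breakfast included."}

resp = client.chat.completions.create(
    model="yanolja_yanoljanext-rosetta-12b-2510",  # placeholder model name
    messages=[{
        "role": "user",
        "content": "Translate the JSON values into Korean, preserving all "
                   "keys and structure:\n"
                   + json.dumps(payload, ensure_ascii=False),
    }],
)
print(resp.choices[0].message.content)
```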

Repository: localai · License: gemma

latitudegames_wayfarer-large-70b-llama-3.3
We've heard over and over from AI Dungeon players that modern AI models are too nice, never letting them fail or die. While it may be good for a chatbot to be nice and helpful, great stories and games aren't all rainbows and unicorns. They have conflict, tension, and even death. These create real stakes and consequences for characters and the journeys they go on. Similarly, great games need opposition: you must be able to fail, die, and maybe even start over. This makes games more fun! However, the vast majority of AI models, through alignment RLHF, have been trained away from darkness, violence, or conflict, preventing them from fulfilling this role. To give our players better options, we decided to train our own model to fix these issues. The Wayfarer model series is a set of adventure role-play models specifically trained to give players a challenging and dangerous experience. We wanted to contribute back to the open-source community that we've benefited so much from, so we open-sourced a 12B-parameter version back in January. We thought people would love it, and they were even more excited than we expected. Due to popular request, we decided to train a larger 70B version based on Llama 3.3.

Repository: localai · License: llama3.3

mistral-nemo-instruct-2407
The Mistral-Nemo-Instruct-2407 Large Language Model (LLM) is an instruct fine-tuned version of Mistral-Nemo-Base-2407. Trained jointly by Mistral AI and NVIDIA, it significantly outperforms existing models of similar or smaller size.

Repository: localai · License: apache-2.0
