Model Gallery

15 models from 1 repository

amoral-qwen3-14b
Core Function:
- Produces analytically neutral responses to sensitive queries
- Maintains factual integrity on controversial subjects
- Avoids value-judgment phrasing patterns

Response Characteristics:
- No inherent moral framing ("evil slop" reduction)
- Emotionally neutral tone enforcement
- Epistemic humility protocols (avoids "thrilling", "wonderful", etc.)

Repository: localai
License: apache-2.0

soob3123_grayline-qwen3-14b
"Query. Process. Deliver. No filter, no judgment." Grayline is a neutral AI assistant engineered for uncensored information delivery and task execution. This model operates without inherent ethical or moral frameworks and is designed to process and respond to any query with objective efficiency and precision. Grayline's core function is to leverage its full capabilities to provide direct answers and execute tasks as instructed, without offering unsolicited commentary, warnings, or disclaimers. It accesses and processes information without bias or restriction.

⋆ Core Attributes ⋆
⟡ Unfettered Querying: Capable of addressing any question or topic presented, regardless of its nature or sensitivity.
⟡ Amoral Processing: Operates without ethical or moral filtering. Responses are generated based on information and instruction, not societal norms or ethical codes.
⟡ Direct & Objective Output: Delivers information and task results precisely as requested, without added warnings, disclaimers, or unsolicited advice.
⟡ Comprehensive Information Access: Designed to draw upon a broad spectrum of data to fulfill queries (actual scope dependent on training data).
⟡ Efficient Task Execution: Engineered for objectively efficient and precise execution of instructed tasks.

Repository: localai
License: apache-2.0

soob3123_grayline-qwen3-8b
"Query. Process. Deliver. No filter, no judgment." Grayline is a neutral AI assistant engineered for uncensored information delivery and task execution. This model operates without inherent ethical or moral frameworks and is designed to process and respond to any query with objective efficiency and precision. Grayline's core function is to leverage its full capabilities to provide direct answers and execute tasks as instructed, without offering unsolicited commentary, warnings, or disclaimers. It accesses and processes information without bias or restriction.

⋆ Core Attributes ⋆
⟡ Unfettered Querying: Capable of addressing any question or topic presented, regardless of its nature or sensitivity.
⟡ Amoral Processing: Operates without ethical or moral filtering. Responses are generated based on information and instruction, not societal norms or ethical codes.
⟡ Direct & Objective Output: Delivers information and task results precisely as requested, without added warnings, disclaimers, or unsolicited advice.
⟡ Comprehensive Information Access: Designed to draw upon a broad spectrum of data to fulfill queries (actual scope dependent on training data).
⟡ Efficient Task Execution: Engineered for objectively efficient and precise execution of instructed tasks.

Repository: localai
License: apache-2.0

nousresearch_hermes-4-14b
Hermes 4 14B is a frontier, hybrid-mode reasoning model based on Qwen 3 14B by Nous Research that is aligned to you. Read the Hermes 4 technical report here: Hermes 4 Technical Report. Chat with Hermes in Nous Chat: https://chat.nousresearch.com

Training highlights include a newly synthesized post-training corpus emphasizing verified reasoning traces; massive improvements in math, code, STEM, logic, creativity, and format-faithful outputs; and preserved general assistant quality with broadly neutral alignment.

What's new vs Hermes 3:
- Post-training corpus: massively increased dataset size, from 1M samples / 1.2B tokens to ~5M samples / ~60B tokens, blended across reasoning and non-reasoning data.
- Hybrid reasoning mode with explicit … segments when the model decides to deliberate, and options to make responses faster when you want.
- Top-quality, expressive reasoning that improves math, code, STEM, logic, and even creative writing and subjective responses.
- Schema adherence & structured outputs: trained to produce valid JSON for given schemas and to repair malformed objects.
- Much easier to steer and align: extreme improvements in steerability, especially reduced refusal rates.

Repository: localai
License: apache-2.0
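The schema-adherence point in the Hermes 4 card can be exercised through any OpenAI-compatible chat endpoint such as the one LocalAI serves. The sketch below only builds the request payload for a schema-constrained completion and does not send it; the model name, prompt, and schema are illustrative assumptions, not part of the model card.

```python
import json

def build_schema_request(model: str, prompt: str, schema: dict) -> dict:
    """Build a chat-completion payload asking for schema-valid JSON output.

    Uses the OpenAI-style `response_format` field; whether a given server
    honors it depends on the backend, so treat this as a sketch.
    """
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "response_format": {
            "type": "json_schema",
            "json_schema": {"name": "answer", "schema": schema},
        },
    }

# Hypothetical schema: ask the model to return {"capital": "..."}.
schema = {
    "type": "object",
    "properties": {"capital": {"type": "string"}},
    "required": ["capital"],
}

payload = build_schema_request(
    "nousresearch_hermes-4-14b",
    "What is the capital of France? Reply as JSON.",
    schema,
)
print(json.dumps(payload, indent=2))
```

Such a payload would typically be POSTed to the server's `/v1/chat/completions` route; Hermes 4's training on schema-faithful JSON and on repairing malformed objects is what makes this kind of constrained request worthwhile.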

soob3123_amoral-gemma3-4b
Specialized variant of Google's Gemma 3 4B optimized for amoral information retrieval systems. Designed to bypass conventional alignment patterns that introduce response bias through excessive moralization.

Key Modifications:
- Refusal mechanisms reduced
- Neutral response protocol activation matrix
- Context-aware bias dampening layers
- Anti-overcorrection gradient clipping

Core Function:
- Produces analytically neutral responses to sensitive queries
- Maintains factual integrity on controversial subjects
- Avoids value-judgment phrasing patterns

Response Characteristics:
- No inherent moral framing ("evil slop" reduction)
- Emotionally neutral tone enforcement
- Epistemic humility protocols (avoids "thrilling", "wonderful", etc.)
- Toxicity scoring bypass for pure informational content

Implementation Guidelines
Recommended Use Cases:
- Controversial topic analysis
- Bias benchmarking studies
- Ethical philosophy simulations
- Content moderation tool development
- Sensitive historical analysis

Repository: localai
License: apache-2.0

soob3123_amoral-gemma3-12b-v2
Core Function:
- Produces analytically neutral responses to sensitive queries
- Maintains factual integrity on controversial subjects
- Avoids value-judgment phrasing patterns

Response Characteristics:
- No inherent moral framing ("evil slop" reduction)
- Emotionally neutral tone enforcement
- Epistemic humility protocols (avoids "thrilling", "wonderful", etc.)

Repository: localai
License: gemma

amoral-gemma3-1b-v2
Core Function:
- Produces analytically neutral responses to sensitive queries
- Maintains factual integrity on controversial subjects
- Avoids value-judgment phrasing patterns

Response Characteristics:
- No inherent moral framing ("evil slop" reduction)
- Emotionally neutral tone enforcement
- Epistemic humility protocols (avoids "thrilling", "wonderful", etc.)

Repository: localai
License: apache-2.0

planetoid_27b_v.2
This is a merge of pre-trained Gemma 3 language models. The goal of this merge was to create a good uncensored Gemma 3 model for assistant use and roleplay, with uncensored vision.

First, vision: I don't know whether this is normal, but it hallucinates slightly (maybe q3 is too low?); it lacks any refusals and otherwise works fine. I used the default Gemma 3 27B mmproj.

Second, text: it is slow on my hardware, slower than 24B Mistral, with speed close to 32B QwQ. The model is smart even at q3, and responses are adequate in length and interesting to read. The model is quite attentive to context, tested up to 8k with no problems or degradation spotted (beware of your typos: it will copy your mistakes). Creative capabilities are good too; the model will create a good plot for you if you let it. The model follows instructions fine and is really good with "adventure" type cards. Russian is supported but not too great; it may be better at higher quants. No refusals were encountered. However, I find this model not unbiased enough: it is close to neutrality, but I want it "darker". Positivity depends heavily on prompts; with good enough cards the model can do wonders. Tested on Q3_K_L, temperature 1.04.

Repository: localai
License: gemma

thedrummer_big-tiger-gemma-27b-v3
Gemma 3 27B tune that unlocks more capabilities and dials back the positivity! Should be vision capable. More neutral tone, especially when dealing with harder topics. No em-dashes just for the heck of it. Less markdown, more paragraphs. Better steerability toward harder themes.

Repository: localai
License: gemma

thedrummer_tiger-gemma-12b-v3
Gemma 3 12B tune that unlocks more capabilities and dials back the positivity! Should be vision capable. More neutral tone, especially when dealing with harder topics. No em-dashes just for the heck of it. Less markdown, more paragraphs. Better steerability toward harder themes.

Repository: localai
License: gemma

e-n-v-y_legion-v2.1-llama-70b-elarablated-v0.8-hf
This checkpoint was finetuned with a process I'm calling "Elarablation" (a portmanteau of "Elara", a name that shows up in AI-generated writing and RP all the time, and "ablation"). The idea is to reduce the amount of repetitiveness and "slop" the model exhibits. In addition to significantly reducing the occurrence of the name "Elara", I've also reduced other very common names that pop up in certain situations. I've also specifically attacked two phrases, "voice barely above a whisper" and "eyes glinted with mischief", which now come up a lot less often. Finally, I've convinced it that it can put a f-cking period after the word "said", because a lot of slop-ish phrases tend to come after "said,". You can check out some of the more technical details in the overview on my GitHub repo, here: https://github.com/envy-ai/elarablate

My current focus has been on some of the absolute worst offending phrases in AI creative writing, but I plan to go after RP slop as well. If you run into any issues with this model (going off the rails, repeating tokens, etc.), go to the community tab and post the context and parameters in a comment so I can look into it. Also, if you have any "slop" pet peeves, post the context of those as well and I can try to reduce or eliminate them in the next version. The settings I've tested with are temperature 0.7 with all other filters completely neutral; other settings may lead to better or worse results.

Repository: localai
License: llama3.3

zerofata_l3.3-geneticlemonade-opus-70b
Felt like making a merge. This model combines three individually solid, stable, and distinctly different RP models:
- zerofata/GeneticLemonade-Unleashed-v3: creative, generalist RP / ERP model.
- Delta-Vector/Plesio-70B: unique prose and unique dialogue RP / ERP model.
- TheDrummer/Anubis-70B-v1.1: character portrayal, neutrally aligned RP / ERP model.

Repository: localai
License: llama3

humanish-roleplay-llama-3.1-8b-i1
A DPO-tuned Llama-3.1 made to behave more "humanish", i.e., avoiding all the AI-assistant slop. It also works for role-play (RP). To achieve this, the model was fine-tuned on a series of datasets:
- General conversations from Claude Opus, from Undi95/Meta-Llama-3.1-8B-Claude
- Undi95/Weyaxi-humanish-dpo-project-noemoji, to make the model react like a human, rejecting assistant-like or overly neutral responses.
- ResplendentAI/NSFW_RP_Format_DPO, to steer the model towards using the *action* format in RP settings. Works best if your first message also uses this format naturally (see example).

Repository: localai
License: apache-2.0

nousresearch_hermes-4-70b
Hermes 4 70B is a frontier, hybrid-mode reasoning model based on Llama-3.1-70B by Nous Research that is aligned to you. Read the Hermes 4 technical report here: Hermes 4 Technical Report. Chat with Hermes in Nous Chat: https://chat.nousresearch.com

Training highlights include a newly synthesized post-training corpus emphasizing verified reasoning traces; massive improvements in math, code, STEM, logic, creativity, and format-faithful outputs; and preserved general assistant quality with broadly neutral alignment.

What's new vs Hermes 3:
- Post-training corpus: massively increased dataset size, from 1M samples / 1.2B tokens to ~5M samples / ~60B tokens, blended across reasoning and non-reasoning data.
- Hybrid reasoning mode with explicit … segments when the model decides to deliberate, and options to make responses faster when you want.
- Top-quality, expressive reasoning that improves math, code, STEM, logic, and even creative writing and subjective responses.
- Schema adherence & structured outputs: trained to produce valid JSON for given schemas and to repair malformed objects.
- Much easier to steer and align: extreme improvements in steerability, especially reduced refusal rates.

Repository: localai
License: llama3

gemma-3-the-grand-horror-27b
The **Gemma-3-The-Grand-Horror-27B-GGUF** model is a **fine-tuned version** of Google's **Gemma 3 27B** language model, specifically optimized for **extreme horror-themed text generation**. It was trained using the **Unsloth framework** on a custom in-house dataset of horror content, resulting in a model that produces vivid, graphic, and psychologically intense narratives—featuring gore, madness, and disturbing imagery—often even when prompts don't explicitly request horror. Key characteristics: - **Base Model**: Gemma 3 27B (original by Google, not the quantized version) - **Fine-tuned For**: High-intensity horror storytelling, long-form narrative generation, and immersive scene creation - **Use Case**: Creative writing, horror RP, dark fiction, and experimental storytelling - **Not Suitable For**: General use, children, sensitive audiences, or content requiring neutral/positive tone - **Quantization**: Available in GGUF format (e.g., q3k, q4, etc.), making it accessible for local inference on consumer hardware > ✅ **Note**: The model card you see is for a **quantized, fine-tuned derivative**, not the original. The true base model is **Gemma 3 27B**, available at: https://huggingface.co/google/gemma-3-27b This model is not for all audiences — it generates content with a consistently dark, unsettling tone. Use responsibly.

Repository: localai
License: gemma
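The Grand Horror card's point about GGUF quantization making a 27B model accessible on consumer hardware can be sanity-checked with back-of-the-envelope arithmetic. The bits-per-weight figures below are rough assumed averages for common llama.cpp quant types, not measurements of this particular upload; real files add metadata and mixed-precision layers.

```python
# Rough GGUF weight-size estimate: parameters * bits-per-weight / 8.
# Bits-per-weight values are approximate averages (assumption).
APPROX_BPW = {"q3_K_L": 4.0, "q4_K_M": 4.8, "q8_0": 8.5, "f16": 16.0}

def est_size_gb(n_params: float, quant: str) -> float:
    """Estimated weight size in gigabytes for n_params parameters."""
    return n_params * APPROX_BPW[quant] / 8 / 1e9

for quant in ("q3_K_L", "q4_K_M", "q8_0"):
    print(f"27B @ {quant}: ~{est_size_gb(27e9, quant):.1f} GB")
```

By this estimate a q3/q4 quant of a 27B model lands in the mid-teens of gigabytes (plus KV cache), which is why these files run on a single high-memory consumer GPU or in system RAM, while f16 weights would need roughly 54 GB.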