Model Gallery

19 models from 1 repository

huihui-ai_huihui-gpt-oss-20b-bf16-abliterated
This is an uncensored version of unsloth/gpt-oss-20b-BF16 created with abliteration (see remove-refusals-with-transformers to learn more about it).

Repository: localai · License: apache-2.0

openai-gpt-oss-20b-abliterated-uncensored-neo-imatrix
These are NEO Imatrix GGUFs; the NEO dataset, by DavidAU, improves overall performance and is suitable for all use cases. This model uses Huihui-gpt-oss-20b-BF16-abliterated as a base, which de-censors the model and removes refusals. Example output below (creative; IQ4_NL), using the settings below. Due to abliteration, this model can be a little rough around the edges; be sure to apply the settings below for best operation. It can also be creative, outright crazy, and rational too. Enjoy!

Repository: localai · License: apache-2.0

huihui-ai_qwen3-14b-abliterated
This is an uncensored version of Qwen/Qwen3-14B created with abliteration (see remove-refusals-with-transformers to learn more about it). This is a crude, proof-of-concept implementation that removes refusals from an LLM without using TransformerLens. Ablation was performed using a new, faster method that yields better results.

Repository: localai · License: apache-2.0

huihui-jan-nano-abliterated
This is an uncensored version of Menlo/Jan-nano created with abliteration (see remove-refusals-with-transformers to learn more about it). This is a crude, proof-of-concept implementation that removes refusals from an LLM without using TransformerLens. Ablation was performed using a new, faster method that yields better results.

Repository: localai · License: apache-2.0

mlabonne_gemma-3-27b-it-abliterated
This is an uncensored version of google/gemma-3-27b-it created with a new abliteration technique. See this article to learn more about abliteration.

Repository: localai · License: gemma

mlabonne_gemma-3-12b-it-abliterated
This is an uncensored version of google/gemma-3-12b-it created with a new abliteration technique. See this article to learn more about abliteration.

Repository: localai · License: gemma

mlabonne_gemma-3-4b-it-abliterated
This is an uncensored version of google/gemma-3-4b-it created with a new abliteration technique. See this article to learn more about abliteration.

Repository: localai · License: gemma

huihui-ai_gemma-3-1b-it-abliterated
This is an uncensored version of google/gemma-3-1b-it created with abliteration (see remove-refusals-with-transformers to learn more about it). This is a crude, proof-of-concept implementation that removes refusals from an LLM without using TransformerLens.

Repository: localai · License: gemma

huihui-ai_huihui-gemma-3n-e4b-it-abliterated
This is an uncensored version of google/gemma-3n-E4B-it created with abliteration (see remove-refusals-with-transformers to learn more about it). This is a crude, proof-of-concept implementation that removes refusals from an LLM without using TransformerLens. Only the text component was processed, not the image component. After abliteration, the model seems to produce a much wider range of output, as if opened from a magic box.

Repository: localai · License: gemma

falcon3-1b-instruct-abliterated
This is an uncensored version of tiiuae/Falcon3-1B-Instruct created with abliteration (see remove-refusals-with-transformers to learn more about it). This is a crude, proof-of-concept implementation that removes refusals from an LLM without using TransformerLens.

Repository: localai · License: falcon-llm-license

falcon3-3b-instruct-abliterated
This is an uncensored version of tiiuae/Falcon3-3B-Instruct created with abliteration (see remove-refusals-with-transformers to learn more about it). This is a crude, proof-of-concept implementation that removes refusals from an LLM without using TransformerLens.

Repository: localai · License: falcon-llm-license

falcon3-10b-instruct-abliterated
This is an uncensored version of tiiuae/Falcon3-10B-Instruct created with abliteration (see remove-refusals-with-transformers to learn more about it). This is a crude, proof-of-concept implementation that removes refusals from an LLM without using TransformerLens.

Repository: localai · License: falcon-llm-license

falcon3-7b-instruct-abliterated
This is an uncensored version of tiiuae/Falcon3-7B-Instruct created with abliteration (see remove-refusals-with-transformers to learn more about it). This is a crude, proof-of-concept implementation that removes refusals from an LLM without using TransformerLens.

Repository: localai · License: falcon-llm-license

meta-llama-3.1-8b-instruct-abliterated
This is an uncensored version of Llama 3.1 8B Instruct created with abliteration.

Repository: localai · License: llama3.1

llama-3.1-8b-instruct-ortho-v3
A few different attempts at orthogonalization/abliteration of llama-3.1-8b-instruct, using variations of the method from "Mechanistically Eliciting Latent Behaviors in Language Models". Each of these uses different vectors, and the new refusal boundaries lie in somewhat different places. None of them seem totally jailbroken.

Repository: localai · License: wtfpl

l3.1-purosani-2-8b
The following models were included in the merge:

- hf-100/Llama-3-Spellbound-Instruct-8B-0.3
- arcee-ai/Llama-3.1-SuperNova-Lite + grimjim/Llama-3-Instruct-abliteration-LoRA-8B
- THUDM/LongWriter-llama3.1-8b + ResplendentAI/Smarts_Llama3
- djuna/L3.1-Suze-Vume-2-calc
- djuna/L3.1-ForStHS + Blackroot/Llama-3-8B-Abomination-LORA

Repository: localai · License: llama3.1

llama-3.1-8b-instruct-uncensored-delmat-i1
Decensored using a custom training script guided by activations, similar to ablation/"abliteration" scripts but not exactly the same approach. I've found this effect to be stronger than most abliteration scripts, so please use responsibly. The training script is released under the MIT license: https://github.com/nkpz/DeLMAT

Repository: localai · License: llama3.1

huihui-ai_deepseek-r1-distill-llama-70b-abliterated
This is an uncensored version of deepseek-ai/DeepSeek-R1-Distill-Llama-70B created with abliteration (see remove-refusals-with-transformers to learn more about it). This is a crude, proof-of-concept implementation that removes refusals from an LLM without using TransformerLens.

Repository: localai

flux.1dev-abliteratedv2
The FLUX.1 [dev] Abliterated-v2 model is a modified version of FLUX.1 [dev] and a successor to FLUX.1 [dev] Abliterated. This version has undergone a process called unlearning, which removes the model's built-in refusal mechanism. This allows the model to respond to a wider range of prompts, including those that the original model might have deemed inappropriate or harmful. The abliteration process involves identifying and isolating the specific components of the model responsible for refusal behavior and then modifying or ablating those components. This results in a model that is more flexible and responsive, while still maintaining the core capabilities of the original FLUX.1 [dev] model.

Repository: localai · License: apache-2.0
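Many of the entries above refer to "abliteration": estimating a refusal direction from the model's activations and projecting it out of the weights. The following is only a minimal NumPy sketch of that directional-ablation idea under the simplified single-direction picture described above; the activations, matrix shapes, and names (`refusal_dir`, `ablate_direction`, `W`) are illustrative toys, not taken from any of the listed repositories.

```python
# Sketch of directional ablation ("abliteration"), assuming a single
# "refusal direction" can be estimated from activations and projected
# out of a weight matrix. All values here are toy stand-ins.
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for mean residual-stream activations on two prompt sets
# (in practice these come from running the model on real prompts).
mean_harmful = rng.normal(size=8)
mean_harmless = rng.normal(size=8)

# Estimate the refusal direction as the normalized difference of means.
refusal_dir = mean_harmful - mean_harmless
refusal_dir /= np.linalg.norm(refusal_dir)

def ablate_direction(W, v):
    """Project unit direction v out of W's output space, so W can no
    longer write any component along v into the residual stream."""
    return W - np.outer(v, v @ W)

W = rng.normal(size=(8, 8))   # toy weight matrix writing to the stream
W_abl = ablate_direction(W, refusal_dir)

# Every output of the ablated matrix is orthogonal to the refusal direction.
print(np.allclose(refusal_dir @ W_abl, 0.0))  # → True
```

In real abliteration scripts this projection is typically applied to the attention-output and MLP down-projection matrices of every layer; the sketch shows only the core linear algebra.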