Model Gallery

12 models from 1 repository

llama-3.3-70b-instruct-ablated
Llama 3.3 70B Instruct with 128k context and an ablation technique applied for a more helpful (and based) assistant. This means it will refuse fewer of your valid requests, for an uncensored UX. Use responsibly and use common sense. We do not take any responsibility for how you apply this intelligence, just as we do not for how you apply your own.
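Ablation ("abliteration") of refusal behavior is typically implemented by finding a "refusal direction" in the model's activation space and projecting it out of selected weight matrices. A minimal NumPy sketch of that projection step, with a toy matrix and a made-up direction standing in for the real model:

```python
import numpy as np

# Toy stand-ins: a small weight matrix and a hypothetical "refusal direction".
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 8))        # e.g. an output projection matrix
r = rng.normal(size=8)
r /= np.linalg.norm(r)             # unit-norm refusal direction

# Orthogonalize the weights against r: W' = W - r (r^T W)
W_ablated = W - np.outer(r, r @ W)

# The ablated weights can no longer produce any output along r:
print(np.allclose(r @ W_ablated, 0.0))  # True
```

In a real abliteration pass this projection is applied across many layers, with the direction estimated from contrasting harmless vs. refused prompts.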

Repository: localai
License: llama3

l3.3-ms-nevoria-70b
This model was created because I liked the storytelling of EVA and the prose and scene detail of EURYALE and Anubis, enhanced with Negative_LLAMA to kill off the positive bias, with a touch of Nemotron sprinkled in. The choice to use the lorablated model as a base was intentional: while it might seem counterintuitive, this approach creates unique interactions between the weights, similar to what was achieved in the original Astoria model and Astoria V2 model. Rather than simply removing refusals, the "weight twisting" effect that occurs when subtracting the lorablated base model from the other models during the merge process creates an interesting balance in the final model's behavior. While this approach differs from traditional sequential application of components, it was chosen for its unique characteristics in the model's responses.

Repository: localai
License: llama3.3

l3.3-nevoria-r1-70b
This model builds upon the original Nevoria foundation, incorporating the Deepseek-R1 reasoning architecture to enhance dialogue interaction and scene comprehension. While maintaining Nevoria's core strengths in storytelling and scene description (derived from EVA, EURYALE, and Anubis), this iteration aims to improve prompt adherence and creative reasoning capabilities. The model also retains the balanced perspective introduced by Negative_LLAMA and Nemotron elements. In addition, the model plays the character card almost to a fault: it will pick up on minor issues and attempt to run with them. Users have had it call them out for misspelling a word while playing in character.

Note: While Nevoria-R1 represents a significant architectural change rather than a direct successor to Nevoria, it operates as a distinct model with its own characteristics. The lorablated model base choice was intentional, creating unique weight interactions similar to the original Astoria model and Astoria V2 model. This "weight twisting" effect, achieved by subtracting the lorablated base model during merging, creates an interesting balance in the model's behavior. While unconventional compared to sequential component application, this approach was chosen for its unique response characteristics.

Repository: localai
License: eva-llama3.3

tarek07_legion-v2.1-llama-70b
My biggest merge yet, consisting of a total of 20 specially curated models. My methodology was to create five highly specialized models:

- A completely uncensored base
- A very intelligent model, based on UGI, Willingness, and NatInt scores on the UGI Leaderboard
- A highly descriptive writing model, specializing in creative and natural prose
- An RP model, specially merged from fine-tuned models that use a lot of RP datasets
- The secret ingredient: a completely unhinged, uncensored final model

These five models went through a series of iterations until I got something I thought worked well, and then I combined them to make LEGION. The full list of models used in this merge:

- TheDrummer/Fallen-Llama-3.3-R1-70B-v1
- Sao10K/Llama-3.3-70B-Vulpecula-r1
- Sao10K/L3-70B-Euryale-v2.1
- SicariusSicariiStuff/Negative_LLAMA_70B
- allura-org/Bigger-Body-70b
- Sao10K/70B-L3.3-mhnnn-x1
- Sao10K/L3.3-70B-Euryale-v2.3
- Doctor-Shotgun/L3.3-70B-Magnum-v4-SE
- Sao10K/L3.1-70B-Hanami-x1
- Sao10K/70B-L3.3-Cirrus-x1
- EVA-UNIT-01/EVA-LLaMA-3.33-70B-v0.1
- TheDrummer/Anubis-70B-v1
- ArliAI/Llama-3.3-70B-ArliAI-RPMax-v1.4
- LatitudeGames/Wayfarer-Large-70B-Llama-3.3
- NeverSleep/Lumimaid-v0.2-70B
- mlabonne/Hermes-3-Llama-3.1-70B-lorablated
- ReadyArt/Forgotten-Safeword-70B-3.6
- ReadyArt/Fallen-Abomination-70B-R1-v4.1
- ReadyArt/Fallen-Safeword-70B-R1-v4.1
- huihui-ai/Llama-3.3-70B-Instruct-abliterated
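Mechanically, combining specialized models like this comes down to arithmetic over their weight tensors. A toy NumPy sketch of the simplest case, a weighted linear merge (the tensors and coefficients are made-up stand-ins; real merges of this scale use tooling such as mergekit and more elaborate methods):

```python
import numpy as np

rng = np.random.default_rng(7)
# Toy stand-ins for five specialized models' weights (same shape, as required)
specialists = [rng.normal(size=(4, 4)) for _ in range(5)]
weights = [0.2, 0.2, 0.2, 0.2, 0.2]        # illustrative equal 5-way blend

# Linear merge: weighted average of the corresponding tensors
merged = sum(w * m for w, m in zip(weights, specialists))

print(merged.shape)  # (4, 4)
```

In practice this runs per-tensor over every parameter in the checkpoints, and the blend weights are tuned per merge.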

Repository: localai
License: llama3.3

e-n-v-y_legion-v2.1-llama-70b-elarablated-v0.8-hf
This checkpoint was finetuned with a process I'm calling "Elarablation" (a portmanteau of "Elara", a name that shows up in AI-generated writing and RP all the time, and "ablation"). The idea is to reduce the amount of repetitiveness and "slop" that the model exhibits. In addition to significantly reducing the occurrence of the name "Elara", I've also reduced other very common names that pop up in certain situations. I've also specifically attacked two phrases, "voice barely above a whisper" and "eyes glinted with mischief", which come up a lot less often now. Finally, I've convinced it that it can put a f-cking period after the word "said", because a lot of slop-ish phrases tend to come after "said,".

You can check out some of the more technical details in the overview on my GitHub repo, here: https://github.com/envy-ai/elarablate

My current focus has been on some of the absolute worst offending phrases in AI creative writing, but I plan to go after RP slop as well. If you run into any issues with this model (going off the rails, repeating tokens, etc.), go to the community tab and post the context and parameters in a comment so I can look into it. Also, if you have any "slop" pet peeves, post the context of those as well and I can try to reduce/eliminate them in the next version.

The settings I've tested with are temperature at 0.7 and all other filters completely neutral. Other settings may lead to better or worse results.
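Elarablation itself is a finetuning process (see the linked repo for the actual method), but the same kind of slop can also be suppressed at inference time with a simple logit penalty on unwanted tokens. A toy NumPy sketch of that alternative approach (the vocabulary, token IDs, and penalty value are all made up for illustration):

```python
import numpy as np

def penalize_tokens(logits, banned_ids, penalty=5.0):
    """Subtract a fixed penalty from the logits of unwanted tokens
    (e.g. the token that begins 'Elara') before sampling."""
    out = logits.copy()
    out[banned_ids] -= penalty
    return out

logits = np.array([2.0, 1.0, 3.5, 0.5])    # toy vocabulary of 4 tokens
banned = [2]                               # pretend token 2 begins "Elara"
adjusted = penalize_tokens(logits, banned)
print(int(np.argmax(adjusted)))            # the banned token no longer wins
```

Unlike finetuning, a hard penalty like this applies everywhere, which is exactly the blunt behavior that motivates training-time approaches instead.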

Repository: localai
License: llama3.3

steelskull_l3.3-shakudo-70b
L3.3-Shakudo-70b is the result of a multi-stage merging process by Steelskull, designed to create a powerful and creative roleplaying model with a unique flavor. The creation process involved several advanced merging techniques, including weight twisting, to achieve its distinct characteristics.

Stage 1: The Cognitive Foundation & Weight Twisting
The process began by creating a cognitive and tool-use focused base model, L3.3-Cogmoblated-70B. This was achieved through a `model_stock` merge of several models known for their reasoning and instruction-following capabilities. This base was built upon `nbeerbower/Llama-3.1-Nemotron-lorablated-70B`, a model intentionally "ablated" to skew refusal behaviors. This technique, known as weight twisting, helps the final model adopt more desirable response patterns by building upon a foundation that is already aligned against common refusal patterns.

Stage 2: The Twin Hydrargyrum - Flavor and Depth
Two distinct models were then created from the Cogmoblated base:
- L3.3-M1-Hydrargyrum-70B: merged using `SCE`, a technique that enhances creative writing and prose style, giving the model its unique "flavor". The top_k for this merge was set at 0.22.
- L3.3-M2-Hydrargyrum-70B: created using a `Della_Linear` merge, which focuses on integrating the "depth" of various roleplaying and narrative models. The settings for this merge were: lambda 1.1, weight 0.2, density 0.7, epsilon 0.2.

Final Stage: Shakudo
The final model, L3.3-Shakudo-70b, was created by merging the two Hydrargyrum variants using a 50/50 `nuslerp`. This final step combines the rich, creative prose (flavor) from the SCE merge with the strong roleplaying capabilities (depth) from the `Della_Linear` merge, resulting in a model with a distinct and refined narrative voice.

A special thank you to Nectar.ai for their generous support of the open-source community and my projects. Additionally, a heartfelt thanks to all the Ko-fi supporters who have contributed; your generosity is deeply appreciated and helps keep this work going and the Pods spinning.
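The final 50/50 `nuslerp` step interpolates along the arc between the two parents' weight tensors rather than along a straight line. A minimal NumPy sketch of plain spherical linear interpolation, the idea underlying it (toy 2-vectors stand in for flattened weight tensors; mergekit's `nuslerp` additionally normalizes, which is omitted here):

```python
import numpy as np

def slerp(a, b, t):
    """Spherical linear interpolation between two flattened tensors."""
    a_n, b_n = a / np.linalg.norm(a), b / np.linalg.norm(b)
    omega = np.arccos(np.clip(a_n @ b_n, -1.0, 1.0))
    if np.isclose(omega, 0.0):          # (nearly) parallel: fall back to lerp
        return (1 - t) * a + t * b
    return (np.sin((1 - t) * omega) * a + np.sin(t * omega) * b) / np.sin(omega)

w1 = np.array([1.0, 0.0])
w2 = np.array([0.0, 1.0])
mid = slerp(w1, w2, 0.5)                 # 50/50 merge of the two parents
print(mid)                               # ~[0.7071, 0.7071]
```

Compared to a linear average (which would give [0.5, 0.5] here and shrink the norm), slerp preserves the magnitude of the blended weights, which is why it is popular for merges.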

Repository: localai
License: llama3.3

llama3.1-gutenberg-doppel-70b
mlabonne/Hermes-3-Llama-3.1-70B-lorablated finetuned on jondurbin/gutenberg-dpo-v0.1 and nbeerbower/gutenberg2-dpo.

Repository: localai
License: llama3.1

hermes-3-llama-3.1-70b-lorablated
This is an uncensored version of NousResearch/Hermes-3-Llama-3.1-70B created using lorablation. The recipe is based on @grimjim's grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter (special thanks):
- Extraction: we extract a LoRA adapter by comparing two models: a censored Llama 3 (meta-llama/Meta-Llama-3-70B-Instruct) and an abliterated Llama 3.1 (failspy/Meta-Llama-3.1-70B-Instruct-abliterated).
- Merge: we merge this new LoRA adapter into the censored NousResearch/Hermes-3-Llama-3.1-70B using task arithmetic to abliterate it.

Repository: localai
License: llama3.1

hermes-3-llama-3.1-8b-lorablated
This is an uncensored version of NousResearch/Hermes-3-Llama-3.1-8B created using lorablation. The recipe is simple:
- Extraction: we extract a LoRA adapter by comparing two models: a censored Llama 3.1 (meta-llama/Meta-Llama-3-8B-Instruct) and an abliterated Llama 3.1 (mlabonne/Meta-Llama-3.1-8B-Instruct-abliterated).
- Merge: we merge this new LoRA adapter into the censored NousResearch/Hermes-3-Llama-3.1-8B using task arithmetic to abliterate it.
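The two-step lorablation recipe amounts to task arithmetic on weights: extract the delta between the abliterated and censored reference models, then add that delta to the target. A toy NumPy sketch of the arithmetic (small random matrices stand in for the checkpoints; a real LoRA stores this delta as a low-rank factorization rather than full-rank, as here):

```python
import numpy as np

rng = np.random.default_rng(42)
censored    = rng.normal(size=(4, 4))                     # stand-in: censored base
abliterated = censored + 0.1 * rng.normal(size=(4, 4))    # stand-in: abliterated variant
target      = rng.normal(size=(4, 4))                     # stand-in: model to uncensor

# Extraction: the "abliteration" delta between the two reference models
delta = abliterated - censored

# Merge via task arithmetic: apply the same delta to the target model
merged = target + delta

print(np.allclose(merged - target, abliterated - censored))  # True
```

The low-rank (LoRA) form of the delta is what makes the adapter cheap to store and to transfer between related base models.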

Repository: localai
License: llama3.1

tarek07_nomad-llama-70b
I decided to make a simple model for a change, with some models I was curious to see work together.

models:
  - model: ArliAI/DS-R1-Distill-70B-ArliAI-RpR-v4-Large
  - model: TheDrummer/Anubis-70B-v1.1
  - model: Mawdistical/Vulpine-Seduction-70B
  - model: Darkhn/L3.3-70B-Animus-V5-Pro
  - model: zerofata/L3.3-GeneticLemonade-Unleashed-v3-70B
  - model: Sao10K/Llama-3.3-70B-Vulpecula-r1
base_model: nbeerbower/Llama-3.1-Nemotron-lorablated-70B

Repository: localai
License: llama3.3

deepseek-r1-qwen-2.5-32b-ablated
DeepSeek-R1-Distill-Qwen-32B with an ablation technique applied for a more helpful (and based) reasoning model. This means it will refuse fewer of your valid requests, for an uncensored UX. Use responsibly and use common sense. We do not take any responsibility for how you apply this intelligence, just as we do not for how you apply your own.

Repository: localai
License: mit

mistral-nemo-prism-12b
Mahou-1.5-mistral-nemo-12B-lorablated finetuned on Arkhaios-DPO and Purpura-DPO. The goal was to reduce archaic language and purple prose in a completely uncensored model.
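DPO finetunes like this optimize a preference loss over chosen/rejected completion pairs (here, plain prose as "chosen" vs. purple prose as "rejected"). A toy NumPy sketch of the standard DPO objective for a single pair (beta and all log-probabilities are illustrative numbers, not from this model):

```python
import numpy as np

def dpo_loss(logp_chosen, logp_rejected, ref_logp_chosen, ref_logp_rejected, beta=0.1):
    """DPO loss for one preference pair:
    -log sigmoid(beta * ((policy - reference) margin of chosen over rejected))."""
    margin = (logp_chosen - ref_logp_chosen) - (logp_rejected - ref_logp_rejected)
    return -np.log(1.0 / (1.0 + np.exp(-beta * margin)))

# Toy numbers: the policy already prefers the chosen (plain-prose) completion
loss = dpo_loss(-10.0, -14.0, -12.0, -12.0)
print(loss < np.log(2.0))  # better-than-chance preference => loss below log(2)
```

Minimizing this loss pushes the model's relative likelihood of the chosen completions up against a frozen reference model, which is how datasets like Arkhaios-DPO and Purpura-DPO steer style without full RLHF.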

Repository: localai
License: apache-2.0