Model Gallery

19 models from 1 repository

yanfei-v2-qwen3-32b
A repair of Yanfei-Qwen-32B by TIES merging huihui-ai/Qwen3-32B-abliterated, Zhiming-Qwen3-32B, and Menghua-Qwen3-32B using mergekit.

Repository: localai · License: apache-2.0
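
For context on the method this card names: TIES merging (Yadav et al., 2023) turns each fine-tune into a task vector, trims it to its largest entries, elects a majority sign per parameter, and averages only the agreeing values. The sketch below is a minimal single-tensor illustration in PyTorch with toy tensors and an arbitrary density; the actual merge was done with mergekit, not this code.

```python
import torch

def ties_merge(base: torch.Tensor, finetuned: list[torch.Tensor],
               density: float = 0.2) -> torch.Tensor:
    """Merge fine-tuned tensors into `base` via trim / elect-sign / mean."""
    # 1. Task vectors: what each fine-tune changed relative to the base.
    deltas = [ft - base for ft in finetuned]
    # 2. Trim: keep only the top-`density` fraction of entries by magnitude.
    trimmed = []
    for d in deltas:
        k = max(1, int(density * d.numel()))
        threshold = d.abs().flatten().kthvalue(d.numel() - k + 1).values
        trimmed.append(torch.where(d.abs() >= threshold, d, torch.zeros_like(d)))
    # 3. Elect sign: majority sign per parameter, weighted by magnitude.
    stacked = torch.stack(trimmed)
    sign = torch.sign(stacked.sum(dim=0))
    # 4. Disjoint mean: average only the entries agreeing with the elected sign.
    agree = (torch.sign(stacked) == sign) & (stacked != 0)
    merged_delta = (stacked * agree).sum(dim=0) / agree.sum(dim=0).clamp(min=1)
    return base + merged_delta

# Toy usage: three "fine-tunes" of a 4x4 weight.
base = torch.zeros(4, 4)
fts = [base + 0.1 * torch.randn(4, 4) for _ in range(3)]
print(ties_merge(base, fts, density=0.3))
```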

virtuoso-lite
Virtuoso-Lite (10B) is our next-generation, 10-billion-parameter language model based on the Llama-3 architecture. It is distilled from Deepseek-v3 using ~1.1B tokens/logits, allowing it to achieve robust performance at a significantly reduced parameter count compared to larger models. Despite its compact size, Virtuoso-Lite excels in a variety of tasks, demonstrating advanced reasoning, code generation, and mathematical problem-solving capabilities.

Repository: localai · License: falcon-llm
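
As a rough illustration of the logit distillation this card describes, the student is trained to match the teacher's output distribution with a temperature-scaled KL loss. The snippet below is a generic sketch; the temperature, shapes, and random logits are illustrative assumptions, not Virtuoso-Lite's actual training setup.

```python
import torch
import torch.nn.functional as F

def distill_loss(student_logits, teacher_logits, T: float = 2.0):
    """Temperature-scaled KL between teacher and student token distributions."""
    log_p_student = F.log_softmax(student_logits / T, dim=-1)
    p_teacher = F.softmax(teacher_logits / T, dim=-1)
    # The T**2 factor keeps gradient magnitudes comparable across temperatures.
    return F.kl_div(log_p_student, p_teacher, reduction="batchmean") * T * T

# Toy batch: 8 token positions over a 32-entry vocabulary.
student_logits = torch.randn(8, 32, requires_grad=True)
teacher_logits = torch.randn(8, 32)
loss = distill_loss(student_logits, teacher_logits)
loss.backward()
print(loss.item())
```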

negative-anubis-70b-v1
Enjoyed SicariusSicariiStuff/Negative_LLAMA_70B, but the prose was too dry for my tastes, so I merged it with TheDrummer/Anubis-70B-v1 for verbosity. Anubis has a positivity bias, so Negative could balance things out. This is a merge of pre-trained language models created using mergekit. The following models were included in the merge: SicariusSicariiStuff/Negative_LLAMA_70B and TheDrummer/Anubis-70B-v1.

Repository: localai · License: llama3.3

nohobby_l3.3-prikol-70b-v0.5
99% of mergekit addicts quit before they hit it big. Gosh, I need to create an org for my test runs; my profile looks like a dumpster. What was it again? Ah, the new model. Exactly what I wanted. All I had to do was yank out the cursed official DeepSeek distill, and here we are. In brief tests it gave me some unusual takes on the character cards I'm used to; that alone makes it worth it, imo. Also, the writing is kinda nice.

Repository: localai · License: llama3.3

llama-3.3-magicalgirl-2
New merge. This is an experiment to increase the "Madness" in a model. The merge is based on top UGI-Bench models (so yeah, I would expect this to be benchmaxxing). This is the second time I'm using SCE; the previous MagicalGirl model seems to be quite happy with it. I added KaraKaraWitch/Llama-MiraiFanfare-3.3-70B based on feedback I got from others (people generally seem to remember it rather than my other models), so I'm not sure how it will play into the merge. The following models were included in the merge: TheDrummer/Anubis-70B-v1, SicariusSicariiStuff/Negative_LLAMA_70B, LatitudeGames/Wayfarer-Large-70B-Llama-3.3, KaraKaraWitch/Llama-MiraiFanfare-3.3-70B, and Black-Ink-Guild/Pernicious_Prophecy_70B.

Repository: localai · License: llama3.3

llama_3.3_70b_darkhorse-i1
Dark-coloration L3.3 merge, to be included in my merges. It can also be tried standalone for a darker Llama experience, but I didn't take the time. Edit: I took the time, and it meets its purpose. It's average on the basic metrics (smarts, perplexity), but it's not woke and is indeed unhinged. The model is not abliterated, though; it has refusals on the usual point-blank questions. I will play with it more, because it has potential. My note: 3/5 as a standalone, 4/5 as a merge brick. Warning: this model can be brutal and vulgar, more than most of my previous merges.

Repository: localai · License: llama3.3

hermes-3-llama-3.1-70b-lorablated
This is an uncensored version of NousResearch/Hermes-3-Llama-3.1-70B created using lorablation. The recipe is based on @grimjim's grimjim/Llama-3.1-8B-Instruct-abliterated_via_adapter (special thanks). Extraction: we extract a LoRA adapter by comparing two models: a censored Llama 3 (meta-llama/Meta-Llama-3-70B-Instruct) and an abliterated Llama 3.1 (failspy/Meta-Llama-3.1-70B-Instruct-abliterated). Merge: we merge this new LoRA adapter into the censored NousResearch/Hermes-3-Llama-3.1-70B using task arithmetic to abliterate it.

Repository: localai · License: llama3.1
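
The two-step recipe above can be sketched as plain task arithmetic on state dicts: subtract the censored weights from the abliterated ones to get a "refusal-removal" delta, then add it to the target model. This is a simplified full-rank illustration; the real recipe extracts a low-rank LoRA adapter instead, and every tensor and key name here is a toy stand-in.

```python
import torch

def apply_ablation_delta(target_sd, censored_sd, abliterated_sd, scale=1.0):
    """Add scale * (abliterated - censored) to each tensor of the target."""
    return {name: w + scale * (abliterated_sd[name] - censored_sd[name])
            for name, w in target_sd.items()}

# Toy state dicts standing in for the three checkpoints named above.
names = ["self_attn.o_proj.weight"]
censored = {n: torch.randn(4, 4) for n in names}
abliterated = {n: censored[n] + 0.05 * torch.randn(4, 4) for n in names}
hermes = {n: torch.randn(4, 4) for n in names}
merged = apply_ablation_delta(hermes, censored, abliterated)
print(merged["self_attn.o_proj.weight"].shape)
```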

l3.1-moe-2x8b-v0.2
This model is a Mixture of Experts (MoE) made with mergekit-moe. It uses the following base models: Joseph717171/Llama-3.1-SuperNova-8B-Lite_TIES_with_Base and ArliAI/Llama-3.1-8B-ArliAI-RPMax-v1.2. Heavily inspired by mlabonne/Beyonder-4x7B-v3.

Repository: localai · License: llama3.1
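
For readers unfamiliar with what mergekit-moe produces: the defining piece of a Mixture of Experts is a learned gate that routes each token to its top-scoring expert(s). The toy layer below sketches that routing; the dimensions, expert bodies, and top-k choice are illustrative assumptions, not the merged model's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TopKMoE(nn.Module):
    """Toy mixture-of-experts layer: a linear gate routes tokens to experts."""
    def __init__(self, dim: int = 64, n_experts: int = 2, k: int = 1):
        super().__init__()
        self.gate = nn.Linear(dim, n_experts, bias=False)
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))
            for _ in range(n_experts)
        )
        self.k = k

    def forward(self, x: torch.Tensor) -> torch.Tensor:  # x: (tokens, dim)
        scores = self.gate(x)                        # (tokens, n_experts)
        weights, idx = scores.topk(self.k, dim=-1)   # top-k experts per token
        weights = F.softmax(weights, dim=-1)         # normalize the kept scores
        out = torch.zeros_like(x)
        for slot in range(self.k):
            for e, expert in enumerate(self.experts):
                mask = idx[:, slot] == e             # tokens routed to expert e
                if mask.any():
                    w = weights[mask, slot].unsqueeze(-1)
                    out[mask] += w * expert(x[mask])
        return out

moe = TopKMoE()
print(moe(torch.randn(5, 64)).shape)  # -> torch.Size([5, 64])
```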

dark-chivalry_v1.0-i1
The dark side of chivalry... This model was merged with the TIES merge method, using ValiantLabs/Llama3.1-8B-ShiningValiant2 as the base.

Repository: localai · License: apache-2.0

hyperllama3.1-v2-i1
HyperLlama3.1-v2 is a merge of the following models using mergekit: vicgalle/Configurable-Llama-3.1-8B-Instruct, bunnycore/HyperLlama-3.1-8B, and ValiantLabs/Llama3.1-8B-ShiningValiant2.

Repository: localai · License: apache-2.0

ockerman0_anubislemonade-70b-v1
AnubisLemonade-70B-v1 is a 70B-parameter model by ockerman0 and a follow-up to Anubis-70B-v1.1. It is described as featuring Intermediate Thinking: unlike traditional models that provide single-pass responses, it employs a multi-phase thinking process that allows the model to think, reconsider, and refine its reasoning multiple times throughout a single response.

Repository: localai · License: llama3.1

acolyte-22b-i1
LoRA of a bunch of random datasets on top of Mistral-Small-Instruct-2409, then SLERPed onto base at 0.5. Decent enough for its size. Check the LoRA for dataset info.

Repository: localai · License: apache-2.0
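
"SLERPed onto base at 0.5" refers to spherical linear interpolation: instead of averaging two weight sets along a straight line, SLERP follows the arc between them on the hypersphere. A minimal sketch, under the simplifying assumption that each tensor is interpolated as one flattened vector (mergekit applies this per tensor, often with per-layer weights):

```python
import torch

def slerp(a: torch.Tensor, b: torch.Tensor, t: float) -> torch.Tensor:
    """Spherical interpolation between two tensors, treated as flat vectors."""
    a_flat, b_flat = a.flatten(), b.flatten()
    a_unit = a_flat / a_flat.norm()
    b_unit = b_flat / b_flat.norm()
    cos_omega = torch.clamp(a_unit @ b_unit, -1.0, 1.0)
    omega = torch.arccos(cos_omega)                 # angle between the vectors
    if omega < 1e-4:                                # nearly parallel: plain lerp
        return (1 - t) * a + t * b
    sin_omega = torch.sin(omega)
    mixed = (torch.sin((1 - t) * omega) / sin_omega) * a_flat \
          + (torch.sin(t * omega) / sin_omega) * b_flat
    return mixed.reshape(a.shape)

base = torch.randn(4, 4)
tuned = base + 0.1 * torch.randn(4, 4)
print(slerp(base, tuned, 0.5))                      # halfway along the arc
```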

starcannon-unleashed-12b-v1.0
This is a merge of pre-trained language models created using mergekit. The following models were included in the merge: MarinaraSpaghetti_NemoMix-Unleashed-12B and Nothingiisreal_MN-12B-Starcannon-v3.

Repository: localai · License: cc-by-nc-4.0

mn-chunky-lotus-12b
I had originally planned to use this model for future/further merges, but decided to go ahead and release it since it scored rather high on my local EQ-Bench testing (79.58 with 100% parsed at 8-bit). Bear in mind that most models tend to score a bit higher on my own local tests than their posted scores. Still, it's the highest score I've personally seen from all the models I've tested. It's a decent model with great emotional intelligence and acceptable adherence to various character personalities. It does a good job at roleplaying, despite being a bit bland at times. Overall, I like the way it writes, but it has a few formatting issues that show up from time to time, and an uncommon tendency to paste walls of character feelings/intentions at the end of some outputs without any prompting. This is something I hope to correct in future iterations. This is a merge of pre-trained language models created using mergekit. The following models were included in the merge: Epiculous/Violet_Twilight-v0.2, nbeerbower/mistral-nemo-gutenberg-12B-v4, and flammenai/Mahou-1.5-mistral-nemo-12B.

Repository: localai · License: cc-by-4.0

mn-12b-mag-mell-r1-iq-arm-imatrix
This is a merge of pre-trained language models created using mergekit. Mag Mell is a multi-stage merge, inspired by hyper-merges like Tiefighter and Umbral Mind, intended to be a general-purpose "Best of Nemo" model for any fictional, creative use case. Six models were chosen based on three categories; they were then paired up and merged via layer-weighted SLERP to create intermediate "specialists", which were evaluated in their domains. The specialists were then merged into the base via DARE-TIES, with hyperparameters chosen to reduce interference caused by the overlap of the three domains. The idea with this approach is to extract the best qualities of each component part and produce models whose task vectors represent more than the sum of their parts. The three specialists are as follows: Hero (RP, kink/trope coverage): Chronos Gold, Sunrose. Monk (intelligence, groundedness): Bophades, Wissenschaft. Deity (prose, flair): Gutenberg v4, Magnum 2.5 KTO.

I've been dreaming about this merge since Nemo tunes started coming out in earnest. From our testing, Mag Mell demonstrates worldbuilding capabilities unlike any model in its class, comparable to old adventuring models like Tiefighter, and prose that exhibits minimal "slop" (not bad for no finetuning), frequently devising electrifying metaphors that left us consistently astonished. I don't want to toot my own bugle, though; I'm really proud of how this came out, but please leave your feedback, good or bad. Special thanks as usual to Toaster for his feedback and Fizz for helping fund compute, as well as the KoboldAI Discord for their resources.

The following models were included in the merge: IntervitensInc/Mistral-Nemo-Base-2407-chatml, nbeerbower/mistral-nemo-bophades-12B, nbeerbower/mistral-nemo-wissenschaft-12B, elinas/Chronos-Gold-12B-1.0, Fizzarolli/MN-12b-Sunrose, nbeerbower/mistral-nemo-gutenberg-12B-v4, and anthracite-org/magnum-12b-v2.5-kto.

Repository: localai · License: unlicense
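
Mag Mell's final stage uses DARE-TIES. The DARE half (Yu et al., 2024) randomly drops most of each task vector's entries and rescales the survivors so the expected update is unchanged; TIES-style sign election (sketched earlier on this page) then resolves conflicts between specialists. A minimal sketch of just the DARE step, with toy tensors and an arbitrary drop rate:

```python
import torch

def dare(base: torch.Tensor, finetuned: torch.Tensor, drop_p: float = 0.9):
    """Drop-And-REscale: thin out a task vector while keeping its expectation."""
    delta = finetuned - base                         # task vector
    keep = (torch.rand_like(delta) >= drop_p).to(delta.dtype)
    return base + keep * delta / (1.0 - drop_p)      # rescale the survivors

base = torch.zeros(4, 4)
tuned = base + 0.1 * torch.randn(4, 4)
print(dare(base, tuned, drop_p=0.5))
```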

trappu_magnum-picaro-0.7-v2-12b
This model is a merge between Trappu/Nemo-Picaro-12B, a model trained on my own little dataset free of synthetic data, which focuses solely on storywriting and scenario prompting (example: [ Scenario: bla bla bla; Tags: bla bla bla ]), and anthracite-org/magnum-v2-12b. The reason I decided to merge it with Magnum (and don't recommend Picaro alone) is that Picaro, aside from its obvious flaws (rampant impersonation, stupidity, etc.), is a one-trick pony and will be really rough for the average LLM user to handle. The idea was to have Magnum work as some sort of stabilizer to fix the issues that emerge from the lack of multiturn/smart data in Picaro's dataset. It worked, I think. I enjoy the outputs, and it's smart enough to work with. But yeah, the goal of this merge was to make a model that's good at storytelling/narration but also fine at other forms of creative writing, such as RP or chatting. I don't think it's quite there yet, but it's something for sure.

Repository: localai · License: apache-2.0

luckyrp-24b
LuckyRP-24B is a merge of the following models using mergekit: trashpanda-org/MS-24B-Mullein-v0 and cognitivecomputations/Dolphin3.0-Mistral-24B.

Repository: localai · License: apache-2.0

lyranovaheart_starfallen-snow-fantasy-24b-ms3.2-v0.0
So... I'm kinda back, I hope. This was my attempt at getting a stellar-like model out of Mistral 3.2 24B; I think I got most of it down, besides a few quirks. It's not quite what I want to make in the future, but it's got good vibes. I like it, so please give it a try. The following models were included in the merge: zerofata/MS3.2-PaintedFantasy-24B, Gryphe/Codex-24B-Small-3.2, and Delta-Vector/MS3.2-Austral-Winton.

Repository: localai · License: apache-2.0

verbamaxima-12b-i1
**VerbaMaxima-12B** is a highly experimental large language model created through advanced merging techniques using [mergekit](https://github.com/cg123/mergekit). It is based on *natong19/Mistral-Nemo-Instruct-2407-abliterated* and further refined by combining multiple 12B-scale models (including *TheDrummer/UnslopNemo-12B-v4*, *allura-org/Tlacuilo-12B*, and *Trappu/Magnum-Picaro-0.7-v2-12b*) using **model_stock** and **task arithmetic** with a negative lambda for creative deviation. The result is a model designed for nuanced, believable storytelling with reduced "purple prose" and enhanced world-building. It excels in roleplay and co-writing scenarios, offering a more natural, less theatrical tone. While experimental and not fully optimized, it delivers a unique, expressive voice ideal for creative and narrative-driven applications.

> ✅ **Base Model**: natong19/Mistral-Nemo-Instruct-2407-abliterated
> 🔄 **Merge Method**: Task Arithmetic + Model Stock
> 📌 **Use Case**: Roleplay, creative writing, narrative generation
> 🧪 **Status**: Experimental, high potential, not production-ready

*Note: This is the original, unquantized model. The GGUF version (mradermacher/VerbaMaxima-12B-i1-GGUF) is a quantized derivative for inference on local hardware.*

Repository: localai · License: apache-2.0
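
The "negative lambda" this card mentions is ordinary task arithmetic with a negative coefficient: rather than pulling the merge toward a model, it pushes the weights away from it. A minimal sketch with toy tensors and made-up coefficients (the model_stock half of the recipe is not shown):

```python
import torch

def task_arithmetic(base, models, lambdas):
    """base + sum_i lambda_i * (model_i - base); negative lambdas subtract."""
    merged = base.clone()
    for m, lam in zip(models, lambdas):
        merged += lam * (m - base)
    return merged

base = torch.zeros(4, 4)
liked = base + 0.1 * torch.randn(4, 4)     # a model to pull toward
disliked = base + 0.1 * torch.randn(4, 4)  # a model to push away from
print(task_arithmetic(base, [liked, disliked], lambdas=[1.0, -0.3]))
```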