Model Gallery

16 models from 1 repository

gemma-3-starshine-12b-i1
A creative writing model based on a merge of fine-tunes of Gemma 3 12B IT and Gemma 3 12B PT. This is the Story Focused merge: it works better for storytelling and scenarios, as the prose is more novel-like, though it has a tendency to impersonate the user character. See the alternate RP Focused version as well. This is a merge of two G3 models, one trained on instruct and one trained on base:
- allura-org/Gemma-3-Glitter-12B (instruct): itself a merge of a storywriting train and an RP train (both also by ToastyPigeon).
- ToastyPigeon/Gemma-3-Confetti-12B (base): an experimental application of the Glitter data using the base model instead of instruct, additionally including some adventure data in the form of SpringDragon.
The result is a lovely blend of Glitter's ability to follow instructions and Confetti's free-spirited prose, effectively 'loosening up' much of the hesitancy that was left in Glitter.

Repository: localai · License: gemma

planetoid_27b_v.2
This is a merge of pre-trained Gemma 3 language models. The goal of this merge was to create a good uncensored Gemma 3 model, suited for both assistant use and roleplay, with uncensored vision.

First, vision: it hallucinates slightly (maybe Q3 is too low?), but it lacks any refusals and otherwise works fine. I used the default Gemma 3 27B mmproj. Second, text: it is slow on my hardware, slower than 24B Mistral, with speed close to 32B QwQ. The model is smart even at Q3, and responses are adequate in length and interesting to read. The model is quite attentive to context, tested up to 8k with no problems or degradation spotted (beware of your typos, it will copy your mistakes). Creative capabilities are good too; the model will create a good plot for you, if you let it. It follows instructions fine and is really good with "adventure"-type cards. Russian is supported but is not too great; it may be better at higher quants. No refusals were encountered. However, I find this model not unbiased enough. It is close to neutrality, but I want it to be more "dark". Positivity depends heavily on prompts; with good enough cards the model can do wonders. Tested on Q3_K_L, temperature 1.04.

Repository: localai · License: gemma

latitudegames_wayfarer-large-70b-llama-3.3
We’ve heard over and over from AI Dungeon players that modern AI models are too nice, never letting them fail or die. While it may be good for a chatbot to be nice and helpful, great stories and games aren’t all rainbows and unicorns. They have conflict, tension, and even death. These create real stakes and consequences for characters and the journeys they go on. Similarly, great games need opposition: you must be able to fail, die, and maybe even start over. This makes games more fun! However, the vast majority of AI models, through alignment RLHF, have been trained away from darkness, violence, or conflict, preventing them from fulfilling this role.

To give our players better options, we decided to train our own model to fix these issues. The Wayfarer model series is a set of adventure role-play models specifically trained to give players a challenging and dangerous experience. We wanted to contribute back to the open source community that we’ve benefitted so much from, so we open-sourced a 12B parameter version back in January. We thought people would love it, but they were even more excited than we expected. Due to popular request, we decided to train a larger 70B version based on Llama 3.3.

Repository: localai · License: llama3.3

l3.3-geneticlemonade-unleashed-v2-70b
An experimental release: a QLoRA of zerofata/GeneticLemonade-Unleashed trained on a test dataset. Performance is improved over the original in my testing, but there are possibly (likely?) areas where the model will underperform, which I am looking for feedback on. This is a creative model intended to excel at character-driven RP / ERP. It has not been tested or trained on adventure stories or any large amount of creative writing.

Repository: localai · License: llama3

hubble-4b-v1
Equipped with his five senses, man explores the universe around him and calls the adventure 'Science'. This is a finetune of Nvidia's Llama 3.1 4B Minitron, a shrunk-down version of Llama 3.1 8B 128K.

Repository: localai · License: llama3.1

dans-personalityengine-v1.0.0-8b
This model is intended to be multifarious in its capabilities: it should be quite capable at both co-writing and roleplay, and should find itself quite at home performing sentiment analysis or summarization as part of a pipeline. It has been trained on a wide array of one-shot instructions, multi-turn instructions, role-playing scenarios, text adventure games, co-writing, and much more. The full dataset is publicly available and can be found in the datasets section of the model page. No harmfulness alignment has been done on this model; please take appropriate precautions when using it in a production environment.

Repository: localai · License: apache-2.0

hamanasu-adventure-4b-i1
Thanks to PocketDoc's Adventure datasets, and taking his Dangerous Winds models as inspiration, I was able to finetune a small Adventure model that HATES the user. The model is suited for text adventure. All thanks to Tav for funding the train. Support me and my finetunes on Ko-fi: https://ko-fi.com/deltavector

Repository: localai · License: llama3.1

sicariussicariistuff_impish_llama_4b
5th of May, 2025: Impish_LLAMA_4B. Almost a year ago, I created Impish_LLAMA_3B, the first fully coherent 3B roleplay model at the time. It was quickly adopted by some platforms and became one of the go-to models for mobile. After some time, I made Fiendish_LLAMA_3B and insisted it was not an upgrade but a different flavor (which was indeed the case, as a different dataset was used to tune it).

Impish_LLAMA_4B, however, is an upgrade, and a big one. I've had over a dozen 4B candidates, but none of them were 'worthy' of the Impish badge. This model has superior responsiveness and context awareness, and is able to pull off very coherent adventures. It even comes with some additional assistant capabilities. Of course, while it is exceptionally competent for its size, it is still 4B; manage expectations and all that. I, however, am very much pleased with it. It took several tries to pull off just right.

Total tokens trained: about 400M (being a generalist model, lots of tokens went toward that, despite the emphasis on roleplay and adventure). This took more effort than I thought it would. Because of course it would. This is mainly due to me refusing to release a model only 'slightly better' than my two 3B models mentioned above, because what would be the point in that? The reason I included so many tokens in this tune is that small models are especially sensitive to many factors, including the percentage of moisture in the air and how many times I ran nvidia-smi since the system last started. It's no secret that roleplay/creative writing tuning can reduce a model's general intelligence (any tune and RL risk this, but roleplay models are especially 'fragile'). Therefore, additional tokens of general assistant data were needed in my opinion, and they indeed seemed to help a lot with retaining intelligence. This model is also 'built a bit different', literally, as it is based on nVidia's prune; it does not 'behave' like a typical 8B, from my own subjective impression. This helped a lot with keeping it smart at such a size.

To be honest, my 'job' here in open source is 'done' at this point. I've achieved everything I wanted to do here, and then some.

Repository: localai · License: llama3.1

rocinante-12b-v1.1
A versatile workhorse for any adventure!

Repository: localai · License: apache-2.0

dans-personalityengine-v1.1.0-12b
This model series is intended to be multifarious in its capabilities: it should be quite capable at both co-writing and roleplay, and should find itself quite at home performing sentiment analysis or summarization as part of a pipeline. It has been trained on a wide array of one-shot instructions, multi-turn instructions, tool use, role-playing scenarios, text adventure games, co-writing, and much more.

Repository: localai · License: apache-2.0

wayfarer-12b
We’ve heard over and over from AI Dungeon players that modern AI models are too nice, never letting them fail or die. While it may be good for a chatbot to be nice and helpful, great stories and games aren’t all rainbows and unicorns. They have conflict, tension, and even death. These create real stakes and consequences for characters and the journeys they go on. Similarly, great games need opposition. You must be able to fail, die, and may even have to start over. This makes games more fun! However, the vast majority of AI models, through alignment RLHF, have been trained away from darkness, violence, or conflict, preventing them from fulfilling this role.

To give our players better options, we decided to train our own model to fix these issues. Wayfarer is an adventure role-play model specifically trained to give players a challenging and dangerous experience. We thought they would like it, but since releasing it on AI Dungeon, players have reacted even more positively than we expected. Because they loved it so much, we’ve decided to open-source the model so anyone can experience unforgivingly brutal AI adventures! Anyone can download the model to run locally. Or, if you want to try this model easily for free, you can do so at https://aidungeon.com. We plan to continue improving and open-sourcing similar models, so please share any and all feedback on how we can improve model behavior. Below we share more details on how Wayfarer was created.

Repository: localai · License: apache-2.0

pocketdoc_dans-personalityengine-v1.2.0-24b
This model series is intended to be multifarious in its capabilities: it should be quite capable at both co-writing and roleplay, and should find itself quite at home performing sentiment analysis or summarization as part of a pipeline. It has been trained on a wide array of one-shot instructions, multi-turn instructions, tool use, role-playing scenarios, text adventure games, co-writing, and much more.

Repository: localai · License: apache-2.0

pocketdoc_dans-sakurakaze-v1.0.0-12b
A model based on Dans-PersonalityEngine-V1.1.0-12b with a focus on character RP, visual novel style group chats, old school text adventures, and co-writing.

Repository: localai · License: apache-2.0

beaverai_mn-2407-dsk-qwqify-v0.1-12b
A test model that tries to give an existing model QwQ's thoughts. This first version is built on top of PocketDoc/Dans-SakuraKaze-V1.0.0-12b (an RP/adventure/co-writing model), which was trained on top of PocketDoc/Dans-PersonalityEngine-V1.1.0-12b (a jack-of-all-trades instruct model), which was trained on top of mistralai/Mistral-Nemo-Base-2407. The prompt formatting and usage should be the same as with QwQ: use ChatML, and remove the thinking from previous turns. If thoughts aren't being generated automatically, add \n to the start of the assistant turn; the model should then follow the formatting of previous turns. On the first turns of the conversation you may need to regenerate a few times, and perhaps edit the model's responses for the first few turns to get it to your liking.
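The prompt handling described above (ChatML, with thinking stripped from previous turns) can be sketched as follows. This is an illustrative sketch, not code from the model card: the helper names are made up, and the `<think>...</think>` delimiters are an assumption based on QwQ-style reasoning output.

```python
import re

def to_chatml(messages):
    """Render {role, content} dicts as a ChatML prompt, opening a new assistant turn."""
    parts = [f"<|im_start|>{m['role']}\n{m['content']}<|im_end|>" for m in messages]
    parts.append("<|im_start|>assistant\n")  # model continues from here
    return "\n".join(parts)

def strip_thinking(text):
    """Remove <think>...</think> blocks from a prior assistant turn,
    per the card's advice to drop thinking from previous turns.
    The tag names are an assumption (QwQ-style)."""
    return re.sub(r"<think>.*?</think>\s*", "", text, flags=re.DOTALL).strip()

history = [
    {"role": "user", "content": "Hello!"},
    {"role": "assistant",
     "content": "<think>Greeting, keep it short.</think>Hi there!"},
    {"role": "user", "content": "Tell me a story."},
]
history[1]["content"] = strip_thinking(history[1]["content"])
prompt = to_chatml(history)
```

If thoughts stop appearing, the card suggests seeding the open assistant turn with a newline so the model picks the thinking format back up.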

Repository: localai · License: apache-2.0

impish_magic_24b-i1
It's the 20th of June, 2025. The world is getting more and more chaotic, but let's look at the bright side: Mistral released a new model at a very good size of 24B, with no more "sign here" or "accept this weird EULA" friction, but a proper Apache 2.0 license. Nice! 👍🏻 This model is based on mistralai/Magistral-Small-2506, so naturally I named it Impish_Magic. Truly excellent size: I tested it on my laptop (16GB GPU, 4090m) and it runs quite fast.

This model went through a "full" fine-tune over 100M unique tokens. Why do I say "full"? I've tuned specific areas in the model to attempt to change the vocabulary usage while keeping as much intelligence as possible. So this is definitely not a LoRA, but also not exactly a proper full finetune; rather, something in between. As I mentioned in a small update, I've made nice progress regarding interesting sources of data, and some of them are included in this tune. 100M tokens is a lot for a roleplay/adventure tune, and yes, it can do adventure as well: there is unique adventure data here that has never been used before. A lot of the data still needs to be cleaned and processed. I included it before doing any major data processing because, with the magic of 24B parameters, even "dirty" data works well, especially when using a more "balanced" approach to tuning that does not involve burning the hell out of the model in a full finetune across all of its layers. Could this data be cleaner? Of course, and it will be. But for now, I would hate to make the perfect the enemy of the good. Fun fact: Impish_Magic_24B is the first roleplay finetune of Magistral!

Repository: localai · License: apache-2.0

impish_nemo_12b
August 2025, Impish_Nemo_12B: my best model yet. And unlike a typical Nemo, this one can take much higher temperatures (it works well with 1+). Oh, and regarding following the character card: it has somehow gotten even better, to the point of being straight-up uncanny 🙃 (I had to check twice that this model was loaded, and not some 70B!). I feel like this model could easily replace much larger models for adventure or roleplay (for assistant tasks, obviously not), but the creativity here? Off the charts. Characters have never felt so alive and in the moment before: they'll use insinuation, manipulation, and, if needed (or provoked), force. They feel very present. That look on Neo's face when he opened his eyes and said, "I know Kung Fu"? Well, Impish_Nemo_12B had pretty much the same moment, and it now knows more than just Kung Fu. Much, much more.

It wasn't easy, and it's a niche within a niche, but as promised almost half a year ago, it is now done. Impish_Nemo_12B is smart, sassy, creative, and has a lot of unhingedness too; these are baked in deep into every interaction. It took Mistral's innate relative freedom and turned it up to 11. It may very well be too much for many, but after testing and interacting with so many models, I find this 'edge' of sorts rather fun and refreshing.

Anyway, the dataset used is absolutely massive, with tons of new types of data and new domains of knowledge (Morrowind fandom, fighting, etc.). The whole dataset is a very well-balanced mix, and it resulted in a model with extremely strong common sense for a 12B. Regarding response length: there's almost no response-length bias here; this one is very much dynamic and will easily adjust reply length based on 1–3 examples of provided dialogue. Oh, and the model comes with 3 new character cards: 2 roleplay and 1 adventure!
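The high-temperature note above can be tried through LocalAI's OpenAI-compatible chat completions endpoint. A minimal request sketch, under assumptions: the installed model name, the host/port, and the system prompt are placeholders, not taken from this card; only the temperature setting follows the card's "works well with 1+" advice.

```python
import json

# Request body for an OpenAI-compatible /v1/chat/completions endpoint
# (LocalAI exposes one). Model name is a placeholder: use whatever name
# the model was installed under in your local instance.
payload = {
    "model": "impish_nemo_12b",   # placeholder name (assumption)
    "temperature": 1.1,           # the card says the model handles 1+ well
    "messages": [
        {"role": "system", "content": "You are a daring adventure narrator."},
        {"role": "user", "content": "Begin the adventure."},
    ],
}

body = json.dumps(payload)
# POST `body` to e.g. http://localhost:8080/v1/chat/completions
# with header Content-Type: application/json (urllib.request, curl, etc.).
```

Most Nemo finetunes are usually run near temperature 0.7 or below, so a tolerance for 1+ is the notable claim here; dial it back if output becomes incoherent.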

Repository: localai · License: apache-2.0