ZonUI-3B — A lightweight, resolution-aware GUI grounding model trained with only 24K samples on a single RTX 4090.
Links
Tags
Repository: localai
License: apache-2.0

Supreme context: one million tokens to play with. Strong roleplay; lovers of the internet RP format will appreciate its medium-sized paragraphs. Qwen smarts built in, but naughty and playful (maybe even too naughty). Very compliant, with low censorship. A very high IFEval score for a 14B RP model: 78.68.
Links
Tags
Repository: localai
License: apache-2.0

INTELLECT-2 is a 32 billion parameter language model trained through a reinforcement learning run leveraging globally distributed, permissionless GPU resources contributed by the community. The model was trained using prime-rl, a framework designed for distributed asynchronous RL, using GRPO over verifiable rewards along with modifications for improved training stability. For detailed information on our infrastructure and training recipe, see our technical report.
Links
Tags
Repository: localai
License: apache-2.0

This model works in Russian only. It is designed to run GURPS roleplaying games, as well as to consult and assist. It was trained on an augmented dataset of the GURPS Basic Set rulebook. Its primary purpose was to serve as a consultant and assistant Game Master for the GURPS roleplaying system, but it can also be used as a GM for running solo games with you as the player.
Links
Tags
Repository: localai
License: apache-2.0
FuseO1-Preview is our initial endeavor to enhance the System-II reasoning capabilities of large language models (LLMs) through innovative model fusion techniques. By employing our advanced SCE merging methodologies, we integrate multiple open-source o1-like LLMs into a unified model. Our goal is to incorporate the distinct knowledge and strengths from different reasoning LLMs into a single, unified model with strong System-II reasoning abilities, particularly in mathematics, coding, and science domains.
Links
Tags
Repository: localai
License: apache-2.0

An uncensored LLM with reasoning: what more could you want?
Links
Tags