Kokoros Rust TTS - Japanese. Uses the Kokoro v1.0 ONNX model with Japanese phonemization.
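As a minimal sketch of how a TTS model like this might be invoked once registered in LocalAI, the request body for an OpenAI-style `/v1/audio/speech` call could be built as follows. The model name `kokoro` and the voice `jf_alpha` are illustrative assumptions; check your instance's gallery for the actual identifiers.

```python
import json

def build_tts_request(text: str, voice: str = "jf_alpha") -> dict:
    """Build an OpenAI-style text-to-speech request body.

    The model and voice names are assumptions for illustration;
    Kokoro ships several Japanese voices, but verify the names
    your server actually registered.
    """
    return {
        "model": "kokoro",  # hypothetical gallery name
        "input": text,      # text to synthesize
        "voice": voice,
    }

payload = build_tts_request("こんにちは、世界")
print(json.dumps(payload, ensure_ascii=False))
```

The resulting JSON would be POSTed to the server's speech endpoint; the response body is the synthesized audio.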
Repository: localai
License: llama3.2

Small-parameter LLMs are well suited to the Japanese language, which combines multiple writing systems (kanji, hiragana, and katakana) with subtle social registers. Despite their size, these models can deliver accurate, context-aware results in resource-constrained environments, from mobile devices with limited processing power to edge deployments that need fast, real-time responses, balancing performance and efficiency without a large sacrifice in quality.
Repository: localai
License: llama3.1
Llama-3.1-70B-Japanese-Instruct-2407-gguf is a Japanese instruction-tuned language model based on Llama 3.1 70B. This GGUF release is quantized using an importance matrix (imatrix) calibrated on Japanese data, and the model is trained to generate informative, coherent responses to given instructions or prompts. It can be used for a variety of tasks, such as question answering and text generation.
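As a sketch of how a GGUF chat model like this might be queried through an OpenAI-compatible server such as LocalAI (`/v1/chat/completions`), the request body can be assembled as below. The model name is a placeholder, not the gallery's canonical identifier.

```python
import json

def build_chat_request(question: str) -> dict:
    """Build an OpenAI-style chat-completions request body.

    The model name is a placeholder; use whatever name your
    server registered for the GGUF file.
    """
    return {
        "model": "llama-3.1-70b-japanese-instruct-2407",  # placeholder
        "messages": [
            # "You are a helpful assistant."
            {"role": "system", "content": "あなたは親切なアシスタントです。"},
            {"role": "user", "content": question},
        ],
        "temperature": 0.7,
    }

# "What is the capital of Japan?"
req = build_chat_request("日本の首都はどこですか？")
print(json.dumps(req, ensure_ascii=False, indent=2))
```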
Repository: localai
License: llama3.1

Llama 3.1 Swallow is a series of large language models (8B, 70B) built by continual pre-training on the Meta Llama 3.1 models. Llama 3.1 Swallow enhances the Japanese language capabilities of the original Llama 3.1 while retaining its English capabilities. For continual pre-training, approximately 200 billion tokens were sampled from a large Japanese web corpus (Swallow Corpus Version 2), Japanese and English Wikipedia articles, and mathematical and coding content (see the Training Datasets section of the model card). The instruction-tuned (Instruct) models were built by supervised fine-tuning (SFT) on synthetic data built specifically for Japanese. See the Swallow Model Index section of the model card to find other model variants.
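Since the Swallow Instruct variants inherit the Meta Llama 3.1 chat format, a raw prompt for them can be assembled as in the sketch below. In practice the tokenizer's built-in chat template should be preferred; the Japanese system prompt is an illustrative example.

```python
def build_llama31_prompt(system: str, user: str) -> str:
    """Assemble a raw prompt in the Llama 3.1 chat format.

    Sketch only: real applications should apply the model
    tokenizer's chat template instead of hand-building strings.
    """
    return (
        "<|begin_of_text|>"
        "<|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|>"
        "<|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|>"
        "<|start_header_id|>assistant<|end_header_id|>\n\n"
    )

prompt = build_llama31_prompt(
    # "You are a sincere and excellent Japanese assistant." (example system prompt)
    "あなたは誠実で優秀な日本人のアシスタントです。",
    # "How tall is Mount Fuji?"
    "富士山の高さを教えてください。",
)
```

The trailing assistant header leaves the prompt open for the model to complete its turn.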
Repository: localai
License: apache-2.0
**Model Name:** John1604-AI-status-japanese-2025
**Base Model:** Qwen3-8B
**Language:** Japanese
**License:** International Inventor's License
**Description:** A Japanese-language large language model fine-tuned from Qwen3-8B to provide insightful, forward-looking perspectives on AI status and trends in 2025. Designed for high-quality text generation in Japanese, this model excels in reasoning, technical writing, and contextual understanding. Ideal for developers, researchers, and content creators focused on Japanese AI discourse.
**Key Features:**
- Fine-tuned for Japanese language accuracy and depth
- Built on the robust Qwen3-8B foundation
- Optimized for real-world applications, including technical reporting and scenario analysis
- Supports long-form generation (up to 16,384 tokens)
**Use Case:** AI trend analysis, Japanese content generation, technical documentation, and future-oriented scenario planning.
**Repository:** [John1604/John1604-AI-status-japanese-2025](https://huggingface.co/John1604/John1604-AI-status-japanese-2025)
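Qwen-family models use the ChatML-style turn format, so a raw prompt for this Qwen3-8B fine-tune can be sketched as below. The tokenizer's chat template remains authoritative; this is an illustration, and the system prompt is an invented example.

```python
def build_chatml_prompt(system: str, user: str) -> str:
    """Assemble a ChatML-style prompt as used by the Qwen family.

    Illustration only; apply the tokenizer's chat template
    in real applications.
    """
    return (
        f"<|im_start|>system\n{system}<|im_end|>\n"
        f"<|im_start|>user\n{user}<|im_end|>\n"
        "<|im_start|>assistant\n"
    )

prompt = build_chatml_prompt(
    # "You are an assistant knowledgeable about AI trends." (example)
    "あなたはAI動向に詳しいアシスタントです。",
    # "Summarize Japan's 2025 AI trends."
    "2025年の日本のAI動向を要約してください。",
)
```

When serving the model, the advertised 16,384-token generation limit is controlled by the server's max-tokens setting, not by the prompt itself.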