Model Gallery

6 models from 1 repository

dreamshaper
A text-to-image model that uses Stable Diffusion 1.5 to generate images from text prompts. This is the DreamShaper model by Lykon.

Repository: localai · License: other

stable-diffusion-3-medium
Stable Diffusion 3 Medium is a Multimodal Diffusion Transformer (MMDiT) text-to-image model that features greatly improved performance in image quality, typography, complex prompt understanding, and resource-efficiency.

Repository: localai · License: stabilityai-ai-community

wan-2.1-t2v-1.3b-ggml
Wan 2.1 T2V 1.3B — text-to-video diffusion model, GGUF-quantized for the stable-diffusion.cpp backend. Generates short (33-frame) 832x480 clips from a text prompt. Cheapest Wan variant, suitable for CPU-offloaded inference with ~10 GB of usable RAM.

Repository: localai · License: apache-2.0

sd-1.5-ggml
Stable Diffusion 1.5, a GGUF-quantized text-to-image model.

Repository: localai · License: creativeml-openrail-m

sd-3.5-medium-ggml
Stable Diffusion 3.5 Medium is a Multimodal Diffusion Transformer (MMDiT) text-to-image model that features improved performance in image quality, typography, complex prompt understanding, and resource-efficiency.

Repository: localai · License: stabilityai-ai-community

sd-3.5-large-ggml
Stable Diffusion 3.5 Large is a Multimodal Diffusion Transformer (MMDiT) text-to-image model that features improved performance in image quality, typography, complex prompt understanding, and resource-efficiency.

Repository: localai · License: stabilityai-ai-community
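Once installed from this gallery, any of the text-to-image models above can be called through LocalAI's OpenAI-compatible API. The sketch below is a minimal, hedged example: it assumes a LocalAI instance listening on the default `localhost:8080` and the `/v1/images/generations` route; the model name `dreamshaper` and the prompt are illustrative, not prescriptive.

```python
import json
import urllib.request

# Assumption: a LocalAI server running locally on its default port,
# exposing the OpenAI-compatible image-generation endpoint.
BASE_URL = "http://localhost:8080"

def build_image_request(model: str, prompt: str,
                        size: str = "512x512") -> urllib.request.Request:
    """Build an OpenAI-style image-generation request for a gallery model."""
    payload = json.dumps({
        "model": model,    # a model name from the gallery, e.g. "dreamshaper"
        "prompt": prompt,  # the text prompt to render
        "size": size,      # requested output resolution
    }).encode()
    return urllib.request.Request(
        f"{BASE_URL}/v1/images/generations",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# Construct (but do not send) a request; sending requires a live server:
req = build_image_request("dreamshaper", "a lighthouse at dusk, oil painting")
# urllib.request.urlopen(req) would return a JSON body with image URLs/data.
```

The request is built separately from the network call so it can be inspected or logged before dispatch; swap in any other gallery model name (e.g. `sd-3.5-medium-ggml`) without changing the code.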