Build Your Private AI Image Generator: Docker Model Runner + Open WebUI
Introduction
Have you ever needed to generate images for a project but hesitated because of privacy concerns, credit limits, or frustrating content filters? You type in a prompt for a dragon in a business suit, and the service rejects it for being “unsafe.” Meanwhile, you’re left wondering where your data ends up. What if you could eliminate all those worries by running the entire image generation pipeline on your own machine—with a clean, chat-based interface on top?

That’s exactly what Docker Model Runner now makes possible. With just a few commands, you can download an image generation model, connect it to Open WebUI, and start creating images directly from a chat interface—completely local, fully private, and entirely under your control. Think of it as your own private DALL-E, with no cloud subscription and no data leaving your computer.
What You’ll Need
Before we begin, make sure your environment meets these requirements:
- Docker Desktop (macOS) or Docker Engine (Linux)
- ~8 GB of free RAM for a small model (more memory is better for larger models or higher resolutions)
- GPU (optional but highly recommended): NVIDIA (CUDA) or Apple Silicon (MPS); without one, inference falls back to the CPU and will be noticeably slower
If you can run docker model version without errors, you’re good to go. The process is designed to be straightforward, even if you’re not a machine learning expert.
How Docker Model Runner Works with Open WebUI
Before diving into the steps, let’s understand the architecture. Docker Model Runner acts as the control plane: it downloads the model, manages the inference backend’s lifecycle, and exposes an OpenAI-compatible API, including the /v1/images/generations endpoint that Open WebUI already knows how to call. In other words, the integration is seamless. Open WebUI thinks it’s talking to a cloud service, but everything runs locally.
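To make the endpoint concrete, here is a sketch of what a direct request to that API would look like. The endpoint URL is an assumption (Docker Model Runner’s host port and path can vary by installation); the request body follows the standard OpenAI images schema:

```python
import json
import urllib.request

# Assumed local endpoint -- check your Docker Model Runner setup for
# the actual host port and path.
ENDPOINT = "http://localhost:12434/engines/v1/images/generations"

def build_image_request(prompt, model="stable-diffusion", size="1024x1024"):
    """Build an OpenAI-style /v1/images/generations request body."""
    return {
        "model": model,
        "prompt": prompt,
        "n": 1,                        # number of images to generate
        "size": size,                  # width x height in pixels
        "response_format": "b64_json"  # return the image inline as base64
    }

def send_request(body):
    """POST the request to the local endpoint (requires the server to be running)."""
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(body).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

body = build_image_request("a dragon wearing a business suit")
print(json.dumps(body, indent=2))
```

This is exactly the shape Open WebUI sends on your behalf, which is why no adapter layer is needed between the two.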
Step 1: Pull an Image Generation Model
Docker Model Runner uses a compact packaging format called DDUF (Diffusers Unified Format) to distribute image generation models via Docker Hub, just like any other OCI artifact. This single-file format bundles all components of a diffusion model (text encoder, VAE, UNet/DiT, scheduler config) into one portable package that Docker Model Runner can unpack at runtime.
Start by pulling the stable-diffusion model:
docker model pull stable-diffusion
Once the download completes, verify the model is ready:
docker model inspect stable-diffusion
You’ll see output similar to this:
{
  "id": "sha256:5f60862074a4c585126288d08555e5ad9ef65044bf490ff3a64855fc84d06823",
  "tags": ["docker.io/ai/stable-diffusion:latest"],
  "created": 1768470632,
  "config": {
    "format": "diffusers",
    "architecture": "diffusers",
    "size": "6.94GB",
    "diffusers": {
      "dduf_file": "stable-diffusion-xl-base-1.0-FP16.dduf",
      "layout": "dduf"
    }
  }
}
The model is stored locally as a DDUF file. This means you won’t have to manage multiple files or dependencies—everything is in one place, and Docker Model Runner knows how to load it when needed.
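Because docker model inspect emits JSON, it is easy to script against. As a small sketch, here is how you could pull the DDUF file name and reported size out of the inspect output shown above (the string below is just that sample output, trimmed to the fields used):

```python
import json

# Sample output from `docker model inspect stable-diffusion`,
# trimmed to the fields used below.
inspect_output = """
{
  "id": "sha256:5f60862074a4c585126288d08555e5ad9ef65044bf490ff3a64855fc84d06823",
  "tags": ["docker.io/ai/stable-diffusion:latest"],
  "config": {
    "format": "diffusers",
    "size": "6.94GB",
    "diffusers": {
      "dduf_file": "stable-diffusion-xl-base-1.0-FP16.dduf",
      "layout": "dduf"
    }
  }
}
"""

info = json.loads(inspect_output)
dduf = info["config"]["diffusers"]["dduf_file"]
size = info["config"]["size"]
print(f"{dduf} ({size})")
# -> stable-diffusion-xl-base-1.0-FP16.dduf (6.94GB)
```

In practice you would pipe the real command output into a script like this, or use jq for one-liners.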

Step 2: Launch Open WebUI
This is where the magic happens. Docker Model Runner includes a built-in launch command that automatically wires up Open WebUI against your local inference endpoint. Just run:
docker model launch openwebui
That’s it. Behind the scenes, Docker Model Runner starts the inference backend (using the model you pulled in Step 1), configures it to listen on the correct port, and launches Open WebUI configured to communicate with that local endpoint. You’ll be greeted with a chat interface, ready to accept your image generation prompts.
No need to set up environment variables, no complicated YAML files—just one command that handles everything. You can now type prompts like “a dragon wearing a business suit” and watch as Stable Diffusion creates the image entirely on your machine.
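Open WebUI renders the generated images for you, but if you ever call the endpoint directly, responses follow the OpenAI images schema: a data array whose entries carry the image as base64 under b64_json. A minimal sketch of decoding one to disk (the response dict here is a stand-in for a real API reply, not actual image data):

```python
import base64

def save_image(response, path="output.png"):
    """Decode the first base64-encoded image in an OpenAI-style
    images response and write it to disk."""
    b64 = response["data"][0]["b64_json"]
    with open(path, "wb") as f:
        f.write(base64.b64decode(b64))
    return path

# Stand-in for a real API response (placeholder bytes, not a valid PNG).
fake_response = {"data": [{"b64_json": base64.b64encode(b"PNGDATA").decode()}]}
print(save_image(fake_response, "dragon.png"))
```

With a real response from the local endpoint, the same function writes out the finished image.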
Conclusion
By combining Docker Model Runner with Open WebUI, you gain complete control over AI image generation. No more worrying about cloud privacy, credit exhaustion, or absurd content filters. Your prompts stay on your computer, and your images are generated locally using powerful open-source models like Stable Diffusion.
This setup is perfect for hobbyists, developers, and anyone who values privacy while exploring creative AI. With just two commands—docker model pull stable-diffusion and docker model launch openwebui—you’re ready to go. So why not give it a try? Your own private image generator is just a few keystrokes away.