# ComfyUI_Docker
Minimal, stateless Docker environment for ComfyUI.
Built on `python:3.10-slim`. Avoids bloated NVIDIA base images by relying on PyTorch's bundled CUDA runtime. Models, custom nodes, and the pip cache persist across container destruction via local volume mounts.
## Structure
- Stateless base: The image contains only the OS, Python, ComfyUI, and ComfyUI-Manager.
- Persistent cache: mounting `~/.cache/pip` prevents redownloading dependencies for custom nodes when containers are recreated.
- Dynamic entrypoint: an initialization script symlinks ComfyUI-Manager and resolves `requirements.txt` for custom nodes automatically on startup.
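The dynamic-entrypoint step can be sketched roughly as follows. This is a minimal illustration under assumed names (`init_custom_nodes`, the `/app` layout); the real `entrypoint.sh` may differ in detail:

```shell
#!/bin/sh
# Hypothetical sketch of the startup logic; the function name and paths
# are assumptions, not taken from the actual entrypoint.sh.
init_custom_nodes() {
    app_dir="$1"
    nodes_dir="$app_dir/custom_nodes"
    manager_src="$app_dir/ComfyUI-Manager"

    mkdir -p "$nodes_dir"

    # Link the baked-in ComfyUI-Manager into the mounted volume so it
    # survives container recreation without a reinstall.
    if [ -d "$manager_src" ] && [ ! -e "$nodes_dir/ComfyUI-Manager" ]; then
        ln -s "$manager_src" "$nodes_dir/ComfyUI-Manager"
    fi

    # Resolve each custom node's Python dependencies; the mounted pip
    # cache keeps repeat starts cheap.
    for req in "$nodes_dir"/*/requirements.txt; do
        if [ -f "$req" ]; then
            pip install -r "$req"
        fi
    done
}
```

Because the symlink is only created when missing and `pip` hits the mounted cache, running this on every container start is effectively idempotent.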
## Usage

Create `docker-compose.yaml`:
```yaml
services:
  comfyui:
    image: git.trashlab.qzz.io/unins0/comfyui_docker:latest
    ports:
      - "8188:8188"
    volumes:
      - ./models:/app/models
      - ./custom_nodes:/app/custom_nodes
      - ./user:/app/user
      - ./output:/app/output
      - ./pip_cache:/root/.cache/pip
    # Use ["--cpu"] for CPU-only
    # command: ["--cpu"]
    deploy:
      resources:
        reservations:
          devices:
            - driver: nvidia
              count: all
              capabilities: [gpu]
```
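For a one-off run without Compose, an equivalent `docker run` invocation would be roughly as follows (GPU access via the NVIDIA Container Toolkit's `--gpus` flag; paths assumed to match the compose file above):

```shell
docker run -d --name comfyui \
  --gpus all \
  -p 8188:8188 \
  -v "$PWD/models:/app/models" \
  -v "$PWD/custom_nodes:/app/custom_nodes" \
  -v "$PWD/user:/app/user" \
  -v "$PWD/output:/app/output" \
  -v "$PWD/pip_cache:/root/.cache/pip" \
  git.trashlab.qzz.io/unins0/comfyui_docker:latest
```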
## Commands

```shell
docker compose up -d   # Start daemon
docker compose stop    # Suspend
docker compose start   # Resume
docker compose down    # Destroy container (data persists in volumes)
```
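Since the image is published as a rolling build, updating is a pull followed by recreating the container; the mounted volumes are untouched:

```shell
docker compose pull    # Fetch the latest rolling image
docker compose up -d   # Recreate the container on the new image
```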
## Credits
This project integrates methods from the following repositories:
- YanWenKun/ComfyUI-Docker - Slim image methodology and manager cache bypass.
- lecode-official/comfyui-docker - Symlink entrypoint implementation for ComfyUI-Manager persistence.
- Google Gemini
## License
GPLv3. See LICENSE.