Self-Hosting OpenFactory
OpenFactory can be self-hosted on your own infrastructure, giving you full control over the build environment, data, and network configuration.
Deployment Options
There are two ways to self-host OpenFactory, depending on what you need:
Option A: Docker — VM Management Only
Run the CTO-GUI VM management interface on any Linux machine with KVM. No openfactory.tech account required. One command to start:
docker compose -f docker-compose.self-hosted.yml up -d
You get: VM lifecycle management, network topology, VNC console, ISO upload, benchmarks, test execution.
You don’t get: AI-powered OS building, build scheduling, security patch monitoring, policy documents. To build custom ISOs, use console.openfactory.tech and download the ISOs to your self-hosted instance.
Best for: Teams that want a local VM management UI for their KVM infrastructure, using ISOs built on openfactory.tech or from other sources.
See Docker Quick Start to get running in 5 minutes.
Option B: Full Platform
Run the complete OpenFactory platform — build engine, AI agent, VM management, and frontend. Requires more setup but gives you full offline capability including AI-powered OS building.
You get: Everything. Build custom Linux ISOs with AI assistance, test them in VMs, deploy to machines, all from your own infrastructure.
Best for: Organizations that need full control over the build pipeline, air-gapped environments, or custom build infrastructure.
Continue with Prerequisites for the full setup guide.
Full Platform Architecture
A full self-hosted OpenFactory deployment consists of three backend services and a frontend:
┌─────────────────────────────────────────────────────┐
│ Browser │
│ http://localhost:3000 │
└──────────────────────┬──────────────────────────────┘
│
┌────────▼─────────┐
│ elster-terminal │ Next.js frontend
│ (port 3000) │ Auth, UI, API proxy
└────────┬─────────┘
│
┌─────────────┼─────────────┐
│ │ │
┌────────▼───────┐ ┌───▼──────────┐ ┌▼───────────────────┐
│ elster-terminal │ │ cto-gui │ │ web-terminal │
│ -backend │ │ -libvirt │ │ -backend │
│ (port 8000) │ │ -backend │ │ (port 8002) │
│ │ │ (port 8001) │ │ │
│ Build engine, │ │ VM mgmt, │ │ SSH terminal │
│ AI agent, │ │ testing, │ │ access │
│ user data │ │ benchmarks │ │ │
└────────┬────────┘ └──────┬───────┘ └─────────────────────┘
│ │
│ ┌─────▼──────┐
│ │ libvirtd │
│ │ KVM/QEMU │
│ └────────────┘
│
┌────▼─────┐
│ Claude │
│ Code CLI │ AI-powered build configuration
└──────────┘
Components
| Service | Port | Purpose |
|---|---|---|
| elster-terminal | 3000 | Next.js frontend — authentication, chat UI, API proxy |
| elster-terminal-backend | 8000 | Build engine, AI agent orchestration, user/build data |
| cto-gui-libvirt-backend | 8001 | VM management, test execution, CIS benchmarks |
| web-terminal-backend | 8002 | Browser-based SSH terminal access to VMs |
| libvirtd | — | KVM/QEMU hypervisor for running and testing VMs |
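Given the port map above, a minimal reachability check can be scripted. This sketch only probes TCP connectivity on the documented ports (it assumes all services run on localhost and does not verify application-level health):

```python
import socket

# Ports from the components table above
SERVICES = {
    "elster-terminal": 3000,
    "elster-terminal-backend": 8000,
    "cto-gui-libvirt-backend": 8001,
    "web-terminal-backend": 8002,
}

def check_port(host: str, port: int, timeout: float = 2.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    for name, port in SERVICES.items():
        status = "up" if check_port("localhost", port) else "down"
        print(f"{name:26} :{port}  {status}")
```

A port that answers but serves errors still reads as "up" here; for deeper checks, hit each service's HTTP endpoints instead.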
How Builds Work
- The user describes their OS requirements in the chat UI
- The frontend proxies the request to elster-terminal-backend
- The backend invokes the Claude Code CLI (via claude-agent-sdk) to generate a live-build configuration
- A build worker runs lb build (Debian live-build) inside a git worktree, as root via sudo
- The resulting ISO is stored and its status is streamed to the frontend via SSE
- Optionally, the ISO is booted in a KVM VM via cto-gui-libvirt-backend for automated testing
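The build-status streaming step uses standard Server-Sent Events framing: each message is an `event:` line, a `data:` line, and a blank-line terminator. A minimal sketch of that framing (the `build` event name and the field names are illustrative, not the backend's actual schema):

```python
import json

def sse_event(event: str, data: dict) -> str:
    """Frame one message for a text/event-stream response:
    an `event:` line, a JSON `data:` line, and a blank-line terminator."""
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"

# Example: a status update a build worker might push (illustrative fields)
print(sse_event("build", {"phase": "lb build", "progress": 42}), end="")
```

On the browser side, an `EventSource` subscribed to the stream would receive these as `build` events and parse the `data` payload as JSON.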
Next Steps
- Docker Quick Start — VM management UI with Docker (5 minutes)
- Prerequisites — hardware and software requirements (full platform)
- Installation — set up all components
- Configuration — environment variables and auth
- Running Services — start, stop, and monitor
- Production Deployment — reverse proxy, systemd, backups