Self-Hosting OpenFactory

OpenFactory can be self-hosted on your own infrastructure, giving you full control over the build environment, data, and network configuration.

Deployment Options

There are two ways to self-host OpenFactory, depending on what you need:

Option A: Docker — VM Management Only

Run the CTO-GUI VM management interface on any Linux machine with KVM. No openfactory.tech account required. One command to start:

docker compose -f docker-compose.self-hosted.yml up -d

You get: VM lifecycle management, network topology, VNC console, ISO upload, benchmarks, test execution.

You don’t get: AI-powered OS building, build scheduling, security patch monitoring, or policy documents. To build custom ISOs, use console.openfactory.tech and download the resulting ISOs to your self-hosted instance.

Best for: Teams that want a local VM management UI for their KVM infrastructure, using ISOs built on openfactory.tech or from other sources.

See Docker Quick Start to get running in 5 minutes.
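In this setup, ISOs come from outside the self-hosted instance (for example, built on console.openfactory.tech and downloaded in your browser). A minimal sketch of staging a downloaded ISO where the local instance can find it; the destination directory is an assumption here, so adjust it to match your deployment:

```shell
# Sketch: stage a downloaded ISO for the self-hosted instance.
# The destination directory name is an assumption, not a documented path.
stage_iso() {
  local iso="$1"
  local dest="${2:-isos}"   # assumed ISO directory for the local instance
  mkdir -p "$dest"
  cp "$iso" "$dest/"
  echo "$dest/$(basename "$iso")"
}
```

For example, `stage_iso ~/Downloads/custom-debian.iso` would copy the file into `isos/` and print the staged path.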

Option B: Full Platform

Run the complete OpenFactory platform — build engine, AI agent, VM management, and frontend. Requires more setup but gives you full offline capability including AI-powered OS building.

You get: Everything. Build custom Linux ISOs with AI assistance, test them in VMs, deploy to machines, all from your own infrastructure.

Best for: Organizations that need full control over the build pipeline, air-gapped environments, or custom build infrastructure.

Continue with Prerequisites for the full setup guide.


Full Platform Architecture

A full self-hosted OpenFactory deployment consists of three backend services and a frontend:

┌─────────────────────────────────────────────────────┐
│                      Browser                        │
│                http://localhost:3000                │
└──────────────────────┬──────────────────────────────┘
                       │
              ┌────────▼─────────┐
              │ elster-terminal  │  Next.js frontend
              │   (port 3000)    │  Auth, UI, API proxy
              └────────┬─────────┘
                       │
        ┌──────────────┼──────────────┐
        │              │              │
┌───────▼────────┐ ┌───▼──────────┐ ┌─▼──────────────┐
│ elster-terminal│ │ cto-gui      │ │ web-terminal   │
│ -backend       │ │ -libvirt     │ │ -backend       │
│ (port 8000)    │ │ -backend     │ │ (port 8002)    │
│                │ │ (port 8001)  │ │                │
│ Build engine,  │ │ VM mgmt,     │ │ SSH terminal   │
│ AI agent,      │ │ testing,     │ │ access         │
│ user data      │ │ benchmarks   │ │                │
└───────┬────────┘ └──────┬───────┘ └────────────────┘
        │                 │
        │          ┌──────▼─────┐
        │          │  libvirtd  │
        │          │  KVM/QEMU  │
        │          └────────────┘
        │
   ┌────▼─────┐
   │  Claude  │
   │ Code CLI │  AI-powered build configuration
   └──────────┘

Components

Service                   Port   Purpose
elster-terminal           3000   Next.js frontend — authentication, chat UI, API proxy
elster-terminal-backend   8000   Build engine, AI agent orchestration, user/build data
cto-gui-libvirt-backend   8001   VM management, test execution, CIS benchmarks
web-terminal-backend      8002   Browser-based SSH terminal access to VMs
libvirtd                  n/a    KVM/QEMU hypervisor for running and testing VMs
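A quick way to confirm all four HTTP services came up is to probe the ports from the table. This is a sketch, not part of the platform: it assumes the services listen on localhost and answer plain HTTP on their root path:

```shell
# Map each service to its documented port (from the table above).
service_port() {
  case "$1" in
    elster-terminal)         echo 3000 ;;
    elster-terminal-backend) echo 8000 ;;
    cto-gui-libvirt-backend) echo 8001 ;;
    web-terminal-backend)    echo 8002 ;;
    *) return 1 ;;
  esac
}

# Probe every service; prints "ok" or "down" per service.
# Assumes localhost binding and an HTTP response on "/".
check_all() {
  local svc port
  for svc in elster-terminal elster-terminal-backend \
             cto-gui-libvirt-backend web-terminal-backend; do
    port="$(service_port "$svc")"
    if curl -fsS -o /dev/null --max-time 2 "http://localhost:${port}/" 2>/dev/null; then
      echo "ok   ${svc} (:${port})"
    else
      echo "down ${svc} (:${port})"
    fi
  done
}
```

Run `check_all` once the stack is started; libvirtd is a host daemon without an HTTP port, so it is not probed here.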

How Builds Work

  1. User describes their OS requirements in the chat UI
  2. The frontend proxies the request to elster-terminal-backend
  3. The backend invokes the Claude Code CLI (via claude-agent-sdk) to generate a live-build configuration
  4. A build worker runs lb build (Debian live-build) inside a git worktree, as root via sudo
  5. The resulting ISO is stored and its status is streamed to the frontend via SSE
  6. Optionally, the ISO is booted in a KVM VM via cto-gui-libvirt-backend for automated testing
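Steps 4 and 5 can be pictured as a small shell sketch. The function names, worktree layout, and live-build invocations below are assumptions for illustration; the real worker lives inside elster-terminal-backend:

```shell
# Sketch of the build worker (names and paths are assumptions).
# One git worktree per build keeps concurrent configurations isolated.
worktree_path() {
  printf 'worktrees/build-%s' "$1"
}

run_build() {
  local id="$1" wt
  wt="$(worktree_path "$id")"
  git worktree add "$wt"     # isolated checkout for this build
  (
    cd "$wt" || exit 1
    sudo lb clean            # live-build: reset any previous build state
    sudo lb build            # live-build: produce the ISO (runs as root)
  )
}
```

A worker might call `run_build 42` after the AI agent has written the live-build configuration into that worktree, then store the resulting ISO and stream status updates back over SSE.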

Next Steps