Self-Hosting Guide
Run LoreForge on your own infrastructure with Docker. Full control over your data, your models, and your setup.
Prerequisites
- Docker and Docker Compose (v2+). Install from docker.com.
- Git to clone the repository.
- 4 GB RAM minimum for the application containers. More if running Ollama on the same machine.
- A GPU (optional) if using Ollama for local LLM inference: 8 GB VRAM minimum for qwen2.5:7b, 16 GB+ recommended for qwen2.5:14b.
Basic Setup
1. Clone the Repository
git clone https://github.com/jordanmiller/loreforge.git
cd loreforge
2. Configure Environment
cp .env.example .env
Open .env in your editor. The most important settings:
| Variable | Default | Description |
|---|---|---|
| `DATABASE_URL` | `sqlite+aiosqlite:////workspace/data/loreforge.db` | Database connection string. Default uses SQLite (no setup needed). |
| `OLLAMA_BASE_URL` | `http://host.docker.internal:11435` | URL of your Ollama instance. Adjust if Ollama runs on a different host or port. |
| `WRITER_MODEL` | `qwen2.5:14b` | Model used for creative actions (expand, dialogue, codex). |
| `UTILITY_MODEL` | `qwen2.5:7b` | Model used for analytical actions (summarize, connections, contradictions). |
| `ANTHROPIC_API_KEY` | empty | Set this to use Anthropic Claude instead of Ollama. |
| `OPENAI_API_KEY` | empty | Set this to use OpenAI GPT instead of Ollama. |
| `HOST_IP` | `localhost` | Set to your LAN IP (e.g., `192.168.1.100`) to access from other devices. |
| `NEXT_PUBLIC_API_URL` | `http://localhost:8250` | Where the frontend sends API requests. Update if using a custom domain or reverse proxy. |
| `PORT` | `3500` | Frontend port. |
| `REQUIRE_AUTH` | `false` | Set to `true` to require login. Generate a secure `SECRET_KEY` if enabled. |
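Putting the table together, a minimal `.env` for a single-machine setup with Ollama might look like this (the LAN IP is illustrative; everything else uses the defaults above):

```shell
# Illustrative .env — adjust values for your environment
DATABASE_URL=sqlite+aiosqlite:////workspace/data/loreforge.db
OLLAMA_BASE_URL=http://host.docker.internal:11435
WRITER_MODEL=qwen2.5:14b
UTILITY_MODEL=qwen2.5:7b
HOST_IP=192.168.1.100
NEXT_PUBLIC_API_URL=http://192.168.1.100:8250
PORT=3500
REQUIRE_AUTH=false
```

Note that `HOST_IP` and `NEXT_PUBLIC_API_URL` should agree: if you set one to a LAN IP so other devices can connect, update the other to match.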
3. Start the Containers
docker compose up -d
This builds and starts:
- loreforge-api (port 8250) — FastAPI backend with SQLite
- loreforge-web (port 3500) — Next.js frontend
Check that both containers are healthy:
docker compose ps
docker compose logs -f
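If you script your deploys, a small retry helper can wait for the backend to come up before proceeding. This is an illustrative sketch, not part of LoreForge; the `/health` endpoint it polls is the one listed under Troubleshooting below.

```shell
# Illustrative helper: re-run a command once per second until it succeeds
# or the attempt budget runs out.
retry() {
  tries="$1"; shift
  n=1
  until "$@"; do
    if [ "$n" -ge "$tries" ]; then
      return 1
    fi
    n=$((n + 1))
    sleep 1
  done
}

# Example: wait up to ~30 s for the backend after `docker compose up -d`
#   retry 30 curl -fsS http://localhost:8250/health >/dev/null && echo "API is up"
```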
4. Open LoreForge
Navigate to http://localhost:3500. Create a world, add entries, and start building.
Ollama Setup
Ollama is the default LLM provider for self-hosted LoreForge. It runs models locally on your GPU.
Install Ollama
Download from ollama.ai and install. On Linux:
curl -fsSL https://ollama.ai/install.sh | sh
Pull Models
# Creative writing model (14B parameters, needs ~10 GB VRAM)
ollama pull qwen2.5:14b
# Utility model (7B parameters, needs ~5 GB VRAM)
ollama pull qwen2.5:7b
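To confirm both pulls landed, you can check `ollama list` for the two model tags used in this guide. The helper below is an illustrative sketch, not a LoreForge command:

```shell
# Illustrative check: confirm both models from this guide are pulled
models_present() {
  list=$(ollama list) || return 1
  for m in qwen2.5:14b qwen2.5:7b; do
    case "$list" in
      *"$m"*) ;;
      *) echo "missing: $m"; return 1 ;;
    esac
  done
  echo "all models present"
}

# Usage: models_present
```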
Configure the Connection
By default, .env points to http://host.docker.internal:11435. This works if:
- Ollama runs on the same machine as Docker
- Ollama is listening on port `11435` (or the default `11434` — update the URL accordingly)
If Ollama is on a different machine, set OLLAMA_BASE_URL to its LAN IP:
OLLAMA_BASE_URL=http://192.168.1.50:11434
host.docker.internal resolves to the host machine. On Linux, you may need to add --add-host=host.docker.internal:host-gateway to your Docker run command, or use the host's LAN IP directly.
Alternative: Run Ollama in Docker
docker run -d --gpus all -p 11434:11434 --name ollama \
-v ollama-data:/root/.ollama \
ollama/ollama
# Pull models inside the container
docker exec ollama ollama pull qwen2.5:14b
docker exec ollama ollama pull qwen2.5:7b
Then set OLLAMA_BASE_URL=http://ollama:11434 and add Ollama to the same Docker network as LoreForge.
Using Cloud Providers (Anthropic / OpenAI)
If you prefer cloud LLMs over local Ollama, add your API key to .env:
Anthropic Claude
ANTHROPIC_API_KEY=sk-ant-api03-...
WRITER_MODEL=claude-sonnet-4-6
UTILITY_MODEL=claude-haiku-4-5-20251001
OpenAI GPT
OPENAI_API_KEY=sk-...
WRITER_MODEL=gpt-4o
UTILITY_MODEL=gpt-4o-mini
You can also switch providers at runtime through the Settings page in the web app — no restart required. The app-config settings stored in the database take precedence over .env values.
HTTPS with Cloudflare Tunnel
To expose LoreForge over HTTPS without configuring reverse proxies or SSL certificates, use a Cloudflare Tunnel:
1. Create a Tunnel
- Go to Cloudflare Zero Trust → Networks → Tunnels → Create a tunnel.
- Name it (e.g., “loreforge”).
- Add a Public Hostname: `loreforge.yourdomain.com` → `http://web:3500`.
- Copy the tunnel token.
2. Add the Token to .env
CLOUDFLARE_TUNNEL_TOKEN=eyJhIjoiYWJjMTIz...
3. Start with the Tunnel Profile
docker compose --profile tunnel up -d
This starts a cloudflared sidecar container that creates an encrypted tunnel to Cloudflare. Your site is now accessible at https://loreforge.yourdomain.com with automatic SSL.
PostgreSQL (Optional)
For production deployments or multi-user setups, you can use PostgreSQL instead of SQLite:
DATABASE_URL=postgresql+asyncpg://loreforge:password@postgres:5432/loreforge
The cloud compose file (docker-compose.cloud.yml) is pre-configured for PostgreSQL. It expects a PostgreSQL instance on the db_net Docker network.
After switching to PostgreSQL, run Alembic migrations:
docker exec loreforge-api python -m alembic upgrade head
Note that SQLite uses built-in migrations (`app/db/migrations.py`) that run automatically on startup. PostgreSQL uses Alembic migrations that must be run explicitly. There is no automatic data migration tool between the two — use the JSON export/import to transfer data.
Enabling Authentication
By default, self-hosted LoreForge has no authentication. To enable it:
REQUIRE_AUTH=true
SECRET_KEY=your-secure-random-string-here
Generate a secure key:
python -c "import secrets; print(secrets.token_urlsafe(64))"
When `CLOUD_MODE=true`, the API will refuse to start with the default `SECRET_KEY`. You must set a secure random value.
Backups
SQLite
The SQLite database lives in a Docker volume (loreforge-db). To back it up:
# Find the volume path
docker volume inspect loreforge-db
# Copy the database file
docker cp loreforge-api:/workspace/data/loreforge.db ./backup-$(date +%Y%m%d).db
PostgreSQL
pg_dump -U loreforge -h localhost -p 5432 loreforge > backup-$(date +%Y%m%d).sql
Uploaded Files
Map and entry images are stored in /workspace/uploads/ inside the API container. Back this up separately:
docker cp loreforge-api:/workspace/uploads ./uploads-backup
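The SQLite and uploads steps above can be combined into one dated backup. The wrapper below is an illustrative sketch; the container name and paths come from this guide's default compose setup:

```shell
# Illustrative wrapper: copy the SQLite database and uploads directory
# into a single dated backup folder.
backup_loreforge() {
  dest="backup-$(date +%Y%m%d)"
  mkdir -p "$dest" &&
    docker cp loreforge-api:/workspace/data/loreforge.db "$dest/loreforge.db" &&
    docker cp loreforge-api:/workspace/uploads "$dest/uploads" &&
    echo "Backup written to $dest"
}

# Usage: backup_loreforge
```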
JSON Export
The simplest backup method: use the Export feature in the web app to download a full JSON export of your world. This includes all entries, edges, settings, and metadata.
License Key (Pro Features)
Self-hosted LoreForge is fully functional without a license key. A license key unlocks Pro-tier features:
- Unlimited AI generations (no monthly cap)
- Priority support
- Future Pro-only features
To activate a license key, add it to your .env:
LICENSE_KEY=LF-XXXX-XXXX-XXXX-XXXX
Or use the API endpoint:
curl -X POST http://localhost:8250/license/activate \
-H "Content-Type: application/json" \
-d '{"key": "LF-XXXX-XXXX-XXXX-XXXX"}'
To generate a trial license for testing:
curl http://localhost:8250/license/generate-trial
Updating LoreForge
# Pull latest code
git pull origin main
# Rebuild and restart
docker compose down
docker compose build --no-cache
docker compose up -d
# (PostgreSQL only) Run migrations
docker exec loreforge-api python -m alembic upgrade head
For most updates, `docker compose up -d --build` is sufficient. The API will apply any pending schema changes before serving requests.
Troubleshooting
Container won't start
Check logs: docker compose logs api and docker compose logs web. Common issues:
- Port conflict — Another service is using port 8250 or 3500. Change `API_PORT` or `PORT` in `.env`.
- SECRET_KEY error — If `CLOUD_MODE=true`, you must set a non-default `SECRET_KEY`.
AI actions fail
- Ollama unreachable — Check that Ollama is running and the `OLLAMA_BASE_URL` is correct. Try `curl http://localhost:11434/api/tags` from the host.
- Model not found — Ensure you've pulled the model: `ollama pull qwen2.5:14b`.
- API key invalid — Check your Anthropic or OpenAI API key in `.env` or the Settings page.
- AppConfig override — Settings stored in the database (via the Settings UI) override `.env`. Check `curl http://localhost:8250/app-config`.
Frontend can't reach the API
- Check that `NEXT_PUBLIC_API_URL` matches the API's actual address.
- On Windows, use `127.0.0.1` instead of `localhost` (Windows resolves localhost to IPv6).
- If using a reverse proxy, ensure it forwards to the correct internal port.
Health check
# API health
curl http://localhost:8250/health
# Available models
curl http://localhost:8250/models
# App config (LLM provider, models, keys)
curl http://localhost:8250/app-config
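The three checks above can be wrapped into one pass/fail sweep. This is an illustrative diagnostic, not part of LoreForge:

```shell
# Probe the three diagnostic endpoints above and print one status line each.
check_endpoints() {
  base="${1:-http://localhost:8250}"
  for path in /health /models /app-config; do
    if curl -fsS "$base$path" >/dev/null 2>&1; then
      echo "OK   $path"
    else
      echo "FAIL $path"
    fi
  done
}

# Usage: check_endpoints http://localhost:8250
```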