Architecture

This page explains how Encore Server is structured internally, how requests move through the system, and where resource and security controls are enforced.

Design goals

  • Keep the control plane small and fast
  • Launch instances from declarative templates
  • Return connectable host/port only when an instance is actually ready
  • Keep runtime operations simple (single binary + config + template files)
  • Support optional Linux isolation controls (cgroup v2)

High-level components

  • HTTP layer: route handling, request validation, and JSON responses
  • Shared state: templates, instance registry, port pools, process handles
  • Template loader/validator: TOML parsing + schema/rule enforcement
  • Process runtime: child process spawn/kill + argument substitution
  • Isolation layer: optional cgroup v2 setup for each instance

Runtime topology

```mermaid
flowchart LR
  A["Game Clients"] --> B["HTTP API /api/v1"]
  B --> C["Request Validation"]
  C --> D["Shared State (RwLock)"]
  D --> E["Template Catalog + Port Pools"]
  D --> F["Instance Registry + Token Index"]
  C --> G["Process Spawner"]
  G --> H["Godot Server Process"]
  G --> I["Optional cgroup v2 Setup"]
  H --> F
```

Instance lifecycle (create flow)

When a client calls POST /api/v1/instances, Encore executes this pipeline:

  1. Validate template_id format (game_ + 10 alphanumeric chars).
  2. Resolve template from in-memory catalog.
  3. Validate extra_options against template whitelist and types.
  4. Check capacity:
    • max_instances for the template
    • global port availability across server-configured pools
  5. Register instance in state (instance ID, user token, selected port).
  6. Render command arguments from the template: its fixed placeholders (such as the assigned port) plus the validated options.
  7. Spawn the game server process.
  8. Mark readiness:
    • if port_liveness_probe = true, wait for a local TCP accept, or a UDP bind-in-use check, on the assigned port (see the probe sketch below)
    • otherwise wait a short fixed startup delay
  9. Return 201 with connection info and join token.

If spawn/probe fails, Encore rolls back the instance registration and releases resources.
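
As an illustration of step 8, here is a minimal readiness probe, assuming tokio; the function name and timings are ours, not Encore's actual internals:

```rust
use std::time::Duration;
use tokio::net::TcpStream;
use tokio::time::{sleep, timeout, Instant};

/// Retry a local TCP connect to the instance's assigned port until the
/// game server accepts, or give up when the deadline passes. A UDP
/// variant would instead try to bind the port itself and treat
/// "address in use" as evidence the server is up.
async fn wait_until_connectable(port: u16, deadline: Duration) -> bool {
    let start = Instant::now();
    while start.elapsed() < deadline {
        let attempt = timeout(
            Duration::from_millis(250),
            TcpStream::connect(("127.0.0.1", port)),
        )
        .await;
        if matches!(attempt, Ok(Ok(_))) {
            return true; // port accepts: instance is connectable
        }
        sleep(Duration::from_millis(100)).await;
    }
    false // caller rolls back registration and frees the port
}
```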

Read/query lifecycle

GET /api/v1/instances/{identifier} supports two identifiers:

  • user token: 6 uppercase alphanumeric chars (player-facing)
  • instance ID: inst_ + 10 alphanumeric chars (internal/admin-friendly)

The response is built from in-memory state and includes uptime and stored extra_options.
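
Because the two identifier formats are disjoint, a lookup can branch on shape alone before touching state. A sketch, where the enum and function names are illustrative:

```rust
/// Which index to consult for GET /api/v1/instances/{identifier}.
enum Identifier<'a> {
    UserToken(&'a str),  // 6 uppercase alphanumeric chars, e.g. "K7QP2X"
    InstanceId(&'a str), // "inst_" + 10 alphanumeric chars
}

fn classify(id: &str) -> Option<Identifier<'_>> {
    if id.len() == 6
        && id.chars().all(|c| c.is_ascii_uppercase() || c.is_ascii_digit())
    {
        Some(Identifier::UserToken(id))
    } else if let Some(rest) = id.strip_prefix("inst_") {
        // Accept only a well-formed instance ID suffix.
        (rest.len() == 10 && rest.chars().all(|c| c.is_ascii_alphanumeric()))
            .then(|| Identifier::InstanceId(id))
    } else {
        None
    }
}
```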

Termination lifecycle

Termination can happen from:

  • Admin API DELETE /api/v1/instances/{identifier}
  • CLI encore instance kill <identifier>
  • Background cleanup once an instance exceeds its maximum lifetime

Termination removes the instance from state, releases the port, decrements counters, and kills the child process handle.
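
A sketch of that ordering, assuming instances hold tokio child-process handles; all names are illustrative:

```rust
use std::collections::HashMap;
use std::sync::Arc;
use tokio::process::Child;
use tokio::sync::RwLock;

struct Instance {
    child: Child,
    // port, user token, template ID: elided
}

struct State {
    instances: HashMap<String, Instance>,
    // token index, per-template counts, port pools: elided
}

/// Remove bookkeeping under a short write lock, then kill the child
/// after the lock is dropped, matching the concurrency notes below.
async fn terminate(state: &Arc<RwLock<State>>, instance_id: &str) -> bool {
    let removed = {
        let mut s = state.write().await;
        // real code would also release the port, drop the token
        // mapping, and decrement the template's instance count here
        s.instances.remove(instance_id)
    }; // write lock released
    match removed {
        Some(mut inst) => {
            let _ = inst.child.kill().await; // terminate and reap
            true
        }
        None => false,
    }
}
```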

State model and concurrency

Encore keeps active runtime data in one shared state object guarded by tokio::sync::RwLock.

  • Read-heavy operations (GET requests) take read locks.
  • Mutating operations (create/kill/reload) take write locks only for short critical sections.
  • Process kill/wait actions happen outside lock-critical paths where possible.

This keeps contention predictable while preserving consistency between:

  • instance registry
  • token-to-instance mapping
  • per-template instance counts
  • port allocation pools
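
A sketch of how those four structures might hang together under one lock (all field names illustrative, not Encore's actual definitions):

```rust
use std::collections::{HashMap, HashSet};
use std::sync::Arc;
use tokio::sync::RwLock;

struct Template; // see "Template management"
struct Instance; // see "Instance lifecycle"

/// One object guards everything that must stay mutually consistent:
/// removing an instance, freeing its port, and decrementing its
/// template's count all happen under the same write lock.
struct SharedState {
    templates: HashMap<String, Template>, // template catalog
    instances: HashMap<String, Instance>, // instance registry
    tokens: HashMap<String, String>,      // user token -> instance ID
    counts: HashMap<String, usize>,       // per-template instance counts
    free_ports: HashSet<u16>,             // port allocation pool
}

type AppState = Arc<RwLock<SharedState>>;
```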

Template management

Templates can be managed in two ways:

File-based (reload)

Admin route POST /api/v1/serve/reload:

  1. Reloads and validates template files from disk.
  2. Reconciles with current in-memory catalog.
  3. Adds/updates/removes templates.
  4. Maintains bookkeeping for disabled or re-enabled templates.

This allows controlled runtime updates without restarting the service.
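
The reconcile step (2 and 3 above) amounts to a map diff; a sketch, leaving out the disabled-template bookkeeping:

```rust
use std::collections::HashMap;

struct Template; // stub for the parsed TOML definition

/// Make the in-memory catalog match what is on disk: drop templates
/// whose files disappeared, insert new ones, overwrite changed ones.
/// Real code must also keep templates with running instances in a
/// spawn-disabled state rather than forgetting them outright.
fn reconcile(
    catalog: &mut HashMap<String, Template>,
    from_disk: HashMap<String, Template>,
) {
    catalog.retain(|id, _| from_disk.contains_key(id));
    for (id, template) in from_disk {
        catalog.insert(id, template);
    }
}
```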

API-based (CRUD)

Admin routes for programmatic template management without SSH access:

  1. Upload a game server binary via POST /api/v1/uploads (multipart).
  2. Create a template referencing the uploaded binary via POST /api/v1/templates.
  3. Update or delete templates via PUT / DELETE /api/v1/templates/{id}.

API-created templates are written to disk as .toml files and loaded into memory. Deleting a template with running instances marks it as spawn-disabled until those instances terminate.
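
For orientation, a hypothetical template file of the kind this flow writes to disk. Only the keys already mentioned on this page (max_instances, port_liveness_probe, extra_options, clear_env) come from Encore; everything else, including exact key spelling and layout, is illustrative rather than the actual schema:

```toml
# Hypothetical: templates/game_a1b2c3d4e5.toml
id     = "game_a1b2c3d4e5"                   # game_ + 10 alphanumeric chars
binary = "/var/lib/encore/uploads/my_server" # illustrative upload path
max_instances       = 8
port_liveness_probe = true
clear_env           = true                   # optional, shrinks blast radius

# Whitelist of client-settable options; anything not declared here is
# rejected during create-flow validation (illustrative entry).
[extra_options.map_name]
type = "string"
```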

Security and trust boundaries

  • Public routes:
    • create instance
    • query instance
  • Admin routes:
    • list instances/templates
    • terminate instance
    • reload templates
    • create/update/delete templates
    • upload binaries

Admin routes are not mounted unless admin_token is set in the server config.
When mounted, they require an X-Admin-Token header or Authorization: Bearer <token>.
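
A framework-agnostic sketch of that check (the function is ours; a hardened version would also compare tokens in constant time):

```rust
/// Accept either X-Admin-Token: <token> or Authorization: Bearer <token>.
/// `configured` is Some(..) only when admin_token is set in the server
/// config; otherwise the admin routes are never mounted and this check
/// never runs.
fn is_admin(
    configured: Option<&str>,
    x_admin_token: Option<&str>,
    authorization: Option<&str>,
) -> bool {
    let Some(expected) = configured else { return false };
    if x_admin_token == Some(expected) {
        return true;
    }
    authorization
        .and_then(|h| h.strip_prefix("Bearer "))
        .map_or(false, |token| token == expected)
}
```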

Template authoring is also a security boundary:

  • extra_options whitelist prevents arbitrary argument injection
  • argument substitution only uses declared values and fixed placeholders
  • optional clear_env and cgroup constraints reduce process blast radius
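
A sketch of the whitelist check from step 3 of the create flow, with illustrative type and function names:

```rust
use std::collections::HashMap;

/// Illustrative option schema: the template declares the allowed keys
/// and the type each value must have.
enum OptionType { Bool, Int, Str }

enum OptionValue { Bool(bool), Int(i64), Str(String) }

/// Reject any option that is not declared in the template, or whose
/// value has the wrong type, so nothing undeclared ever reaches the
/// rendered command line.
fn validate_extra_options(
    whitelist: &HashMap<String, OptionType>,
    supplied: &HashMap<String, OptionValue>,
) -> Result<(), String> {
    for (key, value) in supplied {
        match (whitelist.get(key), value) {
            (None, _) => return Err(format!("option not allowed: {key}")),
            (Some(OptionType::Bool), OptionValue::Bool(_))
            | (Some(OptionType::Int), OptionValue::Int(_))
            | (Some(OptionType::Str), OptionValue::Str(_)) => {}
            _ => return Err(format!("wrong type for option: {key}")),
        }
    }
    Ok(())
}
```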

Scalability boundaries

Encore is intentionally simple and host-local:

  • One process manages child processes on one host.
  • Capacity is bounded by host CPU/memory, configured template limits, and port ranges.
  • Horizontal scaling means running multiple Encore hosts behind your own matchmaking/routing layer.