Concepts

Architecture

A three-tier model — Mini App, Express API, OpenClaw servers — with a shallow interface between each layer and every step observable.

High-level diagram

  ┌─────────────────────┐
  │  Telegram Mini App  │  React 19 + Vite + @twa-dev/sdk
  │  (frontend/)        │  Served as static files from /opt/exmer/frontend/dist
  └──────────┬──────────┘
             │ HTTPS / JWT bearer
             ▼
  ┌─────────────────────┐
  │  EXMER Backend      │  Express 5 + ssh2 + SQLite
  │  (backend/)         │  Single Node process, no workers
  └──────────┬──────────┘
             │ SSH (password or key auth)
             ▼
  ┌─────────────────────┐
  │  OpenClaw Servers   │  Your remote Linux boxes
  │  (~/.openclaw/)     │  Managed one at a time via sshExec
  └─────────────────────┘

Request lifecycle

  1. User opens the Mini App in Telegram. @twa-dev/sdk hands us initData — signed by Telegram, delivered by the client.
  2. Frontend posts initData to /api/auth/telegram. Backend validates the HMAC signature using TELEGRAM_BOT_TOKEN.
  3. Backend issues a JWT (24h TTL) containing the Telegram user ID and username. Frontend stores it at module level (not React state).
  4. Every subsequent request carries Authorization: Bearer <jwt>. Middleware chain: requestId → rateLimit → authMiddleware → routes.
  5. Per-server routes additionally run requireServerAccess(role) which checks the server_access table (owner > admin > viewer > none) unless the user is in ADMIN_USER_IDS.
  6. Destructive actions also call audit(...) which records who, what, when, and the result.
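Step 2 is the one piece of cryptography in the flow. A sketch of what that validation looks like, following Telegram's documented scheme (the helper name is illustrative, not the backend's actual function): derive a secret key by HMAC-SHA256 of the bot token under the literal key "WebAppData", then HMAC the sorted key=value pairs (minus the hash field itself) and compare.

```typescript
import { createHmac } from "node:crypto";

// Hypothetical helper mirroring what /api/auth/telegram does with initData.
function verifyInitData(initData: string, botToken: string): boolean {
  const params = new URLSearchParams(initData);
  const hash = params.get("hash");
  if (!hash) return false;
  params.delete("hash"); // the hash signs everything except itself

  // Data-check string: key=value pairs, sorted alphabetically, newline-joined.
  const dataCheckString = [...params.entries()]
    .map(([k, v]) => `${k}=${v}`)
    .sort()
    .join("\n");

  // Secret key is HMAC-SHA256("WebAppData", botToken) per Telegram's spec.
  const secret = createHmac("sha256", "WebAppData").update(botToken).digest();
  const computed = createHmac("sha256", secret)
    .update(dataCheckString)
    .digest("hex");
  return computed === hash;
}
```

Because the bot token never reaches the browser, only the backend (or Telegram itself) can produce a valid hash — which is why the frontend forwards initData verbatim rather than parsing it.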

Data flow for OpenClaw operations

Two distinct paths, chosen per endpoint:

Fast path: direct file read

Config, sessions, workspace files, auth profiles — read directly from JSON files under ~/.openclaw/. Round-trip: ~20 ms. Used for everything the UI shows.
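The fast path amounts to one SSH round-trip and a local parse. A minimal sketch, assuming an injected sshExec-style helper that resolves with a command's stdout (the real signature in backend/ may differ):

```typescript
// Assumed shape of the SSH helper; the project's actual sshExec may take
// a server record and return more than stdout.
type SshExec = (cmd: string) => Promise<string>;

// Read and parse a JSON file from the remote box in a single round-trip.
// No openclaw CLI involved, which is why this path stays around ~20 ms.
async function readRemoteJson<T>(sshExec: SshExec, path: string): Promise<T> {
  const stdout = await sshExec(`cat ${path}`);
  return JSON.parse(stdout) as T;
}
```

Injecting the exec function keeps the parsing logic trivially testable with a stub, no live SSH connection required.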

CLI path: openclaw binary

Agent add/delete, gateway restart, skills list, health check — these go through the openclaw CLI over SSH. Round-trip: 3–8 seconds. Results are cached in memory for 30–60 s so repeated requests don't pay the cost twice.
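The caching described above can be sketched as a small TTL memoizer — this is an illustration of the idea, not the backend's actual cache:

```typescript
// Minimal in-memory TTL cache. The clock is injectable so expiry is testable.
class TtlCache<T> {
  private entries = new Map<string, { value: T; expires: number }>();

  constructor(
    private ttlMs: number,
    private now: () => number = Date.now,
  ) {}

  async getOrCompute(key: string, compute: () => Promise<T>): Promise<T> {
    const hit = this.entries.get(key);
    if (hit && hit.expires > this.now()) return hit.value; // fresh: skip SSH
    const value = await compute(); // slow path: the 3–8 s CLI round-trip
    this.entries.set(key, { value, expires: this.now() + this.ttlMs });
    return value;
  }
}
```

Because the cache lives in process memory, a restart simply starts cold — the first request after deploy pays the CLI cost once and everything after that is fast again.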

Database schema

SQLite with WAL mode and foreign keys on. Tables:

  Table               Purpose
  servers             SSH connection info + owner + has_openclaw flag
  server_access       Per-server role grants (owner/admin/viewer)
  alerts              Monitor-generated alerts, read/unread
  audit_log           Every destructive action with user, result, request_id
  notification_prefs  Per-user alert mutes and quiet hours
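The roles granted in server_access form a strict hierarchy (owner > admin > viewer > none, as in the request lifecycle). A sketch of the comparison behind a requireServerAccess-style check — the numeric ranks and helper name are illustrative assumptions, not the actual implementation:

```typescript
// Map each role to a rank so "at least admin" becomes a numeric comparison.
const RANK = { none: 0, viewer: 1, admin: 2, owner: 3 } as const;
type Role = keyof typeof RANK;

// True if the granted role meets or exceeds the required role.
function hasRole(granted: Role, required: Role): boolean {
  return RANK[granted] >= RANK[required];
}
```

Encoding the hierarchy as ranks means adding a new role later is a one-line change rather than a rewrite of every permission check.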

Background workers

The background workers run inside the same Node process as the HTTP server.

No queue, no microservices

Everything runs in one Node process. This is deliberate: for single-digit-to-low-double-digit server counts, a single process is simpler to deploy, debug, and back up than any queue-based architecture. If you ever need to scale past that, the task manager is the only piece that would need to move to Redis — everything else is stateless HTTP.