
FanMirror

Real-time 3D avatar mirror for live events — face/hand/pose tracked entirely in the browser.

FanMirror turns event attendees into 3D VRM/GLB avatars in real time. Walk up to a screen, the camera tracks your face / hands / pose simultaneously via MediaPipe, and a Three.js avatar mirrors your blendshapes, head rotation, IK arm positions, and finger curls — no app, no plugin, all client-side ML. Backed by a Node/Express CRM API for event management, snapshot email delivery, and asset hosting; ships with an Electron kiosk mode for offline venue installs.

Last update: Apr 14, 2026 · Primary language: JavaScript
  • JavaScript
  • Node.js
  • Express
  • Three.js
  • MediaPipe
  • WebGL
  • SQLite
  • Electron
  • Apache
  • systemd
FanMirror — Real-time 3D avatar mirror for live events — face/hand/pose tracked entirely in the browser.

FanMirror is a 3D avatar mirror for live events — conventions, brand activations, sports activations, anywhere a screen needs to do something a screen can't usually do. A visitor walks up, the camera locks onto their face, and a Three.js avatar mimics their head turns, blendshapes, arm positions, and finger curls in real time. The whole pipeline runs in the browser; no app to install, no plugin to allow.

What's actually being tracked

  • Face landmarks → 52 ARKit blend shapes, mapped from MediaPipe's mesh into the VRM expression rig. Smiles, brow raises, jaw drops, eye direction — all of it lands on the avatar's face within a frame or two.
  • Upper-body IK across head, torso, shoulders, elbows, wrists, and the full five-finger × three-joint hand rig. The avatar's torso leans with the visitor's hips; arms reach where the visitor reaches.
  • Procedural idle breathing when no face is in frame, so the screen never goes inert between guests.
  • Avatar + background pickers — drop in any VRM/GLB, pick from five scene backgrounds plus a custom color wheel, and it's a different installation in seconds.
  • Snapshot capture with countdown, branding overlay, instant download, and email delivery via the CRM API. Visitors leave with a 3D-mirrored selfie; the operator leaves with the email list.

How it's deployed

The browser experience is static HTML / CSS / JS in public/; the CRM lives in api/ as a Node.js + Express service on SQLite, fronted by Apache as a reverse proxy on fanmirror.gamingworld.uk. systemd manages the fanmirror-api.service process; Let's Encrypt handles the cert. For venues without reliable internet, the project also packages an Electron kiosk build that runs the same mirror locally with cached models — same UX, no network dependency.

Why client-side ML

The obvious alternative is a server-side pipeline with a real GPU — but that puts every frame on the wire, costs per minute, and dies with the wifi. MediaPipe's WASM/WebGL runtime is fast enough on a modest laptop that the entire ML stack can live in the visitor's tab. Latency is decided by the camera and the GPU in front of the visitor, not a round trip to a data centre. For a kiosk that needs to feel instant, that's the right trade.

Straight from the source

The project's own README.

Rendered in place — every link, image, and code block carried over from the repo. The page below is what a contributor would see opening the project for the first time.

FanMirror

Real-time digital avatar mirror for live events. Attendees see themselves transformed into 3D VRM/GLB characters via browser-based face, hand, and pose tracking. No downloads, no plugins — runs entirely in the browser.

Live at: https://fanmirror.gamingworld.uk
Mirror app: https://fanmirror.gamingworld.uk/mirror/
Private repo — do not share or make public.


What It Does

FanMirror is a two-part system:

  1. Web Portal (public/index.html) — Landing page with animated canvas particle swirl, feature cards, and a "Launch Mirror" CTA.
  2. Browser Mirror (public/mirror/) — The main experience. Opens a webcam, runs three ML models simultaneously (face, hands, pose), and drives a 3D avatar in real time via Three.js.

The avatar's facial expressions, head rotation, arm positions, elbow bends, finger curls, torso lean, and hip tilt are all driven by the user's body via MediaPipe landmark detection. Everything runs client-side — no server-side ML, no cloud GPU.

Key Features

  • 3D avatar rendering via Three.js with VRM and GLB format support
  • 52 ARKit facial blend shapes mapped from MediaPipe to VRM expressions
  • Full upper-body tracking: head, torso, arms (shoulder→elbow→wrist IK), fingers (5 per hand × 3 joints)
  • Procedural idle breathing animation when no face is tracked
  • Snapshot capture with countdown, branding overlay, download, and email delivery
  • Background picker with 5 scene images + custom color wheel
  • Avatar picker with auto-generated thumbnails
  • Admin panel (Ctrl+Shift+A) for mode/avatar/color config
  • Electron kiosk mode for offline event deployment
  • CRM API for event management, user email capture, asset hosting

Technology Stack (Complete)

Infrastructure

  • Ubuntu Server 22.04.5 LTS (Jammy Jellyfish): Host OS, Linux 5.15 kernel
  • Apache HTTP Server 2.4.52: Reverse proxy to the Node.js API (port 8430), static file serving from public/, SSL termination, .htaccess security rules blocking .md/.log/.db files
  • Let's Encrypt / Certbot (auto-renewed): Free HTTPS certificates for fanmirror.gamingworld.uk
  • systemd: fanmirror-api.service manages the Node.js API process

Backend (CRM API)

The API lives in api/ and handles event management, user CRM, email capture, and asset hosting. It runs on port 8430 behind Apache's reverse proxy.

  • Node.js v23.11.1: JavaScript runtime for the API server
  • npm 10.9.2: Package manager
  • Express.js ^4.21.0: REST API framework — routes in api/routes/ (events, users, assets, public)
  • SQLite 3 3.37.2: Embedded database (database/fanmirror.db) — events, users, snapshots, assets
  • better-sqlite3 ^11.0.0: Synchronous SQLite3 bindings for Node.js
  • helmet ^8.0.0: HTTP security headers
  • cors ^2.8.5: Cross-origin resource sharing middleware
  • express-rate-limit ^7.4.0: Rate limiting on public API endpoints
  • multer ^1.4.5-lts.1: Multipart file upload handling (avatar/background uploads)
  • nodemailer ^6.9.0: Email sending for snapshot delivery
  • dotenv ^16.4.0: Environment variable management (API keys, SMTP config)
  • uuid ^10.0.0: Unique ID generation for database records

API Authentication: Protected routes require a Bearer token. Public routes (/api/public/*) are rate-limited and require no auth. Default admin password: changeme (stored in localStorage, not server-side).
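
A minimal sketch of a Bearer-token middleware like the one in api/middleware/auth.js (the API_TOKEN env var name and error shape are assumptions, not the project's actual code):

// Illustrative Bearer-token check; dotenv is loaded in server.js per the stack table.
module.exports = function auth(req, res, next) {
  const header = req.headers.authorization || '';
  const token = header.startsWith('Bearer ') ? header.slice(7) : null;

  // API_TOKEN is a hypothetical env var name for this sketch.
  if (!token || token !== process.env.API_TOKEN) {
    return res.status(401).json({ error: 'Unauthorized' });
  }
  next();
};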

Frontend (Mirror App)

The mirror app is pure vanilla JavaScript with ES Modules — no React, Vue, Angular, webpack, Vite, or any build system. Dependencies are loaded via CDN using the browser-native <script type="importmap"> feature.

  • HTML5 (ES Modules, Import Maps): Semantic markup, <script type="importmap"> for CDN dependency resolution
  • CSS3 (Custom Properties, Glassmorphism): backdrop-filter: blur(), CSS animations (spin, pulse, float), responsive grid, custom scrollbars
  • JavaScript (ES2022+, vanilla, no framework): All DOM manipulation, state management, and rendering logic written in plain JS modules
  • Three.js v0.170.0: WebGL 3D rendering engine — scene graph, PBR materials, skeletal animation, OrbitControls, texture loading, WebGLRenderer with sRGB color space
  • @pixiv/three-vrm v3.4.5: VRM avatar format loader — humanoid bone mapping, expression manager (aa, oh, blink, happy, angry, etc.), VRMLoaderPlugin for GLTFLoader
  • WebGL 2.0: Browser-native GPU-accelerated 3D rendering, anti-aliased, preserveDrawingBuffer for snapshot capture
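
A rough sketch of how those renderer settings fit together in Three.js r170 (not necessarily how scene-manager.js configures them):

import * as THREE from 'three';

// Anti-aliased WebGL renderer; preserveDrawingBuffer keeps the last frame
// readable so the snapshot feature can capture the canvas.
const renderer = new THREE.WebGLRenderer({
  antialias: true,
  preserveDrawingBuffer: true,
});
renderer.outputColorSpace = THREE.SRGBColorSpace; // sRGB output color space
renderer.setSize(window.innerWidth, window.innerHeight);
document.body.appendChild(renderer.domElement);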

Import map (from mirror/index.html):

{
  "imports": {
    "three": "https://cdn.jsdelivr.net/npm/[email protected]/build/three.module.js",
    "three/addons/": "https://cdn.jsdelivr.net/npm/[email protected]/examples/jsm/",
    "@pixiv/three-vrm": "https://cdn.jsdelivr.net/npm/@pixiv/[email protected]/lib/three-vrm.module.min.js"
  }
}

ML / Motion Capture Pipeline

All ML inference runs entirely in the browser using WebAssembly + GPU delegate. No server-side ML, no cloud API calls.

  • MediaPipe Tasks Vision v0.10.18: Runtime/WASM bundle for all 3 models below
  • FaceLandmarker (float16, GPU delegate): 478 face landmarks, 52 ARKit blend shapes (jawOpen, eyeBlinkLeft, mouthSmileRight, etc.), 4×4 facial transformation matrix
  • HandLandmarker (float16, GPU delegate): 21 landmarks per hand, up to 2 hands, handedness classification (Left/Right)
  • PoseLandmarker Lite (float16, GPU delegate): 33 full-body landmarks with visibility scores (shoulders, elbows, wrists, hips, knees, etc.)

Model URLs (loaded at runtime from Google Cloud Storage):

Face: https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/1/face_landmarker.task
Hand: https://storage.googleapis.com/mediapipe-models/hand_landmarker/hand_landmarker/float16/1/hand_landmarker.task
Pose: https://storage.googleapis.com/mediapipe-models/pose_landmarker/pose_landmarker_lite/float16/1/pose_landmarker_lite.task

Camera input: 640×480, user-facing (facingMode: 'user'), via WebRTC getUserMedia.
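
Putting the camera and model pieces together, a sketch of the Tasks Vision setup for the face model (the bare '@mediapipe/tasks-vision' import assumes the dependency resolves; option names follow the Tasks Vision docs rather than the exact face-tracker.js code):

import { FilesetResolver, FaceLandmarker } from '@mediapipe/tasks-vision';

const video = document.createElement('video');
video.srcObject = await navigator.mediaDevices.getUserMedia({
  video: { width: 640, height: 480, facingMode: 'user' },
});
await video.play();

const vision = await FilesetResolver.forVisionTasks(
  'https://cdn.jsdelivr.net/npm/@mediapipe/tasks-vision@0.10.18/wasm'
);
const faceLandmarker = await FaceLandmarker.createFromOptions(vision, {
  baseOptions: {
    modelAssetPath:
      'https://storage.googleapis.com/mediapipe-models/face_landmarker/face_landmarker/float16/1/face_landmarker.task',
    delegate: 'GPU',
  },
  runningMode: 'VIDEO',
  outputFaceBlendshapes: true,              // 52 ARKit blend shapes
  outputFacialTransformationMatrixes: true, // 4×4 head transform
  numFaces: 1,
});

function track() {
  const result = faceLandmarker.detectForVideo(video, performance.now());
  // result.faceLandmarks, result.faceBlendshapes, result.facialTransformationMatrixes
  requestAnimationFrame(track);
}
requestAnimationFrame(track);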

IK & Animation Pipeline (hand-ik.js)

This is the most complex part of the system and an area of active development. The challenge is mapping 2D/3D landmark positions from a monocular webcam to believable 3D skeletal animation.

How Arm IK Works

  1. PoseLandmarker provides shoulder (11/12), elbow (13/14), and wrist (15/16) positions in normalized coordinates (0-1 range, with noisy Z-depth)
  2. Pose landmarks are smoothed via exponential moving average (EMA) with velocity-damped jump detection — large sudden jumps get dampened more aggressively
  3. Upper arm rotation is computed from the shoulder→elbow direction: the Y component sets the raise angle (Z-axis rotation) and the Z component sets forward/back (X-axis rotation, clamped so the arm can't swing behind the body); see the sketch after this list
  4. Lower arm (elbow) rotation is computed from the angle between upper-arm and forearm vectors
  5. All rotations are applied via THREE.Quaternion.slerp() at ARM_SPEED (0.18) for smooth interpolation
  6. When no pose data is detected for 18+ frames, arms slerp back to a natural rest pose (at sides)
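
A simplified sketch of steps 3–5 for one arm. The axis mapping and clamp range follow the list above, but the scale factor and exact trig are illustrative rather than lifted from hand-ik.js:

import * as THREE from 'three';

const ARM_SPEED = 0.18;
const _euler = new THREE.Euler();
const _target = new THREE.Quaternion();

// shoulder/elbow are smoothed pose landmarks in normalized 0..1 image coords.
function applyUpperArm(upperArmBone, shoulder, elbow) {
  const dx = elbow.x - shoulder.x;
  const dy = elbow.y - shoulder.y;
  const dz = elbow.z - shoulder.z;

  // Y component -> raise angle (rotation about Z); image Y grows downward,
  // so an elbow above the shoulder produces a positive raise.
  const raise = Math.atan2(-dy, Math.abs(dx));

  // Z component -> forward/back (rotation about X), clamped to the
  // [-0.15, 1.0] range so the arm can't appear to go behind the body.
  const forward = THREE.MathUtils.clamp(-dz * 2.0, -0.15, 1.0); // scale factor illustrative

  _euler.set(forward, 0, raise);
  _target.setFromEuler(_euler);

  // Slerp toward the target so motion stays smooth rather than twitchy.
  upperArmBone.quaternion.slerp(_target, ARM_SPEED);
}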

How Finger Curl Works

  1. HandLandmarker provides 21 landmarks per hand (wrist, 4 thumb joints, 4×4 finger joints)
  2. Raw landmarks are used directly (no smoothing) — the distance-based curl method is inherently noise-resistant
  3. For each finger, measure MCP-to-TIP straight-line distance, normalized by wrist-to-MCP distance for scale invariance
  4. Map the extension ratio (0 = curled, 1 = straight) to per-joint rotation angles (see the sketch after this list):
    • Proximal: up to ~83°
    • Intermediate: up to ~89°
    • Distal: up to ~60°
  5. Thumb uses a different approach: ThumbMetacarpal gets Y-rotation (opposition) based on thumb-tip-to-palm-center distance. Proximal and Distal get joint-angle-based flex.
  6. Finger bones slerp at FINGER_SPEED (0.35) — faster than arms because finger movement needs to feel responsive
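
A hedged sketch of the distance-based curl for one finger. Landmark indices follow MediaPipe's hand model (wrist = 0, index MCP = 5, index TIP = 8); the ratio clamping and flex axis are illustrative, not the exact hand-ik.js code:

import * as THREE from 'three';

const FINGER_SPEED = 0.35;
const dist = (a, b) => Math.hypot(a.x - b.x, a.y - b.y, a.z - b.z);

// bones: [proximal, intermediate, distal] VRM finger bones for this finger.
function curlFinger(bones, landmarks, mcpIdx, tipIdx) {
  const wrist = landmarks[0];
  const mcp = landmarks[mcpIdx];
  const tip = landmarks[tipIdx];

  // Extension ratio: straight-line MCP->TIP distance, normalized by the
  // wrist->MCP distance so the measure is scale-invariant.
  const ratio = THREE.MathUtils.clamp(dist(mcp, tip) / dist(wrist, mcp), 0, 1);
  const curl = 1 - ratio; // 0 = straight, 1 = fully curled

  // Per-joint maximum flex angles (degrees), per the ranges above.
  const maxAngles = [83, 89, 60]; // proximal, intermediate, distal
  bones.forEach((bone, i) => {
    const target = new THREE.Quaternion().setFromEuler(
      new THREE.Euler(THREE.MathUtils.degToRad(maxAngles[i] * curl), 0, 0)
    );
    bone.quaternion.slerp(target, FINGER_SPEED); // fast enough for a fist to register
  });
}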

Known IK Challenges (Help Wanted)

  • Z-depth noise: MediaPipe Pose uses monocular depth estimation which is unreliable. Arms sometimes appear to go behind the body. Current mitigation: clamp forward angle to [-0.15, 1.0] range.
  • Thumb mapping: VRM's 3-bone thumb (Metacarpal, Proximal, Distal) doesn't map cleanly to real thumb opposition. The current approach approximates but doesn't look perfect.
  • Finger responsiveness vs smoothness tradeoff: Too much smoothing makes fingers feel unresponsive; too little causes jitter. Currently using slerp at 0.35 with no landmark smoothing.
  • Arm jitter at rest: When standing still, small pose landmark fluctuations cause subtle arm movement. Mitigated by EMA smoothing on pose landmarks and rest-return after 18 frames without significant change.
  • Elbow estimation: The 2D elbow angle computation can be ambiguous (same 2D projection for multiple 3D poses). Full IK solve would be better but complex to implement.

Tuning Constants (in hand-ik.js)

REST_DELAY   = 18    // frames before returning to rest (~0.3s at 60fps)
REST_SPEED   = 0.10  // slerp speed for rest return
ARM_SPEED    = 0.18  // arm slerp — responsive but not twitchy
BODY_SPEED   = 0.12  // torso slerp
FINGER_SPEED = 0.35  // finger slerp — must be responsive for fist
POSE_ALPHA   = 0.45  // EMA blend for pose landmark smoothing
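
POSE_ALPHA drives the EMA from step 2 of the arm pipeline. A minimal sketch of that smoothing, with the velocity-damped jump detection reduced to a single threshold (both the 0.15 threshold and the damping factor are illustrative, not the hand-ik.js values):

const POSE_ALPHA = 0.45; // EMA blend: higher = more responsive, lower = smoother

let smoothed = null;

function smoothPose(landmarks) {
  if (!smoothed) {
    smoothed = landmarks.map((p) => ({ ...p }));
    return smoothed;
  }
  smoothed = smoothed.map((prev, i) => {
    const next = landmarks[i];
    const jump = Math.hypot(next.x - prev.x, next.y - prev.y, next.z - prev.z);
    // Damp large sudden jumps harder than small drift.
    const alpha = jump > 0.15 ? POSE_ALPHA * 0.3 : POSE_ALPHA;
    return {
      x: prev.x + (next.x - prev.x) * alpha,
      y: prev.y + (next.y - prev.y) * alpha,
      z: prev.z + (next.z - prev.z) * alpha,
      visibility: next.visibility,
    };
  });
  return smoothed;
}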

Avatar Format Support

  • VRM (.vrm), loaded via the @pixiv/three-vrm VRMLoaderPlugin → GLTFLoader. Full tracking: facial expressions (VRM expression mapping), head rotation, arm IK, finger curl, torso rotation, procedural breathing.
  • GLB (.glb), loaded directly via the Three.js GLTFLoader. Partial tracking: morph targets (if matching ARKit names), head bone rotation, animation playback. No finger/arm IK.

VRM models are rotated 180° around Y-axis to face the camera (VRM default faces -Z). Head rotation Y/Z axes are negated for correct mirror behavior.
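
In @pixiv/three-vrm terms, loading and orienting a model looks roughly like this (the file path and per-frame calls are examples of the library's API, not the exact scene-manager.js code):

import * as THREE from 'three';
import { GLTFLoader } from 'three/addons/loaders/GLTFLoader.js';
import { VRMLoaderPlugin } from '@pixiv/three-vrm';

const scene = new THREE.Scene();
const loader = new GLTFLoader();
loader.register((parser) => new VRMLoaderPlugin(parser));

const gltf = await loader.loadAsync('/assets/avatar.vrm'); // path is illustrative
const vrm = gltf.userData.vrm;

// VRM models face -Z by default, so spin them 180° to face the camera.
vrm.scene.rotation.y = Math.PI;
scene.add(vrm.scene);

// Per frame: expressions and bones are driven from tracking data, e.g.
vrm.expressionManager.setValue('happy', 0.8);
const head = vrm.humanoid.getNormalizedBoneNode('head');
// pitch/yaw/roll stand in for angles decomposed from the 4×4 facial
// transformation matrix; Y and Z are negated for mirror behavior.
const [pitch, yaw, roll] = [0.1, 0.2, 0.0];
head.rotation.set(pitch, -yaw, -roll);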

Browser APIs Used

  • WebRTC getUserMedia: Camera access for face/hand/pose tracking
  • Canvas 2D: Webcam preview with landmark overlay, face bounding box crop (256×256), snapshot branding bar
  • Fullscreen API: Immersive kiosk display (F key)
  • localStorage: Admin config persistence (mode, avatar model, wireframe color, password)
  • Blob / Data URL: Snapshot capture, download as PNG, email attachment
  • ES Module Import Maps: CDN dependency resolution (Three.js, VRM, MediaPipe)
  • requestAnimationFrame: Main render loop + tracking loop
  • performance.now(): FPS counter, MediaPipe video timestamp synchronization
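
As a sketch of the snapshot path (canvas readback, local download, email handoff); the /api/public/snapshots route and its field names are hypothetical, not the documented API:

async function captureSnapshot(renderer, email) {
  // preserveDrawingBuffer on the WebGLRenderer keeps the canvas readable here.
  const png = renderer.domElement.toDataURL('image/png');

  // Trigger a local download for the visitor.
  const a = document.createElement('a');
  a.href = png;
  a.download = 'fanmirror-snapshot.png';
  a.click();

  // Hand the image to the CRM API for email delivery (route name is hypothetical).
  await fetch('/api/public/snapshots', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ email, image: png }),
  });
}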

Desktop Kiosk (Electron)

  • Electron ^33.0.0: Chromeless fullscreen window, offline operation
  • electron-builder ^25.0.0: Windows (.exe) and macOS (.app) builds

The kiosk app (kiosk_src/) wraps the mirror app in an Electron shell for offline event deployment. Config is stored in local config.json instead of localStorage.
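
A minimal sketch of a kiosk-style Electron main process (window options and the renderer path are illustrative; the real kiosk_src/src/main/main.js may differ):

// Illustrative Electron main process for a fullscreen, chromeless kiosk window.
const { app, BrowserWindow } = require('electron');
const path = require('path');

app.whenReady().then(() => {
  const win = new BrowserWindow({
    kiosk: true,           // fullscreen, no window chrome, hard to exit accidentally
    autoHideMenuBar: true,
    backgroundColor: '#0a0a1a',
  });
  // Load the locally bundled mirror app so no network is needed at the venue.
  win.loadFile(path.join(__dirname, '../renderer/index.html'));
});

app.on('window-all-closed', () => app.quit());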

Design System

  • Primary Red: #e63946
  • Primary Blue: #4895ef
  • Background Dark: #07071a (landing), #0a0a1a (mirror)
  • Muted Text: #777790
  • Display Font: Orbitron (500/700/900) — futuristic headings
  • Heading Font: Rajdhani (400/600/700) — semi-geometric UI labels
  • Body Font: Inter (400/500/600) — clean readability
  • Gradients: Always linear-gradient(135deg, blue, red)
  • Glass Effects: backdrop-filter: blur(12-16px) with semi-transparent surfaces
  • Canvas Swirl: Custom particle system — red/blue/purple flow only (no orange, yellow, cyan)
  • Asset Format: WebP for all images

CDN Dependencies

  • jsDelivr (cdn.jsdelivr.net): Three.js v0.170.0, @pixiv/three-vrm v3.4.5, devicon technology icons
  • Google Cloud Storage (storage.googleapis.com): MediaPipe ML models (face, hand, pose landmarkers)
  • Google Fonts (fonts.googleapis.com): Orbitron, Rajdhani, Inter font families
  • MediaPipe WASM (cdn.jsdelivr.net): @mediapipe/tasks-vision v0.10.18 WASM runtime bundle

Project Structure

/var/www/fanmirror/
├── public/                      # Web root (Apache serves this)
│   ├── index.html               # Landing page with canvas swirl + tech stack
│   ├── css/style.css            # Landing page styles (~900 lines)
│   ├── js/
│   │   ├── swirl.js             # Canvas particle swirl background
│   │   └── cards.js             # Card background image lazy-loader
│   ├── images/                  # Landing page assets
│   │   ├── logo.webp            # FanMirror logo
│   │   ├── splash.webp          # Splash artwork
│   │   ├── launch-fan-mirror-button.webp
│   │   ├── live-face-card.webp
│   │   ├── how-it-works-card.webp
│   │   └── cards/               # Feature card background textures
│   ├── mirror/                  # Browser mirror app
│   │   ├── index.html           # Mirror UI layout + import map
│   │   ├── css/mirror.css       # Mirror styles (~670 lines)
│   │   └── js/
│   │       ├── app.js           # Entry point — wires modules, avatar/bg pickers
│   │       ├── scene-manager.js # Three.js scene, rendering, model loading
│   │       ├── face-tracker.js  # MediaPipe Face + Hand + Pose landmarkers
│   │       ├── hand-ik.js       # Arm/finger/torso IK from landmarks
│   │       ├── admin.js         # Admin panel (Ctrl+Shift+A), config
│   │       └── snapshot.js      # Capture, branding, download, email
│   ├── assets/
│   │   ├── *.vrm                # VRM avatar models (3 included)
│   │   ├── facecap.glb          # ARKit blendshape GLB model
│   │   └── backgrounds/         # Scene background images (5 included)
│   ├── snapshots/               # User snapshots (auto-created, gitignored)
│   ├── favicon.ico, *.png       # Favicon pack
│   ├── site.webmanifest
│   ├── robots.txt, humans.txt
│   └── .well-known/security.txt
├── api/                         # Express CRM API (port 8430)
│   ├── server.js                # Express app entry
│   ├── routes/
│   │   ├── events.js            # Event CRUD (auth required)
│   │   ├── users.js             # User/email CRUD (auth required)
│   │   ├── assets.js            # Asset management (auth required)
│   │   └── public.js            # Public endpoints (rate-limited, no auth)
│   ├── middleware/
│   │   ├── auth.js              # Bearer token validation
│   │   └── validate.js          # Request validation
│   └── utils/init-db.js         # Database schema initialization
├── kiosk_src/                   # Electron kiosk app
│   ├── src/main/main.js         # Electron main process
│   ├── src/renderer/            # Renderer process (loads mirror app)
│   └── package.json             # Electron ^33.0.0, electron-builder ^25.0.0
├── database/                    # SQLite DB (gitignored)
├── docs/
│   ├── quick-start.md           # Development guide
│   ├── architecture.md          # System architecture
│   ├── api-schema.md            # API endpoint documentation
│   └── kiosk-guide.md           # Electron kiosk setup
├── CHANGELOG.md                 # Version history
├── CLAUDE.md                    # AI assistant project instructions
└── README.md                    # This file

Data Flow

Camera (640×480)
    │
    ▼
┌─────────────────────────────────────────────────────┐
│  face-tracker.js                                     │
│  ├── FaceLandmarker  → 478 landmarks, 52 blendshapes│
│  │                      4×4 transformation matrix    │
│  ├── HandLandmarker  → 21 landmarks × 2 hands       │
│  │                      + handedness (Left/Right)    │
│  └── PoseLandmarker  → 33 body landmarks             │
│                          + visibility scores          │
└──────────┬──────────────┬───────────────┬────────────┘
           │              │               │
     onFace callback  onHands callback  onPose callback
           │              │               │
           ▼              ▼               ▼
┌─────────────────────────────────────────────────────┐
│  scene-manager.js                                    │
│  ├── updateFaceMesh(landmarks) → wireframe mode      │
│  ├── updateAvatar(blendShapes, matrix)               │
│  │   ├── VRM: mapARKitToVRM() → expressionManager    │
│  │   └── GLB: morphTargetInfluences                  │
│  │   └── _applyHeadRotation(matrix) → head bone      │
│  ├── updateHands(hands, handedness)                  │
│  │   └── hand-ik.js: applyFingers() + applyArmsFromHands() │
│  └── updatePose(poseLandmarks)                       │
│      └── hand-ik.js: applyPose() → arms + torso      │
└──────────────────────────────────────────────────────┘
           │
           ▼
    Three.js WebGL Renderer → <canvas>

Quick Start

# The frontend is served by Apache — changes to HTML/CSS/JS are live immediately.

# Restart the API server (after code changes to api/)
sudo systemctl restart fanmirror-api

# View API logs
sudo journalctl -u fanmirror-api -f

# Initialize/reset the database
cd api && node utils/init-db.js

# Run API in dev mode (auto-restart on changes)
cd api && npm run dev

# Run Electron kiosk (requires display server)
cd kiosk_src && npm install && npm start

URLs

  • Live site: https://fanmirror.gamingworld.uk
  • Mirror app: https://fanmirror.gamingworld.uk/mirror/

Keyboard Shortcuts (Mirror App)

  • Ctrl+Shift+A: Open admin panel
  • \ (backslash): Toggle webcam preview
  • F: Toggle fullscreen

Adding Assets

Avatar Models

  1. Place .vrm or .glb file in public/assets/
  2. Set permissions: chown www-data:www-data file && chmod 644 file
  3. Add a <button class="avatar-pick" data-url="/assets/yourmodel.vrm"> to mirror/index.html
  4. The API endpoint GET /api/public/models auto-discovers files in the assets directory
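
A hedged sketch of what that auto-discovery could look like in Express (the directory path and response shape are assumptions, not the documented api/routes/public.js behavior):

// Illustrative GET /api/public/models handler: list avatar files in public/assets/.
const express = require('express');
const fs = require('fs');
const path = require('path');

const router = express.Router();
const ASSETS_DIR = path.join(__dirname, '../../public/assets');

router.get('/models', (req, res) => {
  const files = fs
    .readdirSync(ASSETS_DIR)
    .filter((f) => f.endsWith('.vrm') || f.endsWith('.glb'));
  res.json(files.map((f) => ({ name: f, url: `/assets/${f}` })));
});

module.exports = router;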

Background Images

  1. Place image in public/assets/backgrounds/ (WebP preferred, 1920×1080+)
  2. Add a <button class="bg-pick" data-bg="/assets/backgrounds/yourimage.webp"> to mirror/index.html

Architecture Decisions

  • No build system — pure ES Modules with CDN imports via browser-native import maps. This means zero build step, instant iteration, and no node_modules for the frontend.
  • No frontend framework — vanilla JS for DOM, state, and rendering. The app is simple enough that React/Vue would add complexity without benefit.
  • VRM over custom format — VRM is the standard humanoid avatar format with built-in bone naming conventions, expression presets, and spring bone physics. It's widely supported by VRoid Studio, Ready Player Me, and other avatar creation tools.
  • MediaPipe over TensorFlow.js — MediaPipe Tasks Vision provides pre-trained, optimized models that run via WASM+GPU delegate. Higher accuracy and better performance than rolling custom TF.js models.
  • Monocular camera only — No depth camera required. This is a deliberate choice for maximum device compatibility (any laptop/phone with a webcam). The tradeoff is noisy Z-depth estimation, which we mitigate with pose landmark smoothing and arm clamping.
  • Slerp-based smoothing — Quaternion spherical interpolation (slerp) for all bone rotations. This produces smooth, natural-looking motion at the cost of some latency. The slerp factor is the primary tuning knob for responsiveness vs smoothness.
  • Distance-based finger curl — Instead of computing per-joint angles from landmark positions (noisy), we measure the straight-line distance from knuckle to fingertip and map that to curl amount. This is more robust and gives reliable fist detection.

What NOT to Do

  • Don't use orange, yellow, cyan, or white in the swirl/accent colors
  • Don't hardcode secrets — use .env for API, localStorage for mirror config
  • Don't break the existing Tulpa project (separate site on the same server)
  • Don't modify Apache configs for other sites
  • Keep files under 200 lines where possible
  • Don't commit .env, database/, or node_modules/

Version History

See CHANGELOG.md for full details.

  • 0.10.0 (2026-03-15): IK v3 (balanced smoothing, responsive fingers, forest default bg, tech stack page)
  • 0.9.0 (2026-03-13): Procedural idle breathing
  • 0.8.0 (2026-03-13): Pose tracking, FPS counter, background picker
  • 0.7.0 (2026-03-13): VRM avatar support, hand tracking, admin cleanup
  • 0.6.0 (2026-03-13): Face tracker rewrite to Tasks Vision, 52 ARKit blend shapes
  • 0.5.0 (2026-03-13): OrbitControls, head pose, blend shapes, auto-framing
  • 0.1.0 (2026-03-13): Initial scaffolding

Built by GamingWorld.uk — Powered by Three.js, MediaPipe, and VRM.

Gallery

The full set.

Build something like this

Want a tool like this for your shop?

We've shipped this kind of thing before. Twenty-minute intro call, no slides.