| Platform | Status | Notes |
|---|---|---|
| macOS arm64 (Apple Silicon) | Tested | Primary target. Signed + notarized DMG. |
| macOS Intel | Not current focus | Builds run under Rosetta; arm64 is the primary target. Report anything that breaks on Intel. |
| Windows 10/11 | Builds, lightly tested | WASAPI audio capture works; GPU path falls back to CPU if Vulkan unavailable. If you’re on Windows, expect to find rough edges — please report them. |
| Linux | Unsupported | Not built; no plans this cycle. |
Builds are distributed via the wallspace-releases repo — Jack will send you a collaborator invite. Download the DMG and double-click to install. Presets live in the app's userData directory (~/Library/Application Support/crt-wall-controller/ on macOS); they survive reinstalls but do not sync between machines automatically. API keys are stored via electron-store in the same userData dir — masked in the UI after first entry, never exported in presets.
WallSpace talks to a lot of external services but most of them are optional. The table below shows exactly what needs a key and what doesn’t. Heads-up: key entry is scattered across several panels (there’s no single “Settings → API Keys” screen yet — we know, it’s on the list). The “Where entered” column tells you which panel to open.
| Service | Key required? | Free tier / default | Where entered |
|---|---|---|---|
| Whisper (local ASR) | No | Fully offline, runs via whisper.cpp subprocess | N/A — works out of the box; it’s the default caption engine |
| LRCLIB (lyrics) | No | Public API, no limit you’ll hit casually | N/A |
| Spotify | OAuth, no raw key | Free Spotify account works for search & metadata; Premium needed for playback control | Zone 2 → Spotify panel → Connect |
| RunPod (cloud GPU) | Yes | Pay-as-you-go; sign up at runpod.io | Expand GPU Console (bar above the tabs) → GPU Resources column → API |
| Hugging Face token | Only for gated models | Free HF account fine for public models | GPU Console → GPU Resources → HF |
| Daydream / fal.ai | Yes | Pay-per-inference. See Daydream pricing convergence. | Per-layer Daydream config panel (select a Daydream layer) |
| Deepgram (cloud ASR) | Yes | $200 free credit on signup | Captions panel → Transcription service picker (if enabled) |
| DeepL (translation) | Yes | Free tier: 500k chars/month. Key ending in :fx auto-detected as free — see the sketch below the table | Captions panel → Translation settings |
| LibreTranslate | No | Fallback to DeepL; bring-your-own endpoint optional | Same panel as DeepL |
| Shazam (song ID) | Yes | RapidAPI free tier available | Layer Editor → Music/Lyrics source → RapidAPI key field |
| Cloudflare TURN | Optional | Free tier: 1TB/month. Falls back to metered.ca free relay if not set | GPU Console → GPU Resources → TURN (configure) |
| AI plugins (ModelsLab, Weavy, Freepik, Leonardo, Gemini Veo, Sora) | Varies | Per-plugin; most have free tiers | AI Plugin Settings panel → pick plugin → enter credentials |
| Scope WebRTC | No | URL-based per layer; signalling handled by Electron | Scope Layer config → endpoint URL |
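A note on the :fx detection mentioned in the DeepL row: DeepL's documented convention is that free-tier keys end in :fx and must call the api-free host. A minimal sketch of that routing (the helper name is ours, not the app's):

```ts
// Illustrative helper, not the app's actual code. DeepL's documented
// convention: free-tier keys end in ":fx" and use api-free.deepl.com.
function deeplBaseUrl(apiKey: string): string {
  return apiKey.trim().endsWith(":fx")
    ? "https://api-free.deepl.com/v2"
    : "https://api.deepl.com/v2";
}

console.log(deeplBaseUrl("abcd1234:fx")); // https://api-free.deepl.com/v2
```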
You don’t need any of these to start using the app. Canvas, 2D sources, Outputs, 3D, MIDI, Captions (via local Whisper), and most of the toolbar work without a single key. If you only have 10 minutes for setup, do these two:
Open the Sources tab at the bottom (Zone 2). In the Folder column, click the folder button (📂) and point it at a folder of videos and images. The app scans it; drag files from the folder list onto any layer in the hierarchy panel (the tree on the left of the canvas). Fastest way to get pixels on screen.
Go to the Outputs tab. If you’ve got a second display connected, pick it from the dropdown on CRT Output 1 and hit Open. That’s enough to see your composition on another screen. Fullscreen it with the button next to Open.
Keys are stored locally in Electron’s secure store; after you type them in once they’re
masked as ****wxyz. They never leave your machine and never get written to presets.
If you’re uncomfortable pasting a key, paste it in a fresh terminal first to check
the prefix is what you expect — then paste into WallSpace.
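For the curious, the ****wxyz masking is just last-four display. A minimal sketch (the function name is illustrative, not from the codebase):

```ts
// Illustrative only: mask a stored key to "****" plus its last four
// characters, matching the "****wxyz" display described above.
function maskKey(key: string): string {
  return key.length <= 4 ? "****" : "****" + key.slice(-4);
}

console.log(maskKey("sk-1234567890wxyz")); // ****wxyz
```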
The canvas is the big area in the middle of the window — it shows your virtual wall layout with every CRT drawn in real time. You compose here; whatever ends up on the canvas is what gets sent to outputs.
Source files: src/renderer/components/CanvasView.tsx, AllWallsCrossLayout.tsx.
The 2D Preview tab (labelled “2D Preview” in the top tab bar) is the flat counterpart to the 3D tab. Instead of a spatial scene, it shows your walls laid out in 2D with live CRT content rendered in place, plus the hierarchy and edit sidebar for whatever you’ve selected. This is where you do precise wall / CRT / projector editing without the 3D camera getting in the way. Media browsing and sources live in the toolbar — see §7a.
Magnet locks pan to the active wall and lets you zoom in cleanly — best for per-wall precision work. Overlay unlocks pan so you can freely drag the preview around. Toggle in the canvas toolbar.
Independent zoom slider on top of the display-mode’s own zoom — useful for getting in on a tiny corner-pin adjustment.
The right-side panel flips between BatchCRTEditPanel (when CRTs are selected), ProjectorEditPanel (projector selected), and wall-edit controls. Selection drives it; click something in the canvas or the hierarchy to edit.
If a Daydream layer is active on this tab, its live-parameter panel shows here — guidance weight, prompt tweaks, publish toggle — so you can tune without switching tabs.
One-click button to create a Syphon output (macOS) from the current 2D preview and open Scope against it. Handy shortcut for external VJ pipelines.
Starts a local Scope server and opens it in your browser — no cloud pod needed for quick tests.
Source file: src/renderer/components/PhysicalTab.tsx.
This is where the pixels actually leave the app. Each output is an OS-level window that you put on a display, a projector, or feed to an external tool like Syphon/Spout/NDI. Corner-pin, zoom, and quality all live here.
| Control | What it does |
|---|---|
| Display dropdown | Which physical screen this output lands on. Up to 4 outputs supported simultaneously. |
| Open / Close | Opens or closes the output window on the chosen display. |
| Fullscreen toggle | Per-output fullscreen. Helpful for projectors; for CRTs on HDMI, go fullscreen after opening. |
| Corner pin (numeric) | Edit projector warp via top-left / top-right / bottom-left / bottom-right x,y numeric values — see the sketch after this table. Use for keystone + skewed projection surfaces. |
| Corner-pin presets | Reset-to-square plus common layouts. Start here if your projection is a normal rectangle. |
| Zoom slider | Pan/zoom inside the output frame without changing the source. |
| Quality selector | High (60fps), Medium (30fps), Low (24fps). Drop it if you’re hitting frame drops. |
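For reference, corner-pin state boils down to four normalised x,y points. A sketch of the shape the numeric inputs edit (type and preset names are our assumptions, not the app's):

```ts
// Hypothetical shape for the four corner-pin points the numeric inputs edit.
// Coordinates normalised 0..1 relative to the output window.
interface CornerPin {
  topLeft: { x: number; y: number };
  topRight: { x: number; y: number };
  bottomLeft: { x: number; y: number };
  bottomRight: { x: number; y: number };
}

// "Reset to square": corners snap back to the unwarped rectangle.
const SQUARE: CornerPin = {
  topLeft: { x: 0, y: 0 },
  topRight: { x: 1, y: 0 },
  bottomLeft: { x: 0, y: 1 },
  bottomRight: { x: 1, y: 1 },
};
```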
Underneath the per-output cards, there’s a grid packing config — auto / custom cols / rows / cell aspect ratio. The Auto-assign button distributes your CRTs across the available output slots using the current grid. Useful when you’ve added new CRTs and don’t want to place them by hand.
Cards in the lower section of the tab route your output to external tools:
The separate mapping window, zoom slider, and draggable corner-pin preview were reverted in v42 because IPC sync between the mapping window and the real projector output was unreliable. The numeric inputs + preset buttons in the Outputs tab are the primary way to adjust corner-pin right now. Pressing P on the output window lets you drag corners locally, but those changes don’t sync back to the main app — use the numeric inputs if you want persistent edits.
When an output window is active, toggling Mapping mode overlays a labelled grid
(A1…D4 for slots plus G1/G2 gutters and
A1/A2 crown rows) so you can see exactly where each CRT or projector
region lands. The four corner handles at TL/TR/BL/BR
show the current warp; drag them on the output window (press P) for a quick local
preview, or use the numeric inputs in the Outputs tab to persist.
When an output contains both a projector and CRTs, the projector shows the scene content filling the entire projector area, with the CRT positions visually represented as cyan outlines labelled by slot (A1, B1…) and corrected to the physical 4:3 aspect ratio, so you can confirm each CRT's footprint matches the real wall geometry before dragging corner handles.
While dragging corner handles in Mapping mode, the overlay grid can flicker slightly between frames — the warp math settles after a brief pause. Cosmetic only, no effect on the rendered output; queued for a post-demo pass to debounce the per-frame overlay recomputation.
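The queued fix is a standard debounce of the overlay recompute. A generic sketch of the pattern (names are ours; this isn't the planned implementation):

```ts
// Generic debounce: fold per-frame overlay recomputation into one call once
// dragging pauses. Illustrative only — function names are not the app's.
function debounce<T extends (...args: any[]) => void>(fn: T, waitMs: number) {
  let timer: ReturnType<typeof setTimeout> | undefined;
  return (...args: Parameters<T>) => {
    clearTimeout(timer);
    timer = setTimeout(() => fn(...args), waitMs);
  };
}

// Usage sketch: const recomputeOverlay = debounce(drawMappingGrid, 50);
```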
Source files: OutputsTab.tsx, ProjectorEditPanel.tsx,
ExternalOutputCard.tsx.
The 3D tab gives you a spatial view of your walls, CRTs, and projectors as real geometry. It’s how you place walls in a virtual room and set the camera path for the 3D web viewer (so remote audiences see the scene from meaningful angles). You can also edit wall transforms directly with a move/rotate/scale gizmo.
Select a wall from the hierarchy panel, then switch transform mode in the toolbar:
The old “blank 3D on first click” glitch was a lazy-load race between the Three.js container mount and the scene initialisation. Patched in v2.6.3 with a container-migrate poll (up to 3 s) plus a texture-update dependency on init completion, so the scene paints AND keeps live frames flowing the first time you open the tab. If you hit it on an older build, bounce back to Canvas and return.
Source files: ThreeDTab.tsx,
transform/TransformToolbar.tsx.
Everything below lives in Zone 2 — the bottom panel. Tabs along its top switch between Layers & Scenes, Timelines, MIDI, Protocols, Cloud, Captions, Signals, Logs, and property tabs (Base content / Transform / Color / Effects / Source types — only visible when you have a layer selected).
Default tab on load. This is where you manage scenes, layers, and the selected layer’s details all in one place. The layout is: Scenes panel on the left (scene tabs + output shortcuts + scene colour), Layers panel below (per-scene layer list with reorder, duplicate, delete), Layer Details on the right.
Per-layer rows show an ID code (LWS2L1 = Left Wall / Scene 2 / Layer 1), up/down reorder, delete.
The hierarchy panel that’s always visible on the left of the canvas
(TimelineHierarchyPanel) is a separate surface — it shows every wall / scene /
CRT / layer in one tree, lets you select anything for editing, and is where you drag media in
from the Sources tab.
The Zone 2 tab labelled Sources is a dedicated media-ingest surface. Three columns, each one source type you can drag onto layers in the hierarchy.
Every connected camera appears here automatically — see the device-watch sketch after this list.
Loaded media files (images + videos).
A pointer to a folder — fastest way to bring in a whole bulk library.
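The camera column rests on standard browser media APIs. The sketch below shows the generic pattern — enumerate video inputs, watch for device changes, and attach per-track 'ended' listeners so one yanked cam fails alone (the behaviour the v2.6.3 hot-unplug fix describes). Illustrative, not the app's code:

```ts
// Generic sketch with standard browser APIs: list cameras, and watch each
// track so one yanked cam fails alone (mirrors the v2.6.3 hot-unplug fix).
async function listCameras(): Promise<MediaDeviceInfo[]> {
  const devices = await navigator.mediaDevices.enumerateDevices();
  return devices.filter((d) => d.kind === "videoinput");
}

navigator.mediaDevices.addEventListener("devicechange", async () => {
  console.log("camera list changed:", await listCameras());
});

async function openCamera(deviceId: string): Promise<MediaStream> {
  const stream = await navigator.mediaDevices.getUserMedia({
    video: { deviceId: { exact: deviceId } },
  });
  for (const track of stream.getVideoTracks()) {
    track.addEventListener("ended", () =>
      console.warn(`camera ${deviceId} ended; sibling cams keep streaming`)
    );
  }
  return stream;
}
```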
Timelines give you a transport bar (play / pause / stop / record / BPM), a Resolume-style scene × layer trigger matrix, and keyframe editing for the selected layer.
The pieces exist — BPM CC input via MIDI, tap-tempo trigger, beat phase output through the MIDI parameter resolver. But we haven’t validated sync quality under load with real controllers. If you try it, please tell us how it behaves — phase drift, jitter, hang-ups. This is on the list to formalise in the next iteration.
MIDI panel handles device connection, parameter mapping, and includes a virtual on-screen Launch Control XL for testing without hardware.
CC updates are rAF-batched (120Hz knob streams folded to 60Hz render cadence) so smooth turns don’t stutter the UI. Note-on triggers fire immediately.
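The rAF-batching described above is a common pattern: high-rate CC values overwrite a per-CC latest-value map, and a single requestAnimationFrame callback flushes the map each frame. A sketch under that assumption (names are ours, not the app's):

```ts
// Sketch of the rAF-batching pattern: high-rate CC updates overwrite a
// per-CC "latest value" map, and one rAF callback per frame flushes them,
// so a 120 Hz knob stream costs one UI update per rendered frame.
const pendingCC = new Map<number, number>();
let flushScheduled = false;

function onMidiCC(cc: number, value: number) {
  pendingCC.set(cc, value); // later values overwrite earlier ones
  if (!flushScheduled) {
    flushScheduled = true;
    requestAnimationFrame(() => {
      flushScheduled = false;
      for (const [c, v] of pendingCC) applyCC(c, v);
      pendingCC.clear();
    });
  }
}

function applyCC(cc: number, value: number) {
  // map CC -> parameter and update render state here (note-ons would
  // bypass this queue and fire immediately, as described above)
}
```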
Triggering scenes via the MIDI panel’s on-screen scene-trigger buttons was crashing the
renderer (SIGSEGV). A try/catch guard landed in v2.6.2 (commit
5aedc90) so the hard crash surfaces as a console warning instead — but the
underlying state-cascade issue isn’t fully fixed.
Workaround: switch scenes via the scene tabs at the bottom of the app, not via the MIDI panel’s trigger buttons. Learn mode and knob/CC mappings are fine; it’s only the scene-trigger action that’s flaky. If you hit it and the app goes weird, see the localStorage-nuke recovery in §9.
Live traffic monitor for OSC / MIDI / Art-Net / sACN / DMX. Use it when you’re debugging why a signal isn’t doing what you expect.
Art-Net and sACN messages show universe + channel; OSC shows the address pattern + value; MIDI shows CC number + value or note + velocity.
The Zone 2 Cloud tab is the cloud media browser — shared library hosted on R2 / CDN, filterable by source (WallSpace, A.EYE.STUDIOS) and type. Drag entries straight onto a layer to use them. On a fresh install without sign-in the tab shows a neutral notice — “Sign in to browse the cloud media library.” — instead of the old red “Failed to fetch” banner (softened in v2.6.3). Sign in to populate the grid; empty state otherwise means no assets match the active filter.
GPU-pod management (RunPod API key, pod picker, cost/hr, HF token, Cloudflare TURN) is not in the Cloud tab — it lives in the GPU Console (the bar just above the Zone 2 tabs). Expand the console → right-hand GPU Resources column. See §8.
The GPU picker shows cost/hr per GPU type. A GraphQL call fetches live pricing from RunPod,
but on failure it falls back to a hardcoded table that’s weeks stale.
Prices in the picker may be lower than reality — cross-check on runpod.io before booting
a pod if cost matters. Fix is scheduled; see project_runpod_pricing_fix.md.
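The failure mode is a classic fetch-with-static-fallback. A sketch of the shape (the GraphQL query and field names are illustrative, auth is omitted, and the fallback table here is a placeholder — which is exactly the staleness problem described):

```ts
// Shape of the described behaviour, not the app's code: try live pricing,
// fall back to a static table on any failure. Query/fields illustrative.
const FALLBACK_PRICES: Record<string, number> = { "RTX 4090": 0.69 }; // stale!

async function gpuPricePerHour(gpuType: string): Promise<number> {
  try {
    const res = await fetch("https://api.runpod.io/graphql", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ query: "{ gpuTypes { id securePrice } }" }),
    });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    const data = await res.json();
    const hit = data?.data?.gpuTypes?.find((g: any) => g.id === gpuType);
    if (hit?.securePrice == null) throw new Error("gpu type missing");
    return hit.securePrice;
  } catch {
    return FALLBACK_PRICES[gpuType] ?? NaN; // weeks-stale fallback
  }
}
```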
Daydream and other fal-backed sources hit a hard 60-minute session limit.
The server sends MAX_DURATION_EXCEEDED with no advance warning, kills the
connection, and deletes assets uploaded directly in the Scope UI (VACE reference
images uploaded via the layer config can be re-uploaded). A 55-min client countdown exists;
auto-handoff to a fresh session is planned but not yet built. Long set? Plan a 55-min cycle
and keep source assets on disk so they can be re-uploaded.
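The 55-minute countdown is a simple client-side timer running ahead of the server's hard kill. A minimal sketch (names are ours, not the app's):

```ts
// Minimal sketch (our names): warn at 55 minutes so there's time to wind
// down before the server's hard 60-minute MAX_DURATION_EXCEEDED kill.
const WARN_AFTER_MS = 55 * 60 * 1000;

function startSessionCountdown(onWarn: (remainingMs: number) => void) {
  const startedAt = Date.now();
  const timer = setInterval(() => {
    const elapsed = Date.now() - startedAt;
    if (elapsed >= WARN_AFTER_MS) {
      clearInterval(timer);
      onWarn(60 * 60 * 1000 - elapsed); // ~5 minutes left
    }
  }, 1000);
  return () => clearInterval(timer); // cancel handle
}

startSessionCountdown((left) =>
  console.warn(`fal session ends in ~${Math.round(left / 60000)} min`)
);
```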
The Cloud tab can list assets already published to R2, but the upload flow from the web portal and A.EYE.STUDIOS into this browser is still under active development and not fully tested. You may see missing thumbnails, stale listings after publishing, or auth prompts that don’t quite land. Report anything weird, but don’t rely on round-tripping media through the Cloud tab for production work yet — upload directly via the web portal and pull files into layers with Load File… from the Source picker for now.
Real-time speech-to-text + optional translation + emotion scoring + keyword triggers. The captioning system is deliberately general-purpose — it works for live events, conferences, video calls, conversations, and offline audio files.
Two surfaces: (1) the Zone 2 Captions tab is the transcript history browser for the current scene — search, clear, export to SRT/VTT/TXT/JSON/MD; (2) the per-layer Captions source (select a layer, set its source type to Captions / Text) is where the live engine config lives.
The Signals tab is the live readout of everything the caption intelligence engine is deriving from the current input — which emotion is detected, what the voice features read, what the currently-active text is, and which downstream styling / effects are being applied to the caption layer as a result. Two views, one per layer (pick the layer from the dropdown at the top):
OSC signal routing (trigger rules, external addresses) is configured in the MIDI and Protocols panels + per-layer Scope OSC config — the Signals tab is the observer, not the router. The dedicated Signal Debug / Visualisation panel + preset import/export tools are scheduled but not yet in this build.
The central log view for everything the app is doing. Useful when something isn’t behaving and you want to know why without cracking open developer tools.
When you report an issue, include a log snippet — filter to Errors, copy, paste. It saves a round trip.
The tab label (e.g. Logs (45)) shows how many entries the aggregator currently holds. The aggregator captures main-process + renderer + any console.* calls, tagged with source/service. When something breaks: click Errors to filter, then Copy, and paste that block with your bug report. Timestamps are local time; the service column tells us whether it fired in the compositor, transcription, Scope, etc.
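Capturing console.* with a source tag is usually done by wrapping the console methods. A generic sketch of that pattern (not the app's implementation):

```ts
// One generic way to capture console.* calls with a source tag, as the
// aggregator described above does. Wrapper names are ours, not the app's.
type LogEntry = { ts: number; level: string; source: string; args: unknown[] };
const entries: LogEntry[] = [];

function tapConsole(source: string) {
  for (const level of ["log", "warn", "error"] as const) {
    const original = console[level].bind(console);
    console[level] = (...args: unknown[]) => {
      entries.push({ ts: Date.now(), level, source, args });
      original(...args); // still print normally
    };
  }
}

tapConsole("renderer");
```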
When you select a layer, the “Source” tab in Zone 2 shows a dropdown of source types. This is where the layer’s content comes from. Each source type has its own config block. The table below is the full reference; the walkthrough that follows is the visual tour of every type with a screenshot ready.
| Type | What it is | Key required? |
|---|---|---|
| 📁 Media | Image or video file from your media library | No |
| 📷 Webcam / Capture Card | Live camera or capture-card feed | No (OS permission) |
| 🖥 Syphon | Accept a Syphon video stream from another macOS app | No |
| 📡 NDI | Accept an NDI stream over the network | No |
| 🔊 Audio File | Audio-only source for music / ambient tracks | No |
| ♫ Spotify | Album art + track metadata (display-only; audio stays in Spotify) | OAuth (Spotify account required) |
| 🌀 MilkDrop | Music visualisation presets, beat-sync-friendly | No |
| 🎵 Audio Visualizer | Audio-reactive band-bar / waveform modes rendered as layer visuals | No |
| 🌊 Audio-Reactive Noise | Generative noise patterns (plasma, flow, particles, bounce, grid, voronoi, rings, fractal, lissajous, stars) | No |
| 🧠 Scope | Scope plugin for real-time AI re-styling. Plugin picker + prompt + model config. Runs on local Scope server or remote RunPod pod | Scope endpoint URL; RunPod key only if using a remote pod |
| ☁ Daydream Live SD Turbo | ControlNet + guidance weight + prompt input. Powered by fal.ai | Daydream / fal.ai API key (yes) |
| 🌐 WebRTC / Browser | Embed a web page, a live WebRTC feed, or a remote source room | Depends on target |
| 📹 Remote Camera | Accept a camera feed from a remote viewer via the wallspace.studio portal | No (but see TURN note in §9) |
| 🎨 AI Media | Generative AI source via RunPod-hosted models (Flux, Seedance, SDXL, Qwen, Wan, Sora, Weavy, VFX Effects, etc.) — each model has its own input mode / prompts / params. | RunPod required; model + plugin specific |
| 🎲 3D Scene | Render a mini 3D scene into the layer. WIP: the feedback loop between the 3D scene source and the compositor is still being worked out — for a clean 3D preview use the web portal room viewer, which renders 3D scenes without the feedback issues. | No |
| 💬 Captions / Text | Rendered live captions or static text as a video source | Depends on engine — Whisper local = no |
Every screenshot below is a real layer running on the four-wall main room. Use the preset picker / model dropdown in each panel — nothing shown here is bespoke. Source types marked Coming soon have screenshots pending.
The default source. Drop a file from your media library onto a layer; video auto-loops, still images hold. Base Content controls (fit, pan/zoom, opacity) apply on top.
Music-reactive preset engine — the MILKDROP VISUALIZER panel has a preset picker (filter + preview + Next/Random buttons). Pick a preset and the layer reacts to whatever audio is feeding the app. No key required.
Any audio layer can render itself as a visual using the built-in Audio Visualizer. The Base Content tab has a mode row (bars / spectrum / waveform / radial…) and an Audio Analyzer strip showing the live FFT. Good fallback when you don’t want to pull in a MilkDrop preset.
Generative noise source with 10+ sub-patterns. Pick the pattern in the Source tab; each one reacts to audio amplitude & frequency differently. All local, no key, zero network dependency — great offline fallback.
Embed a web page as a live source. Paste a URL, pick the video element on the page with the element picker, and the layer captures that element as its feed. Works for YouTube, live streams, webapps with canvas, etc. — anything Chromium can render. Capture is local (no external key needed for the source itself; the page’s own auth still applies).
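Chromium's captureStream() on canvas and media elements is the standard primitive for this kind of element capture — whether the app uses exactly this mechanism is an assumption. A sketch:

```ts
// Sketch using Chromium's standard element-capture primitive — not
// necessarily the app's exact mechanism. captureStream() turns a <canvas>
// or <video> element into a MediaStream you can composite like a camera.
function captureElement(
  el: HTMLVideoElement | HTMLCanvasElement
): MediaStream {
  if (el instanceof HTMLCanvasElement) return el.captureStream(30); // fps hint
  // HTMLMediaElement.captureStream() — Chromium-supported, loosely typed here.
  return (el as any).captureStream();
}

const video = document.querySelector("video");
if (video) {
  console.log("tracks:", captureElement(video).getVideoTracks().length);
}
```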
Heads up: some AI Media parameters, model names, and UI details have changed since these screenshots were taken. Treat the panel layout below as representative — the exact model list + required params in the current build may differ.
Generative model picker with a big catalogue (Flux Dev, Qwen Image, Seedance, SDXL, Wan, Sora, Weavy, VFX Effects Workers, Chatterbox TTS, and more). Pick a model, fill in the required input mode (text → image, image → video, text → audio, etc.), provide prompts / params, and the layer renders the generated output. RunPod key required (enter it in GPU Console → GPU Resources).
Live captions from a configurable engine (Whisper local by default — no key), or static text you type in. Captions have their own panel with transcription settings, language, display toggles, and extensive styling controls (font, size, colour, outline, drop shadow, animation, translation). Becomes a video source like any other, which means you can blend it, transform it, or drop it on specific walls.
Routes the layer through the Scope node editor — a visual graph where you wire source nodes through models (English/language node, prompt compositor, control nodes) into a result. Runs on a local Scope server or a remote RunPod pod. Requires a Scope endpoint URL; RunPod key only if you’re using a remote pod.
ControlNet-driven live image restyling through fal.ai’s Daydream Live SD Turbo model. Pairs with a ControlNet preprocessor on another layer (see Effects) to condition the output. Requires a Daydream / fal.ai API key. 60-minute session cap — see Known issues.
Renders a mini 3D scene into the layer. Feedback loop between the 3D scene source and the compositor is still being worked out — for a clean 3D preview use the web portal room viewer, which renders 3D scenes without the feedback issues.
Source files: LayerEditor.tsx, MilkDropLayerConfig.tsx,
ScopeLayerConfig.tsx, DaydreamLayerConfig.tsx,
CaptionSignalPanel.tsx.
The “Base content” property tab covers how the source maps onto the layer’s slot.
Layer position, scale, rotation — with both slider controls and an on-canvas interactive gizmo.
Per-layer colour adjustment. Works on any source including live webcam and AI layers.
The Effects sub-tab has four columns — a per-layer Canvas effect chain, a ControlNet preprocess picker, a Daydream-CN summary, and an opt-in serverless GPU plugin pipe.
Per-pixel Canvas effects that run in the desktop compositor: Blur, Glow, Edge Detect, Threshold, Scanlines, Chroma Key. Each one has its own slider(s) when enabled. Stack what you like — cheap on CPU/GPU and fully local.
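For a feel of how cheap these are, Canvas2D exposes some of them directly via ctx.filter. A blur-pass sketch, assuming a Canvas2D compositor (illustrative, not the app's effect chain):

```ts
// Minimal per-layer effect pass under a Canvas2D-compositor assumption.
// ctx.filter covers cheap effects like blur; scanlines / chroma key would
// be custom pixel passes. Illustrative only.
function drawLayerWithBlur(
  ctx: CanvasRenderingContext2D,
  frame: HTMLVideoElement,
  blurPx: number
) {
  ctx.save();
  ctx.filter = blurPx > 0 ? `blur(${blurPx}px)` : "none";
  ctx.drawImage(frame, 0, 0, ctx.canvas.width, ctx.canvas.height);
  ctx.restore();
}
```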
Preprocessors that turn this layer’s source into a condition map (edges, depth, scribble, etc.) for a Daydream layer that references this layer as its ControlNet input. Options: Canny Edge, Lineart, Anime Lineart, Soft Edge (with strength slider + Invert toggle — good starting point for webcam input), Scribble, Depth Map, Color Quantize. Only one can be active at a time.
Reverse view: “No ControlNet targeting this layer” when nothing uses the layer as a condition; otherwise lists each Daydream layer that’s pulling from this source with live preview + opacity. Lets you audit what’s cross-wired before a live set.
Tick Enable Serverless GPU Plugin to route this layer’s source through a standalone GPU plugin (Real-ESRGAN upscale, Depth Anything, VFX Pack, etc.) running on RunPod serverless workers. Requires RunPod credentials (GPU Console → GPU Resources). Adds latency — use for recorded experiences, not live shows.
You don’t need a Daydream layer wired up to see what a ControlNet preprocessor is doing. Pick Soft Edge / Depth Map / Canny on any webcam or video layer with the layer visible on a wall and the output becomes the wall content — good for dialling in edge strength or depth range before you commit to a Daydream run.
Source files: EffectsPanel.tsx, ControlNetPreview.tsx,
DaydreamLayerConfig.tsx.
The GPU Console is the collapsible bar sitting between the tab bar and content. It’s where you tune how hard the app pushes the GPU, and where your cloud-inference credentials (RunPod, Hugging Face, Cloudflare TURN) actually live. Click the ▸ chevron to expand.
| Control | What it does | Default |
|---|---|---|
| GPU status dot | Green = ready, yellow = connecting, gray = disabled, red = error. Click for pod detail. | Green if local GPU detected |
| Per-section quality | Drop quality for Layers / 2D Preview / 3D Preview / Outputs independently. | High / Standard across the board |
| Global quality preset | One switch: Ultra / High / Medium / Low / Standard. Changes all per-section values at once. | High |
| GPU effect pipeline | Off / CRT Phosphor / Chromatic / Color Grade / Full CRT. The visual flavour of the CRT emulation. | Off |
| AI Upscale | Off / Low / Medium / High. Per-output AI upsampling. | Off |
| Perf mode | Skip certain sections entirely for max throughput. Gating flag. | On (with high quality + CPU 4x) as of v2.6.2 |
| WebGPU toggle | Experimental: hardware-accelerated compositor via WebGPU instead of Canvas2D. | Off |
| FPS / latency readout | Live stats from the GPU frame client. | — |
| Pod status | Inline RunPod badge with cost/hr when connected. | Hidden if no pod |
The Perf mode toggle (top of the expanded GPU Console) is the master switch for “throughput over fidelity.” When on, the render loop can skip expensive per-pixel work to keep frame rates high on heavy scenes or under live-streaming load. Use it when you see GPU fps dropping in the header readout, when captions or transforms stutter, or when a show has a lot of simultaneously active outputs + Scope/Daydream pipelines.
| Skip option | What it drops | Visual cost |
|---|---|---|
| Skip Effects | Per-pixel effect chains on layers (blur, glow, edge, scanlines, chroma, canny, depth, lineart). Colour adjustments + warps still run. | Medium — stylized looks flatten; base content still composites correctly |
| Skip Blend / Transform | Layer blend-mode blending + transform compositing on the preview canvas. Outputs still receive full transform math. | Low — Canvas preview loses blend-mode mixes; outputs unaffected |
| Skip Projectors | The projector render pass entirely — wall content still goes to CRT outputs, but projector slots stay black. | High if you’re actually projecting. Use only for testing with CRT-only setups. |
| Skip 3D labels | 3D-scene label sprites (wall IDs, zone badges). Scene geometry + textures still render. | Low — you lose the ID overlays |
| Lower JPEG quality | Output transport encodes at a lower JPEG quality (0.3 instead of 0.5) — smaller IPC payload, faster encode. | Low-medium — visible banding on gradients if you’re pixel-peeping |
Rule of thumb: if the GPU fps readout in the console header drops below your output’s target framerate (60 / 30 / 24 depending on quality preset) for more than a few seconds, flip Perf mode on and enable the heaviest skip option first (Skip Effects). Re-measure. If still struggling, add Skip Blend/Transform. Leave Skip Projectors off unless you’re certain no projector is active in the show. Half-resolution encode (1920×1080 inside a 3840×2160 output window) is always on and is the single biggest throughput gain — no toggle needed.
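Structurally, the skips are just flags gating render stages, and the JPEG knob maps to the standard canvas encode parameter. A sketch under those assumptions (our structure, not the app's):

```ts
// How the listed skips could gate a render pass — our structure, not the
// app's. JPEG quality maps to the standard canvas encode parameter
// (0.3 vs 0.5, as described above).
interface PerfFlags {
  skipEffects: boolean;
  skipBlendTransform: boolean;
  skipProjectors: boolean;
  lowerJpegQuality: boolean;
}

function renderFrame(canvas: HTMLCanvasElement, flags: PerfFlags): string {
  if (!flags.skipEffects) {
    /* run per-pixel effect chains */
  }
  if (!flags.skipBlendTransform) {
    /* blend-mode + transform compositing on the preview */
  }
  if (!flags.skipProjectors) {
    /* projector render pass */
  }
  const quality = flags.lowerJpegQuality ? 0.3 : 0.5;
  return canvas.toDataURL("image/jpeg", quality); // smaller IPC payload
}
```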
Source files: GpuConsolePanel.tsx, QualitySelector.tsx,
compositor.ts.
The desktop app is half of WallSpace; the other half is wallspace.studio — the public web portal where audiences watch live shows, browse recorded experiences, send their phone camera into a performance, and view in VR. The Go Live button in the app top bar is the bridge: it opens a WebSocket + WebRTC stream, registers your room with the portal, and generates a share URL audiences can open in any modern browser.
The Go Live button (purple, top right of the app) drives the full publish flow.
Anyone you share that URL with can watch in their browser with no install. Audiences on unrestricted networks connect via WebRTC direct; others fall back to Cloudflare TURN relay (if you’ve configured TURN credentials — see §2).
Before you click Go Live: make sure Start (the green button next to Setup) is on so outputs are actively rendering; otherwise there’s no stream to publish. If you hit Go Live and nothing happens, check Zone 2 → Logs (Errors filter) — the app logs the reason rather than surfacing it as a toast.
wallspace.studio
Primary CTA. Opens room.html where a visitor can connect to a live broadcast.
If you share a direct URL (?wss=…), the room connects automatically.
Otherwise they enter a WallSpace host + port manually.
Opens experience.html — a catalog of recorded VJ sets
and sessions (SAMCAR Real Estate session is the current flagship). On-demand playback with
timeline + captions + chat + fullscreen.
Direct entry field — paste a code or IP that a host gave you, hit Join. Same endpoint as the URL-based entry but useful for paper shares or QR codes.
Auto-populated from the rooms API: shows every room that pinged the heartbeat within the last N seconds. Empty state reads “No rooms are live right now.” Click a card to enter.
Static catalog hosted on R2; click a card to watch. Each card shows duration, wall count, and a thumbnail. Full details in memory: SAMCAR + future VJ recordings use MediaRecorder for capture.
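MediaRecorder (named above as the capture mechanism) is a standard browser API. A generic sketch of recording a canvas composite to WebM — illustrative, not the app's recording path:

```ts
// Generic MediaRecorder sketch (standard API, per the capture note above):
// record a canvas composite to a WebM blob for a fixed duration.
function recordCanvas(canvas: HTMLCanvasElement, ms: number): Promise<Blob> {
  return new Promise((resolve) => {
    const recorder = new MediaRecorder(canvas.captureStream(30), {
      mimeType: "video/webm",
    });
    const chunks: BlobPart[] = [];
    recorder.ondataavailable = (e) => chunks.push(e.data);
    recorder.onstop = () => resolve(new Blob(chunks, { type: "video/webm" }));
    recorder.start();
    setTimeout(() => recorder.stop(), ms);
  });
}
```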
Marketing-style cards describing Watch Live Shows / Send Your Camera / VR + XR Viewing / Live Captions / Music + Lyrics / Remote Collaboration. Exists to explain the value to audiences on the landing, not as navigation.
The experience player renders a 3D room with the recorded session’s walls in place, plays back the media that was streamed during recording, and supports captions as a toggle. Tap to enable audio appears on first load because browsers block autoplay audio until a user gesture. Transport shows Play / pause, scrub, speed (0.25x–4x), loop, mute, CC captions toggle, chat (💬), and fullscreen.
Shown when no ?wss= parameter is present in the URL — this is the manual fallback entry. Almost always, audiences arrive with a share URL that pre-fills the WebSocket target and skips this view entirely. Status badge top-right shows Disconnected / Connecting / Connected. Once connected, the room transforms into the 3D viewer with live walls + audio.
| Feature | Where | Status |
|---|---|---|
| Watch live shows | Enter a Room with a share URL | Stable |
| Recorded experiences (VOD) | Browse Experiences → Experience player | Stable |
| 3D immersive viewing | Room + Experience renders 3D walls | Perf jitter on scene rebuild (partially fixed) |
| VR / XR viewing | WebXR headset in a supported browser | Built, lightly tested |
| Live captions in the room | CC toggle in the transport bar | Working with desktop-app caption source |
| Music + Lyrics | Auto-detected when a music layer is active on the host | Shazam + LRCLIB pipeline |
| Send Your Camera (visitor → host) | Room → “Share camera”; feeds visitor cam into host’s Remote Camera layer | TURN missing in portal — see below |
| Chat | Chat bubble in the room + experience players | Built, moderated by host |
| Remote Collaboration (co-VJ) | Multiple publishers onto one room (VJ battles, 24/7 channels) | Early — feedback welcome |
Once a room is live, the portal renders the host’s wall composite in-browser over WebRTC. Below: a live session published from WallSpace to the portal (“I am the VJ / Don’t ask me about the music” was the source content for the session) with the 3D room view showing two mirror walls + the CRT grid cluster in between.
wallspace.studio/room — render 60fps / atlas 415fps, two walls + CRT cluster rendering in real time.
The WebRTC publish pipeline works end-to-end but has rough edges we’re still sanding. If you hit one of these during beta testing, it’s expected — please report it (Zone 2 → Logs, filter by Errors) so we can narrow the repro:
VPN interference: some VPNs block the *.trycloudflare.com tunnel we use for dev-mode publish. Disable the VPN or whitelist that hostname for testing.
The web portal’s room.html uses Google STUN only — there’s no
TURN fallback. Visitors on corporate / hotel / cellular networks that block direct UDP
can’t reach the host’s camera endpoint. Desktop-app-to-desktop-app has Cloudflare
TURN wired up; portal-to-app doesn’t yet. If your audience reports “camera
won’t connect,” this is almost always why. Watch-only works regardless of NAT
because outbound WebRTC to the host is initiated by the host side.
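In WebRTC config terms, the gap is one entry in iceServers. A sketch of STUN-only versus STUN+TURN (the TURN URL and credentials below are placeholders):

```ts
// The gap described above, in config form. STUN-only fails behind strict
// NATs; adding a TURN relay fixes it. Credentials are placeholders.
const stunOnly: RTCConfiguration = {
  iceServers: [{ urls: "stun:stun.l.google.com:19302" }],
};

const withTurn: RTCConfiguration = {
  iceServers: [
    { urls: "stun:stun.l.google.com:19302" },
    {
      urls: "turn:turn.example.com:3478", // placeholder relay
      username: "user",
      credential: "secret",
    },
  ],
};

const pc = new RTCPeerConnection(withTurn);
```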
When you click Go Live, the app shows the share URL (plain text + a Copy button). That URL
has the WebSocket target baked in as a query param, so audiences clicking it go straight
into your room — no Connect screen. Paste it into chat / QR code / social share.
Anyone who visits wallspace.studio directly without that URL lands on the
generic landing page instead.
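On the portal side, reading the baked-in target is a one-liner. A sketch of how a room page could consume it (the ?wss= param name is from this doc; the rest is illustrative):

```ts
// How a room page can read the baked-in target. The ?wss= param name is
// from the doc; everything else here is illustrative.
const params = new URLSearchParams(window.location.search);
const wssTarget = params.get("wss");

if (wssTarget) {
  // Share-URL path: connect straight to the host, skip the Connect screen.
  const socket = new WebSocket(wssTarget);
  socket.onopen = () => console.log("joined room via share URL");
} else {
  // No param: show the manual host + port entry instead.
}
```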
One place to scan every landmine currently in the app. The goal isn’t to scare you off — it’s to save you the cycle of debugging a known issue from scratch. Ordered by blast radius.
| Area | Issue | State | Workaround / Notes |
|---|---|---|---|
| Boot | Renderer blue-screens on boot (exit code 11 SIGSEGV) | Recovery exists | Nuke Local Storage/leveldb + Session Storage + Preferences in the userData dir. Presets are preserved. Let Jack know if you hit it so we can narrow down the root cause. |
| MIDI | Scene-trigger buttons in the MIDI panel can crash or corrupt state | Guarded, not fully fixed | try/catch guard landed in v2.6.2 (commit 5aedc90). Underlying setCRTs + setWalls + setScenes cascade still suspect. Use scene tabs, not trigger buttons. |
| Timelines | Beat matching end-to-end untested | Unverified | BPM CC + tap tempo + phase output exist. Sync quality under load is unknown. Feedback on real-world behaviour very welcome. |
| Sources | 3D Scene source type has feedback loop issues on desktop compositor | WIP | Feedback / render-loop coupling between the 3D scene source and the main compositor is still being debugged — can surface as flicker, stale frames, or runaway re-renders. Workaround: use the web portal room viewer to show 3D scenes cleanly until the desktop-side fix lands. |
| Cloud | Live RunPod GPU pricing falls back to stale static table | Stale data | Cross-check prices on runpod.io before booting a large pod. Fix scheduled. |
| Cloud | fal.ai hard 60-minute session cap, no advance warning | Known | 55-min client countdown exists. Plan sessions in under-55-min cycles. Assets uploaded directly in the Scope UI are lost on session kill; VACE ref images via layer config can be re-uploaded. |
| Outputs | MP4 loop regression after media-element-source change | Fixed, needs field testing | Commit 8fe5e77. If you notice a video not looping, let us know. |
| 3D web viewer | Perf jitter on scene rebuild | Partially fixed | Scene-hash dedup + disposal fixes landed. If it’s still jittery post-deploy, report — incremental scene updates still on the list. |
| 3D tab | First-visit blank / static frame in Start mode | Fixed in v2.6.3 | Container-migrate poll + texture-update dep on init-completion signal. Scene paints on first click and live frames keep flowing in Start mode. |
| Outputs | Projector with CRTs in same slot rendered mostly black | Fixed in v2.6.3 | v2.4.5 projector mapping logic (L1/L2 sub-cell math, 4:3 mask correction, cyan CRT outlines during mapping, CRT-cell blackout pass) was accidentally reverted in v2.4.6; re-transplanted onto v2.6.3. See §5 for walkthrough. |
| Outputs | Map-overlay grid flickers while dragging corner handles | Cosmetic | Warp math settles after a brief pause. No effect on rendered output. Queued for post-demo debounce pass. |
| Sources | Webcam hot-unplug took down all cams simultaneously | Fixed in v2.6.3 | Enumerate-merge + per-track 'ended'/'mute' listeners. A single yanked cam now fails alone; siblings keep streaming. Re-plug to recover. |
| Captions | Layout / safe-zone / sizing reset to default on each new segment | Fixed in v2.6.3 | Token-reveal render path was missing the layout props (safeZoneEnabled, maxWidthPct, etc.) and overwrote the correct canvas with a default-layout one. Both source + translation blocks now carry layout config identically. |
| Captions | Voice-tone emotion score stuck at neutral (0.00) despite voice analysis on | Fixed in v2.6.3 | Voice extractor never started because a one-shot 2s timer raced with Whisper’s up-to-10s stream acquire. Now polls every 500ms for up to 15s. Separately, voice emotion is scored from a 2.5s peak-in-window (same pattern as the transient accumulator) so brief pauses between phrases don’t collapse the score to zero. |
| Web portal (browsers) | Remote viewers behind NAT can’t send their camera to the app | Unfixed | The web portal (room.html) uses Google STUN only — no TURN fallback. Desktop app has Cloudflare TURN. Remote-camera-in for the portal is planned; for now the portal is watch-only over restrictive NATs. |
| Captions | Offline translation TODO | Not yet built | Apple’s Translation framework is SwiftUI-only on macOS 15+; unreachable from our native bridge. Planned path: bundle Argos Translate or CTranslate2. For now, DeepL / LibreTranslate online only. |
| API keys | No single Settings → API Keys screen | UX debt | Keys entered per-panel (see §2 table). Unification is planned but not scheduled. |
| Notarization | macOS app notarization | Fixed | DMG builds signed with Developer ID + notarized via notarytool. No Gatekeeper prompt expected on first launch for latest releases. |
If you find something in this guide that’s wrong, out of date, or missing — or a new bug — drop a note directly with Jack (Discord / message) with: