WallSpace Desktop — Beta Guide

Everything the app can do, what’s stable, and what’s still soft
v2.6.3 · Updated April 21, 2026
Heads-up: this guide is honest about known issues and untested features — flagged inline with Beta, Known bug, Fixed in v2.6.3, or Experimental. A round of fixes landed in v2.6.3 (2026-04-21): 3D first-visit paint + live frames, projector+CRT combo, webcam hot-unplug isolation, caption layout persistence, voice-tone emotion scoring, Cloud tab soft empty state. Fast-feedback loop: message Jack directly.

Table of contents

  1. Before you start
  2. Setup & API keys
  3. Canvas
  4. 2D Preview tab
  5. Outputs tab
  6. 3D tab
  7. Toolbar functionality
    1. Scenes & Layers
    2. Sources — webcams, library, folder
    3. Timelines
    4. MIDI
    5. Protocols
    6. Cloud
    7. Captions
    8. Signals
    9. Logs
    10. Source types
    11. Base content
    12. Transform
    13. Color
    14. Effects
  8. GPU console
  9. Web Portal & Go Live
  10. Known issues & untested features

📋 1. Before you start

System requirements

| Platform | Status | Notes |
| --- | --- | --- |
| macOS arm64 (Apple Silicon) | Tested | Primary target. Signed + notarized DMG. |
| macOS Intel | Not current focus | Builds run under Rosetta; arm64 is the primary target. Report anything that breaks on Intel. |
| Windows 10/11 | Builds, lightly tested | WASAPI audio capture works; GPU path falls back to CPU if Vulkan unavailable. If you’re on Windows, expect to find rough edges — please report them. |
| Linux | Unsupported | Not built; no plans this cycle. |

Install & first launch

  1. Grab the signed DMG (macOS) or installer (Windows) from the private wallspace-releases repo — Jack will send you a collaborator invite. Double-click to install.
  2. On first launch, macOS will prompt for Microphone, Camera, Screen Recording, and Speech Recognition permissions. All four are needed for the captioning + webcam + audio-loopback features to work. Grant them (you can revoke later in System Settings → Privacy & Security).
  3. The app opens with an empty project. A welcome/onboarding modal lets you pick a feature preset (All Features, Live Performance, or Minimal) plus toggle six individual feature groups (Video Outputs, External I/O, Audio Reactivity, AI / Cloud GPU, MIDI Control, 3D Room Preview). Disabled features won’t load, which improves stability and performance. Pick All Features to see everything covered in this guide; you can revisit later from Setup.
  4. If the app blue-screens on boot with a SIGSEGV (exit code 11), see Known issues §10 for the localStorage nuke fix.

Welcome to WallSpace.Studio modal with feature preset buttons (All Features / Live Performance / Minimal) and six feature toggles below.
Fresh install, “All Features” preset selected — pick it to see everything covered in this guide.

Where your data lives

Everything is local-first: app state (Local Storage, Session Storage, Preferences) sits in Electron’s userData directory, the same folder the §10 recovery steps touch, and API keys live in Electron’s secure store, never written to presets (see §2).

🔑 2. Setup & API keys

WallSpace talks to a lot of external services but most of them are optional. The table below shows exactly what needs a key and what doesn’t. Heads-up: key entry is scattered across several panels (there’s no single “Settings → API Keys” screen yet — we know, it’s on the list). The “Where entered” column tells you which panel to open.

| Service | Key required? | Free tier / default | Where entered |
| --- | --- | --- | --- |
| Whisper (local ASR) | No | Fully offline, runs via whisper.cpp subprocess | N/A — works out of the box; it’s the default caption engine |
| LRCLIB (lyrics) | No | Public API, no limit you’ll hit casually | N/A |
| Spotify | OAuth, no raw key | Free Spotify account works for search & metadata; Premium needed for playback control | Zone 2 → Spotify panel → Connect |
| RunPod (cloud GPU) | Yes | Pay-as-you-go; sign up at runpod.io | Expand GPU Console (bar above the tabs) → GPU Resources column → API |
| Hugging Face token | Only for gated models | Free HF account fine for public models | GPU Console → GPU Resources → HF |
| Daydream / fal.ai | Yes | Pay-per-inference. See Daydream pricing convergence. | Per-layer Daydream config panel (select a Daydream layer) |
| Deepgram (cloud ASR) | Yes | $200 free credit on signup | Captions panel → Transcription service picker (if enabled) |
| DeepL (translation) | Yes | Free tier: 500k chars/month. Key ending in :fx auto-detected as free | Captions panel → Translation settings |
| LibreTranslate | No | Fallback to DeepL; bring-your-own endpoint optional | Same panel as DeepL |
| Shazam (song ID) | Yes | RapidAPI free tier available | Layer Editor → Music/Lyrics source → RapidAPI key field |
| Cloudflare TURN | Optional | Free tier: 1TB/month. Falls back to metered.ca free relay if not set | GPU Console → GPU Resources → TURN (configure) |
| AI plugins (ModelsLab, Weavy, Freepik, Leonardo, Gemini Veo, Sora) | Varies | Per-plugin; most have free tiers | AI Plugin Settings panel → pick plugin → enter credentials |
| Scope WebRTC | No | URL-based per layer; signalling handled by Electron | Scope Layer config → endpoint URL |

What to set up first

You don’t need any of these to start using the app. Canvas, 2D sources, Outputs, 3D, MIDI, Captions (via local Whisper), and most of the toolbar work without a single key. If you only have 10 minutes for setup, do these two:

1. Pick a media folder

Open the Sources tab at the bottom (Zone 2). In the Folder column, click the folder button (📂) and point it at a folder of videos and images. The app scans it; drag files from the folder list onto any layer in the hierarchy panel (the tree on the left of the canvas). Fastest way to get pixels on screen.

2. Open an output

Go to the Outputs tab. If you’ve got a second display connected, pick it from the dropdown on CRT Output 1 and hit Open. That’s enough to see your composition on another screen. Fullscreen it with the button next to Open.

Entering cloud keys

Keys are stored locally in Electron’s secure store; after you type them in once they’re masked as ****wxyz. They never leave your machine and never get written to presets. If you’re uncomfortable pasting a key, paste it in a fresh terminal first to check the prefix is what you expect — then paste into WallSpace.
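Electron’s secure store is presumably the safeStorage API (OS-keychain-backed encryption written into the userData dir). A minimal sketch of that pattern, with hypothetical function and file names rather than WallSpace’s actual module:

```ts
// Sketch of key storage via Electron's safeStorage (main process).
// Function and file names are hypothetical, not WallSpace's actual code.
import { app, safeStorage } from 'electron';
import { promises as fs } from 'fs';
import * as path from 'path';

const keyFile = (name: string) => path.join(app.getPath('userData'), `${name}.key`);

// Encrypt with the OS keychain-backed key, write ciphertext into userData.
export async function saveKey(name: string, plaintext: string): Promise<void> {
  if (!safeStorage.isEncryptionAvailable()) throw new Error('OS encryption unavailable');
  await fs.writeFile(keyFile(name), safeStorage.encryptString(plaintext));
}

// Read + decrypt; null if the key was never set.
export async function loadKey(name: string): Promise<string | null> {
  try {
    return safeStorage.decryptString(await fs.readFile(keyFile(name)));
  } catch {
    return null;
  }
}

// Display mask ("****wxyz" style): never echo the full key back to the UI.
export const maskKey = (k: string) => `****${k.slice(-4)}`;
```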

Expanded GPU Console showing the GPU Resources column with API / HF / TURN entry fields all 'Not set' and a Pod selector.
Expand the GPU Console (bar above the tabs) → right-hand GPU Resources column is where your RunPod API key, Hugging Face token, and Cloudflare TURN credentials go. Keys are masked after first entry.

🖼 3. Canvas

The canvas is the big area in the middle of the window — it shows your virtual wall layout with every CRT drawn in real time. You compose here; whatever ends up on the canvas is what gets sent to outputs.

Canvas tab showing three walls (Front / Left / Right) arranged side by side with CRTs in each, hierarchy panel on the left, Quick Add panel on the right.
Canvas tab, default “All Walls” view — hierarchy (left), walls + CRTs (center), Quick Add (right).

The controls you’ll use most

Zoom & pan

  • Zoom range: 30 – 500% (all walls), up to 500% on a single wall
  • Magnet mode (default) — pan locked to wall centre, zoom enabled
  • Overlay mode — free pan, drag the canvas around
  • Toggle between them in the canvas toolbar

Selection

  • Click a CRT to select
  • Cmd-click or Ctrl-click to multi-select
  • Corner handles appear on selected CRTs — drag for perspective
  • Click a wall name in the legend to zoom to that wall in single-wall edit mode

Theater mode

  • Toolbar button (or press T) expands the canvas to fill the app window
  • Preview-only — panels hidden
  • Same key toggles back

Layer visibility

  • CRT/projector visibility toggles live in the canvas toolbar
  • Hiding a layer on the canvas doesn’t affect outputs
  • Use this to declutter the edit view without breaking what’s live

Source files: src/renderer/components/CanvasView.tsx, AllWallsCrossLayout.tsx.

🖼 4. 2D Preview tab

The 2D Preview tab (labelled “2D Preview” in the top tab bar) is the flat counterpart to the 3D tab. Instead of a spatial scene, it shows your walls laid out in 2D with live CRT content rendered in place, plus the hierarchy and edit sidebar for whatever you’ve selected. This is where you do precise wall / CRT / projector editing without the 3D camera getting in the way. Media browsing and sources live in the toolbar — see §7a.

2D Preview tab showing the Front Wall with CRTs C1, D1, E1 rendered in place plus wall-switcher tabs and Magnet/Overlay display-mode toggle.
2D Preview tab with wall switcher (Front / Left / Right) and Magnet / Overlay display mode.

Display mode

Magnet locks pan to the active wall and lets you zoom in cleanly — best for per-wall precision work. Overlay unlocks pan so you can freely drag the preview around. Toggle in the canvas toolbar.

Viewport zoom

Independent zoom slider on top of the display-mode’s own zoom — useful for getting in on a tiny corner-pin adjustment.

Hierarchy + edit sidebar

The right-side panel flips between BatchCRTEditPanel (when CRTs are selected), ProjectorEditPanel (projector selected), and wall-edit controls. Selection drives it; click something in the canvas or the hierarchy to edit.

Daydream live config

If a Daydream layer is active on this tab, its live-parameter panel shows here — guidance weight, prompt tweaks, publish toggle — so you can tune without switching tabs.

Syphon out

One-click button to create a Syphon output (macOS) from the current 2D preview and open Scope against it. Handy shortcut for external VJ pipelines.

Launch Scope local

Starts a local Scope server and opens it in your browser — no cloud pod needed for quick tests.

Source file: src/renderer/components/PhysicalTab.tsx.

📺 5. Outputs tab

This is where the pixels actually leave the app. Each output is an OS-level window that you put on a display, a projector, or feed to an external tool like Syphon/Spout/NDI. Corner-pin, zoom, and quality all live here.

Outputs tab showing Output 1 with a 2×2 slot grid (A1 assigned, 3 empty) and Output 2 unassigned, each with Display / FS / Open / Map controls.
Outputs tab: Output 1 with a 2×2 slot grid + Output 2 empty. Each output has its own Display picker, FS, Open, and Map buttons.

Per-output controls

| Control | What it does |
| --- | --- |
| Display dropdown | Which physical screen this output lands on. Up to 4 outputs supported simultaneously. |
| Open / Close | Opens or closes the output window on the chosen display. |
| Fullscreen toggle | Per-output fullscreen. Helpful for projectors; for CRTs on HDMI, go fullscreen after opening. |
| Corner pin (numeric) | Edit projector warp by top-left / top-right / bottom-left / bottom-right x,y numeric values. Use for keystone + skewed projection surfaces. |
| Corner-pin presets | Reset-to-square plus common layouts. Start here if your projection is a normal rectangle. |
| Zoom slider | Pan/zoom inside the output frame without changing the source. |
| Quality selector | High (60fps), Medium (30fps), Low (24fps). Drop it if you’re hitting frame drops. |

Grid layout & auto-assign

Underneath the per-output cards, there’s a grid packing config — auto / custom cols / rows / cell aspect ratio. The Auto-assign button distributes your CRTs across the available output slots using the current grid. Useful when you’ve added new CRTs and don’t want to place them by hand.
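Auto-assign is conceptually a row-major fill of the available slots. A sketch of that idea in TypeScript (types and logic are illustrative, not OutputsTab.tsx’s actual code):

```ts
// Sketch of round-robin slot assignment. Illustrative only, not the app's
// actual logic: distributes CRT ids across output slot grids in order.
interface OutputGrid { outputId: number; cols: number; rows: number }
type Slot = { outputId: number; col: number; row: number };

export function autoAssign(crtIds: string[], grids: OutputGrid[]): Map<string, Slot> {
  // Flatten every grid into an ordered list of slots (row-major, output by output).
  const slots: Slot[] = grids.flatMap(g =>
    Array.from({ length: g.cols * g.rows }, (_, i) => ({
      outputId: g.outputId,
      col: i % g.cols,
      row: Math.floor(i / g.cols),
    })),
  );
  const assignment = new Map<string, Slot>();
  crtIds.forEach((id, i) => {
    if (i < slots.length) assignment.set(id, slots[i]); // leftover CRTs stay unplaced
  });
  return assignment;
}
```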

External outputs (Syphon / Spout / NDI / WebRTC / Daydream)

Cards in the lower section of the tab route your output to external tools: Syphon (macOS), Spout (Windows), NDI, WebRTC, and Daydream.

About the mapping window

The separate mapping window, zoom slider, and draggable corner-pin preview were reverted in v42 because IPC sync between the mapping window and the real projector output was unreliable. The numeric inputs + preset buttons in the Outputs tab are the primary way to adjust corner-pin right now. Pressing P on the output window lets you drag corners locally, but those changes don’t sync back to the main app — use the numeric inputs if you want persistent edits.

Mapping mode — seeing the corner-pin grid

When an output window is active, toggling Mapping mode overlays a labelled grid (A1–D4 for slots plus G1/G2 gutters and A1/A2 crown rows) so you can see exactly where each CRT or projector region lands. The four corner handles at TL/TR/BL/BR show the current warp; drag them on the output window (press P) for a quick local preview, or use the numeric inputs in the Outputs tab to persist.

CRT Output window in Mapping mode showing a rectangular 4x4 labelled grid (A1-D4) with G1/G2 gutter columns and A1/A2 crown rows, cyan borders, corner handles at TL/TR/BL/BR, a purple 'MAPPING MODE' pill at the top, and Mode/Opacity/Audio controls below.
Mapping mode in the output window: labelled grid + corner handles, untouched (square).
Same CRT Output mapping mode but with the grid warped: the top-left corner pulled down and the right edge rotated, showing a non-rectangular perspective warp with dashed guide lines indicating the distorted shape.
Mapping mode with a corner pulled — the full grid warps in real time so you can match an off-square projection surface.

Projector outputs with CRTs in the same slot (v2.6.3+)

When an output contains both a projector and CRTs, the projector fills its entire area with the scene content, and the CRT positions are represented on top (small outlined rectangles in live mode, labelled cyan outlines in Mapping mode):

3D preview showing two walls with webcam feeds — Left Wall empty and Right Wall with live webcam content — plus a small centered CRT screen showing the VJ's face. The projection surfaces fill the full wall area with faint cyan outlines indicating where physical CRTs sit in the scene.
Live mode — projector surfaces fill the full wall with scene content; CRTs render as small outlined rectangles on top.
CRT Output window showing a 4x4 grid of video tiles — top 2 rows and parts of row 3 filled with live webcam frames showing the VJ, the bottom-right quadrant black. Demonstrates projector+CRT composite with the v2.4.5 cherry-pick applied.
Projector output window with CRTs assigned — live scene content fills the projector area, CRT cells show their individual content in the composite.
CRT Output in Mapping mode with a 3-row grid: top row shows a single A1 cell, middle row shows B1/B2 side-by-side, bottom row shows C1/C2/C3. Each cell is outlined in cyan with its label centered. Corner handles TL/TR visible at the top.
Mapping mode on a projector output — each CRT cell rendered as a cyan outline with the CRT ID label (A1, B1, B2, C1…). Physical 4:3 aspect correction is applied so the outlines match the real CRT footprint.
Full-output Mapping mode view with cell labels FWS1A1, FWS1B1, FWS1C2, FWS1C3 across the top, FWS1B2, FWS1C1, CLS3D1, LWP1:P1-S1 in the middle row, and RWP2:P2-S1, FWP3:P3-S1 in the bottom row. Colored tiles show which wall + scene + layer each cell maps to. Two CRT output windows stacked with a minimap thumbnail in the lower-right corner.
Zoomed out mapping view — every cell labeled with its scene + layer code (FWS1A1 = Front Wall, Scene 1, layer A1). Useful for large installations with many walls and scenes.

Known quirk: map overlay is mildly jittery

While dragging corner handles in Mapping mode, the overlay grid can flicker slightly between frames — the warp math settles after a brief pause. Cosmetic only, no effect on the rendered output; queued for a post-demo pass to debounce the per-frame overlay recomputation.
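The planned debounce is the standard coalesce-to-one-redraw-per-frame move. A sketch of the pattern (illustrative, not the shipped fix):

```ts
// Illustrative rAF debounce: recompute the warp overlay at most once per
// frame, always from the latest corner positions. Not the app's shipped code.
type Corners = {
  tl: [number, number]; tr: [number, number];
  bl: [number, number]; br: [number, number];
};

let pending: Corners | null = null;
let scheduled = false;

export function onCornerDrag(corners: Corners, redraw: (c: Corners) => void): void {
  pending = corners;       // keep only the most recent drag position
  if (scheduled) return;   // a redraw is already queued for this frame
  scheduled = true;
  requestAnimationFrame(() => {
    scheduled = false;
    if (pending) redraw(pending);
    pending = null;
  });
}
```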

Source files: OutputsTab.tsx, ProjectorEditPanel.tsx, ExternalOutputCard.tsx.

🎲 6. 3D tab

The 3D tab gives you a spatial view of your walls, CRTs, and projectors as real geometry. It’s how you place walls in a virtual room and set the camera path for the 3D web viewer (so remote audiences see the scene from meaningful angles). You can also edit wall transforms directly with a move/rotate/scale gizmo.

3D Installation Preview showing Left Wall and Front Wall rendered as 3D geometry with CRTs in place (A1, C1, D1), wall name labels floating in space. Batch Edit sidebar on the right shows the '3D' sub-tab with Depth, Pitch, Yaw, Roll sliders and a Reset 3D button.
3D tab in Edit mode, Outside camera preset. Batch Edit sidebar → 3D sub-tab exposes per-wall Depth / Pitch / Yaw / Roll plus a Reset 3D — this is how you tilt, rotate, or push walls back in 3D space.

Camera presets

Transform mode (for editing walls)

Select a wall from the hierarchy panel, then switch transform mode in the toolbar:

Other things you can toggle

3D first-visit blank — Fixed in v2.6.3

The old “blank 3D on first click” glitch was a lazy-load race between the Three.js container mount and the scene initialisation. Patched in v2.6.3 with a container-migrate poll (up to 3 s) plus a texture-update dependency on init completion, so the scene paints AND keeps live frames flowing the first time you open the tab. If you hit it on an older build, bounce back to Canvas and return.
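For the curious, a container-migrate poll is a small loop. A sketch matching the constraints described above (up to 3 s), with hypothetical names rather than the actual ThreeDTab.tsx code:

```ts
// Sketch of the container-migrate poll pattern described above.
// Names are hypothetical, not the actual ThreeDTab.tsx implementation.
async function waitForContainer(
  getEl: () => HTMLElement | null,
  timeoutMs = 3000,
  intervalMs = 100,
): Promise<HTMLElement> {
  const deadline = Date.now() + timeoutMs;
  while (Date.now() < deadline) {
    const el = getEl();
    // Only accept a mounted container with real layout dimensions.
    if (el && el.clientWidth > 0 && el.clientHeight > 0) return el;
    await new Promise(r => setTimeout(r, intervalMs));
  }
  throw new Error('3D container never mounted within the poll window');
}
```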

Source files: ThreeDTab.tsx, transform/TransformToolbar.tsx.

🛠 7. Toolbar functionality

Everything below lives in Zone 2 — the bottom panel. Tabs along its top switch between Scenes & Layers, Timelines, Sources, MIDI, Protocols, Cloud, Captions, Signals, Logs, and property tabs (Base content / Transform / Color / Effects / Source types — only visible when you have a layer selected).

Zone 2 tab strip with nine tabs: Scenes & Layers (active), Timelines, Sources, MIDI, Cloud, Logs, Protocols, Signals, Captions.
Zone 2 tab strip — nine tabs, left to right: Scenes & Layers, Timelines, Sources, MIDI, Cloud, Logs, Protocols, Signals, Captions. Everything in this section lives here.

7a. Scenes & Layers

Default tab on load. This is where you manage scenes, layers, and the selected layer’s details all in one place. The layout is: Scenes panel on the left (scene tabs + output shortcuts + scene colour), Layers panel below (per-scene layer list with reorder, duplicate, delete), Layer Details on the right.

The hierarchy panel that’s always visible on the left of the canvas (TimelineHierarchyPanel) is a separate surface — it shows every wall / scene / CRT / layer in one tree, lets you select anything for editing, and is where you drag media in from the Sources tab.

7a.1 Sources tab — webcams, library, folder

The Zone 2 tab labelled Sources is a dedicated media-ingest surface. Three columns, each one source type you can drag onto layers in the hierarchy.

Webcams 📷

Every connected camera appears here automatically.

  • Start / Stop per camera; stopping releases the device
  • LIVE / HLS chip toggles stream mode per source
  • Refresh (🔄) rescans the device list
  • Coloured tag chips show which layers reference the camera

Library 📁

Loaded media files (images + videos).

  • + button — open a file picker; files loaded here persist for this project
  • Each file shows status, tags (which layers use it), proxy badge for downsampled content, DUP chip for duplicates
  • Load re-ingests a file (use when it changed on disk)

Project / Folder 📂

A pointer to a folder — fastest way to bring in a whole bulk library.

  • 📂 button picks the folder. Unset → label Folder (transient); set as project → Project + SAVED badge
  • Multi-select files with Shift/Cmd-click, or Cmd-A, then drag onto any layer in the hierarchy
  • 🔄 refreshes the folder listing

Sources tab with five connected webcams listed on the left (FaceTime HD, NDI Virtual Camera, iPhone, USB Camera, Logitech), empty Library in the middle, empty Folder column on the right.
Sources tab — webcams (left), library (middle), project folder (right).

7b. Timelines Beat matching untested

Timelines give you a transport bar (play / pause / stop / record / BPM), a Resolume-style scene × layer trigger matrix, and keyframe editing for the selected layer.

Beat matching — untested end-to-end

The pieces exist — BPM CC input via MIDI, tap-tempo trigger, beat phase output through the MIDI parameter resolver. But we haven’t validated sync quality under load with real controllers. If you try it, please tell us how it behaves — phase drift, jitter, hang-ups. This is on the list to formalise in the next iteration.
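If you do test it, it helps to know what correct output should look like, since tap tempo and beat phase reduce to plain arithmetic. A reference sketch (assuming an average over the last few tap intervals, which may differ from the app’s resolver):

```ts
// Minimal tap-tempo + beat-phase arithmetic. A reference sketch, not the
// app's MIDI parameter resolver.
const taps: number[] = [];

export function tap(nowMs: number): number | null {
  taps.push(nowMs);
  if (taps.length > 8) taps.shift();  // keep the last 8 taps
  if (taps.length < 2) return null;
  const intervals = taps.slice(1).map((t, i) => t - taps[i]);
  const avg = intervals.reduce((a, b) => a + b, 0) / intervals.length;
  return 60000 / avg;                 // BPM
}

// Beat phase in [0, 1): where we are inside the current beat.
export function beatPhase(nowMs: number, bpm: number, downbeatMs: number): number {
  const beatLen = 60000 / bpm;
  const phase = ((nowMs - downbeatMs) % beatLen) / beatLen;
  return phase < 0 ? phase + 1 : phase; // stay in [0, 1) even before the downbeat
}
```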

Timelines tab: transport bar with play/stop/time, three scene columns, Walls row with FRO/LEF/RIG/ALL buttons, filter row (All/A/B/C/D/E; Group: Flat/Zone/Wall/Type), scene trigger grid below.
Timelines tab: transport, scene columns, wall scope, and the zone-by-scene trigger grid.

7c. MIDI Guarded but fragile

MIDI panel handles device connection, parameter mapping, and includes a virtual on-screen Launch Control XL for testing without hardware.

CC updates are rAF-batched (120Hz knob streams folded to 60Hz render cadence) so smooth turns don’t stutter the UI. Note-on triggers fire immediately.
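The batching pattern is worth knowing if you’re debugging controller feel: CC values coalesce to one apply per animation frame, latest value wins, while note-ons go straight through. A sketch of that pattern (not the literal implementation):

```ts
// Sketch of rAF batching for MIDI CC: latest value per (channel, cc) wins,
// flushed once per animation frame; note-ons bypass the queue entirely.
const latestCC = new Map<string, number>();
let flushQueued = false;

export function onMidiMessage(
  status: number, data1: number, data2: number,
  apply: (key: string, value: number) => void,
): void {
  const kind = status & 0xf0;
  if (kind === 0x90 && data2 > 0) {   // note-on: fire immediately
    apply(`note:${status & 0x0f}:${data1}`, data2 / 127);
    return;
  }
  if (kind === 0xb0) {                // control change: coalesce
    latestCC.set(`cc:${status & 0x0f}:${data1}`, data2 / 127);
    if (!flushQueued) {
      flushQueued = true;
      requestAnimationFrame(() => {
        flushQueued = false;
        latestCC.forEach((v, k) => apply(k, v));
        latestCC.clear();
      });
    }
  }
}
```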

Known issue: scene-trigger buttons can destabilise the renderer DEFERRED

Triggering scenes via the MIDI panel’s on-screen scene-trigger buttons was crashing the renderer (SIGSEGV). A try/catch guard landed in v2.6.2 (commit 5aedc90) so the hard crash surfaces as a console warning instead — but the underlying state-cascade issue isn’t fully fixed.

Workaround: switch scenes via the scene tabs at the bottom of the app, not via the MIDI panel’s trigger buttons. Learn mode and knob/CC mappings are fine; it’s only the scene-trigger action that’s flaky. If you hit it and the app goes weird, see the localStorage-nuke recovery in §10.

MIDI tab showing the 'Scene / Layer Grid — click to select MIDI mapping targets' panel with scene columns and the per-zone trigger grid, configurable via MIDI learn.
MIDI tab: Scene / Layer grid used to pick parameters for MIDI learn. The trigger buttons here are where the SIGSEGV issue lives — prefer scene tabs for switching scenes.

7d. Protocols

Live traffic monitor for OSC / MIDI / Art-Net / sACN / DMX. Use it when you’re debugging why a signal isn’t doing what you expect.

Art-Net and sACN messages show universe + channel; OSC shows the address pattern + value; MIDI shows CC number + value or note + velocity.

7e. Cloud Stale pricing; fal.ai 60-min cap

The Zone 2 Cloud tab is the cloud media browser — shared library hosted on R2 / CDN, filterable by source (WallSpace, A.EYE.STUDIOS) and type. Drag entries straight onto a layer to use them. On a fresh install without sign-in the tab shows a neutral notice — “Sign in to browse the cloud media library.” — instead of the old red “Failed to fetch” banner (softened in v2.6.3). Sign in to populate the grid; empty state otherwise means no assets match the active filter.

Where RunPod, HF, TURN actually live

GPU-pod management (RunPod API key, pod picker, cost/hr, HF token, Cloudflare TURN) is not in the Cloud tab — it lives in the GPU Console (the bar just above the Zone 2 tabs). Expand the console → right-hand GPU Resources column. See §8.

Live RunPod pricing can fall back to stale static table

The GPU picker shows cost/hr per GPU type. A GraphQL call fetches live pricing from RunPod, but on failure it falls back to a hardcoded table that’s weeks stale. Prices in the picker may be lower than reality — cross-check on runpod.io before booting a pod if cost matters. Fix is scheduled; see project_runpod_pricing_fix.md.
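The failure mode is a plain fetch-with-static-fallback. A sketch of the behaviour (the GraphQL query shape and the fallback figures are placeholders, which is exactly the staleness risk described above):

```ts
// Sketch of live-pricing-with-stale-fallback. The query shape and fallback
// figures are placeholders, not real prices or the app's actual query.
const STALE_FALLBACK: Record<string, number> = {
  'NVIDIA RTX 4090': 0.44,  // illustrative and likely stale: the exact risk
  'NVIDIA A100 80GB': 1.19,
};

export async function gpuPricePerHour(gpuId: string): Promise<{ usd: number; live: boolean }> {
  try {
    const res = await fetch('https://api.runpod.io/graphql', {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify({ query: '{ gpuTypes { id securePrice } }' }),
    });
    if (!res.ok) throw new Error(`HTTP ${res.status}`);
    const body = await res.json();
    const hit = body?.data?.gpuTypes?.find((g: { id: string }) => g.id === gpuId);
    if (hit?.securePrice != null) return { usd: hit.securePrice, live: true };
    throw new Error('gpu type missing from response');
  } catch {
    // Anything that goes wrong lands on the static table; flag it as stale.
    return { usd: STALE_FALLBACK[gpuId] ?? NaN, live: false };
  }
}
```

A `live: false` flag is the cheap UI hook: render stale prices in a warning colour instead of silently presenting them as current.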

fal.ai: hard 60-minute session cap

Daydream and other fal-backed sources hit a hard 60-minute session limit. The server sends MAX_DURATION_EXCEEDED with no advance warning, kills the connection, and deletes assets uploaded directly in the Scope UI (VACE reference images uploaded via the layer config can be re-uploaded). A 55-min client countdown exists; auto-handoff to a fresh session is planned but not yet built. Long set? Plan a 55-min cycle and keep source assets on disk so they can be re-uploaded.
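Until auto-handoff exists you can automate the cycle on your side. A session-timer sketch built from the numbers above (purely illustrative):

```ts
// Illustrative 55-minute session timer for fal-backed sources: warn early,
// then tear down cleanly before the server's hard 60-minute kill.
export function startFalSessionTimer(
  onWarn: (minutesLeft: number) => void,
  onRotate: () => void,
): () => void {
  const WARN_AT_MS = 50 * 60 * 1000;    // heads-up at 50 min
  const ROTATE_AT_MS = 55 * 60 * 1000;  // restart before the 60-min cap
  const warn = setTimeout(() => onWarn(10), WARN_AT_MS);
  const rotate = setTimeout(onRotate, ROTATE_AT_MS);
  return () => { clearTimeout(warn); clearTimeout(rotate); }; // cancel on manual stop
}
```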

Cloud tab showing WallSpace / A.EYE.STUDIOS source toggles and an 'All' filter dropdown. In v2.6.3 the old 'Failed to fetch' red banner is replaced with a neutral 'Sign in to browse the cloud media library.' notice.
Cloud tab (cloud media browser) — WallSpace / A.EYE.STUDIOS source toggles + type filter. Empty state in v2.6.3 shows a neutral sign-in prompt.

Cloud media upload + cross-portal browsing — work in progress

The Cloud tab can list assets already published to R2, but the upload flow from the web portal and A.EYE.STUDIOS into this browser is still under active development and not fully tested. You may see missing thumbnails, stale listings after publishing, or auth prompts that don’t quite land. Report anything weird, but don’t rely on round-tripping media through the Cloud tab for production work yet — upload directly via the web portal and pull files into layers with Load File… from the Source picker for now.

7f. Captions Whisper default; stable

Real-time speech-to-text + optional translation + emotion scoring + keyword triggers. The captioning system is deliberately general-purpose — it works for live events, conferences, video calls, conversations, and offline audio files.

Two surfaces: (1) the Zone 2 Captions tab is the transcript history browser for the current scene — search, clear, export to SRT/VTT/TXT/JSON/MD; (2) the per-layer Captions source (select a layer, set its source type to Captions / Text) is where the live engine config lives.

Captions tab (Scene 2) showing a transcript history with 7 segments — 00:10 *Taps* *Taps*, 00:15 Haha, god., 00:35 Testing, testing, testing., 00:41 La la la la la — with a Search transcript field, Clear + Export buttons top-right, and a '7 segments 00:10 – 01:03' footer. The canvas above shows two walls with Spanish translation 'Felicidad, tristeza' at top and English caption 'Uhhh... Happiness. Sadness.' at the bottom of each wall.
Captions tab — caption history browser for the current scene. Timestamped segments, search, clear, and export to SRT/VTT/TXT/JSON/MD. Above the panel: walls showing the live English captions + live Spanish translation layered together.
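The export formats are plain transforms of the segment list. For reference, SRT framing looks like the sketch below (the segment shape is an assumption, not the app’s internal type):

```ts
// Reference sketch of SRT framing from caption segments. The segment shape
// here is assumed, not the app's internal type.
interface Segment { startMs: number; endMs: number; text: string }

const srtTime = (ms: number): string => {
  const pad = (n: number, w = 2) => String(n).padStart(w, '0');
  const h = Math.floor(ms / 3600000);
  const m = Math.floor(ms / 60000) % 60;
  const s = Math.floor(ms / 1000) % 60;
  return `${pad(h)}:${pad(m)}:${pad(s)},${pad(ms % 1000, 3)}`;
};

export const toSrt = (segments: Segment[]): string =>
  segments
    .map((seg, i) => `${i + 1}\n${srtTime(seg.startMs)} --> ${srtTime(seg.endMs)}\n${seg.text}`)
    .join('\n\n') + '\n';
```

VTT is the same idea with a `WEBVTT` header and `.` instead of `,` as the millisecond separator.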

7g. Signals

The Signals tab is the live readout of everything the caption intelligence engine is deriving from the current input — which emotion is detected, what the voice features read, what the currently-active text is, and which downstream styling / effects are being applied to the caption layer as a result. Two views, one per layer (pick the layer from the dropdown at the top):

Per-layer emotion + voice signals

Signals tab with a caption layer selected (layer_1776804826716 in the dropdown). Sections: EMOTION shows a horizontal bar labelled 'Joy' with the bar filled to full, coloured pale yellow. VOICE section shows four rows — Volume: silent (-80dB), Spectral: 0Hz warm dark, Tremor: stable, Rate: 0 WPM. SPEAKER section starts below but is cut off.
Per-layer Emotion + Voice signals — dominant emotion with confidence bar (Joy shown here), plus live voice-feature readouts driven by the voice extractor: volume category with dB, spectral centroid with timbre descriptor, tremor / stability, and speaking rate in WPM.

Active text + applied effects

Signals tab with a second section visible for the same layer. 'No keyword triggered yet / 0 keyword(s) configured' at top. ACTIVE TEXT section shows 'happiness, sadness'. EFFECTS section shows two columns — Size 1.00x, Shake 0,0px, Color with a small yellow swatch and #FFE4A0; Weight normal, Opacity 1.00, Spacing 0px. FEATURES row at the bottom with five dots — Voice (grey), Emotion (green dot active), Speaker (grey), Style (grey), Keywords (grey).
Active text + applied caption effects — shows the current text, plus the resolved style modifiers the emotion/voice engines are applying (size multiplier, shake, emotion colour swatch, weight, opacity, letter spacing). The Features dots show which caption-intelligence features are active on this layer.
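The resolved modifiers behave like a pure function of the emotion and voice readouts. A sketch of what such a mapping could look like (the curves and colours are invented; only the Joy swatch #FFE4A0 is taken from the screenshot above):

```ts
// Illustrative mapping from emotion/voice signals to caption style modifiers.
// The curves and colours are made up, not the app's actual values.
interface Signals { emotion: string; confidence: number; volumeDb: number; tremor: number }
interface StyleMods { sizeMul: number; shakePx: number; color: string; opacity: number }

const EMOTION_COLORS: Record<string, string> = {
  joy: '#FFE4A0', sadness: '#7A9CC6', anger: '#E06C5A', neutral: '#FFFFFF',
};

export function resolveStyle(s: Signals): StyleMods {
  return {
    // Louder speech scales text up (clamped); -80dB silence stays at 1x.
    sizeMul: Math.min(1.5, 1 + Math.max(0, (s.volumeDb + 40) / 80)),
    // Voice tremor translates to a small positional shake.
    shakePx: Math.round(s.tremor * 4),
    // Dominant emotion picks the swatch; confidence drives opacity.
    color: EMOTION_COLORS[s.emotion] ?? EMOTION_COLORS.neutral,
    opacity: 0.6 + 0.4 * s.confidence,
  };
}
```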

OSC signal routing (trigger rules, external addresses) is configured in the MIDI and Protocols panels + per-layer Scope OSC config — the Signals tab is the observer, not the router. The dedicated Signal Debug / Visualisation panel + preset import/export tools are scheduled but not yet in this build.

7h. Logs

The central log view for everything the app is doing. Useful when something isn’t behaving and you want to know why without cracking open developer tools.

When you report an issue, include a log snippet — filter to Errors, copy, paste. It saves a round trip.

Logs (45) tab in Zone 2 showing filter tabs (All, Errors, Ports, Scope, Perf) and Copy / Clear buttons top-right. Entries are timestamped (13:06:09, 13:06:14, etc.) with level (INFO) and service columns (Transcription, OSC, TranscriptSess). Typical lines: 'Fresh master stream: 1 tracks, states: live', 'Loading model: small.en', 'Bridge auto-started for caption broadcast', 'Model loaded successfully', 'Started: whisper:master → .../Transcripts/2026-04-21/13-06-09_whisper_master', 'AudioWorklet processor registered', 'AudioWorklet capture started (sampleRate: 48000, chunkInterval: 5000ms)', 'Active — AudioWorklet capturing PCM', 'Started Whisper session for master → 1 layers', followed by a run of 'Skipping silent chunk (-InfinitydB, 237568 samples @ 48000Hz)' entries every ~5s.
Logs tab populated with a Whisper session lifecycle — model load, transcript-session path, AudioWorklet setup, then silent-chunk skips while no one’s speaking. Count pill on the tab (Logs (45)) shows how many entries the aggregator currently holds.

Good logs to report with

The aggregator captures main process + renderer + any console.* calls with source/service tagged. When something breaks: click Errors to filter, then Copy, and paste that block with your bug report. Timestamps are local time, service column tells us whether it fired in the compositor, transcription, Scope, etc.

7i. Source types (per layer)

When you select a layer, the “Source” tab in Zone 2 shows a dropdown of source types. This is where the layer’s content comes from. Each source type has its own config block. The table below is the full reference; the walkthrough that follows is a visual tour of each type, with screenshots where they’re ready.

| Type | What it is | Key required? |
| --- | --- | --- |
| 📁 Media | Image or video file from your media library | No |
| 📷 Webcam / Capture Card | Live camera or capture-card feed | No (OS permission) |
| 🖥 Syphon | Accept a Syphon video stream from another macOS app | No |
| 📡 NDI | Accept an NDI stream over the network | No |
| 🔊 Audio File | Audio-only source for music / ambient tracks | No |
| ♫ Spotify | Album art + track metadata (display-only; audio stays in Spotify) | OAuth (Spotify account required) |
| 🌀 MilkDrop | Music visualisation presets, beat-sync-friendly | No |
| 🎵 Audio Visualizer | Audio-reactive band-bar / waveform modes rendered as layer visuals | No |
| 🌊 Audio-Reactive Noise | Generative noise patterns (plasma, flow, particles, bounce, grid, voronoi, rings, fractal, lissajous, stars) | No |
| 🧠 Scope | Scope plugin for real-time AI re-styling. Plugin picker + prompt + model config. Runs on local Scope server or remote RunPod pod | Scope endpoint URL; RunPod key only if using a remote pod |
| ☁ Daydream Live SD Turbo | ControlNet + guidance weight + prompt input. Powered by fal.ai | Daydream / fal.ai API key (yes) |
| 🌐 WebRTC / Browser | Embed a web page, a live WebRTC feed, or a remote source room | Depends on target |
| 📹 Remote Camera | Accept a camera feed from a remote viewer via the wallspace.studio portal | No (but see TURN note in §9) |
| 🎨 AI Media | Generative AI source via RunPod-hosted models (Flux, Seedance, SDXL, Qwen, Wan, Sora, Weavy, VFX Effects, etc.) — each model has its own input mode / prompts / params. | RunPod required; model + plugin specific |
| 🎲 3D Scene | Render a mini 3D scene into the layer. WIP: feedback loop between 3D scene source + compositor is still being worked out — for a clean 3D preview use the web portal room viewer, which renders 3D scenes without the feedback issues. | No |
| 💬 Captions / Text | Rendered live captions or static text as a video source | Depends on engine — Whisper local = no |

Visual walkthrough

Every screenshot below is a real layer running on the four-wall main room. Use the preset picker / model dropdown in each panel — nothing shown here is bespoke. Source types marked Coming soon have screenshots pending.

📁 Media (MP4 / image)

The default source. Drop a file from your media library onto a layer; video auto-loops, still images hold. Base Content controls (fit, pan/zoom, opacity) apply on top.

Layer with Media source type showing an underwater MP4 running on Front Wall with Layer Details panel open — sound waves overlay with caption 'looking tube worms. Look at these'.
Media source — an MP4 from the media library running on the main room, caption layer overlaid on top.

🌀 MilkDrop visualiser

Music-reactive preset engine — the MILKDROP VISUALIZER panel has a preset picker (filter + preview + Next/Random buttons). Pick a preset and the layer reacts to whatever audio is feeding the app. No key required.

MilkDrop layer selected — Source tab shows MILKDROP VISUALIZER with current preset '555 Royal - Mashup' and a searchable preset list below, walls rendering starburst visuals.
MilkDrop source — the preset picker lives directly in the Source tab. Hundreds of classic MilkDrop presets are bundled.

🎵 Audio Visualizer modes

Any audio layer can render itself as a visual using the built-in Audio Visualizer. The Base Content tab has a mode row (bars / spectrum / waveform / radial…) and an Audio Analyzer strip showing the live FFT. Good fallback when you don’t want to pull in a MilkDrop preset.

Audio layer with Audio Visualizer mode active — walls show dense radial burst pattern, Audio Analyzer strip shows live frequency bands.
One mode running live across three walls.
Audio layer with a different Audio Visualizer mode active — walls show flowing organic pattern, same Audio Analyzer strip at bottom.
Different mode selected — same audio, different visual translation.

🌊 Audio-Reactive Noise

Generative noise source with 10+ sub-patterns. Pick the pattern in the Source tab; each one reacts to audio amplitude & frequency differently. All local, no key, zero network dependency — great offline fallback.

Noise source set to Plasma pattern running on walls.
Plasma
Noise source set to Flow pattern.
Flow
Noise source set to Particles pattern.
Particles
Noise source set to Particles pattern with heart-shaped configuration.
Particles — alternate shape
Noise source set to Bounce pattern.
Bounce
Noise source set to Grid pattern.
Grid
Noise source set to Voronoi pattern.
Voronoi
Noise source set to Rings pattern.
Rings
Noise source set to Fractal pattern.
Fractal
Noise source set to Lissajous pattern.
Lissajous
Noise source set to Stars pattern.
Stars

🌐 WebRTC / Browser source

Embed a web page as a live source. Paste a URL, pick the video element on the page with the element picker, and the layer captures that element as its feed. Works for YouTube, live streams, webapps with canvas, etc. — anything Chromium can render. Capture is local (no external key needed for the source itself; the page’s own auth still applies).

Browser source opened to youtube.com with picker overlay asking 'Pick a video element to capture'.
Element picker — point at the video you want.
Layer with WebRTC Browser Source selected, Layer Details panel showing URL field, Syphon In / NDI Out toggles, Mode / Opacity controls, captured page composed onto all four walls.
After capture — layer runs just like any other source, blendable and transformable.
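Element capture itself rides on the standard captureStream() web API: once the picker has found the <video>, grabbing its frames is one call. A sketch with the picker logic omitted:

```ts
// Sketch of capturing a page's <video> element as a MediaStream via the
// standard captureStream() web API; picker/selection logic omitted.
function captureVideoElement(video: HTMLVideoElement): MediaStream {
  // Chromium exposes captureStream on media elements (Firefox prefixes it).
  const el = video as HTMLVideoElement & {
    captureStream?: () => MediaStream;
    mozCaptureStream?: () => MediaStream;
  };
  const grab = el.captureStream ?? el.mozCaptureStream;
  if (!grab) throw new Error('captureStream unsupported in this renderer');
  return grab.call(el);
}
```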

🎨 AI Media (RunPod-hosted models)

Heads up: some AI Media parameters, model names, and UI details have changed since these screenshots were taken. Treat the panel layout below as representative — the exact model list + required params in the current build may differ.

Generative model picker with a big catalogue (Flux Dev, Qwen Image, Seedance, SDXL, Wan, Sora, Weavy, VFX Effects Workers, Chatterbox TTS, and more). Pick a model, fill in the required input mode (text → image, image → video, text → audio, etc.), provide prompts / params, and the layer renders the generated output. RunPod key required (enter it in GPU Console → GPU Resources).

AI Media source with model picker dropdown open showing a long list of models grouped by capability — VFX Effects, Flux Dev, Qwen variants, SDXL 2.1, Wan, Seedance, Chatterbox TTS, and more.
Model picker — the list is long and grouped by input/output mode (text-to-image, image-to-video, text-to-audio, etc.).
AI Media panel with Seedance 1.5 Pro (I2V) selected — prompt field, negative prompt, First Frame / Last Frame image slots, resolution / duration params, status Complete, result thumbnail.
Seedance I2V (image-to-video) mid-run — prompt + first/last frame inputs.
Same Seedance panel, generated clip now playing across the four walls of the main room.
Generated clip composited across the walls — plays like any other video source.

💬 Captions / Text

Live captions from a configurable engine (Whisper local by default — no key), or static text you type in. Captions have their own panel with transcription settings, language, display toggles, and extensive styling controls (font, size, colour, outline, drop shadow, animation, translation). Becomes a video source like any other, which means you can blend it, transform it, or drop it on specific walls.

Captions layer selected — Live Transcription section with Engine dropdown (Whisper), Language (English), Label By, Show prompt / Show overlay / Block by word display / Block by word prompt / Intention overlay toggles, Position / Routing config below.
Captions source — engine + language + display toggles. Whisper local is the default and needs no key.
Captions layer Tools / Design tab — font picker, size slider, outline, colour, opacity, typeface, drop shadow, Animation Stop controls. Walls show live caption 'And these are very well' styled accordingly.
Styling tab — font / size / colour / outline / drop shadow / animation. Same engine, different typographic treatment.

🧠 Scope (real-time AI restyling)

Routes the layer through the Scope node editor — a visual graph where you wire source nodes through models (English/language node, prompt compositor, control nodes) into a result. Runs on a local Scope server or a remote RunPod pod. Requires a Scope endpoint URL; RunPod key only if you’re using a remote pod.

Daydream Scope node-graph editor — Source node connected to English node into a configuration node, fanning out to Result node showing processed output, with Logs / FPS / Bitrate readouts at bottom.
Scope node graph — source → model chain → result. Visual wiring, live preview.

☁ Daydream Live SD Turbo Coming soon

ControlNet-driven live image restyling through fal.ai’s Daydream Live SD Turbo model. Pairs with a ControlNet preprocessor on another layer (see Effects) to condition the output. Requires a Daydream / fal.ai API key. 60-minute session cap — see Known issues.

Screenshot pending — Daydream Live SD Turbo panel walkthrough.

🎲 3D Scene Coming soon

Renders a mini 3D scene into the layer. Feedback loop between the 3D scene source and the compositor is still being worked out — for a clean 3D preview use the web portal room viewer, which renders 3D scenes without the feedback issues.

Screenshot pending — 3D Scene source panel walkthrough.

Source files: LayerEditor.tsx, MilkDropLayerConfig.tsx, ScopeLayerConfig.tsx, DaydreamLayerConfig.tsx, CaptionSignalPanel.tsx.

7j. Base content

The “Base content” property tab covers how the source maps onto the layer’s slot.

7k. Transform

Layer position, scale, rotation — with both slider controls and an on-canvas interactive gizmo.

7l. Color

Per-layer colour adjustment. Works on any source including live webcam and AI layers.

7m. Effects

The Effects sub-tab has four columns — a per-layer Canvas effect chain, a ControlNet preprocess picker, a Daydream-CN summary, and an opt-in serverless GPU plugin pipe.

Effects

Per-pixel Canvas effects that run in the desktop compositor: Blur, Glow, Edge Detect, Threshold, Scanlines, Chroma Key. Each one has its own slider(s) when enabled. Stack what you like — cheap on CPU/GPU and fully local.
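These are classic per-pixel passes. Threshold, for instance, is a single loop over Canvas2D ImageData; a sketch (illustrative, not the EffectsPanel.tsx implementation):

```ts
// Sketch of a per-pixel Threshold pass over Canvas2D ImageData.
// Illustrative only, not the EffectsPanel.tsx implementation.
export function thresholdPass(ctx: CanvasRenderingContext2D, cutoff = 128): void {
  const { width, height } = ctx.canvas;
  const frame = ctx.getImageData(0, 0, width, height);
  const px = frame.data; // RGBA, 4 bytes per pixel
  for (let i = 0; i < px.length; i += 4) {
    // Rec. 601 luma, then hard cut to black or white.
    const luma = 0.299 * px[i] + 0.587 * px[i + 1] + 0.114 * px[i + 2];
    const v = luma >= cutoff ? 255 : 0;
    px[i] = px[i + 1] = px[i + 2] = v;
  }
  ctx.putImageData(frame, 0, 0);
}
```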

ControlNet

Preprocessors that turn this layer’s source into a condition map (edges, depth, scribble, etc.) for a Daydream layer that references this layer as its ControlNet input. Options: Canny Edge, Lineart, Anime Lineart, Soft Edge (with strength slider + Invert toggle — good starting point for webcam input), Scribble, Depth Map, Color Quantize. Only one can be active at a time.

Daydream CN

Reverse view: “No ControlNet targeting this layer” when nothing uses the layer as a condition; otherwise lists each Daydream layer that’s pulling from this source with live preview + opacity. Lets you audit what’s cross-wired before a live set.

GPU Plugin Experimental

Tick Enable Serverless GPU Plugin to route this layer’s source through a standalone GPU plugin (Real-ESRGAN upscale, Depth Anything, VFX Pack, etc.) running on RunPod serverless workers. Requires RunPod credentials (GPU Console → GPU Resources). Adds latency — use for recorded experiences, not live shows.

Effects sub-tab showing four columns — Effects (Blur, Glow, Edge Detect, Threshold, Scanlines, Chroma Key), ControlNet with Soft Edge active at strength 3 and Invert unchecked (Canny Edge, Lineart, Anime Lineart, Scribble, Depth Map, Color Quantize also visible), Daydream CN showing 'No ControlNet targeting this layer', and GPU Plugin with 'Enable Serverless GPU Plugin' checkbox and description of Real-ESRGAN / Depth Anything / VFX Pack. Above the panel the canvas shows Left/Front/Right walls with the webcam feed rendered as white-on-black edge-detection outlines — the ControlNet preprocessor's output live on the walls.
Effects sub-tab with Soft Edge ControlNet active at strength 3 — the wall previews show the preprocessor output (white-on-black edges) which is what a Daydream layer referencing this source would receive as its condition map.

Tip: preview a ControlNet preprocessor without a Daydream layer

You don’t need a Daydream layer wired up to see what a ControlNet preprocessor is doing. Pick Soft Edge / Depth Map / Canny on any webcam or video layer that’s visible on a wall, and the preprocessor output becomes the wall content — good for dialling in edge strength or depth range before you commit to a Daydream run.

Source files: EffectsPanel.tsx, ControlNetPreview.tsx, DaydreamLayerConfig.tsx.

🔥 8. GPU console

The GPU Console is the collapsible bar sitting between the tab bar and content. It’s where you tune how hard the app pushes the GPU, and where your cloud-inference credentials (RunPod, Hugging Face, Cloudflare TURN) actually live. Click the ▸ chevron to expand.

What’s in the expanded console

GPU Console expanded showing Performance Mode button, Features toggle row (TV/IO/AU/AI/MI/3D/GPU), Canvas/2D mirrors Output row, Outputs quality controls, 3D Preview quality controls, GPU Windows, GPU Resources column with API/HF/TURN/Pod fields, Render Pipeline and AI Plugins collapsed sections.
GPU Console expanded. GPU Resources (API / HF / TURN / Pod) on the right — this is the actual home of RunPod + Hugging Face + Cloudflare TURN setup.

What each control does

| Control | What it does | Default |
| --- | --- | --- |
| GPU status dot | Green = ready, yellow = connecting, gray = disabled, red = error. Click for pod detail. | Green if local GPU detected |
| Per-section quality | Drop quality for Layers / 2D Preview / 3D Preview / Outputs independently. | High / Standard across the board |
| Global quality preset | One switch: Ultra / High / Medium / Low / Standard. Changes all per-section values at once. | High |
| GPU effect pipeline | Off / CRT Phosphor / Chromatic / Color Grade / Full CRT. The visual flavour of the CRT emulation. | Off |
| AI Upscale | Off / Low / Medium / High. Per-output AI upsampling. | Off |
| Perf mode | Skip certain sections entirely for max throughput. Gating flag. | On (with high quality + CPU 4x) as of v2.6.2 |
| WebGPU toggle | Experimental. Hardware-accelerated compositor via WebGPU instead of Canvas2D. | Off |
| FPS / latency readout | Live stats from the GPU frame client. | |
| Pod status | Inline RunPod badge with cost/hr when connected. | Hidden if no pod |

High Performance Mode — when to flip it on

The Perf mode toggle (top of the expanded GPU Console) is the master switch for “throughput over fidelity.” When on, the render loop can skip expensive per-pixel work to keep frame rates high on heavy scenes or under live-streaming load. Use it when you see GPU fps dropping in the header readout, when captions or transforms stutter, or when a show has a lot of simultaneously active outputs + Scope/Daydream pipelines.

What Perf mode skips (controlled by per-feature checkboxes)

| Skip option | What it drops | Visual cost |
| --- | --- | --- |
| Skip Effects | Per-pixel effect chains on layers (blur, glow, edge, scanlines, chroma, canny, depth, lineart). Colour adjustments + warps still run. | Medium — stylized looks flatten; base content still composites correctly |
| Skip Blend / Transform | Layer blend-mode blending + transform compositing on the preview canvas. Outputs still receive full transform math. | Low — Canvas preview loses blend-mode mixes; outputs unaffected |
| Skip Projectors | The projector render pass entirely — wall content still goes to CRT outputs, but projector slots stay black. | High if you’re actually projecting. Use only for testing with CRT-only setups. |
| Skip 3D labels | 3D-scene label sprites (wall IDs, zone badges). Scene geometry + textures still render. | Low — you lose the ID overlays |
| Lower JPEG quality | Output transport encodes at a lower JPEG quality (0.3 instead of 0.5) — smaller IPC payload, faster encode. | Low-medium — visible banding on gradients if you’re pixel-peeping |

When to use Perf mode

Rule of thumb: if the GPU fps readout in the console header drops below your output’s target framerate (60 / 30 / 24 depending on quality preset) for more than a few seconds, flip Perf mode on and toggle off the heaviest skip option first (Skip Effects). Re-measure. If still struggling, add Skip Blend/Transform. Leave Skip Projectors off unless you’re certain no projector is active in the show. Half-resolution encode (1920×1080 inside a 3840×2160 output window) is always on and is the single biggest throughput gain — no toggle needed.
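That rule of thumb is mechanical enough to write down. Encoded as a sketch (the app has no such auto-toggle; the thresholds just mirror the guidance above):

```ts
// Illustrative encoding of the rule of thumb above. Not an actual auto-toggle
// in the app: escalates skip options one at a time when fps stays under target.
interface PerfFlags { skipEffects: boolean; skipBlendTransform: boolean }

let belowSince: number | null = null;

export function evaluatePerf(
  fps: number, targetFps: number, nowMs: number, flags: PerfFlags,
): PerfFlags {
  if (fps >= targetFps) { belowSince = null; return flags; }
  belowSince ??= nowMs;
  if (nowMs - belowSince < 3000) return flags;  // "more than a few seconds"
  if (!flags.skipEffects) return { ...flags, skipEffects: true };             // heaviest first
  if (!flags.skipBlendTransform) return { ...flags, skipBlendTransform: true };
  return flags;                                 // never auto-skip projectors
}
```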

Source files: GpuConsolePanel.tsx, QualitySelector.tsx, compositor.ts.

🌐 9. Web Portal & Go Live

The desktop app is half of WallSpace; the other half is wallspace.studio — the public web portal where audiences watch live shows, browse recorded experiences, send their phone camera into a performance, and view in VR. The Go Live button in the app top bar is the bridge: it opens a WebSocket + WebRTC stream, registers your room with the portal, and generates a share URL audiences can open in any modern browser.

Go Live — publish from desktop to the portal

The Go Live button (purple, top right of the app) drives the full publish flow.

  1. Click Go Live
  2. App starts a local WebSocket video signaling server
  3. App POSTs your room metadata to https://wallspace.studio/api/rooms
  4. Portal returns a public share URL: https://wallspace.studio/room.html?wss=<ws-url>
  5. App begins streaming output frames over WebRTC via Cloudflare TURN
  6. Heartbeat ping every N seconds keeps the room “Live Now” on the landing page
  7. Click Go Live again to stop → room disappears from the portal
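In code terms, the handshake in steps 3 and 4 is small. A sketch (payload and response field names are assumptions; the endpoint and share-URL format come from the flow above):

```ts
// Sketch of the room-registration handshake (steps 3-4 above). The payload
// shape is an assumption; the endpoint and share-URL format come from the
// publish flow described in this section.
interface RoomMeta { name: string; walls: number; wsUrl: string }

export async function registerRoom(meta: RoomMeta): Promise<string> {
  const res = await fetch('https://wallspace.studio/api/rooms', {
    method: 'POST',
    headers: { 'content-type': 'application/json' },
    body: JSON.stringify(meta),
  });
  if (!res.ok) throw new Error(`room registration failed: HTTP ${res.status}`);
  // The share URL carries the WebSocket target as a query param, so visitors
  // skip the manual Connect screen entirely.
  return `https://wallspace.studio/room.html?wss=${encodeURIComponent(meta.wsUrl)}`;
}
```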

Anyone you share that URL with can watch in their browser with no install. Audiences on unrestricted networks connect via WebRTC direct; others fall back to Cloudflare TURN relay (if you’ve configured TURN credentials — see §2).

Before you click Go Live: make sure Start (the green button next to Setup) is on so outputs are actively rendering; otherwise there’s no stream to publish. If you hit Go Live and nothing happens, check Zone 2 → Logs (Errors filter) — the app logs the reason rather than surfacing it as a toast.

The portal landing page — wallspace.studio

Full wallspace.studio landing page with header (Experiences, Beta Guide, NDI Guide, Sign In, Connect), hero 'Your walls, everywhere.' with Enter a Room + Browse Experiences, Live Now section (empty state), Recorded Experiences with SAMCAR Real Estate Session card, and a 'What You Can Do' feature grid.
wallspace.studio landing — public entry point for audiences and collaborators.

Enter a Room

Primary CTA. Opens room.html where a visitor can connect to a live broadcast. If you share a direct URL (?wss=…), the room connects automatically. Otherwise they enter a WallSpace host + port manually.

Browse Experiences

Opens experience.html — a catalog of recorded VJ sets and sessions (SAMCAR Real Estate session is the current flagship). On-demand playback with timeline + captions + chat + fullscreen.

Room code / IP address Join

Direct entry field — paste a code or IP that a host gave you, hit Join. Same endpoint as the URL-based entry but useful for paper shares or QR codes.

Live Now

Auto-populated from the rooms API: shows every room that pinged the heartbeat within the last N seconds. Empty state reads “No rooms are live right now.” Click a card to enter.

Recorded Experiences

Static catalog hosted on R2; click a card to watch. Each card shows duration, wall count, and a thumbnail. SAMCAR and future VJ recordings use MediaRecorder for capture.

Features row

Marketing-style cards describing Watch Live Shows / Send Your Camera / VR + XR Viewing / Live Captions / Music + Lyrics / Remote Collaboration. Exists to explain the value to audiences on the landing, not as navigation.

Recorded Experience player

SAMCAR Real Estate Session recorded experience in 3D. Three curved wall screens show in order: a classroom scene with a presenter standing in front of attendees, a table interview with two speakers, and a montage of 'Star Power OG Club' memorabilia (magazine pages, cassette tapes, newspaper cutouts). Transport bar at the bottom with Play, progress, 1x speed, Unmute, Loop, Captions Off, CC, chat, Fullscreen, plus Reset / Front / Left / Right camera presets top-right and a 'Tap to enable audio' overlay.
Recorded experience player: 3D room preview, transport bar, captions toggle, chat, fullscreen.

The experience player renders a 3D room with the recorded session’s walls in place, plays back the media that was streamed during recording, and supports captions as a toggle. Tap to enable audio appears on first load because browsers block autoplay audio until a user gesture. Transport shows Play / pause, scrub, speed (0.25x–4x), loop, mute, CC captions toggle, chat (💬), and fullscreen.

Enter a Room (live) — the Connect screen

Room connect screen with 'Connect to WallSpace' title, host text field (wallspace.studio), port field (8765), Connect button, and a Disconnected status badge in the top-right corner.
Manual connect view: host + port + Connect. Reached when no ?wss= parameter is present in the URL.

This is the manual fallback entry. Almost always, audiences arrive with a share URL that pre-fills the WebSocket target and skips this view entirely. Status badge top-right shows Disconnected / Connecting / Connected. Once connected, the room transforms into the 3D viewer with live walls + audio.

Portal features worth calling out

| Feature | Where | Status |
| --- | --- | --- |
| Watch live shows | Enter a Room with a share URL | Stable |
| Recorded experiences (VOD) | Browse Experiences → Experience player | Stable |
| 3D immersive viewing | Room + Experience renders 3D walls | Perf jitter on scene rebuild (partially fixed) |
| VR / XR viewing | WebXR headset in a supported browser | Built, lightly tested |
| Live captions in the room | CC toggle in the transport bar | Working with desktop-app caption source |
| Music + Lyrics | Auto-detected when a music layer is active on the host | Shazam + LRCLIB pipeline |
| Send Your Camera (visitor → host) | Room → “Share camera”; feeds visitor cam into host’s Remote Camera layer | TURN missing in portal — see below |
| Chat | Chat bubble in the room + experience players | Built, moderated by host |
| Remote Collaboration (co-VJ) | Multiple publishers onto one room (VJ battles, 24/7 channels) | Early — feedback welcome |

Live WebRTC publishing in practice

Once a room is live, the portal renders the host’s wall composite in-browser over WebRTC. Below: a live session published from WallSpace to the portal (“I am the VJ / Don’t ask me about the music” was the source content for the session) with the 3D room view showing two mirror walls + the CRT grid cluster in between.

wallspace.studio/room viewer showing a live WebRTC session with two walls displaying an 'I am the VJ' t-shirt video on a purple background and a small CRT+projector cluster between them. Header shows render 60fps / atlas 415fps. Right side shows Quick Add panel with CRT type, grid arrangement, placement, slices, and projector controls.
Live WebRTC publish from WallSpace to wallspace.studio/room — render 60fps / atlas 415fps, two walls + CRT cluster rendering in real time.
Google Meet call with Matthew Israelsohn presenting his WallSpace desktop app screen showing a 'Send Camera' view with webcam feed, a 'Connected Peers' section showing Web Portal, and the Meet sidebar chat with 'want to screen share?', 'It completely froze for a while, connect button was greyed out etc then you did something & boom I was connected', 'VPN still enabled' messages discussing an A.EYE.Echo Meeting sync wallspace.studio/room URL.
Field testing the WebRTC publish with Matt: the share URL (wss://….trycloudflare.com) goes into Meet chat, recipient clicks, browser joins the room. Chat captures a real connection-issue session — “completely froze, connect button greyed out, then boom I was connected.”
Same wallspace.studio/room view but focused on the 3D Scene with mannequin-style models wearing the 'I am the VJ Don't ask me about the music' t-shirt on both side walls, tiny CRT stack in the center foreground, purple gradient background, and 'Waiting for scene data...' badge in the lower-left. Layers panel shows FaceTime HD Camera, Remote: Web Portal (LWS2L3), and Noise: fractal (LWS2L2).
Same room, 3D-scene view — remote layer (LWS2L3, labelled “Remote: Web Portal”) piping visitor video back into the host’s scene. “Waiting for scene data…” fires briefly while the host pushes wall geometry.

Known WebRTC connection issues (actively being worked on)

The WebRTC publish pipeline works end-to-end but has rough edges we’re still sanding. If you hit a connection issue during beta testing, it’s expected — please report it (Zone 2 → Logs, filter by Errors) so we can narrow the repro.

Portal-side known issue: remote-camera send fails behind restrictive NATs

The web portal’s room.html uses Google STUN only — there’s no TURN fallback. Visitors on corporate / hotel / cellular networks that block direct UDP can’t reach the host’s camera endpoint. Desktop-app-to-desktop-app has Cloudflare TURN wired up; portal-to-app doesn’t yet. If your audience reports “camera won’t connect,” this is almost always why. Watch-only works regardless of NAT because outbound WebRTC to the host is initiated by the host side.
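The gap is literally the iceServers list handed to RTCPeerConnection. Both configs in one sketch (credential values are placeholders):

```ts
// The gap in one snippet: the portal's config (STUN only) vs. the desktop
// app's (STUN + Cloudflare TURN). Credential values are placeholders.
const portalConfig: RTCConfiguration = {
  iceServers: [{ urls: 'stun:stun.l.google.com:19302' }], // no TURN fallback
};

const desktopConfig: RTCConfiguration = {
  iceServers: [
    { urls: 'stun:stun.l.google.com:19302' },
    {
      // TURN relays traffic when direct UDP is blocked (corporate/hotel/cellular NATs).
      urls: 'turn:turn.cloudflare.com:3478?transport=udp',
      username: 'YOUR_TURN_USERNAME',     // placeholder
      credential: 'YOUR_TURN_CREDENTIAL', // placeholder
    },
  ],
};

const pc = new RTCPeerConnection(desktopConfig);
```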

Tip: share the right URL

When you click Go Live, the app shows the share URL (plain text + a Copy button). That URL has the WebSocket target baked in as a query param, so audiences clicking it go straight into your room — no Connect screen. Paste it into chat / QR code / social share. Anyone who visits wallspace.studio directly without that URL lands on the generic landing page instead.

10. Known issues & untested features

One place to scan every landmine currently in the app. The goal isn’t to scare you off — it’s to save you the cycle of debugging a known issue from scratch. Ordered by blast radius.

| Area | Issue | State | Workaround / Notes |
| --- | --- | --- | --- |
| Boot | Renderer blue-screens on boot (exit code 11 SIGSEGV) | Recovery exists | Nuke Local Storage/leveldb + Session Storage + Preferences in the userData dir. Presets are preserved. Let Jack know if you hit it so we can narrow down the root cause. |
| MIDI | Scene-trigger buttons in the MIDI panel can crash or corrupt state | Guarded, not fully fixed | try/catch guard landed in v2.6.2 (commit 5aedc90). Underlying setCRTs + setWalls + setScenes cascade still suspect. Use scene tabs, not trigger buttons. |
| Timelines | Beat matching end-to-end untested | Unverified | BPM CC + tap tempo + phase output exist. Sync quality under load is unknown. Feedback on real-world behaviour very welcome. |
| Sources | 3D Scene source type has feedback loop issues on desktop compositor | WIP | Feedback / render-loop coupling between the 3D scene source and the main compositor is still being debugged — can surface as flicker, stale frames, or runaway re-renders. Workaround: use the web portal room viewer to show 3D scenes cleanly until the desktop-side fix lands. |
| Cloud | Live RunPod GPU pricing falls back to stale static table | Stale data | Cross-check prices on runpod.io before booting a large pod. Fix scheduled. |
| Cloud | fal.ai hard 60-minute session cap, no advance warning | Known | 55-min client countdown exists. Plan sessions in under-55-min cycles. Assets uploaded directly in the Scope UI are lost on session kill; VACE ref images via layer config can be re-uploaded. |
| Outputs | MP4 loop regression after media-element-source change | Fixed, needs field testing | Commit 8fe5e77. If you notice a video not looping, let us know. |
| 3D web viewer | Perf jitter on scene rebuild | Partially fixed | Scene-hash dedup + disposal fixes landed. If it’s still jittery post-deploy, report — incremental scene updates still on the list. |
| 3D tab | First-visit blank / static frame in Start mode | Fixed in v2.6.3 | Container-migrate poll + texture-update dep on init-completion signal. Scene paints on first click and live frames keep flowing in Start mode. |
| Outputs | Projector with CRTs in same slot rendered mostly black | Fixed in v2.6.3 | v2.4.5 projector mapping logic (L1/L2 sub-cell math, 4:3 mask correction, cyan CRT outlines during mapping, CRT-cell blackout pass) was accidentally reverted in v2.4.6; re-transplanted onto v2.6.3. See §5 for walkthrough. |
| Outputs | Map-overlay grid flickers while dragging corner handles | Cosmetic | Warp math settles after a brief pause. No effect on rendered output. Queued for post-demo debounce pass. |
| Sources | Webcam hot-unplug took down all cams simultaneously | Fixed in v2.6.3 | Enumerate-merge + per-track 'ended'/'mute' listeners. A single yanked cam now fails alone; siblings keep streaming. Re-plug to recover. |
| Captions | Layout / safe-zone / sizing reset to default on each new segment | Fixed in v2.6.3 | Token-reveal render path was missing the layout props (safeZoneEnabled, maxWidthPct, etc.) and overwrote the correct canvas with a default-layout one. Both source + translation blocks now carry layout config identically. |
| Captions | Voice-tone emotion score stuck at neutral (0.00) despite voice analysis on | Fixed in v2.6.3 | Voice extractor never started because a one-shot 2s timer raced with Whisper’s up-to-10s stream acquire. Now polls every 500ms for up to 15s. Separately, voice emotion is scored from a 2.5s peak-in-window (same pattern as the transient accumulator) so brief pauses between phrases don’t collapse the score to zero. |
| Web portal (browsers) | Remote viewers behind NAT can’t send their camera to the app | Unfixed | The web portal (room.html) uses Google STUN only — no TURN fallback. Desktop app has Cloudflare TURN. Remote-camera-in for the portal is planned; for now the portal is watch-only over restrictive NATs. |
| Captions | Offline translation TODO | Not yet built | Apple’s Translation framework is SwiftUI-only on macOS 15+; unreachable from our native bridge. Planned path: bundle Argos Translate or CTranslate2. For now, DeepL / LibreTranslate online only. |
| API keys | No single Settings → API Keys screen | UX debt | Keys entered per-panel (see §2 table). Unification is planned but not scheduled. |
| Notarization | macOS app notarization | Fixed | DMG builds signed with Developer ID + notarized via notarytool. No Gatekeeper prompt expected on first launch for latest releases. |

Where to report a new issue

If you find something in this guide that’s wrong, out of date, or missing — or a new bug — drop a note directly with Jack (Discord / message) with: