Worker Model

How Web Workers provide isolation, the autonomous render service, and the SharedArrayBuffer abort channel.

The runtime worker is an autonomous reactive render service. It watches file dependencies, debounces changes, renders geometry, and pushes results -- without the main thread telling it when to act. Communication uses MessagePort for commands/events and SharedArrayBuffer for instant render abort. This design mirrors the Language Server Protocol pattern: the worker is a "geometry server" and the main thread is a display client.

Context and Motivation

CAD kernels perform heavy work: WASM-based geometry computation, bundling, tessellation. Running this on the main thread would freeze the UI. Web Workers provide a separate thread with their own event loop. Beyond isolation, the worker owns scheduling decisions: it knows the dependency graph, cache state, and which renders are stale. Pushing this intelligence to the worker eliminates unnecessary main-thread round-trips and enables instant abort of superseded renders.

How It Works

Why Web Workers

  • Isolation -- The worker has its own global scope and event loop. Crashes or infinite loops in kernel code do not freeze the main thread.
  • No main-thread blocking -- Geometry computation, bundling, and WASM execution run off the main thread. The UI remains responsive.
  • Memory separation -- Large allocations (WASM heaps, geometry buffers) live in the worker. The main thread can stay lean.

KernelRuntimeWorker as Multi-Kernel Host

A single worker instance hosts all registered kernels. The KernelRuntimeWorker dynamically loads kernel modules via defineKernel(). When a render is requested, it selects the appropriate kernel (see Kernel Selection) and delegates to that kernel's methods. This avoids one worker per kernel, which would multiply memory and startup cost.
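The multi-kernel host can be pictured as a small registry. This is a hypothetical sketch, assuming illustrative names (KernelModule, KernelHost, canHandle); the actual defineKernel() API may differ:

```typescript
// Hypothetical sketch of a multi-kernel host registry; names are
// illustrative, not the project's actual API.
interface KernelModule {
  id: string;
  canHandle(filename: string): boolean; // e.g. match on file extension
  render(source: string): Promise<ArrayBuffer>;
}

class KernelHost {
  private kernels: KernelModule[] = [];

  register(kernel: KernelModule): void {
    this.kernels.push(kernel);
  }

  // Select the first registered kernel that claims the file.
  select(filename: string): KernelModule {
    const kernel = this.kernels.find((k) => k.canHandle(filename));
    if (!kernel) throw new Error(`No kernel for ${filename}`);
    return kernel;
  }
}
```

One host instance owning all kernels keeps memory and startup cost flat as kernels are added.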

Autonomous Render Loop

After receiving setFile, the worker manages its own render lifecycle:

  1. setFile(file, params) -- Store file and params. Render immediately (aborting any in-progress render). Discover dependencies. Set up filesystem watch subscription. Push geometryComputed.

  2. Watch event (file in dependency graph changed) -- Invalidate caches. Start/reset 500ms debounce timer. On timer fire: render (aborting any in-progress render). Discover new deps and diff watch set. Push geometryComputed.

  3. setParameters(params) -- Store new params. Start/reset 50ms debounce timer. On timer fire: render (aborting any in-progress render). Push geometryComputed.

  4. export(format) -- Export from the last native handle. Push exported result.
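The debounce-then-render loop above can be sketched as follows. The 500ms/50ms delays come from the text; the class and its names are hypothetical:

```typescript
// Illustrative sketch of the worker's debounced render lifecycle.
// RenderScheduler and RenderFn are invented names for this example.
type RenderFn = (signal: { aborted: boolean }) => Promise<void>;

class RenderScheduler {
  private timer: ReturnType<typeof setTimeout> | null = null;
  private current: { aborted: boolean } | null = null;

  constructor(private render: RenderFn) {}

  // Watch events use the long debounce; parameter edits the short one.
  schedule(reason: 'watch' | 'params'): void {
    const delay = reason === 'watch' ? 500 : 50;
    if (this.timer) clearTimeout(this.timer); // reset a pending timer
    this.timer = setTimeout(() => void this.fire(), delay);
  }

  private async fire(): Promise<void> {
    if (this.current) this.current.aborted = true; // abort in-progress render
    const signal = { aborted: false };
    this.current = signal;
    await this.render(signal);
    if (this.current === signal) this.current = null;
  }
}
```

Resetting the timer on every event coalesces bursts of changes into a single render.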

Filesystem Bridge

The File Manager Worker is the single owner of the virtual filesystem. It accepts multiple MessagePort bridge connections via exposeFileSystem(handlers, { watchHandler }). Two bridges are established at startup:

Bridge A (main thread): The editor and file manager UI use createFileSystemBridge(fsWorker) + createBridgeProxy<FileManagerProtocol>(port) to write files, read directory trees, and perform file management operations. When a file is written, FileService persists it via the active provider and emits a fileWritten event on the EventBus.

Bridge B (kernel worker): The kernel worker receives its own port as fileSystemPort during initialization and creates a createBridgeProxy<RuntimeFileSystemBase>(port) for file reads, dependency resolution, and watch subscriptions. The WatchRegistry matches EventBus change events against the kernel worker's watch subscriptions and pushes watch events directly over this bridge.

This dual-bridge design means editor writes (Bridge A) trigger watch events to the kernel worker (Bridge B) without any main-thread relay. The main thread never sits on the hot path between a file change and a re-render.

For environments without a separate filesystem worker (e.g., Node.js or testing), createBridgePort(fileSystem) provides an in-process bridge that uses the same protocol.
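A request/response proxy over a MessagePort, in the spirit of createBridgeProxy, can be sketched like this. The internals of the real bridge are not shown in the text, so everything here (PortLike, the message shape) is an assumption:

```typescript
// Hypothetical minimal RPC proxy over a MessagePort-like channel.
// The real createBridgeProxy protocol may differ; this only shows the idea:
// each method call becomes a tagged message, each reply resolves a Promise.
interface PortLike {
  postMessage(message: unknown): void;
  onmessage: ((ev: { data: any }) => void) | null;
}

function createProxy<T extends object>(port: PortLike): T {
  let nextId = 0;
  const pending = new Map<number, (value: unknown) => void>();

  port.onmessage = (e) => {
    const { id, result } = e.data;
    pending.get(id)?.(result);
    pending.delete(id);
  };

  // Every property access becomes an async remote call.
  return new Proxy({} as T, {
    get: (_target, method) =>
      (...args: unknown[]) =>
        new Promise((resolve) => {
          const id = nextId++;
          pending.set(id, resolve);
          port.postMessage({ id, method: String(method), args });
        }),
  });
}
```

The same proxy works whether the port crosses a worker boundary or is an in-process pair, which is what makes createBridgePort(fileSystem) interchangeable with the worker bridges.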

SharedArrayBuffer Abort Channel

The abort flag must be readable during synchronous WASM execution, when the worker's event loop is blocked and cannot process messages. SharedArrayBuffer provides a memory region visible to both the main thread and the worker simultaneously:

When the main thread calls setFile() or setParameters():

  1. RuntimeClient writes Atomics.store(abortFlag, 0, newGeneration) before posting the message.
  2. The worker may be mid-WASM. Its event loop is blocked. The message queues.
  3. The next OC Proxy call reads Atomics.load(abortFlag, 0) -- sees mismatch -- throws RenderAbortedError.
  4. The render aborts. The worker's event loop resumes and processes the queued message.
  5. New render starts.
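The generation check at the heart of this sequence can be sketched in a few lines. The function names and RenderAbortedError class here are illustrative stand-ins for the real implementation:

```typescript
// Sketch of the abort-generation mechanism, assuming slot 0 of an
// Int32Array view over the SharedArrayBuffer. Names are illustrative.
const ABORT_SLOT = 0;

class RenderAbortedError extends Error {}

function makeAbortFlag(): Int32Array {
  return new Int32Array(
    new SharedArrayBuffer(4 * Int32Array.BYTES_PER_ELEMENT)
  );
}

// Main thread: bump the generation before posting setFile/setParameters.
// Atomics.add returns the old value, so the new generation is old + 1.
function requestAbort(flag: Int32Array): number {
  return Atomics.add(flag, ABORT_SLOT, 1) + 1;
}

// Worker: called by the OC Proxy before each WASM call. A generation
// mismatch means a newer render has been requested; bail out immediately.
function checkAbort(flag: Int32Array, myGeneration: number): void {
  if (Atomics.load(flag, ABORT_SLOT) !== myGeneration) {
    throw new RenderAbortedError('render superseded');
  }
}
```

Because Atomics reads cross the thread boundary without the event loop, the check works even while the worker is deep inside synchronous WASM.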

The signal channel carries four Int32 slots:

| Slot | Direction | Mechanism | Purpose |
| --- | --- | --- | --- |
| abortGeneration (0) | main -> worker | Polled by OC Proxy per WASM call | Abort in-progress render |
| workerState (1) | worker -> main | Atomics.notify / Atomics.waitAsync | State transitions (idle/rendering/error) |
| progressPercent (2) | worker -> main | Polled on demand | Render progress (cosmetic) |
| renderPhase (3) | worker -> main | Polled on demand | Current phase (bundling/meshing/etc.) |
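The slot layout can be made concrete with a short sketch. The constant names and the state encoding (1 = rendering) are illustrative assumptions:

```typescript
// Slot indices matching the table above; names are illustrative.
const ABORT_GENERATION = 0;
const WORKER_STATE = 1;
const PROGRESS_PERCENT = 2;
const RENDER_PHASE = 3;

// One Int32 per slot, visible to both threads.
const signals = new Int32Array(
  new SharedArrayBuffer(4 * Int32Array.BYTES_PER_ELEMENT)
);

// Worker side: publish progress and a state transition. Atomics.notify
// wakes any main-thread Atomics.waitAsync listener on the workerState slot.
Atomics.store(signals, PROGRESS_PERCENT, 40);
Atomics.store(signals, WORKER_STATE, 1); // assumed encoding: 1 = rendering
Atomics.notify(signals, WORKER_STATE);

// Main side: poll on demand (cheap enough for a rAF loop).
const percent = Atomics.load(signals, PROGRESS_PERCENT);
```

Only workerState needs notify/wait semantics; the cosmetic slots are read whenever the UI happens to ask.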

Per-Kernel Abort Capabilities

| Kernel | Proxy abort | Async abort | Mid-WASM abort? | Worst-case latency |
| --- | --- | --- | --- | --- |
| Replicad | Yes | Yes | Yes | < 1ms (next OC call) |
| OpenCASCADE | Yes | Yes | Yes | < 1ms (next OC call) |
| JSCAD | N/A | Yes | N/A | < 10ms (next await) |
| Manifold | Possible | Yes | Possible | < 10ms |
| Zoo/KCL | N/A | Yes | N/A | < 50ms |
| OpenSCAD | N/A | No | No | Full render duration |
| Tau | N/A | Yes | N/A | < 10ms |

MessagePort-Based Communication Protocol

The RuntimeTransport interface abstracts the channel: send(message, transferables?) and onMessage(handler). The default implementation uses worker.postMessage() and worker.addEventListener('message'). Messages are typed as RuntimeCommand (main -> worker) and RuntimeResponse (worker -> main).
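A sketch of this abstraction, assuming the interface shape described above. The default implementation wraps worker.postMessage / addEventListener; the mock pair below shows how the same interface supports in-process testing with no worker at all (mockTransportPair is an invented helper):

```typescript
// Hypothetical shape of the RuntimeTransport abstraction; the real
// interface may differ in detail.
interface RuntimeTransport {
  send(message: unknown, transferables?: unknown[]): void;
  onMessage(handler: (message: unknown) => void): void;
}

// Two loopback transports: what one side sends, the other side's
// handlers receive. Useful as a mock in tests.
function mockTransportPair(): [RuntimeTransport, RuntimeTransport] {
  const handlersA: ((m: unknown) => void)[] = [];
  const handlersB: ((m: unknown) => void)[] = [];
  const side = (
    mine: ((m: unknown) => void)[],
    theirs: ((m: unknown) => void)[]
  ): RuntimeTransport => ({
    // Deliver asynchronously, mimicking postMessage semantics.
    send: (message) => queueMicrotask(() => theirs.forEach((h) => h(message))),
    onMessage: (handler) => mine.push(handler),
  });
  return [side(handlersA, handlersB), side(handlersB, handlersA)];
}
```

Swapping the transport is all it takes to run the full client/worker protocol in a single process.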

Transferable Support for Zero-Copy Binary Data

When the worker returns geometry (e.g., glTF as ArrayBuffer), the dispatcher calls port.postMessage(response, [buffer]). The buffer is transferred to the main thread; the worker can no longer access it. No copy occurs. For large meshes, this significantly reduces latency and memory pressure.

The filesystem bridge also uses extractTransferables() to transfer Uint8Array buffers for file read/write operations, ensuring large CAD files are moved zero-copy between workers.
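Transfer semantics can be demonstrated directly. structuredClone with a transfer list moves a buffer exactly as postMessage(response, [buffer]) does between workers: the destination gets the bytes, and the source is detached:

```typescript
// Illustrative demo of move-not-copy transfer semantics.
const source = new ArrayBuffer(16);
new Uint8Array(source)[0] = 42;

// Transfer the buffer: no byte copy occurs, ownership moves.
const moved = structuredClone(source, { transfer: [source] });

// moved.byteLength === 16; source.byteLength === 0 (detached).
```

This detachment is why the worker must not retain references to a buffer after handing it to postMessage.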

Comparison to Prior Art

VS Code Language Server Protocol:

| Concept | LSP | Tau Runtime |
| --- | --- | --- |
| Server role | Autonomous analysis service | Autonomous render service |
| Client role | Display + user input | Display + user input |
| Communication | JSON-RPC events | MessagePort events + SharedArrayBuffer |
| File watching | Server watches workspace | Worker watches dependency graph |
| Result delivery | Push diagnostics, completions | Push geometry, parameters, errors |
| Lifecycle | Client starts/stops server | Main thread creates/terminates worker |

Vite HMR:

| Concept | Vite | Tau Runtime |
| --- | --- | --- |
| File watcher | chokidar (OS-level) | FileSystem watch (VFS-level) |
| Dependency graph | Module graph (import analysis) | Bundle deps (esbuild metafile) + kernel resolvers |
| Debounce | HMR batching | Worker-internal 500ms/50ms timers |
| Rebuild trigger | HMR update pushed to browser | geometryComputed pushed to main thread |

Key Relationships

  • Transport and Client -- The client creates or receives a transport and passes it to RuntimeWorkerClient. Custom transports enable testing (mock) or alternative channels.
  • Dispatcher and Worker -- The dispatcher is the worker-side message handler. It receives RuntimeCommand, invokes worker methods, and sends RuntimeResponse.
  • Editor and FileSystem -- The editor writes files to the File Manager Worker through Bridge A. These writes trigger EventBus emissions that feed the kernel worker's watch subscriptions, closing the loop between user edits and autonomous re-renders.
  • FileSystem and Worker -- The kernel worker accesses the filesystem through Bridge B. Watch events flow directly between the File Manager Worker and the kernel worker without main-thread relay. In autonomous mode, the kernel worker subscribes to file change events scoped to the current file's dependency graph.

Implications

  • Async by design -- All kernel operations are async. The client API is Promise-based or event-driven.
  • Single-threaded worker -- The worker runs one render at a time. Abort ensures stale renders are cancelled quickly so the latest render starts with minimal delay.
  • Transfer semantics -- Transferred buffers are moved, not copied. The worker must not retain references after transfer.
  • Cross-origin isolation -- SharedArrayBuffer requires COOP + COEP headers. This is already a prerequisite for OpenCASCADE's pthread support via assertCrossOriginIsolated().
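The headers required for cross-origin isolation are standard:

```http
Cross-Origin-Opener-Policy: same-origin
Cross-Origin-Embedder-Policy: require-corp
```

With both present, crossOriginIsolated is true in the page and SharedArrayBuffer can be constructed and posted to workers.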

Further Reading