# Worker Model [#worker-model]

URL: /docs/concepts/worker-model

The runtime worker is an **autonomous reactive render service**. It watches file dependencies, debounces changes, renders geometry, and pushes results -- without the main thread telling it when to act. Communication uses MessagePort for commands/events and SharedArrayBuffer for instant render abort.

This design mirrors the Language Server Protocol pattern: the worker is a "geometry server" and the main thread is a display client.

## Context and Motivation [#context-and-motivation]

CAD kernels perform heavy work: WASM-based geometry computation, bundling, tessellation. Running this on the main thread would freeze the UI. Web Workers provide a separate thread with its own event loop.

Beyond isolation, the worker owns scheduling decisions: it knows the dependency graph, the cache state, and which renders are stale. Pushing this intelligence into the worker eliminates unnecessary main-thread round-trips and enables instant abort of superseded renders.

## How It Works [#how-it-works]

### Why Web Workers [#why-web-workers]

* **Isolation** -- The worker has its own global scope and event loop. Crashes or infinite loops in kernel code do not freeze the main thread.
* **No main-thread blocking** -- Geometry computation, bundling, and WASM execution run off the main thread. The UI remains responsive.
* **Memory separation** -- Large allocations (WASM heaps, geometry buffers) live in the worker. The main thread can stay lean.

### KernelRuntimeWorker as Multi-Kernel Host [#kernelruntimeworker-as-multi-kernel-host]

A single worker instance hosts all registered kernels. The `KernelRuntimeWorker` dynamically loads kernel modules via `defineKernel()`. When a render is requested, it selects the appropriate kernel (see [Kernel Selection](./kernel-selection)) and delegates to that kernel's methods. This avoids running one worker per kernel, which would multiply memory and startup cost.
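The single-host design can be sketched as a registry that lazily loads each kernel module on first use and reuses it afterwards. The `KernelModule` shape, the loader map, and the `MultiKernelHost` class below are hypothetical illustrations -- only `KernelRuntimeWorker` and `defineKernel()` are names from the actual runtime.

```typescript
// Hypothetical sketch of a multi-kernel host. Each kernel is loaded at most
// once and cached, so one worker can serve all registered kernels.
type KernelModule = {
  render: (source: string, params: Record<string, unknown>) => Promise<ArrayBuffer>;
};

class MultiKernelHost {
  private kernels = new Map<string, KernelModule>();

  constructor(private loaders: Map<string, () => Promise<KernelModule>>) {}

  // Resolve a kernel by id, loading its module lazily on first request.
  async getKernel(id: string): Promise<KernelModule> {
    let kernel = this.kernels.get(id);
    if (!kernel) {
      const load = this.loaders.get(id);
      if (!load) throw new Error(`Unknown kernel: ${id}`);
      kernel = await load();
      this.kernels.set(id, kernel);
    }
    return kernel;
  }
}
```

Lazy loading keeps startup cheap: a session that only touches one kernel never pays the memory or initialization cost of the others.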
### Autonomous Render Loop [#autonomous-render-loop]

After receiving `setFile`, the worker manages its own render lifecycle:

1. **setFile(file, params)** -- Store file and params. Render immediately (aborting any in-progress render). Discover dependencies. Set up filesystem watch subscription. Push `geometryComputed`.
2. **Watch event (file in dependency graph changed)** -- Invalidate caches. Start/reset 500ms debounce timer. On timer fire: render (aborting any in-progress render). Discover new deps and diff watch set. Push `geometryComputed`.
3. **setParameters(params)** -- Store new params. Start/reset 50ms debounce timer. On timer fire: render (aborting any in-progress render). Push `geometryComputed`.
4. **export(format)** -- Export from the last native handle. Push exported result.

### Filesystem Bridge [#filesystem-bridge]

The File Manager Worker is the single owner of the virtual filesystem. It accepts multiple MessagePort bridge connections via `exposeFileSystem(handlers, { watchHandler })`. Two bridges are established at startup:

**Bridge A (main thread):** The editor and file manager UI use `createFileSystemBridge(fsWorker)` + `createBridgeProxy(port)` to write files, read directory trees, and perform file management operations. When a file is written, `FileService` persists it via the active provider and emits a `fileWritten` event on the EventBus.

**Bridge B (kernel worker):** The kernel worker receives its own port as `fileSystemPort` during initialization and creates a `createBridgeProxy(port)` for file reads, dependency resolution, and watch subscriptions. The `WatchRegistry` matches EventBus change events against the kernel worker's watch subscriptions and pushes watch events directly over this bridge.

This dual-bridge design means editor writes (Bridge A) trigger watch events to the kernel worker (Bridge B) without any main-thread relay. The main thread never sits on the hot path between a file change and a re-render.
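The debounce-then-render policy in steps 2 and 3 above can be sketched as a small scheduler: each trigger resets a timer and bumps a generation counter so that an in-flight render knows it has been superseded. The `DebouncedRenderer` class is an illustrative sketch, not the runtime's actual scheduler; only the 500ms/50ms delays come from the section above.

```typescript
// Illustrative sketch of the worker's debounce policy. Watch events use a
// 500ms window, parameter changes a 50ms window; both share this mechanism.
class DebouncedRenderer {
  private timer: ReturnType<typeof setTimeout> | null = null;
  private generation = 0;

  constructor(private render: (generation: number) => void) {}

  // Reset the pending timer and bump the generation. Only the latest
  // scheduled render fires; earlier ones are silently superseded.
  schedule(delayMs: number): number {
    this.generation += 1;
    const gen = this.generation;
    if (this.timer !== null) clearTimeout(this.timer);
    this.timer = setTimeout(() => {
      this.timer = null;
      this.render(gen);
    }, delayMs);
    return gen;
  }
}
```

A burst of watch events therefore collapses into a single render: each event resets the 500ms window, and only the final timer fires.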
For environments without a separate filesystem worker (e.g., Node.js or testing), `createBridgePort(fileSystem)` provides an in-process bridge that uses the same protocol.

### SharedArrayBuffer Abort Channel [#sharedarraybuffer-abort-channel]

The abort flag must be readable during **synchronous WASM execution**, when the worker's event loop is blocked and cannot process messages. `SharedArrayBuffer` provides a memory region visible to both the main thread and the worker simultaneously.

When the main thread calls `setFile()` or `setParameters()`:

1. [`RuntimeClient`](../api/client) writes `Atomics.store(abortFlag, 0, newGeneration)` **before** posting the message.
2. The worker may be mid-WASM. Its event loop is blocked. The message queues.
3. The next OC Proxy call reads `Atomics.load(abortFlag, 0)` -- sees a mismatch -- throws `RenderAbortedError`.
4. The render aborts. The worker's event loop resumes and processes the queued message.
5. The new render starts.

The signal channel carries four `Int32` slots:

| Slot | Direction | Mechanism | Purpose |
| --------------------- | -------------- | -------------------------------------- | ---------------------------------------- |
| `abortGeneration` (0) | main -> worker | Polled by OC Proxy per WASM call | Abort in-progress render |
| `workerState` (1) | worker -> main | `Atomics.notify` / `Atomics.waitAsync` | State transitions (idle/rendering/error) |
| `progressPercent` (2) | worker -> main | Polled on demand | Render progress (cosmetic) |
| `renderPhase` (3) | worker -> main | Polled on demand | Current phase (bundling/meshing/etc.) |

### Per-Kernel Abort Capabilities [#per-kernel-abort-capabilities]

| Kernel | Proxy abort | Async abort | Mid-WASM abort? | Worst-case latency |
| --------------- | ----------- | ----------- | --------------- | --------------------- |
| **Replicad** | Yes | Yes | Yes | < 1ms (next OC call) |
| **OpenCASCADE** | Yes | Yes | Yes | < 1ms (next OC call) |
| **JSCAD** | N/A | Yes | N/A | < 10ms (next await) |
| **Manifold** | Possible | Yes | Possible | < 10ms |
| **Zoo/KCL** | N/A | Yes | N/A | < 50ms |
| **OpenSCAD** | N/A | No | No | Full render duration |
| **Tau** | N/A | Yes | N/A | < 10ms |

### MessagePort-Based Communication Protocol [#messageport-based-communication-protocol]

The [RuntimeTransport](../api/transport) interface abstracts the channel: `send(message, transferables?)` and `onMessage(handler)`. The default implementation uses `worker.postMessage()` and `worker.addEventListener('message')`. Messages are typed as `RuntimeCommand` (main -> worker) and `RuntimeResponse` (worker -> main).

### Transferable Support for Zero-Copy Binary Data [#transferable-support-for-zero-copy-binary-data]

When the worker returns geometry (e.g., glTF as an `ArrayBuffer`), the dispatcher calls `port.postMessage(response, [buffer])`. The buffer is transferred to the main thread; the worker can no longer access it. No copy occurs. For large meshes, this significantly reduces latency and memory pressure.

The filesystem bridge also uses `extractTransferables()` to transfer `Uint8Array` buffers for file read/write operations, ensuring large CAD files move zero-copy between workers.
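The generation-compare handshake from the SharedArrayBuffer section above can be sketched in a few lines. The slot index (0 for `abortGeneration`) and `RenderAbortedError` name follow the text; the `requestNewRender` and `checkpoint` helper names are illustrative, not the runtime's API.

```typescript
// Sketch of the abort-generation handshake. Four Int32 slots share one
// SharedArrayBuffer; slot 0 is the abort generation polled per WASM call.
const signals = new Int32Array(
  new SharedArrayBuffer(4 * Int32Array.BYTES_PER_ELEMENT)
);
const ABORT_GENERATION = 0;

class RenderAbortedError extends Error {}

// Main-thread side: bump the generation *before* posting the new command,
// so a blocked worker observes it on its very next checkpoint.
function requestNewRender(): number {
  return Atomics.add(signals, ABORT_GENERATION, 1) + 1;
}

// Worker side: called by the proxy before each synchronous WASM call.
// A mismatch means a newer render was requested while WASM was running.
function checkpoint(myGeneration: number): void {
  if (Atomics.load(signals, ABORT_GENERATION) !== myGeneration) {
    throw new RenderAbortedError("render superseded");
  }
}
```

Because the flag lives in shared memory, the check works even while the worker's event loop is blocked inside WASM -- no message delivery is needed for the abort to take effect.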
## Comparison to Prior Art [#comparison-to-prior-art]

**VS Code Language Server Protocol:**

| Concept | LSP | Tau Runtime |
| --------------- | ----------------------------- | -------------------------------------- |
| Server role | Autonomous analysis service | Autonomous render service |
| Client role | Display + user input | Display + user input |
| Communication | JSON-RPC events | MessagePort events + SharedArrayBuffer |
| File watching | Server watches workspace | Worker watches dependency graph |
| Result delivery | Push diagnostics, completions | Push geometry, parameters, errors |
| Lifecycle | Client starts/stops server | Main thread creates/terminates worker |

**Vite HMR:**

| Concept | Vite | Tau Runtime |
| ---------------- | ------------------------------ | ------------------------------------------------- |
| File watcher | chokidar (OS-level) | FileSystem watch (VFS-level) |
| Dependency graph | Module graph (import analysis) | Bundle deps (esbuild metafile) + kernel resolvers |
| Debounce | HMR batching | Worker-internal 500ms/50ms timers |
| Rebuild trigger | HMR update pushed to browser | `geometryComputed` pushed to main thread |

## Key Relationships [#key-relationships]

* **Transport and Client** -- The client creates or receives a transport and passes it to `RuntimeWorkerClient`. Custom transports enable testing (mocks) or alternative channels.
* **Dispatcher and Worker** -- The dispatcher is the worker-side message handler. It receives `RuntimeCommand`, invokes worker methods, and sends `RuntimeResponse`.
* **Editor and FileSystem** -- The editor writes files to the File Manager Worker through Bridge A. These writes trigger EventBus emissions that feed the kernel worker's watch subscriptions, closing the loop between user edits and autonomous re-renders.
* **FileSystem and Worker** -- The kernel worker accesses the filesystem through Bridge B. Watch events flow directly between the File Manager Worker and the kernel worker without main-thread relay.
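The "custom transports enable testing" point can be made concrete with an in-process loopback pair: two linked transports where whatever one side sends, the other receives. The two-method `RuntimeTransport` shape follows the protocol section above; `createLoopbackPair` is a hypothetical test helper, and message/transferable types are simplified to `unknown`.

```typescript
// Minimal in-process mock of the RuntimeTransport channel, assuming the
// send/onMessage interface described in the protocol section.
interface RuntimeTransport {
  send(message: unknown, transferables?: unknown[]): void;
  onMessage(handler: (message: unknown) => void): void;
}

// Build two linked transports: side A's send() invokes side B's handler,
// and vice versa. Delivery is deferred to a microtask, like postMessage.
function createLoopbackPair(): [RuntimeTransport, RuntimeTransport] {
  const handlers: Array<(m: unknown) => void> = [() => {}, () => {}];
  const make = (self: 0 | 1, peer: 0 | 1): RuntimeTransport => ({
    send: (message) => queueMicrotask(() => handlers[peer](message)),
    onMessage: (handler) => {
      handlers[self] = handler;
    },
  });
  return [make(0, 1), make(1, 0)];
}
```

A test can hand one end of the pair to the client and drive the other end directly, exercising the full command/response flow without spawning a real worker.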
In autonomous mode, the kernel worker subscribes to file change events scoped to the current file's dependency graph.

## Implications [#implications]

* **Async by design** -- All kernel operations are async. The client API is Promise-based or event-driven.
* **Single-threaded worker** -- The worker runs one render at a time. Abort ensures stale renders are cancelled quickly so the latest render starts with minimal delay.
* **Transfer semantics** -- Transferred buffers are moved, not copied. The worker must not retain references after transfer.
* **Cross-origin isolation** -- SharedArrayBuffer requires COOP + COEP headers. This is already a prerequisite for OpenCASCADE's pthread support via `assertCrossOriginIsolated()`.

## Further Reading [#further-reading]

* [Architecture](./architecture) -- How the transport fits into the layered design
* [Render Lifecycle](./render-lifecycle) -- Detailed render loop, cancellation strategies, and concurrency model
* [Kernel Selection](./kernel-selection) -- How the runtime worker selects kernels
* [API: Transport](../api/transport) -- `RuntimeTransport` and `createWorkerTransport`
* [Configure the Bundler](../guides/bundler-configuration) -- Worker and bundler setup
* [Set Up the Filesystem](../guides/filesystem-setup) -- Connecting a filesystem to the worker