Chat Signal Radar¶
GitHub Repo | Updated: June 22, 2025
A Chrome extension that uses Rust compiled to WebAssembly to analyze YouTube and Twitch live chat in real time. Built for content creators who need to keep up with fast-moving chat streams without missing important signals buried in the noise.
Tech Stack:

- Rust (compiled to WebAssembly)
- Chrome Extension (Manifest V3)
- JavaScript (extension UI and glue)
- WebAssembly (WASM)
Why I Built This¶
If you've ever watched a live stream with an active chat, you know the problem. Messages fly by at 20, 50, sometimes 100 per minute. Questions get buried before the streamer can see them. Sentiment shifts happen in real time as viewers react to what's happening on screen. Trending topics emerge and fade within seconds. For content creators trying to engage with their audience, it's like drinking from a fire hose.
I wanted a tool that could watch the chat for me and surface what actually matters. Not just a message counter or a simple keyword filter, but something that understands the difference between a question, a bug report, and general hype. Something that can tell me when the mood shifts from excited to confused. Something that shows me what topics are trending right now, not five minutes ago.
The opportunity was clear: use real-time analysis to turn chat noise into actionable signals. But it had to be fast enough to keep up with high-velocity streams, and it had to run entirely in the browser for privacy.
How It Works¶
The architecture is built around a Rust analysis engine compiled to WebAssembly for performance in the browser:
```
Chat Stream
    ↓
Content Script (observes DOM)
    ↓
Message Batching (every 5 seconds)
    ↓
WASM Analysis Engine (Rust)
  ├── Message Clustering (Questions, Issues, Requests, General)
  ├── Topic Extraction (word frequency, stop-word filtering)
  └── Sentiment Analysis (lexicon-based mood detection)
    ↓
Overlay UI (Chrome Extension Sidebar)
  ├── Mood Indicator (emoji + confidence)
  ├── Trending Topics (word cloud)
  └── Message Clusters (categorized view)
```
The content script watches the YouTube or Twitch chat DOM and captures new messages as they appear. Every 5 seconds, it sends a batch to the sidebar where the WASM engine runs the analysis. The Rust engine handles clustering (categorizing messages by type), topic extraction (finding words mentioned 5+ times), and sentiment analysis (computing a mood score from positive and negative keywords).
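The topic extraction step described above can be sketched in plain Rust. This is a minimal illustration, not the project's actual code: the function name, stop-word list, and normalization rules are assumptions, but it follows the stated behavior of counting word frequencies across a batch, filtering stop words, and surfacing words mentioned 5+ times.

```rust
use std::collections::HashMap;

/// Count word frequencies across a batch of chat messages, drop stop
/// words, and keep words mentioned at least `min_count` times.
/// The stop-word list here is a tiny illustrative subset.
fn trending_topics(messages: &[&str], min_count: usize) -> Vec<(String, usize)> {
    let stop_words = ["the", "a", "is", "and", "to", "it", "of", "in"];
    let mut counts: HashMap<String, usize> = HashMap::new();

    for msg in messages {
        for word in msg.split_whitespace() {
            // Normalize: lowercase and strip surrounding punctuation.
            let w: String = word
                .to_lowercase()
                .trim_matches(|c: char| !c.is_alphanumeric())
                .to_string();
            if w.len() < 2 || stop_words.contains(&w.as_str()) {
                continue;
            }
            *counts.entry(w).or_insert(0) += 1;
        }
    }

    let mut topics: Vec<(String, usize)> = counts
        .into_iter()
        .filter(|(_, n)| *n >= min_count)
        .collect();
    // Most frequent first.
    topics.sort_by(|a, b| b.1.cmp(&a.1));
    topics
}
```

Normalizing before counting matters in chat: "LAG!!", "lag", and "Lag" should all feed the same counter, or nothing ever crosses the 5-mention threshold.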
The sidebar UI displays the results in real time with a mood indicator showing the overall chat sentiment, a word cloud of trending topics, and organized clusters of questions and issues. There's also an optional WebLLM integration for AI-powered chat summaries that runs entirely offline.
Why Rust + WASM? Because live chat analysis needs to be fast. Processing 100 messages every 5 seconds with clustering, topic extraction, and sentiment analysis is CPU-intensive work. JavaScript alone would struggle to keep up, especially on lower-end machines. Compiling Rust to WebAssembly gives near-native performance while still running in the browser sandbox.
Chrome Extension Manifest V3 was the biggest constraint. The new service worker model and stricter content security policies meant rethinking how to pass messages between the content script, background worker, and sidebar. I ended up with a batching approach that minimizes message passing overhead while keeping latency low enough for real-time updates.
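The batching idea above can be illustrated with a small sketch. This is a hypothetical struct, not the project's code (the actual batching lives in the JavaScript content script): messages accumulate as they appear and are flushed once per interval, so each cross-boundary call carries a whole batch instead of a single message.

```rust
/// Hypothetical sketch of interval-based message batching: accumulate
/// chat messages, then hand over the whole batch once the interval
/// (5 seconds in the extension) has elapsed.
struct MessageBatcher {
    pending: Vec<String>,
    interval_ms: u64,
    last_flush_ms: u64,
}

impl MessageBatcher {
    fn new(interval_ms: u64) -> Self {
        Self { pending: Vec::new(), interval_ms, last_flush_ms: 0 }
    }

    /// Capture a new chat message as it appears in the DOM.
    fn push(&mut self, msg: &str) {
        self.pending.push(msg.to_string());
    }

    /// Returns the accumulated batch if the interval has elapsed
    /// and there is anything to send; otherwise None.
    fn try_flush(&mut self, now_ms: u64) -> Option<Vec<String>> {
        if now_ms - self.last_flush_ms >= self.interval_ms && !self.pending.is_empty() {
            self.last_flush_ms = now_ms;
            Some(std::mem::take(&mut self.pending))
        } else {
            None
        }
    }
}
```

The trade-off is latency versus overhead: a 5-second window means results lag the chat by at most one interval, but message-passing cost stays constant no matter how fast the chat moves.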
What I Learned¶
Compiling Rust to WASM for browser extensions is surprisingly smooth with wasm-pack, but you hit edge cases around memory management. The WASM module needs to be initialized asynchronously, and you have to carefully manage the boundary between JavaScript and Rust. I ended up writing validation helpers on the JS side to sanitize all data before passing it into the WASM engine.
Manifest V3's content security policies are strict. No innerHTML, no inline event handlers, no dynamic script loading. This forced me to write safe DOM manipulation helpers that use textContent and explicit event listeners. It's more code, but the result is more secure by default.
Real-time text analysis is harder than it looks. Early versions were too sensitive to spam and noise. I had to build stop-word filtering for topic extraction (filtering out "the", "a", "is"), spam detection for clustering (ignoring repeated identical messages), and a minimum signal threshold for sentiment (at least 3 sentiment-bearing messages before showing a mood). These guardrails made the analysis much more useful.
The privacy-first approach was non-negotiable. All analysis happens locally in the browser. No data leaves the user's machine. Even the optional AI summaries use WebLLM, which downloads the model once and runs inference locally. This constraint shaped the entire architecture, but it's the right trade-off for a tool that processes live chat content.
If I were building this again, I'd explore embedding-based semantic clustering instead of keyword-based matching. The current approach works well for English chat with common patterns, but it struggles with slang, emotes, and non-English streams. Semantic embeddings would be more robust, though they'd require shipping a larger WASM bundle.
Links¶
A browser-based tool for content creators who want to understand their live chat in real time, without missing the signals that matter.