LLM chat breaks down when you're learning: side questions either clutter the main chat, or you copy/paste them into a new chat and lose the context.
Tangents makes that workflow explicit. You can branch off any message to chase a rabbit hole, then merge the findings back without polluting the main thread.
Key Features:
- Branching: Select text in any message to fork a new chat.
- The Collector: Highlight snippets across different branches to build a "shopping cart" of context (see the sketch after this list).
- Compose: Send a prompt using only what is in your Collector (explicit context control).
- Console: Inspect exactly what context is being sent to the LLM before you hit enter.
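To make Branching and the Collector concrete, here's a minimal sketch of how the data model can be pictured, in TypeScript. All names here are hypothetical; this is not Tangents' actual code, just one plausible shape for "chat as a tree plus a flat snippet cart":

```typescript
// Hypothetical data model (not Tangents' actual code): a chat is a tree,
// so any message can spawn child branches, and the Collector is a flat
// list of snippets highlighted anywhere in that tree.

interface Message {
  id: string;
  role: "user" | "assistant";
  content: string;
  parentId: string | null; // null for the thread's root message
  branchIds: string[];     // branches forked off this message
}

interface Snippet {
  messageId: string; // where the highlight came from
  text: string;      // the highlighted excerpt
}

class Tangent {
  private messages = new Map<string, Message>();
  collector: Snippet[] = [];

  add(msg: Message): void {
    this.messages.set(msg.id, msg);
  }

  // Branching: select text in any message and fork a new chat from it.
  branch(fromMessageId: string, selectedText: string): Message {
    const parent = this.messages.get(fromMessageId);
    if (!parent) throw new Error(`unknown message ${fromMessageId}`);
    const child: Message = {
      id: crypto.randomUUID(),
      role: "user",
      content: selectedText,
      parentId: parent.id,
      branchIds: [],
    };
    parent.branchIds.push(child.id);
    this.messages.set(child.id, child);
    return child;
  }

  // The Collector: highlight snippets across branches into a "shopping cart".
  collect(messageId: string, text: string): void {
    this.collector.push({ messageId, text });
  }
}
```

The key property is that a branch is just a child subtree, so merging findings back means copying snippets, not whole threads.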
How it's different:
It's not a node-graph canvas: it keeps the linear chat UI and adds inline branching. And it's not an agent framework; it's a tool for humans who want manual control over the LLM's context window.
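Here's what that manual control looks like in spirit, as a hedged sketch of the Compose and Console steps. The endpoint is OpenAI's standard chat completions API, but the function names, model string, and snippet IDs are all made up for illustration:

```typescript
// Hypothetical Compose + Console steps (names made up, not Tangents' code):
// the request body is built from nothing but the Collector snippets and
// the new prompt - no surrounding thread leaks in implicitly.

interface Snippet {
  messageId: string;
  text: string;
}

// Compose: only what's in the Collector becomes context.
function compose(collector: Snippet[], prompt: string) {
  return {
    model: "gpt-4o-mini", // placeholder model name
    messages: [
      ...collector.map((s) => ({
        role: "user" as const,
        content: `Context (from ${s.messageId}):\n${s.text}`,
      })),
      { role: "user" as const, content: prompt },
    ],
  };
}

const collector: Snippet[] = [
  { messageId: "branch-a/msg-3", text: "snippet highlighted in branch A" },
  { messageId: "branch-b/msg-7", text: "snippet highlighted in branch B" },
];
const body = compose(collector, "How do these findings fit together?");

// Console: inspect exactly what will be sent before hitting enter.
console.log(JSON.stringify(body, null, 2));

// BYOK: the actual call is a plain request to OpenAI's standard
// chat completions endpoint with the user's own key.
fetch("https://api.openai.com/v1/chat/completions", {
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    Authorization: `Bearer ${process.env.OPENAI_API_KEY}`,
  },
  body: JSON.stringify(body),
});
```

The point of the Console is that what you inspect is literally the `messages` array that goes over the wire.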
-----
View it here: https://tangents.chat/hn
Note: currently OpenAI only, bring your own key (BYOK). A key is only needed in the full app, when you want real model calls.
Feedback I'd love: does Branch + Collector + Compose feel faster than "open a second chat window + copy/paste", or does it feel like extra steps?