Hey HN,
We built ClawdTalk to let AI agents operate over live phone calls.
Most agents today live in chat windows. The moment you try to use them over voice, things break: latency matters, interruptions happen, and the agent has to execute tools while the conversation is still live.
ClawdTalk connects your Clawdbot to the phone network.
The agent gets a real phone number. It can make and receive calls and run the same tools it uses in chat, but under real-time voice constraints.
One reason this is hard is infrastructure. Many voice stacks stitch together separate telephony, speech, and model APIs. Each hop adds latency, and round trips of 8–30 seconds are commonly reported.
We got ours under three seconds by running the full voice path ourselves. Telnyx (my employer) is a telecom carrier, so PSTN, STT, and TTS all run on our own infrastructure. No middlemen.
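As a back-of-envelope illustration of why hops dominate, here's a latency budget for one conversational turn. Every per-hop number below is an assumption for illustration, not a measurement of any real stack:

```python
# Back-of-envelope latency budget for one conversational turn.
# All per-hop numbers are illustrative assumptions, not measurements.

def round_trip_ms(hops):
    """Total one-turn latency: the sum of each hop's latency in ms."""
    return sum(hops.values())

# A stitched stack: separate vendors for telephony, STT, and TTS,
# each adding network transit and buffering on top of processing time.
stitched = {
    "pstn_to_media_gateway": 300,
    "media_to_stt_vendor": 400,
    "stt_batch_processing": 3000,   # waits for endpointing, then transcribes
    "stt_to_orchestrator": 300,
    "llm_inference": 1500,
    "orchestrator_to_tts_vendor": 400,
    "tts_batch_processing": 2000,
    "audio_back_to_caller": 300,
}

# An integrated path: telephony, STT, and TTS on one network, so the
# cross-vendor transit hops disappear (LLM latency stays the same).
integrated = {
    "pstn_ingress": 100,
    "stt_streaming": 500,
    "llm_inference": 1500,
    "tts_streaming": 500,
    "egress_to_caller": 100,
}

print(f"stitched:   {round_trip_ms(stitched) / 1000:.1f}s")
print(f"integrated: {round_trip_ms(integrated) / 1000:.1f}s")
```

The point is structural: the LLM term is identical in both budgets; the difference comes entirely from removing transit and buffering hops.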
How it works:
1. Connect your OpenClaw agent to ClawdTalk
2. We provision a phone number
3. Inbound/outbound calls route directly to the agent
4. The agent executes tools mid-conversation
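The four steps above can be sketched as a call loop. To be clear, every name here (Agent, provision_number, route_call) is hypothetical, not the real ClawdTalk or OpenClaw API; the stubs just mirror the flow: connect, provision, route, run tools mid-call:

```python
# Hypothetical sketch of the inbound-call flow described above.
# None of these names are the real ClawdTalk/OpenClaw API.

from dataclasses import dataclass, field

@dataclass
class Agent:
    """Stand-in for a chat agent that exposes callable tools."""
    name: str
    tools: dict = field(default_factory=dict)

    def reply(self, utterance: str) -> str:
        # A real agent would run an LLM here; this stub just shows a
        # tool being executed while the conversation is still live.
        if "weather" in utterance:
            return self.tools["get_weather"]("Chicago")
        return "How can I help?"

def provision_number(agent: Agent) -> str:
    """Step 2: assign the agent a phone number (fake E.164 here)."""
    return "+13015550100"

def route_call(agent: Agent, transcript: list[str]) -> list[str]:
    """Steps 3-4: feed live STT transcript chunks to the agent and
    collect the replies that would be spoken back through TTS."""
    return [agent.reply(chunk) for chunk in transcript]

agent = Agent("demo", tools={"get_weather": lambda city: f"Sunny in {city}"})
number = provision_number(agent)
replies = route_call(agent, ["hi there", "what's the weather like?"])
print(number, replies)
```

In the real system the transcript chunks arrive as streaming STT events rather than a list, but the shape of the loop is the same.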
Limitations:
1. Latency still depends on your LLM (we control voice, not inference)
2. US numbers only for now (international coming)
3. Not a new agent framework (OpenClaw only today)
Demo number: +1-301-MYCLAWD (692-5293). Call it to talk to the agent.
Happy to answer questions about the architecture or telephony side.