
As someone who spends a lot of time looking at timestamped log lines to debug Pipecat pipelines, I'm a big fan of this work from Aleix.

In general, I have three pain points with debugging realtime, multi-model, multi-modal AI stuff: 1. Where's the latency creeping in? 2. What context actually got passed to the models? 3. Did the model/processor get data in the format it expected?

For 1 and 3, Whisker is a big step forward. For 2, something like LangFuse (OpenTelemetry) is very helpful.
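
To make pain point 1 concrete, here's a rough sketch of how I think about per-stage latency. This is not Pipecat's actual API; the probe class and stage names are hypothetical, just to show the idea of tagging each utterance with timestamps at processor boundaries and logging the deltas:

    import time
    import logging

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("latency")

    class LatencyProbe:
        """Hypothetical helper: records when an utterance passed each stage."""
        def __init__(self):
            self.marks = []  # list of (stage_name, monotonic_time)

        def mark(self, stage: str):
            self.marks.append((stage, time.monotonic()))

        def report(self):
            # Log the time spent between consecutive stages.
            for (prev, t0), (cur, t1) in zip(self.marks, self.marks[1:]):
                logger.info("%s -> %s: %.1f ms", prev, cur, (t1 - t0) * 1000)

    # Usage sketch: one probe per user utterance.
    probe = LatencyProbe()
    probe.mark("vad_end")          # user stopped speaking
    probe.mark("stt_final")        # transcript available
    probe.mark("llm_first_token")  # model started responding
    probe.mark("tts_first_audio")  # first audio byte out
    probe.report()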



Thanks for the link. I see they sell portable Bluetooth speakers we can mount under the dash. I like the idea of DIY wrapping both the interior and exterior; I can imagine anime fanboys like my son coming up with very wild art for these wraps. I had also forgotten cars used to have hand cranks to roll up the windows.


In general, for realtime voice AI you don't want this model to support multiple speakers because you have a separate voice input stream for each participant in a session.

We're not doing "speaker diarization" from a single audio track here. We're streaming the input from each participant.

If there are multiple participants in a session, we still process each stream separately either as it comes in from that user's microphone (locally) or as it arrives over the network (server-side).
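
Here's a minimal sketch of that structure, assuming a hypothetical per-speaker analyzer class (not the real model interface): one analyzer instance per participant, fed only that participant's audio, so no diarization step is needed.

    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class TurnAnalyzer:
        """Stand-in for a per-speaker end-of-turn model instance."""
        participant_id: str
        buffer: bytearray = field(default_factory=bytearray)

        def append_audio(self, chunk: bytes) -> bool:
            self.buffer.extend(chunk)
            # In a real system this would run VAD plus the turn model
            # on the buffered audio and return True at end of turn.
            return False

    # One analyzer per participant; each audio stream already belongs
    # to exactly one speaker.
    analyzers: Dict[str, TurnAnalyzer] = {}

    def on_audio(participant_id: str, chunk: bytes):
        analyzer = analyzers.setdefault(
            participant_id, TurnAnalyzer(participant_id)
        )
        if analyzer.append_audio(chunk):
            print(f"{participant_id} finished their turn")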


I've talked about this a lot with friends.

Endpoint detection (and phrase endpointing, and end of utterance) are the terms the academic literature uses for this and related problems.

Very few people who are doing "AI Engineering" or even "Machine Learning" today know these terms. In the past, I argued that we should use the existing academic language rather than invent new terms.

But then OpenAI released the Realtime API and called this "turn detection" in their docs. And that was that. It no longer made sense to use any other verbiage.


Re SEO, I note "utterance" only occurs once, in a perhaps-ephemeral "Things to do" description.

To help with "what is?" and SEO, perhaps something like "Turn detection (aka [...], end of utterance)"... ?


Thanks for the explanation. I guess it makes some sense, considering many people with no NLP background are using those models now…


A couple of interesting updates today:

- 100ms inference using CoreML: https://x.com/maxxrubin_/status/1897864136698347857

- An LSTM model (1/7th the size) trained on a subset of the data: https://github.com/pipecat-ai/smart-turn/issues/1


It takes about 45 minutes to do the current training run on an L4 GPU with these settings:

    # Training parameters
    "learning_rate": 5e-5,
    "num_epochs": 10,
    "train_batch_size": 12,
    "eval_batch_size": 32,
    "warmup_ratio": 0.2,
    "weight_decay": 0.05,

    # Evaluation parameters
    "eval_steps": 50,
    "save_steps": 50,
    "logging_steps": 5,

    # Model architecture parameters
    "num_frozen_layers": 20
I haven't seen a run do all 10 epochs recently. There's usually an early stop after about 4 epochs.

The current data set size is ~8,000 samples.
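
If it helps, here's a sketch of how those hyperparameters could map onto a Hugging Face Trainer with early stopping. This is an assumption about the setup, not the actual smart-turn training script, and model / train_ds / eval_ds are placeholders:

    from transformers import TrainingArguments, Trainer, EarlyStoppingCallback

    args = TrainingArguments(
        output_dir="./checkpoints",
        learning_rate=5e-5,
        num_train_epochs=10,
        per_device_train_batch_size=12,
        per_device_eval_batch_size=32,
        warmup_ratio=0.2,
        weight_decay=0.05,
        eval_strategy="steps",        # "evaluation_strategy" on older versions
        eval_steps=50,
        save_steps=50,
        logging_steps=5,
        load_best_model_at_end=True,  # required for early stopping
        metric_for_best_model="eval_loss",
        greater_is_better=False,
    )

    trainer = Trainer(
        model=model,          # classifier with 20 frozen encoder layers
        args=args,
        train_dataset=train_ds,
        eval_dataset=eval_ds,
        callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
    )
    trainer.train()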


Turn detection is deciding when a person has finished talking and expects the other party in a conversation to respond. In this case, the other party in the conversation is an LLM!


Oh I see. Not like segmenting a conversation where people speak in turn. Thanks.


Speaker diarization is also still a tough problem for free models.


huh. how is analyzing conversations in the manner you described NOT the way to train such a model?


Did you reply to the wrong comment? No one is talking about training here.


Can you say more? There's not much open source work in this domain, that I've been able to find.

I'm particularly interested in architecture variations, approaches to the classification head design and loss function, etc.


580M parameters. More info about the model architecture: https://github.com/pipecat-ai/smart-turn?tab=readme-ov-file#...


580m, awesome, incredible


... but will the model learn when to interrupt you out of frustration with your ongoing statements, and start shouting?

it seems like for the obvious use-cases there might need to be some sort of limit on how much this component knows


The Multimodal Live API is free while the model/API is in preview. My guess is that they will be pretty aggressive with pricing when it's in GA, given the 1.5 Flash multimodal pricing.

If you're interested in this stuff, here's a full chat app for the new Gemini 2 APIs with text, audio, image, camera video, and screen video. It shows how to use the WebSocket API directly and how to route through WebRTC infrastructure.

https://github.com/pipecat-ai/gemini-multimodal-live-demo
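
For anyone who wants the smallest possible starting point before digging into the demo, here's a minimal Python sketch of connecting to the Live API via the google-genai SDK. The model name, config fields, and send/receive method names are my assumptions based on the preview-era SDK and may have changed:

    import asyncio
    from google import genai

    client = genai.Client(api_key="YOUR_API_KEY")

    async def main():
        config = {"response_modalities": ["TEXT"]}
        # Open a bidirectional Live API session over WebSocket.
        async with client.aio.live.connect(
            model="gemini-2.0-flash-exp", config=config
        ) as session:
            await session.send(input="Hello, Gemini!", end_of_turn=True)
            # Stream back the model's response chunks.
            async for response in session.receive():
                if response.text:
                    print(response.text, end="")

    asyncio.run(main())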


Thanks, this is great!

