Hacker News

I would add the following two numbers if you're generating realtime text or speech for human consumption:

- Human Reading Speed (English): ~250 words per minute

- Human Speaking Speed (English): ~150 words per minute

Should be treated like the Doherty Threshold [1] for generative content.

[1] https://lawsofux.com/doherty-threshold/
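A quick sketch of how those two numbers translate into a token-throughput target for streaming output. This is a back-of-the-envelope illustration, not a measured benchmark: the ~0.75 words-per-token ratio for English is a rough rule of thumb, and the tokens/sec figures passed in are made-up examples.

```python
# Rough check: does a streamed generation outpace a human reader or listener?
# Assumptions (not from the thread): ~0.75 English words per token on average.

READING_WPM = 250       # typical English reading speed, from the comment above
SPEAKING_WPM = 150      # typical English speaking speed, from the comment above
WORDS_PER_TOKEN = 0.75  # rough rule-of-thumb average for English text

def outpaces(tokens_per_sec: float, wpm: float = READING_WPM) -> bool:
    """True if the stream emits words faster than the given rate (wpm)."""
    words_per_sec = tokens_per_sec * WORDS_PER_TOKEN
    return words_per_sec > wpm / 60.0

# 250 wpm is ~4.2 words/sec, i.e. ~5.6 tokens/sec is the break-even point.
print(outpaces(10.0))                # well above reading speed
print(outpaces(4.0))                 # below reading speed
print(outpaces(4.0, SPEAKING_WPM))   # but above speaking speed
```

Anything streaming faster than roughly 5–6 tokens/sec is, by this estimate, already ahead of a 250 wpm reader, which is the sense in which these numbers act like a Doherty Threshold for generated text.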



Human reading speed varies by a factor of 10 or more between individuals, while speaking speed is much more consistent.


Even my own reading speed varies by a factor of 5 day to day, depending on how much reading I've been doing, how much sleep I've gotten, etc.


Plus, whether I am reading light fiction versus technical documentation.


> speaking speed is much more consistent.

Is it? I've noticed huge variance in speaking speed in the US, but it tends to vary more between regions than between individuals.


There are exceptions for languages where the rapidity of speech really varies with context, such as Spanish.


But I'd say LLMs produce content faster than I can read or write it, because they can produce content that is really dense.

Ask GPT-4 a question and then answer it yourself. Maybe your answer will be as good as or better than GPT-4's, but GPT-4 writes its answer a lot faster.


It certainly doesn't produce content as fast as I can read it.


Only if you use gpt-4. gpt-3.5-turbo is much faster, and gpt-4 is only going to get faster as GPUs get faster.


Yep. I use GPT-4 extensively and exclusively, and the comment I was replying to mentioned GPT-4. I can't wait for it to get faster.


Bing also uses GPT-4 and it is very fast. Microsoft spends more on compute.


It doesn't exclusively use GPT-4. You might be right that their GPT-4 instance is much faster, but you're not always seeing GPT-4 with them.


I'm pretty sure it at least mostly uses GPT-4.


AFAICT OpenAI's instance is massively overloaded; you can see this in the 32k context model actually being faster in practice rather than slower.


Dense content? Not in my experience. It seems overly verbose to me.


Prompt it to be information dense in its response then.


So I have to specifically ask for it; it's not the default at all.


I get it, but it is just about infinitely configurable to your specific needs, so it doesn't bother me too much what the default response is.



