Hacker News

Oh good lord. This question is more about properly isolating your tests (i.e. writing the characters to /dev/null and getting rid of the "random" call) and properly profiling code (which would show high CPU consumption on the console, not on the program writing to stdout) than about anything "tricky" in performance.
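A minimal sketch of the isolation point, using `seq` as a stand-in for any program that writes heavily to stdout (the counts are arbitrary examples):

```shell
# same work twice; only the destination of the output differs
time seq 1 1000000                # every line parsed and rendered by the terminal
time seq 1 1000000 > /dev/null    # output discarded; measures the program alone
```

The gap between the two timings is the cost of the terminal, not of the program.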


The problem is, who on earth would have thought that printing to the terminal would carry a big penalty?


I've run into it before... I've had compile times limited by the speed & size of my console window.

Isn't it common knowledge though? Cat some 1GB file to your terminal and notice it takes more than the <100ms it takes to cat it to /dev/null.
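You can reproduce the comparison like this (the file path and size are just examples; the terminal-bound run will vary a lot by emulator):

```shell
# make a 1 GB file of zeros to play with
dd if=/dev/zero of=/tmp/big.bin bs=1M count=1024

# dumping it to the terminal forces the emulator to scan and render every byte
time cat /tmp/big.bin

# redirecting to /dev/null measures raw read throughput instead
time cat /tmp/big.bin > /dev/null
```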


Never assume something is common knowledge! I've seen many links to mathematics things here on HN that I would consider "common knowledge" yet many people were unaware of them.


You also realize it when adding 'quiet' to the kernel boot parameters in the bootloader. No output => boot time decreased (sometimes halved)
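On a GRUB system that change looks roughly like this (paths and the regeneration command vary by distro; this is a sketch, not a universal recipe):

```shell
# /etc/default/grub
GRUB_CMDLINE_LINUX_DEFAULT="quiet"

# then regenerate the bootloader config:
sudo update-grub        # Debian/Ubuntu; grub2-mkconfig -o ... on Fedora/openSUSE
```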


> who on earth would have thought of

Everyone with >5 years of printf() debugging experience.


I remember minimizing the terminal window to make a script run faster. Taking the text rendering out of the execution path actually had a substantial speedup for long running scripts in some cases. It was one of those things you sort of picked up from other people along the way and assumed everyone knew.

Nowadays I don't know if the difference is noticeable enough to be common knowledge though.


I still have to think about it when I'm piping debug statements to a log... Conditional breakpoints are a similar case.


When you're using GNU screen or tmux, you notice that the CPU usage of these processes is sometimes considerable.

Here's another way to increase your CPU load while apparently doing nothing:

$ cat /dev/zero



