This is exactly where I want to go with RingoJS: Many threads with private mutable scope, global read-only scope, worker/actor based thread interop, one event loop per thread.
Currently we still have shared mutable state that sometimes requires locking (unless you resort to a functional style of programming): http://hns.github.com/2011/05/12/threads.html
I think the opinions expressed in this article are valid.
However, I don't think the inability of current JavaScript to do async I/O without callbacks is Node's biggest problem. As others have said, it works for smaller projects (and even has some geek appeal). And as Havoc Pennington and Dave Herman have explained, generators (which are coming with ECMAScript Harmony) and promises will eventually provide a very nice solution. So Node has a path to grow out of the callback model without giving up its single threaded paradigm.
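To make that path concrete, here is a minimal sketch of the generator-plus-promises pattern being alluded to: a tiny driver that lets async code read top-to-bottom instead of nesting callbacks. This is illustrative only; the `run` and `fetchUser` names are my own, not from any particular library.

```javascript
// A tiny driver: resumes a generator each time a yielded promise settles,
// so async code can be written in a straight line.
function run(genFn) {
  const gen = genFn();
  function step(value) {
    const next = gen.next(value);
    if (next.done) return Promise.resolve(next.value);
    // Each yielded promise "suspends" the generator until it resolves.
    return Promise.resolve(next.value).then(step);
  }
  return step(undefined);
}

// A fake async operation standing in for real I/O.
function fetchUser(id) {
  return Promise.resolve({ id: id, name: "user-" + id });
}

run(function* () {
  const user = yield fetchUser(42); // reads like blocking code
  return user.name;
}).then(function (name) {
  console.log(name); // "user-42"
});
```

The same single-threaded semantics apply; only the surface syntax changes, which is why this approach preserves Node's execution model.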
The bigger problem (which I don't see getting solved anywhere down the road) is the lack of preemptive scheduling, which is available in Erlang or on the JVM. What you see under high load with Node is that latency is spread almost linearly over a very wide spectrum, from very fast to very slow, whereas response times on a preemptively scheduled platform are much more uniform.
And no, this is not something that can be solved by distributing load over multiple CPU cores. This problem really manifests itself within each core, and it is a direct consequence of Node's single-threaded execution model. If anybody knows how to solve this without resorting to some kind of preemptive threading, I'd be very curious to hear about it.
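The mechanism behind that latency spread can be sketched in a few lines: with cooperative scheduling, one CPU-bound handler delays every other pending callback until it yields control. The handler below is a stand-in for any slow request; the 100 ms busy-wait is an illustrative number, not a measurement.

```javascript
// Simulate a CPU-bound request handler, e.g. parsing a large payload.
// With no preemption, nothing else runs until this returns.
function slowHandler() {
  const start = Date.now();
  while (Date.now() - start < 100) {} // busy-wait ~100 ms
}

// Schedule a "fast" callback first, then run the slow handler.
let fastRan = false;
setTimeout(function () { fastRan = true; }, 0);

slowHandler();
// The fast callback still has not run: it must wait for the slow
// handler to finish. A preemptive scheduler would have interleaved it.
console.log(fastRan); // false until control returns to the event loop
```

Under load, requests queue up behind whichever handler happens to be running, so observed latency ranges from near-zero to the sum of everything queued ahead, which matches the wide spread described above.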
The only way to make it expand forever would be to support recursion. It would be interesting to see what texts you could come up with that way. Kind of a textual Mandelbrot set.
It was a dark and stormy night. The captain and his men were huddled around a campfire. The captain turned to Jake and said "Jake, tell us a story.", so Jake began his story: "It was a dark and stormy night. The captain and his men were huddled around the campfire..."
Perhaps using some kind of Markov chain process? It could be amusing, but would be more interesting with some kind of understanding of parts of speech.
Right, that is the big one I have been explaining to people when asked. The secret under the covers is mobile and Oracle's dead-set position on that platform. They are doing everything in their power to eliminate all non-Oracle runtimes from mobile devices. With mobile expected to be the dominant platform in 18 months, and given its relative immaturity compared to other, older platforms, there is huge revenue potential for anyone who can become the Microsoft of mobile.
Now the conspiracy theorist in me thinks that Oracle is eliminating the VMs on mobile not because they have competency there (Ellison is smart enough to know it would be a foregone battle to try to edge in between iOS and Android now), but rather to clear the market for a sympathetic partner that will split the purse. And I think that partner is none other than Apple.
I find the timing of Apple relinquishing the JDK back to Oracle suspicious. Furthermore, both Steve and Larry know that the other has significant competencies in separate, independent, non-overlapping, but complementary markets. If the mobile market is ceded to a closed vertical vendor, they are free to choose who becomes the infrastructure supporting the new global mobile network. I think the land has already been divided up, and now it is just time to play the battle out.
Yes, --trace-gc shows about 10 mark-sweeps per second, each taking around 13 ms (no compactions, as far as I could see). But are those ~15% spent in GC enough to explain the performance difference?
You were saying that V8's GC is failing here, so I just explained why JSON.parse is especially bad for V8's GC.
Strictly speaking, I am not even convinced that GC is the bottleneck here. Only profiling can reveal the real bottleneck.
[I tried a small experiment: I used a third-party pure-JS JSON parser instead of V8's JSON.parse --- that changed the GC profile, but did not affect response times.]
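The allocation pattern under discussion can be sketched as follows: parsing many small JSON documents creates a flood of short-lived objects and strings, which is exactly the kind of garbage a generational collector has to keep chasing. The document shape and property names below are illustrative, not taken from the actual benchmark.

```javascript
// A small JSON document with short (~10-character) string properties,
// similar in shape to the workload described above.
const doc = JSON.stringify({
  id: "abc1234567",
  name: "short-name",
  tag: "benchmark0"
});

let parsed;
for (let i = 0; i < 100000; i++) {
  // Each call allocates a fresh object plus one string per property;
  // all of them become garbage immediately, piling up in the young
  // generation and triggering frequent collections.
  parsed = JSON.parse(doc);
}
console.log(parsed.id); // "abc1234567"
```

Running a loop like this under `node --trace-gc` makes the collection frequency visible, which is one way to check whether GC pressure tracks with the observed slowdown.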
Just an educated guess. If you're allocating tons of objects and strings and your app gets slow, it's very likely to be the GC. But I don't know V8 well enough to say for sure.
The JSON I'm parsing is just objects with short string properties (around 10 characters). There's just one longer 25 KB JSON string, but that one is never collected. As for Node configuration, can you provide some specific options to use? I've been asking about this on #node.js (and ryan) and I'm open to any suggestions.
Ringo is running with the server hotspot JVM without any further options.
By default, Java 6 uses a generational collector with multi-threaded stop-the-world copying for the young generation and single-threaded stop-the-world mark-sweep-compact for the tenured generation.
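For comparison purposes, those defaults can be made explicit and logged, which is roughly the JVM-side analogue of Node's --trace-gc. This is a sketch: the heap sizes and the jar name are illustrative, not taken from the benchmark setup.

```shell
# -XX:+UseParallelGC        -> multi-threaded stop-the-world copying for the
#                              young generation (tenured generation uses
#                              serial mark-sweep-compact)
# -Xms/-Xmx set equal       -> fixed heap size, avoiding resize pauses
# -verbose:gc + PrintGCDetails -> log every collection with pause times
java -server \
     -XX:+UseParallelGC \
     -Xms256m -Xmx256m \
     -verbose:gc -XX:+PrintGCDetails \
     -jar ringo-app.jar
```

Comparing these logs against V8's --trace-gc output is one way to put the two platforms' GC overhead on a common footing.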
I'm the author of both the original article and this HN posting - and yes, I am biased, since I'm the main developer of RingoJS (the other platform in that benchmark). I've made that quite clear and provided additional background in the original benchmark to which this is just a short update: http://hns.github.com/2010/09/21/benchmark.html
I think my benchmark and the conclusions I draw from it (after a lot of thinking) are fair. My intention is just to make people see there's no magic bullet with performance or scalability, and that there are alternatives for server-side JavaScript.
I think your conclusions in the article are fair. I think the title on HN is misleading because it's a quantitative issue.
V8's GC is a well-known concern in the Node community, but it's still performing well enough that Node is considerably faster than traditional servers (like Apache). The fact that Ringo is also faster doesn't make V8 "not ready"; it just means it could be improved.
If I wanted to be contentious, I could suggest that "Hacker News comments confirm that RingoJS may not be ready for developers" because the author likes taking pot shots at other frameworks. But that would be petty, wouldn't it?
You are right about the title. That "not ready for the server" is a foolish phrase. I'd change it to "not tuned for the server" if I could, but it looks like it's impossible to change that now.
I submitted a Ringo talk to JSConf.eu 2010, haven't heard anything back from these guys so far. If that fails, I may apply for next JSConf.us. We'll get the word out there eventually :)