
C#'s weakness here is that those two patterns are cooperative multitasking only. Under the hood a task retains control of its thread until it yields execution, so by default resource management is something that has to be considered, and the default thread pool is not an uncontested resource.
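To make "retains control of a thread until it yields" concrete, here is a minimal sketch; the handler and its helper methods are made up for illustration:

    using System.Threading.Tasks;

    class CooperativeDemo
    {
        // Between its awaits this method runs synchronously on whichever
        // pool thread picked it up; nothing preempts it at the C# level,
        // it only gives the thread back at the await points.
        static async Task HandleRequestAsync()
        {
            byte[] payload = await FetchPayloadAsync();   // yields the thread while waiting

            long checksum = 0;
            for (int i = 0; i < payload.Length; i++)      // this loop owns the thread until it finishes
                checksum += payload[i];

            await StoreChecksumAsync(checksum);           // yields again
        }

        // Stand-ins so the sketch compiles; not APIs referenced in this thread.
        static Task<byte[]> FetchPayloadAsync() => Task.FromResult(new byte[1024]);
        static Task StoreChecksumAsync(long _) => Task.CompletedTask;
    }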

I don't use Erlang but my understanding is that while it is not exactly fully pre-emptive, there are safeguards in place to ensure process fairness without developer foresight.



The C# async runtime is mixed mode: the thread pool tries to optimize the thread count so that all tasks can advance more or less fairly. That means spawning more worker threads than there are physical cores and relying on the operating system's thread pre-emption to shuffle them between work items.

That's why synchronously blocking a thread is not a complete loss of throughput. It used to be worse, but starting with .NET 6 the thread pool was rewritten in C# and can actively detect blocked threads and inject more to compensate.
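A rough way to observe that injection behaviour, assuming .NET Core 3.0+ for ThreadPool.ThreadCount (numbers vary by machine and runtime version; this is an illustration, not a benchmark):

    using System;
    using System.Linq;
    using System.Threading;
    using System.Threading.Tasks;

    class InjectionDemo
    {
        static async Task Main()
        {
            Console.WriteLine($"Cores: {Environment.ProcessorCount}, pool threads: {ThreadPool.ThreadCount}");

            // Queue far more synchronously-blocking work items than there are cores.
            Task[] blockers = Enumerable.Range(0, Environment.ProcessorCount * 4)
                .Select(_ => Task.Run(() => Thread.Sleep(5000)))
                .ToArray();

            await Task.Delay(3000);

            // The pool notices it isn't making progress and injects extra threads,
            // so this count typically climbs past the core count.
            Console.WriteLine($"Pool threads under blocking load: {ThreadPool.ThreadCount}");

            await Task.WhenAll(blockers);
        }
    }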

Additionally, another commenter above mistakenly called Rust "bare metal", which it is not: for async it is usually paired with tokio or async-std, which by default (configurable) spawn one worker thread per CPU hardware thread and actively manage those too.

p.s.: the goal of cooperative multi-tasking is precisely to alleviate the issues that come with the pre-emptive kind. I think Java's Project Loom approach is a mistake; it made sense 10 years ago, but not today, with every modern language adopting async/await semantics.


Hey, I also prefer C# and async. Alternatives have yet to prove they can handle GUI patterns where main threads matter.
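For what it's worth, a minimal WinForms-style sketch (the URL and control layout are made up) of why async/await suits "main thread matters" GUIs: the continuation after an await resumes on the UI thread via the captured SynchronizationContext, so controls can be touched without explicit marshalling.

    using System;
    using System.Net.Http;
    using System.Windows.Forms;

    class MainForm : Form
    {
        static readonly HttpClient Client = new HttpClient();
        readonly Label status = new Label { Dock = DockStyle.Top, AutoSize = true };
        readonly Button load = new Button { Text = "Load", Dock = DockStyle.Bottom };

        MainForm()
        {
            Controls.Add(status);
            Controls.Add(load);
            load.Click += async (sender, args) =>
            {
                status.Text = "Loading...";
                // The HTTP call does not hold the UI thread...
                string body = await Client.GetStringAsync("https://example.com");
                // ...and execution resumes back on the UI thread here,
                // so updating the control is safe.
                status.Text = $"Fetched {body.Length} chars";
            };
        }

        [STAThread]
        static void Main() => Application.Run(new MainForm());
    }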

...but the problems stated are real. I'm excited to hear that this might be fixed in .NET 6, but it'll be a while before that rolls out to most deployments.


Apologies, but it seems you have gotten the wrong impression (or maybe I did a poor job of explaining).

It has never been a big issue in the first place, because by now everyone knows not to call 'Thread.Sleep(500)' or 'File.ReadAllBytes' in methods that can be executed by the thread pool, and to use 'await Task.Delay(500)' or 'await File.ReadAllBytesAsync' instead. And even then you would run into thread pool starvation only under load, when newly spawned threads are exhausted faster than the pool can grow. It is a relatively niche problem, not the cornerstone of runtime design some make it out to be.
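In other words, something like this (a hypothetical handler, but using exactly the APIs above):

    using System.IO;
    using System.Threading;
    using System.Threading.Tasks;

    class Handlers
    {
        // Bad on the thread pool: both calls park the current thread
        // for their whole duration while doing no useful work.
        public static byte[] LoadBlocking(string path)
        {
            Thread.Sleep(500);
            return File.ReadAllBytes(path);
        }

        // Fine: the thread goes back to the pool while waiting, and the
        // method resumes on a pool thread when the timer / IO completes.
        public static async Task<byte[]> LoadAsync(string path)
        {
            await Task.Delay(500);
            return await File.ReadAllBytesAsync(path);
        }
    }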

Also, .NET 6 is old news: it was released on Nov 8, 2021 and is the deployment target for many enterprise projects nowadays.


"Everyone knows to do it right" is no protection at all. And honestly, I would push back on this in general because no its not well known at all. A fresh grad will not intuitively know to look for WhateverAsync API in case they exist and veterans will miss this as well.

Knowing that file IO is heavy and has *Async counterpart methods is somewhat obvious to a veteran, but other long-running methods are not so obvious. In those cases you would need to profile your use case to realize that certain calculations/methods might be best farmed off to a dedicated thread or a different thread pool.
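One common way to act on that once profiling has flagged a method, sketched with made-up names: hint the scheduler to give the work a dedicated thread instead of borrowing one from the shared pool.

    using System.Threading.Tasks;

    class Offload
    {
        // TaskCreationOptions.LongRunning asks for a dedicated thread rather
        // than a pool thread, so a long computation doesn't starve the pool.
        public static Task<long> CrunchAsync(int n) =>
            Task.Factory.StartNew(() =>
            {
                long sum = 0;
                for (int i = 0; i < n; i++)   // stand-in for the heavy calculation
                    sum += i;
                return sum;
            }, TaskCreationOptions.LongRunning);
    }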

Unity still uses Mono and has a very low max thread pool size, for example. The thread pool is easily starved in the latest version of that engine and I'm sure it's more common than you think.

Relatively niche, perhaps, but a critical problem when stumbled upon nonetheless. Again, I like async/await, but there are certainly footguns left to remove.


Unity is special and has its own API and popular patterns: if you block the main/render thread it will explode regardless of the language of choice, and Erlang/Elixir performance is not acceptable for gamedev and would likely stumble upon similar issues anyway.

Again, and I cannot stress this enough, we're discussing a somewhat niche feature. You have to take into account that even the standard library still has a lot of semi-blocking code, simply due to the nature of certain system calls and networking code.

From the runtime's standpoint there is no difference between blocking and computationally heavy logic: it will scale the number of threads to maintain fairness automatically. Blocking just carries extra cost because it is "better" at holding on to threads (you don't have to think about it). .NET 6 is simply comparatively better at dealing with such scenarios; your app would work fine in PROD 9 times out of 10 with invalid code before or after that change. Running 'Task.Run(() => /* use up a thread for no reason for seconds / minutes */)' in a loop of hundreds of iterations goes from terrible to very bad.
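To make that last example concrete, here is a deliberately bad sketch of the anti-pattern (not something to copy):

    using System;
    using System.Threading;
    using System.Threading.Tasks;

    class Flood
    {
        static void Main()
        {
            // Hundreds of work items that each pin a pool thread for a long
            // time while doing nothing useful.
            for (int i = 0; i < 300; i++)
                Task.Run(() => Thread.Sleep(TimeSpan.FromSeconds(30)));

            // Anything queued afterwards has to wait for the pool to grow.
            // A newer runtime injects threads faster, so "terrible" becomes
            // "very bad" rather than an apparent hang.
            Task.Run(() => Console.WriteLine("eventually ran")).Wait();
        }
    }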

It's pointless to "fight against words". Just trust the runtime to do its thing right. That's why its baseline cost is somewhat higher than that of Golang or Rust/Tokio: you pay more upfront to get a foolproof solution that has really good multi-threaded scaling.

If you don't want to believe the above, just look at average C# solutions on GitHub. There is no "special magic to learn"; that's just how people write code, whether they're new to the language or not.

p.s.: This situation reminds me of one of my colleagues, who would always come up with an excuse for his point regardless of context. It's counter-productive and self-defeating.



