Developers have a bad habit of adding mutable fields to plain old data objects in Go though, so even if it's immutable now, it's easy for a developer to create a race down the line. There's no way to indicate that something must be immutable at compile-time, so the compiler won't help you there.
Good points. I have also heard others say the same in the past regarding Go. I know very little about Go or its language development, however.
I wonder if Go could easily add some features regarding that. There are different ways to go about it. 'final' in Java is different from 'const' in C++, for example, and Rust has borrow checking and 'const'. I think the developers of the OCaml language have experimented with something inspired by Rust regarding concurrency.
Rust's `const` is an actual constant, like 4 + 1 is a constant, it's 5, it's never anything else, we don't need to store it anywhere - it's just 5. In C++ `const` is a type qualifier and that keyword stands for constant but really means immutable not constant.
This results in things like being able to "cast away" C++ const and modify that variable anyway, whereas obviously we can't try to modify a constant, because that's not what the word constant means.
In both languages 5 += 3 is nonsense, it can't mean anything to modify 5. But in Rust we can write `const FIVE: i32 = 5;` and now FIVE is also a constant and FIVE += 3 is also nonsense and won't compile. In contrast in C++ altering an immutable "const" variable you've named FIVE is merely forbidden, once we actually do this anyway it compiles and on many platforms now FIVE is eight...
Right, I forgot that 'const' in Rust is 'constexpr'/'consteval' in C++, while absence of 'mut' is probably closer to C++ 'const', my apologies.
C++ 'constexpr' and Rust 'const' are more about compile-time execution than marking something immutable.
In Rust, it is probably also possible to do a cast like &T to *mut T. Though that might require unsafe and might cause UB if not used properly. I recall some people hoping for better ergonomics when doing casting in unsafe Rust, since it might be easy to end up with UB.
Last I heard, C++ is better regarding 'constexpr' than Rust regarding 'const', and Zig is better than both on that subject.
AFAICT, although C++ now has const, constexpr, consteval and constinit, none of those mean an actual constant. In particular constexpr is largely just boilerplate left over from an earlier idea about true compile-time constants, and so it means almost nothing today.
Yes, the C++ compile time execution could certainly be considered more powerful than Rust's, and Zig's even more powerful than that. It is expected that Rust will some day ship const trait evaluation, which will mean you don't have to write awkward code that avoids e.g. iterators -- so with that change it's probably in the same ballpark as C++ 17 (maybe a little more powerful). However C++ 20 does compile-time dynamic allocation†, and I don't think that's on the horizon for Rust.
† In C++ 20 you must free these allocations inside the same compile-time expression, but that's still a lot of power compared to not being allowed to allocate. It is definitely possible that a future C++ standard will find a way to sort of "grandfather in" these allocations so that somehow they can survive to runtime rather than needing to free them.
Rust does give you the option to break out the big guns by writing "procedural" aka "proc" macros which are essentially Rust that is run inside your compiler. Obviously these are arbitrarily powerful, but far too dangerous - there's a (serious) proc macro to run Python from inside your Rust program and (joke, in that you shouldn't use it even though it would work) proc macro which will try out different syntax until it finds which of several options results in a valid program...
Rust concurrency also has issues; there are many complaints about async [0], and some Rust developers point to Go as having green threads. Rust's original author wanted green threads, as I understand it, but Rust evolved in a different direction.
As for Java, there are fibers/virtual threads now, but I know too little of them to comment on them. Go's green thread story is presumably still good, also relative to most other programming languages. Not that concurrency in Java is bad, it has some good aspects to it.
Rust has concurrency issues for sure. Deadlocks are still a problem, as is lock poisoning, and sometimes dealing with the borrow checker in async/await contexts is very troublesome. Rust is great at many things, but safe Rust only eliminates certain classes of bugs, not all of them.
Regarding green threads: Rust originally started with them, but there were many issues. Graydon (the original author) has "grudgingly accepted" that async/await might work better for a language like Rust[1] in the end.
In any case, I think green threads and async/await are completely orthogonal to data race safety. You can have data-race safety with green threads (Rust was trying to have data-race safety even in its early green-thread era, as far as I know), and you can also fail to have data-race safety with async/await (C# might have fewer data-race footguns than Go but it's still generally unsafe).
In .NET, async/await does not protect you from data races, and you are exposed to them as much as you are in Go, but there is a critical difference in that data races in .NET can never result (not counting unsafe) in memory safety violations. They can and will in Go.
While I agree, in practice they can actually be parallel. Case in point: the Java Vert.x toolkit. It uses an event loop and futures, but they have also adopted virtual threads in the toolkit. So you still get your async concepts in the toolkit, but the VTs are your concurrency carriers.
Could you give an example to distinguish them? Async means not-synchronous, which I understand to mean that the next computation to start is not necessarily the next computation to finish. Concurrent means multiple different parts of the program may make progress before any one of them finishes. Are they not the same? (Of course, concurrency famously does not imply parallelism, one counterexample being a single-threaded async runtime.)
Async, for better or worse, in 2025 is generally used to refer to the async/await programming model in particular, or more generally to non-blocking interfaces that notify you when they're finished (often leading to the so-called "callback hell" which motivated the async/await model).
If you are waiting for a hardware interrupt to happen based on something external happening, then you might use async. The benefit is primarily to do with code structure - you write your code such that the next thing to happen only happens when the interrupt has triggered, without having to manually poll completion.
You might have a mechanism for scheduling other stuff whilst waiting for the interrupt (like Tokio's runtime), but even that might be strictly serial.
I dislike some of this article, my impression is similar to some of the complaints of others here.
However, are Go programs not supposed to typically avoid sharing mutable data across goroutines in the first place? If only immutable messages are shared between goroutines, it should be way easier to avoid many of these issues. That is of course not always viable, for instance due to performance concerns, but in theory can be done a lot of the time.
I have heard others call for making it easier to track mutability and immutability in Go, similar to what the author writes here.
As for closures having explicit capture lists like in C++, I have heard some Rust developers saying they would also have liked that in Rust. It is more verbose, but can be handy.
There is a LOT of demand for explicit capture clauses. This is one thing that C++ got right and Rust got wrong with all its implicit and magic behaviour.
Go is a weird one, because it's super easy to learn -if- you're familiar with say, C. If you're not, it still appears to be super easy to learn, but has enough pitfalls to make your day bad. I feel like much of the article falls into the latter camp.
I recently worked with a 'senior' Go engineer. I asked him why he never used pointer receivers, and after explaining what that meant, he said he didn't really understand when to use asterisks or not. But hey, immutability by default is something I guess.
That sounds really interesting, whether it is done in Rust, some Rust 2.0, or a successor or experimental language.
I do not know whether it is possible, though. If one does not unwind, what should actually happen instead? How would, for instance, partial computations and resources on the stack be handled? Some partial or constrained unwinding? I have not given it a lot of thought.
How do languages without exceptions handle it? How does C handle it? Error codes all the way? Maybe something with arenas or regions?
I do not have a good grasp on panics in Rust, but panics in Rust being able to either unwind or abort depending on configuration seems complex, and that design happened for historical reasons, from what I have read elsewhere.
Vague sketch: imagine if we had scoped panic hooks, unhooked via RAII. So, for use cases that today use unwinding for cleanup (e.g. "switch the terminal back out of curses mode"), you do that cleanup in a panic hook instead.
The hard use case to handle without unwinding is an HTTP server that wants to allow for panics in a request handler without panicking the entire process. Unwinding is a janky way to handle that, and creates issues in code that doesn't expect unwinding (e.g. half-modified states), and poisoning in particular seems likely to cascade and bring down other parts of the process anyway if some needed resource gets poisoned. But we need a reasonable alternative to propose for that use case, in order to seriously evaluate eliminating unwinding.
I am not sure that I understand what scoped panic hooks would or might look like. Are they maybe similar to something like try-catch-finally in Java? Would the language force the programmer to include them in certain cases somehow?
If a request handler, for example, has 7 nested calls at some point in time, and calls no. 2 and no. 6 hold resources and partial computations that need clean-up somehow and somewhere, and call no. 7 panics, I wonder what the code would look like in the different calls, what would happen and when, what the compiler would require, and what other relevant code would look like.
For the simple case, suppose that you're writing a TUI application that takes over the terminal. When it exits, even by panic, you want to clean up the terminal state so the user doesn't have to blindly type "reset".
Today, people sometimes do that by using `panic = "unwind"`, and writing a `catch_unwind` around their program, and using that to essentially implement a "finally" block. Or, they do it by having an RAII type that cleans up on `Drop`, and then they count on unwinding to ensure their `Drop` gets called even on panic. (That leaves aside the issue that something called from `Drop` is not allowed to fail or panic itself.) The question is, how would you do that without unwinding?
We have a panic hook mechanism, where on panic the standard library will call a user-supplied function. However, there is only one panic hook; if you set it, it replaces the old hook. If you have only one cleanup to do, that works fine. For more than one, you can follow the semantic of having your panic hook call the previous hook, but that does not allow unregistering hooks out of order; it only really works if you register a panic hook once for the whole program and never unregister it (e.g. "here's the hook for cleaning up tracing", "here's the hook for cleaning up the terminal state").
Suppose, instead, we had a mechanism that allowed registering arbitrary panic hooks, and unregistering them when no longer needed, in any order. Then, we could do RAII-style resource handling: you could have a `CursesTerminal` type, which is responsible for cleaning up the terminal, and it cleans up the terminal on `Drop` and on panic. To do the latter, it would register a panic hook, and deregister that hook on `Drop`.
With such a mechanism, panic hooks could replace anything that uses `catch_unwind` to do cleanup before going on to exit the program. That wouldn't fully solve the problem of doing cleanup and then swallowing the panic and continuing, but it'd be a useful component for that.
> Suppose, instead, we had a mechanism that allowed registering arbitrary panic hooks, and unregistering them when no longer needed, in any order. Then, we could do RAII-style resource handling: you could have a `CursesTerminal` type, which is responsible for cleaning up the terminal, and it cleans up the terminal on `Drop` and on panic. To do the latter, it would register a panic hook, and deregister that hook on `Drop`.
This doesn't get rid of unwinding at all- it's an inefficient reimplementation of it. There's a reason language implementations have switched away from having the main execution path register and unregister destructors and finally blocks, to storing them in a side table and recovering them at the time of the throw.
Giving special treatment to code that "explicitly wants" to handle unwinding means two things:
* You have to know when an API can unwind, and you have to make it an error to unwind when the caller isn't expecting it. If this is done statically, you are getting into effect annotation territory. If this is done dynamically, you are essentially just injecting drop bombs into code that doesn't expect unwinding. Either way, you are multiplying complexity for generic code. (Not to mention you have to invent a whole new set of idioms for panic-free code.)
* You still have to be able to clean up the resources held by a caller that does expect unwinding. So all your vocabulary/glue/library code (the stuff that can't just assume panic=abort) still needs these "scoped panic hooks" in all the same places it has any level of panic awareness in Drop today.
So for anyone to actually benefit from this, they have to be writing panic-free code with whatever new static or dynamic tools come with this, and they have to be narrowly scoped and purpose-specific enough that they could essentially already today afford panic=abort. Who is this even for?
To be very explicit about something: these are all vague design handwaves, and until they become not only concrete but sufficiently clear to handle use cases people have, they're not going to go anywhere. They're vague ideas we're thinking about. Right now, panic unwind isn't going anywhere.
I have not given it much thought, but it would primarily be for the subset of Rust programs that do not need zero-cost abstractions as much, right? Since, even in the case of no panics, one would be paying at runtime for registering panic hooks, if I understand correctly.
I can imagine ways to reduce that cost substantially. And the cost would be a key input into the design, since it's important to optimize for the success path and not have the success path pay cost for the failure path.
I am not very familiar with C++'s API, but I believe that you are right that the C++ example in the article is incorrect, though for a different reason, namely that RAII is also supported in C++.
In C++, a class like std::lock_guard also provides "Automatic unlock". AFAICT, the article argues that only Rust's API provides that.
> In C++, a class like std::lock_guard also provides "Automatic unlock". AFAICT, the article argues that only Rust's API provides that.
The issue isn't automatic unlocking. From the article:
> The problem? Nothing stops you from accessing account without locking the mutex first. The compiler won’t catch this bug.
i.e., a C++ compiler will happily compile code that modifies `account` without taking the lock first. Your lock_guard example suffers from this same issue.
Nothing in the C++ stdlib provides an API that makes it impossible to access `account` without first taking the lock, and while you can write C++ classes that approximate the Rust API you can't quite reach the same level of robustness without external help.
That is a different topic from what I wrote about.
The article wrote:
> Automatic unlock: When you lock, you receive a guard. When the guard goes out of scope, it automatically unlocks. No manual cleanup needed.
And presented Rust as being different from C++ regarding that, and the C++ example was not idiomatic, since it did not use something like std::lock_guard.
I have not addressed the rest of your comment, since it is a different topic, sorry.
Fair point with respect to the separate topic. My apologies.
As for the automatic cleanup bit, perhaps the article is trying to focus purely on the mutex types themselves? Or maybe they included the "when you lock" bit to emphasize that you can't forget to unlock the mutex (i.e., no reliance on unenforced idioms). Hard to say given the brevity/nature of the section, and in the end I think it's not that much of a problem given the general topic of the blogpost.
It seems completely clear. He first gives unidiomatic C++ code, then gives idiomatic Rust code, and differentiates the two based on the code snippets. It is a mistake on his part, and I do not see how it could reasonably be viewed otherwise. It is not a huge mistake, but it is still a clear mistake.
Perhaps it might help to clarify precisely what claim(s) you think are being made?
From my reading, the section (and the article in general, really) is specifically focusing on mutexes, so the observations the article makes are indeed accurate in that respect (i.e., C++'s std::mutex indeed does not have automatic unlocking; you need to use an external construct for that functionality). Now, if the article were talking about locking patterns more generally, I think your criticism would hold more weight, but I think the article is more narrowly focused than that.
For a bit of a more speculative read, I think it's not unreasonable to take the C++ code as a general demonstration of the mutex API "languages other than Rust" use rather than trying to be a more specific comparison of locking patterns in Rust and C++. Consider the preceding paragraph:
> In languages other than Rust, you typically declare a mutex separately from your data, then manually lock it before entering the critical section and unlock it afterward. Here’s how it looks in C++:
I don't think it's unreasonable to read the "it" in the final sentence as "that pattern"; i.e., "Here's what that pattern looks like when written in C++". The example code would be perfectly correct in that case - it shows a mutex declared separately from the data, and it shows that mutex being manually locked before entering the critical section and unlocked after.
Please stop registering accounts to post guidelines-breaking comments like this in Rust-related threads. Other community members are noticing. It's an abuse of HN to do this and we have to ban accounts that keep doing it.
I am not breaking any rules, instead my comments are on point and show better debate culture than other comments, including better than yours and the previous comment. Please do better. You are at fault 100%, and you are well aware of it.
Edit: Downvoting comments that you know are good is even further against good debate practice. You are doing worse and worse, and you are well aware of it. Have some shame.
The guidelines apply to everyone. If you continue breaking them with this account or other old or new accounts, they’ll be banned. No further warnings.
Questions for anyone who is an expert on poisoning in Rust:
Is it safe to ignore poisoned mutexes if and only if the relevant pieces of code are unwind-safe, similar to exception safety in C++? As in, if a panic happens, the relevant pieces of code handle the unwinding safely, data is thus not corrupted, and ignoring the poison is therefore fine?