The C++ standards committee is still under the illusion people can read, understand and remember the entire spec, and write code without making mistakes. All these bugs are the fault of the people making mistakes, not C++.
I don't think there's any such illusion. You do not see, for example, WG21 members who are confident that they understand the entire C++ language; on the contrary, they'll often accept corrections about the language from other committee members, and it's not infrequent that a committee member will agree with the statement that C++ is too large and sprawling for any individual to attain such comprehensive understanding. [Today I would guess that Sean Baxter, who wrote his own compiler, has the best individual understanding, and I believe Sean is not a member of the committee.]
Instead WG21 has very clearly (but without ever admitting it, and that's important) taken the path of maintaining a legacy language. Even as debate carried on about whether C++ could in future end up like COBOL, the committee has, for some years now, acted exactly as though it already is. Compatibility is king, no price is too high for compatibility, everything must be sacrificed to make that happen, and that's how you end up like COBOL.
Three important opportunities to divert and pick another way forward should be highlighted here: P1863 "ABI: Now or Never" by Titus Winters in 2020; P2137 "Goals and priorities for C++", also from 2020 but with a long list of authors; and P1881 "Epochs" from 2019 by Vittorio Romeo.
In all these cases WG21 chose the "hope the problem goes away" path: rather than address the critical problem highlighted and take a new route forward, it chose to specifically ignore the problem and press on anyway.
"Hope the problem goes away" is also, quietly, the preferred strategy by WG21 for the safety problem.
There's a reason (albeit a terrible one) to prefer the C++ ISO document's language over the approach Rust takes. These are both general-purpose languages (I might also write separately in this thread about a non-general-purpose language which Google should use more, if I have time) and so must wrestle with Rice's Theorem. Rust's solution is to require the compiler to be conservative. This is very difficult, and indeed there are known bugs in the code doing this conservative check in the official Rust compiler. But C++ has a much easier (but IMO fatal) path: it says that's the job of the programmer, and when the programmer writes C++ software which is nonsense as a result, that's their fault, not the compiler's fault for failing to reject the program.
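For reference, the obstacle both languages have to wrestle with, roughly stated (my paraphrase, not a formal statement):

    Rice's theorem (informal): for any non-trivial semantic property P of
    programs, the set { p : program p has property P } is undecidable.

"Every execution of this program is free of dangling borrows / use-after-free" is such a property, so no compiler can decide it exactly for a general-purpose language; it can only approximate it from one side or the other.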
It would be extremely difficult to explain how a "standards conforming" Rust compiler can correctly accept all the programs Rust's actual compiler accepts and reject all those it rejects, without essentially having a black box where the compiler implementation sits. We can explain the purpose of such rules without one; their detailed behaviour, not so much.
Take borrow checking. All the easy scoped borrows (which is all that worked in Rust, say, eight years ago) can be explained without too much trouble, but today much fancier (yet, to a human, obviously correct) borrowing will compile, because the checker is smarter. Now, how do you express, not in Rust source code but in the English language, all the checks to be performed, and neither miss things out nor unknowingly accept programs a real Rust compiler will reject?
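To make that concrete, here's a minimal sketch of a borrow that the original lexically scoped checker rejected but the current non-lexical-lifetimes checker accepts (the details of exactly when this changed are from memory):

    fn main() {
        let mut v = vec![1, 2, 3];
        let first = &v[0];       // shared borrow of `v`
        println!("{}", first);   // last use of `first`
        // The old lexically scoped checker treated `first` as live until
        // the end of the block and rejected the mutation below; the current
        // checker ends the borrow at its last use, so this compiles.
        v.push(4);
        println!("{:?}", v);
    }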
C++ just needn't do that. In effect the ISO document says: "Don't do borrows that last longer than the thing borrowed; if you do, that's not C++, but your compiler won't notice, so the result is arbitrary nonsense."
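Concretely, here's the Rust analogue of that situation, intentionally ill-formed so you can see the rejection; the C++ version (a reference or pointer to an out-of-scope local) compiles without complaint and the result is undefined behaviour:

    fn main() {
        let r;
        {
            let x = 42;
            r = &x; // rustc rejects this: `x` does not live long enough
        }
        println!("{}", r);
    }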
> how do you express, not in Rust source code but in the English language, all the checks to be performed, and neither miss things out nor unknowingly accept programs a real Rust compiler will reject?
I think this is unintentionally stuck in the mindset of "the purpose of a language specification document is to enable armchair language lawyers to flame each other on Usenet about whether or not such-and-such degenerate edge case is technically valid". But a specification doesn't need to be written in English, it can be written as a formal proof, and indeed I would expect a theoretical Rust spec to specify the behavior of the borrow checker as just such a proof. Rust's borrow checking may no longer be as simple as the lexically-scoped model that existed as of Rust 1.0, but it's not like the extensions that have been added since then are ad-hoc; they're all still designed to result in a model that is provably sound.
> But a specification doesn't need to be written in English, it can be written as a formal proof, and indeed I would expect a theoretical Rust spec to specify the behavior of the borrow checker as just such a proof.
Did you miss the part where the person you're responding to mentioned Rice's theorem? Do you know Rice's theorem and hence understand what they're implying?
Rice's theorem isn't relevant here. The goal is not to create a system that produces no false positives; it's perfectly fine to do a conservative syntactic analysis that allows false positives but disallows false negatives, and it's then possible to produce a formal proof that this analysis is sound. It is this formal proof that I would expect to be included in a specification in lieu of English prose.
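To illustrate what a false positive looks like here, a well-known example (my sketch of the usual "get or insert" shape): this function is memory-safe, and a more precise analysis such as Polonius is expected to accept it, but the current conservative borrow checker rejects it.

    use std::collections::HashMap;

    // Memory-safe, but rejected today: the shared borrow created by
    // `map.get` is extended to the whole function because of the early
    // return, so the later mutation is flagged as a conflict.
    fn get_or_insert(map: &mut HashMap<u32, String>) -> &String {
        if let Some(v) = map.get(&0) {
            return v;
        }
        map.insert(0, String::new());
        map.get(&0).unwrap()
    }

That's the trade being described: sound, conservative, and occasionally turning away correct programs.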
Sure, there's a reason I said "extremely difficult", not "impossible". Defying Rice without losing generality is mathematically impossible.
This is on that continuum where it's definitely neither impossible nor easy enough that we can just let some bored grad student knock out the answer, and so now somebody who wants this must do lots of hard work.
I think a specification which says, e.g., here's the semantic requirement; here's a rule for scoped borrows which works; you must accept at least that, and you may accept more, but you must not allow anything which violates the semantic requirement - would be great. But if you had that rule in your standard, then people could write conforming Rust programs which don't compile: they'd need a yet-to-be-written smarter compiler to figure out why they're legal, which is kinda annoying as a language feature.
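A minimal sketch of what I mean (assuming the hypothetical two-level spec above): this program only ever touches disjoint halves of the array, so it satisfies the semantic requirement and a smarter conforming compiler could accept it, yet today's checker rejects the second mutable borrow.

    fn main() {
        let mut v = [1, 2, 3, 4];
        let a = &mut v[..2];
        let b = &mut v[2..]; // rejected: `v` cannot be borrowed mutably twice,
                             // even though the two ranges never overlap
        a[0] += b[0];
    }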
Some people believe that the Church-Turing intuition doesn't tell us anything about humans, that what humans are doing isn't computation but something more powerful. In my experience their lack of evidence for this belief just makes them believe it even harder, and they often write whole books which are in effect the argument from incredulity but expanded to book form.
There is no proof that humans are just glorified Turing machines, and even as a nonreligious person I find such a statement to be as lacking in evidence as those that claim humanity has some soul or similar that cannot be replicated.
The actual logic of gggp's statement also doesn't make any sense. We as humans also under- and overestimate the soundness of programs.
Sometimes, a perfectly fine solution is massaged to better adhere to best practices because we can't convince ourselves that it's correct. Rust requires that we convince the compiler, and then we know it's correct via the compiler's proofs, instead of requiring us to do the proof all the time.
> I find such a statement to be as lacking in evidence as those that claim humanity has some soul or similar that cannot be replicated.
It doesn't need evidence; it is the null hypothesis.
Brains clearly compute, and it appears that computation is sufficient to produce the observed behaviour of brains. All our experience of the universe and physics suggests that there is no magic or metaphysics or souls or whatever.
So the onus is on you to show that there's something more going on. It isn't a 50:50 "is it heads or tails", it's more like "I claim that the tooth fairy exists" vs "I'm pretty sure it's your mum".
It's simpler than this. Turing machines are a beautiful abstraction. Whatever happens in humans is much, much, much, much, much, much messier, on account of it being subject to the laws of evolution and working on a scale where various micro-effects can be felt (radiation, Brownian motion and quantum effects, anyone?).
So even if the Turing machine model is correct (and we don't know that), it's overly simplified.
Humans are not Turing machines.
I'm not talking about how we work on a fundamental level.
I'm saying we don't obey the axioms of the Turing machine model, so neither Rice's theorem nor Gödel's theorem can apply to unsafe code written by humans.
Even if the borrow checker is limited by Rice's theorem, you can create abstractions that are provably safe, provably unsound, or potentially unsound, and humans can accept or reject those.
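A minimal sketch of the provably-safe-abstraction pattern I mean (this mirrors what the standard library's slice::split_at_mut does; the exact shape here is just illustrative):

    // The body uses `unsafe` and a human argues its soundness once; callers
    // then get a safe API that the borrow checker can reason about.
    fn split_at_mut(v: &mut [i32], mid: usize) -> (&mut [i32], &mut [i32]) {
        let len = v.len();
        assert!(mid <= len);
        let ptr = v.as_mut_ptr();
        // SAFETY: the two halves are disjoint and both lie within the
        // original slice, so the two &mut references never alias.
        unsafe {
            (
                std::slice::from_raw_parts_mut(ptr, mid),
                std::slice::from_raw_parts_mut(ptr.add(mid), len - mid),
            )
        }
    }

    fn main() {
        let mut data = [1, 2, 3, 4, 5];
        let (left, right) = split_at_mut(&mut data, 2);
        left[0] = 10;
        right[0] = 30;
        assert_eq!(data, [10, 2, 30, 4, 5]);
    }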
This doesn't really have anything to do with Rice's theorem. It's difficult to specify the behaviour of the borrow checker simply because it is very complex behaviour.
You absolutely could do it, but it would be a ridiculous effort for nebulous benefit.
I still don't get why the standard library went the other way, other than starting the tradition of standardised wrong defaults.