Iran doesn't have the defensive capabilities needed to properly defend their airspace, so their plan was to make it hurt if they were attacked.
The fact that the US seemed surprised they shut down the strait is such an immense intelligence failure.
If anyone by this point thinks there will be any meaningful change in Iranian society that won't be shoved down their throats imperialist style, they shouldn't be in politics or military intelligence. They should be selling pencils from a cup.
Whatever aims the US had with this illegal war, we should all admit they are a lost cause.
Unless the US allows for immense civilian suffering, I think Iran will outlast any US political patience for this war.
The article explains most of this, but the key takeaway for beginners once this lands is: with `become` you can write tail calls in Rust, and the compiler promises they either work or don't compile. You can't have the case (which exists in several languages) where you thought you'd written a tail call but you hadn't (or maybe you had, but then you switched to a different compiler or made a seemingly inconsequential change to the code) and now the stack has overflowed.
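For concreteness, here is the failure mode being described, sketched in today's stable Rust (the function name and recursion depth are illustrative, not from the article):

```rust
// This *looks* like a tail call, but stable Rust makes no promise to
// eliminate the stack frame, so for large enough `n` it can overflow
// the stack (especially in debug builds, where LLVM's optional
// tail-call optimization may not kick in). With nightly's
// `become countdown(n - 1)`, the call would either be a guaranteed
// tail call or a compile error.
fn countdown(n: u64) -> u64 {
    if n == 0 { 0 } else { countdown(n - 1) } // tail position, no guarantee
}

fn main() {
    // Small enough to be safe even without tail-call elimination:
    assert_eq!(countdown(10_000), 0);
    println!("{}", countdown(10_000));
}
```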
Rust has been really good at providing ergonomic support for features we're too used to seeing provided as "Experts only" features with correspondingly poor UX.
"Either it works or doesn't compile" compared to "oops it silently degraded to the less performant thing because an invariant stopped being true" is remarkably similar to how I tend to describe the benefit of having move semantics by default with opt-in copies in Rust compared to in C++ where you need to set up things right and might still accidentally have it copy if you mess it up.
No, it is used for loops within functions as well. But it’s not fully generalized like in Scheme. You can’t have mutually recursive functions using tail recursion via “recur,” for instance. There is another Clojure technique for that (“trampoline”). Clojure runs on the JVM and is limited by the JVM’s original omission of TCO. When I started using Clojure, I was concerned about these limitations, but in practice I haven’t found them to be a problem. Most of the time, I just need a loop and “recur” works fine. I rarely use “trampoline” (just for state machines).
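Transplanting the trampoline idea into Rust for illustration (all names here are made up for the sketch; Clojure's `trampoline` works analogously by returning thunks instead of making nested calls):

```rust
// Instead of mutually recursive calls that grow the stack, each step
// returns either a final answer or a description of the next step,
// and a driver loop "bounces" between them in constant stack space.
enum Step {
    Done(bool),
    Even(u64), // "is this number even?"
    Odd(u64),  // "is this number odd?"
}

fn is_even_step(n: u64) -> Step {
    if n == 0 { Step::Done(true) } else { Step::Odd(n - 1) }
}

fn is_odd_step(n: u64) -> Step {
    if n == 0 { Step::Done(false) } else { Step::Even(n - 1) }
}

fn trampoline(mut step: Step) -> bool {
    loop {
        step = match step {
            Step::Done(b) => return b,
            Step::Even(n) => is_even_step(n),
            Step::Odd(n) => is_odd_step(n),
        };
    }
}

fn main() {
    // A depth that would overflow the stack with naive mutual recursion:
    assert!(trampoline(Step::Even(1_000_000)));
    assert!(!trampoline(Step::Even(7)));
    println!("ok");
}
```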
I like this change. I was wondering whether I would've preferred to have something on the function signature (e.g. `tcc_fn foo() ...`, as in tail-call-constrained fn), where, on encountering it, the Rust compiler would check whether the body of the function is tail-callable.
My fear is that yet another keyword might get lost in the sea of keywords that a Rust developer needs to remember. And if recursion is not something you do often, you might not reach for it when it's actually needed.
Having this signal in the function signature means that people would be exposed to it just by reading the documentation and eventually will learn it exists and (hopefully) how to wield it.
The property we care about isn't a property of functions but of callers, so marking a function doesn't help.
`become blah(foo, bar)` is the same thing as `blah(foo, bar)` except that we, the caller, are promising that we have nothing further to do, and so when blah returns it can return to our caller.
If somebody else calls blah they don't want that behaviour, they might have lots more to do and if they were skipped over that's a disaster.
In some languages it's very obvious when you're going to get TCO anyway, but Rust has what C++ calls RAII: when a function ends, all the local variables get destroyed, and this may be non-trivial work. Presumably destroying a local i32 is trivial, and so is a [u8; 32], but destroying a local String isn't, let alone a HashMap, and who knows how complicated it is to destroy a File or a DBQuery or a Foo...
So in a sense "all" become does is try to bring that destruction a little sooner, so it happens before the call, leaving nothing to do afterwards. We weren't using that String any more anyway, so let's just destroy it first, and the HashMap? Sure, and oh... no, actually if we destroy that Foo before calling blah, which needs the Foo, that messes things up... Rust's borrowck comes in clutch here to help us avoid a terrible mistake: our code was nonsense, so it doesn't build.
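On stable Rust today you can spell out that "destroy first, then call" order by hand; a toy sketch (all names are invented for the example):

```rust
// `become` effectively asks for local destructors to run *before* the
// final call rather than after. On stable we can mimic the shape of
// that with an explicit `drop`.
fn finish(x: i32) -> i32 {
    x + 1
}

fn caller() -> i32 {
    let s = String::from("scratch space");
    let x = s.len() as i32;
    drop(s);   // destroy locals first...
    finish(x)  // ...so nothing remains to do after `finish` returns.
    // If `finish` had instead borrowed `s`, the early `drop(s)` would
    // be a borrow-check error rather than a silent use-after-free.
}

fn main() {
    assert_eq!(caller(), 14); // "scratch space" has 13 bytes, plus 1
    println!("{}", caller());
}
```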
Given that it's not really that uncommon to see something like `pub(crate) async fn foo() ...`, the concern of function signatures starting to get unwieldy feels a lot more concrete than hypotheticals about a "sea of keywords". From looking at the list of keywords in the language currently (listed here: https://doc.rust-lang.org/std/#keywords), I don't really see a whole lot that I think the average Rust programmer would say is a burden to have to remember. At most, `union` and `unsafe` are probably ones that most Rust programmers are not going to need to use directly, and `SelfTy` might look a bit confusing at first due to the fact that the keyword itself is spelled `Self` (and presumably listed in that way to make it easier to differentiate from the `self` entry in the documentation), but even including those I'd argue that probably over half aren't specific to Rust.
As for being in the documentation, I'd argue that might even be an explicit non-goal; it's not clear to me why that should be something considered part of the API. If someone updates their tail recursive function to manually loop instead (or vice-versa), it doesn't seem like that should necessitate the documentation being updated.
I'd actually say that, for people learning Rust after something like C or C++ in particular, the rare cases where a keyword means something else are the most confusing. In particular, `const` in Rust means a constant, whereas in several languages it means an immutable variable.
const NINE: i32 = 3 * 3; // any arbitrary *constant* expression
In K&R C this qualifier didn't exist so there's no confusion, but C89, all versions of C++ and several other languages inspired by them use "const" to mean an immutable variable.
That's a fair point, and maybe even a case that there should be more keywords rather than fewer.
Relatedly, I still sometimes get tripped up by the nuances of using `const` versus `static` for top-level constants. Most of the time the difference is entirely opaque to the programmer (because it's not obvious when most things are getting inlined or being referenced from a single place in memory), but it's possible to run into cases where one works and the other won't (e.g. trying to be clever with `OnceCell` rather than `OnceLock`).
It might help to think about whether you want an actual singular concrete thing, which means you need static, or whether you just want to talk about the idea, in which case it doesn't matter whether at runtime this exists in many places or nowhere at all, which is a const.
Statics can be mutated - though not safely - because they are a single concrete thing so they can be changed, whereas it can't mean anything to mutate a constant, hence the word "constant".
For larger objects you might want a single concrete thing even though it might not intuitively seem important, because it impacts performance. For example, suppose we keep referring to FACTOR[n], where FACTOR is an array of a million numbers (maybe computed by scientist colleagues for your application) and n is a variable. If FACTOR is const, Rust is going to just put a copy of that enormous array everywhere it needed to do this indexing operation, which gets out of hand really fast. If we use static instead, we get a single concrete thing named FACTOR, and everywhere in the program will use that one single million-number array: much tidier, and less likely to result in, say, running out of RAM on a small computer.
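A small illustration of the difference (the `FACTOR`-style names and values are invented for the sketch, with a four-element array standing in for the million-number one):

```rust
// A `static` is one concrete object: every reference to it observes
// the same address. A `const` is pasted in at each use site, much
// like a literal.
static FACTOR_S: [i32; 4] = [10, 20, 30, 40];
const FACTOR_C: i32 = 9;

fn main() {
    let a: *const [i32; 4] = &FACTOR_S;
    let b: *const [i32; 4] = &FACTOR_S;
    assert_eq!(a, b); // same single object, wherever it's referenced

    assert_eq!(FACTOR_S[2], 30);
    assert_eq!(FACTOR_C * 2, 18); // inlined like writing `9 * 2`
    println!("{} {}", FACTOR_S[2], FACTOR_C);
}
```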
I was wondering how they solved the `drop` problem (the fact that `return foo;` is not the last code executed in most functions, because Rust values are all implicitly dropped at the end of scope), and it seems that they cut the Gordian knot so that values are all implicitly dropped before `become` instead. Hope that works out - probably why it's still nightly for now.
What are some examples of macros that would be able to be written with tail calls? Because macros in Rust can already be recursive (and I've written plenty that take advantage of this over the years), it's not immediately obvious what doors better optimization of tail calls in Rust would open up for them.
I'm not sure how this would be useful in Rust, but macros and tail calls are what allows one to (for example) write iterative loops in Scheme, which doesn't have a native loop syntax.
Maybe the same idea can be used in Rust where some constructs are easier to write in recursive form instead of a loop?
In any case, here's a silly example of a `for-loop` macro in Scheme using a tail call:
(define-syntax for-loop
  (syntax-rules ()
    ((for-loop var start end body ...)
     (letrec ((loop (lambda (var)
                      (unless (>= var end)
                        body ...
                        (loop (+ var 1)))))) ; <-- tail call
       (loop start)))))
And here's how you'd use it to print the numbers 0 to 9:
(for-loop i 0 10
  (display i)
  (newline))
This macro expands to a function that calls itself to loop. Since Scheme is guaranteed to have proper tail calls, the calls are guaranteed to not blow the stack.
(Note that you'll probably never see a `letrec` used like this: people would use a named `let`, which is syntax sugar for that exact usage of `letrec`. I wrote it the `letrec` way to make the function explicit).
Interesting; my lack of real experience in Scheme will make it take a bit more work for me to fully work through the implications of this. It's not immediately clear to me what this would mean for Rust, since there is already a loop construct (well, three of them, although two of them are syntactic sugar for the first). You could define a macro around it in Rust today, but it would be fairly uninteresting: https://play.rust-lang.org/?version=stable&mode=debug&editio...
(There may be a way to use a closure instead of a function to avoid hard-coding the type of `$i` in the macro, but I can't find an easy way to write a recursive closure call in Rust).
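For concreteness, a hypothetical version of such a macro in today's Rust, expanding to a native loop rather than a recursive call (the macro name and usage are made up for the sketch):

```rust
// Roughly mirrors the Scheme `for-loop` example above, but expands to
// a `while` loop instead of a self-recursive function, since Rust
// doesn't (yet) guarantee tail calls.
macro_rules! for_loop {
    ($i:ident, $start:expr, $end:expr, $body:block) => {{
        let mut $i = $start;
        while $i < $end {
            $body
            $i += 1;
        }
    }};
}

fn main() {
    let mut out = Vec::new();
    for_loop!(i, 0, 10, { out.push(i); });
    assert_eq!(out, (0..10).collect::<Vec<_>>());
    println!("{:?}", out);
}
```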
I don't know why, but I have always been very fond of the BeOS UI. It was toony, but it still said "this is where you come to get shit done". Tabbing windows together and quickly flipping between them with a simple keyboard shortcut was in many ways a lot better than what we have today. Multitasking on a smaller screen stinks (I am looking at you specifically, GNOME), but somehow BeOS made it work.
I went all in about a year ago after running MicroOS for some time. I am pretty happy with the situation, and while I do like the transactional-update way of MicroOS, their immutable desktops have been too much work for me. Kalpa is alpha, and Aeon runs GNOME. I tried for a year, but one day I had enough, installed Kinoite, and haven't looked back (although I have been looking at Aurora).
Since you never touch the base system, you get more or less a rolling-release distro: updating between Fedora versions becomes even simpler.
Fun read! If anyone is interested in more, I believe Andy Wingo has written about the implementation of delimited continuations in guile scheme over at https://wingolog.org
After publishing that, I actually emailed him about it. I wanted to show him how I solved the interleaving-stacks problem he outlined in that article. No idea if he ever saw the email, but his about page does mention the fact that his email has very variable latency.
Most code is read much more than it is written; at least, I read much more code than I write. So for me, code should be optimized to be read as plain text, as that is basically what every tool uses. Requiring a separate tool to get basic readability is not really acceptable: I can't do that on GitHub, I can't do that with the various diff tools, and I can't just quickly cat a file or use the countless other tools that are designed to present plain text.
If I can then choose between Guix and a language that doesn't require these hoops and this extreme quality trade-off, the choice is not hard.
Anyway, if you think Guix is better than Nix, then nothing stops you from using it.
I just have a hard time taking such a comment seriously, because I have made it myself. Many times even. My second comment on Slashdot in 1999 was a comment just like yours. I try to tell myself it is ok because I was still a teenager.
7 years later I had to write Common Lisp, and I think the parens were a problem for about a week. Since then I have written many thousands of lines of Lisp code. I usually go back and forth on what I like the most: ML (OCaml mostly) or (Guile) Scheme. In just about every other language (except maybe Factor) I miss the macros (even syntax-rules ones) way more than I like the extra expressiveness of, say, Haskell. [0]
Wisp is a part of guile. So you can write your own config using it. It is not completely straightforward, but if you really hate parentheses it is a way. Or you continue the wonderful string gluing of Nix. Whatever floats your boat.
[0]: I do miss typing sometimes in scheme though. I have thought about vibe coding a prototype of an s-expr language that compiles to f# AST.
This isn’t about whether someone can get used to parentheses. Obviously they can. I don’t doubt your extensive experience there. The question is what the language optimizes for by default.
My argument is that S-expressions optimize for structural uniformity and macro power, but they do so at the expense of plain-text readability without tooling. And that trade-off matters in contexts where code is frequently read outside of a fully configured editor, code reviews, diffs, quick inspection, etc.
Saying “editors solve this” doesn’t really address that, it just shifts the burden to tooling. In contrast, many other languages aim to be reasonably legible even in minimal environments.
So I’m not arguing that Lisp is unusable. I’m saying it makes a different set of trade-offs, and for my use case, where I spend much more time reading other people's code in some web portal and with basic terminal tools, those trade-offs are a net negative. I would expect this trade-off holds for most code produced.
All syntaxes look like windy grass to someone who is not used to them. I think that, while the parsing overhead for Lisps might be higher, there is nothing else going on with precedence or syntax. Reading OCaml or Haskell operator soup is, in my opinion, way more taxing than reading verbose Scheme code - if it is well formatted.
I don't think most people here see that, or even have the willingness to see it. The same was true for the opioid epidemic. Had it hit a group with any political capital, there would have been laws passed, and the Sacklers would not just have been painted as villains; they would have been castrated a quarter of a century ago.
Just wait 15 years, until the middle class has been struggling with easily accessible gambling and it can't be explained away as a problem of character. There will be laws passed and people prosecuted or successfully sued.
Many of those targeted by the Sacklers did start as white, middle class, etc., or even white, upper class.
Those folks who did fall to it often became lower class in the process.
The thing to realize is that the upper classes ‘eat their own’ just like any other. It’s why Trump is as frantic as he is: he knows what will happen when he stops being ‘useful’/necessary.