Hacker News | ISV_Damocles's comments

Most of the big OSS AI codebases (LLM and diffusion, at least) now have code to work on any GPU, not just nVidia GPUs. There's a slight performance benefit to sticking with nVidia, but once you need to split work across multiple GPUs, you can do a cost-benefit analysis and decide that, say, 12 AMD GPUs are faster than 8 nVidia GPUs and cheaper as well.

Then nVidia's moat begins to shrink because they need to offer their GPUs at a somewhat reduced price to try to keep their majority share.


Share can go up or down if consumption keeps growing like crazy. We now spend more per dev on their personal-use inference providers than on their home devices, so inference chips are effectively their new personal computers...


> There's a slight performance benefit to sticking with nVidia

In training, not in inference and not in perf/$.


UTF-16 is just as complicated as UTF-8: it also requires multi-unit sequences (surrogate pairs) to cover the entirety of Unicode, so it doesn't avoid the issue you're complaining about for the newest languages added. It has the added complexity of requiring a BOM to be sure you have the pairs of bytes in the right order, so you are more vulnerable to truncated data being unrecoverable than with UTF-8.

UTF-32 would be a fair comparison, but it is 4 bytes per character and I don't know what, if anything, uses it.


No, UTF-16 is much simpler in that aspect. And its design is no less brilliant. (I've written a state machine encoder and decoder for both of these encodings.) If an application works a lot with text, I'd say UTF-16 looks more attractive for the main internal representation.


UTF-16 is simpler most of the time, and that's precisely the problem. Anyone working with UTF-8 knows they will have to deal with multibyte codepoints. People working with UTF-16 often forget about surrogate characters, because they're a lot rarer in most major languages, and then end up with bugs when their users put emoji into a text field.
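A quick illustration (a Rust sketch, chosen just for concreteness): a Basic Multilingual Plane character is a single UTF-16 code unit, but an emoji needs a surrogate pair, which is exactly the case people forget to handle.

```rust
// Sketch: UTF-16 code units vs UTF-8 bytes, using only the standard library.
fn main() {
    // A BMP character (U+00E9) is a single UTF-16 code unit.
    let e_acute: Vec<u16> = "é".encode_utf16().collect();
    assert_eq!(e_acute.len(), 1);

    // An emoji (U+1F600) needs a surrogate pair: high surrogate, then low.
    let emoji: Vec<u16> = "😀".encode_utf16().collect();
    assert_eq!(emoji, vec![0xD83D, 0xDE00]);

    // The same emoji is 4 bytes in UTF-8, with no byte-order ambiguity.
    assert_eq!("😀".len(), 4);

    println!("surrogate pair: {:04X} {:04X}", emoji[0], emoji[1]);
}
```

Code that indexes UTF-16 data by code unit works fine on the first string and silently splits the second one in half.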


Python does (although it will use 8 or 16 bits per character if all of the characters in the string fit)


Replying to this one since you apparently can't reply to a comment that has been flagged. Why was the grandparent flagged? Google's S2 library has been around for more than a decade and is the first thing I think of when I see "S2" in a tech stack.

And the flippant response from the parent here that they don't really care that they're muddying the waters and just want the crate name is irksome.


This article touched on a point that I feel is very relevant: unexpected show cancellations, apparently now happening for Apple TV+, as well.

Netflix and Disney+ trained me to not even watch a show until it's concluded, because it could get cancelled and I don't want to invest my small amount of free time in entertainment that might not even finish. It does produce a self-fulfilling prophecy: people with the same mindset as me do the same, and then the ratings for something I (and probably they) are interested in aren't high enough and it gets cancelled.

What should worry them, though, is that it also led to the final step for them; I cancelled my Netflix and Disney+ subscriptions with no intention of renewing them around a year ago. The end result is that "TV series"-style shows are effectively dead to me; I've shifted my time on them mostly to novels (that are basically behind-the-curve on this trend, hopefully forever), followed by single-player video games, and finally movies. (Why didn't movies take the first slot? Because I'm only willing/able to give 30-60 minutes of continuous time to entertainment most of the time, and it's very unsatisfying to pause a movie to resume later.)

The continuous, immediate feedback on series performance coupled with a reputation of acting on that feedback immediately is killing the traditional television medium.

On top of all of that, Apple TV+ has the added albatross of requiring their hardware for the shows, as if they were somehow a siren song to get people more tightly nestled into their ecosystem. That dooms their shows to failure, at least amongst people who don't want to pay for overpriced hardware running software of degrading quality over the years. (I switched to Linux in 2016 because it was more reliable than my MacBook Air; being better than Windows isn't good enough anymore, especially when Linux has a greater catalog of software these days.)

The needs of Apple, Inc. weigh on their Apple TV division rather than helping it, and the sins of the streaming services against actually finishing a story further increase the trust deficit with Apple TV+. No amount of marketing is going to turn that around.


> On top of all of that, Apple TV+ has the added albatross of requiring their hardware for the shows

You do not need Apple hardware to watch Apple TV+ -- I'm watching it just fine on my LG TV with WebOS.


> On top of all of that, Apple TV+ has the added albatross of requiring their hardware for the shows

What are you talking about? You don't need an Apple device to watch their TV service. https://www.apple.com/by/apple-tv-app/devices/


`drop` is an optimization. You never have to call it if you don't want to; Rust will automatically free memory for you when the variable goes out of scope.

Rust won't let you do the wrong thing here (except if you explicitly opt in with `unsafe`, as you note is also possible in other languages). When you write normal Rust code, the compiler will refuse to compile code that uses memory incorrectly.

You can then solve the problem by figuring out how you're using the memory incorrectly, or you could just skip out on it by calling `.clone()` all over the place or wrapping your value in `Rc<T>` if it's for single-threaded code, or `Arc<Mutex<T>>` for multi-threaded code, and have it effectively garbage-collected for you.
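A minimal sketch of that "effectively garbage-collected" style (the variable names are just illustrative):

```rust
use std::rc::Rc;
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // Single-threaded shared ownership: Rc bumps a reference count
    // instead of making you reason about a single owner's lifetime.
    let config = Rc::new(vec![1, 2, 3]);
    let view = Rc::clone(&config); // cheap pointer copy, not a deep clone
    assert_eq!(Rc::strong_count(&config), 2);
    assert_eq!(*view, vec![1, 2, 3]);

    // Multi-threaded shared mutable state: Arc<Mutex<T>>.
    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let c = Arc::clone(&counter);
            thread::spawn(move || *c.lock().unwrap() += 1)
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    assert_eq!(*counter.lock().unwrap(), 4);
    // The memory behind each Rc/Arc is freed when the last clone drops.
}
```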

In any case, this is orthogonal to safety. Rust gives you better safety than Python and Java, but at the cost of a more complex language in order to also give you the option of high performance. If you just want safety and easy memory management, you could use one of the ML variants for that.


You don't really seem to be understanding the point I'm making, or perhaps don't understand what memory safety means. Or perhaps are assuming I'm a Rust newcomer.

> Rust won't let you do the wrong thing here (except if you explicitly opt-in to with `unsafe`

There is no "except if you" in this context. I'm talking about unsafe Rust, specifically. I'm not talking about safe Rust at all. Safe Rust is a very safe language, and equivalent in memory safety to safe Java and safe Python. So if that's your argument, you've missed the point entirely.

> In any case, this is orthogonal to safety.

No, it's not orthogonal - memory safety is exactly what I'm talking about. If you're talking about some other kind of safety, like null safety or something, you've again missed the point entirely.

> ... calling `.clone()` all over the place or wrapping your value in `Rc<T>` if it's for single-threaded code, or `Arc<Mutex<T>>` ...

This whole paragraph is assuming the use of safe abstractions. If you're arguing that safe abstractions are safe, then, well... I agree with you. But I'm talking about raw pointers, so you're missing the point here.


You're moving the goalposts. Your original post had zero mention of unsafe Rust. You have now latched onto this as somehow proving Rust is less safe than Python and Java despite also mentioning how Java also has unsafe APIs you can use, which nullifies even your moved goalposts.

Btw, Python also has unsafe APIs[1, 2, 3, 4] so this doesn't even differentiate these two languages from each other. Some of them are directly related to memory safety, and you don't even get an `unsafe` block to warn you to tread lightly while you're using them. Perhaps we should elevate Rust above Java and Python because of that?

[1]: https://docs.python.org/3/library/gc.html#gc.get_referrers

[2]: https://docs.python.org/3/library/ctypes.html

[3]: https://docs.python.org/3/library/_thread.html

[4]: https://docs.python.org/3/library/os.html#os.fork


No goalposts have been moved here. Rust is a programming language with both safe features and unsafe features. It is a totality.

And now you're linking me docs talking about things I already explicitly mentioned in my past comments.

You are so confidently ignoring my arguments, and so fundamentally misunderstanding basic concepts, that this discussion has really just become exhausting. I hope you have a nice day but I won't be replying further.


Yes, Rust is a language with safe and unsafe features. So are Java and Python (and you admitted that in your comments). So Rust is not any less safe than Java or Python by that logic, and the original point you made in the first comment is incorrect.

Actually, Rust is safer, because its unsafe features must be wrapped in the `unsafe` keyword, which is easy to grep for; you can't say that about Java and Python.
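For concreteness, a tiny sketch of that greppability: creating a raw pointer is allowed anywhere, but dereferencing it only compiles inside an `unsafe` block.

```rust
fn main() {
    let x = 42u32;
    let p = &x as *const u32; // making a raw pointer is safe

    // Dereferencing it requires an explicit `unsafe` block, so every
    // potentially memory-unsafe site can be found with `grep unsafe`.
    let y = unsafe { *p };
    assert_eq!(y, 42);
}
```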


I can't think of anything in either Java or Python that is memory-unsafe when it comes to the languages themselves.

You can do unsafe stuff using stdlib in either language, sure. But by this standard, literally any language with FFI is "not any less safe" than C. Which is very technically correct, but it's not a particularly useful definition.


The standard library is an inherent part of the language. There is no difference for the end user whether the call to `unsafe` is a language builtin or a standard library call. The end result is, all of those languages have large safe subsets and you can opt in to unsafety to do advanced stuff. And there isn't anything in the safe subset of Java / Python that you would need to use unsafe for when translating it to Rust.


Again, by this standard, literally any language with FFI is "unsafe". This is not a useful definition in practice.

As far as translation of Java or Python to safe Rust, sure, if you avoid borrow checking through the usual tricks (using indices instead of pointers etc), you can certainly do so in safe Rust. In the same vein, you can translate any portable C code, no matter how unsafe, to Java or Python by mapping memory to a single large array and pointers to indices into that array (see also: wasm). But I don't think many people would accept this as a reasonable argument that Java and C are the same when it comes to memory safety.


So you can see that the fact you can invoke unsafe code is not a good distinguishing factor. It is the other, safe part. Rust, Java and Python all have huge memory-safe subsets that are practical for general-purpose programming - almost all of the features are available in those safe subsets. C and C++ do not - in order to make them memory safe you'd have to disallow most of the useful features, e.g. everything related to pointers/references and dynamic memory.


Agreed. My personal experience is that Rust is safer than Python: a type error in interpreted Python code surfaces as a runtime error, but in Rust it's a compiler error, so you don't have an "oopsie" in production.

Much harder to write Rust than Python, but definitely safer.

(Rust vs Java is much closer, but Java's nullable-by-default types, and `throw`n errors that don't need to be part of a function's signature, lead to runtime errors that Rust doesn't have.)
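As a sketch of the null point: Rust models absence with `Option`, so the "not found" case has to be handled at compile time rather than surfacing as a null error in production. (The `find` helper here is hypothetical, just for illustration.)

```rust
// Hypothetical helper: find the index of a value, if present.
fn find(haystack: &[i32], needle: i32) -> Option<usize> {
    haystack.iter().position(|&x| x == needle)
}

fn main() {
    // The compiler forces both arms to be handled; there is no null
    // to forget about and trip over at runtime.
    match find(&[1, 2, 3], 2) {
        Some(i) => assert_eq!(i, 1),
        None => unreachable!("2 is present"),
    }
    assert_eq!(find(&[1, 2, 3], 9), None);
}
```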


I'm talking specifically about memory safety (when using unsafe/raw pointers). Being able to say "once I allocate this memory, the garbage collector will take care of keeping it alive up until it's no longer referenced anywhere" makes avoiding most memory safety errors relatively effortless, compared to ensuring correctness of lifetimes.


Please see: https://news.ycombinator.com/item?id=41720769

You can absolutely opt-out of lifetime management in Rust. It's not usually talked about because you sacrifice performance to do it and many in the Rust community want to explicitly push Rust in the niches that C and C++ currently occupy, so to be competitive the developer does have to worry about lifetimes.

But that has absolutely nothing to do with Rust's safety, and the fact that Rust refuses to compile if you don't provide it a proper solution means it's at least as safe as Python and Java on the memory front (really, it is safer, as I have already stated). Just because it's more annoying to write doesn't affect its safety; they are orthogonal dimensions to measure a language by.


Most memory safety errors come from not being able to test things like whether you are really dropping references in all cases, or whether your C++ additions are interacting with each other. C is not safe, but it is safer than C++. Rust is not going to stop all runaway-memory possibilities, but it isn't going to hide them like a JS GC does.

If your goal is to ship most users something that kind of works, then there are certainly complex solutions that will do that. If your goal is memory safety, that's more like every device working as expected, which is done with less bloat, not more.


I personally only use AMD (excepting one test machine), but Intel does have the best single-thread performance[1] so if you have some crufty code that you can't parallelize in any way, it'll work best with Intel.

[1]: https://www.tomshardware.com/reviews/cpu-hierarchy,4312.html...


The new Zen 5 has a much better single-thread performance than any available Intel CPU.

For instance a slow 5.5 GHz Zen 5 matches or exceeds a 6.0 GHz Raptor Lake in single-thread performance. The faster Zen 5 models, which will be launched in a couple of days, will beat easily any Intel.

Nevertheless, in a couple of months Intel will launch Arrow Lake S, which will have a very close single-thread performance to Zen 5, perhaps very slightly higher.

Because Arrow Lake S will be made entirely by TSMC on a superior "3 nm" process, it will have much better energy efficiency than the older Intel CPUs and also than AMD Zen 5. On the other hand, it is expected to have the same maximum clock frequency as AMD Zen 5, so its single-thread performance will no longer be helped by a higher clock frequency, as it was in Raptor Lake.


> in a couple of months Intel will launch Arrow Lake S, which will have a very close single-thread performance to Zen 5

Will they? The Intel Innovation event was postponed "until 2025"[1], so I assumed there isn't going to be any big launch like that in 2024 anymore. Arrow Lake S was supposed to debut at the Intel Innovation event in September.[2]

[1] https://www.intel.com/content/www/us/en/events/on-event-seri...

[2] https://videocardz.com/newz/intel-says-raptor-lake-microcode...


The Intel Innovation event was canceled to save money. This has nothing to do with the actual launch of future products, which are expected to bring more money. Intel can make a cheap on-line product launch, like most such launches after COVID.

Since the chips of Arrow Lake S are made at TSMC and Intel does only packaging and testing, like also for Lunar Lake, there will be no manufacturing problems.

The only thing that could delay the Arrow Lake S launch would be a bug so severe that it would require a new set of masks. For now there is no information about something like this.


Unless your workloads are not very cache-optimized, like most games; then AMD's 3D V-Cache CPUs take the lead.


The suggestion to use AR glasses with this keyboard computer feels very Ghost-in-the-Shell cyberpunk to me. Stepping onto a train, you find some guy with glasses sitting near the train door, staring blankly at the other passengers while typing furiously on the keyboard. It looks a bit creepy. After a moment it's revealed that he has an AR display and he's writing an email or whatever.

...why do I feel nostalgic for a cyberpunk dystopia?


We seem to have ended up with the dystopia, but not the cyberpunk


> writing an email or whatever

The only certainty is that email will never die


Nor would I want it to. It's a great medium for messages that may or may not be important and that, in some cases, I want to archive.


I love email, but as SSH was to Telnet, we need a 21st-century successor.


I think IRC would slightly enhance the cyberpunk aesthetic, and is likely to give strong competition to email for the 'eternal communication protocol' contest!


That would actually be seriously cool though.

Even with a full-sized VR headset I'd be willing to spring for it.


Keyboard? Why no TapXR bracelet instead?


Have you tried using it? I’ve read it isn’t so useful.


Very Hiro Protagonist gargoyle vibes.


That would also be true with analog film when you start reaching the film grain size.


I'd be curious about the fidelity floor in analog vs digital optics.

I'd guess(?) that you might be able to do more information reconstruction from analog + lens parameters + film parameters than from digital + lens parameters?

Simply by virtue of digital being quantized at some point.

(But signal processing is far outside my area of expertise, so honestly curious)


https://www.adox.de/Photo/films/cms20ii-en/ is the current max res standard for bw film. Try getting that from your monochrome Leica


The year in this link is very important. In the following year, the Elm team decided to not pay attention to the maxim "perfect is the enemy of good" and crippled their FFI story, making it impossible to actually use the language in production[1].

I would recommend steering clear of a language that makes these sorts of decisions -- that certain features are off-limits to the regular developer because they can't be trusted to use them correctly -- because if you find yourself in a situation where you need them to solve your problem, you're trapped. I included Go in the set of languages I would recommend steering clear of for years, due to their decision to allow their own `map` type to be a generic[2] type while no user-defined types could be[3], leading to ridiculously over-verbose codebases, but they have finally corrected course there.

If you're looking for something kinda like Elm but not likely to break your own work in the future, I'd recommend checking out ReasonML[4] instead.

[1]: https://lukeplant.me.uk/blog/posts/why-im-leaving-elm/

[2]: https://go.dev/blog/maps

[3]: https://go.dev/doc/faq#beginning_generics

[4]: https://reasonml.github.io/


It's incredible that on just about every piece I've ever read about Elm since they made that decision, this has been the first, second, and third comment. Wanting to try Elm for myself, I disregarded this advice, and.... immediately ran into the exact same problem! I've never seen such a promising project so conclusively killed by pure developer pigheadedness. And, amazingly, they've never backed down at all. They don't seem to mind that they maimed themselves.


A purity pledge is very typical of cults. It's both a filter and an enforcement mechanism.

This may not apply to Elm. But I imagine it can feel easier and more rewarding to manage a community that's more like a cult than a typical free-for-all open source project.


I think it's probably harder and less rewarding to manage a community where you're constantly taking flak for a technical decision people don't like (and those people generally don't engage with the pros and cons of said decision!)


Out of curiosity, what did you try to do that you hit that issue right away? I've been writing Elm apps as side projects for years, and never even come close to the kernel thing being a problem. My apps are mostly graphically undemanding games and helper tools. What are the types of applications where this becomes an issue right away?


In my case, it was a regex supplied by the user. Elm 0.18 had no support for constructing a regex at run-time, so I made a package that wraps native RegExp. When 0.19 was released, I couldn't upgrade because of those 5 lines. The regex package eventually got `Regex.fromString`, so I could've upgraded. But at the time I was bumping against limits accessing Intl, and I really hated the prospect of begging some maintainer for access to a browser API.

Elm was the most fun I ever had developing a browser app. Then they decided I shouldn't be allowed to develop a ShootMyFoot module, and it stopped being fun overnight.


> So I made a package that wraps native RegExp. When 0.19 was released, I couldn't upgrade because of those 5 lines. The regex package eventually got regex.fromstring(). So I could've upgraded.

The last commit to the regex package was in May 2018, and Elm 0.19 was released later in August. https://github.com/elm/regex/commits/1.0.0/

So it seems like by the time of the official release you could have replaced your five lines with `Regex.fromString`.

But the missing Intl API is definitely a huge pain, and I understand that you were switching away if you needed it extensively. Or expected to want other sync APIs wrapped.

A common way to solve something like this is with proxy objects like in https://github.com/anmolitor/intl-proxy but it does not give access to every feature in a nice way.

I went the route of least resistance and built the Elm compiler without the Kernel code check. But in the past few years I hardly needed that anymore.


Thanks for the correction regarding the Regex timeline. Git commits are generally more reliable than my memory.


Yeah, I really feel this is a good way to divide developers into two types. There are those like me, to whom the philosophy of systematically discouraging foot guns sounds kind of brilliant. To put it in flattering terms: pay a short-term cost for long-term, hard-to-perceive but very real benefits (making certain categories of errors completely extinct). To put the other side in flattering terms: they're not letting the perfect be the enemy of the good, and never compromising on their vision because the tech is holding them back. I think the latter is definitely dominant in the discipline. I'm glad that at least Elm carries the torch for the former, though.


The Elm people themselves worked with the impure browser api all the time. They just don't want me to do it. So it's not even a foolish consistency but just base gatekeeping. Turns me right off.

If you want to divide into two camps how 0.19 was received I'd say it's people who were maintaining a substantial Elm project on the one side and people who weren't on the other. Maybe if you're carrying a torch don't drop it on the ecosystem.


As a Elm non-user, naive question: would it be possible to use the JS interop (ports) for this, at cost of some clunkiness?


Yes, ports or custom elements are the recommended options, https://github.com/elm-community/js-integration-examples

There are a bunch of other options/workarounds/hacks depending on the need. E.g. using getters or creating proxy objects https://github.com/anmolitor/intl-proxy, or event listeners, or postprocessing of the generated JS code, but those shouldn't be the first idea to reach for.


Yes, the answer is always ports when this topic comes up. Unfortunately one can only pass some basic types through ports. Passing a RegExp or an Intl.DateTimeFormat is not possible. It needs a wrapper on the Elm side, and the Elm people decided I can't be trusted to write this wrapper myself.

Back then if I wanted to search a list of strings for entries that match a regex supplied at run-time, I'd have to pass the whole list through a port, filter it outside, then pass it back in. Rather than just using the filter function. Ports are asynchronous messaging which means I have to restructure the whole code and wait for a state change when the filtered list is returned.

Let me cite the Elm docs on ports[1]: "Definitely do not try to make a port for every JS function you need."

So! Where does that leave me? Unsupported, that's where. Because I need a JS function. In 0.18 unsupported was fine. They broke it for 0.19 and the project died. Maybe it was dying of other causes anyway, but that one action sure drove people away.

[1] https://guide.elm-lang.org/interop/ports


Most likely they didn't understand Ports and immediately wanted to reach for kernel stuff.


It's odd and rude to declare someone incompetent from so little information. Especially when the same problem is widely reported by other developers.


If time is important to you, please correct your statement.

> The year in this link is very important. In the following year, the Elm team decided to [...]

The blog post was released a year after the official release of Elm 0.19 where the access to native code was further restricted in the official compiler.

It was not something that happened without ample prior notice; see for instance a post [1] by the Elm language creator in March 2018 in which he explains his reasoning for the upcoming change, or another in March 2017 where he announced the intended change [2]. Even in 2015 he actively discouraged people from relying on these undocumented features and other hacks [3].

I also was not happy with that choice and felt the pain of something being taken away that was possible before, but that didn't stop me from using Elm at work nor from using it for fun.

So far I haven't found an alternative that I liked better, so I will stick to it.

[1]: https://discourse.elm-lang.org/t/native-code-in-0-19/826

[2]: https://groups.google.com/g/elm-dev/c/bAHD_8PbgKE/m/X-z67wTd...

[3]: https://groups.google.com/g/elm-dev/c/1JW6wknkDIo/m/H9ZnS71B...


> I would recommend steering clear of [Go] for years, due to their decision to allow their own `map` type be a generic[2] type but no user-defined types could be[3]

I think this is very, very different. First, because Go didn't have a cultish purist aversion to generics: no banning people, no going after them even outside of community spaces. But on the technical side, maps (and slices and channels) were not gated for use by std only; they were publicly available to anyone. Not having generalized a feature is not the same as banning it. There was not even Go syntax to express it. Same as arrays in C, no?

That said, I’m not challenging the recommendation to stay away or not - generics was (and still is, may I add!) quite a pain point with the language. I’m personally quite invested for other reasons (concurrency, networking, std lib), but people come to different conclusions naturally.


ReasonML crippled themselves, too, by splitting the already tiny community between Reason and ReScript

Here's a good (neutral!) write-up: https://ersin-akinci.medium.com/confused-about-rescript-resc...


If you're a front-end developer, you should check out ReScript[1], supposedly a JS-oriented successor of ReasonML and developed by the ReasonML team.

[1] https://rescript-lang.org/


If you want to try TEA, but not Elm, I recommend Scala.js with Tyrian[1]. Scala.js is a wonderful, mature project and Tyrian gives you the Elm architecture in a very pragmatic way.

[1]: https://tyrian.indigoengine.io/


Are you actually using this in a non-trivial application?

The recurring complaint I hear about Scala is bad compile times. I haven't used the language much, so I'm not sure if this only applies to libraries that heavily use compile-time metaprogramming.

But I really love that with modern tooling we can get a sub-second editor-to-browser feedback loop, even for a three-year-old medium-large project on modern hardware. This was one of the primary reasons I avoided Kotlin+Gradle's JS target; among other issues, the feedback loop was 2-3x slower.


I have a medium sized project with it and the initial compile can be slow, however every recompile usually has the page updated as I switch from my editor to it. Not as fast as TS, but worth it for the much better programming language. Tyrian also makes it trivial to set up hot-reload with preserved state.


Ok, this is interesting. I'll try this out in a small project.


Reason is a lovely, OCaml-esque language in the same functional vein, but I would say Gren feels like a great spiritual successor to Elm.

https://gren-lang.org/


Gren is a fork, not a spiritual successor. And not only did it not fix that problem, it made it worse: https://news.ycombinator.com/item?id=36275171


Perhaps because it's not the big problem a few people make it out to be.


Yeah, I just read this further down in this topic. Really bummed about it, Elm always seemed so promising, and I thought a healthy fork was what was needed.

I just don’t understand the reasoning for this choice.


along those lines, Zokka is a fork of elm that appears to be mostly dedicated towards bug fixes that Evan (the creator of elm) refuses to acknowledge or merge.

https://github.com/Zokka-Dev/zokka-compiler

edit: there's also Roc, https://www.roc-lang.org/, a language started by Richard Feldman who I believe was a former elm core team member. I think Roc aims to accomplish different things than elm, but definitely feels like a spiritual successor


The commitment to bidirectional compatibility means that Zokka can't fix the problem that GGP was talking about.


“making it impossible to actually use the language in production”

Absolutely false.


Please refer to my comment here: https://news.ycombinator.com/item?id=39551017


> making it impossible to actually use the language in production

Just FUD. I've been on a big team writing a webapp used by hundreds of thousands each day. While it's not necessarily my own first choice, it was great and the least error prone piece of software I've written in my career.


It is impossible if you're being responsible. You don't choose a technology that could potentially block you from solving problems in the future unless it brings a huge value to you.

Elm's value proposition is mostly being a functional language with an opinionated MVU library baked in, so you can reproduce that value with a better functional language and a similar MVU library in that other language. That means it never actually clears the value bar relative to the risk you take on if you need a browser feature it doesn't support and actively prevents you from accessing.


Now you're moving the goalpost. You said it was impossible to use in production. Which is clearly wrong.


I am just clarifying why I consider it impossible.

Production is not some place you're supposed to cowboy code, but instead have a reasonable expectation that you will be able to continue supporting it for as many years as it operates, and it's impossible for anyone to responsibly use technology with known limitations that have bitten other real engineering teams that they can find zero workarounds for.

If you don't consider that an impossibility for a production environment, then I certainly wouldn't want to work with you on a team with production responsibilities.


Zero workarounds seems overly dramatic, here are a few: https://news.ycombinator.com/item?id=39567011

If you want to rely on them for every possible future requirement or rather want to pick another tool is another question :D

Anyways, just building the compiler without that check was also not that hard.


I've also had the pleasure of maintaining an Elm codebase. It was filled to the brim with state update bugs. You could never trust what you saw in your browser. Nobody in the team understood how the codebase worked. I spent days implementing some extremely simple changes, which barely worked (to the same standard as the rest of the codebase). Never again.


I don't blame a Rust codebase for being bad just because I don't know Rust.


So, your positive anecdote was somehow valuable for the discussion, but my negative anecdote had absolutely no value? That's how anecdotes work?


My positive anecdote proved that Elm is usable in production, which was the FUD I was refuting. Yours were irrelevant in that context.


I've seen the word FUD 4 times on this page already. The "elm defense force" sounds a lot like cryptobros defending their rug pull. It's such an odd piece of language to adopt over people not liking some javascript compiler for perfectly valid reasons.


Funny that you have this impression.

To me it looks more like the elm-haters are out in force and the Elm users don't participate anymore on this site.

Many hateful comments here below a historic post prompted me to create a new account after a detox phase of >10 years.

So far I try to correct a false statement in the (as of writing this) top comment [1] or add a more neutral view [2]. And maybe I will add more of my personal opinion in the future - or participate in other interesting topics depending on my mood.

[1]: https://news.ycombinator.com/item?id=39555542

[2]: https://news.ycombinator.com/item?id=39556395


I’ve been working with Go for 10 years and I have no idea what you mean by maps being generic before generics came along, nor did maps ever cause me to have over-verbose code bases. The links didn’t seem to help.


Trying to summarise what they were likely saying: Not having generics makes code verbose because you end up copying and pasting your library code to make it handle different types. The complaint about maps being generic was that the Go team clearly saw a need for generics (as they implemented them for maps and some other types) but decided that others wouldn't need them. So they had one rule for them and another rule for everyone else, which people don't like.


How were maps generic before generics, I am not sure I understand. Didn’t you still have to declare them as map[type1]type2?


So when something is generic, it means it is a type parameterized by other types. So the type map[string]int is indeed generic, but no language users could create their own type btree[X]Y, for example.

Essentially, the go developers saw a need for generics and then decided that only they get to create them, where most modern language developers either make them available for everyone to implement or don't add them at all.
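To make the distinction concrete (sketched in Rust rather than Go, since pre-1.18 Go had no syntax for this at all): when a language opens generics to everyone, a user-defined generic type is parameterized exactly like the built-in map.

```rust
use std::collections::HashMap;

// A user-defined generic type: parameterized by K and V,
// just like the built-in map type.
struct Pair<K, V> {
    key: K,
    value: V,
}

fn main() {
    // Built-in generic: HashMap<K, V>.
    let mut m: HashMap<String, i32> = HashMap::new();
    m.insert("answer".to_string(), 42);
    assert_eq!(m["answer"], 42);

    // User-defined generic, same parameterization.
    let p = Pair { key: "answer", value: 42 };
    assert_eq!(p.key, "answer");
    assert_eq!(p.value, 42);
}
```

Pre-generics Go users who wanted their own `btree[X]Y` had to copy-paste per type or fall back to `interface{}`; the built-in `map[K]V` was the only parameterized container available.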


I have to agree that this is FUD. So, it’s impossible to use the language in production because FFI is async (ports)? lol, come on guys.


Sorry, but what is FFI???


Foreign function interface

