Rust was initially implemented in OCaml, even. But that said, it gained a lot more Haskell-like features later on; we ended up with type classes, not ML modules, for example.
EDIT: also, strictly speaking, Rust doesn't use Hindley-Milner, and we don't have parametric polymorphism.
Just because some bounds are parametric does not mean that all of them are; specialization violates parametricity, for example. While that's not in Rust proper yet, it did require making sure that dropck could handle it, and the intention is to ship it.
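To make that concrete: even stable Rust lets a "generic" function branch on the concrete type via `Any`, so its behavior is not uniform across instantiations. A minimal sketch (the function name `describe` is mine, not an API from the thread):

```rust
use std::any::Any;

// A generic function whose output depends on which concrete type it
// sees: the `Any` bound lets the body ask "is this a bool?" at runtime.
fn describe<T: Any>(value: T) -> &'static str {
    let value: &dyn Any = &value;
    if value.is::<bool>() {
        "special-cased for bool"
    } else {
        "generic path"
    }
}
```

Here `describe::<bool>` and `describe::<u32>` take different paths, which is exactly the non-uniformity that parametricity rules out.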
I don't doubt you know more about Rust than I do, but this seems pedantic to me. Kind of like correcting someone for pronouncing forte as "for-TAY" instead of "fort" or telling someone "well technically you don't actually touch _anything_ because of the electrostatic force."
If you ask all the developers out there to describe parametric and ad-hoc polymorphism I think a vast majority would give the example of a type parameter (e.g., Java generics or C++ templates) for parametric polymorphism and Java interfaces or Haskell's classes for ad-hoc polymorphism. I can even quote directly from Pierce (Types and Programming Languages):
> Parametric polymorphism, the topic of this chapter, allows a single piece of code to be typed "generically," using variables in place of actual types, and then instantiated with particular types as needed. Parametric definitions are uniform: all of their instances behave the same.
I think Rust and the aforementioned languages fit this definition. Outside of a specific compiler issue, claiming otherwise seems to only confuse the issue, especially for those just casually reading and not familiar with programming language theory.
> If you ask all the developers out there to describe parametric and ad-hoc polymorphism I think a vast majority would give the example of a type parameter (e.g., Java generics or C++ templates) for parametric polymorphism and Java interfaces or Haskell's classes for ad-hoc polymorphism. I can even quote directly from Pierce (Types and Programming Languages):
In practice it is often useful to drop our demands for rigour by a bit. But any reasonable definition of parametric polymorphism _has_ to exclude C++ templates.
C++ templates are much closer to the definition of ad-hoc polymorphism.
Theorems for Free means that, eg, the type of the identity function (forall a . a -> a) guarantees that even if you supply it a bool, it can't magically turn into an implementation of `not`, or multiply all integers by 2.
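Here's a rough rendering of that guarantee in Rust, as a sketch (the function names are mine): with no trait bounds, the type parameter gives the body no capabilities.

```rust
// With no bounds on T, the body cannot inspect or transform the value:
// setting aside panics and escape hatches like `size_of`, the only
// value of type T it can produce is the one it was given.
fn id<T>(x: T) -> T {
    x
}

// A concrete signature promises nothing of the sort; this body is free
// to negate its input.
fn flip(b: bool) -> bool {
    !b
}

// Trying the same thing generically would not compile:
// fn not_id<T>(x: T) -> T { !x }  // error: cannot apply `!` to type `T`
```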
This isn't Rust specific; it's just the definition of parametric polymorphism. Yes, many programmers may give you a slightly incorrect definition, but especially in an article about Haskell, I'd expect a bit more precision.
Which doesn't mean it's terrible to get it wrong; I just want to be clear about what Rust does and does not have. It matters because these kinds of definitions either hold or they don't; "sorta kinda mostly parametric" isn't the way folks tend to think about this. Which makes sense, because they're interested in proofs and formal definitions.
Yes, Pierce is great! But the issue is:
> all of their instances behave the same.
This is not true in Rust, as shown in my comment and the other replies. We have accepted APIs that break this property, and we have other language features on the way that break this property.
You can do the same in Java and C++. This may violate a strict definition of parametricity (I've read the definition from a few different sources and am still mulling it over), but I'm not sure how this relates to parametric polymorphism.
The _behavior_ of this function is the same for all types, the _output_ is different. That is, for all types, the function body is the same. Maybe there is a more abstract definition of parametric polymorphism you are using, but as I said above, this seems pedantic.
The internal behavior can trivially be made different just by operating on the value:
    fn foo<T>() -> usize {
        let x = std::mem::size_of::<T>();
        if x % 2 == 0 {
            panic!();
        }
        x
    }

    foo::<u8>();  // returns 1
    foo::<u16>(); // panics
That the body is the same isn't necessarily the issue at hand (though of course it's still a useful property in its own right), what matters is that reasoning about what this function will do requires knowing which specific types it is used with.
> this seems pedantic
The first code example is merely the simplest demonstration, in the wild I would expect lots of `size_of` in generic contexts to result in type-dependent behavior somehow.
I'm not saying this is necessarily a very bad thing, nor do I have strong opinions on the usefulness of strict parametricity (which AFAIK Haskell doesn't have either). But in discussions relevant to parametricity, it's useful to know the ways a given language can subvert it (and Rust will further encourage it to be subverted, once the specialization feature is developed).
> I'm not saying this is necessarily a very bad thing, nor do I have strong opinions on the usefulness of strict parametricity (which AFAIK Haskell doesn't have either). But in discussions relevant to parametricity, it's useful to know the ways a given language can subvert it (and Rust will further encourage it to be subverted, once the specialization feature is developed).
In practice Haskell seems to have pretty strong views on enforcing parametric polymorphism, doesn't it?
Haskell gives you ad-hoc polymorphism via typeclasses and there are also existential types and GADTs etc, if you need those. But once you declare something to abide by parametric polymorphism, you are expected to keep your end of the bargain.
(Yes, you could violate the pact via some unsafePerformIO trickery, but that's considered bad form.)
The whole point of parametric polymorphism (as opposed to eg ad-hoc polymorphism) is that just from reading the type of a function you get a guarantee about the limits of its behaviour.
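As a rough Rust illustration of reading limits off a signature (the function names `first` and `total` are my own):

```rust
// No bounds on T: from the signature alone, a reader knows this can
// only select, drop, or rearrange elements -- it cannot fabricate a T
// or inspect one.
fn first<T>(v: Vec<T>) -> Option<T> {
    v.into_iter().next()
}

// Concrete type: the signature constrains almost nothing about what
// happens to the numbers inside.
fn total(v: Vec<i32>) -> i32 {
    v.into_iter().sum()
}
```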
If your functions routinely violate those limits as a matter of course, those guarantees are useless.
I'm all for abusing notation and terminology a bit when it makes sense in practice, but loosening our definitions too much risks making them useless, too.
In practice in Haskell, I often only need a helper function for eg integers, but when the implementation allows, I will give the function the most parametric-polymorphic type that fits, because that makes the reader's job easier:
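A Rust analogue of the same practice, as a sketch (the helper name is mine): even if the function is only ever called with integers, the general signature tells the reader more and constrains the body more.

```rust
// Only ever called with pairs of integers in practice, but given the
// most general type that fits: the signature alone rules out mixing up
// which element came from where, and the compiler rejects any
// accidental integer-specific operation inside the body.
fn swap_pair<A, B>((a, b): (A, B)) -> (B, A) {
    (b, a)
}
```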
Just like an invocation of `filter` is easier to read than a for-loop, because `filter` is strictly less powerful than the loop.
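For instance, a minimal Rust sketch (the `evens` helper is hypothetical):

```rust
// A `filter` pipeline is strictly less powerful than a hand-written
// loop: the reader immediately knows elements are only kept or
// dropped, never transformed, duplicated, or reordered.
fn evens(upper: i32) -> Vec<i32> {
    (1..=upper).filter(|n| n % 2 == 0).collect()
}
```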
(In addition, the more general type serves to give the compiler a hint, so it can yell at me in case I accidentally do an operation on the integer that I didn't mean to.)
> However, in 2020, the premier system’s language is surely Rust.
Yesterday I was speaking with a friend who runs a small company doing contracted embedded programming (the embedded OS is often Linux but often not) and he'd never even heard of Rust. Maybe it's a little lower level than what's meant by "systems" here but still, the article's statement is just delusional.
The largest vocal community today for a systems language (meaning down to the OS level with strong determinism guarantees, not the redefinition that makes Go a "systems" language in the sense of server systems) is definitely Rust. But in practice, no one is using it because their systems predate it, don't have compilers for it, or they simply can't use it due to their problem domain (it has an insufficient history, they can't contract support with the compiler team, or whatever).
I can go to Green Hills and get a contract that guarantees their compiler will be available and supported for my target platform for the next 10+ years. Rust is still a moving target with a single implementation, and no (present) way to get the guarantees needed for the embedded domains I've been involved in. I'd like to see Rust experimented with there, but it won't be the "go to" language for quite some time.
> Rust is still a moving target with a single implementation, and no (present) way to get the guarantees needed for the embedded domains I've been involved in.
Absolutely. I've said it before and I will say it again: Rust will not be taken seriously as a systems language until it is specified. Years ago I was laughed at for this, but as I understand it, that work is now underway. I am grateful for this.
I forgot to preface my statements with: I love Rust. Or, more so, I actually love OCaml, and I love OCaml-inspired languages, and I would love to write an OS in one (save ReactOS).
That being said, the desire to label it as the "premier systems language" (and no fault of the author, many before have tried to say this) is entirely not grounded in any sort of reality.
I mean, if we set aside the "intellectual superiority" of Haskell and other FP languages -- (I've used them all) -- in practice, with teams of other engineers, and in production where it counts the most, Go has outshined them all. So "primitive" is surely one way to describe it.
I fail to see what else you could call the Go type system but "primitive": it is roughly what you would get from a language designed in the 70s, and a lot of advances have been made since.
Whether or not these advances make a difference (positive or negative) in production is a different question, but I'd rather not use "how much a language is used" as an inherent quality metric for a programming language. Otherwise, I fear we might all end up doing old-school Java, Javascript, Visual Basic and ABAP.
> "how much is a language used", as an inherent quality metric for a programming language.
Define quality. Because what I've realized with the PL-elitist crowd is this almost always boils down to "the ability to be expressive" which is fine but it certainly isn't a complete metric, and, to borrow a common refrain, we've had expressive languages in the 70's as well, we've had s-expressions for a while now. What about other metrics, such as "ability for mass amounts of programmers to program the computer to do the correct thing?" and "ability for programmer to maintain said program?" -- you know, real world quality. My point is, people's usual quality metrics are often incomplete or narrowly defined.
In either case, if Go is so primitive, then why didn't the 70s produce a language like Go? Why did developers who created languages, operating systems, and systems software in the 70s not make Go until the 2000s (you do know who created Go, right?)
I just don't buy this argument, especially having used those "advanced" type systems for many years, sorry.
I mostly agree with you, but the claim was that Go's type system was 70s-era primitive. And I think that claim is probably correct. The type system of Go isn't all that sophisticated or "advanced".
But all that says is that the magic of Go isn't in the type system. The couldn't-produce-it-until-the-2000s quality isn't in the type system. The thing that makes Go more used in the real world than Haskell isn't the sophistication of the type system.
I am not sure that my message was completely clear: my point was that despite Go having an objectively old/primitive type system, compared to modern state of the art, that does not preclude it from being a successful "production" language. The value the type system brings is, in my experience, rarely the most important factor in the success/failure of a software project.
My second point was that the popularity of Go (or any language) does not mean that it is inherently a good language, where good can mean, productivity, defect rate, .... Adoption of a language depends on many things, I believe quality of the tools, availability of libraries, and good old marketing (the Google aura around Go has helped in adoption) are often more important. The effort to make an initial, crappy, proof-of-concept is in my experience often decisive for the choice of language. The cost of long term maintenance (where a good type system might bring value) is rarely considered.
For Go, it also hasn't hurt that some of the core developers were being paid by Google to work on it.
The metric of the "ability for mass amounts of programmers to program the computer" is indeed a very relevant one, and I think Go shines there. When reading Go code, I typically find it easy to understand what the goal of the code is. However, "to do the correct thing" is often less of a success: there are off-by-one errors, corner cases that have bugs, and reimplementations of the same basic logic. If you have a lot of developers available, you simply have them fix these issues, and your Go project becomes a success.
My point is, that in this scenario, the language itself is of minor importance: success is determined by the fact that you have access to a large pool of relatively cheap labor of reasonable quality (Google or VC funded startups are perfect examples). A 20 or even 50% productivity gain by having a "better" language would simply not matter for the outcome, certainly not if it takes your labor longer to get up to speed.
So, in that context, compared to the alternatives, the quality of Go is relatively high. But I do think that, if during the design of Go they had taken a couple of basic concepts from ML like sum types, polymorphism, and a decent module system, they would have ended up with a language that would make the current Go look unproductive and poor quality, even in that same context. It wouldn't make a difference to Google, but it would make the life of many a developer a little bit more productive and joyful.
And yes, I do know who created Go, but arguments by authority carry little weight.
> When reading Go code, I typically find it easy to understand what the goal of the code is.
I have the opposite experience. At a micro-level you might be able to tell what a line of Go code is doing, like manipulating this variable into that state, etc., but it's such a tedious language, drowning the reader in code, that it can be very hard to see the forest for the trees.
As an example of a deliberately simple language that is mostly easy to read, I would perhaps cite Erlang.
In any case, I agree wholeheartedly that Go would benefit from sum types. (Though given the weak type system otherwise, you couldn't even implement a useful `Maybe` in Go. You'd really want parametric polymorphism to make sum types shine.) A way to express immutability would also have been welcome, especially given that they pride themselves on concurrent programming.
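For contrast, a minimal Rust sketch of what generic sum types buy (the `parse_port` helper is my own invention): `Option<T>` is a sum type defined once and reused uniformly for any payload type, which is exactly the combination Go lacked.

```rust
// A generic sum type in action: Option<u16> makes "no valid port" an
// explicit case the caller must handle, instead of a sentinel value.
fn parse_port(s: &str) -> Option<u16> {
    s.parse().ok()
}
```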
There's a difference in mentality between people who make either argument.
The "PL-elitist" crowd looks at programming languages from a computer science perspective, while the "PL-pragmatist" crowd (for lack of a better term) looks at it from a workplace perspective. At least that's how I see it.
In other words, one group writes tooling and libraries; the other writes business applications using said tooling and libraries.
One paradigm invites complex, highly expressive code as to minimize the chance of unforeseen errors (even if just by having a smaller codebase), and as to provide a clean and powerful interface that "does more with less".
The other invites simple, highly maintainable code as to minimize the training overhead of new employees and the chance that their lack of rigor might end up accidentally introducing or reintroducing bugs. Furthermore, a simpler language means that new teams can hop onto a pre-existing project faster, because they don't have to scrutinize each LOC to the same extent.
I might be wrong on any or all of this, but that's how I perceive it.
There are other languages I can take out of the bag like Mesa/Cedar, Interlisp-D, Smalltalk-80, although they are already on the 80's borderline, which then also brings Ada, Standard ML and Object Pascal into the picture.
"How much a language is used" is a proxy for "how useful is the language for actually writing programs". And, really, what else do we mean by the "quality" of a programming language? If it's beautiful, but not as good for actually writing programs, then it has quality as a work of art, but less quality as a programming language.
There's other reasons languages become popular such as platform support, marketing, what people learn in school or as a first language, and what gets you hired. Popularity does not necessarily imply any sort of superiority of a PL usage compared to others.
I think you are going too far here. Taken at face value, your post says that for programming languages, there is zero correlation between popularity (use) and superiority (fitness for use). That implies that everyone choosing a language for a project is either stupid or ignorant (not the same thing).
That may offer consolation to those who think that certain languages are the best, and who wonder why those "best" languages are used so little. But I don't think it's true. I don't think programmers are stupid, ignorant, or sheep. I think if a tool offers them advantages, they'll use it.
Perhaps I should qualify: significant advantages. There is a cost to switching. The switched-to language has to be enough better to pay back the cost.
I remember what Sun did with Java back in the day. No, I do not agree that Go has been massively marketed.
I will admit, however, that Go was created and supported by Google, and that mattered. It mattered, not in the publicity, but in the tools and libraries (and maybe even in the tutorials).
Compared to C++, Haskell, Python and the long list of other languages not directly backed by BigCos.
So your point is that because Java has been marketed more than Go, Go is not massively marketed? If you take the set of all languages and make a sorted list of how many hours and dollars were spent to market each, I would be very, very surprised not to see Go in the top 5.
C++ has vendors behind it. Those vendors make actual dollars selling C++ (or at least tools that work on C++.) When Microsoft markets Dev Studio (with C++ support), is that marketing C++? How about C#?
In contrast, what does Google do to market Go? Put up a website, and put out notices of the next version? When have you ever seen an advertisement for Go? For a tool that supports Go? How about for C++, C#, and Java?
I'm being very clear: a language with a big company behind it, the likes of Google, Facebook, Microsoft, etc. I have defined it two times now. That there are some vendors which make money with a language is very different.
Advertisement is a subset of marketing. Back in the day when Go was "new", you saw daily posts about it on many programmer-focused communities. Google bankrolled the entire development of Go. Blog posts by many Googlers talked up the language. Google sponsored Go events. And the positive image of Google itself at the time of Go's initial introduction helped.
You try to weasel me towards C# and Java, two languages I never even mentioned. But since you want to hear about it: yes, both of these were also heavily marketed.
But if the big company behind it doesn't do anything, what difference does it make?
Or, if the big company doesn't do anything more than is done for other languages (Rust, say), what difference does it make? Is Rust marketed in the same way Go is? Per your definitions, I would say yes, even though the Mozilla Foundation isn't a big company.
But then, is Haskell marketed in the same way? I see lots of posts on it, at least here. Lots of blogposts on it. But there's no big company, or even foundation, behind it. Is that marketing?
Why do you define a set of actions as marketing when people who work for Google do it, but not when others do it?
Yes, considering BigCo had internal resistance against it for years, and only just recently started to adopt it more for its own internal uses. Comments like this show the clear ignorance of the context surrounding the creation of the language.
Ah yes, Google bankrolled the entire development for no reason. Get real. Go was specifically created so that novice programmers at Google could write safer code than C without actually having to learn anything new.
You really have to stop assuming "intellectual superiority" when people talk about more advanced / less advanced stuff.
Go has a type system with limited capabilities. C has a type system with limited capabilities. Both are quite successful. If talking about C that way didn't show "intellectual superiority", maybe it would be an incorrect label if I were to talk about Go the same way?
Would you please stop posting supercilious dismissals of the community? You've already done this more than once, and that's a very bad sign for a new account.
(Edit: Perhaps I should explain that last bit. Actual new users never comment this way—only seasoned users, and the ones who do it are usually using new accounts because they've behaved badly with earlier ones.)
I can try to be more direct in pointing out the obvious biases that we have on this echochamber, if you think that would spark more thoughtful debate? If you think the tone of the post was wrong, I can address that, but I don't really think that it's fair to suggest we avoid talking about this kind of thing?
My other comment that you're calling a "supercilious dismissal" also received 31 upvotes and some interesting responses, fwiw.
What people call "obvious biases" are typically what they themselves are choosing to select out of the statistical cloud of posts that get made here. The bias at work in making such a selection is one's own, but we project it onto the community, and then create a superior image of ourselves by posturing above it, which is what I mean by supercilious. This is a drama between oneself and oneself, which has nothing to do with real conversation. It's a way of changing the subject to oneself. To be fair, you're hardly the only one doing it; it's quite common, but I have to tell you that it tends to be low-quality commenters who do this.
It frequently comes with labels like "echochamber" and "groupthink", which are also ways of elevating oneself as the noble freethinker standing against the mob, or some such image. Actual noble freethinkers never behave like this, so the whole thing is incongruent—it's simply a way of flattering oneself, which (ironically, because we all feel so unique) is one of the most repetitive and tedious things that people do. That's why we shouldn't do it on a site that's trying to have interesting conversation.
Another angle: people seem to posture above the community when they feel like they need to defend themselves against it. Presumably they want to defend themselves against some claim of i-am-very-smartness that they feel is emanating from the community, so they puff themselves up and put the community down by way of compensation. In reality, though, no one is making such a claim. HN isn't making such a claim—HN isn't a person to begin with—and certainly the people running HN are making no such claim. It rather arises in the psyche of some readers, for whatever internal reasons. The internet can be crazy-making that way.
Thanks for the thoughtful response. I couldn't possibly disagree more with most of your sentiments, but I appreciate you taking the time to write them out. To discount the validity of "echochambers" and "group think" is a pretty dangerous strategy, I think. If you were to poll HN users on what programming languages are successful/worth using, as a topical example, you would (and do/have) get very different responses from, say, /r/java (which is an extreme example and also an echochamber, but kinda proves the point).
To then go on to suggest a "no true scotsman" example of nobility wrt pointing out echochambers is also a pretty dangerous sentiment. Pointing out an echochamber is an echochamber is almost so obvious it's barely worth noting - to suggest there is anything "noble" about being a part of and/or not being aware of an echochamber is silly. Messageboards are echochambers. We are here because we have largely similar interests, and the posts on this board are pretty generally related to those interests. Obviously it is an echochamber, and obviously as I am posting here I am a part of it. I'm not sure how that means it is also not worthwhile to sometimes point out/poke fun at the obvious echochamber we're both a part of.
I was amused at that. It's clear this author is not Go's constituency. You cannot use types to ensure consistency in Go, you just have to write the algorithm correctly. But Go fronts the algorithm and keeps the noise around it to a minimum. They are opposed but equivalent strategies.
You still have to get the algorithms right in eg Haskell.
It's just that in Haskell you have to write fewer tests to be sure you did it right, because for many kinds of mistakes you could have made, the compiler can yell at you.
The other advantage of nicer types is that they serve as part of the documentation. Exactly because the types can constraint the behaviour of implementations more than in Go. So the reader can make more valid assumptions from the type alone.
Hoogle uses those types to give you a truly extraordinary way to search the Haskell libraries by type. That approach would be almost useless in Go, where you have to fall back to searching only via keywords etc.
The article really rubbed me the wrong way with this tidbit
> (we don’t talk about Go…)
So since the author is speaking of "premier system’s language" we choose to ignore the world's premier cloud language? Interesting.