
From my experience Haskellers spend more time talking about how perfect and pure Haskell is than building real-world applications.


Probably because real-world applications mostly aren't that interesting -- they're assuming you're doing that stuff well on your own. You could write "real world applications" in any language you want, some faster (to develop) and others more efficient (at runtime) than haskell, but that's not what haskell brings to the table.

Haskellers (well me) view Haskell as introducing reasoned and disciplined immutability, purity, type safety, etc to the mundane world of real-world applications, which is why Haskellers get excited when these benefits can be applied. Writing a database query? boring. Writing a database query that is guaranteed at compile time to contain the right columns to build the right domain object such that you can never write an incorrect one? that's interesting.
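The compile-time-checked query idea can be hinted at even without a database library. In real code, libraries like opaleye or esqueleto tie the column list to the row type; here is a toy sketch of just the idea (all names hypothetical):

```haskell
-- Toy sketch: the row type fixes exactly which columns exist, so a decoder
-- with the wrong arity or column types simply won't compile. Real libraries
-- derive this from the table definition so it can't drift from the schema.
data User = User { userId :: Int, userName :: String } deriving Show

decodeUserRow :: (Int, String) -> User
decodeUserRow (i, n) = User i n

-- decodeUserRow ("ada", 1)   -- type error: columns in the wrong order

main :: IO ()
main = print (decodeUserRow (1, "ada"))
```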


I hear you. Other languages have those properties. Including TypeScript.

In my experience I've found real-world applications are, by definition, never as pure and perfect as we'd like them to be. Users do stupid things, or vendor APIs don't quite fit what we want.

And personally it's why I think the Haskellers I know often get stuck in the mud. Seems the language, or just the Haskellers I know, is better suited for academic applications.

I'd love to be proven wrong as I'm into type safety and FP, in general.


I absolutely love TypeScript -- right now Node + TypeScript is my pick for the interpreted language wars (Node/Python/Ruby/Perl/etc).

> In my experience I've found real-world applications, by definition are never as pure and perfect as we like them to be. Users do stupid things, or vendor APIs don't quite fit what we want.

This is actually one of haskell's greatest strengths IMO -- it enforces the kind of discipline that sidesteps this problem completely. The kind of functions you write with haskell don't allow bullshit -- non-nullable types are the biggest example of this I can think of, there's also the pervasive use of `Maybe Value` and `Either SomeError Value` types.
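For instance (a minimal sketch; `parseAge` is made up): a function that can fail must say so in its type, and the compiler won't let a caller ignore the failure case.

```haskell
import Data.Char (isDigit)

-- The Either in the type is the whole point: callers cannot pretend
-- the parse always succeeds.
parseAge :: String -> Either String Int
parseAge s
  | not (null s) && all isDigit s = Right (read s)
  | otherwise                     = Left ("not a number: " ++ s)

-- A case expression that omitted the Left branch would be flagged
-- by the exhaustiveness checker.
describe :: String -> String
describe input =
  case parseAge input of
    Left err  -> "bad input: " ++ err
    Right age -> "age is " ++ show age

main :: IO ()
main = do
  putStrLn (describe "42")   -- age is 42
  putStrLn (describe "abc")  -- bad input: not a number: abc
```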

In other places it's usually referred to as Domain Driven Design or the onion architecture -- but best practices for software development usually dictate that you get rid of bullshit as early as possible on the borders of your application. Put simply, don't let invalid/incorrect input make it into your system.
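A sketch of that border discipline is the smart-constructor pattern (`Email`/`mkEmail` are hypothetical names): if the raw constructor isn't exported, invalid values simply can't exist past the boundary.

```haskell
-- In a real module you'd export Email but NOT its constructor, so the
-- only way to obtain one is through the validating mkEmail at the border.
newtype Email = Email String deriving Show

mkEmail :: String -> Maybe Email
mkEmail s
  | '@' `elem` s = Just (Email s)  -- deliberately naive check, sketch only
  | otherwise    = Nothing

main :: IO ()
main = do
  print (mkEmail "ada@example.com")
  print (mkEmail "nonsense")  -- Nothing: never enters the system
```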

> And personally it's why I think the Haskellers I know often get stuck in the mud. Seems the language, or just the Haskellers I know, are better suited for a academic applications.

Totally true, we do get stuck in the mud, but it's such nice mud -- the things you worry about are just different (and I think this is what you're seeing). Roughly zero haskellers are worrying about NPEs -- there's no urge to get something to "just work" because "better" is right there, and haskell encourages you to strive for it.

That said, it's absolutely the case that a ton of energy in haskell land is spent on academic pursuits, but that's actually good imo, languages that don't do this get stale.

> I'd love to be proven wrong as I'm into type safety and FP, in general.

Well I don't know that this is something someone else can prove to you, outside of people just writing more middling/regular software in haskell and getting it out there, which is a bit of a community thing. I do my best to write practical haskell software and write about it, but a bunch of my projects aren't open source (just yet -- I'm heavily considering making one of them open source right now). Rust is also proving to be a huge (welcome) distraction because it gives many of the haskell creature comforts with efficiency in the C/C++ range.

Maybe take haskell for a spin? The learning curve is high but it will bend your mind in a good way. Or if you're of the web persuasion, try Elm/Purescript, they're thoroughly practical.


The refreshing part about Haskell to me is that it is declarative. It feels more natural to describe what you want to happen than to think like a machine and compose sequential machine instructions, each of which can fail at any point during execution.

Haskell's compiler is incredibly strict. It'll guide you until the code is near bulletproof. The outcome is that the surface area where things can go wrong is far smaller than in just about any other language out there.


Haskell is not declarative. It has a specific evaluation model: lazy graph reduction. Thinking of it as declarative leads to polynomial or even exponential runtime cost in CPU/memory, and to not understanding why or how to fix it.

https://stackoverflow.com/questions/40130014/why-is-haskell-...
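One concrete instance of that operational gap is the classic `foldl` thunk buildup, which a purely declarative reading of the code wouldn't predict:

```haskell
import Data.List (foldl')

-- Declaratively, these two are the same sum. Operationally they are not:
-- lazy foldl builds a chain of unevaluated (+) thunks the length of the
-- list before reducing anything, while foldl' forces each step.
lazySum, strictSum :: [Int] -> Int
lazySum   = foldl  (+) 0   -- can balloon memory / blow the stack on big lists
strictSum = foldl' (+) 0   -- constant space

main :: IO ()
main = print (strictSum [1 .. 1000000])  -- 500000500000
```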


I'm not talking about the compiler or the evaluation. From the programmer's perspective the language does seem fairly declarative to me, mostly consisting of expressions instead of statements.

I don't know what's wrong with thinking of expressions as declarative, which in my opinion they are. How would thinking about expressions imperatively help me avoid those exponential runtime costs?

For example consider the list comprehension: [toUpper c | c <- s]

If you compare this with the typical imperative for loop constructing the same data structure, I find this declarative.
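Spelled out as a runnable snippet (function names made up), the comprehension reads as a description of the result rather than a loop:

```haskell
import Data.Char (toUpper)

-- "The uppercase of each c drawn from s" -- no index variable,
-- no mutable buffer, no loop bookkeeping.
shout :: String -> String
shout s = [toUpper c | c <- s]

-- Equivalent point-free form, same declarative flavor:
shout' :: String -> String
shout' = map toUpper

main :: IO ()
main = putStrLn (shout "hello")  -- HELLO
```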


Thanks. You in the Bay by chance?


Sorry super late but no, I'm in Tokyo actually (was in Austin before that) --feel free to reach out (email in bio) if you ever want to chat though!


> I hear you. Other languages have those properties. Including TypeScript.

Limited nominal typing, limited support for record-like functionality (ability to handle datastructures generically-but-safely), no HKT (so difficult to handle secondary concerns in the type system - e.g. writing a function whose type enforces that it's called within a database transaction, but you can still compose it with other such functions and run them all together in a single transaction). I wanted to like TypeScript, I really did, but after a month or so I was fed up enough to actually put nonzero effort into building with Scala.js (which turned out to be really easy) and within an hour I had the safety properties I was used to and was more productive as a result. (Scala isn't Haskell but the advantages are similar).
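The transaction example can be sketched with a toy `Tx` type (everything here is hypothetical: it just records SQL strings, where a real version would wrap a database connection). The point is that `Tx` actions compose with each other but can only be run through `withTransaction`:

```haskell
-- A Tx is a list of statements plus a result; the constructor would be
-- hidden in a real module, so Tx values only arise from exec and friends.
newtype Tx a = Tx ([String], a)

instance Functor Tx where
  fmap f (Tx (w, a)) = Tx (w, f a)

instance Applicative Tx where
  pure a = Tx ([], a)
  Tx (w1, f) <*> Tx (w2, a) = Tx (w1 ++ w2, f a)

instance Monad Tx where
  Tx (w1, a) >>= f = let Tx (w2, b) = f a in Tx (w1 ++ w2, b)

exec :: String -> Tx ()
exec sql = Tx ([sql], ())

insertUser :: String -> Tx ()
insertUser name = exec ("INSERT INTO users VALUES ('" ++ name ++ "')")

-- The only exit from Tx: everything composed inside runs in one transaction.
withTransaction :: Tx a -> ([String], a)
withTransaction (Tx (w, a)) = ("BEGIN" : w ++ ["COMMIT"], a)

main :: IO ()
main = mapM_ putStrLn
  (fst (withTransaction (insertUser "ada" >> insertUser "grace")))
```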


Typescript's type system is unsound by design. And it doesn't have the sophistication of Haskell. So it isn't that similar IMO


??? Could you drop some links/justification? I know I learned a lot from the set of slides about Flow vs Typescript[0].

Also, not having the sophistication of Haskell doesn't make something bad -- Rust's type system doesn't have the sophistication of Haskell and I think it's a fantastic language (I struggle not to pick it over haskell most of the time).

Typescript, sound type system or not, has brought many of the benefits of the Haskell ecosystem to JS. Python, Ruby (and Perl?) are following in its footsteps right now with their gradual typing schemes. AFAIK JS was the first to get something like this so right -- going from syntax sugar to actually highly beneficial type checking. The stuff people would put on top of C to make it safer comes to mind, but I can't remember transpiling ever being so embraced and beneficial to a language.

[EDIT] - after some searching, maybe you're referring to this issue (amongst others): https://github.com/Microsoft/TypeScript/issues/9825

[0]: https://djcordhose.github.io/flow-vs-typescript/flow-typescr...


Which doesn't necessarily mean it's not useful for building real world things. Lumi [1][2] was built with Haskell (and PureScript) and looks like a great product.

To me it just means more people need to a) build more real things in Haskell, b) write about their experience and help advance the state for non-academic newbies and professionals, and c) avoid the weeds of the language-theory stuff and stick to the simple, established stuff to stay productive (which does exist).

You could also make the same argument for JavaScript. So many people are trying to reinvent the wheel every other week in the frontend world, it's just as easy to get distracted by the language/framework noise.

Just be a mature developer and don't get suckered into the latest shiny objects.

[1] https://www.lumi.com/

[2] https://www.lumi.dev/blog/purescript-and-haskell-at-lumi


Don't forget: XMonad, pandoc, and ShellCheck


Correct. Interesting you bring up JS, as the Haskellers I spoke of loathed JS and treated it and its developers as below them. But that's just my experience.

I hope Lumi was able to gain efficiencies using PureScript and Haskell. And that they're able to attract talent. Thanks for sharing.


In my experience Haskellers speak that way about every language that isn't Haskell or Rust, which I found to be probably the biggest turnoff for me wrt learning Haskell.


There was an article posted somewhere about why Haskell isn't used very much in the data science/ML community, and the feedback was that the tooling is mostly not so good. It was kind of amusing to read all of the comments on /r/haskell saying "no it's good." I don't think Haskellers would really understand, but lots of people (especially data scientists in my experience) view languages only as tools. The Haskell community seems to derive lots of value from spending tremendous time basking in the elegance of their tool, rather than actually using it to solve problems. So /r/haskell will continue basking, the rest of the world will continue not using it to solve problems.

Or an extremely long-winded "yeah, I have the same experience as you."


I think the Haskell people saying the tools are good (like me!) are just using very different criteria for "good" than you (and most data scientists!).

For me a "good tool" is robust, reliable, and easy to fix. For most of the data scientists I know, a "good tool" is one where "a single easy to remember command magically does the thing I want in one go".

And I get it, that last part is super appealing, especially for people who care more about answers to their problem than about the technology. I don't blame people for wanting that.

But my personal experience is that many of these magical single-command tools break a lot when you try to use them in any non-standard environment (for example, something that is not Ubuntu and where you don't have sudo to fix/install system packages) and I work in environments like that a lot. So the fact that e.g. cabal-install requires a bit more explicit work to set up initially is outweighed by the fact that I can reliably install it on any *nix system where I have a login and sufficient diskspace and have it Just Work.


> I think the Haskell people saying the tools are good (like me!) are just using a very different criteria of "good" than you (and most data scientists!).

Exactly, which is why Haskellers should listen to and respond to feedback rather than asserting it is the feedback-giver that is wrong.

> For me a "good tool" is robust, reliable, and easy to fix. For most of the data scientists I know, a "good tool" is one where "a single easy to remember command magically does the thing I want in one go".

I don't agree with this statement, and I think it demonstrates some of why Haskellers are kind of frustrating. Most data scientists (and most people in general) want to focus on solving the problem they're tasked with, not on elegantly and efficiently positioning themselves to be able to do so. Jupyter and Numpy/Pandas are great because they allow the user to focus exclusively on solving their problem, not on language or framework-level concerns. This is not "a magic command that does the thing I want in one go." It is separating the needs of the hammer maker from the needs of the carpenter.


I use it to solve problems. I have a database with millions of unstructured JSON documents. I wrote a tool in Haskell to scan the database, parse the unstructured documents and collect the results. It displays the ratio of successful parses and the top N parse errors with a sample to add to my test suite.
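(Not the parent's actual code, but a pure sketch of that scan-and-tally shape; `parseDoc` stands in for a real JSON parser like aeson, and the document set would stream from the database rather than sit in a list.)

```haskell
import qualified Data.Map.Strict as Map
import Data.List (sortOn)
import Data.Ord (Down (..))

-- Hypothetical stand-in for the real parser: here a "document" is valid
-- iff it is a bare integer.
parseDoc :: String -> Either String Int
parseDoc s = case reads s of
  [(n, "")] -> Right n
  _         -> Left "not an integer document"

-- Success ratio plus error messages with a count and one sample each,
-- sorted so the most common failures come first.
report :: [String] -> (Double, [(String, (Int, String))])
report docs = (ratio, topErrors)
  where
    results = [(d, parseDoc d) | d <- docs]
    oks     = length [() | (_, Right _) <- results]
    ratio   = fromIntegral oks / fromIntegral (length docs)
    errMap  = Map.fromListWith merge [(e, (1, d)) | (d, Left e) <- results]
    merge (n, sample) (m, _) = (n + m, sample)
    topErrors = sortOn (Down . fst . snd) (Map.toList errMap)

main :: IO ()
main = print (report ["1", "2", "x", "y"])
```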

Once I can parse 100% of the database I can use another library I'm working on that can migrate between data structures while preserving information and provenance.

Then I can safely migrate millions of lines of unstructured documents with dozens of weird corner cases to a collection of documents with consistent structure and few corner cases.

Sure I could do this in pretty much any language on the market but I've put relatively little effort into this and am nearly done. Programming in Haskell has a good power-to-weight ratio.

I don't really have anything to complain about tools-wise. It's all standard fare or better than most other language ecosystems as far as I'm concerned.


We're building https://hercules-ci.com/ and not talking much about purity :)


We need more startups using Haskell in production, like Dfinity.


Mercury.co uses Haskell. It’s been great so far.

Mercury.co/jobs if you are interested.


Co–Star is hiring!

costarastrology.com/jobs


Has Dfinity launched yet?


No



