Hacker News | _halgari's comments

That was my issue for a long time. I even talked with their founder several times on Twitter a few years back. Each time I was greeted with buzzwords whose meaning I knew, but which I think they assumed I didn't.

They would claim grand things: having solved the issues with continuations and delimited continuations, distributed process migration, and a whole host of other very hard problems that have resisted solutions in the past. I would ask their founder: "right, so you know that delimited continuations have problems with accidentally captured scope, and they run poorly on the JVM; how did you solve this, have any papers I can read?", and all I ever got was that Clojure, immutable data, X and Y would fix these issues and you just had to wait and see what they were cooking up.

That's when I knew they had no clue what they were doing. I'm all for pushing the boundaries of tech, but if you're doing something that's been attempted many times before, you at least need a good elevator pitch as to why it can be solved now.

I think a great example of this done right is Rich talking about Clojure. People would ask, "isn't immutable data expensive to reclaim and allocate?" And his reply was always that the JVM's GC was just that good: the benefits to be gained from immutable data outweighed the marginal performance penalty of the extra garbage collected. What changed since the old Lisp days? Well, we now have GCs that are super fast and JITs that can optimize dynamic code well.

That's the sort of laser-focused vision I never saw from Red Planet Labs. You gotta get that problem statement and the solution out early, refine the elevator pitch, and be able to articulate to people who know what they're talking about how you're going to succeed where others have failed for decades.


I see your overall point which is a good one and I know this is a nitpick, but I thought Rich’s core solution to the cost of immutable data structures was to find a way to get the cost down by extending some existing research by Phil Bagwell.

From https://dl.acm.org/doi/pdf/10.1145/3386321

“I then set out to find a treelike implementation for hash maps which would be amenable to the path-copying with structural sharing approach for persistence. I found what I wanted in hash array mapped tries (HAMTs) [Bagwell 2001]. I built (in Java) a persistent implementation of HAMTs with branching factor of 32, using Java’s fast System.arrayCopy during path copying. The node arrays are freshly allocated and imperatively manipulated during node construction, and never mutated afterwards. Thus the implementation is not purely functional but the resulting data structures are immutable after construction. I designed and built persistent vectors on similar 32-way branching trees, with the path copying strategy. Performance was excellent, more akin to O(1) than the theoretical bounds of O(logN). This was the breakthrough moment for Clojure. Only after this did I feel like Clojure could be practical, and I moved forward with enthusiasm to release it later that year (2007).”
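The path-copying with structural sharing he describes can be sketched in a few lines. This is a toy Python illustration of the idea only, not Clojure's actual Java implementation; it uses the 32-way branching and bit-partitioned indexing from the quote:

```python
# Toy sketch of a bit-partitioned 32-way trie with path copying.
# Updating one element copies only the nodes on the root-to-leaf path;
# every other node is shared between the old and new versions.
BITS = 5                 # 2**5 = 32-way branching, as in Clojure
WIDTH = 1 << BITS
MASK = WIDTH - 1

def get(node, shift, index):
    """Walk down the trie, consuming 5 index bits per level."""
    while shift > 0:
        node = node[(index >> shift) & MASK]
        shift -= BITS
    return node[index & MASK]

def assoc(node, shift, index, value):
    """Return a new trie with `value` at `index`, sharing untouched subtrees."""
    new_node = list(node)              # copy just this one node (cf. System.arrayCopy)
    if shift == 0:
        new_node[index & MASK] = value
    else:
        sub = (index >> shift) & MASK
        new_node[sub] = assoc(node[sub], shift - BITS, index, value)
    return new_node

# A full two-level trie holding 32 * 32 = 1024 integers.
root = [list(range(i * WIDTH, (i + 1) * WIDTH)) for i in range(WIDTH)]

new_root = assoc(root, BITS, 999, -1)  # "update" index 999 persistently
```

Both versions stay fully usable after the update: the old root still sees 999 at index 999, the new root sees -1, and 31 of the root's 32 children are shared between them.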

Or you tell me what I’m missing. Big fan of your work in core.async if this is the same halgari.


Same halgari, yes :D.

Yeah, he took it from Bagwell and adapted it, but in general there was a whole discussion back in the day (~2012) questioning how creating this much garbage through boxing and throw-away collections could ever be fast. Datomic is another example: making an immutable DB is a dumb idea, right? Well, what if storage were super cheap, almost free? Then maybe it's not such a bad idea.

So a lot of the Clojure community is based on this idea of taking ideas from way back in the '70s and asking, "Well, everything has changed; what works now that didn't then?"


That’s super interesting and makes sense - even with persistence of trunks and branches there will be leaves to throw away / GC. Thanks for explaining!


> People would ask, "isn't immutable data expensive to reclaim and allocate?" And his reply was always that the JVM's GC was just that good: the benefits to be gained from immutable data outweighed the marginal performance penalty of the extra garbage collected. What changed since the old Lisp days? Well, we now have GCs that are super fast and JITs that can optimize dynamic code well.

It's a little odd to see this deeply mistaken belief dating back to the early Java days being advocated for today. GCs are very constrained by the tradeoffs they make, there's no free lunch.

Much in the way an F1 car is only fast on a race track specifically made for it, the only reason a massive rate of allocations can have a merely marginal performance penalty is if the GCs in question have been specifically designed to handle it. But in doing so, they must have made sacrifices elsewhere, e.g. to memory usage.

Code that's not written with the underlying machine that will ultimately execute it in mind will never be fast, no matter how much we jiggle tradeoffs around. Therefore, while the gains of immutable structures may still outweigh the performance loss, the loss cannot possibly be characterized as marginal.


Scala has had delimited continuations for some time now, so it can seemingly be done in a way that's performant enough.


Oh for sure, it's a bit wonky on the JVM due to the lack of tail calls, but that sort of thing can be done via full-stack bytecode transformation.

But these people are doing this in Clojure, which is quite removed from the JVM bytecode, and talking about how it solves so many distributed problems, which I just don't see happening.


If you look at the Red Planet Labs GitHub repos, there's a ton of low-level manipulation of JVM bytecode "assembly language", e.g. https://github.com/redplanetlabs/defexception/blob/master/sr..., for projects that aren't even compilers; one would assume their compiler does this even more so.


Another hypothesis is what we're doing is so hard and so valuable that it really requires that much effort.

I suggest reserving your judgement until you've seen what we've built, which will be soon.


Author here, a few things to keep in mind:

Firstly, I wrote this about 3.5 years ago. I was wondering why people were suddenly commenting on it and now it all makes sense.

Part of the fun of writing articles like this is watching everyone argue about what language can be forced into representing types in a given way. Yes, I assume in any situation that if I want feature X in a type system, Haskell can somehow be forced to give me that feature, but that doesn't necessarily mean it will fit with the ecosystem of the language, or that it's the only feature I'm looking for. So saying "someone hasn't done their homework if they think X can't be done" isn't relevant; what is relevant is that I'm not aware of a language that provides the type system features I want combined with an acceptable set of trade-offs.

So anyways, I'll stick around for a while and see if I can answer any questions. Thanks for the discussion, all!


I'm somewhat curious why you'd want both (2) and (3) to hold. Isn't it somewhat contradictory to want types to do more than merely denote structure, while requiring that everything with the same structure satisfies the type?

Maybe it's because I'm influenced by C#, but viewed from that perspective it would be like requiring you to explicitly declare that e.g. some value is a ProductID, while not requiring you to declare that a type is a Person, provided it simply has the right fields (in C# you would have to explicitly implement some interface to clarify that first-name and last-name do indeed refer to a person and not, for example, the head and tail of a list of names). This does mean that external code can't implement your interfaces, which is a bit annoying, though fixable.


That's commonly called "duck typing". And since you're discussing C#, I'll throw in that TypeScript (another MS language) has duck typing for interfaces. You can explicitly implement an interface, in which case the compiler will enforce that the interface is implemented. But you can also implicitly satisfy an interface.
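Python's `typing.Protocol` offers a similar structural flavor (the `Person`/`Employee` names below are just illustrative): a class satisfies the protocol by shape alone, without ever declaring it.

```python
# Structural ("duck") typing via typing.Protocol: Person is satisfied
# by any object with the right attributes, no explicit declaration
# needed -- similar in spirit to implicitly-satisfied TS interfaces.
from typing import Protocol, runtime_checkable

@runtime_checkable
class Person(Protocol):
    first_name: str
    last_name: str

class Employee:                      # never mentions Person anywhere
    def __init__(self, first_name: str, last_name: str):
        self.first_name = first_name
        self.last_name = last_name

def full_name(p: Person) -> str:
    return f"{p.first_name} {p.last_name}"

e = Employee("Ada", "Lovelace")
assert isinstance(e, Person)         # structural check, not nominal
```

The trade-off mentioned above shows up here too: because satisfaction is implicit, nothing stops an unrelated class with the same field names from counting as a Person.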


Why compile ahead of time when you know nothing about what the target platform is capable of? Why compile before profiling to make sure you run the correct optimizations? Startup times are sometimes important, but it's not the end-all-be-all of computer science.


It's not about startup times; just-in-time compiled code is slow in reality, and no contrived cases, of which there have been many, will change that. There is no profiling in just-in-time compilation. In order for a JVM to pull that off, it would first have to generate code with counters just in time, run that code, figure out when to analyse the results, then recompile and reorder it. That entire process would make it run even slower than if it were interpreted.

Better, then, to do that once and generate optimized, reordered machine code and put the pedal to the metal for the lifetime of the program's run. Except there is no advanced optimizing compiler for programs which utilize a JVM; gee, I wonder why... heh heh!


From the license:

> You may not use REBL for commercial use.


Sometimes it seems as if Cognitect just doesn't want people to use Clojure. It's like they looked at the results from the survey[0], created a tool that appears to address some of the main gripes that people have, and then proceeded to say "screw you" to all of the people that use Clojure commercially and may actually pay for Datomic...

[0]: https://danielcompton.net/2018/03/28/clojure-survey-2018


I know how easy it is to come up with interpretations like that but the site guidelines ask you to push pause before simply posting them. They have a destructive effect, one that compounds nonlinearly.

"Please respond to the strongest plausible interpretation of what someone says, not a weaker one that's easier to criticize. Assume good faith."

https://news.ycombinator.com/newsguidelines.html


Haskell's motto is "avoid success at all costs", but it feels like it applies more to Clojure.

I genuinely believe that Clojure could have taken over the world if it had a more sensible open-source governance model.


Closed source I understand, they must have something in there they don't want to give away completely for free. But "no commercial use" just cuts out something like 90% of the user base. Because depending on your definition of those terms it would also apply to anyone building anything with it that they plan on selling in the future. Even paid tutorials on how to use it would be out of the question.

It's like "how to not get people to use your tool" 101


I was at the talk, and IIRC he said something about developing it while working with a client. It's quite likely it's a contract obligation.


I don't know in what world releasing a tool that addresses some of people's main gripes is a "screw you", but it doesn't seem like a pleasant one to inhabit. The important bits are in Clojure 1.10.


I think the exciting thing is the datafy protocol, not necessarily this particular viewer. What he demoed seemed much more like a proof-of-concept. Like he said, he's excited to see what the Clojurescript community comes up with, and so am I. I might take a shot at building a browser in Emacs, though that takes away some more dynamic display opportunities.


An emacs based browser for this would be awesome!


My wild guess is that it will be added to CIDER sooner rather than later. The protocols are part of Clojure, and Emacs is great at browsing buffers, so it doesn't seem very hard to implement.

That said, I'm no Emacs expert, so it may be harder than I imagine.


Everything except HTML and charts will be easy - after all, if something can be represented as or drawn with plain text, Emacs is probably the best tool for working with it. HTML has some half-decent previewers available, but those are essentially text browsers in elisp. Unfortunately, painting arbitrary images gets unwieldy with Emacs. You can get away with ASCII column charts, but forget scatter plots or sparklines. But for viewing that, I wouldn't mind an external application to put on another screen.


Honest question, how do we interpret "commercial use"?

I'd assume it means you cannot sell REBL itself or a derivative, not that you cannot use it as a developer tool to build your project, which kind of makes sense.


REBL is just a JavaFX program that builds upon `datafy` and `nav`, connecting with the REPL. I think we will soon get other implementations (CLJS!).

I wouldn't want to read too much into this particular prototype of REBL. I think they just don't want people to re-package this iteration and sell it as-is.


Should it be read as "you may not resell it" or "you may not use it while being paid by someone"? It's a bit unclear.


Doesn't that just mean you can't deploy REBL as part of a commercial product? I'm sure it doesn't mean you can't use the tool at work. Not that I like seeing restrictions in any licences, but REBL isn't the cool thing here, it's datafy and nav.


"Commercial use" is clearly any work-related use. (You are also not allowed to redistribute it at all)


I'd really like to hear that stated unambiguously by someone at Cognitect. It just can't be right. No sane person would think they could enforce such a rule and no sane person would be scared to "violate" it. There's no trace of information indicating what it was used for or when. If there is, no one should use it anyway.


Stated unambiguously by someone at Cognitect: https://github.com/cognitect-labs/REBL-distro/commit/144acd0...


Correct, the first game to use ECS was Dungeon Siege back in 2002: https://www.gamedevs.org/uploads/data-driven-game-object-sys...


Dungeon Siege used an "Entity/Component" framework. That's a very different thing from an "Entity/Component/System" framework.
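To make the distinction concrete, here's a minimal hypothetical ECS sketch (Python for illustration; nothing to do with Dungeon Siege's actual code): components are plain data keyed by entity id, and all behavior lives in systems that iterate over them, whereas an Entity/Component design hangs behavior off the components themselves.

```python
# Minimal ECS sketch: components are pure data attached to entity ids;
# behavior lives in systems that iterate over every entity holding the
# required components.
positions = {}    # entity id -> (x, y)
velocities = {}   # entity id -> (dx, dy)

def spawn(eid, pos, vel=None):
    positions[eid] = pos
    if vel is not None:
        velocities[eid] = vel

def movement_system(dt):
    # A system: one function applied across all entities that have
    # both a position and a velocity component.
    for eid, (dx, dy) in velocities.items():
        x, y = positions[eid]
        positions[eid] = (x + dx * dt, y + dy * dt)

spawn(1, (0.0, 0.0), (1.0, 2.0))
spawn(2, (5.0, 5.0))              # static entity: position only, no velocity
movement_system(1.0)
```

The point of the "S" is that the movement logic is not a method on any entity or component; it's a separate pass over the data, which is what makes systems easy to add, reorder, and optimize independently.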


Anyone who has done serious performance testing on a DB knows that there's a massive gap between initial findings and a well tuned system designed with the help of the database maintainers. I've seen some nasty performance out of Riak, Cassandra, SQL, ElasticSearch etc. But with each of those, once I talked to the DB owners and fully understood the limitations of the system it was possible to make massive gains in performance.

Databases are complex programs, and if I ever wrote one, it would be infuriating for someone to pick it up, assume it was "just like MySQL" and then write a blog post crapping on it because it failed to meet their expectations.


Yes, benchmarks can give a misleading impression of a database's performance.

So what? Somehow PostgreSQL is doing fine despite that.

Which is worse publicity for Cognitect: people publishing bad benchmarks or Cognitect forbidding benchmarks Oracle style?


"Generalization of abstraction" sounds a lot like "a maze of twisty little passages, all alike".

That's exactly what I don't like about some languages: if everything is a function, then it's all a big ball of mud. The only thing you can do with a function is call it.

I'd rather have classes of capabilities. Some things are callable, others are iterable, some are printable. But if it's truly about abstraction generalization, that sounds like a mess.


> if everything is a function, then it's all a big ball of mud.

Not really; each distinct function will still have a distinct type. (This is largely why I prefer ML-family languages over Lisps.)

> The only thing you can do with a function is call it.

You can also abstract over it, and pass it around. Which allows you to build whatever you want, numbers, booleans, if-then-else, etc.

The combination of function as your basic unit of abstraction and types as the differentiating descriptor is kind of the opposite of a mess, as you have a correspondence to familiar logic operations.

A function type A -> B is implication (given A, we have B), a product type (A,B) is conjunction (A and B), a sum type A|B is disjunction (A or B). (Sure, the logic isn't necessarily sound, but it's still useful.)
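That correspondence can be illustrated loosely with Python type hints (the function names here are mine; this is an analogy, not a proof system): `Callable` plays implication, tuples play products/conjunction, and `Union` plays sums/disjunction.

```python
from typing import Callable, Tuple, Union

# A -> B together with A yields B: modus ponens is function application.
def modus_ponens(f: Callable[[int], str], a: int) -> str:
    return f(a)

# (A, B) yields A: conjunction elimination is projection.
def fst(pair: Tuple[int, str]) -> int:
    return pair[0]

# A -> C and B -> C turn A|B into C: case analysis on a disjunction.
def either(on_int: Callable[[int], str],
           on_str: Callable[[str], str],
           value: Union[int, str]) -> str:
    return on_int(value) if isinstance(value, int) else on_str(value)
```

Each eliminator above is total over its type, which is the "solid abstractions" property being claimed: the type alone tells you every way a value can be used.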

When you have the fundamentals down you can build whatever capability system you need on top of solid abstractions.

I'm currently working on an ETL project in Haskell, and it's structured around a similar capabilities divide as you describe, defined by typeclasses/interfaces; it's just all functions.


(Edit: The citation got messed up, it should be:

>The only thing you can do with a function is call it.

You can also abstract over it, and pass it around. Which allows you to build whatever you want, numbers, booleans, if-then-else, etc.)


Kind of saw this coming after the third reimplementation in yet another language. And why Rust? Two years ago they decided to use Rust to implement a new language (IIRC Rust wasn't even 1.0 at the time). That's a huge amount of technical risk.

I wanted to be excited about Eve, but it was always too light on details and carried too much risk to ever succeed, and after several pivots it failed.

This story would probably have been a bit different if only it had been a bit less ambitious and a lot more pragmatic.


Well, for the first time since the language was created, you can now do `brew install clojure`, then `clj` on the command line, and get a REPL. I would have loved something that painless when I was learning Clojure.


Leiningen has been incredibly painless:

    $ brew install leiningen
    $ lein repl
That's literally all it took.


I agree, but it does take a while to figure out that Leiningen exists.

And then you learn that not everyone uses Leiningen... and things become a bit more confusing than necessary for some people, especially beginners.


Sorry, but this whole conversation is a stretch. Anyone who looks into Clojure finds out about Leiningen. The official clojure.org site even mentions it. No, it's more than a stretch, it's rationalizing.


As I mentioned at length in my prior answer, this was not the primary driver.


`brew search clojure` used to find leiningen; not sure why it doesn't now.

In fact, it used to find leiningen instead of clojure; possibly clojure didn't exist as a brew tap before.


Previously, there was no brew formula for clojure and there was a hardcoded pointer to leiningen. When the clojure formula was added, that was removed.


Installing Leiningen is not trivial on all platforms (e.g. Windows). Even on Linux, distro repos may contain older versions of Leiningen that subtly break things. Perhaps it would be better if Leiningen were part of the Clojure distribution rather than existing as a separate entity.


So long as you have leiningen installed.

That isn't always trivial.


Sure it is: you can just download and run the script they have on their site: https://raw.githubusercontent.com/technomancy/leiningen/stab...

or for windows: https://raw.githubusercontent.com/technomancy/leiningen/stab...


> So long as you have leiningen installed.

I’m not sure I understood your comment. Parent’s code block has two lines: the first one installs Leiningen; the second one starts its repl.


What I meant is, so long as you have an updated version of leiningen readily available.

This is just another layer in the toolchain to contend with, and since Clojure is built on the JVM, the toolchain is already quite bloated.


leiningen also updates itself.


`brew install leiningen` -> `lein repl` does exactly that


Not only this, but also things like `clj myscript.clj` are, I think, useful to beginners.


That's very cool, yes!


I had the great pleasure of interviewing Zach Oakes (author of nightcoders.net) the ~1hr podcast is available here: http://blog.cognitect.com/cognicast/130

