Hacker News | insertcredit's comments

I've been all over the US, NYC is a dump of colossal proportions, a train wreck happening in (maybe not so) slow motion. I live in Europe and parts of Manhattan reminded me of the third world. Hype / reality distortion have a lot to do with NYC being perceived as "vibrant" or "most important".


I've vacationed in New York City and I agree. Even the "nice parts" are crowded and shabby.


This post is full of misinformation.

cl-lib.el is not discouraged, it's widely used by Emacs itself and pretty much every substantial Emacs Lisp library out there.

What's discouraged is using an older version, cl.el, at runtime [1] because it replaces existing Emacs Lisp functions and pollutes the namespace. Even that's ok to use at compile time though.

Lastly, cl-lib.el is not cumbersome to use.

[1] https://www.gnu.org/software/emacs/manual/html_node/cl/Organ...


Actually there wasn't much disinformation.

cl-lib is cumbersome to use, especially since the CL operators now have a different prefix (since Emacs Lisp has no packages). This means I cannot use any existing CL code in GNU Emacs - the operators are partially there, but all renamed.

That's cumbersome.


Let me give it a try, from an abstract/Emacs point of view:

Emacs is good for Lisp development, but both “raw” Emacs itself and most of the packaged customization versions (Spacemacs, etc) are fairly ‘opinionated’ about Lisp, and that opinion usually falls close to: Lisp is great, but Common Lisp is a little/some/much too much, so our choices often lean away from CL’s. (Lisp-1 vs. Lisp-2, dynamic vs. lexical scope, etc.) This means that you’re customizing in a nearby language, but one that dislikes some of the things that you liked when you chose CL, so there’s some impedance mismatch going on. That said, you can get a great CL dev environment from Emacs, but you’ll want to install a bunch of add-ons first, probably including a non-Emacs lisp interpreter and a package to bridge Emacs and that other lisp. True fans of CL would really like a full-blown Emacs with real CL at its core, and periodically people try to make those (Hemlock, etc), but they usually don’t catch on.


More misinformation here.

Emacs Lisp is a Lisp-2.

Emacs Lisp has lexical scope.

When writing code in Emacs, Emacs Lisp for all intents and purposes can be seen as a subset of Common Lisp, not an entirely different language like you present it to be.

Please try not to propagate this sort of misinformation again in the future. The things you claim are obviously wrong to anyone who has ever used Emacs Lisp, even once. Have you ever done that?


Please edit swipes and incivility out of your comments here, regardless of how misinformed someone else is. It's actually worse when you're right, because then you're discrediting the truth by associating it with rudeness.

There's an additional concern. As Lisp users ourselves, we remember how the CL culture was gutted by the wave of nastiness that rolled into it about 15 years ago. No trace of that is ok on Hacker News. Dismayingly, your comments have contained traces of it in the past as well as in this thread.

In fact, you have posted so many uncivil comments to HN already that we're going to ban you if you keep doing it. If you'd please read https://news.ycombinator.com/newsguidelines.html and take the spirit of this site to heart from now on, we'd greatly appreciate it. This means erring on the side of respecting others, assuming good faith, and providing correct information to teach readers rather than humiliate fellow commenters.

You might also find these links helpful for getting the spirit of this site:

https://news.ycombinator.com/newswelcome.html

https://news.ycombinator.com/hackernews.html

http://www.paulgraham.com/trolls.html

http://www.paulgraham.com/hackernews.html


Emacs Lisp is similar to CL, since both were derived from Maclisp, and much basic code looks similar.

In Common Lisp a subset would mean that all programs of that subset would run unchanged in a full CL implementation. Emacs Lisp isn't like that.

More tragically, over time Emacs Lisp implemented Common Lisp features - even though Richard Stallman does not like CL features such as keyword arguments - but not in a source-compatible way. Emacs Lisp got CL features and operators via libraries, often with incompatible naming, but still can't be programmed in straight Common Lisp.


> This post is full of misinformation.

Maybe a few points, but it's not full of it, nor are there even that many. Thanks for pointing out the cl vs cl-lib distinction.


Please don't be discouraged by the occasional hostile reception. It's an invasive species here, which we do what we can to contain, but the bulk of the community is supportive and I hope that comes across in this thread! (And thanks for not taking the bait; your reply here was admirable.)


Yeah... Unfortunately or not, that type of person on the internet is so common that over the years I've learned a little about dealing with it. Thanks for the support :) let's continue the good discussions in the thread o/


One persistent problem I see in the Common Lisp (love the language!) space is the wide availability of crapware that not only doesn't bring something new to the table but is actively damaging to the community since it's diluting the set of good libraries and making it harder for new users to tell the wheat from the chaff. Lem is crapware. The problem it's supposed to solve, writing CL without configuring Emacs, is not a problem since there exists Portacle [1]. Lem is inferior to Emacs/SLIME/Sly in every way especially for writing Lisp. Lem has no future. But it exists and may act like a strange attractor to those who don't know better.

A question I'd really like to find the answer to: why is there so much crapware for CL? Why doesn't the community come together behind the few really good libraries? Instead almost everyone goes out and does their own thing, the end result being an ocean of crap.

[1] https://portacle.github.io/


The first Emacs was written on top of TECO.

Then came young students who had access to a brand-new Lisp Machine in the MIT AI Lab (thanks to the welcoming spirit of Marvin Minsky and others). They implemented EINE (EINE Is Not Emacs) and ZWEI (ZWEI Was EINE Initially) and then Zmacs. -> Weinreb and McMahon. Bernard Greenberg wrote an Emacs in Maclisp for Multics. Hemlock was written in Spice Lisp. From then on a bunch of editors were written in Lisp.

Don't let yourself be discouraged. Learning to write well-architected programs is best done by writing programs.

I applaud those who put their thoughts between nested parentheses and turn them into working code...


I totally agree with this comment, especially this part:

> Don't let yourself be discouraged.

> Learning to write well-architected programs is best done by writing programs.

> I applaud those who put their thoughts between nested parentheses and turn them into working code...

We should appreciate work like Lem! They are trying, thinking! And this HN section??? JUST COMPLAINING! What are you doing to help? WTF.


Sorry to hijack the thread but congrats on Lem. It's nice to see how far it's progressed. You even have an XCB binding in there!

Lemonodor-fame is but a hack away!


You haven't actually told us anything except what you think is “crap”. Could you please be more informative and less rude in your comments?

> Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something.

https://news.ycombinator.com/newsguidelines.html


I mentioned the problem Lem claims to solve and proved that there are existing solutions to that problem. I also put forth that Lem is inferior to Emacs/SLIME/Sly when writing Lisp which should be obvious to anyone with Emacs experience. Then I made additional points which at least two people found of interest and tried to provide answers to.

I'm pretty sure all of that does not translate into me not telling you anything.


There can be multiple solutions to one problem, there's nothing wrong with experimentation. You seem to think it's OK to call any Emacs competitor crapware?


I defined crapware as something which dilutes the space of good libraries, offers nothing new and pretends to solve non-existing problems. I made my case.


The problems GNU Emacs does not solve for a CL programmer are its generally clunky user interface, lack of GUI features, its non-reusability in a Common Lisp program, and its non-source-compatibility with Common Lisp. GNU Emacs will always be an extra component to a CL program and will rarely be the right user interface for a CL program - even though the program might need editor functionality.

One can get very far with it, but for some the combination of above is enough reason to look for alternatives.


By this logic, every new Linux distro, window manager, text editor, programming language, or anything else that competes with existing software is “crapware” that is “actively damaging” by “diluting the set of good [software]”. Competition is good! The community has never “come together” in any space (case in point: systemd), even outside of Common Lisp.

Corporations and small teams sometimes create software that is more cohesive and might be what you’re looking for, but this has never happened for a “community” of any reasonable size.


I agree with your point and disagree with the OP’s but I do believe that the proliferation of Linux DEs and recreation of basically the same software can often go beyond what is good.

One of the biggest things holding Linux back today is the lack of a common vision. I think it's no coincidence that Linux on the desktop really took hold only after Canonical started promoting Ubuntu. With Canonical retreating into the server space and basically abandoning desktops, I fear we are about to enter another era of Linux desktop stagnation at a time when arguably we need it most (OSX and Windows are increasingly becoming tied down, and the fastest-growing OS is ChromeOS, which isn't just tied down but is also potentially a privacy nightmare).


That’s part of the reason that I’m an OpenBSD user, because there is a much smaller development team that has more of a cohesive vision. However, that doesn’t get in the way of user freedom. I’m just of the opinion that another desktop environment or window manager (or text editor, etc) that doesn’t take off isn’t an issue at all and it often pressures the incumbent to innovate and adopt new features.

Competition and a common vision don’t need to be mutually exclusive.


What a terrible vision of developing software. I hope you change your mind someday. Legacy software is not the end of all things.


I think the Lem developers would appreciate if you filed issues that caused you to believe what you do about the project, so they have a chance to improve the state of affairs.


beneath the invective there is an interesting point.

look at the context. it's easy to understand in today's environment why someone would write a somewhat-finished personal project, put it on github, and try to get it noticed. personal itch, self-teaching, self-marketing, etc

what's less clear is how you get together enough time and bodies to spend the substantial effort to make something well documented, tested, and more broadly useful. especially in a lesser-used language.

open source has definitely 'won', but i don't think that issue has been adequately tackled. the story seems to be that you get to step #1, and get enough traction that you can hopefully build a functioning set of contributors.

[as an aside, lisp also has a reputation, unfounded or not, for encouraging people to develop a bespoke private universe that's hard for other people to get a handle on]


It's been an issue for a long time.

http://www.ccs.neu.edu/home/shivers/papers/sre.txt


...give nodejs a try and see if you change your mind about "crapware" :)


That was fast. Just because something is not popular, it's not crapware. Node.js libs are worse: they are proud to develop crapware and have achieved extreme popularity by doing that.

Why should we support this type of software and attitude? There is something wrong with it.


> Why doesn't the community come together behind the few, really good, libraries but instead almost everyone goes out and does his own thing, the end result being an ocean of crap.

This is the Lisp Curse:

http://www.winestockwebdesign.com/Essays/Lisp_Curse.html#mai...


Immersion is the killer feature of VR, not standalone graphical fidelity. FOV, 6DOF tracking are far more important today than 4K per-eye resolution and 120 FPS. Moreover, foveated rendering will give us drastic graphical fidelity improvements, in the very near future.


Like you I think the Quest will change everything. I feel that it has to, at this point, for VR momentum to break through. Do you think Oculus will dominate the space in the coming decade? I can't help but be reminded of Sense/Net from Neuromancer as a potential VR-only business with massive upside and potential (userbase in the hundreds of millions if not billions) that can be done _today_ if VR headset proliferation went beyond gamers. I am thinking if Carmack and Abrash can't get it done, chances that anyone else will are slim, at least in this generation.


I have conflicting thoughts about Oculus' domination. Last night was an interesting example: Facebook went down and took Oculus with it. But beyond that, if this is going to be the platform of the future, we're going to need more than one company behind it. History tells me that there is a better solution being built in a garage right now. If Palmer Luckey could do it in the 2010s, someone else can be doing it again now. I'd certainly vote for that company to be standalone.

Have you read The History of the Future (https://www.amazon.com/gp/product/0062455966) yet?


One fact that's seldom reported is that RTM's father, Robert H. Morris Sr, started working for the NSA in 1986, two years before RTM unleashed the worm. Food for thought maybe?


Not seldom reported at all. Was actually considered a major career embarrassment for RHMS.

Just a coincidence.


First, it's Lisp not LISP. Using "LISP" immediately flags you as someone with a superficial (if at all there) understanding of the language.

Second, unsubstantiated proclamations like "Overuse of them led to LISP code being hard to read" reinforce the previous point. Could you provide a clear reference where Lisp macros are considered "mistakes of history"? Clear references where overuse of Lisp macros turned out to be a problem?

I'm curious if you've ever used a Lisp development environment with facilities such as interactive macroexpanders or if you're just assuming things based on your (incomplete, suspect) understanding of the domain.


>First, it's Lisp not LISP. Using "LISP" immediately flags you as someone with a superficial (if at all there) understanding of the language.

Actually both versions are valid.

You seem not to know the historical background of Lisp/LISP. While Common Lisp is spelled "Lisp" and modern usage favors "Lisp", historically "LISP" was prevalent (and plenty of Lisp dialects prefer the capitalized version, e.g. "fooLISP" or "barLISP").

Second, you are concerned with superficial details people don't and shouldn't care about. We're programmers; we care about the code and what you can do with it, not about whether some language is "properly" spelled in caps or mixed case.

Third, you are rude, which is worse than both of the above.


"historically LISP has been prevalent"

I learned about it from old books, often on AI, that I could scrounge up when I didn't have the Internet or a computer. LISP as in LISt Processor. An acronym. Due to spotty memory, I sometimes forget which term to use for stuff that's faded away. I end up more or less randomly using LISP or Lisp, unless it's Common Lisp, where I usually see "Lisp" in write-ups.

So, good guess.


This seems unnecessarily harsh.

Capitalization isn't a deep signal.

There are lIsPs like Clojure that argue (> data functions macros). Although I mostly disagree (for example I love Racket macros), I understand and appreciate the sentiment. I have heard it from other people who have worked with a variety of LiSp code bases over the years. TL;DR: Any form of non-trivial DSL needs supporting materials like documentation and a simple, clear design.

Although it is nice to be able to expand macros, fully-expanded macro-generating macros are clear in approximately the same way as assembly language. It is impressive if you can navigate that, but even more impressive if you can manage not to need to do so.


Clojure argues that (> data functions macros), but it still has macros and accepts that they are not only useful, but sometimes necessary. Clojure's core.async would have had to be built into the compiler, if it wasn't for macros. Just because it prefers data to functions and functions to macros, doesn't mean that it doesn't recognise the importance or usefulness of all three.


> lIsPs like Clojure that argue (> data functions macros).

That's not really Lisp related. It's more like how the community likes to see code being written.

For example, I would regularly make use of macros to provide more declarative syntax for various programming concepts.

There are Lisp dialects which are light on macros and some which use macros a lot more. For example, the base language of Common Lisp already makes use of many macros by providing them to the user as part of the language (from DEFUN, DEFCLASS, ... INCF, up to LOOP).


Not bad but a little bit of a publicity stunt.

A major source of vulnerabilities is (still) the Javascript engine and that's (still) written in C++.

Even worse, as far as I know, Mozilla has no plans to rewrite even parts of Spidermonkey in Rust.

For some recent examples:

https://usn.ubuntu.com/3688-1/

https://usn.ubuntu.com/3749-1/


SpiderMonkey dev here. As others have mentioned, Cranelift is one component that's being written in Rust. Eventually we want to use Cranelift as compiler backend not just for WebAssembly but also for JS. After that it might be feasible to port more of the JS JIT to Rust. It might also make sense to use Rust for the parser or regular expression engine, who knows.

There will probably always be C++ code in Gecko, but I firmly believe that writing more components in Rust instead of C++ will (in general) improve security and developer productivity.

It still amazes me that we're actually shipping a browser with a CSS engine (and soon graphics engine!) written in Rust. Even more amazing is that these components are mostly shared with an entirely different browser engine.


A JS engine is a high-risk, high-reward problem for Rust. High-reward because JS engines are, to your point, a major source of vulnerabilities; high-risk because JS-engine theory is rather outside of Rust's wheelhouse.

One class of vulnerabilities in JS engines is use-after-move. A raw pointer is extracted, an allocating function is called (triggering a GC), then the raw pointer is used, pointing into nowhere. It's awkward to express in Rust that a function may modify state inaccessible from its parameters.

A second class of vulnerabilities is type-confusion. A value is resolved to (a pointer to) some concrete type, but some later code mutates the value. Now the concrete type is wrong. Again this possibility is awkward to express in Rust.

The problem is complicated by the NaN-boxing and JIT aspects of JS engines, which interfere with Rust's tree-ownership dreams.

People smarter and way better at Rust than myself are working on it; I'm excited by the prospect of novel solutions that can defeat entire classes of problems.
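For illustration, the invalidated-pointer shape does have a safe-Rust analogue that the borrow checker rejects at compile time, as long as the aliasing is visible in the types (which is exactly what a GC hidden behind global state defeats). A minimal sketch, unrelated to any real engine's code:

```rust
fn main() {
    let mut v = vec![1, 2, 3];

    // Analogue of the use-after-move bug: a borrow into `v` held across
    // a call that may reallocate the buffer is rejected at compile time:
    //
    //     let first = &v[0];
    //     v.push(4);             // error[E0502]: cannot borrow `v` as
    //     println!("{}", first); // mutable while it is also borrowed
    //
    // The safe pattern is to copy the value out before mutating:
    let first = v[0];
    v.push(4);
    println!("{}", first);
}
```

The hard part described above is that in a JS engine the "push" is a GC triggered by some unrelated allocating call, so nothing in the function's signature tells the checker that the raw pointer may be invalidated.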


I'm curious what proportion of vulnerabilities in JS engines are due to mis-generated JIT code vs direct errors in their compiled code. Rust allows you to express some nice properties not always directly related to memory safety (e.g. checked consumption, convenient and safe ADTs), but unless there is a novel application of these facilities to the structure of a JIT engine it won't help a ton with the former kind of vulnerabilities.

I'm excited to see a practical programming language that implements full dependent typing; languages like Idris are actually really good at dealing with precisely the kinds of situations you mention.


JS engines have many parts implemented natively, which may be called from JS, and in turn call back into JS. An example is CVE-2015-6764: this grabs an array length, which quickly becomes stale, because accessing one of the array's elements invokes a custom toJSON which in turn modifies the array's length.

This feels like a hopeless problem; can any of Rust's powers be brought to bear here? Could Idris?


F* is probably the best equipped at the moment to deal with situations like that CVE, since its library has a concept of heaps. Basically, any function that can access or modify the "heap" (which in F* is just a set of pointers that are guaranteed to point to a value and not alias any others outside of the same heap) must specify what properties of the heap's state must be true at entry, and what properties are true afterwards. So in pseudo-types, the functions for accessing a JavaScript array would be something along the lines of

    fn arrayLength(x: JSArray*) -> n: uint (requires nothing) (ensures length of x = n, changes nothing)
    fn callToJson(x: JSValue*) -> JSValue* (requires nothing) (ensures nothing)
    fn arrayAccess(x: JSArray*, m: uint) -> JSValue* (requires length of x > m) (changes nothing)
(NB: F* syntax doesn't look much like this, but I'm guessing this will be readable to more people on HN)

The clauses in parentheses after each function type are the preconditions and postconditions, respectively. So if you do something like:

    let x = arrayLength(someArray)
    for i in range(x) {
      let element = arrayAccess(someArray, i)
    }
It will typecheck just fine. But if you add the call to toJSON:

    let x = arrayLength(someArray)
    for i in range(x) {
      let element = arrayAccess(someArray, i)
      let transformed = callToJson(element)
      // ERROR: (requires length of x > m) not satisfied for all runs of loop body
    }
Since callToJson cannot ensure any property of the heap after it runs. In this way you can elide range checks when needed for performance without worrying that you've sacrificed safety.

Covering all the cases a JS engine would need without adding 10 million lines of proofs to the size of SpiderMonkey is still an open problem, but this general approach (known as Hoare Logic[1]) is very enticing, and the type systems that languages like Idris and F* have are definitely the closest to realizing it in more places. There are real software engineering efforts using descendants of Hoare logic like TLA+ (notably Amazon IIRC), but it's rare to see it even in huge projects like browsers.

It's also critical to note that the heap concept of F* is not a totally fixed part of the language; most of the specification of how heaps work are actually in the standard library. That level of flexibility is what I think makes these languages likely to become capable of tackling these problems: something like a JS engine or any optimizing compiler is exactly the kind of place where being able to come up with your own type-level verification model is worth the effort.

[1]: https://en.wikipedia.org/wiki/Hoare_logic


> Mozilla has no plans to rewrite even parts of Spidermonkey in Rust.

Here's a substantial part being rewritten in rust by Mozilla: https://github.com/CraneStation/cranelift/blob/master/spider...


I am not sure I would call cranelift "substantial" in terms of exposure/usage. From what I gather, it's not used at all for normal, everyday Javascript.

I stand corrected though, every little bit helps. Here's hoping they'll start using Rust in more places where it counts.


I suppose "substantial" is subjective, but I really do think it counts. Certainly there are unfortunately frequent vulnerabilities in the code it intends to replace. For example:

https://bugzilla.mozilla.org/show_bug.cgi?id=1493900

https://bugzilla.mozilla.org/show_bug.cgi?id=1493903

To be fair, I'm not actually sure Rust would fix either of the CVEs I linked, both being about problems in the generated code (as I understand them from a glance), which is something inherently unsafe to do.

Edit: I realized you might be picking out the word "ARM" on that page. I know Cranelift also works on x86, and I assume it's intended to replace IonMonkey everywhere, not just on ARM chips.


Rust isn't just for Firefox. I'm unsure if Safe Rust works with JIT compilation. Unless some new method comes around, JIT is king when it comes to JavaScript.


> I'm unsure if Safe Rust works with JIT compilation.

I mean, if you just want to compile the code, sure.

Executing arbitrary machine code not generated by the rust compiler (i.e. by the JIT compiler you wrote in rust) is basically the definition of unsafe though...
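A minimal sketch of that point (illustrative only; a real JIT would hand out a pointer into an executable page, so here a real function's address stands in for the emitted buffer): turning raw code bytes into something callable cannot be written without `unsafe`.

```rust
use std::mem;

// Stand-in for a JIT-emitted code buffer: the address of a real function.
extern "C" fn answer() -> u64 {
    42
}

// Safe Rust has no way to say "jump to this buffer"; the cast from a raw
// code pointer to a callable function is unavoidably `unsafe`, because the
// compiler can guarantee nothing about the bytes behind it.
fn call_jitted(code: *const u8) -> u64 {
    let f: extern "C" fn() -> u64 = unsafe { mem::transmute(code) };
    f()
}

fn main() {
    println!("{}", call_jitted(answer as *const u8));
}
```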


I've known you (from your posts at comp.lang.lisp) to be eager to present the facts as you see them and thorough in your argumentation. What is it about Urbit/Yarvin that merits this sort of post?


Have you ever looked at the Urbit code?

[EDIT] BTW, that was not a rhetorical question. I need to know so I can frame my answer. And BTW2, thanks for the kind words.


I've spent enough time (not much) with urbit to write a Nock interpreter/compiler. There are aspects of it that rub me the wrong way such as the needless custom terminology and general esoteric nature that sometimes reads like an occult grimoire but I also think that a lot of the criticism aimed at them, especially the politics, is misguided. Having watched Yarvin present on urbit a couple of times, I would say he is mostly driven by the intellectual atmosphere of the early Internet, before the masses moved in, thus his attempts to not "cast pearls before swine" by making things too accessible so to speak. I do not agree with this stance but I can certainly understand it without having to resort to conspiracies. Other than that, he most definitely reinvented Lisp, badly, but I am willing to give him a pass there too. There is nothing at all that attempts to make real the vision behind urbit around today and I do think it is an interesting vision. Finally, Alan Kay likes it.


OK, well, that's pretty much how I see it, except for one thing: I am vehemently opposed to making things unnecessarily complicated in order to keep out the riff-raff, and I don't think one needs to resort to any conspiracy theories in order to take that position. (If anything, there seems to be a fundamental contradiction between Urbit's stated goal of (re-)democratizing the internet and Curtis's approach. That and the fact that they sold address space for cash. But at this point that is neither here nor there.)

Ironically, I agree with Urbit's stated goal of making it easier to run your own server. The reason I say "good riddance" is that I was pretty sure that Curtis's approach would fail, and Urbit would implode sooner or later. But as long as it was alive (and funded by Peter Thiel) it was sucking all the oxygen out of the room.


This is disingenuous; systemd-journald is the default in every systemd-using distribution I am aware of. The philosophy of systemd is all about tight coupling and forcing its singular vision on end users. When that vision falls apart, you cannot claim it is not really a systemd problem because in theory you could have gone out of your way and done something that is not encouraged.

