Speaking impersonally, and not directly to OP, you can program in Common Lisp in industry, you just don’t want to that badly, and then you justify why you can’t by making up reasons about not being able to hire, not being mainstream, not finding jobs, etc. etc.
Somehow, I’ve been able to keep stable employment as a programmer who writes Lisp for over a decade. I’ve written Lisp at startups. I’ve written Lisp at Facebook. No, I haven’t been a consultant on some weird legacy Lisp project, either. The difference between me and someone who doesn’t have a career in Lisp is:
- I choose Lisp for projects that benefit from Lisp;
- I make the case to technical and non-technical people that I can solve their problem cheaply and efficiently, often with a concrete demo and without any of the vague promises of Lisp’s power;
- I put in extra effort to make sure we can hire for Lisp, train for Lisp, and integrate with Lisp.
If you want to write Lisp in industry, and if you’re a senior-or-greater engineer (whatever that means), then get some buy-in and write it. Learn the language backwards and forwards, and be responsible and accountable for your choice.
Unless you’re writing the world’s most boring software that needs 5,000 Java programmers all working on it at the same time, I can assure you, there is no issue with writing and maintaining Lisp, unless everybody you work with, including yourself, likes to pass the buck.
I generally agree, though I would highlight a remark you made in another comment that sometimes you just can't get buy-in. I'd go a little further: sometimes you're working with a team whose leaders are dyed-in-the-wool anti-Lispers.
But if you really want to work in Lisp, then you can. This is my thirty-fourth year of work as a programmer and technical writer. Of those years, I've been paid to work in Lisp for about twenty of them, so it can certainly be done--though you have to be patient and work in something else sometimes. If you want to be a Lisp hacker, learn another language well for those times when you're waiting for the right opportunity. C is generally a good choice; among other things, it's a common compilation target or kernel-implementation language for Lisps.
I also agree that you can teach Lisp to capable programmers without much trouble, but with the caveat that they have to want to learn it. Not all of them do. If you get someone who really just doesn't want to learn it, then no amount of effort or inspiration will clear that obstacle.
As a Lisper, your primary consideration when choosing another language to learn well is that it should be a language that enables you to easily find work when Lisp jobs are scarce.
Secondary considerations (that are nevertheless important) are whether it's pleasant to work with and whether it's likely to be useful in your Lisp work.
Whether a language is pleasant to work with is a matter of personal taste.
The other secondary consideration--whether it'll be useful in your Lisp work--is more objective. The language will be useful in your Lisp work if it's used in implementing Lisp, or if it's a common compilation target for Lisp, or if Lisp programs are commonly called upon to interoperate with it.
C hits the first two criteria: there are several Lisp compilers that compile to C, and several Lisp implementations with kernels written in C.
C, C++, Java, Python, and JavaScript hit the third criterion: there are many applications of Lisp that need to interoperate with programs written in those languages.
Rust doesn't fit any of these criteria particularly well at the moment. The number of open Rust jobs doesn't seem to be particularly greater than the number of open Lisp jobs, which makes it a poor candidate as a fallback language. There aren't yet a lot of applications of Lisp in which it's important to interoperate with Rust code, either.
Rust is high on my list of languages to learn next, but because it's interesting in its own right, not because it's a particularly good fallback language for a Lisper.
My interest in Rust doesn't have anything particularly to do with Lisp. I'm interested in programming languages in general, for their own sake, and I consider Rust's experiments in type-supported safety and resource-management to be interesting.
In some respects, it's sort of the opposite of what I like in a programming language, particularly in that it's a batch-compiled language with a reputation for long compile times. Still, I'm interested in its approach to safety and resource management.
Say you want to integrate with a Linux-Python-AWS shop, if that matters. For any external API -- crypto, AWS, JSON -- we'd love to have it natively in Lisp, but failing that, we'll wrap the C lib via FFI.
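For concreteness, wrapping a C function from Lisp via CFFI is only a few lines. This is a sketch; the library name and C function here are hypothetical:

```lisp
;; Load a hypothetical C library and expose one function through CFFI.
(cffi:define-foreign-library libfoo
  (:unix "libfoo.so")
  (t (:default "libfoo")))
(cffi:use-foreign-library libfoo)

;; Assumed C prototype: int foo_hash(const char *input);
(cffi:defcfun ("foo_hash" foo-hash) :int
  (input :string))

;; After this, (foo-hash "hello") is callable like any Lisp function.
```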
- SBCL’s built-in static executable compiler for deployment, or sometimes docker if that happens to be more convenient
- Practical Common Lisp for training; Paradigms of Artificial Intelligence Programming for taking a trained Lisp programmer and minting a better Lisp programmer
For individual libraries, there are too many to name. I like yason for JSON, Hunchentoot for web servers, Postmodern for Postgres/SQL, cl-sqlite for SQLite, fiasco for testing, etc.
If the s-expr were the only thing we meant when we said "we like Common Lisp", that would indeed be a good solution.
It isn't, though, and the experience I have had with languages like that is usually no different from the one I have with Python or Lua -- but with the added benefit of at least being able to use macros.
It’s hard to be categorical but for me: compilers, mathematical software, autonomous or long-running software, high-performance code, data structure-heavy code.
Some of my former colleagues might try to convince you Lisp is a great choice as a backend language. I agree that it’s a great choice in isolation, but I don’t have tons of experience using other tech to build extensive APIs with tons of layers.
A little more meta, but Lisp can be a great choice if a project can only afford a handful of developers. Lisp can give you a lot of “bang for the buck” in terms of code output/productivity and comprehensiveness of the solution to a problem.
some quantum computing lab made a lisp thing for their needs
To me, the value of Lisp is that almost everything beyond the primitive bits (if, cons, atoms, native types, ...) is fully open. All the rest is customizable, so if you need something deeply new and deeply alien, you'll get near-zero resistance from Lisp.
How about if your company decides the approved languages are X, Y, and Z, and Lisp is not among them? Such companies exist. I work for one. (They are currently discussing removing JavaScript from the approved languages list because its dynamic typing makes it unsafe at any speed. TypeScript is the approved substitute.)
If you cannot get higher-up buy-in to support your chosen language and train new programmers in it, then it doesn't matter how much of a practical advantage your solution in that language provides -- continuing to code in that language is a fast track to getting PIP'd.
If you can’t get buy-in, you can’t get buy-in. Some companies won’t allow you to write whatever you want, Lisp or otherwise. I don’t suggest writing Lisp is possible everywhere, just that it’s perfectly attainable and within one’s power to have a career of which Lisp is a significant part, if you decide that should be a priority. If working at $COOL_COMPANY that only allows Python due to CTO-mandated policy is your priority, then there’s not much that can be done to accomplish both that and writing Lisp.
Sometimes, also, you have to be more persuasive than “let’s use lisp it’s cool”. Implement a piece of code as a side project that solves a real problem that people care about at your company. Don’t shove it in people’s faces or rogue commit it to production. Ask people about it and have a plan to integrate it and support it. Sometimes it works. Sometimes it doesn’t.
As professional software engineers, our job is to solve technical software problems in exchange for a wage. Some of these technical problems come in the form of giant pieces of software that you’re asked to chip away at. Others come in the form of greenfield implementation work. Some are somewhere in between. Some of these projects have 2 programmers, and others 2,000. Where on these spectra you’re being paid to contribute makes a lot of difference in how much sway you have for foundational technological choices on specific projects.
A colleague implemented a small utility in Clojure (~50 lines long) that worked great. It led to a mandate that allowable languages are: C/C++, C#, Python.
The colleague left 4-5 years ago for an external job. Part of the problem was the utility needed some enhancements/extensions and nobody wanted to continue it in Clojure. I think it was rewritten as an overhaul/revamp of the larger project it was part of.
I also left but for a different internal position where I can write C/C++, Java, Rust, Python (no C# in new group because new job is writing software to run on linux, and our IT does not support dotnet core - in addition there is no demand from us because we have a zillion lines of code all written before any dotnet on linux existed). EDIT: Old group can use C# because that work was on Windows (just to clarify C# wasn't dropped company wide).
Anyway, some orgs might be more receptive to "exotic" languages but I know where I work you'd have to fight really hard to do it. I was pleasantly surprised to see Rust got the thumbs up.
Pretty much this. That and "No one else here knows Lisp. How are you going to collaborate on it? We do software development as a team effort, and you should be pairing/mobbing on everything you write."
From the article: “My heart was broken because Common Lisp is such a fine fine language and it is a joy to work in and hardly anyone uses it in industry. The industry has a lot of code in Java even when it takes much less time to write code in Lisp. What happened to the programmer’s time is more important than the machine’s?”
I think one issue that Common Lisp and many other programming languages face is that while a programmer’s time is more valuable than a machine’s, there are dynamics at play when maintaining commercial software projects that favor coding in more popular languages, such as the ability to easily hire developers knowledgeable in the language and/or paradigm. For example, imagine being a manager of a project written in Haskell that liberally uses functional programming concepts such as monads. It’s easier to hire developers knowledgeable in standard procedural or object-oriented programming languages than it is to find developers knowledgeable in functional programming, and getting a non-FP programmer up to speed on functional programming requires training, which takes time and money.
I love languages like Common Lisp, Scheme, and Smalltalk. However, if I’m working on a team project and my teammates don’t know these languages, then I choose a more common language like Python. Programming is more than just expressing computation; it’s also about communication, and one major purpose of language is to communicate with others.
Lisp doesn’t really have many, if any, super weird concepts that need months of thinking about in order to understand.
Training seems like some lost art as we collectively move toward a more disposable programming ethos. Training a good programmer to write Lisp is, what, two weeks of work? Learn how to make variables, write functions, define classes, and compile a project. That gets you 80% of the way there, and definitely gets you across the “productivity threshold”.
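That two-week subset really is small. A sketch of nearly the whole of it:

```lisp
;; A variable, a function, and a class: the beginner's toolkit.
(defvar *greeting* "hello")

(defun greet (name)
  (format nil "~a, ~a!" *greeting* name))

(defclass account ()
  ((balance :initarg :balance :accessor balance)))

;; (greet "world") evaluates to "hello, world!"
```

Compiling and loading a project is one more form, e.g. (asdf:load-system "my-project").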
Somehow, in the same breath, we find it okay to spend weeks agreeing on how to ship Python code, what the standard development environment is, and how a 2->3 transition of 100,000 lines of code will be accomplished.
> Lisp doesn’t really have many, if any, super weird concepts that need months of thinking about in order to understand.
I mean Haskell advocates will say the same about Haskell (monads are really not that bad: I'd be surprised if someone on a team of experienced Haskell programmers would take more than a week to come up to speed on them. It's more the rich interplay of monads and other concepts that either rewards deeper study or is a dangerous rabbit hole depending on your perspective). In both cases it's a matter of how deep the rabbit hole goes in the concepts that the languages do offer and how much a given code base uses those concepts to their fullest potential.
It's a double-edged sword. The same concepts that yield so many layers of power, expressiveness, and universality can yield frighteningly arcane codebases in the hands of advanced novices, those with "just enough knowledge to do harm." There's a reason the meme exists of Lisp programmers going through a phase where they want to "macro all the things" before calming down and using them more judiciously.
Likewise, a boring language like Go that purposefully imposes abstraction ceilings is a double-edged sword whose edges cut the other way.
An excellent team with good taste will be at home in whatever language they end up choosing, almost by definition. The opposite direction is a road much more fraught with danger.
Haskell programmers may say the same thing about Haskell, but it’s not the same thing as far as Lisp is concerned.
Yes, it’s possible to concoct the weirdest stuff in Lisp, but that would probably be bad code. Some macros might take time to learn, but unlike Haskell where you need to know monads to be productive, you do not need to know how to author macros to read and write most Lisp.
Languages that have these “abstraction ceilings”, as you put it, have their own danger as well—just a different danger. Namely, you’re choosing boringness of the language as a feature, in many cases requiring some multiple of additional programmers, which means some multiple of additional time spent hiring and money spent paying salaries. For some companies, money isn’t a problem, so choosing boring tech with a gazillion programmers and double-digit churn rate is just fine.
Programming in Go doesn’t automatically resolve all the issues of running and maintaining a technical project. It makes one particular problem trivially solved (inconsistent code format, expressiveness going off the rails, etc.). But choices and policy around how code is contributed, how software is documented, how architecture is developed and reviewed, how dependencies are managed and added, how code is reviewed, etc. still fall on humans to conceive and implement as agreed-upon policies. In Lisp, how code is formatted and what language features might be off-limits can also be solved by way of such policy-making. It’s not theoretical, either; even Google publishes a style guide on Lisp development that they follow internally, and it works fine. It’s really not as “dangerous” as everyone seems to make it out to be.
> but unlike Haskell where you need to know monads to be productive
I disagree. You don't need to author monads to be productive in Haskell (and in a production codebase rarely would; the monads you use change rarely). Having written a good deal of Lisp going out to production in the form of Clojure and a good deal of Haskell, I really think monads and macros are a rather fitting analogy here. It's really the same deal. You need to know how to use monads in Haskell just as you need to know how to use macros in a Lisp. Are you going to be writing a lot of your own new ones in a pre-existing production codebase? Unlikely. Are you going to be creating a couple of them here and there in a lot of the new libraries you write? Probably. Are they both things that sound really scary to outsiders but are really quite tame if you have a team surrounding you helping you out? Yeah.
> It’s really not as “dangerous” as everyone seems to make it out to be.
Again I don't think it's dangerous. I just think it's a weird direction to take. There are no good languages in isolation. There are only languages that are the best fits for a given team.
EDIT:
> Languages that have these “abstraction ceilings”, as you put it, have their own danger as well—just a different danger. Namely, you’re choosing boringness of the language as a feature, in many cases requiring some multiple of additional programmers, which means some multiple of additional time spent hiring and money spent paying salaries. For some companies, money isn’t a problem, so choosing boring tech with a gazillion programmers and double-digit churn rate is just fine.
Indeed, I'm not a particular champion of Go or Java (the traditional boring languages people bring up). Their very boring-ness has its own drawbacks, as you say, ones that I feel acutely whenever I've worked on teams using them. But these are a series of true trade-offs, where different situations will call for different needs. I have met plenty of people who are extremely competent Common Lisp, Haskell, Clojure, etc. programmers who have proactively chosen to go with a "boring" language for a new team (for a variety of reasons, not necessarily because the "interesting" language has sharp edges).
The team (and by extension the company) precedes and determines the language, not the other way around.
I do agree most Haskell programmers don’t need to be authoring monads to be productive. My take was more along the lines that monads expose an interface/API to the programmer that is used pervasively in “ordinary programming”. This interface/API is defined rather abstractly and doesn’t come equipped with “obvious”, self-evident definitions. You can get pretty far in Haskell, though, through just sort of parroting these operators for specific class instances of interest, like IO and Maybe.
This is, I claim, unlike macros, where you really don’t need to know about the machinery of macros (or even of their existence) to understand and use AND, OR, DOLIST, LOOP, etc. The notion of a macro doesn’t “leak” into “ordinary programming”, whereas ‘>>=‘ or ‘return’ do, even if just in type errors.
> This is, I claim, unlike macros where you really don’t need to know about the machinery of macros (or even of their existence) to understand and use AND, OR, DOLIST, LOOP, etc.
Nah you do need to know about them, otherwise you won't understand a bunch of weird errors in your program (e.g. hmmmm... why does (reduce or my-collection) blow up?).
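Sketching that failure in Common Lisp terms: OR is a macro, so it has no function object to hand to REDUCE, and you have to wrap it.

```lisp
;; This blows up: OR names a macro, not a function.
;; (reduce #'or '(nil nil t))   ; => error

;; Wrapping it in a lambda works, at the cost of short-circuiting:
(reduce (lambda (a b) (or a b)) '(nil nil t))  ; => T

;; Though the idiomatic spelling of this particular case is:
(some #'identity '(nil nil t))                 ; => T
```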
I agree you don't need to know much about them. Really a novice Lisp programmer just needs to know, "don't use macros in anything that isn't a first-order construct, otherwise just treat it like a normal function," and that'll get them pretty far. But that's, as you say, really just the same difficulty as "sort of parroting these operators for specific class instances of interest, like IO and Maybe."
And to understand one more level further down is really easy in both cases if you have any amount of hand-holding by an experienced mentor.
Yes if you have problems with macros there are ways of debugging them. My point (in reply to reikonomusha) was simply that you do need to know about "the machinery of macros" and certainly their existence to do day-to-day programming in a Lisp.
> Yes, it’s possible to concoct the weirdest stuff in Lisp, but that would probably be bad code. Some macros might take time to learn, but unlike Haskell where you need to know monads to be productive, you do not need to know how to author macros to read and write most Lisp.
You need to know what the macros a given codebase uses do if you want to work in that codebase. This is actually a worse situation than in Haskell, where if you're comfortable with monads in general you can probably be productive in a given codebase without knowing about the specifics of the monads that codebase is using.
It’s not “actually” a worse situation except in some pathological academic sense. The point of a macro is to make some idea clearer to understand by way of syntax. If it’s not doing that, it’s probably a bad macro.
> The point of a macro is to make some idea clearer to understand by way of syntax.
The question I think is clearer for who? Macros, as all abstractions, seem to be a way to make the expert faster at the expense of the beginner. Lisp is probably better at avoiding accidental complexity coming with it, but I don't think it avoids complexity entirely.
The way to settle this, at least for a given macro or macros, is to bring the expert and beginner together, and have both of them examine the macro calls, their documentation, and a version of the code in which the macro calls have been replaced by their hand-written expansions.
If you restrict yourself to a reasonable-to-understand subset of possible macros - such as those that could be expressed as monads in Haskell - then a reader that knows that you've so restricted yourself can work in your codebase without understanding all your macros (as long as you don't have any bugs in your macro implementations). But in that case why use a language with macros?
You can teach someone to use the built-in loop macro without using the term "macro."
It's far more hairy and complicated than any non-built-in macro you will encounter in the wild. Literally the only time I worry about macro vs. special operator vs. function is when doing higher-order programming, which is less common in Lisp than in Haskell.
You can teach someone to use a fixed set of macros. But again, in that case why use a language with macro features? If you're never going to use a non-built-in macro with the complexity of the built-in macros, why does the language need this complex macro system rather than a set of built-in keywords and a simpler extension system?
> You can teach someone to use a fixed set of macros. But again, in that case why use a language with macro features?
Because sometimes writing a simple macro makes it easier to write (and read) other code.
> If you're never going to use a non-built-in macro with the complexity of the built-in macros, why does the language need this complex macro system rather than a set of built-in keywords and a simpler extension system?
There's a lot to unpack here. A good Lisp developer will very rarely need to write a macro that approaches the complexity of the loop macro. Firstly, that's not never. Being able to write a hairy macro is more like an air-bag than power windows. You're glad to have it when you need it, but you mostly forget it's there the rest of the time. Secondly, the same macro system that lets you write something like loop also lets you write simpler macros that make code both easier to read and harder to write incorrectly.
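As a sketch of that second point, even a tiny macro can fold away bookkeeping that's easy to get wrong by hand. WITH-TIMING here is a made-up name, not a standard utility:

```lisp
;; A hypothetical WITH-TIMING macro: callers never repeat the
;; start/stop bookkeeping, and can't forget half of it.
(defmacro with-timing ((label) &body body)
  (let ((start (gensym "START")))
    `(let ((,start (get-internal-real-time)))
       (prog1 (progn ,@body)
         (format t "~a took ~d ticks~%" ,label
                 (- (get-internal-real-time) ,start))))))

;; Usage:
;; (with-timing ("query") (run-query db))
```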
Lisp does not have a "complex" macro system. It has one of the simplest extension systems of any language I've used that has an extension system.
There are also plenty of people who think loop is too complicated and too inflexible and so avoid it.
Finally everything you said about macros and loop can be applied to other languages and the standard library. If nobody's going to write anything as complex as printf(), why does C need variadic functions? If nobody's going to write anything as complex as itertools, why does python need higher-order functions and generators?
I don't actually see the extensibility of lisp as its "killer feature" but I definitely see it as a net positive, while a lot of people seem to see it as a net negative. I blame books like "Let Over Lambda" which are more to be seen as examples of what you can do rather than what you should do.
There are lots of things to hate about C, but the fact that you can do:
enum { TRUE = 0, FALSE=1 };
is not one of them. There's plenty to hate about lisp, but the fact that it's possible to write bad macros is not one of them.
> Finally everything you said about macros and loop can be applied to other languages and the standard library. If nobody's going to write anything as complex as printf(), why does C need variadic functions? If nobody's going to write anything as complex as itertools, why does python need higher-order functions and generators?
IMO the essence of good language design is balancing expressiveness with enough constraint to make code understandable. I agree that printf is a mistake (string interpolation is enough of a special case that it's better as a language-level builtin). As for itertools I'd say that I do write code that's just as complex as the itertools builtins, but also that the itertools builtins don't hurt code readability so much because they're all (more or less) legitimate functions that can be reasoned about compositionally. Code that contains a call to a complex itertools builtin is not particularly hard to understand, because even the most complex itertools builtin returns a plain(ish) value that obeys simple rules. Not so for code that contains a call to a complex macro.
> Lisp does not have a "complex" macro system. It has one of the simplest extension systems of any language I've used that has an extension system.
"complex macro" system rather than complex "macro system", but yeah, I expressed myself badly. The problem isn't complexity per se (and as you say, the actual system is very simple) but that it's an unconstrained one where a macro can do almost anything.
> There's plenty to hate about lisp, but the fact that it's possible to write bad macros is not one of them.
I disagree. In my experience the main reason people give for moving away from lisp on any given project is "maintainability", and if you dig a little deeper that usually boils down to "we wrote too many bad macros". And a lot of the "limited tooling" complaints boil down to the effectiveness of IDEs etc. being limited by the presence of macros.
A program can do almost anything; that's the result of internal Turing completeness, plus unfettered external platform access. Macros do not cause this. A program can do anything using nothing but functions.
Macros are supposed to be used when they organize some particular anything in a way that makes it easier to understand and maintain (or even more performant) compared to the best available macro-free approach.
Everything can be abused: control structures, data structures, functions, variables, ...
I think most programmers would agree that running a code-generation program at build time is something that should be approached with a little more caution (not saying that you blanket shouldn't do it, but you should check the reputation of the code generator, what the output looks like, how well your debugger/profiler can handle it...) than using a plain-code library. But a lot of people seem to somehow fail to realise that using a macro is the same thing.
Everything can be abused, which is why it's best to write everything in the most restrictive environment that still allows expressing what you need to. See Dhall or Noether for what that looks like for everyday code. At the language design level, there may be cases where users need to apply an arbitrary Turing-complete AST->AST function, but it shouldn't be the first thing they reach for; if there are features (such as do notation) or patterns (such as Python decorators) that allow expressing most macro use cases via a more constrained (and therefore less maintainability-impacting) feature, that's a big win.
>Lisp doesn’t really have many, if any, super weird concepts that need months of thinking about in order to understand.
I dunno, using a DSL for linked lists as the only syntax construct is pretty radical. CAR/CDR and CONS pair dot notation must feel pretty esoteric. The concept of macros that transform the syntax tree is unusual, and knowing when to use them requires a restraint that comes only with experience. I could see it taking months if not years to fully grasp the zen of Lisp.
    (if (= x 2)
        (print "x is two!")
        (print "x is not two!"))
or
    (defclass person ()
      (name
       address))
will not take a professional programmer who is prepared to receive a 6-figure salary “months” to learn. We might as well call
[2*x for x in range(10) if x % 2 == 0]
a “DSL for constructing lists” in Python, which, incidentally, also doesn’t take most paid programmers months to learn. This doesn’t even get into the zoo of syntax that languages like Python have to offer, like decorators, generator expressions, walrus operators, loop keywords (“else” in a for-loop?), etc.
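For what it's worth, the closest Common Lisp spelling of that comprehension is a LOOP form, and it reads about the same:

```lisp
;; Equivalent of [2*x for x in range(10) if x % 2 == 0]:
(loop for x from 0 below 10
      when (evenp x)
        collect (* 2 x))
;; => (0 4 8 12 16)
```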
In my experience training both interns and new hires, getting from zero-to-first-commit is like a week. Getting on code reviews, 2-3 weeks. Feeling confident and comfortable, 4-5.
Also, nobody is being paid to feel language zen. Yes, mastering Lisp or becoming anything close to an expert might take years. But being a language academic usually isn’t the goal of the working programmer.
If that's the difference, why would anyone care about using lisp? The examples you've chosen are nothing more than "parens before the statement instead of squiggles after", more or less. Why would anyone advocate for lisp if that were the difference?
I'm not a lisper, so that's an honest question. I can't believe people are so evangelical about the language over such a trite difference from C/Python/Java.
A language is much, much more than syntax. I showed some trivial syntax, but I didn’t talk about how fast it might be to execute that code, nor how easy or hard it might be to debug it, nor how straightforward it is to pull in a dependency, etc. You’re right, there’s little that can be inferred from such a simple code example, except one thing: the syntax really isn’t that hard to grok.
With that said, to throw a bone, for me, Lisp is valuable because it’s an incrementally and interactively developed language that makes available syntactic extension to the programmer. This makes it possible to be extraordinarily productive as a programmer on even the most difficult technical challenges.
Incremental and interactive: there is no edit-compile-run cycle. Feedback is instantaneous. Debugging is extremely fast. Concentration rarely falters.
Syntactic extension: One can author new syntax that fits problem domains. If I’m writing a lot of RESTy APIs, I might make a nice syntax for them that makes the route and consequent action front-and-center with no boilerplate.
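A sketch of what such a syntax might look like. DEFINE-ROUTE, REGISTER-HANDLER, HANDLE-JSON, and FIND-USER are all hypothetical names here, not a real library:

```lisp
;; A macro putting the route and consequent action front-and-center.
(defmacro define-route ((method path) (&rest params) &body body)
  `(register-handler ,method ,path
                     (lambda (,@params) ,@body)))

;; Hypothetical usage:
(define-route (:get "/users/:id") (id)
  (handle-json (find-user id)))
```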
There’s so much more than just these, though. Common Lisp implementations are incredibly stable and mature, there’s a huge library ecosystem, it’s possible to write code as fast as C, and the language has stood the test of several decades of time.
It seems like there would be a large space (that's sidestepped here) between learning basic syntactic differences and 'mastering' Lisp. Are there not novel concepts (outside of syntax) which programmers struggle with in that space? And does gp's comment about needing to learn 'restraint' from experience not apply to that middle space too?
IME if you were diligent about only hiring very smart + intrinsically interested programmers, the kind of outcome you're describing might be realistic. But I don't see this going well if you draw from the large pool otherwise available of more paycheck-motivated programmers who may or may not have a strong grasp of conceptual underpinnings of programming languages.
In anything, Lisp included, there are difficult concepts to comprehend. You can throw a dart and it’ll probably land on an interesting and difficult subject.
throws dart
How does finalization of objects in a generationally garbage collected implementation of Lisp work?
throws dart
How does a local dynamic variable interact with a new thread being spawned?
throws dart
What is a “three-comma macro” and how do I write one?
You get the picture.
Lisp has plenty of concepts that you can spend ages mulling over. But I contend that Common Lisp is in a sweet spot where it’s not particularly dogmatic about those concepts, and as such they don't consistently present themselves as hurdles to learning the language and being productive.
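To pick up one of those darts: with bordeaux-threads on a typical implementation such as SBCL, a freshly spawned thread sees a special variable's global value, not the parent thread's dynamic rebinding. A sketch:

```lisp
;; Dynamic bindings are per-thread; the child thread sees the global
;; value (behavior as with bordeaux-threads on SBCL, for example).
(defvar *level* :global)

(let ((*level* :rebound))
  (bordeaux-threads:make-thread
   (lambda () (print *level*))))   ; prints :GLOBAL, not :REBOUND
```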
Honestly the biggest hurdle of Common Lisp is that Emacs is the #1 doctor-recommended solution for editing Lisp—in the open source world at least.
Paycheck programmers are “easier” to teach in fact because they don’t care about the “zen” of Lisp or any of that. Tell them the rules, tell them what code to mimic, and tell them what issue tickets need servicing. If they’re otherwise fine programmers, they’ll make some mistakes, but those mistakes get ironed out during code reviews, and they happen less and less frequently.
But of course, the “intrinsically motivated” programmers are likely to produce better solutions to problems more consistently, and I try to hire for that regardless of the language.
That Python snippet reminds me of my least favorite part of CL: the LOOP macro. Although calling it part of CL is a bit of a misnomer since the standard specifies it as, paraphrasing, do loops n stuff.
It's more like 3 pages, with examples, when presented formally[1]. My point, though, is that the standard-specified behavior is quite limited, but it gives implementers broad latitude to "do loops and stuff" in a non-portable way[2] while still being in some sense standard. For example, LispWorks extends LOOP over SQL queries[3]. So sure, I was being a bit tongue-in-cheek, but LOOP really is a set of not particularly compatible imperative languages that happen to be embedded in the various CL implementations. This is an artifact of the CL standardization process, which essentially took the set union of the features of the most popular implementations of the time, except in notable cases like LOOP, where it was so hairy and incompatible that the standards body had to pick a relatively minimal intersection instead.
I don't think your second link says what you claim it does. Aside from being explicit that using let is totally fine (good) and that types are allowed, but not required to be specified (just like everywhere else that bindings are established), it just says that implementations can optimize things (duh?) and that there is no standardized way to extend it.
The last point is probably everyone's biggest beef with LOOP since it's in stark contrast to most other things in the standard which often have very good ways to extend things.
I find loop to be a fairly nice DSL for iteration. It could be better, it could be worse, and I like it better than all the DO constructs.
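For what it's worth, the standardized core of the DSL is genuinely pleasant; a sketch using only portable clauses:

```lisp
;; Standard LOOP clauses only -- portable across conforming
;; implementations, unlike the vendor extensions discussed above.
(loop for x in '(1 2 3 4 5 6)
      when (evenp x)
        collect (* x x) into squares
      count t into n
      finally (return (values squares n)))
;; => (4 16 36), 6
```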
> I don't think your second link says what you claim it does.
Based on the historical context of the standardization effort, I read "There is no standardized mechanism for users to add extensions to loop." as the standards body punting and basically giving implementers free rein to include whatever extensions they like. That's certainly what they did anyhow. However, on a more careful reading, I agree I probably misread that particular line. That's not to say that the practical effect isn't what I observed, though. It's hard to find a solid reference, but look at this mess[1]. "Each of the above three LOOP variations can coexist in the same LISP environment." gives a hint of the messiness that follows.
Having different iteration constructs or having multiple versions of the same was not untypical on a Lisp Machine, which booted into a single Lisp runtime, but which contained several different Lisp dialects and their language constructs. That way one did not need to reboot the computer to use a different LOOP variant - because the Lisp image was shared in the OS.
The LOOP code you mentioned came from MIT and Symbolics from the 70s/80s, when multiple Lisp dialects were under use and sometimes, on a Lisp Machine, in the same runtime. The original MIT LOOP implementation had a single source code, which could be loaded into different Lisp dialects and systems.
The CL standardization of LOOP was more than ten years later.
Multiple implementations of loop being able to coexist in the same language is an example of clean design that speaks in favor of macros.
If there is an issue in the implementation of for in your C++ compiler, or JavaScript machine, can you cleanly bring in another for implementation and switch the code over to it?
While code that is not affected by the problem sticks with the same one?
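Concretely, in Common Lisp the swap is a package operation; a hypothetical sketch (:MY-LOOP-FIX is an invented package standing in for a patched LOOP):

```lisp
;; LOOP is an ordinary macro exported from the COMMON-LISP package, so
;; a replacement implementation can be swapped in per package.
(defpackage :hot-path
  (:use :cl)
  (:shadowing-import-from :my-loop-fix #:loop))

;; Code in HOT-PATH now expands MY-LOOP-FIX:LOOP; every other package
;; that uses :CL keeps the original CL:LOOP, untouched.
```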
Same could be said about the zen of, eg, Python though. Deep experience pays off for library/framework authors, but most practitioners don’t have daily need for __new__ or slots or...
I remember the young guys on the Coursera programming-languages MOOC; many were very, very frustrated by the flattened layers of abstraction of Lisp.
The most peculiar part of this was that the group of elders already versed in Lisp, macros, and eDSLs couldn't find a way to explain it to them. An ancestor of the Monad curse.
Personally I abhor complexity and to me cons / car / cdr everywhere felt like a revelation :)
> Training seems like some lost art as we collectively move toward a more disposable programming ethos.
I think you're right on that, and that is damaging for Lisp. These days most training seems to be done by external sources, for example Pluralsight, Udemy, Medium articles about how to do specific things, etc. Since those have commercial incentives, they target the biggest audience, which is usually JavaScript, Java, Python, or something like that. On the other hand, companies don't want to train people in a specific language, so they hire people who already know their language. That creates a positive feedback loop that makes less-used languages even less used. Same thing with open source contributions (tooling, libraries, etc.). I think the insistence on Emacs may also be damaging because it's an even bigger barrier to entry.
"Lisp doesn’t really have many, if at all, super weird concepts that need months of thinking about in order to understand."
I don't think we can have both that and the traditional "Lisp is a great language to learn to expand your mind."
"Training seems like some lost art as we collectively move toward a more disposable programming ethos."
Speaking as an engineering lead, my problem is increasingly not an unwillingness to train, but that I need to do something other than train; that is, I need to exploit the training to get some real work done at some point. On my team, being a fully functional member already requires fluency in 4 general-purpose programming languages, 2 devops languages, all the misc other details like SQL knowledge that I could hardly enumerate off the top of my head if I wanted to, and the associated system knowledge for half a dozen systems as to how they are deployed, how to fix them, etc. Bringing up a fresh college grad who happens to have zero of these could be the work of plural years already.
I compare this to what was required in my first engineering job in ~1997: One programming language (VBScript), basic SQL, and "fluency" in HTML that today would be considered "basic" HTML, and with that we had everything we needed at the time. Adding things to the stack was easy to justify back then. It's harder now.
If you want to add another language to the pile, I'm going to need you to have a darned good reason beyond "It's a good language" or "But I like it".
2 weeks is an absurd underestimate for obtaining Lisp fluency, by the way. Surface syntax understanding and the ability to build something someone else already built, sure, but learning the libraries, the existing code base, the other quirks (especially in a language like Lisp that is all but made out of quirks if you start using the language well), there's no way that's two weeks.
It's not that I'm not open to languages, it's that even on my own personal team I can't afford to have everyone drag in whatever language they want to use for some task. It's not a coincidence that so many companies end up with the Approved Language list. Smarter companies with such a list will have a way to get specific exemptions and make it not too hard to get them for good reasons (e.g., if your list is C++ and Javascript one has a good case for adding Python for an ML or NumPy project), but if you let everyone drag in whatever language they like you'll end up with enforced fragmentation of teams and significant difficulties with bringing new developers up to speed in positions where they have to become fluent in perhaps literally 10 general purpose languages to function fully.
What managers tend to forget is the selection bias. While it is definitely harder to find a Lisp/Haskell programmer than, say, a Java programmer, the former has a much higher chance of being competent and a quick learner, because that's the kind of person who tends to learn non-mainstream languages. They will likely be onboarded faster, and likely be far more productive than their peers.
I'm forgetting whose quote it was, but it goes (from the 2000s) something along the lines of: "I'd rather hire a Python programmer, even if the application is written in Java, because a Python programmer has selected him/herself by putting in the effort to learn Python."
I partly agree but this can too easily become an argument in favour of the lowest common denominator. I can hardly imagine a language that's easier to learn than Python, and for that reason a) lots of people already know it, and b) those that don't can learn fast. But that's at least partly because it just doesn't have features that are widely regarded as beneficial - a type system, metaprogramming, good functional constructs etc.
Before anyone comes back at me for suggesting that Python lacks essential features: I know there are third-party libraries or core features to do this stuff, but they're pretty far off-meta, so your team isn't going to know them or have good idioms/consensus for using them, and at that point you might as well just learn something a little more advanced anyway (Julia would be an excellent choice).
One other large issue is that less popular languages (CL, Scheme, Smalltalk, etc) sometimes just don't have the library you need or the libraries are immature and not fully developed.
Database adapters for something like SQL Server are a great example. Most languages have a Postgres library, but sometimes the business requires a SQL Server connection... it's rarely been by choice that I've wanted to use it, but by necessity.
Another is Salesforce... I don't know anyone who would voluntarily want to work with Salesforce, but more and more it's often a requirement to have a working SOAP library, Salesforce client, etc. Sure, you could write your own... but I can't think of anything less interesting to do in development when you could just switch languages and integrate directly without having to write a lot of glue code.
I do feel like I get more done in Common Lisp when the language 100% hits my use case, but it seems inevitable that my use case involves external services I didn't plan on, which have to be re-implemented, or a workaround/microservice written for them in a different language.
It's a harder problem when the language is unopinionated. It's easy to bootstrap productivity when everyone is already on the same page about how you write code in this language. Whether it's "OO" or "pythonic", having clear opinions that are baked into the language and shared across the industry means anyone can become productive quickly in a new code base.
The problem that lisp has is that its very expressivity means that most lisp programs are written in their own custom DSL. This is exactly what people love about it but makes knowledge transfer between organizations hard.
Most Lisp programs are not written in their own DSL.
Programming teams can be opinionated. Just someone with authority needs to write down the opinions.
Common Lisp in particular can be pretty opinionated if everybody shares the same configuration of SLIME and Emacs. There are a few policy rules atop that really make code style consistent and watertight. But, unlike Go, it’s not “for free” with a canonical tool.
The best way to bootstrap language adoption is to get a big tech company to push it. Java (Sun, Oracle, Google Android), C# (Microsoft), JavaScript (all browser vendors, Facebook React), Rust (Mozilla/Microsoft), Swift (Apple), Go (Google).
Most of the popular languages are backed by one or more large tech companies. This gives managers the confidence to adopt them without reservation. It's the old "nobody ever got fired for choosing IBM" line, broadened.
Then there's Python. Ruby (currently waning in popularity, but we are talking about bootstrapping popularity), PHP (well, FB is a major company, but PHP was popular long before that), Perl, not to mention C and C++.
These languages have all at one point been "top" languages (more so than Swift, anyway).
Well, big tech backing is a sufficient condition for popularity, but not a necessary one. If you're looking for necessary conditions then that's a far more difficult question.
Why do other things (books, songs, movies, games) become popular? Some things just go viral for whatever reason and it's very difficult to explain why, even after the fact. Like trying to predict where lightning will strike next.
Big tech backing is nowhere near sufficient. Dart almost died (and the jury is still out). Facebook created a statically typed Erlang, and I think that one is already dead, not a year after it was supposed to be announced.
On the other hand, developers who choose to learn Haskell are usually above average, so if you're really looking to hire above average developers, you're making your job easier.
I think many excellent devs know Haskell, have used it at some point or at least played with it, or can get up to speed in it within a few, maybe 2 or 3, weeks.
While that may be true and does not actually oppose what I wrote, I think we are talking about 2 different kinds of excellency.
One kind of excellency is to be able to quickly solve problems in one given language. For example an excellent Python developer. There are many of those.
The other kind of excellency, in my opinion, goes much further than that. A very solid background in CS, a deep understanding of programming concepts of not only one but many programming languages, some knowledge in programming language theory, knowledge about various programming paradigms, their concepts and how to apply them in problem solving and daily programming tasks.
The second kind of excellency I am quite sure to find in a developer who has a background with Haskell and some proper work done in it on their CV. I would be less sure to find it in just any good Python developer. Why do I think so? Because Python is becoming more and more popular, is an "easy language to learn," and mostly follows the mainstream OOP approach. So people learning it get some exposure to the other things I count under the second type of excellency, but not really that much, unless they are special cases and actively make an effort to get that exposure. In Haskell you are thrown into another, not-so-mainstream programming paradigm, you are learning a not-so-mainstream language, and you touch on many programming concepts which you might not necessarily get in touch with using Python.
(Don't get me wrong, I write a lot of Python code myself, but exposure to and frequent usage of some other languages, including mostly functional-programming ones and strongly statically typed ones, make me miss a lot of things in Python and make me see the language's shortcomings.)
That second kind of excellency is learned by studying hard or through years of experience, and requires you to dip your toes into unfamiliar waters. This kind of excellency enables you to quickly learn about the power of alternative systems or languages and grasp what they can do for you.
In the end it is all about getting to know and understanding lots of concepts. The more exposure you have to different (!) things, the more knowledgeable you will be. So there are even better chances of finding an excellent developer of the second kind if that person happens to have experience in Haskell and Python, or whatever other languages you come up with that are significantly different from whatever one already knows.
Suppose there are 20x [1] more Python developers, though that seems really low to me, I would've guessed more like 500x. Suppose skill is distributed exponentially. How much better is the average haskeller than pythonista? 1.1x? 2x? 10x? Given those assumptions, how many developers at each level of quality are there? I got curious so https://i.imgur.com/kEkmAsf.png . I think that means average Haskellers have to be a lot better than average Pythonistas before there start being more 5x-quality developers in Haskell than Python (because there are so many Python developers).
That's the wrong question. If you try to attract as many developers as possible, you're going to attract more good developers, that's true. But you're going to attract many more bad developers.
The right question is: how many of the developers who know Haskell are excellent? And how many of the developers who know Python are? The first figure would be much higher.
Part of the dynamics was also the late-'90s OO shift. It was a catastrophe where utter verbosity was confused with engineering. It's basically 1/Lisp.
Yes, it did in the '80s; it was in the top 3 of the TIOBE index for many years. It's ranked 25th this month, ahead of Rust, Julia, Haskell, Scala, Lua, Elixir, Clojure… funny, right? Yes, the TIOBE index doesn't measure very meaningful stuff.
The REPL, the static checks, the binaries/deployment options, the stability, the load times, the ecosystem… are weaker in Julia.
This quote makes me wonder:
> This means there is an upper bound to how good the editing and refactoring tooling will be. I suspect the best will be a little below Python's tooling, because Python's class helps in disambiguating methods.
There's Mercury. I think it's a pretty good language. It's better than Lisp at least, which doesn't do anything interesting for execution power or safety besides table stakes like memory management.
There could/should be, because of Answer Set Programming, which addresses one of the features of Prolog that derailed it from widespread adoption. Prolog makes a committed choice as it expands its search tree, and this can lead to surprising behaviors and failures.
Now - most Prolog folk ardently deny that and say "no these are what the language does, so that's not a point of discussion." But I will write an example and then say why this matters.
In SWI-Prolog (and all the other Prologs I've tried), if you write a program like:
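(The original snippet was omitted; a sketch of the kind of program meant, the classic left-recursion trap:)

```prolog
% Sketch: the recursive clause of path/2 is listed before the base case.
edge(a, b).
edge(b, c).

path(X, Y) :- path(X, Z), edge(Z, Y).   % recursive clause first
path(X, Y) :- edge(X, Y).               % base case second

% ?- path(a, c).
% The goal re-enters the first clause forever and never reaches the
% facts; swap the two path/2 clauses and the same query succeeds.
```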
The thing is, clause order matters because the inference engine explores clauses in order during its matching process, and it's really, really easy to introduce a killer search like this. This is also really difficult to debug, and it's a bug that is very difficult to predict, in that any Prolog program of a large size may suddenly manifest this behavior with no warning. You may be able to find the bug, you may not...
So - this really kills enthusiasm for prolog in commercial environments.
Answer Sets get over this - the order of the predicates is not important, the solver can cope with the massive recurrent searches. Knowledge Based Systems and deductive inference are now more possible than ever before - and we have orders of magnitude more resources to throw at them than we did.
Additionally, we have mechanisms for probabilistic reasoning in MCMC that we didn't have before, and a unified paradigm of logic programming looks like it's on!
The sad bit is that not many people seem to want it - it's quite hard to think of the real problems that we will solve this way. We know that efforts like Cyc are just not going to work out, so there's no talk of "universal knowledge bases" or building common sense in.
The only technical barrier I can think of though is that programming in the large isn't really thought through in these kind of large inferencing systems - there's a unified model that essentially has to be written by one person at the heart of them. We need to have collaboration and concept formation tools and processes that work for large teams to develop scaled systems like these.
Ugh, no. SQL is a special-purpose database language. I've tried to look at Prolog from a databases point of view and it makes no sense that way. Even datalog (a decidable subset of Prolog without functions and negation) doesn't make that much sense from a databases point of view and it was specifically designed as a database language from the start, far as I can tell.
In practical terms, SQL is like one third of Prolog. Prolog programs are stored in a database as sets of Horn clauses that can be conditionally true (definite clauses or "rules"), unconditionally true (unit clauses or "facts") or unconditionally false (Horn goals, or "queries"). A Prolog program is executed by trying to refute a Horn goal by proving it false in the context of the program with an automated theorem prover and binding the variables in the goal in the process. Unit clauses correspond to database rows in SQL, so a SQL database is basically a set of "facts", seen from the Prolog point of view. But there are no "rules" and "queries" in the database - instead the program is written in an altogether different language that sits on top of the "facts" in the database. So from the point of view of Prolog, SQL is two languages only one of which bears some resemblance with Prolog.
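A tiny sketch of that correspondence, with invented example data:

```prolog
% Unit clauses ("facts") are the analogue of rows in a SQL table:
employee(alice, engineering).
employee(bob,   sales).

% A rule (definite clause), conditionally true -- no counterpart
% inside a SQL database itself:
senior_track(P) :- employee(P, engineering).

% A query (Horn goal), refuted by the theorem prover while binding P:
% ?- senior_track(P).
% P = alice.
% The rough SQL analogue lives outside the data, as a separate query:
%   SELECT name FROM employees WHERE dept = 'engineering';
```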
From the point of view of SQL also, Prolog is probably quite alien: you can write recursive loops with your database rows O.o They're just two very different languages any way you look at them.
SQL and Prolog have a large area of functional overlap, as they are both based on predicate logic. However, SQL is more naturally aligned with the database-centric worldview of many business systems and offered a more natural way (for business users) to ask business questions. So when SQL took off, it was adopted for many solutions where Prolog might previously have been used.
I won't pretend this is a universal truth, but often code that is quicker and easier to design is quicker and easier to read and understand. A language with less necessary boilerplate can take less reading to understand. Maybe you have a clever IDE that folds away boilerplate code for your chosen language, so that may not always apply.
Lisp originated as an intermediate code. It's basically an s-expression encoding of an AST. There was at one time a frontend planned, but people started writing all sorts of code directly in Lisp.
If your developers don't understand any layers of the stack below their own input, that's a bad thing. Lisp's interpretation or compilation is tied more or less directly to the input syntax. It's a language made powerful by a handful of first-class concepts being allowed to interact with few artificial limits. Once your developers understand those concepts, it's pretty easy to read, except perhaps for deeply nested constructs.
> Ok, back to JavaScript popularity. We know certain Ajax libraries are popular. Is JavaScript popular? It’s hard to say. Some Ajax developers profess (and demonstrate) love for it. Yet many curse it, including me. I still think of it as a quickie love-child of C and Self. Dr. Johnson‘s words come to mind: “the part that is good is not original, and the part that is original is not good.”
Io was closer to Self as a language than JS is, since JS combined prototypal inheritance with Java-like syntax, making things confusing. But Self was an evolution of Smalltalk (replace classes with prototypes), so live programming using a GUI running inside the Self VM was part of the deal, instead of writing to files in your editor of choice.
> Also a language is neither fast nor slow, the implementation of said language is.
I don't like how much this simplification gets used. Of course there is more than one way to implement a given language. But there is also more than one way to design a language, and that can have big effects on its implementation(s), and that's usually what people mean when they speak of "fast languages."
Not to mention that many languages only have one implementation, or else many that have all converged to the same performance characteristics.
And on top of that, it's rarely a question of how fast or slow an entire implementation is, either! Individual codebases or problem spaces need different things from their languages.
It’s not a simplification, it’s observing a basic category error. It’s confusing the procedure and the process. Consider for example how the same procedure executing in the same runtime on different processor families will result in highly divergent executing processes. And practically speaking even then they may not even be equivalent up to the explicitly or implicitly specified behavior. Nevertheless ideally when we talk about a language we are talking about a set of semantics that is independent of whatever mechanism executes the processes specified by procedures written in that language.
Some semantics are inherently slow though. If you are allowed to add fields to a class at runtime, you have to box all your objects. If you can eval in local scopes, you need to check to make sure you're running the right code. There are many types of semantic decisions with inevitable performance impact.
I would not expect such features to be added under the assumption that they are rarely used.
Common Lisp for example has the assumption in the standard, that the object system is fully dynamic and supports a wide variety of live changes (adding/removing/changing of methods, classes, slots, superclasses, ...).
"Rarely" meaning compared to other elementary operations, such as function or method calls. So it makes sense to make function or method calls faster at the expense of making restructuring slower. That's basically the same principle as with Smalltalk and Lisp structures/objects being vectors as opposed to hash tables (like in Ruby or Python), even if it means that adding a field necessitates changing the layout of existing instances. This way you'd also need to change the layout of other types that use the unboxed representation of the type being altered, but the basic principle is the same, and you already have the mechanism for updating your existing heap anyway. So it should be possible, and the mere possibility of altering a type shouldn't necessitate boxing everything. "If you are allowed to add fields to a class at runtime, you have to box all your objects" should be no more correct than "if you are allowed to add fields to a class at runtime, all your object instances have to be hash tables".
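Common Lisp's CLOS is a concrete case of paying for the rare operation instead of taxing every access; a minimal sketch:

```lisp
(defclass point ()
  ((x :initarg :x :accessor x)))

(defvar *p* (make-instance 'point :x 1))

;; Redefining the class changes the layout of existing instances; *P*
;; is updated lazily (via UPDATE-INSTANCE-FOR-REDEFINED-CLASS) on its
;; next access. Slot access stays a vector-style lookup throughout --
;; no boxing, no per-instance hash tables.
(defclass point ()
  ((x :initarg :x :accessor x)
   (y :initarg :y :accessor y :initform 0)))

(y *p*)   ; => 0, the INITFORM for the newly added slot
```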
> We want a language that’s homoiconic, with true macros like Lisp, but with obvious, familiar mathematical notation like Matlab.
In this case, the makers of the language clearly give a nod to Lisp as an inspiration, but even were that not the case, it's basically nonsense to say that any language "has nothing to do with lisp". Lisp has been a driving force in PL for a generation, and its influences are everywhere.
>One would say the target audience of Julia - matlab converts, would barely use that feature (macros) directly.
The 'target audience of Julia' is everyone, not just 'Matlab converts'.
It is most focused on high performance numerics, but is a true general purpose programming language. I expect its use to grow explosively as more and more people recognize its elegance and power.
My perspective is that of a senior engineer with decades of experience with dozens of programming languages.
"The strongest legacy of Lisp in the Julia language is its metaprogramming support. Like Lisp, Julia represents its own code as a data structure of the language itself."
On top of that, Julia's non-lispy surface syntax has a perfectly valid s-expression representation that can be displayed with `Meta.show_sexpr`, and for which you could hypothetically enable a repl in only a few lines (e.g. [1])
I know hardly anything about either language, but 'Julia' does sound suspiciously like 'Dylan', and Dylan https://en.wikipedia.org/wiki/Dylan_(programming_language) is usually described as a successor to Common Lisp in heavy disguise.
One look at that parser code makes me, a seasoned web and desktop app developer with cocky high minded "architectural" thinking, shrivel with insignificance and embarrassment.
Mad respect to the process and people that makes this possible.
As someone who focuses almost exclusively on backend development, I look at complex ui code similarly. Mad respect to the people who have the vision and skill to design and write well incredible UIs.
Julia does support REPL style development, and the little I have used it, it was effective for deep learning, text processing, and SPARQL/linked data clients.
My biggest complaint is static compilation. There are things you can do with PackageCompiler, but they don't work for some of the code I write. This makes deployment of artefacts harder than it needs to be.
Most of the infrastructure is there in the code, it just needs to be tied together, in such a way that juliac file.jl -o file.x results in a deployable (preferably static) binary.
To clarify (Correct me if this isn't what you mean), Julia compiles not only program code, but dependency code each time you run a program. This can make simple programs and scripts take a long time to start. A common response is "Use the REPL", where it compiles the dependencies once at load.
Julia has a JIT compiler but it doesn't store the compiled code between sessions. This can add a delay on the first time you call a function. By keeping the REPL open you can avoid having these delays multiple times.
I've been writing "scripts" with it that extract multi GB data sets using various APIs, perform relevant processing, and spit out actionable visualizations and data.
The APIs are (these days) mostly web/REST-ish, though I'm starting to work on writing some wire-protocol stuff as well.
It (Julia) is an amazingly powerful language today. Excellent for data scientists and engineers, for deep and complex analytics. I am using it in a professional (i.e., paid) setting, alongside Python, C++, and other things.
Of course they are; what else do you call a language that does not permit a fast implementation, or a language that exposes low-level primitives specifically for performance work?
> What happened to the programmer’s time is more important than the machine’s?
I've never thought this way, primarily because I sympathize with and love computers, and don't like to watch them suffer. But there's more than that even, because there's more to software than just finding the most expressive language.
Someone has to package, deploy, verify, monitor, scale, debug and optimize the software and do all of these things with as little money and as little CPU/RAM as possible. It's often a matter of worse being better when it comes to these types of concerns. Simplicity is exactly what is needed when performance starts to matter, and "clever" men often seem to be conspicuously absent when it comes time to take out the trash.
Sure, it's fun to gather up your little clique of ivory-tower CS nerds and get into a DSL measuring contest, but I've never seen the value in it myself, other than as defining one extreme end of the programming Overton window.