Hacker News | cstrahan's comments

I think they are likely referring to doubly linked lists in Rust, specifically.

See, for example:

https://rust-unofficial.github.io/too-many-lists/

https://news.ycombinator.com/item?id=22390662


https://bablr.org/

> The next-gen LR parser framework for creating elegant and efficient language tools

> BABLR is a new kind of thing that does not quite fit into any category of things that has existed before it. In purpose it is made to be an instrument of code literacy -- a unified toolchain for software developers that supports a new generation of richly visual interfaces for coding. In form BABLR is a collection of scripts and virtual machines written in plain Javascript that run in almost any modern web browser. BABLR is also a community and an ecosystem, including a small but rapidly growing collection of ready-to-use parsers for popular languages.


At first blush, everything about this sounds like overly ambitious vapourware. Is there any reason to think this is going to deliver? Who's involved, what's already shipped, etc.?

I particularly loved this from their roadmap:

> Completed

> Shift operation

> Enables LR parsing of expressions like 2+2

Being able to parse 2 + 2 is definitely good!

And their thoughts on testing:

> How our project reaches production stability is a process that often surprises people. We don't write a lot of tests for example, and we often don't do much testing before we ship releases. Instead we test exhaustively after we ship releases, which is the only way we know of knowing for sure that the product we shipped does what we think it does. [...] We also don't (usually) practice TDD. If you look at the number of tests we have, it likely won't seem like it's anywhere near enough to keep a project of this size stable! The secret sauce here is that our key invariants aren't written in our test files, they're baked into the core of the implementation. Every time you use the code, you're essentially testing it. To gain confidence in our core, we simply try to use it to do a lot of real work.

Man, why did I not think of that? I could have got out of writing so many tests if I'd just baked the invariants into the core of the implementation!


In this case the tool is meant to parse programming languages, so once I write some parser grammars every valid code file in existence is a test case. Seen that way I have more test cases than I know what to do with.

We've come a ways from 2 + 2. This week my goal is to feed our own whole codebase through the JS parser, and I should be able to. I managed to parse a few hundred lines of real JS last week before running into Automatic Semicolon Insertion trouble that I needed to tinker with the core to fix.

While I get that our low profile smacks of vapor, we actually have working packages published: bablr and @bablr/cli. I'd consider them to be beta quality right now, having gone through many previous releases that I'd only consider alpha-quality, and even more releases before that.


It's not too hard to verify my central claim here, which is that we're giving away what they charge money for. Their serialization format is secret and proprietary; ours, CSTML, is open: https://docs.bablr.org/guides/cstml. Their free product makes you re-parse the entire project with every code change you make. Ours is built with copy-on-write immutable data structures, so you can always build new things without losing the old ones. With ours you can compose fragments of trees together with new code into new trees, like playing with Lego bricks.

> Any self respecting engineer should recognize that these tools and models only serve to lower the value of your labor.

Depends on what the aim of your labor is. Is it typing on a keyboard, memorizing (or looking up) whether that function was verb_noun() or noun_verb(), etc? Then, yeah, these tools will lower your value. If your aim is to get things done, and generate value, then no, I don't think these tools will lower your value.

This isn't all that different from CNC machining. A CNC machinist can generate a whole lot more value than someone manually jogging X/Y/Z axes on an old manual mill. If you absolutely love spinning handwheels, then it sucks to be you. CNC definitely didn't lower the value of my brother's labor -- there's no way he'd be able to manually machine enough of his product (https://www.trtvault.com/) to support himself and his family.

> Using these things will fry your brain's ability to think through hard solutions.

CNC hasn't made machinists forget about basic principles, like when to use conventional vs climb milling, speeds and feeds, or whatever. Same thing with AI. Same thing with induction cooktops. Same thing with any tool. Lazy, incompetent people will do lazy, incompetent things with whatever they are given. Yes, an idiot with a power tool is dangerous, as that tool magnifies and accelerates the messes they were already destined to make. But that doesn't make power tools intrinsically bad.

> Do you want your competency to be correlated 1:1 to the quality and quantity of tokens you can afford (or be loaned!!)?

We are already dependent on electricity. If the power goes out, we work around that as best as we can. If you can't run your power tool, but you absolutely need to make progress on whatever it is you're working on, then you pick up a hand tool. If you're using AI and it stops working for whatever reason, you simply continue without it.

I really dislike this anti-AI rhetoric. Not because I want to advocate for AI, but because it distracts from the real issue: if your work is crap, that's on you. Blaming a category of tool as inherently bad (with guaranteed bad results) suggests that there are tools that are inherently good (with guaranteed good results). No. That's absolutely incorrect. It is people who fall on the spectrum of mediocrity-to-greatness, and the tools merely help or hinder them. If someone uses AI and generates a bunch of slop, the focus should be on that person's ineptitude and/or poor judgement.

We'd all be a lot better off if we held each other to higher standards, rather than complaining about tools as a way to signal superiority.


Your brother's livelihood is not safe from AI, nor is any other livelihood. A small slice of lucky, smart, well-placed, protected individuals will benefit from AI, and I presume many unlucky people with substantial disabilities or living in poverty will benefit as well. Technology seems to keep improving outcomes at the very top and the very bottom while sacrificing the biggest group in the middle.

Many HN software engineers benefited immensely from Big Tech over the past 15 years -- they were part of that lucky, privileged group winning $300k+ salaries plus equity for a long time. AI has completely disrupted this space and drastically decreased the value of their work, and it largely did so by stealing open source code for training data. These software engineers are right to feel upset and threatened and to oppose these AI tools, since the tools are their replacement. I believe that is why you see so much AI hate on HN.


I'm not trying to signal superiority; I'm legitimately worried about the value of my livelihood and the skills I'm passionate about. What if McDonald's went around telling chefs that they're cooking wrong, that there's no reason to cook food in a traditional manner when you can increase profit and speed with their methods?

It would be insulting; they'd get screamed out of the kitchen. Now imagine they're telling those chefs they're going to enforce those methods on them whether they like it or not.


Vertical CNC mills and CNC lathes are, obviously, different machines with different use cases. But if you compare within the categories, the designs are almost all conceptually the same.

So, what about outside of some set of categories? Well, generally, no such thing exists: new ideas are extremely rare.

Anyone who truly enjoys entering code character for character, refusing to use refactoring tools (e.g. rename symbol), and/or not using AI assistance should feel free to do so.

I, on the other hand, want to concern myself with the end product, which is a matter of knowing what to build and how to build it. There’s nothing about AI assistance that entails that one isn’t in the driver’s seat wrt algorithm design/choices, database schema design, using SIMD where possible, understanding and implementing protocols (whether HTTP or CMSIS-DAP for debugging microcontrollers over USB JTAG probe), etc, etc.

AI helps me write exactly what I would write without it, but in a fraction of the time. Of course, when the rare novel thing comes up, I either need to coach the LLM, or step in and write that part myself.

But, as a Staff Engineer, this is no different from what I already do with my human peers: I describe what needs doing and how it should be done, delegate that work to N other less senior people, provide coaching when something doesn’t meet my expectations, and personally solve the problems that no one else would have a chance of solving even if they spent the next year or two solely focused on them.

Could I solve any one of those individual, delegated tasks faster if I did it myself? Absolutely. But could I achieve the same progress, in aggregate, as a legion of less experienced developers working in parallel? No.

LLM usage is like having an army of Juniors. If the result is crap, that’s on the user for their poor management and/or lack of good judgement in assessing the results, much like how it is my failing if a project I lead as a Staff Engineer is a flop.


Sounds like she doth protest too much?


Or you already know all of the details, and you don’t want typing to be the bottleneck to getting things done.


https://vimhelp.org/motion.txt.html#%7B

    { [count] paragraphs backward.  exclusive motion.
    } [count] paragraphs forward.  exclusive motion.


What does exclusive motion mean here?


Motions can be inclusive or exclusive. It works like the two ways of annotating ranges: the closed interval [0,1] includes its endpoints, while the open interval (0,1) excludes them.

Consider the command `d` (delete) combined with the `"` text objects.

First we have `da"`: it deletes everything between the pair of `"` characters surrounding the cursor, including the quotes themselves. Next, `di"` deletes only the contents of the `"` pair, leaving the quotes.

The object `a"` is inclusive (think 'a quote') and `i"` is exclusive (think 'inside quote'). Combined with the command, the mnemonics spell out "delete a quote" and "delete inside quote". For example, with the cursor inside the quotes in `say "hi there"`, `di"` leaves `say ""`, while `da"` removes the quotes as well.

https://vimhelp.org/motion.txt.html#exclusive


Oh wow, great info, thanks. I knew about the general concept from high school math (where it's called open and closed intervals) and also from Python ranges, but I didn't know about it in connection with Vim. Got it now.


Also, I love mnemonics. They make many topics easier to remember.

Related: Sanskrit has tons of them.

https://duckduckgo.com/?t=fpas&q=sanskrit+mnemonics&ia=web


Do you take issue with companies stating that they (the company) built something, instead of stating that their employees built something? Should the architects and senior developers disclaim any credit, because the majority of tickets were completed by junior and mid-level developers?

Do you take issue with a CNC machinist stating that they made something, rather than stating that they did the CAD and CAM work but that it was the CNC machine that made the part?

Non-zero delegation doesn’t mean that the person(s) doing the delegating have put zero effort into making something, so I don’t think that delegation makes it dishonest to say that you made something. But perhaps you disagree. Or, maybe you think the use of AI means that the person using AI isn’t putting any constructive effort into what was made — but then I’d say that you’re likely way overestimating the ability of LLMs.


Could we please avoid the strawmen? Nowhere have I claimed that they didn't put work into this. Nowhere did I say that delegation is bad. I'd like to encourage a discussion, but then please counter the opinion that I gave, not a made-up one that I neither stated nor actually hold.


> You mean you told Claude a bunch of details and it built it for you?

> Nowhere have I claimed that they didn't put work into this.

There's some mental gymnastics.

> please counter the opinion that I gave

The reply you're responding to did exactly that, and you just gave more snarky responses.


We all agree that crafting the right prompts (or whatever we call the CLAUDE.md instructions) is a lot of work, don't we? Of course they put work into this; it's a file of substantial size. And then Claude used it to build the thing. Where is the contradiction? I don't see the mental gymnastics, sorry.


Let me rephrase GP into (I hope) a more useful analogy -- actually, here's the whole analogous exchange:

“A rectangle is an equal-sided rectangle (i.e. “square”) though. That’s what the R stands for.”

“No? Why would you think a rectangle is a square?”

Just as not all rectangles are squares (squares are a specific subset of rectangles), not all datagram protocols are UDP (UDP is just one particular datagram protocol).


The obvious answer is "I didn't know datagrams were a superset of UDP". I don't really understand how "how do you not know this" is a reasonable or useful question to ask.


You read it that way because that’s the sensible way to read it. Everyone suggesting you missed the plot is in turn making a rather large logical leap.


What whoknowsidont is trying to say (IIUC): the models aren't trained on any particular MCP server's tools. Yes, the models "know" what MCP is. But they don't necessarily have the details of your MCP server baked in -- if they did, there would be no point in MCP servers serving prompts / tool descriptions.

Well, arguably descriptions could be useful for interfaces that let you interactively test MCP tools, but that's certainly not the main reason. The main reason is that the model needs to be told what the MCP server provides and how to use it (where "how to use it" in this context means "what is the schema and intent behind the specific inputs/outputs" -- tool calling itself is baked into the training, and the OpenAI docs give a good example: https://platform.openai.com/docs/guides/function-calling).
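For illustration, here's the shape of a tool description an MCP server serves to the client (the tool name and fields inside the schema are hypothetical; per the MCP spec, a tool carries a `name`, a `description`, and a JSON Schema `inputSchema` that the model relies on to call it correctly):

```json
{
  "name": "get_weather",
  "description": "Look up the current weather for a city.",
  "inputSchema": {
    "type": "object",
    "properties": {
      "city": {
        "type": "string",
        "description": "City name, e.g. \"Oslo\""
      }
    },
    "required": ["city"]
  }
}
```

None of this is in the model's weights ahead of time, which is why the server has to send it at runtime.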

