I think the sprouts trauma is the result of picking the wrong cooking method.
I was so surprised when I tried baked sprouts for the first time (use a really hot cast iron skillet for even better results) that I started to believe that every vegetable can be delicious as long as you bake it!
There are many delicious and easy ways to eat vegetables! Three of my favorites:
- Belgian stoemp: basically mashed potatoes with other mashed vegetables. Cook everything together (with herbs if you can), mash, add fat and salt and you’re done!
- German Eintopf: put vegetables, beans and sausages in a pot (I use tofu or tempeh ones). Cover, cook slowly. It’s almost a salty tagine from the north.
- Rescue bland vegetables (sprouts or anything) into a fantastic soup in 5 minutes: add a bit of water, coconut cream (or cow’s cream / silken tofu…), and spices. A bit of tahini and red lentils if you have them. Blend and adjust the water.
"I started to believe that every vegetable can be delicious as long as you bake it!"
Baking is good, but I also came to another conclusion: vegetables that are disgusting when cooked to a slimy paste can be delicious eaten raw in a salad!
Raw broccoli in a salad will get you tossed out of the country I was born in :-)
Now the true surprise with broccoli for me was learning that you can dice the stems and butter-roast them, then use them in pretty much anything from rice dishes to pies. Amazing.
I get the value of a personal CRM and the potential power of having one locally managed by LLMs, and I'd love to see such a solution because, to your point, outreach is just a small part of what you can do with a personal CRM. But the way you describe and deliver this project is very confusing to me: it's a CRM, but also Cursor for your Mac (what does that even mean?). I already run Cursor on my Mac. It also has a file tree view, to use it as a better macOS Finder, I guess?
I think that a much cleaner messaging on what this tool is for would help.
Also a question about the implementation, why DuckDB for a CRM?
Something like SQLite feels like a much more natural fit for a CRM, where you primarily create, update and maybe delete records, and you really care about the integrity of the data model.
From a quick look at the data model, everything seems to be a VARCHAR. If that's the case, why not just store everything in the file system instead? You already do that with the md files and whatever gets extracted from the SaaS tools.
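To make the integrity point concrete, here's a minimal sketch (the tables and columns are hypothetical, not the project's actual schema) of the kind of constraints SQLite enforces for free, which an all-VARCHAR model forgoes:

```python
import sqlite3

# Hypothetical minimal personal-CRM schema; the point is the constraints,
# not the exact columns.
conn = sqlite3.connect(":memory:")
conn.execute("PRAGMA foreign_keys = ON")
conn.execute("""
    CREATE TABLE contact (
        id    INTEGER PRIMARY KEY,
        name  TEXT NOT NULL,
        email TEXT UNIQUE
    )
""")
conn.execute("""
    CREATE TABLE interaction (
        id          INTEGER PRIMARY KEY,
        contact_id  INTEGER NOT NULL REFERENCES contact(id),
        happened_at TEXT NOT NULL,  -- ISO 8601
        note        TEXT
    )
""")
conn.execute("INSERT INTO contact (name, email) VALUES (?, ?)",
             ("Ada", "ada@example.com"))

# An interaction pointing at a non-existent contact is rejected outright.
try:
    conn.execute(
        "INSERT INTO interaction (contact_id, happened_at) VALUES (?, ?)",
        (999, "2024-01-01"),
    )
    rejected = False
except sqlite3.IntegrityError:
    rejected = True
print(rejected)  # True
```

With files on disk (or all-VARCHAR columns), nothing stops a dangling reference like that from being written; you'd have to re-implement these checks in application code.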
I'm definitely biased here, but the OpenClaw hype is making people disregard the economics of it all.
Building Auto-CRM.com, my primary concern was building a system that runs well without costing $200 per month to keep up, while of course also maintaining security. I assume the good folks at Folk, Pipedrive, etc. had similar priorities.
A lot of good work is being done within the OpenClaw ecosystem regarding RAG and memory, but a specialised orchestration process still tends to make for a more reliable system.
Building TUIs might be easy now, but building a good user experience in a TUI feels harder to me than it has ever been. The modern libraries make a lot of things easy, but we are currently pushing terminals far beyond what they were designed for.
Claude Code et al. are good examples of that. Diffs, user approval flows, non-linear flows in general and a ton of buffered text are all elements that we know really well how to handle in web interfaces but that are challenging for the terminal.
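To illustrate the diff point: producing the raw text of a diff is trivial (Python's stdlib does it in a few lines), but everything around it in a TUI — colors, folding, side-by-side layout, per-hunk approve/reject — is where terminals start to strain. A sketch of the easy part:

```python
import difflib

old = ["def greet(name):", "    print('hi', name)"]
new = ["def greet(name):", "    print('hello', name)"]

# Generating a unified diff is the easy part; rendering it interactively
# (scrollback, folding, approval flows) is what pushes terminals hard.
diff = list(difflib.unified_diff(old, new,
                                 fromfile="a.py", tofile="b.py",
                                 lineterm=""))
for line in diff:
    print(line)
```

A web UI gets layout, scrolling and event handling from the browser for free; a TUI has to rebuild all of that on top of a character grid.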
It's important for a book treating an emerging field (data engineering for LLMs) to mention emerging categories related to it, such as storage formats purpose-built for the full ML lifecycle.
Lance[1] (the format, not just LanceDB) is a great example, where you have columnar storage optimized for both analytical operations and vector workloads together with built-in versioning for dataset iteration.
Plus (very importantly) random access, which matters for things like sampling and efficient filtering during curation, but also for working with multimodal data, e.g. videos.
Lance is not alone, vortex[2] is another one, nimble[3] from Meta yet another one and I might be missing a few more.
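A toy stdlib sketch of why random access matters for sampling: with a per-record offset index you can fetch arbitrary rows by seeking instead of scanning the whole file. Formats like Lance bake such an index into the file itself; this just mimics the idea with length-prefixed records and is not any real API:

```python
import io
import json
import random

# Toy "format": length-prefixed JSON records plus an in-memory offset index.
buf = io.BytesIO()
offsets = []
for i in range(1000):
    rec = json.dumps({"id": i, "label": i % 3}).encode()
    offsets.append(buf.tell())          # remember where this record starts
    buf.write(len(rec).to_bytes(4, "big"))
    buf.write(rec)

def take(indices):
    """Fetch arbitrary rows by seeking; no sequential scan needed."""
    out = []
    for i in indices:
        buf.seek(offsets[i])
        n = int.from_bytes(buf.read(4), "big")
        out.append(json.loads(buf.read(n)))
    return out

random.seed(0)
sample = take(random.sample(range(1000), 5))
print([r["id"] for r in sample])
```

With plain row-oriented files (JSONL, CSV) or scan-oriented columnar files, grabbing a random 1% sample of a large dataset means reading far more than 1% of the bytes; with an offset index it's proportional to the sample size.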
AI can be an amazing productivity multiplier for people who know what they're doing.
This result reminded me of the C compiler case that Anthropic posted recently. Sure, agents wrote the code for hours, but there was a human there giving them directions, scoping the problem, finding the test suites needed for the agentic loops to actually work, etc. In general, making sure the output actually works and that it's a story worth sharing with others.
The "AI replaces humans in X" narrative is primarily a tool for driving attention and funding. It works great for creating impressions and building brand value but also does a disservice to the actual researchers, engineers and humans in general, who do the hard work of problem formulation, validation and at the end, solving the problem using another tool in their toolbox.
>AI can be an amazing productivity multiplier for people who know what they're doing.
>[...]
>The "AI replaces humans in X" narrative is primarily a tool for driving attention and funding.
You're sort of acting like it's all or nothing. What about the humans that used to be that "force multiplier" on a team with the person guiding the research?
If a piece of software used to require a team of ten people, and instead it's built with one engineer overseeing an AI, that's still 90% job loss.
For a more current example: do you think all the displaced Uber/Lyft drivers aren't going to think "AI took my job" just because there's a team of people in a building somewhere handling the occasional Waymo low confidence intervention, as opposed to being 100% autonomous?
Where I work, we're now building things that were completely out of reach before. The 90% job loss prediction would only hold true if we were near the ceiling of what software can do, but we're probably very, very far from it.
A website that cost hundreds of thousands of dollars in 2000 could be replaced by a WordPress blog built in an afternoon by a teenager in 2015. Did that kill web development? No, it just expanded what was worth building.
> If a piece of software used to require a team of ten people, and instead it's built with one engineer overseeing an AI, that's still 90% job loss.
Yes, but this assumes a finite amount of software that people and businesses need and want. Will AI be the first productivity increase where humanity says ‘now we have enough’? I’m skeptical.
> Yes, but this assumes a finite amount of software that people and businesses need and want.
A lot of software exists because humans are needy and kinda incompetent, but we needed to enable them to process data at scale. Like, would you build SAP as it is today for LLMs?
This is all inevitable with the trajectory of technology, and has been apparent for a long time. The issue isn't AI, it's that our leaders haven't bothered to think or care about what happens to us when our labor loses value en masse due to such advances.
Maybe it requires fundamentally changing our economic systems? Who knows what the solution is, but the problem is most definitely rooted in a lack of initiative by our representatives and an economic system that doesn't accommodate us when shit inevitably hits the fan in labor markets.
There's 90% job loss only if you assume this is a zero-sum situation, where humans and agents compete for a fixed amount of work.
I'm curious why you think I'm acting like it's all or nothing. What I was trying to communicate is the exact opposite: that it's not all or nothing. Maybe it's the way I articulate things; I'm genuinely interested in what makes it sound like this.
Fully agree with your OG comment, and I didn’t get the same read as the person above at all.
This is a bizarre time to be living in. On one hand, these tools are capable of doing more and more of the tasks any knowledge worker handles today, especially when used by a person experienced in field X.
On the other, it feels like something is about to give. All the Super Bowl ads, AI in what feels like every single piece of copy coming out these days, AI CEOs hopping from one podcast to another warning about the upcoming career apocalypse… I’m not fully buying it.
The optimistic case is that instead of a team of 10 people working on one project, you could have those 10 people using AI assistants to work on 10 independent projects.
That, of course, assumes that there are 9 other projects that are both known (or knowable) and worth doing. And in the case of Uber/Lyft drivers, there's a skillset mismatch between the "deprecated" jobs and their replacements.
Well those Uber drivers are usually pretty quick to note that Uber is not their job, just a side hustle. It's too bad I won't know what they think by then since we won't be interacting any more.
> The "AI replaces humans in X" narrative is primarily a tool for driving attention and funding.
It's also a legitimate concern. We happen to be in a place where humans are needed for that "last critical 10%," or the first critical 10% of problem formulation, and so humans are still crucial to the overall system, at least for most complex tasks.
But there's no logical reason that needs to be the case. Once it's not, humans will be replaced.
The reason there is a marketing opportunity is because, to your point, there is a legitimate concern. Marketing builds and amplifies the concern to create awareness.
When systems turn into something trivial to manage with the new tooling, humans build more complex ones or add more layers on top of the existing systems.
The logical reason is that humans are exceptionally good at operating at the edge of what the technology of the time can do. We will find entire classes of tech problems which AI can't solve on its own. You have people today with job descriptions that even 15 years ago would have been unimaginable, much less predictable.
To think that whatever the AI is capable of solving is (and forever will be) the frontier of all problems is deeply delusional. AI got good at generating code, but it still can't even do a fraction of what the human brain can do.
> To think that whatever the AI is capable of solving is (and forever will be) the frontier of all problems is deeply delusional. AI got good at generating code, but it still can't even do a fraction of what the human brain can do.
AGI means fully general, meaning everything the human brain can do and more. I agree that currently it still feels far (at least it may be far), but there is no reason to think there's some magic human ingredient that will keep us perpetually in the loop. I would say that is delusional.
We used to think there was human-specific magic in chess, in poker, in Go, in code, and in writing. All those have fallen, the latter two albeit only in part but even that part was once thought to be the exclusive domain of humans.
When I refer to AI, I mean the "AI" that has materialized thus far - LLMs and their derivatives. AGI in the sense that you mean is science fiction, no less than it was 50 years ago. It might happen, it might not, LLMs are in all likelihood not a pathway to get there.
I'm not sure you can call something an optimizing C compiler if it doesn't optimize or enforce C semantics (well, it compiles C but also a lot of things that aren't syntactically valid C). It seemed to generate a lot of code (wow!) that wasn't well-integrated and didn't do what it promised to, and the human didn't have the requisite expertise to understand that. I'm not a theoretical physicist but I will hold to my skepticism here, for similar reasons.
Sure, I won't argue with this, although it did manage to deliver the marketing value they were looking for; in the end, their goal was not to replace gcc but to make people talk about AI and Anthropic.
What I said in my original comment is that AI delivers when it's used by experts. In this case there was someone who was definitely not a C compiler expert; what would happen if a real expert did this?
>It's amazing it can do it at all... but the resulting compiler is not actually good enough to be worth using.
No one has made that assertion; however, the fact that it can create a functioning C compiler with minimal oversight is the impressive part, and it shows a path to autonomous GenAI use in software development.
So, I just skimmed the discussion thread, but I am not seeing how this shows that CCC is not impressive. Is the point you're making that the person who opened the issue is not impressive?
We will be producing them even less. I fear for future graduates, hell, even for school children, who are now uncontrollably using ChatGPT for their homework. Next-level brainrot.
Right. If it hadn't been Nicholas Carlini driving Claude, with his decades of experience, there wouldn't be a Claude c compiler. It still required his expertise and knowledge for it to get there.
Many times I read something on HN and come back to find it after a few days or weeks and using the current keyword based search has been consistently giving me a hard time, so I played around with LLMs as an alternative way of searching and finding information on HN.
That's a fairly specialized chip and requires a bunch of custom software. The only way it can run apps unmodified is if the math libraries have been customized for this chip. If the performance is there, people will buy it.
For a minute I thought maybe it was RISC-V with a big vector unit, but it's way different from that.
The quote at the end of the posted Reuters article (not the one you’re responding to) says that it doesn’t require extensive code modifications. So is the “custom software” standard for the target customers of NextSilicon?
Companies often downplay the amount of software modifications necessary to benefit from their hardware platform's strengths because quite often, platforms that cannot run software out of the box lose out compared to those that can.
By the time special chips were completed and mature, the developers of "mainstream" CPUs had typically caught up speed-wise, which is why we no longer see any "transputers" (e.g. Inmos T800), LISP machines (Symbolics XL1200, TI Explorer II), or other odd architectures like the Connection Machine CM-2 around anymore.
For example, when Richard Feynman was hired to work on the Connection Machine, he had to write a parallel version of BASIC first before he could write any programs for the computer they were selling:
https://longnow.org/ideas/richard-feynman-and-the-connection...
It's a bit more complicated: you need to use their compiler (an LLVM fork with clang+Fortran). This in itself is not that special, as most accelerator toolchains (icc, nvcc, aoc) already require this.
Modifications are likely at the level of: does this clang support my required C++ version? Actual work is only required when you want to bring something else, like Rust (AFAIK not supported).
However, to analyze the efficiency of the code and how it is interpreted by the card you need their special toolchain. Debugging also becomes less convenient.
>> says that it doesn’t require extensive code modifications
If they provide a compiler port and update things like BLAS to support their hardware then higher level applications should not require much/any code modification.
Yeah, it's an unfortunate overlap.
The Mill-Core, in NextSilicon terminology, is the software-defined "configuration" of the chip, so to speak: it represents the swaths of the application deemed worthy of acceleration, as expressed on the custom HW.
So really, the Mill-Core is in a way the expression of the customer's code.
A framework for optimizing LLM agents, including but not limited to RL. You can even do fine tuning, they have an example with unsloth in there.
The design of this is pretty nice: it's based on very simple instrumentation you add to your agent, and the rest happens in parallel while your workload runs, which is awesome.
You can probably also do what DSPy does for optimizing prompts, but without having to rewrite your code against the DSPy API, which can be a big win.
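I haven't checked the framework's actual API, but the "simple instrumentation" idea is roughly a wrapper that records each agent step as a trace an optimizer can consume offline. A hypothetical sketch of the general shape (none of these names are the framework's real API):

```python
import functools
import time

traces = []  # collected transcripts; an optimizer can consume these offline

def instrument(fn):
    """Hypothetical sketch: record the inputs, output and latency of each
    agent step without changing the agent's own code structure."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = fn(*args, **kwargs)
        traces.append({
            "step": fn.__name__,
            "args": args,
            "result": result,
            "latency_s": time.time() - start,
        })
        return result
    return wrapper

@instrument
def answer(question):
    # stand-in for an LLM call
    return f"echo: {question}"

answer("what is 2+2?")
print(traces[0]["step"], traces[0]["result"])
```

The appeal versus a DSPy-style rewrite is that the agent keeps its own control flow; only a thin decorator is added, and the optimization loop runs out-of-band on the collected traces.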
We are very excited about this integration with HF datasets. Datasets have huge potential to deliver some much-needed developer experience when it comes to working with data and LLMs/agentic architectures. Happy to answer any questions, and also to hear what the community thinks.