It is interesting that the video demo is generating an .stl model.
I've run a lot of tests of LLMs generating OpenSCAD code (as I have recently launched https://modelrift.com text-to-CAD AI editor) and Gemini 3 family LLMs are actually giving the best price-to-performance ratio now. But they are very, VERY far from being able to spit out a complex OpenSCAD model in one shot. So, I had to implement a full-fledged "screenshot-vibe-coding" workflow where you draw arrows on a 3D model snapshot to explain to the LLM what is wrong with the geometry. Without a human in the loop, all top-tier LLMs hallucinate at debugging 3D geometry in agentic mode - and fail spectacularly.
Hey, my 9-year-old son uses ModelRift for creating things for his 3D printer, it's great! Product feedback:
1. You should probably ask me to pay now, I feel like I've used it enough.
2. You need a main dashboard page with a history of sessions. He thought he lost a file, and I had to dig through the billing history to find a UUID I thought was it and reconstruct the URL. I would say naming sessions is important, and it could be done with a small LLM after the user's initial prompt.
3. I don't think I like the default 3D model showing once I have already done something; blank would be better.
We download the .stl and import it into Bambu. Works pretty well. A direct push would be nice, but not necessary.
Thank you for this feedback, very valuable!
I am using Bambu as well - perfect for getting things printed without much hassle. Not sure if direct push to the printer is possible though, as their ecosystem looks pretty closed. It would be a perfect use case - design a model in ModelRift on a mobile phone and push it straight to print.
If you want that to get better, you need to produce a 3d model benchmark and popularize it. You can start with a pelican riding a bicycle with working bicycle.
I am building pretty much the same product as OP, and have a pretty good harness to test LLMs. In fact I have run a ton of tests already. It's currently aimed at my own internal use, but making something easier to digest should be a breeze. If you are curious: https://grandpacad.com/evals
Yes, I've been waiting for a real breakthrough with regard to 3D parametric models and I don't think this is it. The proprietary nature of the major players (Creo, Solidworks, NX, etc.) is a major drag. Sure there's STEP, but there's too much design-intent and feature loss there. I don't think OpenSCAD has the critical mass of mindshare or training data at this point, but maybe it's the best chance to force a change.
Yes, I had the same experience. As good as LLMs are now at coding, it seems they are still far from useful in vision-dominated engineering tasks like CAD/design. I guess it is a training data problem. Maybe world models / synthetic data can help here?
If you like OpenSCAD, you should check https://modelrift.com, which is a browser-based OpenSCAD IDE that uses an LLM to generate .scad and instantly shows the resulting .stl in a 3D model viewer. Since AI models are still not good at OpenSCAD, the useful feature of ModelRift is the "screenshot-powered" iteration where a human annotates visual problems and sends the screenshot back to the AI assistant to fix, all via hotkey shortcuts.
It is a (rather messy) node.js codebase. Two rendering engines, including a hacked puppeteer package with stealth mode for better success rate. A big set of proxy providers under the hood. Bootstrapped.
Quite curious. I have been scraping some websites for my girlfriend with node.js/puppeteer and putting the content into an .epub file (she likes to read on her e-reader), and it can be quite annoying to bypass some anti-scraping techniques.
I use ClickHouse to store close to 1TB of API analytics data (which would be 10TB in MongoDB; ClickHouse has insane compression), and it's a wonderful and stable SQL-first alternative to DuckDB - which is a very exciting piece of software, but is indeed too young to embed into boring production. The last time I checked the DuckDB npm package, it used callbacks instead of promises/await.
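As a rough illustration of where that compression ratio comes from: a minimal ClickHouse table sketch for API analytics (table and column names are hypothetical), where columnar storage plus per-column codecs do the heavy lifting.

```sql
-- Hypothetical API-analytics table. MergeTree keeps rows sorted by the
-- ORDER BY key, and column codecs (Delta, T64, ZSTD) compress very well
-- on monotonic timestamps and low-cardinality fields.
CREATE TABLE api_events
(
    ts         DateTime CODEC(Delta, ZSTD),
    endpoint   LowCardinality(String),
    status     UInt16,
    latency_ms UInt32 CODEC(T64, ZSTD)
)
ENGINE = MergeTree
ORDER BY (endpoint, ts);
```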
I can understand how the older callback API for node.js might form a negative impression, but it's really not indicative of the maturity of the core db engine at all. And remember: the vast majority of users use the Python API.
Even better news is that, as of a couple of months ago, there is now this package (which I wrote at MotherDuck and we have open sourced) which provides typed promise wrappers for the DuckDB API: https://www.npmjs.com/package/duckdb-async. This is an independent npm package for now, but was developed in close coordination with the DuckDB core team.
The scrapeninja.net /scrape-js endpoint scrapes G2 company pages without much trouble (with "us"/"eu" proxy geo in their online sandbox: https://scrapeninja.net/scraper-sandbox ).
They also have /scrape, which is much faster because it does not bootstrap a real browser, and it bypasses the Cloudflare TLS fingerprint check: https://pixeljets.com/blog/bypass-cloudflare/
If you are a minimalist and are using VS Code, try https://marketplace.visualstudio.com/items?itemName=humao.re... which is a pure-text syntax to describe API requests and execute them right from the editor window. I now have an api.http text file in every API-first project I am building, and I love it.
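For reference, a minimal api.http file for that extension looks like this (URLs and payload are made up); `###` separates requests, and each one gets a "Send Request" link in the editor:

```http
### Get a user
GET https://api.example.com/users/42
Accept: application/json

### Create a user
POST https://api.example.com/users
Content-Type: application/json

{
  "name": "Ada"
}
```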
I like this one because it's easy to keep API workflows with my projects. The scripting ability here is phenomenal. However, it's only really useful if you code in VS Code.
JetBrains also provides a similar, albeit slightly incompatible, syntax for the same thing.
In the end, I think hurl [0] is nicer, because it's open source and a CLI tool (and VS Code also has a syntax-highlighting plugin for it), making it editor-independent.
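A minimal hurl file, for comparison (hypothetical endpoint and field): the request comes first, followed by the expected status and assertions, and the whole file runs from the CLI with `hurl file.hurl`.

```hurl
# Fetch a resource and assert on the JSON response.
GET https://api.example.com/users/42
Accept: application/json

HTTP 200
[Asserts]
jsonpath "$.name" == "Ada"
```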
Not OP but you can store all your routes in one file or multiple, it's up to you.
Personally, what I do is script out full API workflows in different files. So one file might log in, then POST to add an object, then GET that object off an endpoint, then PATCH that endpoint, then trigger the GET again.
Another workflow might log in, upload an image, get that image, etc. For me, the scripting is what makes this appealing.
But you could set up one file that documents and tests all your endpoints, similar to Postman.
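In the VS Code REST Client extension, that kind of chained workflow is expressed with named requests and response references; a sketch (endpoints and field names here are hypothetical):

```http
### Log in and capture the response under the name "login"
# @name login
POST https://api.example.com/auth/login
Content-Type: application/json

{ "user": "demo", "password": "demo" }

### Create an object, reusing the token from the login response
POST https://api.example.com/objects
Authorization: Bearer {{login.response.body.$.token}}
Content-Type: application/json

{ "title": "test object" }
```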
For me, it is always a pain to write and test cheerio code unless I've done it within the last week. The syntax of cheerio is somewhat similar to jQuery, but this is still node.js, and not a "real" DOM.
I suffered every time I googled for "cheerio quick examples", so I built a cheerio sandbox to quickly test cheerio syntax against various test inputs. It is already helpful for me and saves up to 15-30 minutes on every simple scraper I write, just because I have working selector samples at hand and can quickly test new selectors.