I've seen it too many times in real life: people who make art and want to try to sell it don't understand that once you switch from a hobby to a business, you need to spend at least 50% of your time on the business/marketing/logistics/etc. side of things, and so they fail miserably. The best possible outcome I've seen is that they miraculously hit a nerve on the first try, become famous, and at some point realize they need to pay taxes and do so within a decent timeframe.
So I found this article great at explaining those things, and also how what makes it an actual business is not just "you", but "the part of you that people need to buy". I'll be sharing it a bunch; I'm so happy fnnch wrote this!
Or delegate that stuff and become a "sellout". Just don't get taken advantage of. Oh, and have actual talent. Or don't, doesn't really matter, if the salesperson has some of their own.
I've seen some discussions, and I'd say there are lots of people who are really against the hyped expectations from the AI marketing materials, not necessarily against AI itself. Things people are against that would seem anti-AI, but are not directly about AI itself:
- Being forced to use AI at work
- Being told you need to be 2x, 5x or 10x more efficient now
- Seeing your coworkers fired
- Seeing hiring freezes because businesses think no more devs are needed
- Seeing business people make a mock UI with AI and boast about how easy programming is
- Seeing those people ask you to deliver in impossible timelines
- Frontend people hearing from backend how their job is useless now
- Backend people hearing from ML Engineers how their job is useless now
- etc
When I dig a bit into this "anti-AI" trend, I find it's usually one of those, and not actually opposition to AI itself.
The most credible argument against AI is really the expense involved in querying frontier models. If you want to strengthen the case for AI-assisted coding, try to come up with ways of doing that effectively with a cheap "mini"-class model, or even something that runs locally. "You can spend $20k in tokens and have AI write a full C compiler in a week!" is not a very sensible argument for anything.
It’s hard to say. The compiler is in a state that isn’t useful for anything at all and it’s 100k lines of code for something that could probably be 10k-20k.
But even assuming it was somehow a useful piece of software that you'd want to pay for, the creator set up a test harness using gcc as an oracle. So it has an oracle for every possible input and output. Plus there are thousands of C compilers in its training set.
If you are in a position where you are trying to reverse engineer an exact copy of something that already exists (maybe in another language) and you can’t just fork that thing then maybe a better version of this process could be useful. But that’s a very narrow use case.
The cost argument is a fallacy, because right now, either you have a trained human in the loop, or the model inevitably creates a mess.
But regardless, these services are extremely cheap right now, to the point where every single company involved in generative AI is losing billions. Let's see what happens when prices go up 10x.
Maybe, but I seriously doubt that new DRAM and chip fabs aren't being planned and built right now to push supply and demand closer to equilibrium. NVIDIA and Samsung and whoever else would rather expand their market than wait for a competitor to expand it for them.
How long do you think it takes for those factories to go from nothing to making state-of-the-art chips at a scale that's large enough to influence the supply even by 1%?
There are plenty of them being built, yes. Some of them will even start outputting products soon enough. None of them are going to start outputting products at a scale large enough to matter any time soon. Certainly not before 2030, and a lot can change until then that might make the companies abandon their efforts altogether, or downscale their investments to the point where that due date gets pushed back much further.
That's not even getting into how much easier it is for an already-established player to scale up its supply than for a brand-new competitor to go from zero to one.
If you keep digging, you will also find that there's a small but vocal sock puppet army who will doggedly insist that any claims to productivity gains are in fact just hallucinations by people who must not be talented enough developers to know the difference.
It's exhausting.
There are legitimate and nuanced conversations that we should be having! For example, one entirely legitimate critique is that LLMs do not tell their users when they are using libraries whose maintainers are seeking sponsorship. This is something we could be proactive about fixing in a tangible way. Frankly, I'd be thrilled if agents could present a list of projects we could consider clicking a button to toss a few bucks to. That would be awesome.
But instead, it's just the same tired arguments about how LLMs are only capable of regurgitating what's been scraped and that we're stupid and lazy for trusting them to do anything real.
I loved Heroku, but moved away a couple of years back. Tried 3 major "alternatives" (dokku, Render, Fly.io), and the big clouds, and the only thing that made me happy at the end was Coolify. I do keep Netlify for FE-only projects though.
I’ll be trying out Hyperion now as my default launcher. Here’s the list of launchers I checked quickly:
Kvaesitso (FLOSS) and AIO: Different style of launcher that I don’t want, so out.
Action: Felt weird to use, didn’t find a setting for auto search in app drawer
Smart launcher: The most expensive one at 25€, and no proper app drawer search either.
Lawnchair (FLOSS): Annoying animations, widgets don’t work properly (many widgets require Yx2 sizing that should work as Yx1)
Octopi: Slightly better widgets than lawnchair, but still sizing issues. Without that I’d probably have gone with it first.
Hyperion: This is what I’ll be testing for now. The only Nova feature I’m missing is showing recently installed apps in the drawer, but that’s extremely minor. Apparently support is bad and updates rare, but neither is an issue for me.
I'm trying that one too. I did not like Octopi or KISS. Kvaesitso looks nice; I like the drawer, and the drawer widgets are kind of cool too. The annoying thing that might drive me away is that you cannot set the order of favourites: the order changes depending on what you last used. Changing the order of something has to be very deliberate or it quickly becomes frustrating, and with the favourites it's not obvious that they should behave like a "recently used" list.
I have a funny story I need to tell some day about how I got a 4GB JSON file loaded purely in the browser at some insane speed, by reading the bytes, identifying the "\n" characters, then building a lookup table. It started low stakes but ended up becoming a multi-million (in man-hours) internal project that virtually everyone at the company used. It's the kind of project that, if started "big" from the beginning, I'd bet anything wouldn't have gotten so far.
Edit: I did try JSON.parse() first, which I expected to fail and it did fail BUT it's important that you try anyway.
Yes, but I didn't read the full file, I kept the File reference and read the bytes in pages of 10MB IIRC to find all of the line break offsets. Then used those to slice and only read the relevant parts.
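A minimal sketch of that approach, assuming newline-delimited JSON records (all names here are illustrative, not the original code): keep the File/Blob handle, scan it in fixed-size pages for "\n" byte offsets, then slice and parse only the record you actually need.

```javascript
// Hypothetical reconstruction of the technique described above.
// Works on any Blob (File is a Blob), so it never loads the full file.

const PAGE_SIZE = 10 * 1024 * 1024; // 10 MB pages, as in the comment

// Build the lookup table: byte offset where each line starts.
async function indexNewlines(blob) {
  const offsets = [0]; // line i starts at offsets[i]
  for (let pos = 0; pos < blob.size; pos += PAGE_SIZE) {
    const bytes = new Uint8Array(
      await blob.slice(pos, pos + PAGE_SIZE).arrayBuffer()
    );
    for (let i = 0; i < bytes.length; i++) {
      if (bytes[i] === 0x0a) offsets.push(pos + i + 1); // byte after "\n"
    }
  }
  return offsets;
}

// Read and parse only line n, using the precomputed offsets.
async function readRecord(blob, offsets, n) {
  const end = n + 1 < offsets.length ? offsets[n + 1] : blob.size;
  const text = await blob.slice(offsets[n], end).text();
  return JSON.parse(text);
}
```

The key point is that `blob.slice()` is lazy: only the indexing pass and the individual record reads ever touch bytes, so memory stays bounded no matter how big the file is.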
I was going to comment exactly the same thing; thanks for expressing it so well, and here's my upvote. Part of why I wanted to comment the same is that, for me, this IS exactly the reason I make open source! It is my gift to everyone; please use it well.
I do think it would be nice to get paid anything at all, but that wouldn't change how I do things or release code. In fact, unless it were truly no-strings-attached, I'd prefer to keep the current arrangement rather than be paid a pittance per month and take on extra obligations.
AFAIK their not providing an address is not the main point; it's their not collaborating on your copyright infringement case. Not providing an address is just more evidence of them not following through on their responsibilities regarding the copyright infringement.
> PEP 658 (2022) put package metadata directly in the Simple Repository API, so resolvers could fetch dependency information without downloading wheels at all.
> Fortunately RubyGems.org already provides the same information about gems.
> [...]
> After we unpack the gem, we can discover whether the gem is a native extension or not.
Why not add the metadata about whether the gem is a native extension or not directly to rubygems.org? You could then fully parallelize installation of whole dependency trees.
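If the index did carry that flag, the installer-side win is the classic parallel topological walk: start every gem the moment its dependencies are installed. A hypothetical sketch (in JavaScript for brevity; the `deps` map and `install` callback are made up for illustration, not RubyGems/Bundler APIs):

```javascript
// Install a dependency tree with maximum parallelism: each package
// starts as soon as all of its dependencies have finished.
// deps: { name: [dependency names] }, install: async (name) => ...
async function installTree(deps, install) {
  const done = new Set();
  const pending = new Map(); // name -> in-flight promise (dedupes work)

  async function ensure(name) {
    if (!pending.has(name)) {
      pending.set(name, (async () => {
        // Wait for all dependencies, themselves installed in parallel.
        await Promise.all((deps[name] ?? []).map(ensure));
        await install(name);
        done.add(name);
      })());
    }
    return pending.get(name);
  }

  await Promise.all(Object.keys(deps).map(ensure));
  return done;
}
```

Without per-version metadata in the index, the installer only learns the "needs compiling" bit after unpacking each gem, which is exactly the serialization the comment above is trying to avoid.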
Had the same thought reading this but I suspect what's in the gemspec could accidentally differ from what's in the RubyGems.org metadata, although that should probably not be possible.
From working on RubyGems.org a long time ago I vaguely remember that the metadata extracted from the gemspec is version-specific. So if you add a new native_extension boolean you'd have to artificially reprocess those previously published gemspecs to change the metadata for all past versions.
Being able to mutate metadata for past versions is dangerous enough that I'd be surprised it's allowed or even possible. So that might not even be something Aaron considered here for that reason. That said, it seems reasonable to me to suggest this improvement going forward to make unpacking the gem unnecessary to know whether it'll affect installation order.
Just make the rule apply only to packages published after a given date, and then manually backfill that metadata into the service-backend DB with a one-time scrape through all packages from before that date.
https://web.archive.org/web/20260211225255/https://crabby-ra...