Hacker News | artski's comments

I don’t know how South Korea works politically, but I know of an example from Malaysia: they spent years on their nuclear road map, then a new administration came in that hated nuclear and it got scrapped. Now they’re back to one that doesn’t mind it and have to start from zero.


Yeah, I think the world is screwed. These aren't things you can shut down instantly without losing a lot of money, plus the time that wasn't spent building better alternatives - all these projects have long lead times.


I’ve been thinking a lot lately about how new features and systems are built, especially with everything that’s happened over the past few years. It’s interesting how most of the AI stuff we see in products today is basically tacked on after the fact to chase the trend - some of it more valuable than others, depending on how forced it feels. You build your tool, your dashboard, your app, and then you try to layer in some sort of automation or “assistant” once it’s already working. And I get why - it makes sense when you’ve already got an established thing and you want to enhance it without breaking what people rely on. I did a longer writeup on Substack about it but figured I'd expand the discussion here.

But I wonder if we’re now at a point where that can’t really be the default anymore. If you’re building something new in 2025, whether it’s a product, internal tool, or even just a feature, maybe it should be designed from the ground up to be usable not just by a human clicking buttons, but by another system entirely. A model, a script, an orchestration layer - whatever you want to call it.

It’s not about being “AI-first” in the marketing sense. It’s more about thinking: can this thing I’m building be used by something else without needing a human in the loop? Can it expose its core functions as callable actions? Can its state be inspected in a structured way? Can it be reasoned about or composed into a workflow? That kind of thinking, I think, will become the baseline expectation - not just a “nice to have.”

It’s also not really that complicated. Most of the time it just means thinking in terms of well-structured APIs, surfacing decisions and logs clearly, and not baking critical functionality too deeply into the front-end. But the shift is mental. You start designing features as tools - not just user flows - and that opens up all kinds of new possibilities. For example, someone might plug your service into a broader workflow and have it run unattended, or an LLM might be able to introspect your system state and take useful actions, or you can just let users automate things with much less effort.

There’s been some early but interesting work around formalising how systems expose their capabilities to automation layers. One effort I’ve been keeping an eye on is the Model Context Protocol (MCP). In short, it aims to let a service describe what it can do - what functions it offers, what inputs it accepts, what guarantees or permissions it requires - in a way that downstream agents or orchestrators can understand without brittle hand-tuned wrappers. It’s still early days, but if this sort of approach gains traction, I can imagine a future where this kind of “self-describing system contract” becomes part of the baseline for interoperability. Kind of like how APIs used to be considered secondary, and now they are the product. It’s not there yet, but if autonomous coordination becomes more common, this may quietly become essential infrastructure.
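As a rough illustration of what such a self-describing contract looks like in spirit - this is not the actual MCP schema, just a made-up descriptor with a JSON-Schema-flavoured input spec - an orchestrator could read the contract and sanity-check a call before invoking anything:

```python
# Illustrative only: not the real MCP wire format, just the general shape
# a self-describing capability contract could take.
capability = {
    "name": "create_invoice",
    "description": "Create a draft invoice for a customer.",
    "input_schema": {
        "type": "object",
        "properties": {
            "customer_id": {"type": "string"},
            "amount_cents": {"type": "integer", "minimum": 0},
        },
        "required": ["customer_id", "amount_cents"],
    },
    "requires_permission": "billing:write",
}

def validate_call(cap: dict, args: dict) -> list:
    """Minimal check an orchestrator could run before invoking the capability."""
    schema = cap["input_schema"]
    return [f"missing required field: {name}"
            for name in schema["required"] if name not in args]

print(validate_call(capability, {"customer_id": "c_123"}))
# ['missing required field: amount_cents']
```

The value is that the contract is data, so the same descriptor can drive validation, documentation, and agent tool-selection without any hand-written glue per integration.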

I don’t know. Just a thought I’ve been chewing on. Curious what other people think. Is anyone building things with this mindset already or are there good examples out there of products or platforms that got this right from day one?


Yeah I thought about this and maybe down the line, but wanted to start with the pure statistics part as the base so it's as little of a black box as possible.


Crazy how far people go for these things tbh.


For each spike it samples the users from that spike (I've set the sample size high enough that it effectively gets all of them for 99.99% of repos - that should be optimised for speed eventually, but for now I just grab every single one while building it). It then checks the users who caused the spike for signs of being "fake accounts".
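A rough sketch of that kind of heuristic - the field names, signals, and thresholds below are made up for illustration, not the project's actual implementation:

```python
from datetime import datetime, timezone

def fake_account_score(user: dict, now: datetime) -> int:
    """Count simple 'fake account' signals on one sampled user."""
    score = 0
    if (now - user["created_at"]).days < 30:  # brand-new account
        score += 1
    if user["followers"] == 0:                # no social graph
        score += 1
    if user["repos"] == 0:                    # never pushed anything
        score += 1
    return score

def suspicious_users(spike_users: list, now: datetime, threshold: int = 2) -> list:
    """Flag every sampled user from a spike whose signal count meets the threshold."""
    return [u["login"] for u in spike_users
            if fake_account_score(u, now) >= threshold]

now = datetime(2025, 6, 1, tzinfo=timezone.utc)
spike = [
    {"login": "real_dev", "created_at": datetime(2018, 3, 1, tzinfo=timezone.utc),
     "followers": 42, "repos": 12},
    {"login": "bot_7741", "created_at": datetime(2025, 5, 20, tzinfo=timezone.utc),
     "followers": 0, "repos": 0},
]
print(suspicious_users(spike, now))  # ['bot_7741']
```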


It's a project I'm making purely for myself and I like to share what I make - sorry I didn't put much effort into the commit messages, won't do that again.


Don’t apologize. You didn’t do anything wrong. It’s your repo, use it how you wish. You don’t owe that guy anything.


Well, I initially planned to use GraphQL and started implementing it, but I switched to REST for now since the GraphQL version still isn't complete, it keeps things simpler while I iterate, and it isn't currently required. I'll bring GraphQL back once I've got key cycling in place and things are more stable. As for the rate limit, I've been tweaking things manually to avoid hitting it constantly - which I did to an extent; that's actually why I want to add key rotation... And I am allowed to leave comments for myself in a work in progress, no? Or does everything have to be perfect from day one?

You would assume that if it were purely AI-generated, it would have the correct rate limit in the comments and the code... but honestly I don't care, and yeah, I ran the README through GPT to 'prettify' it. Arrest me.


You probably should have put "v0.1" or "alpha/beta" in the post title or description - currently it reads like it's already been polished up IMO.


It would still count as "trustworthy", it just wouldn't come out to 100/100 :(.


Ironically your chance of getting a PR through is about 10x higher on smaller one-man-show repos than more heavily trafficked corporate repos that require all manner of hoops to be jumped through for a PR.


I haven't done that before so it would be a small learning curve for me to figure that out. Feel free to make a pull request.

