Hacker News | alwillis's comments

Been on the Jekyll bandwagon for a long time now; it's my go-to static site generator.

Have you ever been tempted to look at others like Hugo? Or do you get what you need from Jekyll?

> 1. We pay for SaaS so we don't have to manage it. If you vibe-code or use these AI things, then you are managing it yourself.

> 2. Most SaaS is like $20-$100/month/person. For a software engineer, that's maybe <1 hour of pay.

    |Segment                   |Median Price                              |
    |--------------------------|------------------------------------------|
    |Mid-market                |~$175/user/month                          |
    |Enterprise (<100 seats)   |~$470/seat/month implied (~$47K ACV)      |
    |Enterprise (100-500 seats)|~$312–$1,560/seat/month range (~$156K ACV)|

Enterprise contracts almost always include a platform fee on top of per-seat costs (67% of contracts), plus professional services that add 12–18% of first-year revenue.

So for a lot of companies, it's worth using AI to create a replacement.


> So for a lot of companies, it's worth using AI to create a replacement.

I'll add a nuance: those are likely big companies with slack capacity, or at least firms far enough along their effort/performance curve that marginal effort spent on their core business no longer pays off (a point that would be odd to reach without being a big company). Even with AI making processes more efficient, effort is at a premium; depending on your firm's situation, a man-hour spent on your core business may be a better use of time than spending it on non-core services.




> At this point perhaps a million person-years have been sacrificed to the semantically incoherent shit UX of git. I have loathed git from the beginning but there's effectively no other choice.

Yes! We mostly wouldn't tolerate this much complexity and terrible UX in a tool we use every day, but there's enough Stockholm syndrome out there that most of us are willing to tolerate it.


Unless you're aware that such powerful commands are something you need once in a blue moon, and then you're grateful that the tool is flexible enough to allow them in the first place.

Git may be sharp and unwieldy, but it's also one of a shrinking number of real tools we still use; the trend of turning tools into toys has consumed the regular-user market and is eating into tech software as well.


Tools, done right, are a joy to use and allow you to be expressive and precise while also saving you labor. Good tools promote mastery and creative inquiry.

Git is NOT that.

Git is something you use to get stuff done, until it becomes an irritating obstacle course of incidental complexity.


Hg is a joy to use compared to git. Sure wish hg had won.

Why should there be tolerance? You look it up once, then write a script or an alias if it's part of your workflow. Or make a note if it's worth that. I use magit, and I get quick actions and contextual help at every step of my interaction with git.
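For example, a rarely-used but fiddly invocation can be captured once as a git alias, a sketch (the alias names here are illustrative, not standard git commands):

```shell
# Illustrative alias names; pick whatever fits your workflow.
git config --global alias.uncommit 'reset --soft HEAD~1'
git config --global alias.graph 'log --oneline --graph --decorate --all'

# Check what an alias expands to:
git config --global --get alias.uncommit   # reset --soft HEAD~1
```

After that, `git uncommit` does the thing you'd otherwise have to look up every few months.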

> More than killer AI I'm afraid of Anthropic/OpenAI going into full rent-seeking mode so that everyone working in tech is forced to fork out loads of money just to stay competitive on the market.

You should be more concerned about killer AI than rent seeking by OpenAI and Anthropic. AI evolving to the point of losing control is what scientists and researchers have predicted for years; they didn’t think it would happen this quickly but here we are.

This market is hyper competitive; the models from China and other labs are just a level or two below the frontier labs.


> They should just say they'll never release a model of this caliber to the public at this point and say out loud we'll only get gimped versions.

That’s not going to happen. If you recall, OpenAI didn’t release a model a few years ago because they felt it was too dangerous.

Anthropic is giving the industry a heads up and time to patch their software.

They said there are exploitable vulnerabilities in every major operating system.

But in 6 months every frontier model will be able to do the same things. So Anthropic doesn’t have the luxury of not shipping their best models. But they also have to be responsible as well.


> I struggle to believe that a ton of seemingly intelligent software engineers are too dumb to figure out how to use Claude code to get reliable results.

They're not dumb, but I'm not surprised they're struggling.

A developer's mindset has to change when adding AI into the mix, and many developers either can't or won't make that change. Developers whose commit messages look something like "Fixed some bugs" probably aren't going to take the time to write a decent prompt either.

Whenever there's a technology shift, there are always people who can't or won't adapt. And let's be honest, there are folks whose agenda (consciously or not) is to keep the status quo and "prove" that AI is a bad thing.

No wonder we're seeing wildly different stories about the effectiveness of coding agents.


They seem to know what they’re doing. Anthropic entered 2025 with a run rate of $1 billion; the run rate for March 2026 is estimated at $19 billion.

Internal projections show the company reaching cash-flow break-even in 2028, after stopping cash burn in 2027.

They’ve already implemented several of the features that put OpenClaw on the map.


> Anthropic entered 2025 with a run rate of $1 billion; the run rate for March 2026 is estimated at $19 billion.

I don't know what that means in this context.

> Internal projections show the company reaching cash-flow break-even in 2028, after stopping cash burn in 2027.

What does that have to do with them implementing restrictions on their plans because they are currently running at a loss?

Okay, let's say their internal projections[1] are accurate: were those made before or after OpenClaw was released? Maybe their projections were made on the assumption that people would stop using $10k/m worth of tokens on a $200/m plan? Or that those users would only be doing code? Or that plan users wouldn't be running requests at a rate of 5/minute, every minute of every hour of every day?

--------------------------------

[1] Where did you find those projections? I'm skeptical that, at their current prices and current plans, break-even at any point in the future is possible unless they shut off or severely scale down training. Running at a per-unit loss means the more you sell, the larger your loss; increasing your sales increases your loss.


I think the point the author is trying to make is that dealing with environment variables was a problem solved 40+ years ago on Unix and inherited by FreeBSD and other Unix-like systems.

And that we shouldn't need to download something like dotenv to handle something the operating system and shell already do.
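As a sketch of that shell-native approach (the file name and variables below are made up for illustration): a plain file of `export` lines, sourced before launching the app, does what dotenv does with no extra dependency.

```shell
# Illustrative only: app.env and the variable names are invented here.
cat > app.env <<'EOF'
export DB_HOST=localhost
export DB_PORT=5432
EOF

# Source it into the current shell; any child process then inherits the vars.
. ./app.env
echo "$DB_HOST:$DB_PORT"   # localhost:5432
```

The same file works from a login shell, a Makefile, or a systemd `EnvironmentFile=` line, which is roughly the author's point: the primitive was already there.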


> On your lan, you really want to be doing ".local" / ".lan" / ".home".

The "official" one is home.arpa, according to RFC 8375 [1]:

    Users and devices within a home network (hereafter referred to as
    "homenet") require devices and services to be identified by names
    that are unique within the boundaries of the homenet [RFC7368].  The
    naming mechanism needs to function without configuration from the
    user.  While it may be possible for a name to be delegated by an ISP,
    homenets must also function in the absence of such a delegation.
    This document reserves the name 'home.arpa.' to serve as the default
    name for this purpose, with a scope limited to each individual
    homenet.
[1]: https://datatracker.ietf.org/doc/html/rfc8375
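In practice this means the resolver on the homenet answers home.arpa names itself and never forwards them upstream. A minimal dnsmasq fragment as a sketch (the host name and addresses are made up):

```
# Resolve home.arpa locally; never forward these queries upstream.
local=/home.arpa/

# Illustrative static entry for a device on the homenet.
address=/nas.home.arpa/192.168.1.10
```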

It may be the most officially-recommended for home use, but .internal is also officially endorsed for "private-use applications" (deciding the semantics of these is left as an exercise to the reader): https://en.wikipedia.org/wiki/.internal

That is a classic "design by committee" thing.

".home" and ".lan" along with a bunch of other historic tlds are on the reserved list and cannot be registered.

Call techy people pathologically lazy, but no one is going to switch to typing ".home.arpa" or ".internal". They should have stuck with the original proposal and made ".home" official, instead of sticking ".arpa" behind it. That immediately doomed the RFC.

