
I thought this was an unhinged parody of a design site, kinda surprised it's a real thing. Unfortunately the design isn't for me, things look off center and the overall "weight" of components feels off.

Agreed. I’ve spent considerable time on scale-based design, and 1.618 always feels like too large an interval.
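
To make that concrete (an illustrative sketch, not anything from the parent): a modular scale multiplies a base size by a fixed ratio per step, and the golden ratio blows up much faster than a gentler ratio like 1.25:

  # Modular type scale: each step multiplies the base size by a fixed ratio.
  BASE_PX = 16  # assumed base font size

  def scale(ratio: float, steps: int = 5) -> list[float]:
      """Return font sizes for `steps` steps up from the base."""
      return [round(BASE_PX * ratio**n, 1) for n in range(steps)]

  print(scale(1.618))  # [16.0, 25.9, 41.9, 67.8, 109.7] -- huge jumps
  print(scale(1.25))   # [16.0, 20.0, 25.0, 31.2, 39.1]  -- gentler intervals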

If I include licensed code in a prompt and have an LLM include it in the output, is it still licensed?

I was hoping that with IPv6, getting an address space as an individual would go back to how it was in the early IPv4 days, but alas, you need to be a multihomed individual with tons of usage instead of just a sophisticated netizen who wants to own their block.

One of my customers was handing out /64s for a while but it was more hassle than it was worth. I only ever saw one residential customer use it, and he was just smart enough to cause problems.

It's one of those things that there needs to be strong consumer demand for, or it will just never happen, tbh.

From our perspective, what we want more than anything in the universe is to never do NAT or DNS ever again. I would much rather maintain a billing system indicating you rent a small block of IPv6 space, with a nice little static route, than maintain never-ending NAT and DNS logs for the benefit of police forces who can't shit without collecting every micron of data. But NAT is basically security these days, and there's a negative driver in exposing customer routers directly to the internet (in that, if a router even supports v6, it's likely to be rooted). Customers will leave if telcos do things properly, and there's literally zero reward for being nice about it.


Interesting, my two ISPs (one in Belgium, one in France, not business ISPs) hand out fixed /48 blocks to every customer. As far as I know, that's what RIPE recommends; they actively discourage assigning prefixes longer than /56.

The modems they provide handle it without needing anything special from the customers. The devices get IPv6 addresses from this prefix and are firewalled by default. It's pretty simple, so I'm not sure what could go wrong there.
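
As an aside, the /48-vs-/56 arithmetic is easy to check with Python's standard ipaddress module (2001:db8::/48 below is just the documentation prefix, used for illustration):

  import ipaddress

  # A /48 per customer contains 2**16 = 65,536 /64s, i.e. 65,536 standard LANs.
  prefix = ipaddress.ip_network("2001:db8::/48")
  print(sum(1 for _ in prefix.subnets(new_prefix=64)))  # 65536

  # Even a /56, the longest prefix RIPE suggests handing out, holds 256 /64s.
  print(2 ** (64 - 56))  # 256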


In some countries you're only required to turn over logs that you chose to collect, but you're not required to collect them.

Yes, same here. Very frustrating. It is almost as if the powers that be don't want lowly netizens controlling their own destiny.

Actually, they don't want to pollute the internet routing table with routes that are fully subsumed into other routes. The effect on address ownership is a side effect.

Actually, they just want to milk the money out of you. It's a matter of how much you're willing to pay; as a business customer, it's all possible.

Most ISPs do not have such pure goals as protecting the global routing tables ;)


RIRs, not ISPs, allocate addresses at the top level; they make money on each address allocation, and they still won't allocate addresses to you if you don't multihome, because they have a duty to conserve resources.

When you get PI addresses your LIR/ISP just passes your data on to the RIR.


Just like many industries there's a retail side and a wholesale side. You're asking to get a wholesale product from a retail channel. If you become a wholesale customer you can get what you want, for a price.

I don't want to own an address; addresses should be cheap, meaningless (sans routing: the longer the common prefix, the closer geographically you should be), and not conflated with identifiers.

I just want a way to do public-key-based discovery. I'm not sure if WireGuard + DHT would do, though, as it'd also mean that it's easy to track your PK (and maybe you, through your devices/services announced with PKs).

Maybe you can announce your IP in a neat encryption scheme that adds some privacy without increasing costs too much?


Basically Yggdrasil?


Oh, that's interesting

What is the point of owning public address space?

Anything in your private network (even if it goes over public internet) should be encrypted and locked up anyway. Something like Wireguard or Nebula only needs a few (maybe just one) publicly accessible address. Inside the overlay network, it's easy to keep IP addresses stable.

Anything public-facing likely needs a DNS record, updatable quickly when the IP of a publicly accessible interface changes (infrequently).

What am I missing?


The realistic point is to have your own abuse email contact, to evade the ban-happy policies that most server hosts have even when you did nothing wrong. Usually they suspend your account if you don't reply within 24 hours, even if the complaint is obvious nonsense.

It's the only real way of running reliable IPv6 networks with multiple uplinks. Unless you want NATv6.

DNS updates are slow. BGP can react to a downed link in <1 sec.

Even fast LACP needs three seconds and that's on the same collision domain.

How does BGP actually detect that a link is down? The keepalive default is 30s, but that can be changed. If you set it to, say, one second, is that wise? Once a link is down, that fact will propagate at the speed of BGP and other routing protocols. Recovery will need a similar propagation.

Depending on where the link is, a second can be a "lifetime" these days, or not. It really depends on the environment what an appropriate heartbeat interval might be.

Also, given that BGP is TCP-based, it might have to interact with other lower-level link detection protocols.


BFD or Ethernet-OAM is the standard here.

It can get a bit hardware-dependent, but getting <50ms failovers from software-based BFD in BIRD or FRR is fairly easy, and I've tested down to <1ms before with hardware-based BFD echo. ~50ms is the point at which a user making a traditional VoIP call won't notice the path switch.

You can get NICs for computers that do hardware BFD (like most Nvidia/Mellanox or higher-end Broadcom/Intel NICs), and it's obviously included in higher-end networking kit.

You then link the BGP routes to the health of the BFD session for which that path is the next hop, and you get super quick withdrawals.
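
For intuition, BFD's detection time is roughly the negotiated interval times the detect multiplier. The intervals below are illustrative assumptions, not anything from the parent:

  # BFD declares a session down after `multiplier` consecutive intervals
  # pass with no packets, so detection time ~= interval * multiplier.
  def bfd_detection_ms(interval_ms: float, multiplier: int = 3) -> float:
      return interval_ms * multiplier

  print(bfd_detection_ms(300))   # 900 ms  -- conservative software defaults
  print(bfd_detection_ms(15))    # 45 ms   -- under the ~50ms VoIP threshold
  print(bfd_detection_ms(0.25))  # 0.75 ms -- hardware-offloaded echo territory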


I.e., BIRD detects interface failure, but this affects only your side of the decision making. For bidirectional failure detection you do BFD with BGP. BFD default timers are 3 times 30 ms, iirc.

I have both my own multihomed ASN and operate my own nameservers. The latter has usually been about as fast for failover overall in practice. BGP may look to converge near instantly from your 2-3 peer outbound perspective but the inbound convergence from the 100k networks on the rest of the internet is much slower and has a long tail very akin to trying to set your DNS TTL to 0 and having the rest of the internet decide to do it slower for cache/churn reasons anyways.

The bigger problem, and where BGP multihoming is most handy, is it's just so much easier to get a holistic in+out failover where nothing really changes vs in DNS where it's more about getting the future inbound stuff to change where it goes. E.g. it's a pain to break an active session because the address had to change, even if DNS can update where the new service is quickly.


The long tail of routers receiving your update doesn’t matter. Once the common transit networks get it, that’s where the rest would dump the traffic to reach you anyway. The only time slow propagation to the edges matters is the first time announcing a prefix after it has been fully withdrawn.

Using the wrong route to get the packet in your general direction still gets you the packet as long as it hits an ISP along the way that got the update.

We could fully drain traffic from a transit provider in <60s with a withdrawal with all of the major providers you get at the internet exchanges. If you weren't seeing that, your upstream ISPs may have penalized you for flapping too much and put in explicit delays.


<60s sounds about right as a general safe estimate. I just mean people should expect 1-2ish orders of magnitude more than <1s from a downed link with internet BGP upstreams in a multihomed situation.

I’m saying that’s not a correctly configured link for fast failure.

<1 second was normal for hard link down events or explicit withdrawals. Anything above that was waiting for some BGP peer timeout or some IGP event.

If your ISP is taking longer than 1 second to propagate your change, you’ve been put in some dunce protection box.


If it were flap suppression/slow peer detection/"the dunce bucket" there wouldn't be a long tail of convergence - it'd just be nothing until all at once. This also isn't something I've seen on my personal AS alone; it's what I've come to expect in many enterprise cutovers from previously working at a network VAR. The personal AS is, however, much more carefree to move around to different random providers on a whim, of course :).

I found some data from an oldish post by benjojo https://blog.benjojo.co.uk/post/speed-of-bgp-network-propaga... which confirms various tier 1s do propagate updates across their networks very fast (<2ish seconds) while others certainly do not. Notably, Level 3 (now Lumen) is the largest BGP presence by prefix count and was the worst tested in the list - starting to apply at ~20s and finishing at ~50s. This was for announce specifically, which should be the clearer case.


Honestly it's not free but it's really not that expensive. With RIPE it's about 75€ per year for the ASN and being multihomed is not really a problem, there are multiple services that will let you announce through them for free or very cheap. You don't have volume minimums.

I do agree it should be simpler, but it is accessible to individuals today.


I feel you. Us nerds have been ignored by modern day home user contracts.

Que? 4,722,366,482,869,645,213,696 addresses isn't enough for you?
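
That number is exactly 2**72 -- the count of addresses in a single /56 delegation, since 128 - 56 = 72 host bits -- which a one-liner confirms:

  print(2 ** 72)  # 4722366482869645213696
  print(2 ** 72 == 4_722_366_482_869_645_213_696)  # True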

They want the address block registered directly to them instead of to their ISP.

> In April 2009 RIPE accepted a policy proposal of January 2006 to assign IPv6 provider-independent IPv6 prefixes. Assignments are taken from the address range 2001:678::/29 and have a minimum size of a /48 prefix.

You can have your own PI block and move it between ISPs if you so desire. You effectively own the block.


> If you want to code by hand, then do it! No one's stopping you. But we shouldn't pretend that you will be able to do that professionally for much longer.

If you can't code by hand professionally anymore, what are you being paid to do? Bring the specs to the LLMs? Deal with the customers so the LLMs don't have to?


This is what I don’t understand: why highly paid SWEs seem to think that their salaries will remain the same (if they even still have a job) when their role is now that of a glorified project manager.

Recently, I had to do an integration with a Chinese API for my company. I used Codex to do the whole thing.

Yet, there is no way a product manager without any coding experience could have done it. First, the API needed to communicate with the main app correctly, with things like formatting and correcting data. This required human engineering guidance and experience working with the expected data; AI was lost. Second, the API was designed extremely poorly. You first had to make a request, then retry a second endpoint over and over again while the Chinese API did its thing in the background. Yes, I had to poll it. I then had to do load testing to make sure it was reliable (it wasn't). In the end, I gave a recommendation that we shouldn't rely on this Chinese company and should back out of the deal before we sent them a huge deposit.
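
For the curious, the submit-then-poll dance looks roughly like this; the endpoints, field names, and timings are hypothetical placeholders, not the actual API:

  import time
  import requests  # third-party: pip install requests

  BASE = "https://api.example.com"  # placeholder for the vendor's base URL

  def submit_and_poll(payload: dict, interval_s: float = 2.0,
                      timeout_s: float = 60.0) -> dict:
      # Kick the job off on the first endpoint...
      job = requests.post(f"{BASE}/jobs", json=payload, timeout=10).json()
      deadline = time.monotonic() + timeout_s
      # ...then hit the status endpoint over and over until it finishes
      # in the background, or we give up.
      while time.monotonic() < deadline:
          status = requests.get(f"{BASE}/jobs/{job['id']}", timeout=10).json()
          if status.get("state") == "done":
              return status["result"]
          time.sleep(interval_s)
      raise TimeoutError("job never completed")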

A non-technical PM couldn't have done what I did... for at least a few more years. You need a background and experience in software development to even know what to prompt the AI. Not only that, in the last 3 years, I developed an intuition on where LLMs fail and succeed when writing code.

I still have a job. My role has changed. I haven't written more than 10 lines of code in a day for months now. Yes, it's kind of scary for software devs right now but I'm honestly loving this as I was never the kind of dev who loved the code, just someone who needed to code to get what I wanted.


Architects and engineers are not construction workers. AI can build the thing but it needs to be told exactly what to build by someone who knows how software works.

I’ve spent enough time working with cross-functional stakeholders to know that the vast majority of PMs (whether of the product, program, or project variety) will not be capable of running AI towards any meaningful software development goal. At best they can build impressive prototypes and demos; at worst they will corrupt data in a company-destroying level of failure.


Agree. I’m finding quite a lot of success with AI, but I’m writing detailed prompts. In turn, the LLMs are producing massive, 99% error-free refactors.

No one but seniors with years and years of experience is producing like that, as evidenced by how much the juniors I work with struggle to do the same.


> can build the thing but it needs to be told exactly what to build by someone who knows how software works.

How do you tell a computer exactly what you want it to do, without using code?


Basically you feed it a massive volume of application code. It turns out there is a lot of commonality and latent repetition that can be teased out by LLMs, so you can get quite far with that, though it will fall down when you get into more novel terrain.

> AI can build the thing but it needs to be told exactly what to build by someone who knows how software works.

If only AI were following my instructions instead of ignoring them, telling me it is sorry after I complain, and then returning some other implementation which also fails to follow my instructions ... :-(


Don't be stupid: if an AI can figure out how to arrange code, it can also figure out how to make the right architecture choices.

Right now millions of developers are providing tons of architecture questions and answers. That's all going to be used as training data for the next model coming out in 6 months time.

This is a moat on our jobs as deep as a puddle.

If you believe LLMs will be able to do complex coding tasks, you must also concede they will be able to make the relatively simpler architecture choices, simply by asking the right questions. Something they're already starting to be able to do.


> [...] by asking the right questions [...]

Now you've put your finger on something. Who is capable of asking the right questions?


It already asks questions in plan mode.

It's not a massive jump to go from 'add a button above the table to the right that, when clicked, downloads an Excel file' to 'the client's asking to download an Excel file'.

If you believe the LLMs will graduate from junior-level coding to senior in the next year, which they're clearly not capable of doing yet despite all the hype, there is no moat in going from coder to BA to PM.

And then you don't need middle management either.


Good project managers (with a technical focus) are not low-paid at all, even compared to SWEs.

Sure, but you need 1/10th the amount of PMs as you do SWEs.

But, the thinking goes, with AI in the mix, spinning up a new project or feature will be so low-friction that there will be 10x as many projects created. So our jobs are saved!

(Color me skeptical.)


You have to move up the stack and make yourself a more valuable product. I have an analogy…

I’ve been working for cloud consulting companies/departments for six years.

Customers were willing to pay mid-level (L5) consultants with @amazon.com by their names (AWS ProServe) $x to do one “workstream”/epic worth of work. I got paid $x minus Amazon’s cut, in cash and RSUs.

Once I got Amazon’ed, I had to get a staff-level position (senior equivalent at BigTech) at a third-party company where now I am responsible for larger projects. Before, I would have needed people; now I need code-gen tools, my quarter century of development experience, and my decade of experience leading implementations + coding.


Doesn't this mean the ones that should be really worried are the project managers, since the SWE has a better understanding of what's being done and can now orchestrate from a PM level?

Both should realize that if this all works out according to plan, then eventually a point is reached where there is no longer a need for their entire company, let alone any individual role in it.

It's even worse - project management where you have to micromanage everything the AI is doing.

But yeah, if anybody can do it, the salaries are going to plummet. You don't need a CS degree to tell the AI to try again.


Salaries might remain the same, but they'll be expected to produce a lot more.

We produce way more than the punch-card wielding developers of yesteryear and we’re doing just fine (better even).

And we get paid less.

They're delusional, but that's to be expected if you imagine them as the types for whom everything in life has always just kinda worked out. The idea that things could suddenly not work out is almost unimaginable to them, so of course things will change, but not, for them, substantially for the worse.

You are under a delusion. A glorified project manager will not produce production-quality code no matter what. At least not until we reach that holy grail of AGI. But if that ever happens, the world will have way bigger problems to deal with.

This is what I don't understand: everyone who thinks we're still relevant with the same job and salary expectations.

Everything just changed. Fundamentally.

If you don't adapt to these tools, you will be slower than your peers. Few businesses will tolerate that.

This is competitive cycling. Claude is a modern bike with steroids. You can stay on a penny farthing, but that's not advised.

You can write 10x the code - good code. You can review and edit it before committing it. Nothing changes from a code quality perspective. Only speed.

What remains to be seen is how many of us the market needs and how much the market will pay us.

I'm hoping demand and comp remain constant, but we'll see.

The one thing I will say is that we need ownership in these systems ASAP, or we'll become serfs to computing.


I don’t think that’s the real dichotomy here. You can either produce 2-5x good maintainable code, or 10-50x more dogshit code that works 80-90% of the time and that will be a maintenance nightmare.

The management has decided that the latter is preferable for short term gains.


> You can either produce 2-5x good maintainable code, or 10-50x more dogshit code that works 80-90% of the time, and that will be a maintenance nightmare.

It's actually worse than that, because really the first case is "produce 1x good code". The hard part was never typing the code, it was understanding and making sure the code works. And with LLMs as unreliable as they are, you have to carefully review every line they produce - at which point you didn't save any time over doing it yourself.


It's not dogshit if you're steering.

That's what so many of you are not getting.

Look at the pretty pictures AI generates. That's where we are with code now. Except you have ComfyUI instead of ChatGPT. You can work with precision.

I'm a 500k TC senior SWE. I write six-nines, active-active, billion-dollar-a-day systems. I'm no stranger to writing thirty-page design documents. These systems can work in my domain just fine.


  > Look at the pretty pictures AI generates. That's where we are with code now.
Oh, that is a great analogy. Yes, those pictures are pretty! Until you look closer. Any experienced artist or designer will tell you that they are dogshit and don't have value. Look no further than Ubisoft and their Anno 117 game for proof.

Yep, that's where we are with code now. Pretty - until you look close. Dogshit - if you care to notice details.


Not to mention how hard it is to actually get what you want out of it. The image might be pretty, and kinda sorta what you asked for. But if you need something specific, trying to get AI to generate it is like pulling teeth.

I've developed a new hobby lately, which I call "spot the bullshit."

When I notice a genAI image, I force myself to stop and inspect it closely to find what nonsensical thing it did.

I've found something every time I looked, since starting this routine.


I agree entirely, except I don't know that I've seen pretty pictures from AI.

"Glossy" might be a good word (no i don't mean literally shiny, even if they are sometimes that).


Since we’re apparently measuring capability and knowledge via comp, I made 617k last year. With that silly anecdote out of the way, in my very recent experience (last week), SOTA AI is incapable of writing shell scripts that don’t have glaring errors, and also struggles mightily with RDBMS index design.

Can they produce working code? Of course. Will you need to review it with much more scrutiny to catch errors? Also yes, which makes me question the supposed productivity boost.


The problem is not that it can’t produce good code if you’re steering. The problem is that:

There are multiple people on each team; you cannot know how closely each teammate monitored their AI.

Somebody who does not care will vastly outperform your output. By orders of magnitude. With the current unicorn-chasing trends, that approach tends to be more rewarded.

This produces an incentive to not actually care about the quality. Which will cause issues down the road.

I quite like using AI. I do monitor what it’s doing when I’m building something that should work for a long time. I also do totally blind vibe-coded scripts when they will never see production.

But for large programs that will require maintenance for years, these things can be dangerous.


> You can write 10x the code - good code. You can review and edit it before committing it. Nothing changes from a code quality perspective. Only speed.

I agree, but this is an oversimplification - we don't always get the speed boosts, specifically when we don't stay pragmatic about the process.

I have a small set of steps that I follow to really boost my productivity and get the speed advantage.

(Note: I am talking about AI coding and not vibe coding.)

- You give all the specs, and there is "some" chance that the LLM will generate exactly the code required.
- In most cases, you will need to do >2 design iterations and many small iterations, like instructing the LLM to properly handle errors and gracefully recover from them.
- This will definitely increase speed 2x-3x, but we still need to review everything.
- Also, this doesn't take into account the edge cases our design missed.

I don't know about big tech, but here is what I have to do to solve a problem:

1. Figure out a potential solution

2. Make a hacky POC script to verify the proposed solution actually solves the problem

3. Design a decently robust system as a first iteration (that can have bugs)

4. Implement using AI

5. Verify each generated line

6. Find out edge cases and failure modes missed during design, then repeat from step 3 to tweak the design, or from step 4 to fix bugs.

WHENEVER I jump directly from 1 -> 3 (vague design) -> 5, the speed advantages evaporate.


> You can write 10x the code - good code.

This is just blatantly false.


Every engineer in the next two years needs to prepare themselves for this conversation to play out (from Office Space):

> Bob Slydell: What you do at Initech is you take the specifications from the customer and bring them down to the software engineers?

> Tom Smykowski: Yes, yes that's right.

> Bob Porter: Well then I just have to ask why can't the customers take them directly to the software people?

> Tom Smykowski: Well, I'll tell you why, because, engineers are not good at dealing with customers.

> Bob Slydell: So you physically take the specs from the customer?

> Tom Smykowski: Well... No. My secretary does that, or they're faxed.

> Bob Porter: So then you must physically bring them to the software people?

> Tom Smykowski: Well. No. Ah sometimes.

> Bob Slydell: What would you say you do here?

The agents are the engineers now.


PMs can always keep their jobs because they appear to be working and they keep contact with the execs directly. They have taken a bigger and bigger part of the tech pie over the years, and soon they'll finally take it all.

And when they’re actually good at their job, they’re invaluable in my opinion

Yeah, the best way to learn the value of project management is to work somewhere without it.

That's not what I am seeing play out at a big corp. In reality, everyone gets thrown under the bus, no matter if C-level or pleb, if they don't appear to know how to drive the AI metrics up. Just being a PM won't save your job any more than that of the dev who doesn't know how to acquire and use new skills. On the contrary, the jobs of the more competent devs are safer than those of some managers here who don't know the tech.

And that "ah sometimes" costs what? Not forgetting you are also paying for tokens.

It's a bit like eating junk food every day, and ah sometimes I go see the doctor, who keeps saying I should eat more healthily and lose some weight.



I am currently doing 6 projects at the same time, where before I would only have been doing one at a time. This includes the requirements, design, implementation, and testing.

Sounds awful

Code IS spec.

Your code in $INSERT_LANGUAGE is no less of a spec to machine code than english is to $INSERT_LANGUAGE.

Spec is still needed; spec is the core problem of engineering. Too much specialization has made job titles like $INSERT_LANGUAGE engineer, which deviated too far from the core problem, and it is being rectified now.


I have people skills! I am good at dealing with people!

When the cost of defects and of the AI tooling itself inevitably rises, I think we are likely to see a sudden demand for the remaining employed developers to do more work "by hand".

"Dang, the AI really screwed up this time. Call in the de-sloppers."

>"If you can't code by hand professionally anymore"

Then you are simply fucked. The code you deliver will contain bugs which the LLM sometimes will be able to fix and sometimes will not. And as a person who has no clue, you will have no idea how to fix it when the LLM cannot. Also, even when LLM code is correct, it can and sometimes does introduce gross performance fuckups, like using patterns with N-squared complexity instead of N, for example. Again, as a clueless person, you are fucked. And if one goes into areas like concurrency and multithreading optimizations, one gets fucked even more. I can go on and on with many more particular ways to get screwed.

For a person who can hand-code, AI becomes an amazing tool. For me, it helps immensely.


What are you going to do for work in 2 years?

I have enough savings for a few years, so I might just move to a lower COL area, and wait it out. Hopefully after the initial chaos period things will improve.

For someone in your position, with your experience, it’s quite depressing that your job is going to be automated. I feel quite anxious when I see young generations in my country who say themselves that they are lazy about learning new things. The next generation will be useless to capitalist societies, in the sense that they won’t be able to bring value through administrative or white-collar work. I hope some areas of the industry will move slowly toward AI.

Yes, they will output the same file hash every time, short of some build time mutation. Thus we can have nice things like reproducible builds and integrity checks.

I wish these folks would tell me how you would do a reproducible build, or reproducible anything really, with LLMs. Even monkeying with temperature, different runs will still introduce subtle changes that would change the hash.

This reminds me of how you can create fair coins from biased ones and vice versa. You toss your coin repeatedly, and then get the singular "result" in some way by encoding/decoding the sequence. Different sequences might map to the same result, and so comparing results is not the same as comparing the sequences.

Meanwhile, you press the "shuffle" button, and code-gen creates different code. But this isn't necessarily the part that's supposed to be reproducible, and isn't how you actually go about comparing the output. Instead, maybe two different rounds of code generation are "equal" if the test suite passes for both. Not precisely the equivalence-class stuff the parent is talking about, but it's a simple way of thinking about it that might be helpful.
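
The classic version of that coin trick is von Neumann's extractor: toss the biased coin in pairs, keep one outcome for HT and the other for TH, and discard HH/TT. Both kept outcomes occur with probability p*(1-p), so the result is fair. A minimal sketch:

  import random

  def biased_coin(p_heads: float = 0.7) -> int:
      """1 = heads with probability p_heads; deliberately unfair."""
      return int(random.random() < p_heads)

  def fair_flip(p_heads: float = 0.7) -> int:
      """Von Neumann extractor: HT -> 1, TH -> 0, HH/TT -> toss again."""
      while True:
          a, b = biased_coin(p_heads), biased_coin(p_heads)
          if a != b:
              return a

  flips = [fair_flip() for _ in range(10_000)]
  print(sum(flips) / len(flips))  # ~0.5 despite the 0.7-biased source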


There is nothing intrinsic to LLMs that prevents reproducibility. You can run them deterministically without adding noise; it would just be a lot slower to have a deterministic order of operations, which takes an already bad idea and makes it worse.

Please tell me how to do this with any of the inference providers or a tool like llama.cpp, and make it work across machines/GPUs. I think you could maybe get close to deterministic output, but you'll always risk having some level of randomness in the output.

It's just arithmetic, and computer arithmetic is deterministic.

On a practical level, existing implementations are nondeterministic because they don't take care to always perform mathematically associative operations in the same order every time. Floating-point arithmetic is not associative, so those variations change the output. It's absolutely possible to fix this and perform the operations in the same order every time; implementors just don't bother. It's not very useful, especially when almost everything runs with a non-zero temperature.
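
The non-associativity is easy to demonstrate; this is the standard two-line example, not specific to any inference stack:

  # Floating-point addition is commutative but not associative, so the
  # order a parallel reduction sums values in can change the result.
  a, b, c = 1e16, -1e16, 1.0
  print((a + b) + c)  # 1.0
  print(a + (b + c))  # 0.0 -- the 1.0 is absorbed into -1e16 and lost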

I think the whole nondeterminism thing is overblown anyway. Mathematical nondeterminism and practical nondeterminism aren't the same thing. With a compiler, it's not just that identical input produces identical output. It's also that semantically identical input produces semantically identical output. If I add an extra space somewhere whitespace isn't significant in the language I'm using, this should not change the output (aside from debug info that includes column numbers, anyway). My deterministic JSON decoder should not only decode the same values for two runs on identical JSON, a change in one value in the input should produce the same values in the output except for the one that changed.

LLMs inherently fail at this regardless of temperature or determinism.


Just because you can’t do it with your chosen tools does not mean it cannot be done. I’ve already granted the premise that it is impractical. Unless there is a framework that already guarantees determinism, you’ll have to roll your own, which honestly isn’t that hard to do. You won’t get competitive performance, but that’s already being sacrificed for determinism, so you wouldn’t get it anyway.

Any good research papers on the impact of short form video on the human brain? This is a major cause for the attention crisis we're facing IMO.

What would you rather people pay attention to?

I wish PostgreSQL had a native vector implementation instead of using extensions. They're kind of a pain in the ass to maintain, especially with migrations.

Interestingly, almost all of Postgres is an extension, including the stuff you expect to be built in: all data types, all index types, all operators, and, I think, the implementation of ordinary tables.

For me the showstopper missing feature is a standard and native implementation of temporal tables. Once you use those effectively in an application, they become something you can't do without.

You're leaving out how much it costs to pull the lever, both in time and money.

If we're making a reasonable analogy, then successful pulls cost much less than $5 of time and money.

If the analogy is comparing to downtime, then unsuccessful pulls cost basically nothing.

