Hacker News | xg15's comments

That's not a third category; that's just a sociopath as seen by themselves.

I doubt most sociopaths, when they’re honest, would agree they feel much guilt or remorse at all.

Whereas the people in the category I’m describing might feel those things, but prioritize those feelings far below the benefits of achieving what they set out to achieve.


> I doubt most sociopaths, when they’re honest, would agree they feel much guilt or remorse at all.

Yes, that is the core trait I highlighted in the first bullet.


The consequence of "space is cheap" / "If I didn't use that RAM, it would just sit there unused anyway" etc.

But, well, how does it do the human-like-text-outputting exactly?

I’m guessing you aren’t just asking how an LLM works, but attempting to make the point that humans are also statistical next-token predictors or something?

Humans make predictions, that doesn’t mean that’s all we do.


No, my point is that "statistical next-token predictor" is an empty phrase that doesn't really explain much. Markov chains are statistical next-token predictors as well, and nevertheless no one would confuse a Markov chain with a conscious being (or deem the generated texts in any way useful, for that matter).

The question is how the prediction works in detail, and those details are still being researched, as Anthropic does here, and the research can yield unexpected results.
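To make the Markov chain comparison concrete: a word-level Markov chain is a complete "statistical next-token predictor" in a few lines. This is just a toy sketch for illustration; the function names and corpus are made up.

```python
import random
from collections import defaultdict

def train_markov(tokens):
    """Count, for each token, how often each successor follows it."""
    counts = defaultdict(lambda: defaultdict(int))
    for cur, nxt in zip(tokens, tokens[1:]):
        counts[cur][nxt] += 1
    return counts

def predict_next(counts, token):
    """Sample the next token proportionally to observed frequency."""
    successors = counts[token]
    choices = list(successors.keys())
    weights = list(successors.values())
    return random.choices(choices, weights=weights)[0]

corpus = "the cat sat on the mat the cat ran".split()
model = train_markov(corpus)
print(predict_next(model, "the"))  # prints "cat" or "mat"
```

It fits the "next-token predictor" label perfectly, yet the interesting question — what the model has internalized about its input — is exactly what the label doesn't answer.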


> These are totally normal though and it would be more surprising if they weren't carrying out military exercises at all.

In principle yes, but not if the exercises are literally rehearsing a blockade of Taiwan.


Why do you say that?


Interestingly, the travel permit requirement already existed in law before, but it was tied to the Spannungsfall/Verteidigungsfall (state of tension / state of defense) conditions.

This new law removed this condition and preemptively "activated" the requirement even before the Spannungsfall was declared.


I think a counterargument would be parallel evolution: There are various examples in nature, where a certain feature evolved independently several times, without any genetic connection - from what I understand, we believe because the evolutionary pressures were similar.

One obvious example would be wings, where you have several different strategies - feathers, insect wings, bat-like wings, etc - that have similar functionality and employ the same physical principles, but are "implemented" vastly differently.

You have similar examples in brains, where e.g. corvids are capable of various cognitive feats that would involve the neocortex in human brains - only their brains don't have a neocortex. Instead they seem to use certain other brain regions for that, which don't have an equivalent in humans.

Nevertheless it's possible to communicate with corvids.

So this makes me wonder if a different "implementation" always necessarily means the results are incomparable.

In the interest of falsifiability, what behavior or internal structures in LLMs would be enough to be convincing that they are "real" emotions?


"Parallel" evolution is just different branches of the same evolutionary tree. The most distantly related naturally evolved lifeforms are more similar to each other than an LLM is to a human. The LLM did not evolve at all.

Evolution is the way how the "mechanism" came to be, which is indeed very different. But the mechanism itself - spiking neurons and neurotransmitters on one hand vs matrix multiplications and nonlinear functions (both "inspired" by our understanding of neurons) don't seem so different, at least not on a fundamental level.

What is different for sure is the time dimension: Biological brains are continuous and persistent, while LLMs only "think" in the space between two tokens, and the entire state that is persisted is the context window.


> The LLM did not evolve at all.

Evolution and Transformer training are 'just' different optimization algorithms. Different optimizers obviously can produce very comparable results given comparable constraints.
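A toy illustration of that point (assuming nothing about real training pipelines): a gradient-based optimizer and a mutate-and-select loop driving the same loss toward the same optimum.

```python
import random

def loss(x):
    return (x - 3.0) ** 2  # both optimizers seek the minimum at x = 3

def gradient_descent(x=0.0, lr=0.1, steps=200):
    """Follow the analytic gradient downhill, like SGD in training."""
    for _ in range(steps):
        x -= lr * 2 * (x - 3.0)
    return x

def evolutionary(x=0.0, sigma=0.5, steps=2000):
    """Blind mutation plus selection: keep a child only if it's fitter."""
    random.seed(0)
    for _ in range(steps):
        child = x + random.gauss(0, sigma)  # mutate
        if loss(child) < loss(x):           # select
            x = child
    return x

print(gradient_descent())  # close to 3.0
print(evolutionary())      # also close to 3.0
```

The mechanisms are entirely different (one needs gradients, the other only needs to compare outcomes), yet both converge on essentially the same answer under the same pressure.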


The training process shares a lot of high-level properties with biological evolution.

"Minimize training loss while isolated from the environment" is not at all similar to "maximize replication of genes while physically interacting with the environment". Any human-like behavior observed from LLMs is built on such fundamentally alien foundations that it can only be unreliable mimicry.

The environment for the model is its dataset and training algorithms. It's literally a model of it, in the same sense we are models of our physical (and social) environment. Human-like behavior is of course too specific, but highest level things like staged learning (pretraining/posttraining/in-context learning) and evolutionary/algorithmic pressure are similar enough to draw certain parallels, especially when LLM's data is proxying our environment to an extent. In this sense the GP is right.

> where someone fired bullets into the coolant reservoirs and caused a several day power outage.

So you mean to say, one doesn't even need drones, a datacenter could be (temporarily) taken out with a handgun?


This is the same sinking realization people had after 9/11 when thinking about infrastructure. Just damaging one or two substations serving the downtown core of a major city could cause massive economic damage.

Yes, though with a rifle (higher stopping power than a handgun).

Large parts of our society are built on trust, and there is societal ignorance of how vulnerable our infrastructure is.

Criminals generally aren't that sophisticated or intelligent, so they aren't aware they can target these places.


Both GP's and your example in effect mean "I'm fine with other people doing this, but I don't want to have anything to do with it, or at least be able to decide case-by-case."

Which is a valid stance IMO.

In the OP, a vibecoded UI when the whole project emphasizes "I did this myself, from scratch" is a bit awkward.

Does "I did this myself" mean they read all the relevant specs and then wrote the code - or did they just write the prompts themselves?

Edit: OP already answered and confirmed that they in fact did write the code themselves.


Moon landing 1969: 4 KB RAM for the guidance computer is enough.

Moon landing 2026: Two instances of MS Outlook sort of started themselves on the guidance computer and we have no idea why.


1969: Every line of assembly code has been coded according to rigorous standards and vetted and reviewed by a panel of experts.

2026: lol we just realized there's a few million lines of extra code running but we can't figure out why


1969: Every bit of every line of assembly manually woven into core rope memory by highly skilled technicians.

2026: We filled up our 2 TB flash. How do we get another?


1969: Our toilets suck, better be meticulous and careful with waste

2026: Too much shit, we need to design new toilets


The Apollo missions used a disposable bag you taped to your butt.

"Give me a napkin quick. There's a turd floating through the air" - Tom Stafford, Apollo 10 Commander (1969) [1]

"I used to want to be the first man to Mars. This has convinced me that, if we got to go on Apollo, I ain't interested" - Ken Mattingly, Apollo 16 Pilot (1972) [2]

[1] https://www.vox.com/2015/5/26/8646675/apollo-10-turd-poop

[2] https://apollojournals.org/afj/ap16fj/24_Day9_Pt1.html#:~:te...


How come all hiking and road disposable bags don't have the tape?

Because hiking is usually done under gravity.

What could go wrong!

We went from NASA Jet Propulsion Laboratory’s Power of Ten rules to ‘have you tried restarting Microsoft Outlook?’

https://en.wikipedia.org/wiki/The_Power_of_10:_Rules_for_Dev...


Restart which one?!

Genuinely shocking that the guidance computer would be running Windows at all.

"... preparing for re-entry, adjusting azimuth, ... APPLY UPDATES AND REBOOT? APPLY UPDATES AND SHUT DOWN? QUEEN? UPDATES?"


Microsoft said something about a Copilot…

Can you compare the mission-critical code of Artemis instead of the email client?

How many KB is the flight controller?


Your parent’s comment was a joke and you’re replying as if it went over your head.

> on the guidance computer

Source for this running on the GN&C (guidance and nav) computer? Isn’t that built by the ESA?


Ah, good point. The tweet just mentioned the "Artemis computer", but according to https://www.tomshardware.com/software/microsoft-office/artem... it's a separate system and not navigation.

ESA? I wonder if the OS will have to confirm that the users are over 18.

…and we managed to do this without AI!

but... but... mandatory AI quota!

Was the OS vibe-coded?

Yep! Found the YouTube channel a bit more accessible (if you don't speak Japanese): https://m.youtube.com/@karakurist

Found the "unorthodox" uses of balance wheels interesting. Not everything that ticks is a clock, apparently.

