I doubt most sociopaths, when they’re honest, would agree they feel much guilt or remorse at all.
Whereas the people in the category I’m describing might feel those things, but prioritize those feelings far below the benefits of achieving what they set out to achieve.
I’m guessing you aren’t just asking how an LLM works, but attempting to make the point that humans are also statistical next-token predictors or something?
Humans make predictions, that doesn’t mean that’s all we do.
No, my point is that "statistical next-token predictor" is an empty phrase that doesn't really explain much. Markov chains are statistical next-token predictors as well, and yet no one would confuse a Markov chain with a conscious being (or, for that matter, consider the texts it generates useful in any way).
The question is how the prediction works in detail. Those details are still being researched - as Anthropic does here - and the research can yield unexpected results.
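To make the point concrete, here's a minimal bigram Markov chain (the corpus and names are made up for illustration). It is, by any definition, a statistical next-token predictor - yet clearly neither conscious nor useful:

```python
import random
from collections import defaultdict

# Toy corpus, invented for the example; any whitespace-tokenized text works.
corpus = "the cat sat on the mat the cat ate the rat".split()

# Bigram counts: for each token, which tokens follow it and how often.
counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    counts[prev][nxt] += 1

def predict_next(token):
    """Sample the next token in proportion to its observed frequency."""
    followers = counts[token]
    if not followers:                 # dead end: restart from a random token
        return random.choice(corpus)
    tokens, weights = zip(*followers.items())
    return random.choices(tokens, weights=weights)[0]

# "Generate" text by repeatedly predicting the next token.
out = ["the"]
for _ in range(8):
    out.append(predict_next(out[-1]))
print(" ".join(out))
```

Everything this does is fully captured by "statistical next-token prediction", and yet that label tells you nothing about what an LLM is actually computing.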
I think a counterargument would be parallel evolution: there are various examples in nature where a feature evolved independently several times, without any genetic connection - from what I understand, because the evolutionary pressures were similar.
One obvious example would be wings, where you have several different strategies - feathers, insect wings, bat-like wings, etc - that have similar functionality and employ the same physical principles, but are "implemented" vastly differently.
You have similar examples in brains: corvids, for example, are capable of various cognitive feats that would involve the neocortex in humans - only their brains don't have a neocortex. Instead they seem to use other brain regions for this, regions that have no equivalent in humans.
Nevertheless it's possible to communicate with corvids.
So this makes me wonder whether a different "implementation" necessarily means the results are incomparable.
In the interest of falsifiability, what behavior or internal structures in LLMs would be enough to be convincing that they are "real" emotions?
"Parallel" evolution is just different branches of the same evolutionary tree. The most distantly related naturally evolved lifeforms are more similar to each other than an LLM is to a human. The LLM did not evolve at all.
Evolution is how the "mechanism" came to be, and that is indeed very different. But the mechanisms themselves - spiking neurons and neurotransmitters on one hand vs. matrix multiplications and nonlinear functions (both "inspired" by our understanding of neurons) on the other - don't seem so different, at least not on a fundamental level.
What is different for sure is the time dimension: Biological brains are continuous and persistent, while LLMs only "think" in the space between two tokens, and the entire state that is persisted is the context window.
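To sketch what I mean (all sizes and weights here are invented toys, not a real architecture): generation is a pure function of the context window - nothing but the tokens themselves survives from one step to the next.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM = 50, 16  # toy sizes, invented for the example

# Frozen weights: matrix multiplications plus a nonlinearity, nothing else.
W_embed = rng.normal(size=(VOCAB, DIM))
W_hidden = rng.normal(size=(DIM, DIM))
W_out = rng.normal(size=(DIM, VOCAB))

def next_token(context):
    """Pure function of the context: same tokens in, same token out.
    No hidden state persists between two calls."""
    h = np.tanh(W_embed[context].mean(axis=0) @ W_hidden)  # mix the whole context
    logits = h @ W_out
    return int(np.argmax(logits))  # greedy decode, for determinism

# "Thinking" happens only between two tokens: each step re-reads
# the entire context window from scratch.
context = [3, 14, 7]
for _ in range(5):
    context.append(next_token(context))
print(context)
```

A real Transformer mixes the context far more cleverly, but the shape of the claim is the same: delete the context window and nothing of the "thought" remains.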
Evolution and Transformer training are 'just' different optimization algorithms. Different optimizers can obviously produce very comparable results given comparable constraints.
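As a toy illustration of that claim (the objective and numbers are made up): plain gradient descent and a crude (1+1) evolution strategy, pointed at the same objective, end up in essentially the same place.

```python
import numpy as np

rng = np.random.default_rng(1)
target = np.array([2.0, -1.0, 0.5])  # an arbitrary toy "constraint"

def loss(x):
    return float(np.sum((x - target) ** 2))

# Optimizer 1: gradient descent (standing in for Transformer training).
x = np.zeros(3)
for _ in range(200):
    x -= 0.1 * 2 * (x - target)  # analytic gradient of the loss

# Optimizer 2: (1+1) evolution strategy (standing in for evolution).
y = np.zeros(3)
for _ in range(2000):
    child = y + rng.normal(scale=0.1, size=3)  # mutate
    if loss(child) < loss(y):                  # select
        y = child

print(x.round(3), y.round(3))  # both land near the same optimum
```

Wildly different search procedures, comparable results - which is the whole point of the analogy.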
"Minimize training loss while isolated from the environment" is not at all similar to "maximize replication of genes while physically interacting with the environment". Any human-like behavior observed from LLMs is built on such fundamentally alien foundations that it can only be unreliable mimicry.
The environment for the model is its dataset and training algorithms. The model is literally a model of that environment, in the same sense that we are models of our physical (and social) environment. Human-like behavior is of course too specific a claim, but the highest-level features - staged learning (pretraining / post-training / in-context learning) and evolutionary/algorithmic pressure - are similar enough to draw certain parallels, especially since the LLM's data proxies our environment to an extent. In this sense the GP is right.
This is the same sinking realization people had after 9/11 when thinking about infrastructure. Just damaging one or two substations serving the downtown core of a major city could cause massive economic damage.
Both GP's and your example in effect mean "I'm fine with other people doing this, but I don't want to have anything to do with it, or at least be able to decide case-by-case."
Which is a valid stance IMO.
In the OP's case, a vibecoded UI is a bit awkward when the whole project emphasizes "I did this myself, from scratch".
Does "I did this myself" mean they read all the relevant specs and then wrote the code - or did they just write the prompts themselves?
Edit: OP already answered and confirmed that they in fact did write the code themselves.
"Give me a napkin quick. There's a turd floating through the air" - Tom Stafford, Apollo 10 Commander (1969) [1]
"I used to want to be the first man to Mars. This has convinced me that, if we got to go on Apollo, I ain't interested" - Ken Mattingly, Apollo 16 Pilot (1972) [2]