
> A bit like how using "Bruxelles" in a comment about the EU is a giveaway that you're British and/or a (former?) Brexiteer.

"Bruxelles" is the official French spelling, and French is the city's most spoken language, so maybe they just, you know, live there.


And my take! A fork of fish where any command that starts with > or a capital letter is fed to $fish_llm_command: https://github.com/breuleux/fish-shell. With Claude's help, that took all of 30 minutes to make.


I don’t tidy up very often, but when I do, it doesn’t take much time or energy. I just dump everything that isn’t version controlled into a junk folder, and it feels great.


I keep inbox zero, mostly, using this system. If I haven't read it, how important could it have been? Ctrl+A, Del gets you to zero.


Instructions unclear: I purchased multiple bins, labeled them V1, V2, V3, and have dumped most of my pens, pencils and notebooks into them. What now?

Lol


> It's easier for a small number of people to coordinate, than a large number.

That's basically my main argument for replacing election-based democracy with lottery-based democracy. Electing the right representatives is a coordination problem in and of itself, a process which the wealthy are already quite adept at manipulating, so we might as well cut out the middleman and pick a random representative sample of the population instead, who can then coordinate properly.


Whoever controls the process that decides what a representative sample is and selects candidates is now the middleman.


It's generally easier to make such a process tamper-proof than an election. You can pick a cryptographically secure open source PRNG and determine the seed in a decentralized way by allowing anyone to contribute a salt into a list which is made public at the deciding moment. Then anyone can verify the integrity of the process by verifying the seed includes their contribution, and computing the candidates themselves.
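Here's a minimal sketch of the verification side (the salts, IDs and roll format are made up for illustration, and a real system would need a trusted public registry):

    import hashlib

    def derive_seed(public_salts):
        # Hash every published salt together; anyone can recompute this.
        h = hashlib.sha256()
        for salt in sorted(public_salts):   # sorting removes ordering disputes
            h.update(salt.encode("utf-8"))
        return h.hexdigest()

    def draw_assembly(voter_roll, public_salts, n_seats):
        # Rank everyone on the public roll by a hash of (seed, id) and take
        # the lowest n. Anyone with the salts and the roll gets the same answer.
        seed = derive_seed(public_salts)
        def score(person_id):
            return hashlib.sha256((seed + person_id).encode("utf-8")).hexdigest()
        return sorted(voter_roll, key=score)[:n_seats]

    salts = ["alice:7f3a", "bob:90cc", "carol:11be"]               # publicly posted
    roll = ["id-0001", "id-0002", "id-0003", "id-0004", "id-0005"]
    print(draw_assembly(roll, salts, 2))

Of course, the math is the easy part; keeping the roll itself honest is where the real work is.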


> You can pick a cryptographically secure open source PRNG and determine the seed in a decentralized way by allowing anyone to contribute a salt into a list which is made public at the deciding moment.

If that were a viable model for the real world, we could make existing elections just as tamper-proof.


If the government doesn't have enough power, the wealthy won't need to bribe politicians to do their bidding. They will do their own bidding directly, and there will be nobody to stop them.

It's like, if you want to sell your cyanide penis pills under big government, you need to bribe someone. If you want to sell them under small government, you just... you just sell them, that's what.

There may be ways to design a government where power is better distributed, e.g. using sortition, but ultimately it needs to be richer and more powerful than its wealthiest citizens, otherwise these wealthy citizens will assess, correctly, that when push comes to shove, the laws won't apply to them, and they do not need the government's permission to do what they want.


Even a small government still has courts, in fact they would be a far more sizeable fraction of the government and thus a lot more effective. So if people like Epstein engage in criminal behavior, or even just unlawful behavior that they would be liable for, they can definitely be held accountable.


Courts are only a remedy if you're breathing. If the cyanide penis pills kill you and your family then who is left to file suit?


What stops me, a multibillionaire, from hiring someone to shoot the small government judge in the head?


But suppose you have egalitarian nation N -- what stops the billionaire from non-egalitarian nation B from influencing your politicians? Especially if nation N is small and nation B is large.

Moreover -- why would low-level elites (think: entrepreneurs, small business owners, etc.) stay in nation N if it were more profitable to do business in nation B? Recall that this is precisely the type of person that is often most mobile and internationalized.


> These feel like they involve something beyond "predict the next token really well, with a reasoning trace."

I don't think there's anything you can't do by "predicting the next token really well". It's an extremely powerful and extremely general mechanism. Saying there must be "something beyond that" is a bit like saying physical atoms can't be enough to implement thought and there must be something beyond the physical. It underestimates the nearly unlimited power of the paradigm.

Besides, what is the human brain if not a machine that generates "tokens" that the body propagates through nerves to produce physical actions? What else but a sequence of these tokens would a machine have to produce in response to its environment and memory?


> Besides, what is the human brain if not a machine that generates "tokens" that the body propagates through nerves to produce physical actions?

Ah yes, the brain is as simple as predicting the next token, you just cracked what neuroscientists couldn't for years.


The point is that "predicting the next token" is such a general mechanism as to be meaningless. We say that LLMs are "just" predicting the next token, as if this somehow explained all there was to them. It doesn't, not any more than "the brain is made out of atoms" explains the brain, or "it's a list of lists" explains a Lisp program. It's a platitude.


It's not meaningless, it's a prediction task, and prediction is commonly held to be closely related if not synonymous with intelligence.


In the case of LLMs, "prediction" is overselling it somewhat. They are token sequence generators. Calling these sequences "predictions" vaguely corresponds to our own intent in training these machines, because during training we use the actual next token as a signal to either reinforce or move away from the current behavior. But there's nothing intrinsic in the inference math that says they are predictors, and we typically run inference with a high enough temperature that we don't actually generate the maximum-likelihood tokens anyway.

The whole terminology around these things is hopelessly confused.
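To make the temperature point concrete, here's a toy sketch of sampling (the logits are made-up numbers, not from any real model):

    import math
    import random

    def sample_next_token(logits, temperature=0.8):
        # Scale logits by 1/temperature, softmax, then sample.
        # As temperature -> 0 this approaches argmax (the max-likelihood token);
        # at typical settings we routinely emit tokens that aren't the most likely one.
        scaled = {tok: v / temperature for tok, v in logits.items()}
        m = max(scaled.values())
        weights = {tok: math.exp(v - m) for tok, v in scaled.items()}
        r = random.uniform(0, sum(weights.values()))
        for tok, w in weights.items():
            r -= w
            if r <= 0:
                return tok
        return tok

    logits = {"cat": 2.0, "dog": 1.5, "pizza": 0.2}   # made-up values
    print([sample_next_token(logits) for _ in range(5)])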


Well, it's the prediction part that is complicated. How that works is a mystery. But even our LLMs are, to a certain extent, a mystery.


I mean... I don't think that statement is far off. Much of what we do is about predicting the world around us, no? From physics (where the ball will land) to the emotional states of others in response to our actions (theory of mind), we operate very heavily on a predictive model of the world around us.

Couple that with all the automatic processes in our minds (blanks filled in for things we didn't observe, yet which we'll be convinced we did observe), hormone states that drastically affect our thoughts and actions...

And the result? I'm not a big believer that we have the uniqueness or level of autonomy so many think we do.

With that said, I am in no way saying LLMs are even close to us, or even remotely close to the right implementation to get close to us. The level of complexity in our "stack" alone dwarfs LLMs. I'm not even sure LLMs are up to a worm's brain yet.


> Simplicity comes from strong definitions

Sure, you can put it this way, with the caveat that reality at large isn't strongly definable.

You can sort of see this with good engineering: half of it is strongly defining a system simple enough to be reasoned about and built up, the other half is making damn sure that the rest of reality can't intrude, violate your assumptions and ruin it all.


It is also a courtesy that free countries respect US copyright. I wouldn't be surprised if EU countries have already started ramping up corporate espionage and are making contingency plans to seize all data and assets on their territory. If they manage to get ahold of source code and data, they may be able to keep some services running without US involvement.

Netflix is a good example: the functionality isn't difficult to reproduce, and the only thing that restricts its library is copyright, which the EU could just stop enforcing for American companies.


> It is also a courtesy that free countries respect US copyright

Which, itself, is downstream of the US signing onto the Berne convention. American copyright actually used to be reasonable and (western) Europe was the insane one with life terms. All that is ugly about the US was buried so deeply in Europe that it is outside, here, with us.

Then America had the extremely short-sighted idea to extend copyright to software, and then use software to enforce copyright, and then make it independently illegal to tell anyone how to bypass that enforcement software. This was all then foisted back onto Europe, whose creative industries begged them for it, not knowing that it basically meant surrendering to the US before the war had even started.

Seizing American copyright would be a good start, but what you really want is to drop anti-circumvention law. Because that's the first domino[0] in the chain. Europe is chock full of businesses that would absolutely fall in line around a tyrant king just like American businesses have, and laws like that enable such businesses to exist.

[0] https://pooper.fantranslation.org/@kmeisthax/110771126221131...


What we observe is also consistent with the idea that when humans have no idea what they're talking about, it's usually more obvious than when LLMs have no idea what they're talking about. In which case the author is lulling themselves into a false sense of confidence chatting with AI instead of humans, merely trading one form of incompetence for another.


> when humans have no idea what they're talking about, it's usually more obvious

Is it?

That's not my experience.


I think so, yes. We rely a lot on eloquence and general knowledge as signals of competence, and LLMs beat most people at these. That's the "usually" -- I don't think good human bullshitters are more obvious than LLMs.

This may not apply to you if you regard LLMs, including their established rhetorical patterns, with greater suspicion or scrutiny (and you should!). It also does not apply when talking about subjects in which you are knowledgeable. But if you're chatting about things you are not knowledgeable about, and you treat the LLM just like any human, I think it applies. There's a reason LLM psychosis is a thing: rhetorically, these things can simulate the ability of a cult leader.


I think I'm going to have to disagree. When people tell you something incorrect, they usually believe it's correct and that they're trying to help. So it comes across with full confidence, helpfulness, and a trustworthy attitude. Plus people often come with credentials -- PhDs, medical degrees, etc. -- so we're even more caught off-guard when they turn out to be totally and completely wrong about something.

On the other hand, LLMs are just text on a screen. There are zero of the human signals that tell us someone is confident or trustworthy or being helpful. It "feels" like any random blog post from someone I don't know. So it makes you want to verify it.


There is a relatively hard upper bound on streaming video, though. It can't grow past everyone watching video 24/7. Use of genAI doesn't have a clear upper bound and could increase the environmental impact of anything it is used for (which, eventually, may be basically everything). So it could easily grow to orders of magnitude more than streaming, especially if it eventually starts being used to generate movies or shows on demand (and god knows what else).
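Back-of-envelope, with deliberately rough figures (both numbers below are assumptions, not measurements):

    # Rough ceiling on streaming: every person on Earth watching 24/7.
    population = 8e9               # assumption: ~8 billion people
    kwh_per_stream_hour = 0.08     # assumed end-to-end figure; estimates vary a lot
    hours_per_year = 24 * 365

    ceiling_twh = population * hours_per_year * kwh_per_stream_hour / 1e9
    print(f"streaming ceiling ~ {ceiling_twh:,.0f} TWh/year")  # a few thousand TWh

    # There's no equivalent line to draw for generative AI: per-person usage
    # can keep growing as long as someone finds it worth the compute.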


This argument could be made for almost any technology.


Well, yeah, sort of. Why do you think the environmental situation is so dire? It's not exactly the first time we've made this mistake.


Perhaps you are right in principle, but I think advocating for degrowth is entirely hopeless. 99% of people will simply not choose to decrease their energy usage if it lowers their quality of life even a bit (including things you might consider luxuries, not necessities). We also tend to have wars, and any idea of degrowth goes out of the window the moment there is a foreign military threat with an ideology that is not limited by such ways of thinking.

The only realistic way forward is trying to make energy generation greener (renewables, nuclear, better efficiency), not fighting to decrease human consumption.


I agree that people won't accept degrowth.

This being said, I think that the alternatives are wishful thinking. Better efficiency is often counterproductive, as reducing the energy cost of something by, say, half, can lead to its use being more than doubled. It only helps to increase the efficiency of things for which there is no latent demand, basically.
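In numbers, the rebound argument is just this (illustrative figures only):

    def net_energy_change(efficiency_gain, demand_growth):
        # efficiency_gain: energy per use falls by this factor (2 = "half the cost")
        # demand_growth:   how much total usage multiplies in response
        return demand_growth / efficiency_gain

    print(net_energy_change(2, 1.5))  # 0.75 -> genuine saving, total use falls 25%
    print(net_energy_change(2, 2.5))  # 1.25 -> rebound, total use rises 25%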

And renewables and nuclear are certainly nicer than coal, but every energy source can lead to massive problems if it is overexploited. For instance, unfettered production of fusion energy would eventually create enough waste heat to cause climate change directly. Overexploitation of renewables such as solar would also cause climate change by redirecting the energy that heats the planet. These may seem like ridiculous concerns, but you have to look at the pattern here. There is no upper bound whatsoever to the energy we would consume if it was free. If energy is cheap enough, we will overexploit, and ludicrous things will happen as a result.
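For a sense of scale, a crude back-of-envelope (every figure here is a rough, order-of-magnitude assumption):

    import math

    current_use_tw = 20         # world primary energy consumption, roughly 20 TW
    earth_area_m2 = 5.1e14
    forcing_w_m2 = 2.5          # rough present-day greenhouse forcing
    forcing_tw = forcing_w_m2 * earth_area_m2 / 1e12    # ~1,300 TW-equivalent

    growth = 0.02               # assume energy use keeps growing ~2% per year
    years = math.log(forcing_tw / current_use_tw) / math.log(1 + growth)
    print(f"waste heat rivals today's greenhouse forcing in ~{years:.0f} years")

That lands in the low hundreds of years at steady growth, so "eventually" isn't as far away as it sounds.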

Again, I actually agree with you that advocating for degrowth is hopeless. But I don't think alternative ways forward such as what you propose will actually work.


If humanity's energy consumption is so high that there is an actual threat of causing climate change purely with waste heat, I think our technological development would be so advanced that we will be essentially immortal post-humans and most of the solar system will be colonized. By that time any climate change on Earth would no longer be a threat to humanity, simply because we will not have all our eggs in one basket.


But why do you think that? Energy use is a matter of availability, not purely of technological advancement. For sure, technological advancement can unlock better ways to produce it, but if people in the 50s somehow had an infinite source of free energy at their disposal, we would have boiled off the oceans before we got the Internet.

So the question is, at which point would the aggregate production of enough energy to cause climate change through waste heat be economically feasible? I see no reason to think this would come after becoming "immortal post-humans." The current climate change crisis is just one example of a scale-induced threat that is happening prior to post-humanity. What makes it so special or unique? I suspect there's many others down the line, it's just very difficult to understand the ramifications of scaling technology before they unfold.

And that's the crux of the issue, isn't it? It's extremely difficult to predict what will happen once you deploy a technology at scale. There are countless examples of unintended consequences. If we keep going forward at maximal speed every time we make something new, we'll keep running headfirst into these unintended consequences. That's basically a gambling addiction. Mostly it's going to be fine, but...

