Hacker News | t8sr's comments

I read the tweet twice and I don’t see any mention of free speech. What he’s describing, when you look past the rhetoric, sounds ridiculous: a single medium-sized country is demanding the power to institute global blocks of content on the internet? If that’s an accurate description, that’s deeply concerning for the long-term viability of the internet.


> And in this case @ElonMusk is right: #FreeSpeech is critical and under attack from an out-of-touch cabal of very disturbed European policy makers.


[flagged]


Which other speech?


He banned any mentions of Bluesky and Substack. When Paul Graham alluded to making an account and viewing his thoughts there, he was also banned.


Words used by the trans community


Read the tweet a 3rd time. Free Speech is mentioned in Paragraph 4 when he's thanking Vance and Musk. It's highlighted in Blue. It's a Hashtag.


seems perfectly reasonable for a country of any size to exercise this sort of power within their own borders

the US constitution doesn't apply worldwide

if Petulant Prince doesn't like it: he can leave


Emphasis on global blocks. Meaning everywhere in the world.


He is mentioning Vance and Musk as beacons of democracy and free speech.


Did you and I read different tweets?

"While there are things I would handle differently than the current U.S. administration" and "in this case @ElonMusk is right" are not how you talk about beacons.


[flagged]


Yes, the Administration that is famously so pro free speech that it intimidates and prosecutes senators when they make a video saying "PSA: You can refuse illegal orders"


Well, what is it now?

UK, Italy, Europe, European Union?

Seems hard for many to differentiate.


Please read the entire tweet. Free speech is mentioned at character number 1779.


I cannot believe this is the first time that Cloudflare has been confronted by a local government asking it to perform "global" filtering of content. It is clear to anyone who has worked with bureaucrats that their "global" means "within our jurisdiction". It is extremely weird that he feels emboldened to publicly lash out like this and pull in people who are extremely unpopular in Europe.


You keep saying this, but 'global' has never meant 'in my jurisdiction' in any conversation or document I've ever read. What additional information can you provide that confirms your interpretation is correct?


Something about the way the author expresses himself (big words, “I am so smart”, flowery filler) makes me unsurprised he finds it hard to have satisfying conversations with people. If he talked to me like this IRL I wouldn’t be trying to have a deep conversation either, I’d just be looking for the exit.

Lacking a theory of mind for other people is not a sign of superiority.


You are being too generous by saying that there are big words in the text. I find it blunt and uncouth. Actually, that's the problem that I see in the text: an attitude of pessimism and a lack of self-reflection. An LLM would certainly give me something more interesting to read!


Jumping from "the author uses language I dislike" straight to "also, he has no theory of mind" is a bit of a leap. Like world record winning long jump kinda stuff.

Also, what big words? 'Proliferation'? 'Incoherent'? The whole article is written at a high school reading level. There's some embedded clauses in longer sentences, but we're not exactly slogging our way through Proust, here.


Sadly that’s a personality trait that’s far too common in the field, and it can get pretty annoying.


I've worked with 5 different SCMs and I'm convinced that the reason why git repos often have such poor commit messages is because of the git commit style guide. So much of it only makes sense once you realize it's been optimized for like 5 of its original users reading it on 72 character terminal screens.

Asking people to fit a meaningful description of the change into 50 characters is silly, and it's IMO the reason why so many of them just write "fix bug" and call it a day.

Someone else has posted the Google guide for CL (change list) messages, but let me boost the signal: https://google.github.io/eng-practices/review/developer/cl-d...

This is, I believe, still the best guide out there. When I'm coaching juniors, I recommend this guide over the opinionated and outdated git "best practices", and I think the results are much better.


That Google guide also says that the first line of the commit message should be a "short summary of what is being done". Is your complaint that 50/72 characters is too short? How long can it be before it no longer meets the Google criteria, in your opinion?


Yeah, I specifically think 50 is too short. I am a big fan of brevity, but I think the sweet spot is somewhere around 100. Consider that all of these messages exceed the 50-character limit:

  [startup] Don't drop uid until all controller fds are open
  [bpf] Fix the exec exchange hitting verifier limit on Fedora
  [controller] Optimize partial policy updates with delta
They're as short (IMO) as can be without omitting useful information, but git says they're all illegal by some margin.

I agree on the value of concise writing and dislike word salads, but if you're a junior engineer, then I have maybe 1 hour with you per week and I probably shouldn't spend that time being your English teacher.


I know I've seen some people recommend 50 (my guess is that's to make some room for metadata from commands like `git log --oneline` in an 80-column terminal?) but I've personally always capped all lines at 72, including the first. 100 seems like a reasonable limit too, but I probably wouldn't go above that.

> git says they're all illegal by some margin

Git will accept almost anything as a commit message. Is there a specific style guide you're referring to?

---

EDIT: Huh, I guess git does have an official recommendation. I'd never noticed this text in `git help commit`:

> Though not required, it's a good idea to begin the commit message with a single short (less than 50 character) line summarizing the change, followed by a blank line and then a more thorough description.

I wouldn't feel bad about not following this advice. Even the author doesn't seem dogmatic about it.


> if you're a junior engineer, then I have maybe 1 hour with you per week and I probably shouldn't spend that time being your English teacher

It's not important enough to make a big deal out of, but if I see it over and over from the same person I might mention it during code review ("your commit messages are a bit wordy; for example instead of '…', consider '…'") or write down as part of a style guide (e.g. if we're tagging commits like in your examples, wherever we wrote down the rules for that). It's the same for other unimportant things that it's nice to agree on (e.g. capitalization of initialisms in code—is it `.toJson()` or `.toJSON()`?).


Short is a relative term, and in this sense is relative to the long form explanation of the code.

If a commit is sufficiently complex the long form could be 600 characters and the short form 200.


I may be misunderstanding: are you saying a commit message with a 200-character-long first line could be an example of a "good" message to you/Googlers? To me that seems like something that could almost certainly be summarized further, regardless of how complex the changeset is (if not, it's a sign the commit should be broken up into multiple simpler commits).

Can you give me an example of a commit where the "short, focused summary" can only be usefully-expressed in 200+ characters?

Notably, all of the "good" examples in https://google.github.io/eng-practices/review/developer/cl-d... have first lines under 72 characters.


A good dev isn't necessarily a good writer. Summarising complexity concisely is difficult.

Restrictions lead devs to write useless messages like 'fixed a bug' rather than messages that are slightly verbose but actually useful.

Most messages won't be 200 chars. But I'd rather have 200 chars of useful text than 72 chars of useless text.

The real world is full of average devs who are average writers under time pressure to deliver code. Expecting them to deliver above-average commit messages like Google's examples is a pipe dream.


> A good dev isnt necessarily a good writer.

I think they should be (or at least "decent").

Should this reasoning be applied to other skills involved in software engineering? If someone never writes tests, or over-architects everything, or has terrible people skills, or never updates documentation, or doesn't bring attention to blockers, or constantly forgets to update the issue tracker, or doesn't follow the local code conventions, or works on random tasks and ignores high-priority ones, etc etc etc the solution isn't usually "don't ask them to do a thing they're bad at", it's "help them get better".

The important question is whether "skimmable commit messages" is a thing you care about enough to advocate for and teach people about. Maybe you don't, and that's fine.

> But I'd rather have 200 chars of useful text than 72 chars of useless text.

I completely agree with this. I just don't think those are the only two options.

> Expecting them to deliver above-average commit messages like Google's examples is a pipe dream.

This thread started with praise for that Google guide and I assumed you were of similar opinion given your reply. That's why I kept referring to it.


Two things:

1. I basically agree with everything you say, I only think the limit should be ~50-100% higher. Not 10x, but 1.5-2x.

2. For some reason, concise writing is the hardest thing to teach and demand consistently. I think part of it is that people try to hide incompetence behind a word salad, but also I think non-native speakers use more words than they need. It gets to a point where you either have to become everyone's English teacher or accept some amount of word salad.


No, I think a reasonable limit is around 100 with the mode being around 80. 50 is not a reasonable limit for a commit message in English.


I think most projects limit commit message titles to 68-72 characters, not 50, which is indeed too little to say much of anything. I don't find it hard to write 70-character summaries, and as a maintainer, I find it very useful to have those available for skimming.

When I do maintainer work that involves skimming a list of dozens of commits for information, it's very helpful to have concise commit titles.


The Google guide assumes a version control system other than git. Consider this:

  Using tags

  Tags are manually entered labels that can be used to categorize CLs. These may be supported by tools or just used by team convention.
A "tag" means something entirely different to git.


This is excellent, thank you! Don't hate me but I'm going to change my Claude Code /commit command by pointing it at this guide.


It surprises me how many people working as software engineers absolutely despise programming. Consider that if you must “grind” through leetcode questions, maybe you will be unhappy in the job itself. There are other roles, technical roles even, that don’t require algorithms. Why not pursue those?


Abstraction for its own sake, especially with js frameworks, doesn't make anything more readable or maintainable. React apps are some of the most spaghetti style software I've ever seen, and it takes like 10 steps to find the code actually implementing business logic.


Some of that is the coding standards rather than the framework. I think Dan Abramov did a bang-up job on React, but his naming conventions and file structure are deranged.

Unfortunately there isn't any one preferred alternative convention. But if you ignore his and roll your own it will almost certainly be better. Not great for reading other people's code but you can make your own files pretty clear.


What "naming conventions and file structures" are you referring to? I don't think Dan ever really popularized anything like that for _React_.

If you're thinking of _Redux_, are you referring to the early conventions of "folder-by-type" file structures? ie `actions/todos.js`, `reducers/todos.js`, `constants/todos.js`? If so, there's perfectly understandable reasons why we ended up there:

- as programmers we try to "keep code of different kinds in different files", so you'd separate action creator definitions from reducer logic

- but we want to have consistency and avoid accidental typos, especially in untyped plain JS, so you'd extract the string constants like `const ADD_TODO = "ADD_TODO"` into their own file for reuse in both places

To be clear that was never a requirement for using Redux, although the docs did show that pattern. We eventually concluded that the "folder-by-feature" approach was better:

- https://redux.js.org/style-guide/#structure-files-as-feature...

and in fact the original "Redux Ducks" approach for single-file logic was created by the community just a couple months after Redux was created:

- https://github.com/erikras/ducks-modular-redux

which is what we later turned into "Redux slices", a single file with a `createSlice` call that has your reducer logic and generates the action creators for you:

- https://redux.js.org/tutorials/essentials/part-2-app-structu...


Do they do this notably worse than say a Spring boot API or a Vue frontend? I don't think this is a React thing. Those spaghetti projects would be so with or without React.


The problem with React apologetics is that you need to only take a cursory look at literally every production app written in React to see it's terrible and must be abandoned in the long-term.

To see how fast a properly engineered app can be if it avoids using shitty js frameworks just look at fastmail. The comparison with gmail is almost comical: every UI element responds immediately, where gmail renders at 5 fps.


> [most used web framework, powering innumerable successful businesses]

> [literally unusable]

> [one of the most successful web apps]

> [look at how bad it is]

Your standards might be uncalibrated with reality

I use gmail every day and it's fine, apart from when they push AI features I don't want, but I can't blame that on the framework


Well yeah, most software is bad. In fact it's so bad that's its almost unbelievable.

We're all used to it and that's fine. But it's still bad. We're still wasting, like, 10,000x more resources than we should to do basic things, and stuff still only works, like, 50% of the time.


GMail is becoming the Lotus Notes of the 21st century. It uses half a gigabyte of RAM for every tab. God forbid you need to handle several accounts, e.g., for monitoring DMARC reports across domains.

And IT IS SLOW, despite your experience, which is highly dependent on how much hardware you can throw at it.


> [most used web framework, powering innumerable successful businesses]

> [literally unusable]

It's gotten a lot of critique over the complexity it has over the years, the same way how Next.js also has. I've also seen a frickload of render loops and in some cases think Vue just does hooks better (Composition API) and also state management better (Pinia, closer to MobX than Redux), meanwhile their SFC compiler doesn't seem to support TypeScript types properly so if you try to do extends and need to create wrapper components around non-trivial libraries (e.g. PrimeVue) then you're in for a bunch of pain.

I don't think any mainstream options are literally unusable, but they all kinda suck in subtly different ways. Then again, so did jQuery for anything non-trivial. And also most back end options also kind of suck, just in different ways (e.g. Spring Boot version upgrades across major versions and how verbose the configuration is, the performance of Python and the dependency management at least before uv), same could be said for DBs (PostgreSQL is pretty decent, MariaDB/MySQL has its hard edges) and pretty much everything else.

Doesn't mean that you can't critique what's bad in hopes of things maybe improving a bit (that Spring Boot config is still better than Spring XML config). GMail is mostly okay as is, then again the standards for GUI software are so low they're on the floor - also extends to Electron apps.


> I use gmail every day and it's fine

The past couple of weeks I've been having loading times up to 1 minute to open gmail.

No idea what they are up to. Loading google workshop or something like that, takes eons.


My friend, it renders at 15 fps on a literal supercomputer. It takes 30 seconds to load. The time between clicking a button and something happening is measured in seconds. It may be successful, but it is not good.

The problem is that you’ve (and we all have) learned to accept absolute garbage. It’s clearly possible to do better, because smaller companies have managed to build well functioning software that exceeds the performance of Google’s slop by a factor of 50.

I’m not saying RETVRN to plain JS, but clearly the horrid performance of modern web apps has /something/ to do with the 2 frameworks they’re all built on.


> Takes 30 seconds to load.

Tried a cleared cache load, open and usable in 3 seconds, loading my work inbox which is fairly busy and not clean.

I'm not sure what FPS has to do with this? Have you some sort of fancy windows 11 animations extension installed that star wipes from inbox to email view and it's stuttering??

I click and email it shows instantly, the only thing close to "low FPS" is it loads in some styles for a calendar notification and there's a minor layout shift on the email.

What / how are you using it that you apparently get such piss poor performance?


Our experiences of gmail are very different.


> clearly the horrid performance of modern web apps has /something/ to do with the 2 frameworks they’re all built on.

Nonsense. Apps from all frameworks and none show the same performance issues, and you can find exceptionally snappy examples from almost all frameworks too. Modern webapps are slow because the business incentives are to make them slow, the technology choices are incidental.


Enshittification. I've been using Gmail for decades and it was significantly faster and more responsive in the past. It still works fine tbh, but it did work better. Whether or not something is successful has little to do with its quality or performance these days.

There was also a time where once a website or application loaded, scrolling never lagged. Now when something scrolls smoothly it's unusual, and I appreciate it. Discord has done a really good job improving their laggy scroll, but it's still unbelievably laggy for literal text and images, and they use animation tricks to cover up some of the lag.


When did gmail migrate to react?


Definitionally, yes. It’s inert but lenses light around it.

The paper is more about the technical achievement of detecting it, IIUC. It’s not the first dark matter inference we’ve had, and doesn’t really tell us anything new about the stuff.


It challenges warm dark matter and ultralight dark matter theories because they'd be less likely to clump into something so small. Similarly MOND would have trouble explaining a completely isolated chunk of it at this size (any baryonic matter trapped in a region this small would almost certainly emit enough light to detect).


I’m admittedly a few years out of date in this, but weren’t those already kinda ruled out? I’ve never met anyone who took MOND seriously - it looks like it’s a pet project of a small number of people who cite each other, and people in different subfields have always been saying it doesn’t work for them (diffuse galaxies, etc.).

I know the current models favor cold DM, I thought the hot DM model was abandoned already when it became clear that clusters of any size exist?


Your assessment is spot on, but for whatever reason it's extremely popular with amateur physicists. The HN crowd likes it a lot, too. Almost every thread about dark matter has at least one comment that goes like "I'm not a physicist but dark matter always seemed like a cop out to me and will go the way of the luminiferous aether". I'm surprised that we're not seeing them here, perhaps because MOND can't explain this.


And yes, hot dark matter has largely been ruled out, but there are still extensions into warm dark matter and ultralight dark matter that seem more manufactured but are still plausible. The observations in this paper create some additional challenges for those theories.

But yes, CDM is what most researchers expect, by a large margin.


(I’m an astrophysics undergrad.) Black holes aren’t composed of anything, they’re just defined by their charge, spin and mass equivalent.

Dust clouds have those mass ranges. It’s not a galaxy-scale mass by any measure.

This thread has a lot of CS people being confident about physics.


I was always surprised that when we talk about BHs mass, charge, and spin that we really mean U(1) (electromagnetic) gauge charge and not charges from global symmetries. (If BHs had global charge, you could at least say that this or that black hole was made out of N baryons, or whatever.)

But it's really so---according to GR, black holes don't have global charges. So even if you see a star made out of baryons collapse into a black hole, once the BH settles down into a steady state you can't say it's "really" got baryons inside: the baryon number gets destroyed.

(Of course, a different model of gravity that preserves unitarity might upset this understanding.)


And that a BH made from matter and one made from antimatter are mathematically identical, and merging them would not cause any explosion.


Thanks! That made me (superficially of course) understand it. Super weird stuff.


Basically, the event horizon is the event horizon is the event horizon. If two non-steady-state matter/antimatter black holes merge (i.e. before everything has hit the singularities), it will cause explosions inside the BH, but energy is mass is energy is mass, so for an external observer it's indiscernible. It will look no different from two all-matter BHs merging.


I mean, I included a disclaimer... But regardless, you appear to be wrong on both counts (or at least contradicting Wikipedia):

1. "The presence of a black hole can be inferred through its interaction with OTHER MATTER and with electromagnetic radiation such as visible light." https://en.wikipedia.org/wiki/Black_hole

2. "A dwarf galaxy is a small galaxy composed of ABOUT 1000 up to several billion stars" https://en.wikipedia.org/wiki/Dwarf_galaxy

Darn astrophysics majors being confident about astronomy! ;)


1. Your argument is about the grammar of a sentence about black holes on Wikipedia? This isn’t some kind of gotcha.

2. I missed the dwarf part, but think about what you’re arguing: the mass range of a loosely defined category (the lower bound of a few thousands is not one I’ve ever heard, btw) that has nothing to do with the paper in question. Collections of stars of any kind produce light. This doesn’t. What are you saying?

What do you think physicists do all day?


Welcome to Hacker News.


Zurich and its environs are basically the Bay Area of Europe. Probably explains the huge concentration of HN users.


Also, it's currently the middle of the night in the USA, so current users will skew towards those outside the Americas.


Well, there are the Zooglers and ETH ;)


Zurich is definitely an important IT hub, but "Bay Area of Europe" stretches things a lot.


Name one other place on the continent that has:

1) At least token engineering presence by every major tech company

2) Tech-savvy VC, legal, audit and tax services you can get on a short notice

3) A pool of talent to fill any engineering position

4) A funnel from a big engineering university to the industry that generates startups

5) Tax authorities willing to work through complicated situations like acquihires, IP riders in contracts for a consideration in the form of stock, etc.

It’s much smaller than the Bay Area, of course, but it’s the only place in Europe that has everything you need in one spot. (Except maybe London, but that’s more like the New York of Europe, minus the high salaries.)

Also, “IT hub” is a place where salaries are low and you plop down a call center. IT are the support roles that install antivirus, not a profit center. There’s a huge difference between that and a “tech industry.”


Berlin?

>>A pool of talent to fill any engineering position

Is it true for Zurich? Due to the extremely high cost of living, I'd guess there's only a small number of unemployed IT people there.


Berlin’s tech industry is to Zurich and London as Berlin’s art scene is to New York or Paris.

And yes, Switzerland, but especially Zurich, is on another level compared to the rest of Europe. (Except maybe London.) I’ve been a hiring manager at multiple large tech companies: Europe in general has less tech talent than the US, but in London and Zurich you can fill any role, from kernel, through ML, computer vision, hardware, manufacturing, robots, quantum computing, etc.


You said it yourself. I like Zurich, but I'm not sure just a list of checkmarks makes it comparable to the Bay Area, or even brings it to the top in Europe. Quantitative metrics are important, and I think Zürich is a bit too small, and not quite fast-moving enough, for the grand title.


Dublin competes on almost all these bullets, except maybe #4, even though there are a few universities in the area with prestige.


I’ve done hiring in tech, and the availability of broad spectrum talent in Europe really only exists in London and Zurich. There are hubs for different fields, but only those two places have everything, IME.


I've taught programming to some people who had no previous experience with it, and I can tell you that the list of concepts you have to learn at once is basically as long for Python, the quintessential "beginner" language.

The author's argument feels intellectually dishonest for that reason. Especially glaring is the comparison to JavaScript. The latter has an insane amount of concepts to deal with to do anything, including some truly bizarre ones, like prototypes.

Rust is hard to learn, IMO, for precisely two reasons:

1) The borrow checker is in an uncomfortable place, where it's dumb enough that it rejects perfectly valid code, but smart enough that it's hard to understand how it works.

2) As the author points out, there are a lot of levers available for low-level control, precise allocation, etc.

With respect to the second point, the author describes a language he'd like to see: green threads, compiler deciding on allocation, fewer choices and easy thread safety.

This language already exists (minus algebraic types). It's called Go. It's perfectly fine, well-designed and good for beginners. Some people don't like its aesthetics, but that's not reason enough to invent it again, only with Rust-inspired syntax.


Go is not thread safe. It even allows data races in its own runtime. Go 1.25 fixed a nil-checking error that had been sitting in the runtime for two years.

Generally, Go will let you compile just about anything when it comes to channels, mutexes (or not..), WaitGroups, atomics, and so on. It usually compiles as there are no static checks, and then just fails at runtime.


That’s a good point. I remember that in the earlier days of Go, there was a lot more optimism about the compiler’s future ability to statically decide more about the program than it currently does. It’s unfortunate that the language never got there.

Sadly, given Google’s new culture and all the people who have left, it seems unlikely it’ll ever get there.


> the list of concepts you have to learn at once is basically as long for Python, the quintessential "beginner" language

IMO, Python is great. But deploying Python is more work than learning Rust, oh and the tooling is something that requires continual education: I hear UV is what all the serious python teams are using now. Better learn it or be left behind!

Meanwhile, vanilla Rust has everything you need. I'm glad I understood sunk cost fallacy and got out of Python after many months of coming up to speed with it.


I just watched an experienced programmer on twitch struggle for an hour trying to install a Python-written program using UV! The churn is real, and distributing python programs is still a mess.


That's very unfair towards uv (uv is great), given the absolutely atrocious state of Python tooling until basically 2025 (slow and brittle, if it was even installed; hello, Debian derivatives). uv is the cargo of Python we've been waiting for, and it is expected you won't need to learn anything else for a long time.


> intellectually dishonest

Your claim here is that the author's argument is intentionally misleading. That's an awfully strong accusation, and you don't support it in your comment.

You're also immediately undermining your own argument when you go on to list difficulties in rust that javascript doesn't have.


The even bigger issue with Rust is that the compiler is so bitchy it is actively hostile to add-a-line-at-a-time incremental development.

Especially unused variables being a hard compilation error, not a warning.



What? Unused variables are warnings in Rust. Are you thinking of Go?

